API Reference

L-GATr Networks

We provide two main L-GATr networks: LGATr, a stack of transformer encoder blocks, and ConditionalLGATr, a stack of transformer decoder blocks. For tasks that require conditional inputs, you can process the condition with an LGATr and then feed the processed condition into a ConditionalLGATr.

lgatr.nets.lgatr.LGATr(num_blocks, ...[, ...])

L-GATr network.

lgatr.nets.conditional_lgatr.ConditionalLGATr(...)

Conditional L-GATr network.
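A minimal usage sketch, assuming GATr-style constructor and forward signatures; the exact argument names (in_mv_channels, scalars, ...) should be checked against the class entries above:

```python
import torch
from lgatr import LGATr, SelfAttentionConfig, MLPConfig
from lgatr.interface import embed_vector, extract_scalar

# Constructor arguments follow the GATr convention; the exact names are assumptions.
model = LGATr(
    in_mv_channels=1,       # one multivector channel per particle
    out_mv_channels=1,
    hidden_mv_channels=8,
    in_s_channels=2,        # auxiliary scalar features, e.g. particle-type flags
    out_s_channels=1,
    hidden_s_channels=16,
    num_blocks=4,
    attention=SelfAttentionConfig(),
    mlp=MLPConfig(),
)

fourmomenta = torch.randn(128, 20, 4)         # (batch, particles, E/px/py/pz)
mv = embed_vector(fourmomenta).unsqueeze(-2)  # add a channel axis: (128, 20, 1, 16)
scalars = torch.zeros(128, 20, 2)

out_mv, out_s = model(mv, scalars=scalars)
invariant = extract_scalar(out_mv)            # Lorentz-invariant network output

# For conditional tasks, encode the condition with an LGATr as above and pass its
# output multivectors/scalars to a ConditionalLGATr, which attends to them through
# cross-attention in each decoder block.
```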

L-GATr Layers

The LGATr and ConditionalLGATr networks are structured like standard transformers: we construct them from variants of the standard transformer layers, adapted to the geometric algebra framework.

lgatr.layers.lgatr_block.LGATrBlock(...[, ...])

L-GATr encoder block.

lgatr.layers.conditional_lgatr_block.ConditionalLGATrBlock(...)

L-GATr decoder block.

lgatr.layers.linear.EquiLinear(...[, ...])

Equivariant linear layer.

lgatr.layers.attention.self_attention.SelfAttention(config)

L-GATr self-attention.

lgatr.layers.attention.cross_attention.CrossAttention(config)

L-GATr cross-attention.

lgatr.layers.mlp.mlp.GeoMLP(config)

MLP with geometric product.

lgatr.layers.mlp.geometric_bilinears.GeometricBilinear(...)

Geometric product on multivectors.

lgatr.layers.mlp.nonlinearities.ScalarGatedNonlinearity([...])

Gated nonlinearity on multivectors.

lgatr.layers.layer_norm.EquiLayerNorm([...])

Equivariant layer normalization.

lgatr.layers.dropout.GradeDropout([p])

Dropout on multivectors.
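To illustrate the layer interface, here is a small sketch of EquiLinear, assuming the GATr-style signature with separate multivector and scalar channel counts (the import path and argument names are assumptions):

```python
import torch
from lgatr.layers import EquiLinear  # import path is an assumption; see the entry above

# Map 8 multivector + 16 scalar channels to 4 + 8, equivariantly.
layer = EquiLinear(in_mv_channels=8, out_mv_channels=4, in_s_channels=16, out_s_channels=8)

mv = torch.randn(32, 100, 8, 16)  # (batch, items, mv channels, 16 multivector components)
s = torch.randn(32, 100, 16)      # (batch, items, scalar channels)

out_mv, out_s = layer(mv, scalars=s)
print(out_mv.shape, out_s.shape)  # (32, 100, 4, 16) and (32, 100, 8)
```

Multivector channels can only be mixed by maps that commute with the Lorentz group action, which is what distinguishes EquiLinear from a plain torch.nn.Linear.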

L-GATr Primitives

The L-GATr primitives implement the core equivariant operations and are called by the L-GATr layers.

lgatr.primitives.attention

Equivariant attention.

lgatr.primitives.bilinear

Geometric product.

lgatr.primitives.dropout

Grade dropout.

lgatr.primitives.invariants

Invariants, e.g. inner product, absolute squared norm, pin invariants.

lgatr.primitives.linear

Linear operations on multivectors, in particular linear basis maps.

lgatr.primitives.nonlinearities

Gated nonlinearities on multivectors.

lgatr.primitives.normalization

Multivector normalization.
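The primitives act directly on multivector tensors whose last dimension holds the 16 multivector components. A sketch of the bilinear and invariant primitives; the function names geometric_product and inner_product are assumptions inferred from the module summaries above:

```python
import torch
from lgatr.interface import embed_vector
# Function names are assumptions based on the module summaries; check the primitives docs.
from lgatr.primitives.bilinear import geometric_product
from lgatr.primitives.invariants import inner_product

p = embed_vector(torch.tensor([5.0, 0.0, 0.0, 3.0]))  # four-vector as a multivector, shape (16,)
q = embed_vector(torch.tensor([2.0, 0.0, 0.0, 0.0]))

pq = geometric_product(p, q)  # equivariant, multivector-valued
ip = inner_product(p, q)      # Lorentz-invariant; for pure vectors this reduces to the
                              # Minkowski product, here 5*2 - 3*0 = 10 (up to sign conventions)
```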

L-GATr Configuration Classes

L-GATr uses dataclasses to organize secondary hyperparameters such as the number of attention heads or the MLP nonlinearity. MLPConfig, SelfAttentionConfig, and CrossAttentionConfig are passed as arguments to the LGATr/ConditionalLGATr modules, whereas LGATrConfig is a global object that is accessed within the L-GATr primitives.

lgatr.primitives.config.LGATrConfig([...])

Configuration for global settings like the symmetry group.

lgatr.layers.attention.config.SelfAttentionConfig([...])

Configuration for self-attention.

lgatr.layers.attention.config.CrossAttentionConfig([...])

Configuration for cross-attention.

lgatr.layers.mlp.config.MLPConfig([...])

Geometric MLP configuration.
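A sketch of how the configs are wired together; the field name num_heads and the top-level import path are assumptions, so check the dataclass entries above for the available options:

```python
from lgatr import LGATr, SelfAttentionConfig, MLPConfig  # import path is an assumption

attention = SelfAttentionConfig(num_heads=8)  # field name is an assumption
mlp = MLPConfig()

# The attention and MLP configs are handed to the network constructor; LGATrConfig is
# not passed around explicitly but read globally by the primitives.
model = LGATr(
    in_mv_channels=1,
    out_mv_channels=1,
    hidden_mv_channels=8,
    num_blocks=4,
    attention=attention,
    mlp=mlp,
)
```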

Interface to the Geometric Algebra

Before feeding data into an L-GATr network, and after extracting its results, we have to convert between common scalar/vector objects and multivectors. This step is simple, but we provide convenience methods for it anyway. We also include functionality to construct spurions, i.e. reference multivectors, which can be added as extra items or channels to break equivariance at the input level.

lgatr.interface.scalar

Embedding scalars into multivectors and extracting them back.

lgatr.interface.vector

Embedding vectors into multivectors and extracting them back.

lgatr.interface.pseudoscalar

Embedding pseudoscalars into multivectors and extracting them back.

lgatr.interface.axialvector

Embedding axial vectors into multivectors and extracting them back.

lgatr.interface.spurions

Tools to include reference multivectors ('spurions') for symmetry breaking.
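A round-trip sketch of the interface helpers; the function names follow the embed_*/extract_* pattern, which is an assumption inferred from the modules above:

```python
import torch
from lgatr.interface import embed_vector, extract_vector  # names are assumptions

fourmomenta = torch.randn(128, 20, 4)  # (batch, particles, E/px/py/pz)
mv = embed_vector(fourmomenta)         # (128, 20, 16): vectors occupy the grade-1 components
recovered = extract_vector(mv)         # inverse operation, back to (128, 20, 4)

assert torch.allclose(recovered, fourmomenta)

# lgatr.interface.spurions provides helpers to construct reference multivectors that can
# be appended as extra items or channels to break Lorentz equivariance in a controlled way.
```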