lgatr.nets.lgatr_slim.LGATrSlimBlock

class lgatr.nets.lgatr_slim.LGATrSlimBlock(v_channels, s_channels, num_heads, nonlinearity='gelu', mlp_ratio=2, attn_ratio=1, num_layers_mlp=2, dropout_prob=None)[source]

Bases: Module

A single block of L-GATr-slim, consisting of a self-attention layer and an MLP, each with pre-layer normalization and a residual connection.

forward(vectors, scalars, **attn_kwargs)[source]
Parameters:
  • vectors (torch.Tensor) – A tensor of shape (…, v_channels, 4) representing Lorentz vectors.

  • scalars (torch.Tensor) – A tensor of shape (…, s_channels) representing scalar features.

  • **attn_kwargs (dict) – Additional keyword arguments for the attention function.

Returns:

Tensors of the same shape as the inputs, representing the normalized vectors and scalars.

Return type:

tuple[torch.Tensor, torch.Tensor]
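The pre-norm residual structure described above can be sketched in plain PyTorch. This is a simplified, scalar-only illustration of the block pattern (norm, then sub-layer, then residual add), not the actual LGATrSlimBlock implementation, which additionally processes the Lorentz-vector channels equivariantly; all layer choices here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PreNormResidualBlock(nn.Module):
    """Simplified sketch of a pre-norm transformer block (illustration only,
    not the actual LGATrSlimBlock, which also handles Lorentz vectors)."""

    def __init__(self, s_channels, num_heads=1, mlp_ratio=2):
        super().__init__()
        self.norm1 = nn.LayerNorm(s_channels)
        self.attn = nn.MultiheadAttention(s_channels, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(s_channels)
        hidden = mlp_ratio * s_channels
        self.mlp = nn.Sequential(
            nn.Linear(s_channels, hidden),
            nn.GELU(),  # mirrors the nonlinearity='gelu' default
            nn.Linear(hidden, s_channels),
        )

    def forward(self, scalars):
        # Pre-norm + residual around self-attention
        h = self.norm1(scalars)
        scalars = scalars + self.attn(h, h, h)[0]
        # Pre-norm + residual around the MLP
        scalars = scalars + self.mlp(self.norm2(scalars))
        return scalars

block = PreNormResidualBlock(s_channels=16, num_heads=4)
out = block(torch.randn(2, 8, 16))  # (batch, items, s_channels)
print(out.shape)  # torch.Size([2, 8, 16]) -- output shape matches input
```

As in the real block, the residual stream is never normalized in place: normalization is applied only on the branch entering each sub-layer, so outputs keep the input shape.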