mir-group/Allegro-MP-L
"Matbench-compliant" Allegro potential.
This is a 'large' Allegro model, optimised for both speed and accuracy, with greater priority placed on accuracy. It is trained on the MPTrj dataset (~1.5M frames), making it a matbench-discovery 'compliant' model.
Note that we do not recommend this model for production applications, as large Allegro models trained on larger datasets (i.e. OMat24; the 'OAM' models) will have much greater accuracy at similar speed. The primary purpose of this model is to provide a 'compliant' matbench-discovery model for benchmarking and comparisons.
Key model hyperparameters:
- Radial cutoff: 6 Å
- Maximum spherical harmonic rotation order (l_max): 3
- Tensor features: 96
- Number of layers: 5
- ZBL: False
- Parity: False
- Allegro MLP depth: 3
- Allegro MLP width: 1024
- Tensor path coupling: True
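For orientation, these hyperparameters correspond roughly to the model section of an Allegro training config. The sketch below is illustrative only and not the exact config used to train this model: the _target_ path and key names (e.g. num_tensor_features, allegro_mlp_hidden_layers_depth, tp_path_channel_coupling) are assumptions based on the allegro tutorial configs and may differ between nequip/allegro versions.

model:
  _target_: allegro.model.AllegroModel   # assumed target path
  r_max: 6.0                              # radial cutoff (Å)
  l_max: 3                                # maximum spherical harmonic rotation order
  parity: false
  num_layers: 5
  num_tensor_features: 96
  allegro_mlp_hidden_layers_depth: 3      # Allegro MLP depth
  allegro_mlp_hidden_layers_width: 1024   # Allegro MLP width
  tp_path_channel_coupling: true          # tensor path coupling
  # no ZBL pair potential term in this model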
See the nequip docs for details on fine-tuning NequIP/Allegro models.
Supported Elements
Supported Model Modifiers
Enable custom Triton tensor product kernel for accelerated Allegro inference.
Note that this modifier should only be used during nequip-compile --mode aotinductor.
Enable CuEquivariance tensor product kernel for accelerated Allegro training and inference.
Modify per-type scales and shifts of a model.
The new scales and shifts should be provided as dicts. The keys must correspond to the type_names registered in the model being modified, and need not include all of the original model's type_names. For example, if one uses a pretrained model with 50 atom types and seeks to modify only 3 per-atom shifts to be consistent with a fine-tuning dataset's DFT settings, one could use
shifts:
  C: 1.23
  H: 0.12
  O: 2.13
In this case, the per-type atomic energy shifts of the original model are retained for every atom type except those given new shifts (here C, H, and O). A config sketch showing how such a modifier might be applied during fine-tuning follows the argument list below.
Args:
- scales: the new per-type atomic energy scales
- shifts: the new per-type atomic energy shifts (e.g. isolated atom energies of the dataset used for fine-tuning)
- scales_trainable (bool): whether the new scales are trainable
- shifts_trainable (bool): whether the new shifts are trainable
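As a concrete illustration of the arguments above, the following fine-tuning config fragment applies the per-type shift override from the earlier example while leaving the scales at their pretrained values. This is a hedged sketch: the nequip.model.modify wrapper, the modify_PerTypeScaleShift modifier name, and the ModelFromPackage loader are assumptions based on the nequip fine-tuning documentation and should be checked against your installed nequip version; the package path is a placeholder.

model:
  _target_: nequip.model.modify             # assumed wrapper for applying modifiers
  modifiers:
    - modifier: modify_PerTypeScaleShift    # assumed registered modifier name
      shifts:                               # new per-type shifts for the fine-tuning DFT settings
        C: 1.23
        H: 0.12
        O: 2.13
      shifts_trainable: false
  model:
    _target_: nequip.model.ModelFromPackage
    package_path: path/to/Allegro-MP-L.nequip.zip   # placeholder path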
Papers Using This Model
Model Information
- Published Date: August 28, 2025
- License: CC-BY-4.0
- Architecture: Allegro
- Model Size: 18.7M