mir-group/NequIP-OAM-M
$ nequip-compile \
    nequip.net:mir-group/NequIP-OAM-M:0.1 \
    mir-group__NequIP-OAM-M__0.1.nequip.pt2 \
    --mode aotinductor \
    --device cuda \
    --target ase
This command is just an example: please consult the nequip-compile and ASE integration documentation.
nequip-compile must be run in the same hardware environment where the model will run.
Choosing the right accelerations can make inference much faster.
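As a sketch of how the compiled artifact might then be used from ASE (the `NequIPCalculator` import path and the `from_compiled_model` constructor are assumptions based on recent nequip releases; consult the ASE integration documentation for the exact API):

```python
# Sketch: attach the compiled model to an ASE Atoms object.
# Assumes nequip with ASE support is installed, a CUDA device is
# available, and the .pt2 file was compiled on this same machine.
from ase.build import bulk
from nequip.ase import NequIPCalculator  # import path assumed

atoms = bulk("Si", "diamond", a=5.43)
calc = NequIPCalculator.from_compiled_model(
    "mir-group__NequIP-OAM-M__0.1.nequip.pt2",  # file produced by nequip-compile above
    device="cuda",
)
atoms.calc = calc
print(atoms.get_potential_energy())  # total energy in eV
```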
Medium NequIP GNN foundational potential for materials.
This is a 'medium' NequIP model, optimised for speed and accuracy with equal priority. This model is pre-trained on the OMat24 dataset (~101M frames), and fine-tuned on the sAlex (~10.5M frames) and MPTrj (~1.5M frames) datasets.
We find that this model currently lies in the upper-middle-right region of the speed-accuracy Pareto front when compared to other leading foundation models (preprint forthcoming), offering a strong balance of speed and accuracy.
See nequip.net for further details – in particular, for details on including model accelerations.
Key model hyperparameters:
- Radial cutoff: 6 Å
- Maximum spherical harmonic rotation order (l_max): 2
- Tensor features: 128 (l=0), 64 (l=1), 32 (l=2)
- Number of layers: 4
- ZBL: True
- Parity: False
See https://nequip.readthedocs.io/en/latest/guide/training-techniques/fine_tuning.html for details on fine-tuning NequIP/Allegro models.
Supported Elements
Supported Model Modifiers
Enable OpenEquivariance tensor product kernel for accelerated NequIP training and inference.
Enable ghost exchange for inference through the LAMMPS ML-IAP interface.
Modify per-type scales and shifts of a model.
The new scales and shifts should be provided as dicts. The keys must correspond to the type_names registered in the model being modified, but need not cover all of the original model's type_names. For example, if one uses a pretrained model with 50 atom types and seeks to modify only 3 per-atom shifts to be consistent with a fine-tuning dataset's DFT settings, one could use
shifts:
  C: 1.23
  H: 0.12
  O: 2.13

In this case, the per-type atomic energy shifts of the original model are retained for every atom type except those given new shifts above.
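The override behaviour described above amounts to a dict merge, sketched here in plain Python (illustrative only; the values for original_shifts are hypothetical, and this is not the nequip API):

```python
# Illustration of the override semantics: atom types absent from the
# new shifts dict keep the original model's per-type shifts.
original_shifts = {"C": 0.50, "H": 0.50, "O": 0.50, "N": 0.50}  # hypothetical values
new_shifts = {"C": 1.23, "H": 0.12, "O": 2.13}

# Later keys win, so only the specified types are overridden.
effective_shifts = {**original_shifts, **new_shifts}
print(effective_shifts)  # N keeps its original shift; C, H, O take the new values
```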
Args:
- scales: the new per-type atomic energy scales
- shifts: the new per-type atomic energy shifts (e.g. isolated atom energies of a dataset used for fine-tuning)
- scales_trainable (bool): whether the new scales are trainable
- shifts_trainable (bool): whether the new shifts are trainable
Papers Using This Model
Model Information
Published Date
February 25, 2026
License
CC-BY-4.0
Architecture
NequIP GNN
Model Size
3.2M