
Commit

List all engines with metatensor interface in the docs
Luthaf committed Sep 9, 2024
1 parent 3c1fb1a commit f281de5
Showing 13 changed files with 450 additions and 1 deletion.
36 changes: 36 additions & 0 deletions docs/src/atomistic/engines/ase.rst
@@ -0,0 +1,36 @@
.. _engine-ase:

ASE
===

.. list-table::
   :header-rows: 1

   * - Official website
     - How is metatensor supported?
   * - https://wiki.fysik.dtu.dk/ase/
     - As part of the ``metatensor-torch`` package

Supported model outputs
^^^^^^^^^^^^^^^^^^^^^^^

.. py:currentmodule:: metatensor.torch.atomistic.ase_calculator

- the :ref:`energy <energy-output>` output is supported and fully integrated
  with the ASE calculator interface (i.e. :py:meth:`ase.Atoms.get_potential_energy`,
  :py:meth:`ase.Atoms.get_forces`, …);
- arbitrary outputs can be computed for any :py:class:`ase.Atoms` using
  :py:meth:`MetatensorCalculator.run_model`.

How to install the code
^^^^^^^^^^^^^^^^^^^^^^^

The code is available in the ``metatensor-torch`` package, in the
:py:class:`metatensor.torch.atomistic.ase_calculator.MetatensorCalculator`
class.

How to use the code
^^^^^^^^^^^^^^^^^^^

See the :ref:`corresponding tutorial <atomistic-tutorial-md>` and the API
documentation of the :py:class:`MetatensorCalculator` class.
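
For illustration, here is a minimal sketch of computing energies and forces
through the standard ASE interface; ``model.pt`` is a hypothetical file name
for an already exported model, and all optional constructor arguments (device,
extensions directory, …) are left at their defaults:

.. code-block:: python

   from ase.build import bulk
   from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator

   # build a small FCC nickel crystal and attach the metatensor calculator
   atoms = bulk("Ni", "fcc", a=3.6)
   atoms.calc = MetatensorCalculator("model.pt")

   # standard ASE calls are dispatched to the exported model
   print(atoms.get_potential_energy())
   print(atoms.get_forces())
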
57 changes: 57 additions & 0 deletions docs/src/atomistic/engines/index.rst
@@ -0,0 +1,57 @@
.. _atomistic-models-engines:

Simulation engines
==================

These pages list the simulation engines able to use metatensor's atomistic model
capabilities, where to find the corresponding code when it is not part of the
official package, and how to use them.

If you add support for another simulation software, please let us know so we can
add it here!

.. note: please keep these in alphabetical order!

.. grid:: 1 2 3 3

   .. grid-item-card:: ASE
      :link: engine-ase
      :link-type: ref

      .. image:: /../static/images/logo-ase.png
         :width: 120px
         :align: center

   .. grid-item-card:: i-PI
      :link: engine-ipi
      :link-type: ref

      .. image:: /../static/images/logo-ipi.png
         :width: 120px
         :align: center

   .. grid-item-card:: LAMMPS
      :link: engine-lammps
      :link-type: ref

      .. image:: /../static/images/logo-lammps.png
         :width: 120px
         :align: center

   .. grid-item-card:: PLUMED
      :link: engine-plumed
      :link-type: ref

      .. image:: /../static/images/logo-plumed.png
         :width: 120px
         :align: center


.. toctree::
   :maxdepth: 1
   :hidden:

   ase
   ipi
   lammps
   plumed
79 changes: 79 additions & 0 deletions docs/src/atomistic/engines/ipi.rst
@@ -0,0 +1,79 @@
.. _engine-ipi:

i-PI
====

.. list-table::
   :header-rows: 1

   * - Official website
     - How is metatensor supported?
   * - https://ipi-code.org/
     - In the official version


Supported model outputs
^^^^^^^^^^^^^^^^^^^^^^^

Only the :ref:`energy <energy-output>` output is supported; it is used to run
path integral simulations, incorporating nuclear quantum effects in the
statistical sampling.

How to install the code
^^^^^^^^^^^^^^^^^^^^^^^

The metatensor interface has been part of i-PI since version 3.0. Please refer
to the `i-PI documentation`_ for how to install it.

.. _i-PI documentation: https://ipi-code.org/i-pi/getting-started.html#installing-i-pi

How to use the code
^^^^^^^^^^^^^^^^^^^

.. note::

   Here we assume you already have an exported model that you want to use in
   your simulations. Please see :ref:`this tutorial <atomistic-tutorial-export>`
   to learn how to manually create and export a model; or use a tool like
   `metatrain`_ to create a model based on existing architectures and your own
   dataset.

.. _metatrain: https://github.com/metatensor/metatrain

The metatensor interface in i-PI provides a custom i-PI client that can be used
in combination with an i-PI server to run simulations. This client is managed
with the ``i-pi-driver-py`` command.

.. code-block:: bash

   # minimal version
   i-pi-driver-py -m metatensor -o template.xyz,model.pt

   # all possible options
   i-pi-driver-py -m metatensor -o template.xyz,model.pt,device=cpu,extensions=path/to/extensions,check_consistency=False

The minimal options to give to the ``metatensor`` client are the path to a
template structure for the simulated system (``template.xyz`` in the example
above) and the path to the metatensor model (``model.pt`` above). The template
structure must be a file that `ASE`_ can read. The code only uses it to get the
atomic types (assumed to be the atomic numbers) matching all particles in the
system.

The following options can also be specified using ``key=value`` syntax:

- **extensions**: the path to a folder containing TorchScript extensions. We
  will try to load any extension the model requires from there first;
- **device**: the torch device to use to execute the model. Typical values are
  ``cpu``, ``cuda``, ``cuda:2``, *etc.* By default, the code will find the best
  device for the model that is available on the current computer;
- **check_consistency**: whether to run some additional internal checks. Set
  this to ``True`` if you are seeing strange behavior from a given model or
  when developing a new model.

.. seealso::

   You can use ``i-pi-driver-py --help`` to get all the options for the Python
   drivers.
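
A typical workflow is to start the i-PI server first and then attach the
metatensor client to it. The sketch below assumes an i-PI input file (here
called ``input.xml``, a hypothetical name) whose forcefield socket matches the
driver's default connection settings:

.. code-block:: bash

   # start the i-PI server with your simulation input
   i-pi input.xml &

   # once the server is listening, attach the metatensor client;
   # template.xyz and model.pt are the same files as in the example above
   i-pi-driver-py -m metatensor -o template.xyz,model.pt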


.. _ASE: https://wiki.fysik.dtu.dk/ase/ase/io/io.html
182 changes: 182 additions & 0 deletions docs/src/atomistic/engines/lammps.rst
@@ -0,0 +1,182 @@
.. _engine-lammps:

LAMMPS
======

.. list-table::
   :header-rows: 1

   * - Official website
     - How is metatensor supported?
   * - https://lammps.org
     - In a separate `fork <https://github.com/metatensor/lammps>`_


Supported model outputs
^^^^^^^^^^^^^^^^^^^^^^^

Only the :ref:`energy <energy-output>` output is supported in LAMMPS, as a
custom ``pair_style``. This allows running molecular dynamics simulations with
interatomic potentials in metatensor format, distributing the simulation over
multiple nodes and potentially multiple GPUs.

How to install the code
^^^^^^^^^^^^^^^^^^^^^^^

The code is available in a custom fork of LAMMPS,
https://github.com/metatensor/lammps. We recommend you use `LAMMPS' CMake build
system`_ to configure the build.

To build metatensor-enabled LAMMPS, you'll need to provide the C++ version of
``libtorch``, either by installing PyTorch with a Python package manager
(``pip`` or ``conda``), or by downloading the right prebuilt version of the code
from https://pytorch.org/get-started/locally/. To build metatensor itself, you
will also need a Rust compiler and ``cargo``, installed either with `rustup`_ or
the package manager of your operating system.

First, you should run the following code in a bash (or bash-compatible) shell to
get the code on your system and to teach CMake where to find ``libtorch``:

.. code-block:: bash

   # point this to the path where you extracted the C++ libtorch
   TORCH_PREFIX=<path/to/torch/installation>
   # if you used Python to install torch, you can do this instead:
   TORCH_PREFIX=$(python -c "import torch; print(torch.utils.cmake_prefix_path)")

   git clone https://github.com/metatensor/lammps lammps-metatensor
   cd lammps-metatensor

   # patch a bug in torch's MKL detection
   ./src/ML-METATENSOR/patch-torch.sh "$TORCH_PREFIX"

After that, you can configure the build and compile the code:

.. code-block:: bash

   mkdir build && cd build

   # you can add more options here to enable other packages
   cmake -DPKG_ML-METATENSOR=ON \
         -DLAMMPS_INSTALL_RPATH=ON \
         -DCMAKE_PREFIX_PATH="$TORCH_PREFIX" \
         ../cmake

   cmake --build . --parallel 4 # or `make -jX`

   # optionally install the code on your machine. You can also directly use
   # the `lmp` binary in `lammps-metatensor/build/lmp` without installation
   cmake --build . --target install # or `make install`

.. _rustup: https://rustup.rs
.. _LAMMPS' CMake build system: https://docs.lammps.org/Build_cmake.html


How to use the code
^^^^^^^^^^^^^^^^^^^

.. note::

   Here we assume you already have an exported model that you want to use in
   your simulations. Please see :ref:`this tutorial <atomistic-tutorial-export>`
   to learn how to manually create and export a model; or use a tool like
   `metatrain`_ to create a model based on existing architectures and your own
   dataset.

.. _metatrain: https://github.com/metatensor/metatrain

After building and optionally installing the code, you can now use ``pair_style
metatensor`` in your LAMMPS input files! Below is the reference documentation
for this pair style, following a similar structure to the official LAMMPS
documentation.

.. code-block:: shell

   pair_style metatensor model_path ... keyword values ...

* ``model_path`` = path to the file containing the exported metatensor model
* ``keyword`` = **device** or **extensions** or **check_consistency**

.. parsed-literal::

   **device** values = device_name
     device_name = name of the Torch device to use for the calculations
   **extensions** values = directory
     directory = path to a directory containing TorchScript extensions as
     shared libraries. If the model uses extensions, we will try to load
     them from this directory first
   **check_consistency** values = on or off
     set this to on/off to enable/disable internal consistency checks,
     verifying both the data passed by LAMMPS to the model, and the data
     returned by the model to LAMMPS

Examples
--------

.. code-block:: shell

   pair_style metatensor exported-model.pt device cuda extensions /home/user/torch-extensions/

   pair_style metatensor soap-gap.pt check_consistency on
   pair_coeff * * 6 8 1

Description
-----------

Pair style ``metatensor`` provides access to models following :ref:`metatensor's
atomistic models <atomistic-models>` interface, and enables using such models as
interatomic potentials to drive a LAMMPS simulation. The models can be fully
defined and trained by the user using Python code, or be existing pre-trained
models. The interface can be used with any type of machine learning model, as
long as the implementation of the model is compatible with TorchScript.

The only required argument for ``pair_style metatensor`` is the path to the model
file, which should be an exported metatensor model.

Optionally, users can define which torch ``device`` (e.g. cpu, cuda, cuda:0,
*etc.*) should be used to run the model. If this is not given, the code will run
on the best available device. If the model uses custom TorchScript operators
defined in a TorchScript extension, the shared libraries defining these extensions
will be searched for in the ``extensions`` path, and loaded before trying to load
the model itself. Finally, ``check_consistency`` can be set to ``on`` or ``off``
to enable (respectively disable) additional internal consistency checks in the
data being passed from LAMMPS to the model and back.

A single ``pair_coeff`` command should be used with the ``metatensor`` style,
specifying the mapping from LAMMPS types to the atomic types the model can
handle. The first 2 arguments must be \* \* so as to span all LAMMPS atom types.
This is followed by a list of N arguments that specify the mapping of metatensor
atomic types to LAMMPS types, where N is the number of LAMMPS atom types.
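
For instance, in a hypothetical system with two LAMMPS atom types where type 1
is oxygen and type 2 is hydrogen, the mapping would be:

.. code-block:: shell

   # map LAMMPS type 1 to atomic type 8 (O) and type 2 to atomic type 1 (H)
   pair_coeff * * 8 1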

Sample input file
-----------------

Below is an example input file that creates an FCC crystal of nickel and uses a
metatensor model to run an NPT simulation.

.. code-block:: bash

   units metal
   boundary p p p

   # create the simulation system without reading external data file
   atom_style atomic
   lattice fcc 3.6
   region box block 0 4 0 4 0 4
   create_box 1 box
   create_atoms 1 box

   labelmap atom 1 Ni
   mass Ni 58.693

   # define the interaction style to use the model in the "nickel-model.pt" file
   pair_style metatensor nickel-model.pt device cuda
   pair_coeff * * 28

   # simulation settings
   timestep 0.001 # 1fs timestep
   fix 1 all npt temp 243 243 $(100 * dt) iso 0 0 $(1000 * dt) drag 1.0

   # output setup
   thermo 10

   # run the simulation for 10000 steps
   run 10000
36 changes: 36 additions & 0 deletions docs/src/atomistic/engines/plumed.rst
@@ -0,0 +1,36 @@
.. _engine-plumed:

PLUMED
======


.. list-table::
   :header-rows: 1

   * - Official website
     - How is metatensor supported?
   * - https://www.plumed.org/
     - In the official (development) version


Supported model outputs
^^^^^^^^^^^^^^^^^^^^^^^

PLUMED supports the :ref:`features <features-output>` output, and can use it to
define collective variables for advanced sampling methods such as metadynamics.
It also supports a custom output named ``"plumed::cv"``, with the same semantics
and metadata structure as the :ref:`features <features-output>` output.

How to install the code
^^^^^^^^^^^^^^^^^^^^^^^

See the official `installation instructions`_ in the PLUMED documentation.

How to use the code
^^^^^^^^^^^^^^^^^^^

See the official `syntax reference`_ in the PLUMED documentation.

.. _installation instructions: https://www.plumed.org/doc-master/user-doc/html/_m_e_t_a_t_e_n_s_o_r_m_o_d.html
.. _syntax reference: https://www.plumed.org/doc-master/user-doc/html/_m_e_t_a_t_e_n_s_o_r.html