QuantumLab-ZY/HamGNN

An E(3) equivariant Graph Neural Network for predicting the electronic Hamiltonian matrix

🚀 HamGNN v2.0 Now Available!

Introduction to HamGNN

HamGNN is an E(3) equivariant graph neural network designed to train on and predict ab initio tight-binding (TB) Hamiltonians for molecules and solids. It can be used with common ab initio DFT packages that rely on numerical atomic orbitals, such as OpenMX, SIESTA, and ABACUS, and it also supports predicting SU(2) equivariant Hamiltonians that include spin-orbit coupling effects. HamGNN provides a high-fidelity approximation of DFT results and transfers across material structures, making it well suited for high-throughput electronic structure calculations and for accelerating computations on large-scale systems.

Requirements

We recommend using Python 3.9. HamGNN requires the following Python libraries:

  • numpy == 1.21.2
  • PyTorch == 1.11.0
  • PyTorch Geometric == 2.0.4
  • pytorch_lightning == 1.5.10
  • e3nn == 0.5.0
  • pymatgen == 2022.3.7
  • tensorboard == 2.8.0
  • tqdm
  • scipy == 1.7.3
  • yaml

Python Libraries

To set up the Python environment for HamGNN, you have two options:

  1. Using environment.yaml:
    Run the following command to create the environment:

    conda env create -f environment.yaml

    Note: The environment created with the current environment.yaml may result in the following error during training of the SOC Hamiltonian:

    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
    
  2. Using the Prebuilt Conda Environment:
    Alternatively, you can download the prebuilt HamGNN Conda environment from Zenodo. After downloading the ML.tar.gz file, extract it into your conda/envs directory.

    Recommendation: While this approach may seem less elegant, it is currently the more reliable option.
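
    For example, a minimal sketch of the extraction step, assuming a Miniconda installation at ~/miniconda3 (adjust the path to your own conda/envs directory):

    mkdir -p ~/miniconda3/envs/ML
    tar -xzf ML.tar.gz -C ~/miniconda3/envs/ML
    conda activate ML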

OpenMX

HamGNN requires the tight-binding Hamiltonian generated by OpenMX, so you should be familiar with the basic OpenMX parameters and how to use them. OpenMX can be downloaded from its official website.

openmx_postprocess

openmx_postprocess is a modified version of OpenMX for computing overlap matrices and other Hamiltonian matrices analytically. It stores the computed data in a binary file called overlap.scfout. To install openmx_postprocess:

  1. First, install the GSL library.
  2. Modify the makefile in the openmx_postprocess directory:
    • Set GSL_lib to the path of the GSL library.
    • Set GSL_include to the include path of GSL.
    • Set MKLROOT to the Intel MKL path.
    • Set CMPLR_ROOT to the Intel compiler path.

After modifying the makefile, execute make to generate the executable programs: openmx_postprocess and read_openmx.
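
For reference, the relevant makefile variables might look like the following (all paths are illustrative assumptions; point them at your own GSL, MKL, and Intel compiler installations):

# Illustrative paths only; adjust to your system
GSL_lib = /usr/local/gsl/lib
GSL_include = /usr/local/gsl/include
MKLROOT = /opt/intel/oneapi/mkl/latest
CMPLR_ROOT = /opt/intel/oneapi/compiler/latest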

read_openmx

read_openmx is a binary executable used to export matrices from the overlap.scfout file to HS.json.

Installation

To install HamGNN, run the following commands:

git clone https://github.com/QuantumLab-ZY/HamGNN.git
cd HamGNN
python setup.py install

If you are upgrading from an older version of HamGNN, uninstall the previous version first:

pip uninstall HamGNN

Ensure that any residual files in the site-packages directory (e.g., 'HamGNN-x.x.x-py3.9.egg/HamGNN') are deleted before installing the new version.
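
Put together, an upgrade might look like this (the site-packages path is illustrative):

pip uninstall HamGNN
rm -rf /path/to/site-packages/HamGNN-*.egg   # remove residual files first
python setup.py install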

Usage

Preparation of Hamiltonian Training Data

  1. Generate Structure Files: Create structure files (e.g., POSCAR or CIF) via molecular dynamics or random perturbation.
  2. Convert to OpenMX Format: Edit the poscar2openmx.yaml file with appropriate path settings and run:
    poscar2openmx --config path/to/poscar2openmx.yaml
    This converts the structures into OpenMX’s .dat format.
  3. Run OpenMX: Perform static calculations on the structure files to generate .scfout binary files containing the Hamiltonian and overlap matrix information.
  4. Process with openmx_postprocess: Run openmx_postprocess to generate the overlap.scfout file, which contains the Hamiltonian matrix H0, independent of the self-consistent charge density (see the command sketch after this list).
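
As a command-line sketch of steps 3 and 4 (the structure file name is illustrative, and the openmx_postprocess invocation is assumed to mirror OpenMX's):

# Step 3: static SCF calculation with OpenMX
mpirun -np 4 openmx structure.dat > openmx.log
# Step 4: analytic H0 and overlap matrices, written to overlap.scfout (invocation assumed)
mpirun -np 4 openmx_postprocess structure.dat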

Preparation of Hamiltonian Evaluation Data

If you already have a trained model, you can prepare evaluation data for new structures in a manner similar to the training dataset, with one key difference:

  • Skip OpenMX Calculations: Instead of running OpenMX, you can directly treat the overlap.scfout file (generated by the openmx_postprocess tool) as if it were the .scfout file produced by OpenMX. This allows you to bypass the actual OpenMX calculations for evaluation purposes.

Graph Data Conversion

  1. Set the appropriate paths in the graph_data_gen.yaml file.
  2. Run the following to convert the structural and Hamiltonian data into a single input file for the HamGNN network:
    graph_data_gen --config graph_data_gen.yaml

This generates the graph_data.npz file, which will be used as input to HamGNN.
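
The keys in graph_data_gen.yaml follow the template shipped with the repository. A hypothetical sketch (every key name and value below is illustrative, not authoritative; consult the template for the real schema):

# Hypothetical sketch of graph_data_gen.yaml; key names are illustrative.
nao_max: 26                  # maximum number of atomic orbitals, consistent with config.yaml
graph_data_save_path: ./     # where graph_data.npz will be written
scfout_paths: ./openmx_runs  # directory containing the .scfout / overlap.scfout results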

HamGNN Network Training and Prediction

  1. Configure the Network: Set the appropriate parameters in the config.yaml file for network and training configurations.
  2. Train HamGNN: Run the training process with:
    HamGNN2.0 --config config.yaml
    (Use the HamGNN1.0 entry point instead when working with the v1.0 network.)
  3. Monitor Training: Use TensorBoard to track training progress:
    tensorboard --logdir train_dir
    where train_dir is the directory where HamGNN saves training logs, as specified in config.yaml.
  4. Prediction: After training, the model can be used for predictions:
    • Convert the structures to be predicted into graph_data.npz.
    • Set checkpoint_path in config.yaml to the trained model's path and stage to test.
    • Run:
    HamGNN2.0 --config config.yaml
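
The corresponding setup block in config.yaml might look like this (the checkpoint path is illustrative; the keys are explained in the parameter reference below):

setup:
  stage: test
  checkpoint_path: ./train_dir/version_0/checkpoints/model.ckpt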

Training for Bands (Second Step)

After the Hamiltonian matrix training, use the trained network to fine-tune the model for energy band predictions:

  1. Set checkpoint_path to the trained model's weight file.
  2. Enable load_from_checkpoint = True.
  3. Set a smaller learning rate (lr = 0.0001).
  4. Add a band_energy loss to the losses_metrics and metrics sections, setting its loss_weight to 0.01 times the Hamiltonian's loss_weight (see the sketch after this list).
  5. Enable calculate_band_energy and set the required parameters (num_k, band_num_control, k_path).
  6. Start training again.
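
A sketch of the settings changed for this second step (the checkpoint path, the num_k value, and the per-entry loss schema are assumptions; follow the entries already present in your config.yaml):

setup:
  checkpoint_path: ./train_dir/version_0/checkpoints/model.ckpt
  load_from_checkpoint: true
optim_params:
  lr: 0.0001
losses_metrics:
  losses:
    - metric: mae
      prediction: hamiltonian
      loss_weight: 1.0
    - metric: mae
      prediction: band_energy
      loss_weight: 0.01      # 0.01 x the Hamiltonian's loss_weight
  # add a matching band_energy entry under metrics as well
HamGNN_out:
  calculate_band_energy: true
  num_k: 4
  band_num_control: null
  k_path: auto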

Band Structure Calculation

To calculate the band structure:

  1. Update the band_cal.yaml configuration file with the correct path to the Hamiltonian data.
  2. Execute the band structure calculation:
    band_cal --config band_cal.yaml
  3. Enable Parallelism: To run in parallel, add this to your job script:
    export OMP_NUM_THREADS=<ncpus_per_node>
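
For example, a job script combining both steps might read (thread count illustrative):

export OMP_NUM_THREADS=8
band_cal --config band_cal.yaml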

Support for ABACUS Software

HamGNN includes utilities for supporting ABACUS software. These tools, located in the utils_abacus directory, include:

  • abacus_postprocess to export the Hamiltonian matrix H0.
  • poscar2abacus.py for generating ABACUS structure files.
  • graph_data_gen_abacus.py for generating graph data in the graph_data.npz format.

For detailed instructions on using these tools, refer to the provided scripts.

Diagonalizing Hamiltonian Matrices for Large-Scale Systems

For large systems, diagonalizing the Hamiltonian matrix with the serial band_cal script may be challenging. To address this, we provide a parallelized version, band_cal_parallel. However, note that some MKL environments may trigger a bug (Intel MKL FATAL ERROR: Cannot load symbol MKLMPI_Get_wrappers). Users can try the solutions provided in Issues #18 and #12 to resolve this issue (thanks to the help from flamingoXu and newplay).

Installation

pip install mpitool-0.0.1-cp39-cp39-manylinux1_x86_64.whl
pip install band_cal_parallel-0.1.12-py3-none-any.whl

Usage

Run the following command with multiple CPUs:

mpirun -np ncpus band_cal_parallel --config band_cal_parallel.yaml

Explanation of the parameters in config.yaml

The input parameters in config.yaml are divided into modules, mainly setup, dataset_params, losses_metrics, optim_params, and the network-related modules (HamGNN_pre and HamGNN_out). Most parameters work well with their default values; the following introduces the commonly used parameters in each module. Please note that the parameters listed here are specific to HamGNN v1.0. Annotations for the HamGNN v2.0 parameters are planned, but users can typically infer the purpose of each parameter from its name.
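
At the top level, config.yaml is organized into these modules (a structural sketch; each "..." stands for the per-module parameters described below):

setup:
  ...
dataset_params:
  ...
losses_metrics:
  ...
optim_params:
  ...
profiler_params:
  ...
HamGNN_pre:
  ...
HamGNN_out:
  ...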

  • setup:

    • stage: Select the state of the network: training (fit) or testing (test).
    • GNN_Net: Use HamGNN_pre for normal Hamiltonian fitting and HamGNN_pre_charge for fitting the Hamiltonian of charged defects.
    • property: Select the type of physical quantity output by the network; generally set to hamiltonian.
    • num_gpus: Number of GPUs to train on (int), or which GPUs to train on (list or str), applied per node.
    • resume: resume training (true) or start from scratch (false).
    • checkpoint_path: Path of the checkpoint from which training is resumed (stage = fit) or path to the checkpoint you wish to test (stage = test).
    • load_from_checkpoint: If set to true, the model will be initialized with weights from the checkpoint_path.
  • dataset_params:

    • graph_data_path: The directory where the processed, compressed graph data file (graph_data.npz) is stored.
    • batch_size: The number of samples or data points processed together in a single forward and backward pass during training. Default: 1.
    • train_ratio: The proportion of training samples in the entire dataset.
    • val_ratio: The proportion of validation samples in the entire dataset.
    • test_ratio: The proportion of test samples in the entire dataset.
  • losses_metrics

    • losses: Defines multiple loss functions and their respective weights in the total loss value. Currently, HamGNN supports mse, mae, and rmse.
    • metrics: A variety of metric functions can be defined to evaluate the accuracy of the model on the validation and test sets.
  • optim_params

    • min_epochs: Force training for at least these many epochs.
    • max_epochs: Stop training once this number of epochs is reached.
    • lr: Learning rate. Default: 0.001.
  • profiler_params:

    • train_dir: The folder for saving training information and prediction results. This directory can be read by tensorboard to monitor the training process.
  • HamGNN_pre: The representation network to generate the node and pair interaction features

    • num_types: The maximum number of atomic types, used to build the one-hot vectors for atoms.
    • cutoff: The cutoff radius adopted in the envelope function for interatomic distances.
    • cutoff_func: Which envelope function is used for interatomic distances. Options: cos (cosine envelope function) or pol (polynomial envelope function).
    • rbf_func: The radial basis function type used to expand the interatomic distances.
    • num_radial: The number of Bessel basis functions.
    • num_interaction_layers: The number of interaction (orbital convolution) layers.
    • add_edge_tp: Whether to use the tensor product of node features i and j to construct pair interaction features. This option requires a significant amount of memory but can sometimes improve accuracy.
    • irreps_edge_sh: Spherical harmonic representation of the orientation of an edge.
    • irreps_node_features: O(3) irreducible representations of the initial atomic features.
    • irreps_edge_output: O(3) irreducible representations of the edge features to output.
    • irreps_node_output: O(3) irreducible representations of the atomic features to output.
    • feature_irreps_hidden: Intermediate O(3) irreducible representations of the atomic features in convolution.
    • irreps_triplet_output (deprecated): O(3) irreducible representations of the triplet features to output.
    • invariant_layers: The number of layers in the MLP that maps the invariant edge embeddings to the weights of each tensor product path.
    • invariant_neurons: The number of neurons in the MLP that maps the invariant edge embeddings to the weights of each tensor product path.
    • num_charge_attr_feas: The number of features used for the doping charge when GNN_Net is set to HamGNN_pre_charge.
  • HamGNN_out: The output layer that transforms the crystal representation into the Hamiltonian matrix.

    • nao_max: Set according to the maximum number of atomic orbitals in the dataset; for OpenMX it can be 14, 19, or 26. For short-period elements such as C, Si, and O, a nao_max of 14 is sufficient; the number of atomic basis functions for most common elements does not exceed 19. Setting nao_max to 26 covers all elements supported by OpenMX. For ABACUS Hamiltonians, nao_max can be set to either 27 (without Al, Hf, Ta, W) or 40 (supporting all elements in ABACUS).
    • add_H0: Generally true; the complete Hamiltonian is predicted as the sum of H_scf plus H_nonscf (H0).
    • symmetrize: If set to true, the Hermitian symmetry constraint is imposed on the Hamiltonian.
    • calculate_band_energy: Whether to calculate the energy bands to train the model.
    • num_k: The number of k points to use when calculating the energy bands.
    • band_num_control: dict: controls how many orbitals are considered for each atom in the energy bands; int: bands in the window [vbm - num, vbm + num]; null: all bands.
    • k_path: auto: automatically determine the k-point path; null: random k-point paths; list: a k-point path provided by the user.
    • soc_switch: If true, fit the SOC Hamiltonian.
    • nonlinearity_type: Whether to use norm activation or gate activation as the nonlinear activation function.

Minimum Irreps for Node and Edge Features in config.yaml

The snippet below computes the minimal set of O(3) irreps needed to span every orbital-pair block of the Hamiltonian for an 'sssppd' basis; the sorted, simplified result is the minimum irreps_node_output / irreps_edge_output setting in config.yaml.

from e3nn import o3

# Irreps of an 'sssppd' atomic orbital basis: three s (0e), two p (1o), two d (2e)
row = col = o3.Irreps("1x0e+1x0e+1x0e+1x1o+1x1o+1x2e+1x2e")
ham_irreps = o3.Irreps()

# Each Hamiltonian block couples orbitals with angular momenta li and lj: it
# decomposes into irreps L = |li - lj| ... li + lj with parity (-1)**(li + lj).
for _, li in row:
    for _, lj in col:
        for L in range(abs(li.l - lj.l), li.l + lj.l + 1):
            ham_irreps += o3.Irrep(L, (-1) ** (li.l + lj.l))

print(ham_irreps.sort()[0].simplify())
# Output: 17x0e+20x1o+8x1e+8x2o+20x2e+8x3o+4x3e+4x4e

References

The papers related to HamGNN:

[1] Transferable equivariant graph neural networks for the Hamiltonians of molecules and solids

[2] Universal Machine Learning Kohn-Sham Hamiltonian for Materials

[3] Accelerating the electronic-structure calculation of magnetic systems by equivariant neural networks

[4] Topological interfacial states in ferroelectric domain walls of two-dimensional bismuth

[5] Transferable Machine Learning Approach for Predicting Electronic Structures of Charged Defects

Code contributors:

  • Yang Zhong (Fudan University)
  • Changwei Zhang (Fudan University)
  • Zhenxing Dai (Fudan University)
  • Shixu Liu (Fudan University)
  • Hongyu Yu (Fudan University)
  • Yuxing Ma (Fudan University)

Project leaders:

  • Hongjun Xiang (Fudan University)
  • Xingao Gong (Fudan University)
