- Introduction to HamGNN
- Requirements
- Installation
- Usage
- Support for ABACUS Software
- Diagonalizing Hamiltonian Matrices for Large-Scale Systems
- Explanation of the parameters in config.yaml
- Minimum irreps for node and edge features in config.yaml
- References
- Code Contributors
- Project Leaders
## Introduction to HamGNN

HamGNN is an E(3) equivariant graph neural network designed to train and predict ab initio tight-binding (TB) Hamiltonians for molecules and solids. It can be used with common ab initio DFT software packages that rely on numerical atomic orbitals, such as OpenMX, Siesta, and ABACUS. Additionally, it supports predictions of SU(2) equivariant Hamiltonians with spin-orbit coupling effects. HamGNN provides a high-fidelity approximation of DFT results and offers transferable predictions across material structures, making it well suited for high-throughput electronic structure calculations and for accelerating computations on large-scale systems.
## Requirements

We recommend using Python 3.9. HamGNN requires the following Python libraries (a sketch of a matching conda environment file is shown after the list):

- numpy == 1.21.2
- PyTorch == 1.11.0
- PyTorch Geometric == 2.0.4
- pytorch_lightning == 1.5.10
- e3nn == 0.5.0
- pymatgen == 2022.3.7
- tensorboard == 2.8.0
- tqdm
- scipy == 1.7.3
- yaml
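For illustration, a conda environment file pinning these versions could look roughly like the sketch below. The `environment.yaml` shipped with the repository (see the next paragraph) is the authoritative source; the channel names and exact conda/pip package spellings here are assumptions.

```yaml
# Sketch of a conda environment matching the version pins above.
# The environment.yaml distributed with HamGNN is authoritative; the
# channels and package spellings below are assumptions.
name: HamGNN
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.9
  - numpy=1.21.2
  - scipy=1.7.3
  - pip
  - pip:
      - torch==1.11.0
      - torch-geometric==2.0.4
      - pytorch-lightning==1.5.10
      - e3nn==0.5.0
      - pymatgen==2022.3.7
      - tensorboard==2.8.0
      - tqdm
      - pyyaml
```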
To set up the Python environment for HamGNN, you have two options:

- **Using `environment.yaml`:** Run the following command to create the environment: `conda env create -f environment.yaml`
  Note: the environment created with the current `environment.yaml` may result in the following error during training of the SOC Hamiltonian: `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation`.
- **Using the prebuilt Conda environment:** Alternatively, you can download the prebuilt HamGNN Conda environment from Zenodo. After downloading the `ML.tar.gz` file, extract it into your `conda/envs` directory. Recommendation: while this approach may seem less elegant, it is currently the more reliable option.
HamGNN requires the tight-binding Hamiltonian generated by OpenMX, so you should be familiar with the basic OpenMX parameters and how to use them. OpenMX can be downloaded from the OpenMX website.

`openmx_postprocess` is a modified version of OpenMX for computing the overlap matrices and other Hamiltonian matrices analytically. It stores the computed data in a binary file called `overlap.scfout`. To install `openmx_postprocess`:

- First, install the GSL library.
- Modify the `makefile` in the `openmx_postprocess` directory:
  - Set `GSL_lib` to the library path of GSL.
  - Set `GSL_include` to the include path of GSL.
  - Set `MKLROOT` to the Intel MKL path.
  - Set `CMPLR_ROOT` to the Intel compiler path.
- After modifying the `makefile`, execute `make` to generate the two executable programs `openmx_postprocess` and `read_openmx`.

`read_openmx` is a binary executable used to export the matrices in the `overlap.scfout` file to `HS.json`.
## Installation

To install HamGNN, run the following commands:

```
git clone https://github.com/QuantumLab-ZY/HamGNN.git
cd HamGNN
python setup.py install
```

If you are upgrading from an older version of HamGNN, uninstall the previous version first:

```
pip uninstall HamGNN
```

Ensure that any residual files in the `site-packages` directory (e.g., `HamGNN-x.x.x-py3.9.egg/HamGNN`) are deleted before installing the new version.
## Usage

To prepare a training dataset:

- **Generate structure files:** Create structure files (e.g., POSCAR or CIF) via molecular dynamics or random perturbation.
- **Convert to OpenMX format:** Edit the `poscar2openmx.yaml` file with the appropriate path settings (a sketch of this file is given below) and run: `poscar2openmx --config path/to/poscar2openmx.yaml`
  This converts the structures into OpenMX's `.dat` format.
- **Run OpenMX:** Perform static calculations on the structure files to generate the `.scfout` binary files containing the Hamiltonian and overlap matrix information.
- **Process with openmx_postprocess:** Run `openmx_postprocess` to generate the `overlap.scfout` file, which contains the Hamiltonian matrix `H0` that is independent of the self-consistent charge density.
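As a rough illustration of the conversion step, a `poscar2openmx.yaml` file mainly tells the converter where to find the structure files and where to write the generated OpenMX inputs. The key names below are hypothetical placeholders, not the actual schema; consult the template file shipped with HamGNN for the real fields.

```yaml
# Hypothetical sketch of poscar2openmx.yaml; the key names below are
# placeholders only, see the template distributed with HamGNN.
structure_path: ./structures/POSCAR_*   # structure files to convert (hypothetical key)
output_dir: ./openmx_inputs             # where the generated .dat files are written (hypothetical key)
basis_settings:                         # per-element OpenMX basis/pseudopotential strings (hypothetical key)
  Si: "Si7.0-s2p2d1"
```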
If you already have a trained model, you can prepare evaluation data for new structures in a manner similar to the training dataset, with one key difference:
- **Skip the OpenMX calculations:** Instead of running OpenMX, you can directly treat the `overlap.scfout` file (generated by the `openmx_postprocess` tool) as if it were the `.scfout` file produced by OpenMX. This allows you to bypass the actual OpenMX calculations for evaluation purposes.
- Set the appropriate paths in the `graph_data_gen.yaml` file (a sketch of this file is given below).
- Run the following command to convert the structural and Hamiltonian data into a single input file for the HamGNN network: `graph_data_gen --config graph_data_gen.yaml`
  This generates the `graph_data.npz` file, which is used as input to HamGNN.
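The `graph_data_gen.yaml` file essentially points the converter at the `.scfout` results and tells it where to write `graph_data.npz`. The field names below are hypothetical placeholders used only to illustrate the idea; see the template shipped with HamGNN for the real schema.

```yaml
# Hypothetical sketch of graph_data_gen.yaml; key names are placeholders.
nao_max: 26                       # maximum number of atomic orbitals in the dataset
scfout_paths: ./openmx_runs       # folders containing the .scfout / overlap.scfout files (hypothetical key)
graph_data_save_path: ./input     # directory where graph_data.npz is written (hypothetical key)
```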
With `graph_data.npz` in place, train and use the network as follows:

- **Configure the network:** Set the appropriate parameters in the `config.yaml` file for the network and training configuration.
- **Train HamGNN:** Run the training with: `HamGNN2.0 --config config.yaml` (or `HamGNN1.0 --config config.yaml`, depending on the installed version).
- **Monitor training:** Use TensorBoard to track the training progress: `tensorboard --logdir train_dir`
  where `train_dir` is the directory in which HamGNN saves the training logs, as specified in `config.yaml`.
- **Prediction:** After training, the model can be used for predictions (an example `setup` block is shown below):
  - Convert the structures to be predicted into `graph_data.npz`.
  - Set `checkpoint_path` in `config.yaml` to the trained model's path and `stage` to `test`.
  - Run: `HamGNN2.0 --config config.yaml`
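For example, switching a trained model to prediction mode only touches a few keys in the `setup` block of `config.yaml`. The parameter names are the ones documented in the parameter section below; the checkpoint path and GPU choice are illustrative.

```yaml
# Illustrative setup block for prediction; the checkpoint path is an example.
setup:
  stage: test                     # run the network in test/prediction mode
  property: hamiltonian
  GNN_Net: HamGNN_pre
  num_gpus: [0]
  load_from_checkpoint: true
  checkpoint_path: ./train_dir/version_0/checkpoints/model.ckpt
```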
After the Hamiltonian matrix training, use the trained network to fine-tune the model for energy-band predictions (an illustrative `config.yaml` fragment is shown below):

- Set `checkpoint_path` to the trained model's weight file.
- Enable `load_from_checkpoint = True`.
- Set a smaller learning rate (`lr = 0.0001`).
- Add a `band_energy` loss to the `losses_metrics` and `metrics` sections, and set its `loss_weight` to 0.01 of the Hamiltonian's `loss_weight`.
- Enable `calculate_band_energy` and set the required parameters (`num_k`, `band_num_control`, `k_path`).
- Start training again.
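A `config.yaml` fragment for this fine-tuning stage could look roughly as follows. The parameter names come from the steps above and the parameter section below; the exact layout of the `losses`/`metrics` entries and the concrete numbers are illustrative assumptions, not a tested input.

```yaml
# Illustrative band-energy fine-tuning fragment; the structure of the loss
# entries and the numeric values are assumptions.
setup:
  load_from_checkpoint: true
  checkpoint_path: ./train_dir/version_0/checkpoints/model.ckpt  # trained Hamiltonian model (example path)
optim_params:
  lr: 0.0001                      # smaller learning rate for fine-tuning
losses_metrics:
  losses:
    - metric: mae
      prediction: hamiltonian
      target: hamiltonian
      loss_weight: 1.0
    - metric: mae
      prediction: band_energy
      target: band_energy
      loss_weight: 0.01           # 0.01 x the Hamiltonian loss_weight
  metrics:
    - metric: mae
      prediction: band_energy
      target: band_energy
HamGNN_out:
  calculate_band_energy: true
  num_k: 4
  band_num_control: 8
  k_path: auto
```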
To calculate the band structure:

- Update the `band_cal.yaml` configuration file with the correct path to the Hamiltonian data (a sketch of this file is given below).
- Execute the band structure calculation: `band_cal --config band_cal.yaml`
- **Enable parallelism:** To run in parallel, add this to your job script: `export OMP_NUM_THREADS=<ncpus_per_node>`
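As a rough picture, `band_cal.yaml` mainly needs the location of the predicted Hamiltonian data and a few diagonalization settings. The key names below are hypothetical placeholders; refer to the `band_cal.yaml` template distributed with HamGNN for the actual fields.

```yaml
# Hypothetical sketch of band_cal.yaml; key names are placeholders.
hamiltonian_path: ./prediction    # folder with the predicted Hamiltonian data (hypothetical key)
nao_max: 26                       # same nao_max as used for training
num_bands: 20                     # number of bands to diagonalize (hypothetical key)
save_dir: ./band_structure        # output folder (hypothetical key)
```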
## Support for ABACUS Software

HamGNN includes utilities for supporting the ABACUS software. These tools, located in the `utils_abacus` directory, include:

- `abacus_postprocess` to export the Hamiltonian matrix `H0`.
- `poscar2abacus.py` for generating ABACUS structure files.
- `graph_data_gen_abacus.py` for generating graph data in the `graph_data.npz` format.

For detailed instructions on using these tools, refer to the provided scripts.
## Diagonalizing Hamiltonian Matrices for Large-Scale Systems

For large systems, diagonalizing the Hamiltonian matrix with the serial `band_cal` script may be challenging. To address this, we provide a parallelized version, `band_cal_parallel`. Note, however, that some MKL environments may trigger a bug (`Intel MKL FATAL ERROR: Cannot load symbol MKLMPI_Get_wrappers`). Users can try the solutions provided in Issues #18 and #12 to resolve this issue (thanks to the help from flamingoXu and newplay).

Install the parallel tools:

```
pip install mpitool-0.0.1-cp39-cp39-manylinux1_x86_64.whl
pip install band_cal_parallel-0.1.12-py3-none-any.whl
```

Then run the following command with multiple CPUs:

```
mpirun -np ncpus band_cal_parallel --config band_cal_parallel.yaml
```
## Explanation of the parameters in config.yaml

The input parameters in `config.yaml` are divided into different modules, which mainly include `setup`, `dataset_params`, `losses_metrics`, `optim_params`, and the network-related parameters (`HamGNN_pre` and `HamGNN_out`). Most of the parameters work well with their default values. The following introduces some commonly used parameters in each module; a skeleton `config.yaml` assembled from these parameters is shown at the end of this section. Please note that the parameters listed here are specific to HamGNN v1.0. We plan to add annotations for the parameters of HamGNN 2.0 in the future; however, users can typically understand the purpose of each parameter from its name.
- `setup`:
  - `stage`: Select the state of the network: training (`fit`) or testing (`test`).
  - `GNN_Net`: Use `HamGNN_pre` for normal Hamiltonian fitting and `HamGNN_pre_charge` for fitting the Hamiltonian of charged defects.
  - `property`: Select the type of physical quantity to be output by the network; generally set to `hamiltonian`.
  - `num_gpus`: Number of GPUs to train on (`int`), or which GPUs to train on (`list` or `str`), applied per node.
  - `resume`: Resume training (`true`) or start from scratch (`false`).
  - `checkpoint_path`: Path of the checkpoint from which training is resumed (`stage` = `fit`) or path to the checkpoint you wish to test (`stage` = `test`).
  - `load_from_checkpoint`: If set to `true`, the model weights are initialized from `checkpoint_path`.
- `dataset_params`:
  - `graph_data_path`: The directory where the processed, compressed graph data file (`graph_data.npz`) is stored.
  - `batch_size`: The number of samples or data points processed together in a single forward and backward pass during training. Default: 1.
  - `train_ratio`: The proportion of training samples in the entire dataset.
  - `val_ratio`: The proportion of validation samples in the entire dataset.
  - `test_ratio`: The proportion of test samples in the entire dataset.
- `losses_metrics`:
  - `losses`: Defines multiple loss functions and their respective weights in the total loss value. Currently, HamGNN supports `mse`, `mae`, and `rmse`.
  - `metrics`: A variety of metric functions can be defined to evaluate the accuracy of the model on the validation and test sets.
- `optim_params`:
  - `min_epochs`: Force training for at least this many epochs.
  - `max_epochs`: Stop training once this number of epochs is reached.
  - `lr`: The learning rate; the default value is 0.001.
- `profiler_params`:
  - `train_dir`: The folder for saving training information and prediction results. This directory can be read by TensorBoard to monitor the training process.
- `HamGNN_pre`: The representation network that generates the node and pair-interaction features.
  - `num_types`: The maximum number of atomic types used to build the one-hot vectors for atoms.
  - `cutoff`: The cutoff radius adopted in the envelope function for interatomic distances.
  - `cutoff_func`: Which envelope function is used for interatomic distances. Options: `cos` (cosine envelope function) or `pol` (polynomial envelope function).
  - `rbf_func`: The radial basis function type used to expand the interatomic distances.
  - `num_radial`: The number of Bessel basis functions.
  - `num_interaction_layers`: The number of interaction (orbital convolution) layers.
  - `add_edge_tp`: Whether to use the tensor product of the features of atoms i and j to construct the pair-interaction features. This option requires a significant amount of memory, but it can sometimes improve accuracy.
  - `irreps_edge_sh`: Spherical harmonic representation of the orientation of an edge.
  - `irreps_node_features`: O(3) irreducible representations of the initial atomic features.
  - `irreps_edge_output`: O(3) irreducible representations of the edge features to output.
  - `irreps_node_output`: O(3) irreducible representations of the atomic features to output.
  - `feature_irreps_hidden`: Intermediate O(3) irreducible representations of the atomic features in the convolution.
  - `irreps_triplet_output` (deprecated): O(3) irreducible representations of the triplet features to output.
  - `invariant_layers`: The number of layers of the MLP that maps the invariant edge embeddings to the weights of each tensor-product path.
  - `invariant_neurons`: The number of neurons in the MLP that maps the invariant edge embeddings to the weights of each tensor-product path.
  - `num_charge_attr_feas`: The number of features used for the doping charge when `GNN_Net` is set to `HamGNN_pre_charge`.
- `HamGNN_out`: The output layer that transforms the crystal representation into the Hamiltonian matrix.
  - `nao_max`: Set according to the maximum number of atomic orbitals in the dataset; it can be `14`, `19`, or `26`. For short-period elements such as C, Si, and O, a `nao_max` of 14 is sufficient; the number of atomic basis functions for most common elements does not exceed 19. Setting `nao_max` to 26 allows the description of all elements supported by OpenMX. For ABACUS Hamiltonians, `nao_max` can be set to either `27` (without Al, Hf, Ta, W) or `40` (supporting all elements in ABACUS).
  - `add_H0`: Generally `true`; the complete Hamiltonian is predicted as the sum of H_scf and H_nonscf (H0).
  - `symmetrize`: If set to `true`, the Hermitian symmetry constraint is imposed on the Hamiltonian.
  - `calculate_band_energy`: Whether to calculate the energy bands to train the model.
  - `num_k`: The number of k-points to use when calculating the energy bands.
  - `band_num_control`: `dict`: controls how many orbitals are considered for each atom in the energy bands; `int`: [vbm - num, vbm + num]; `null`: all bands.
  - `k_path`: `auto`: automatically determine the k-point path; `null`: random k-point path; `list`: a list of k-point paths provided by the user.
  - `soc_switch`: If `true`, fit the SOC Hamiltonian.
  - `nonlinearity_type`: `norm` or `gate` activation as the nonlinear activation function.
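Putting the modules together, a HamGNN v1.0 `config.yaml` has roughly the following shape. The keys are the ones described above; the values shown (cutoff, epoch counts, loss-entry layout, and so on) are illustrative examples rather than a tested input.

```yaml
# Skeleton of a HamGNN v1.0 config.yaml; values are illustrative only.
setup:
  GNN_Net: HamGNN_pre
  property: hamiltonian
  stage: fit
  num_gpus: [0]
  resume: false
  load_from_checkpoint: false
  checkpoint_path: null
dataset_params:
  graph_data_path: ./input        # directory containing graph_data.npz
  batch_size: 1
  train_ratio: 0.8
  val_ratio: 0.1
  test_ratio: 0.1
losses_metrics:
  losses:
    - metric: mae
      prediction: hamiltonian
      target: hamiltonian
      loss_weight: 1.0
  metrics:
    - metric: mae
      prediction: hamiltonian
      target: hamiltonian
optim_params:
  lr: 0.001
  min_epochs: 100
  max_epochs: 3000
profiler_params:
  train_dir: ./train_dir
HamGNN_pre:
  num_types: 64
  cutoff: 6.0
  cutoff_func: cos
  rbf_func: bessel
  num_radial: 64
  num_interaction_layers: 3
  add_edge_tp: false
HamGNN_out:
  nao_max: 26
  add_H0: true
  symmetrize: true
  calculate_band_energy: false
  soc_switch: false
  nonlinearity_type: gate
```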
## Minimum irreps for node and edge features in config.yaml

```python
from e3nn import o3

# Atomic basis 'sssppd': three s, two p, and two d orbitals per atom
row = col = o3.Irreps("1x0e+1x0e+1x0e+1x1o+1x1o+1x2e+1x2e")

ham_irreps = o3.Irreps()
# Each Hamiltonian block between orbitals with angular momenta li and lj
# decomposes into irreps with L = |li - lj|, ..., li + lj and parity (-1)^(li + lj).
for _, li in row:
    for _, lj in col:
        for L in range(abs(li.l - lj.l), li.l + lj.l + 1):
            ham_irreps += o3.Irrep(L, (-1) ** (li.l + lj.l))

print(ham_irreps.sort()[0].simplify())
```

Output: `17x0e+20x1o+8x1e+8x2o+20x2e+8x3o+4x3e+4x4e`
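The printed result is the smallest set of irreducible representations that can hold all Hamiltonian blocks for this orbital basis. As an illustration (not a tested configuration), it would serve as a lower bound for the output irreps of the representation network in `config.yaml`:

```yaml
# Illustrative use of the computed minimum irreps in config.yaml.
HamGNN_pre:
  irreps_edge_output: 17x0e+20x1o+8x1e+8x2o+20x2e+8x3o+4x3e+4x4e
  irreps_node_output: 17x0e+20x1o+8x1e+8x2o+20x2e+8x3o+4x3e+4x4e
```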
## References

The papers related to HamGNN:
[1] Transferable equivariant graph neural networks for the Hamiltonians of molecules and solids
[2] Universal Machine Learning Kohn-Sham Hamiltonian for Materials
[4] Topological interfacial states in ferroelectric domain walls of two-dimensional bismuth
[5] Transferable Machine Learning Approach for Predicting Electronic Structures of Charged Defects
## Code Contributors

- Yang Zhong (Fudan University)
- Changwei Zhang (Fudan University)
- Zhenxing Dai (Fudan University)
- Shixu Liu (Fudan University)
- Hongyu Yu (Fudan University)
- Yuxing Ma (Fudan University)
## Project Leaders

- Hongjun Xiang (Fudan University)
- Xingao Gong (Fudan University)