A large language model (LLM) for Rydberg atom array physics. Manuscript available on arXiv.
The `config.yaml` file defines the hyperparameters for:
- Model architecture
- Training settings
- Data loading
- Others
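As an illustration, a config of this shape might look like the following sketch. The key names here are hypothetical stand-ins, not the repository's actual schema; see the shipped `config.yaml` for the real fields.

```yaml
# Hypothetical config.yaml sketch -- actual keys are defined by the repository
model:
  num_encoder_layers: 2
  num_decoder_layers: 2
  d_model: 32
  num_heads: 8
training:
  learning_rate: 1.0e-3
  batch_size: 128
  max_epochs: 100
data:
  num_workers: 4
```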
To train RydbergGPT locally, execute `main.py` with:

```sh
python main.py --config_name=config_small.yaml
```
Clone the repository using the following command:

```sh
git clone https://github.com/PIQuIL/RydbergGPT
```
Create a conda environment:

```sh
conda create --name rydberg_env python=3.11
```

and finally install via pip in developer mode:

```sh
cd RydbergGPT
pip install -e .
```
Documentation is implemented with MkDocs and available at https://piquil.github.io/RydbergGPT.
Consider the standard Rydberg Hamiltonian of the form:

$$
\hat{H} = \sum_{i < j} \frac{C_6}{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert^6}\, \hat{n}_i \hat{n}_j \;-\; \delta \sum_i \hat{n}_i \;-\; \frac{\Omega}{2} \sum_i \hat{\sigma}_i^{x}
$$

Here, $\Omega$ is the Rabi frequency, $\delta$ is the detuning, $C_6 = \Omega R_b^6$ sets the interaction strength through the blockade radius $R_b$, $\hat{n}_i = | r_i \rangle \langle r_i |$ is the occupation operator of qubit $i$, and $\hat{\sigma}_i^{x} = | g_i \rangle \langle r_i | + | r_i \rangle \langle g_i |$.
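The van der Waals interaction coefficients $C_6 / \lVert \mathbf{r}_i - \mathbf{r}_j \rVert^6$ in the Hamiltonian above depend only on the atom positions. A minimal sketch of computing them for a small square lattice (pure Python, with illustrative values, not the repository's implementation):

```python
# Sketch: pairwise Rydberg interaction coefficients C6 / ||r_i - r_j||^6
# for a small array of atoms. Values are illustrative only.

def interaction_matrix(positions, c6=1.0):
    """Return V with V[i][j] = c6 / ||r_i - r_j||^6 (0 on the diagonal)."""
    n = len(positions)
    v = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dist2 = dx * dx + dy * dy
            v[i][j] = v[j][i] = c6 / dist2 ** 3  # ||r||^6 = (||r||^2)^3
    return v

# 2x2 square lattice with unit spacing
positions = [(0, 0), (1, 0), (0, 1), (1, 1)]
V = interaction_matrix(positions)
```

Nearest neighbours at unit distance get coefficient $C_6$, while the diagonal pair at distance $\sqrt{2}$ is suppressed by a factor of $2^3 = 8$, illustrating how sharply the blockade interaction decays.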
Vanilla transformer architecture taken from Attention is All You Need.

- $\mathbf{x} =$ experimental settings
- $\sigma_i =$ one-hot encoding of measured qubit $i$
- $p_{\theta}(\sigma_i | \sigma_{< i}) =$ neural network conditional probability distribution of qubit $i$
The transformer encoder represents the Rydberg Hamiltonian as a sequence.
The transformer decoder represents the corresponding ground state wavefunction.
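The decoder's autoregressive factorization, $p_{\theta}(\sigma) = \prod_i p_{\theta}(\sigma_i | \sigma_{< i})$, can be sketched with a toy conditional. The conditional below is a made-up stand-in for the trained network, chosen only to mimic blockade-like suppression:

```python
import itertools

def toy_conditional(prev_bits):
    """Stand-in for p_theta(sigma_i = 1 | sigma_<i): a made-up rule that
    suppresses excitation right after an excited neighbour (illustrative)."""
    if prev_bits and prev_bits[-1] == 1:
        return 0.1  # blockade-like suppression (hypothetical number)
    return 0.5

def joint_probability(bits):
    """p(sigma) as the product of conditionals over the qubit sequence."""
    p = 1.0
    for i, b in enumerate(bits):
        p1 = toy_conditional(bits[:i])
        p *= p1 if b == 1 else 1.0 - p1
    return p

# Any set of conditionals defines a normalized distribution over configurations:
total = sum(joint_probability(bits)
            for bits in itertools.product([0, 1], repeat=4))
```

Because each conditional is a valid probability, the product automatically normalizes over all $2^N$ configurations, which is what lets the decoder both sample measurements and evaluate their probabilities.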
Consider setting
Data is available on PennyLane Datasets.