deploy: 3c11f14

zulissimeta committed Apr 16, 2024
1 parent 8c3179d commit 73481a9
Showing 31 changed files with 3,390 additions and 2,538 deletions.
667 changes: 337 additions & 330 deletions _downloads/5fdddbed2260616231dbf7b0d94bb665/train.txt

Large diffs are not rendered by default.

67 changes: 34 additions & 33 deletions _downloads/819e10305ddd6839cd7da05935b17060/mass-inference.txt
@@ -1,17 +1,17 @@
-2024-04-15 21:17:17 (INFO): Project root: /home/runner/work/ocp/ocp
+2024-04-16 17:51:46 (INFO): Project root: /home/runner/work/ocp/ocp
 /opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/cuda/amp/grad_scaler.py:126: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
   warnings.warn(
-2024-04-15 21:17:19 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
-2024-04-15 21:17:19 (INFO): amp: true
+2024-04-16 17:51:47 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
+2024-04-16 17:51:47 (INFO): amp: true
 cmd:
-  checkpoint_dir: ./checkpoints/2024-04-15-21-17-52
-  commit: 341a16b
+  checkpoint_dir: ./checkpoints/2024-04-16-17-50-56
+  commit: 3c11f14
   identifier: ''
-  logs_dir: ./logs/tensorboard/2024-04-15-21-17-52
+  logs_dir: ./logs/tensorboard/2024-04-16-17-50-56
   print_every: 10
-  results_dir: ./results/2024-04-15-21-17-52
+  results_dir: ./results/2024-04-16-17-50-56
   seed: 0
-  timestamp_id: 2024-04-15-21-17-52
+  timestamp_id: 2024-04-16-17-50-56
 dataset:
   a2g_args:
     r_energy: false
@@ -122,28 +122,29 @@ test_dataset:
 trainer: ocp
 val_dataset: null
 
-2024-04-15 21:17:19 (INFO): Loading dataset: ase_db
-Traceback (most recent call last):
-  File "/home/runner/work/ocp/ocp/main.py", line 89, in <module>
-    Runner()(config)
-  File "/home/runner/work/ocp/ocp/main.py", line 34, in __call__
-    with new_trainer_context(args=args, config=config) as ctx:
-  File "/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/contextlib.py", line 137, in __enter__
-    return next(self.gen)
-           ^^^^^^^^^^^^^^
-  File "/home/runner/work/ocp/ocp/ocpmodels/common/utils.py", line 977, in new_trainer_context
-    trainer = trainer_cls(
-              ^^^^^^^^^^^^
-  File "/home/runner/work/ocp/ocp/ocpmodels/trainers/ocp_trainer.py", line 95, in __init__
-    super().__init__(
-  File "/home/runner/work/ocp/ocp/ocpmodels/trainers/base_trainer.py", line 176, in __init__
-    self.load()
-  File "/home/runner/work/ocp/ocp/ocpmodels/trainers/base_trainer.py", line 198, in load
-    self.load_datasets()
-  File "/home/runner/work/ocp/ocp/ocpmodels/trainers/base_trainer.py", line 281, in load_datasets
-    self.train_dataset = registry.get_dataset_class(
-                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-  File "/home/runner/work/ocp/ocp/ocpmodels/datasets/ase_datasets.py", line 114, in __init__
-    raise ValueError(
-ValueError: No valid ase data found! Double check that the src path and/or glob search pattern gives ASE compatible data: data.db
-Elapsed time = 3.9 seconds
+2024-04-16 17:51:47 (INFO): Loading dataset: ase_db
+2024-04-16 17:51:47 (INFO): rank: 0: Sampler created...
+2024-04-16 17:51:47 (INFO): Batch balancing is disabled for single GPU training.
+2024-04-16 17:51:47 (INFO): rank: 0: Sampler created...
+2024-04-16 17:51:47 (INFO): Batch balancing is disabled for single GPU training.
+2024-04-16 17:51:47 (INFO): Loading model: gemnet_t
+2024-04-16 17:51:49 (INFO): Loaded GemNetT with 31671825 parameters.
+2024-04-16 17:51:49 (WARNING): Model gradient logging to tensorboard not yet supported.
+2024-04-16 17:51:49 (INFO): Loading checkpoint from: /tmp/ocp_checkpoints/gndt_oc22_all_s2ef.pt
+2024-04-16 17:51:49 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
+2024-04-16 17:51:49 (WARNING): Scale factor comment not found in model
+2024-04-16 17:51:49 (INFO): Predicting on test.
+device 0:   0%|          | 0/3 [00:00<?, ?it/s]/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
+  storage = elem.storage()._new_shared(numel)
+/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
+  storage = elem.storage()._new_shared(numel)
+/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
+  warnings.warn(
+device 0: 100%|███████████████████████████████████| 3/3 [00:06<00:00, 2.09s/it]
+/home/runner/work/ocp/ocp/ocpmodels/trainers/ocp_trainer.py:510: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
+  predictions[key] = np.array(predictions[key])
+/home/runner/work/ocp/ocp/ocpmodels/trainers/base_trainer.py:840: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
+  np.array(gather_results[k])[idx]
+2024-04-16 17:51:56 (INFO): Writing results to ./results/2024-04-16-17-50-56/ocp_predictions.npz
+2024-04-16 17:51:56 (INFO): Total time taken: 6.417162895202637
+Elapsed time = 12.7 seconds
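
The substance of this diff: the previous deploy's inference run died while loading `data.db` (`ValueError: No valid ase data found!`), while the new run loads the dataset, restores the GemNetT checkpoint, and writes predictions in about 6 seconds. The failure is most likely related to the subset-database rewrite in `_sources/core/inference.md` further down this commit. A minimal sketch (my addition, not part of this commit) of the kind of sanity check that would have caught the unreadable database before launching the trainer, using only the public `ase.db` API:

```python
# Sketch (not part of this commit): verify an ASE database has readable rows
# before pointing the OCP trainer at it. 'data.db' is the path from the log above.
import ase.db

with ase.db.connect('data.db') as db:
    n = db.count()  # total number of rows in the database
    if n == 0:
        raise ValueError("data.db is empty; inference would fail exactly as in the old log")
    print(f"data.db contains {n} rows")
```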
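The new log still emits two `VisibleDeprecationWarning`s from `ocp_trainer.py:510` and `base_trainer.py:840`. They come from building NumPy arrays out of rows of unequal length; a self-contained sketch (my addition) of what NumPy is complaining about and the `dtype=object` remedy the warning suggests:

```python
# What the VisibleDeprecationWarning in the log above is about: building an
# ndarray from ragged nested sequences. Passing dtype=object keeps the ragged
# structure and silences the warning (newer NumPy raises an error without it).
import numpy as np

ragged = [[1.0, 2.0], [3.0, 4.0, 5.0]]        # e.g. per-frame results with different atom counts
predictions = np.array(ragged, dtype=object)  # explicit object dtype: no warning
print(predictions.shape)                      # (2,) -- an object array of lists
```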
2 changes: 1 addition & 1 deletion _sources/core/fine-tuning/fine-tuning-oxides.md
@@ -209,7 +209,7 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
        update={'gpus': 1,
                'task.dataset': 'ase_db',
                'optim.eval_every': 1,
-               'optim.max_epochs': 10,
+               'optim.max_epochs': 5,
                'optim.batch_size': 4,
                'logger':'tensorboard', # don't use wandb!
                # Train data
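
The only change here (and its twin in `_sources/tutorials/advanced/fine-tuning-in-python.md` below) halves the tutorial's fine-tuning budget from 10 epochs to 5. A sketch (my addition, assuming PyYAML) of what the change means once the generated config is loaded back:

```python
# After this commit, the config.yml written by the generate_yml_config call
# above resolves optim.max_epochs to 5 (it was 10). Assumes PyYAML is installed
# and config.yml exists in the working directory.
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

assert config["optim"]["max_epochs"] == 5  # was 10 before this commit
print(config["optim"]["eval_every"], config["optim"]["batch_size"])  # 1, 4
```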
8 changes: 7 additions & 1 deletion _sources/core/inference.md
@@ -39,9 +39,15 @@ import numpy as np
 with ase.db.connect('full_data.db') as full_db:
     with ase.db.connect('data.db',append=False) as subset_db:
         # Select 50 random points for the subset, ASE DB ids start at 1
         for i in np.random.choice(list(range(1,len(full_db)+1)),size=50,replace=False):
-            subset_db.write(full_db.get_atoms(f'id={i}'))
+            atoms = full_db.get_atoms(f'id={i}', add_additional_information=True)
+            if 'tag' in atoms.info['key_value_pairs']:
+                atoms.info['key_value_pairs']['tag'] = int(atoms.info['key_value_pairs']['tag'])
+            subset_db.write(atoms, **atoms.info['key_value_pairs'])
 ```
 
 ```{code-cell} ipython3
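
The rewritten loop copies each row's key-value pairs into the subset database and coerces `tag` to a plain `int`, presumably because `ase.db` only accepts plain `bool`/`int`/`float`/`str` values and a tag read back in another numeric type would not write cleanly. A round-trip check (my addition, not in the tutorial) that the tags survive the copy:

```python
# Round-trip check (my addition): confirm the tags written by the loop above
# are present in the 50-row subset as plain Python ints.
import ase.db

with ase.db.connect('data.db') as db:
    for row in db.select():
        kvp = row.key_value_pairs
        if 'tag' in kvp:
            assert isinstance(kvp['tag'], int), kvp['tag']
print("all tags round-tripped as ints")
```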
6 changes: 1 addition & 5 deletions _sources/core/intro_series.md
@@ -2,8 +2,4 @@
 
 New to chemistry but excited to know how ML can help? Larry Zitnick has made a few intro videos for audiences without a computational chemistry background!
 
-<iframe width="560" height="315" src="https://www.youtube.com/embed/LfnnW6fPkFk?si=XKL2-IH0VJAXuyZL" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
-
-<iframe width="560" height="315" src="https://www.youtube.com/embed/vHny-vpg57c?si=tVoYO_lKyrb5vzcv" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
-
-<iframe width="560" height="315" src="https://www.youtube.com/embed/HGk48aNfJMo?si=BEf35mZDDTBCAUQx" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
+<iframe width="560" height="315" src="https://www.youtube.com/embed/videoseries?si=IDjebRSRSThxX7Ar&amp;list=PLU7acyFOb6DXgCTAi2TwKXaFD_i3C6hSL" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
2 changes: 1 addition & 1 deletion _sources/tutorials/advanced/fine-tuning-in-python.md
@@ -81,7 +81,7 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
        update={'gpus': 1,
                'task.dataset': 'ase_db',
                'optim.eval_every': 1,
-               'optim.max_epochs': 10,
+               'optim.max_epochs': 5,
                'optim.batch_size': 4,
                'logger': 'tensorboard', # don't use wandb unless you already are logged in
                # Train data
9 changes: 6 additions & 3 deletions core/fine-tuning/fine-tuning-oxides.html
@@ -769,7 +769,7 @@ <h1>Fine tuning a model<a class="headerlink" href="#fine-tuning-a-model" title="
 warnings.warn(
 </pre></div>
 </div>
-<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time 67.7 seconds.
+<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time 68.0 seconds.
 </pre></div>
 </div>
 <img alt="../../_images/92bd7f94dd548c8cfc2744eb5890cd23fada1ff98e8dc907657e2eb109af0402.png" src="../../_images/92bd7f94dd548c8cfc2744eb5890cd23fada1ff98e8dc907657e2eb109af0402.png" />
@@ -921,7 +921,7 @@ <h2>Setting up the configuration yaml file<a class="headerlink" href="#setting-u
 <span class="n">update</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;gpus&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
 <span class="s1">&#39;task.dataset&#39;</span><span class="p">:</span> <span class="s1">&#39;ase_db&#39;</span><span class="p">,</span>
 <span class="s1">&#39;optim.eval_every&#39;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
-<span class="s1">&#39;optim.max_epochs&#39;</span><span class="p">:</span> <span class="mi">10</span><span class="p">,</span>
+<span class="s1">&#39;optim.max_epochs&#39;</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span>
 <span class="s1">&#39;optim.batch_size&#39;</span><span class="p">:</span> <span class="mi">4</span><span class="p">,</span>
 <span class="s1">&#39;logger&#39;</span><span class="p">:</span><span class="s1">&#39;tensorboard&#39;</span><span class="p">,</span> <span class="c1"># don&#39;t use wandb!</span>
 <span class="c1"># Train data</span>
@@ -1075,7 +1075,7 @@ <h2>Setting up the configuration yaml file<a class="headerlink" href="#setting-u
 load_balancing: atoms
 loss_energy: mae
 lr_initial: 0.0005
-max_epochs: 10
+max_epochs: 5
 mode: min
 num_workers: 2
 optimizer: AdamW
@@ -1137,6 +1137,9 @@ <h2>Running the training job<a class="headerlink" href="#running-the-training-jo
 <div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time = 1200.5 seconds
 </pre></div>
 </div>
+<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>
+</pre></div>
+</div>
 </div>
 </details>
 </div>
28 changes: 14 additions & 14 deletions core/gotchas.html
@@ -929,7 +929,7 @@ <h1>I get wildly different energies from the different models<a class="headerlin
 warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6857712268829346
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6831002235412598
 </pre></div>
 </div>
 </div>
@@ -1433,7 +1433,7 @@ <h1>To tag or not?<a class="headerlink" href="#to-tag-or-not" title="Link to thi
 warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.42973706126213074
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.4297373294830322
 </pre></div>
 </div>
 </div>
@@ -1483,17 +1483,17 @@ <h1>Stochastic simulation results<a class="headerlink" href="#stochastic-simulat
 warnings.warn(
 </pre></div>
 </div>
-<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.2139865159988403 1.6765759937047162e-06
-1.2139840126037598
-1.2139897346496582
-1.213986873626709
-1.2139887809753418
-1.213986873626709
-1.2139856815338135
+<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.2139861583709717 1.530342725360986e-06
+1.2139849662780762
+1.2139854431152344
+1.2139873504638672
+1.2139854431152344
+1.213989019393921
+1.2139837741851807
+1.2139866352081299
+1.2139861583709717
+1.2139859199523926
+1.2139866352081299
+1.2139878273010254
+1.2139866352081299
+1.2139840126037598
 </pre></div>
 </div>
 </div>
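
The repeated energies above differ only in roughly the sixth decimal place between deploys and between calls. A self-contained check (my addition; values copied from the new run's output) that the scatter really is floating-point-level jitter:

```python
# Mean, standard deviation, and spread of the energies printed by the new run
# above. The peak-to-peak spread is ~5e-6, consistent with non-deterministic
# reduction order rather than any change to the model weights.
import numpy as np

energies = np.array([
    1.2139849662780762, 1.2139854431152344, 1.2139873504638672,
    1.2139854431152344, 1.213989019393921, 1.2139837741851807,
    1.2139866352081299, 1.2139861583709717, 1.2139859199523926,
    1.2139866352081299, 1.2139878273010254, 1.2139866352081299,
    1.2139840126037598,
])
print(energies.mean(), energies.std(), np.ptp(energies))
```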
@@ -1536,7 +1536,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
 warnings.warn(
 </pre></div>
 </div>
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00847937,  0.01409653, -0.05882907], dtype=float32)
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00848317,  0.01409542, -0.05882776], dtype=float32)
 </pre></div>
 </div>
 </div>
@@ -1549,7 +1549,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
 </div>
 </div>
 <div class="cell_output docutils container">
-<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 7.7532604e-08, -7.8813173e-08,  0.0000000e+00], dtype=float32)
+<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([-2.44472176e-09,  1.06636435e-07,  2.38418579e-07], dtype=float32)
 </pre></div>
 </div>
 </div>
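
The two changed outputs in this file are a net force before and after correction: components of order 1e-2 in the first array, float32-noise level (~1e-7) in the second. A self-contained sketch (my addition; the page's own correction method is not visible in this diff) of the standard fix, removing the mean force so the sum vanishes:

```python
# Demonstrates the kind of net-force correction behind the before/after arrays
# above: predicted forces generally don't sum to zero, and subtracting the mean
# force per structure removes the spurious net (translational) component.
# Whether the gotchas page uses exactly this correction is an assumption here.
import numpy as np

rng = np.random.default_rng(0)
forces = rng.normal(scale=0.05, size=(10, 3)).astype(np.float32)  # stand-in for model forces

print(forces.sum(axis=0))      # generally nonzero, like the first array
forces -= forces.mean(axis=0)  # enforce zero net force
print(forces.sum(axis=0))      # ~1e-8 in float32, like the second array
```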