Commit

deploy: d0f61fa
lbluque committed May 15, 2024
1 parent 38af7a2 commit ef123ea
Showing 35 changed files with 3,680 additions and 1,927 deletions.
94 changes: 47 additions & 47 deletions _downloads/5fdddbed2260616231dbf7b0d94bb665/train.txt

Large diffs are not rendered by default.

46 changes: 23 additions & 23 deletions _downloads/819e10305ddd6839cd7da05935b17060/mass-inference.txt
@@ -1,17 +1,17 @@
2024-05-15 20:51:25 (INFO): Project root: /home/runner/work/fairchem/fairchem/src/fairchem
2024-05-15 20:54:21 (INFO): Project root: /home/runner/work/fairchem/fairchem/src/fairchem
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/cuda/amp/grad_scaler.py:126: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
warnings.warn(
2024-05-15 20:51:26 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
2024-05-15 20:51:26 (INFO): amp: true
2024-05-15 20:54:22 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
2024-05-15 20:54:22 (INFO): amp: true
cmd:
checkpoint_dir: ./checkpoints/2024-05-15-20-52-16
checkpoint_dir: ./checkpoints/2024-05-15-20-54-24
commit: d0f61fa
identifier: ''
logs_dir: ./logs/tensorboard/2024-05-15-20-52-16
logs_dir: ./logs/tensorboard/2024-05-15-20-54-24
print_every: 10
results_dir: ./results/2024-05-15-20-52-16
results_dir: ./results/2024-05-15-20-54-24
seed: 0
timestamp_id: 2024-05-15-20-52-16
timestamp_id: 2024-05-15-20-54-24
dataset:
a2g_args:
r_energy: false
@@ -122,25 +122,25 @@ test_dataset:
trainer: ocp
val_dataset: null

2024-05-15 20:51:26 (INFO): Loading dataset: ase_db
2024-05-15 20:51:26 (INFO): rank: 0: Sampler created...
2024-05-15 20:51:26 (INFO): Batch balancing is disabled for single GPU training.
2024-05-15 20:51:26 (INFO): rank: 0: Sampler created...
2024-05-15 20:51:26 (INFO): Batch balancing is disabled for single GPU training.
2024-05-15 20:51:26 (INFO): Loading model: gemnet_t
2024-05-15 20:51:27 (INFO): Loaded GemNetT with 31671825 parameters.
2024-05-15 20:51:27 (WARNING): Model gradient logging to tensorboard not yet supported.
2024-05-15 20:51:28 (INFO): Loading checkpoint from: /tmp/ocp_checkpoints/gndt_oc22_all_s2ef.pt
2024-05-15 20:51:28 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
2024-05-15 20:51:28 (WARNING): Scale factor comment not found in model
2024-05-15 20:51:28 (INFO): Predicting on test.
2024-05-15 20:54:22 (INFO): Loading dataset: ase_db
2024-05-15 20:54:22 (INFO): rank: 0: Sampler created...
2024-05-15 20:54:22 (INFO): Batch balancing is disabled for single GPU training.
2024-05-15 20:54:22 (INFO): rank: 0: Sampler created...
2024-05-15 20:54:22 (INFO): Batch balancing is disabled for single GPU training.
2024-05-15 20:54:22 (INFO): Loading model: gemnet_t
2024-05-15 20:54:24 (INFO): Loaded GemNetT with 31671825 parameters.
2024-05-15 20:54:24 (WARNING): Model gradient logging to tensorboard not yet supported.
2024-05-15 20:54:24 (INFO): Loading checkpoint from: /tmp/ocp_checkpoints/gndt_oc22_all_s2ef.pt
2024-05-15 20:54:24 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
2024-05-15 20:54:24 (WARNING): Scale factor comment not found in model
2024-05-15 20:54:24 (INFO): Predicting on test.
device 0: 0%| | 0/3 [00:00<?, ?it/s]/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
device 0: 33%|███████████▋ | 1/3 [00:03<00:07, 3.73s/it]device 0: 67%|███████████████████████▎ | 2/3 [00:06<00:03, 3.36s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:09<00:00, 2.96s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:09<00:00, 3.11s/it]
2024-05-15 20:51:37 (INFO): Writing results to ./results/2024-05-15-20-52-16/ocp_predictions.npz
2024-05-15 20:51:37 (INFO): Total time taken: 9.476700067520142
Elapsed time = 15.5 seconds
device 0: 33%|███████████▋ | 1/3 [00:02<00:04, 2.21s/it]device 0: 67%|███████████████████████▎ | 2/3 [00:05<00:02, 2.80s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:07<00:00, 2.52s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:07<00:00, 2.54s/it]
2024-05-15 20:54:32 (INFO): Writing results to ./results/2024-05-15-20-54-24/ocp_predictions.npz
2024-05-15 20:54:32 (INFO): Total time taken: 7.774691820144653
Elapsed time = 13.9 seconds
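The inference log above notes that scaling factors loaded from the checkpoint overwrite the model's, and that deleting `scale_dict` from the checkpoint disables this. A checkpoint saved by PyTorch is a plain Python dict once loaded, so the deletion is a one-liner; the sketch below uses a stand-in dict rather than a real `torch.load` result, and the key contents are assumptions for illustration only.

```python
# Stand-in for a checkpoint dict as returned by torch.load (contents assumed,
# not taken from the gndt_oc22_all_s2ef.pt checkpoint in the log above).
checkpoint = {
    "state_dict": {"ff_out.weight": [0.1, 0.2]},
    "scale_dict": {"ff_out": 1.7},
}

# Per the log message: removing `scale_dict` stops the loader from
# overwriting the model's scaling factors with the checkpoint's.
checkpoint.pop("scale_dict", None)

print("scale_dict" in checkpoint)  # False
```

With a real checkpoint file you would load it, pop the key, and save it back with `torch.save` before running inference.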
8 changes: 4 additions & 4 deletions core/fine-tuning/fine-tuning-oxides.html
@@ -1195,7 +1195,7 @@ <h2>Running the training job<a class="headerlink" href="#running-the-training-jo
<span class="expanded">Hide code cell output</span>
</summary>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time = 204.5 seconds
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Elapsed time = 205.4 seconds
</pre></div>
</div>
</div>
@@ -1276,7 +1276,7 @@ <h2>Running the training job<a class="headerlink" href="#running-the-training-jo
warnings.warn(
</pre></div>
</div>
<img alt="../../_images/776c8f543b5b35f922518995c0fd862e5122d23a6fa9037d9b13dc73c37121d8.png" src="../../_images/776c8f543b5b35f922518995c0fd862e5122d23a6fa9037d9b13dc73c37121d8.png" />
<img alt="../../_images/35c4a3e693de286afb1ae6ae44ec31ee74d172204ac1b20b3cebe817135a315b.png" src="../../_images/35c4a3e693de286afb1ae6ae44ec31ee74d172204ac1b20b3cebe817135a315b.png" />
</div>
</div>
<div class="cell docutils container">
@@ -1287,7 +1287,7 @@ <h2>Running the training job<a class="headerlink" href="#running-the-training-jo
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>New MAE = 0.157 eV/atom
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>New MAE = 0.855 eV/atom
</pre></div>
</div>
</div>
@@ -1306,7 +1306,7 @@ <h2>Running the training job<a class="headerlink" href="#running-the-training-jo
</div>
</div>
<div class="cell_output docutils container">
<img alt="../../_images/c66e6605a614b1f6ca39750cf8699198b72c0d4fe2d9c1dc1f60e9b7edbadcd1.png" src="../../_images/c66e6605a614b1f6ca39750cf8699198b72c0d4fe2d9c1dc1f60e9b7edbadcd1.png" />
<img alt="../../_images/261483059c1791a272aec1cd5fda61dcba4658983b84d0429f9138c0ad5fca01.png" src="../../_images/261483059c1791a272aec1cd5fda61dcba4658983b84d0429f9138c0ad5fca01.png" />
</div>
</div>
<p>It is possible to continue refining the fit. The simplest step is to train for more epochs. Eventually the MAE will stabilize, and then it may be necessary to adjust other optimization parameters such as the learning rate (usually by decreasing it).</p>
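The fine-tuning advice above (train for more epochs, then lower the learning rate once the MAE plateaus) amounts to a small config change. The sketch below edits a trainer config expressed as a dict; the `optim` key names mirror common OCP-style configs but are assumptions, not the exact schema used in this commit.

```python
import copy

# Hypothetical OCP-style trainer config (key names are assumptions).
base_config = {
    "optim": {
        "max_epochs": 1,
        "lr_initial": 5e-4,
    },
}

def refine(config, extra_epochs=10, lr_factor=0.1):
    """Return a copy of the config with more epochs and a reduced
    learning rate, mirroring the 'train longer, then decrease the
    learning rate' advice above."""
    cfg = copy.deepcopy(config)
    cfg["optim"]["max_epochs"] += extra_epochs
    cfg["optim"]["lr_initial"] *= lr_factor
    return cfg

refined = refine(base_config)
print(refined["optim"]["max_epochs"])  # 11
```

Deep-copying keeps the original config intact, so the two runs can be compared side by side.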
22 changes: 11 additions & 11 deletions core/gotchas.html
@@ -999,7 +999,7 @@ <h1>I get wildly different energies from the different models<a class="headerlin
warnings.warn(
</pre></div>
</div>
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6718599796295166
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.6822845935821533
</pre></div>
</div>
</div>
@@ -1512,7 +1512,7 @@ <h1>To tag or not?<a class="headerlink" href="#to-tag-or-not" title="Link to thi
warnings.warn(
</pre></div>
</div>
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.42973729968070984
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>-0.42973777651786804
</pre></div>
</div>
</div>
@@ -1567,17 +1567,17 @@ <h1>Stochastic simulation results<a class="headerlink" href="#stochastic-simulat
warnings.warn(
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.2139871835708618 1.2159347534179687e-06
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>1.213985538482666 1.4072756350873667e-06
1.2139852046966553
1.2139861583709717
1.2139863967895508
1.2139885425567627
1.2139849662780762
1.2139873504638672
1.2139873504638672
1.213982343673706
1.2139856815338135
1.2139863967895508
1.2139880657196045
1.2139875888824463
1.2139861583709717
1.2139892578125
1.213984489440918
1.2139859199523926
</pre></div>
</div>
</div>
@@ -1623,7 +1623,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
warnings.warn(
</pre></div>
</div>
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00847836, 0.0140937 , -0.05882788], dtype=float32)
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 0.00848436, 0.01409486, -0.05882823], dtype=float32)
</pre></div>
</div>
</div>
@@ -1636,7 +1636,7 @@ <h1>The forces don’t sum to zero<a class="headerlink" href="#the-forces-don-t-
</div>
</div>
<div class="cell_output docutils container">
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([-7.4738637e-08, -1.3585668e-07, -1.1920929e-07], dtype=float32)
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>array([ 6.4377673e-08, -4.1676685e-08, 2.3841858e-07], dtype=float32)
</pre></div>
</div>
</div>
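The "forces don't sum to zero" hunk above shows the net force dropping from order 1e-2 to order 1e-7 — float32 rounding noise. The check itself is a one-line reduction; the force values below are illustrative, not the model outputs from this diff.

```python
import numpy as np

# Illustrative per-atom float32 forces (not the model outputs above);
# on a free-standing system the rows should sum to ~zero.
forces = np.array(
    [[0.25, -0.10, 0.40],
     [-0.15, 0.30, -0.55],
     [-0.10, -0.20, 0.15]],
    dtype=np.float32,
)

# Sum over atoms; any residual is float32 rounding noise, on the order
# of the ~1e-7 arrays shown in the diff above.
net = forces.sum(axis=0)
print(np.allclose(net, 0.0, atol=1e-6))  # True
```

At float32 precision (machine epsilon ~1.2e-7), residuals of order 1e-7 to 1e-8 are expected even when the exact sum is zero.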
0 comments on commit ef123ea