
Commit

deploy: 3edcf80
zulissimeta committed Apr 17, 2024
1 parent 2b11937 commit 7b61d4f
Showing 193 changed files with 3,152 additions and 6,662 deletions.
2 changes: 1 addition & 1 deletion .buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: ac0276eee55fc00b9e2e0b7128308701
+config: efa2b153c9e066cd129008874a6226a9
tags: 645f666f9bcd5a90fca523b33c5a78b7
1,187 changes: 33 additions & 1,154 deletions _downloads/5fdddbed2260616231dbf7b0d94bb665/train.txt

Large diffs are not rendered by default.

49 changes: 24 additions & 25 deletions _downloads/819e10305ddd6839cd7da05935b17060/mass-inference.txt
@@ -1,17 +1,17 @@
-2024-04-16 23:01:59 (INFO): Project root: /home/runner/work/ocp/ocp
+2024-04-17 16:33:33 (INFO): Project root: /home/runner/work/ocp/ocp
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/cuda/amp/grad_scaler.py:126: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
warnings.warn(
-2024-04-16 23:02:01 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
-2024-04-16 23:02:01 (INFO): amp: true
+2024-04-17 16:33:35 (WARNING): Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
+2024-04-17 16:33:35 (INFO): amp: true
cmd:
-checkpoint_dir: ./checkpoints/2024-04-16-23-02-24
-commit: 46df531
+checkpoint_dir: ./checkpoints/2024-04-17-16-34-08
+commit: 3edcf80
identifier: ''
-logs_dir: ./logs/tensorboard/2024-04-16-23-02-24
+logs_dir: ./logs/tensorboard/2024-04-17-16-34-08
print_every: 10
-results_dir: ./results/2024-04-16-23-02-24
+results_dir: ./results/2024-04-17-16-34-08
seed: 0
-timestamp_id: 2024-04-16-23-02-24
+timestamp_id: 2024-04-17-16-34-08
dataset:
a2g_args:
r_energy: false
@@ -36,7 +36,6 @@ eval_metrics:
- magnitude_error
misc:
- energy_forces_within_threshold
-primary_metric: forces_mae
gpus: 0
logger: tensorboard
loss_fns:
@@ -122,29 +121,29 @@ test_dataset:
trainer: ocp
val_dataset: null

-2024-04-16 23:02:01 (INFO): Loading dataset: ase_db
-2024-04-16 23:02:01 (INFO): rank: 0: Sampler created...
-2024-04-16 23:02:01 (INFO): Batch balancing is disabled for single GPU training.
-2024-04-16 23:02:01 (INFO): rank: 0: Sampler created...
-2024-04-16 23:02:01 (INFO): Batch balancing is disabled for single GPU training.
-2024-04-16 23:02:01 (INFO): Loading model: gemnet_t
-2024-04-16 23:02:03 (INFO): Loaded GemNetT with 31671825 parameters.
-2024-04-16 23:02:03 (WARNING): Model gradient logging to tensorboard not yet supported.
-2024-04-16 23:02:03 (INFO): Loading checkpoint from: /tmp/ocp_checkpoints/gndt_oc22_all_s2ef.pt
-2024-04-16 23:02:03 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
-2024-04-16 23:02:03 (WARNING): Scale factor comment not found in model
-2024-04-16 23:02:03 (INFO): Predicting on test.
+2024-04-17 16:33:35 (INFO): Loading dataset: ase_db
+2024-04-17 16:33:35 (INFO): rank: 0: Sampler created...
+2024-04-17 16:33:35 (INFO): Batch balancing is disabled for single GPU training.
+2024-04-17 16:33:35 (INFO): rank: 0: Sampler created...
+2024-04-17 16:33:35 (INFO): Batch balancing is disabled for single GPU training.
+2024-04-17 16:33:35 (INFO): Loading model: gemnet_t
+2024-04-17 16:33:37 (INFO): Loaded GemNetT with 31671825 parameters.
+2024-04-17 16:33:37 (WARNING): Model gradient logging to tensorboard not yet supported.
+2024-04-17 16:33:37 (INFO): Loading checkpoint from: /tmp/ocp_checkpoints/gndt_oc22_all_s2ef.pt
+2024-04-17 16:33:37 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
+2024-04-17 16:33:37 (WARNING): Scale factor comment not found in model
+2024-04-17 16:33:37 (INFO): Predicting on test.
device 0: 0%| | 0/3 [00:00<?, ?it/s]/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
-device 0: 100%|███████████████████████████████████| 3/3 [00:09<00:00, 3.28s/it]
+device 0: 100%|███████████████████████████████████| 3/3 [00:09<00:00, 3.30s/it]
/home/runner/work/ocp/ocp/ocpmodels/trainers/ocp_trainer.py:510: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
predictions[key] = np.array(predictions[key])
/home/runner/work/ocp/ocp/ocpmodels/trainers/base_trainer.py:840: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
np.array(gather_results[k])[idx]
-2024-04-16 23:02:13 (INFO): Writing results to ./results/2024-04-16-23-02-24/ocp_predictions.npz
-2024-04-16 23:02:13 (INFO): Total time taken: 10.010130405426025
-Elapsed time = 16.4 seconds
+2024-04-17 16:33:47 (INFO): Writing results to ./results/2024-04-17-16-34-08/ocp_predictions.npz
+2024-04-17 16:33:47 (INFO): Total time taken: 10.054125785827637
+Elapsed time = 16.5 seconds
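The `VisibleDeprecationWarning` in the log above comes from calling `np.array` on per-structure result arrays with different lengths (e.g. forces for structures with different atom counts). A minimal sketch of the ragged case and the `dtype=object` fix the warning suggests — an illustration, not the trainer's actual code:

```python
import numpy as np

# Hypothetical per-structure force arrays: a 3-atom and a 5-atom structure.
# np.array on this ragged list triggers the deprecation warning unless
# dtype=object is given explicitly.
ragged = [np.zeros((3, 3)), np.zeros((5, 3))]
arr = np.array(ragged, dtype=object)

print(arr.shape)     # (2,)  -- one object entry per structure
print(arr[1].shape)  # (5, 3)
```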
4 changes: 2 additions & 2 deletions _sources/core/fine-tuning/fine-tuning-oxides.md
@@ -208,8 +208,8 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
'dataset', 'test_dataset', 'val_dataset'],
update={'gpus': 1,
'task.dataset': 'ase_db',
-'optim.eval_every': 1,
-'optim.max_epochs': 4,
+'optim.eval_every': 10,
+'optim.max_epochs': 1,
'optim.batch_size': 4,
'logger':'tensorboard', # don't use wandb!
# Train data
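The dotted keys in the `update` dict above (`'optim.eval_every'`, `'optim.max_epochs'`, …) address nested fields of the generated YAML config. A sketch of how such keys map into a nested dict — `set_by_path` is a hypothetical helper for illustration; the real merge happens inside `generate_yml_config`:

```python
def set_by_path(cfg, dotted_key, value):
    """Set a nested config value from a dotted key, e.g. 'optim.eval_every'."""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for part in parents:
        node = node.setdefault(part, {})  # create intermediate sections
    node[leaf] = value

cfg = {}
for key, value in {"optim.eval_every": 10,
                   "optim.max_epochs": 1,
                   "optim.batch_size": 4}.items():
    set_by_path(cfg, key, value)

print(cfg)  # {'optim': {'eval_every': 10, 'max_epochs': 1, 'batch_size': 4}}
```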
2 changes: 1 addition & 1 deletion _sources/index.md
@@ -4,7 +4,7 @@
[![codecov](https://codecov.io/gh/Open-Catalyst-Project/ocp/graph/badge.svg?token=M606LH5LK6)](https://codecov.io/gh/Open-Catalyst-Project/ocp)

`ocp` is the [Open Catalyst Project](https://opencatalystproject.org/)'s
-library of state-of-the-art machine learning algorithms for catalysis.
+library of state-of-the-art machine learning algorithms for chemistry. We've used it for catalysis and direct air capture (MOF) applications, and we hope it's useful for your projects too!

<div align="left">
<img src="https://user-images.githubusercontent.com/1156489/170388229-642c6619-dece-4c88-85ef-b46f4d5f1031.gif">
4 changes: 2 additions & 2 deletions _sources/tutorials/advanced/fine-tuning-in-python.md
@@ -80,8 +80,8 @@ yml = generate_yml_config(checkpoint_path, 'config.yml',
'dataset', 'test_dataset', 'val_dataset'],
update={'gpus': 1,
'task.dataset': 'ase_db',
-'optim.eval_every': 1,
-'optim.max_epochs': 4,
+'optim.eval_every': 10,
+'optim.max_epochs': 1,
'optim.batch_size': 4,
'logger': 'tensorboard', # don't use wandb unless you already are logged in
# Train data
2 changes: 1 addition & 1 deletion _static/basic.css
@@ -4,7 +4,7 @@
*
* Sphinx stylesheet -- basic theme.
*
-* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
2 changes: 1 addition & 1 deletion _static/doctools.js
@@ -4,7 +4,7 @@
*
* Base JavaScript utilities for all Sphinx HTML documentation.
*
-* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
2 changes: 1 addition & 1 deletion _static/graphviz.css
@@ -4,7 +4,7 @@
*
* Sphinx stylesheet -- graphviz extension.
*
-* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
4 changes: 2 additions & 2 deletions _static/language_data.js
@@ -5,15 +5,15 @@
* This script contains the language-specific data used by searchtools.js,
* namely the list of stopwords, stemmer, scorer and splitter.
*
-* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/

var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"];


-/* Non-minified version is copied as a separate JS file, is available */
+/* Non-minified version is copied as a separate JS file, if available */

/**
* Porter Stemmer
