[r3.0] release v3.0.1 #4487
Merged
Conversation
## Summary by CodeRabbit
- **New Features**
  - Updated multi-task model configuration with a new descriptor for enhanced representation learning.
  - Introduced additional parameters for model initialization and attention mechanisms.
- **Bug Fixes**
  - Replaced outdated descriptor references in model configurations to ensure compatibility with new settings.

(cherry picked from commit e7ad8dc)
## Summary by CodeRabbit
- **New Features**
  - Enhanced model definition handling for improved encapsulation and consistency across different model types.
- **Bug Fixes**
  - Ensured that model definition scripts are correctly set to a JSON string representation for all model instances.

(cherry picked from commit f343a3b)
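For illustration only, a minimal sketch of the behavior described above; the `Model` class, the dictionary contents, and the `model_def_script` attribute name are assumptions for this sketch, not the actual DeePMD-kit API:

```python
import json

# Hypothetical sketch: store the model definition as a JSON string on the
# model object, so every model type exposes the same serialized representation.
model_params = {"descriptor": {"type": "se_e2_a"}, "fitting_net": {"neuron": [240, 240, 240]}}


class Model:
    def __init__(self, params: dict) -> None:
        # Keep a JSON string rather than the raw dict.
        self.model_def_script = json.dumps(params)


model = Model(model_params)
assert isinstance(model.model_def_script, str)
```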
## Summary by CodeRabbit
- **New Features**
  - Introduced a new method `enable_compression` in the PairTabAtomicModel class, indicating that the model does not support compression settings.
- **Documentation**
  - Added docstring for the `enable_compression` method to clarify its purpose.

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 3cdf407)
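As a rough illustration only (the real method body and signature are not shown in this summary), such a method might simply document that pair-table models have nothing to compress:

```python
class PairTabAtomicModel:
    # ... existing pair-table model definition ...

    def enable_compression(self, *args, **kwargs) -> None:
        """Pair tabulation models have no embedding network to compress.

        This sketch treats the call as a no-op; whether the real method
        ignores the request or raises instead is not specified here.
        """
        pass
```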
## Summary by CodeRabbit
- **Bug Fixes**
  - Adjusted the summary printing functionality to ensure it only executes from the main process in distributed settings, preventing duplicate outputs.

(cherry picked from commit 4b92b6d)
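A minimal sketch of the rank-0 guard described above, using `torch.distributed` directly; the `print_summary_once` helper is a hypothetical stand-in for the actual summary-printing call:

```python
import torch.distributed as dist


def print_summary_once(summary: str) -> None:
    # When a distributed process group is active, only rank 0 prints,
    # so the summary appears exactly once instead of once per rank.
    if dist.is_available() and dist.is_initialized() and dist.get_rank() != 0:
        return
    print(summary)  # stand-in for the real summary printer
```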
Improvements to the training process:

* [`deepmd/pt/train/training.py`](diffhunk://#diff-a90c90dc0e6a17fbe2e930f91182805b83260484c9dc1cfac3331378ffa34935R659): Added a check to skip setting the model to training mode if it is already in it; profiling shows that recursively setting the mode on all submodules takes a noticeable amount of time.
* [`deepmd/pt/train/training.py`](diffhunk://#diff-a90c90dc0e6a17fbe2e930f91182805b83260484c9dc1cfac3331378ffa34935L686-L690): Modified the gradient clipping call to pass the `error_if_nonfinite` parameter, and removed the manual check for non-finite gradients and the associated exception raising. A minimal sketch of both changes follows this summary.

## Summary by CodeRabbit
- **New Features**
  - Improved training loop with enhanced error handling and control flow.
  - Updated gradient clipping logic for better error detection.
  - Refined logging functionality for training and validation results.
- **Bug Fixes**
  - Prevented redundant training calls by adding conditional checks.
- **Documentation**
  - Clarified method logic in the `Trainer` class without changing method signatures.

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 037cf3f)
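A self-contained sketch of the two changes; the model and loss are illustrative, and only the `model.training` check and the `error_if_nonfinite` flag reflect the diff described above:

```python
import torch

model = torch.nn.Linear(4, 4)

# Skip the recursive .train() call when the module is already in training mode.
if not model.training:
    model.train()

loss = model(torch.randn(2, 4)).sum()
loss.backward()

# Let clip_grad_norm_ raise on non-finite gradients instead of checking the
# returned norm manually and raising by hand.
grad_norm = torch.nn.utils.clip_grad_norm_(
    model.parameters(), max_norm=1.0, error_if_nonfinite=True
)
```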
## Summary by CodeRabbit
- **New Features**
  - Enhanced training loop to support multi-task training, allowing for more flexible model selection.
- **Improvements**
  - Streamlined `step` function to accept only the step ID, simplifying its usage.
  - Adjusted logging and model saving mechanisms for consistency with the new training flow.
  - Improved random seed management for enhanced reproducibility in data processing.
  - Enhanced error handling in data retrieval to ensure seamless operation during data loading.
  - Added type hints for better clarity in data loader attributes.

Signed-off-by: Chun Cai <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 03c6e49)
…e_r` (deepmodeling#4446)

Fix deepmodeling#4445.

* Modify `DPTabulate` instance creation to include `self.type_one_side` and `self.exclude_types` (see the sketch after this summary).

## Summary by CodeRabbit
- **New Features**
  - Enhanced configurability for the `DescrptSeR` class, allowing users to customize compression behavior with new parameters.
  - Introduced optional parameters for improved management of atom types and interactions during the embedding process.
- **Bug Fixes**
  - Added validation for excluded types to ensure proper handling within the compression logic.

(cherry picked from commit 9b70351)
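A hedged sketch of the shape of the fix, with stand-in classes; the real `DPTabulate` and `DescrptSeR` live in DeePMD-kit and take more arguments, so the signatures below are assumptions for illustration only:

```python
# Illustrative stand-ins; the fix is simply that type_one_side and
# exclude_types are forwarded to DPTabulate instead of being dropped.
class DPTabulate:
    def __init__(self, descriptor, neuron, type_one_side=False, exclude_types=()):
        self.descriptor = descriptor
        self.neuron = neuron
        self.type_one_side = type_one_side
        self.exclude_types = set(exclude_types)


class DescrptSeR:
    def __init__(self, neuron, type_one_side=False, exclude_types=()):
        self.filter_neuron = neuron
        self.type_one_side = type_one_side
        self.exclude_types = exclude_types

    def enable_compression(self) -> None:
        # Forward the descriptor's own settings rather than letting
        # DPTabulate fall back to its defaults.
        self.table = DPTabulate(
            self,
            self.filter_neuron,
            type_one_side=self.type_one_side,
            exclude_types=self.exclude_types,
        )


descr = DescrptSeR([20, 40, 80], type_one_side=True, exclude_types=[(0, 1)])
descr.enable_compression()
```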
Systems are aggregated here:
https://github.com/deepmodeling/deepmd-kit/blob/f343a3b212edab5525502e0261f3068c0b6fb1f6/deepmd/utils/data_system.py#L802
and later initialized here:
https://github.com/deepmodeling/deepmd-kit/blob/f343a3b212edab5525502e0261f3068c0b6fb1f6/deepmd/utils/data_system.py#L809-L810
This process instantiates the `DeepmdData` class, which performs data integrity checks:
https://github.com/deepmodeling/deepmd-kit/blob/e695a91ca6f7a1c9c830ab1c58b7b7a05db3da23/deepmd/utils/data.py#L80-L82
Moreover, the check enumerates all items on all ranks, which is unnecessary and quite slow, so this PR removes it.

## Summary by CodeRabbit
- **New Features**
  - Enhanced flexibility in defining test sizes by allowing percentage input for the `test_size` parameter.
  - Introduced a new method to automatically compute test sizes based on the specified percentage of total data.
  - Improved path handling to accept both string and Path inputs, enhancing usability.
- **Bug Fixes**
  - Improved error handling for invalid paths, ensuring users receive clear feedback when files are not found.
- **Deprecation Notice**
  - The `get_test` method is now deprecated, with new logic implemented for loading test data when necessary.

Signed-off-by: Chun Cai <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jinzhe Zeng <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
(cherry picked from commit 3917cf0)
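A small, hypothetical helper mirroring the percentage-based `test_size` behavior mentioned in the summary above; the function name and exact rounding semantics are assumptions, not the actual DeePMD-kit implementation:

```python
def compute_test_size(test_size, n_frames: int) -> int:
    """Accept either an absolute test size or a percentage string like "10%"."""
    if isinstance(test_size, str) and test_size.endswith("%"):
        ratio = float(test_size.rstrip("%")) / 100.0
        return max(1, int(n_frames * ratio))
    return int(test_size)


print(compute_test_size("10%", 250))  # -> 25
print(compute_test_size(5, 250))      # -> 5
```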
This PR sets the Adam optimizer to use the `fused=True` parameter (a minimal sketch follows at the end of this summary). In the profiling results shown below, this modification brings a 2.75x improvement in the optimizer update (22 ms vs. 8 ms) and roughly a 3% improvement in total step time (922 ms vs. 892 ms). The benchmark case is training a DPA-2 Q3 release model. Please note that the absolute time may differ between steps.

<details><summary>Before</summary>
<p>

![image](https://github.com/user-attachments/assets/d6b05a1d-6e6c-478d-921f-c497718bc551)

</p>
</details>

<details><summary>After</summary>
<p>

![image](https://github.com/user-attachments/assets/b216b919-094c-441f-96a7-146e1e3db483)

</p>
</details>

[Ref](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html):

> The foreach and fused implementations are typically faster than the for-loop, single-tensor implementation, with **fused being theoretically fastest** with both vertical and horizontal fusion. As such, if the user has not specified either flag (i.e., when foreach = fused = None), we will attempt defaulting to the foreach implementation when the tensors are all on CUDA. Why not fused? Since the fused implementation is relatively new, we want to give it sufficient bake-in time.

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved optimizer performance during training by modifying the initialization of the Adam optimizer.
- **Documentation**
  - Updated method signature for clarity in the `Trainer` class.

(cherry picked from commit 104fc36)
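A minimal sketch of the optimizer change; the model and hyperparameters are illustrative, and only the `fused` flag is taken from the PR:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 8).to(device)

# fused=True selects the fused CUDA kernels for Adam; it requires the
# parameters to live on a CUDA device, so fall back to the default otherwise.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    fused=(device == "cuda"),
)
```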
## Summary by CodeRabbit
- **New Features**
  - Updated references in the bibliography for the DPA-2 model to include a new article entry for 2024.
  - Added a new reference for an attention-based descriptor.
- **Bug Fixes**
  - Corrected reference links in documentation to point to updated DOI links instead of arXiv.
- **Documentation**
  - Revised entries in the credits and model documentation to reflect the latest citations and details.
  - Enhanced clarity and detail in fine-tuning documentation for TensorFlow and PyTorch implementations.

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit deaeec9)
## Summary by CodeRabbit
- **Documentation**
  - Updated guidelines for creating and integrating new models in the DeePMD-kit framework.
  - Added new sections on descriptors, fitting networks, and model requirements.
  - Enhanced unit testing section with instructions for regression tests.
  - Updated URL for the DeePMD-GNN plugin to reflect new repository location.

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit 250c907)
…eepmodeling#4484)

## Summary by CodeRabbit
- **Documentation**
  - Updated formatting of the installation guide for the pre-compiled C library.
  - Icons for TensorFlow and JAX are now displayed together in the header.
  - Retained all installation instructions and compatibility notes.

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit 2525ab2)
xref: njzjz/deepmd-gnn#44

## Summary by CodeRabbit
- **New Features**
  - Enhanced error messages for library loading failures on non-Windows platforms.
  - Updated thread management environment variable checks for improved compatibility.
  - Added support for mixed types in tensor input handling, allowing for more flexible configurations.
- **Bug Fixes**
  - Improved error reporting for dynamic library loading issues.

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit cfe17a3)
Codecov Report

Additional details and impacted files

@@            Coverage Diff            @@
##              r3.0    #4487    +/-   ##
=========================================
  Coverage    84.64%   84.65%
=========================================
  Files          614      614
  Lines        57135    57127       -8
  Branches      3486     3487       +1
=========================================
- Hits         48364    48361       -3
+ Misses        7645     7639       -6
- Partials      1126     1127       +1

☔ View full report in Codecov by Sentry.
wanghan-iapcm approved these changes on Dec 23, 2024.
This patch version only contains bug fixes, enhancements, and documentation improvements.
Just a note that the following command is useful for cherry-picking a merged PR into this release branch:
git cherry-pick -x $(gh pr view 4485 --json mergeCommit -q '.mergeCommit.oid')