
Modify DPA-1/2 Citations #4509

Closed
wants to merge 14 commits into from

Conversation


@QuantumMisaka QuantumMisaka commented Dec 26, 2024

As titled, this PR modifies some citations in the docs.

Summary by CodeRabbit

  • Documentation
    • Updated references in the DPA-2 model documentation for improved clarity and accessibility.
    • Changed links from arXiv to Nature for DPA-1 model papers in multiple documents, ensuring accuracy of references.
    • Maintained overall structure and content in all affected documents.

iProzd and others added 14 commits December 23, 2024 15:06
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated multi-task model configuration with a new descriptor for
enhanced representation learning.
- Introduced additional parameters for model initialization and
attention mechanisms.

- **Bug Fixes**
- Replaced outdated descriptor references in model configurations to
ensure compatibility with new settings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

(cherry picked from commit e7ad8dc)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced model definition handling for improved encapsulation and
consistency across different model types.

- **Bug Fixes**
- Ensured that model definition scripts are correctly set to a JSON
string representation for all model instances.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

(cherry picked from commit f343a3b)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new method `enable_compression` in the PairTabAtomicModel
class, indicating that the model does not support compression settings.

- **Documentation**
- Added docstring for the `enable_compression` method to clarify its
purpose.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 3cdf407)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Adjusted the summary printing functionality to ensure it only executes
from the main process in distributed settings, preventing duplicate
outputs.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

(cherry picked from commit 4b92b6d)
Improvements to the training process:

*
[`deepmd/pt/train/training.py`](diffhunk://#diff-a90c90dc0e6a17fbe2e930f91182805b83260484c9dc1cfac3331378ffa34935R659):
Added a check to skip setting the model to training mode if it is already set. Profiling showed that recursively setting the mode on all submodules takes noticeable time.

*
[`deepmd/pt/train/training.py`](diffhunk://#diff-a90c90dc0e6a17fbe2e930f91182805b83260484c9dc1cfac3331378ffa34935L686-L690):
Modified the gradient clipping function to include the
`error_if_nonfinite` parameter, and removed the manual check for
non-finite gradients and the associated exception raising.
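
The first change amounts to caching the training flag; a toy sketch (plain Python standing in for `torch.nn.Module`) shows why skipping the redundant call matters:

```python
class ToyModule:
    """Stand-in for torch.nn.Module that counts train() invocations."""

    def __init__(self) -> None:
        self.training = False
        self.train_calls = 0

    def train(self, mode: bool = True) -> None:
        # In real PyTorch this walks every submodule recursively,
        # which is what the profiling showed to be slow.
        self.train_calls += 1
        self.training = mode


def ensure_train_mode(model: "ToyModule") -> None:
    # Only pay the recursive cost when the mode actually changes.
    if not model.training:
        model.train()
```

For the second change, PyTorch's `torch.nn.utils.clip_grad_norm_` accepts an `error_if_nonfinite` flag that raises when the gradient norm is NaN or inf, which makes the previous manual check and exception redundant.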

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Improved training loop with enhanced error handling and control flow.
	- Updated gradient clipping logic for better error detection.
	- Refined logging functionality for training and validation results.

- **Bug Fixes**
	- Prevented redundant training calls by adding conditional checks.

- **Documentation**
- Clarified method logic in the `Trainer` class without changing method
signatures.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 037cf3f)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced training loop to support multi-task training, allowing for
more flexible model selection.

- **Improvements**
- Streamlined `step` function to accept only the step ID, simplifying
its usage.
- Adjusted logging and model saving mechanisms for consistency with the
new training flow.
- Improved random seed management for enhanced reproducibility in data
processing.
- Enhanced error handling in data retrieval to ensure seamless operation
during data loading.
	- Added type hints for better clarity in data loader attributes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Chun Cai <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 03c6e49)
…e_r` (deepmodeling#4446)

Fix deepmodeling#4445.

* Modify `DPTabulate` instance creation to include `self.type_one_side`
and `self.exclude_types`
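
The shape of the fix can be sketched as follows (class and argument names are simplified; the real `DPTabulate` takes more parameters):

```python
from types import SimpleNamespace


class TabulateSketch:
    """Simplified stand-in for the compression tabulation helper."""

    def __init__(self, descrpt, type_one_side=False, exclude_types=None):
        # Before the fix these two settings were not forwarded, so
        # compression silently ignored the descriptor's configuration.
        self.type_one_side = type_one_side
        self.exclude_types = list(exclude_types or [])


def build_tabulate(descrpt):
    # The fix: forward the descriptor's own settings into the helper.
    return TabulateSketch(
        descrpt,
        type_one_side=descrpt.type_one_side,
        exclude_types=descrpt.exclude_types,
    )
```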

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced configurability for the `DescrptSeR` class, allowing users to
customize compression behavior with new parameters.
- Introduced optional parameters for improved management of atom types
and interactions during the embedding process.

- **Bug Fixes**
- Added validation for excluded types to ensure proper handling within
the compression logic.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

(cherry picked from commit 9b70351)
Systems are aggregated here

https://github.com/deepmodeling/deepmd-kit/blob/f343a3b212edab5525502e0261f3068c0b6fb1f6/deepmd/utils/data_system.py#L802
and later initialized here

https://github.com/deepmodeling/deepmd-kit/blob/f343a3b212edab5525502e0261f3068c0b6fb1f6/deepmd/utils/data_system.py#L809-L810

This process will instantiate `DeepmdData` class, and it will perform
data integrity checks

https://github.com/deepmodeling/deepmd-kit/blob/e695a91ca6f7a1c9c830ab1c58b7b7a05db3da23/deepmd/utils/data.py#L80-L82

In addition, the check enumerates all items on all ranks, which is unnecessary and quite slow, so this PR removes it.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced flexibility in defining test sizes by allowing percentage
input for the `test_size` parameter.
- Introduced a new method to automatically compute test sizes based on
the specified percentage of total data.
- Improved path handling to accept both string and Path inputs,
enhancing usability.

- **Bug Fixes**
- Improved error handling for invalid paths, ensuring users receive
clear feedback when files are not found.

- **Deprecation Notice**
- The `get_test` method is now deprecated, with new logic implemented
for loading test data when necessary.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
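
The percentage form of `test_size` can be computed with a helper along these lines (the function name and rounding rule here are assumptions for illustration):

```python
def compute_test_size(test_size, n_frames: int) -> int:
    """Interpret test_size as an absolute frame count, or as a
    percentage string such as "10%" of the total number of frames."""
    if isinstance(test_size, str) and test_size.endswith("%"):
        ratio = float(test_size[:-1]) / 100.0
        if not 0.0 < ratio <= 1.0:
            raise ValueError(f"invalid test_size percentage: {test_size!r}")
        # Keep at least one frame so the test set is never empty.
        return max(1, int(n_frames * ratio))
    return int(test_size)
```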

---------

Signed-off-by: Chun Cai <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jinzhe Zeng <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
(cherry picked from commit 3917cf0)
This PR sets the Adam optimizer to use the `fused=True` parameter.
For the profiling results shown below, this change brings a 2.75x
speedup in the optimizer update (22 ms vs. 8 ms) and a ~3% improvement
in total step time (922 ms vs. 892 ms). The benchmark case is training a
DPA-2 Q3 release model. Note that absolute times may differ between
steps.

<details><summary>Before</summary>
<p>

![image](https://github.com/user-attachments/assets/d6b05a1d-6e6c-478d-921f-c497718bc551)

</p>
</details>

<details><summary>After</summary>
<p>

![image](https://github.com/user-attachments/assets/b216b919-094c-441f-96a7-146e1e3db483)

</p>
</details>

[Ref](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html):
> The foreach and fused implementations are typically faster than the
for-loop, single-tensor implementation, with **fused being theoretically
fastest** with both vertical and horizontal fusion. As such, if the user
has not specified either flag (i.e., when foreach = fused = None), we
will attempt defaulting to the foreach implementation when the tensors
are all on CUDA. Why not fused? Since the fused implementation is
relatively new, we want to give it sufficient bake-in time.
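
The quoted default-selection logic can be mirrored in a small sketch (pure Python; the actual change in this PR is simply passing `fused=True` when constructing `torch.optim.Adam`):

```python
def select_adam_impl(fused=None, foreach=None, all_tensors_on_cuda=False) -> str:
    """Mirror the documented PyTorch preference order for Adam."""
    if fused:
        return "fused"  # fastest in theory: vertical + horizontal fusion
    if foreach:
        return "foreach"
    if fused is None and foreach is None and all_tensors_on_cuda:
        # PyTorch's default: foreach on CUDA, since fused is newer
        # and still getting "bake-in time".
        return "foreach"
    return "for-loop"  # single-tensor fallback
```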

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved optimizer performance during training by modifying the
initialization of the Adam optimizer.

- **Documentation**
	- Updated method signature for clarity in the `Trainer` class.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

(cherry picked from commit 104fc36)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Updated references in the bibliography for the DPA-2 model to include
a new article entry for 2024.
	- Added a new reference for an attention-based descriptor.

- **Bug Fixes**
- Corrected reference links in documentation to point to updated DOI
links instead of arXiv.

- **Documentation**
- Revised entries in the credits and model documentation to reflect the
latest citations and details.
- Enhanced clarity and detail in fine-tuning documentation for
TensorFlow and PyTorch implementations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit deaeec9)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated guidelines for creating and integrating new models in the
DeePMD-kit framework.
- Added new sections on descriptors, fitting networks, and model
requirements.
	- Enhanced unit testing section with instructions for regression tests.
- Updated URL for the DeePMD-GNN plugin to reflect new repository
location.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit 250c907)
…eepmodeling#4484)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated formatting of the installation guide for the pre-compiled C
library.
- Icons for TensorFlow and JAX are now displayed together in the header.
	- Retained all installation instructions and compatibility notes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>
(cherry picked from commit 2525ab2)
xref: deepmodeling/deepmd-gnn#44

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced error messages for library loading failures on non-Windows
platforms.
- Updated thread management environment variable checks for improved
compatibility.
- Added support for mixed types in tensor input handling, allowing for
more flexible configurations.

- **Bug Fixes**
	- Improved error reporting for dynamic library loading issues.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit cfe17a3)
@github-actions github-actions bot added the Docs label Dec 26, 2024
@QuantumMisaka QuantumMisaka changed the base branch from r3.0 to devel December 26, 2024 00:36

coderabbitai bot commented Dec 26, 2024

📝 Walkthrough

Walkthrough

This pull request focuses on updating documentation references for the DPA-1 and DPA-2 models across multiple markdown files. The changes primarily involve replacing arXiv paper links with Nature publication links, specifically updating the reference URLs for scientific papers related to the DPA models. These modifications improve the documentation's reference accuracy and readability without altering the underlying content or functionality of the documents.

Changes

| File | Change Summary |
| --- | --- |
| doc/model/dpa2.md | Updated paper reference from plain URL to formatted markdown link for the DPA-2 paper |
| doc/model/train-se-atten.md | Replaced arXiv paper link with Nature publication link for the DPA-1 model reference |
| doc/train/finetuning.md | Updated multiple references to the DPA-1 paper, changing arXiv links to Nature publication links |

Suggested labels

Docs

Suggested reviewers

  • njzjz
  • wanghan-iapcm


📥 Commits

Reviewing files that changed from the base of the PR and between e5eac4a and 7325438.

📒 Files selected for processing (3)
  • doc/model/dpa2.md (1 hunks)
  • doc/model/train-se-atten.md (1 hunks)
  • doc/train/finetuning.md (2 hunks)
✅ Files skipped from review due to trivial changes (3)
  • doc/model/train-se-atten.md
  • doc/model/dpa2.md
  • doc/train/finetuning.md

@QuantumMisaka QuantumMisaka deleted the dpa-docs branch December 26, 2024 00:49