
Time measurement of different costs is incorrect when running on Tesla A100 #16

Open
Pegessi opened this issue Sep 14, 2023 · 2 comments
Pegessi commented Sep 14, 2023

I executed dashboard.sh to test model costs on a Tesla A100, but I find that most of the time is unprofiled.
[image: profiler time breakdown with a large unprofiled portion]
Is this result correct? I used a profiler to check the runtime of the different kernels, and the CPU-time percentage of the Transformer backward pass is only about 8%, with 89% for aten operations. These percentages seem mismatched with the figure above.

Pegessi commented Sep 14, 2023

I find that most of the timing code in the C++ rematerialization logic doesn't use a sync operation, and your paper mentions a sync mode for time measurement in PyTorch, so maybe my code ran without sync mode? The unprofiled portion seems too large for several models.
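The pitfall being described, that timing an asynchronous kernel launch without a synchronize measures only the launch overhead, not the compute, can be illustrated with a pure-Python sketch. This is not the repo's actual timing code; a thread stands in for a CUDA stream, and `done.wait()` plays the role of `torch.cuda.synchronize()`:

```python
import threading
import time

def launch_kernel(done):
    """Simulate an asynchronous kernel launch: the call returns
    immediately while the work runs on another 'stream' (a thread)."""
    def work():
        time.sleep(0.05)  # stand-in for 50 ms of GPU compute
        done.set()
    t = threading.Thread(target=work)
    t.start()
    return t

done = threading.Event()

start = time.perf_counter()
launch_kernel(done)
launch_time = time.perf_counter() - start  # measures only launch overhead

done.wait()  # the "synchronize" step: block until the work finishes
synced_time = time.perf_counter() - start  # measures the actual runtime

print(f"launch-only: {launch_time*1e3:.3f} ms, synchronized: {synced_time*1e3:.1f} ms")
```

Without the wait, the measured time reflects only the cheap launch call, which is why unsynchronized measurements can make most of the real work appear "unprofiled".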

MarisaKirisame (Collaborator) commented
Hi @Pegessi, you can indeed make CUDA sync, but we found that it takes more time without giving better results. Strangely, the code works even when nothing is synced: as you can see, we get 5x memory savings on the Transformer at a 2x time cost.

My understanding is that the cache policy doesn't need to be very accurate, and the kernel launch time is somewhat suggestive of the actual runtime.
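A minimal sketch of this point: an eviction policy that ranks tensors by a score of the form cost / (memory × staleness), as in the DTR heuristic, tends to pick similar eviction candidates even when the measured costs carry substantial noise (as unsynchronized, launch-time-only measurements would). The noise model and numbers below are purely illustrative:

```python
import random

def dtr_score(cost, memory, staleness):
    # DTR-style eviction score: lower means a better eviction candidate
    # (cheap to recompute, large in memory, not accessed recently).
    return cost / (memory * staleness)

random.seed(0)
tensors = [
    {"cost": random.uniform(1.0, 10.0),      # recompute cost (e.g. ms)
     "memory": random.uniform(1.0, 100.0),   # size (e.g. MB)
     "staleness": random.uniform(1.0, 50.0)} # time since last access
    for _ in range(100)
]

def ranking(cost_noise):
    # Rank tensors by score, optionally perturbing each cost to mimic
    # the error of unsynchronized (launch-time-only) measurements.
    return sorted(
        range(len(tensors)),
        key=lambda i: dtr_score(tensors[i]["cost"] * cost_noise(),
                                tensors[i]["memory"],
                                tensors[i]["staleness"]),
    )

exact = ranking(lambda: 1.0)
noisy = ranking(lambda: random.uniform(0.7, 1.3))  # +/-30% timing error

# how many of the ten best eviction candidates survive the noise
overlap = len(set(exact[:10]) & set(noisy[:10]))
print(f"top-10 eviction candidates shared: {overlap}/10")
```

Because the denominator (memory × staleness) spans several orders of magnitude while the cost noise is bounded, the ordering near the extremes is largely stable, which matches the observation that the policy tolerates imprecise cost estimates.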
