I ran dashboard.sh to measure model cost on a Tesla A100, but I find that most of the time is unprofiled.
Is this result correct? I used the profiler to check the runtime of the different kernels: the CPU-time percentages for the Transformer's backward pass sum to only about 8%, with 89% going to aten operations. These percentages seem inconsistent with the figure above.
I find that most of the time statistics in the C++ rematerialization code don't use a sync operation, and your paper mentions sync mode for time measurement in PyTorch, so maybe my code is running without sync mode? The unprofiled part seems too heavy for several models.
Hi @Pegessi , you can indeed make CUDA sync, but we found that it takes more time without giving a better result. Weirdly, the code works even when nothing is synced: as you can see, we get a 5x memory saving on the Transformer at a 2x time cost. It is strange.
My understanding is that the cache policy doesn't need to be very accurate, and the kernel launch time is somewhat indicative of the actual runtime.
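To illustrate the distinction being discussed: because CUDA kernels launch asynchronously, timing an op on the CPU without a sync measures only the launch overhead, not the kernel's GPU runtime. A minimal sketch of the two measurement modes (the `timed` helper and its `sync` flag are hypothetical names for illustration; on a CPU-only machine the two modes coincide since CPU ops are synchronous):

```python
import time
import torch

def timed(fn, sync=False):
    # Hypothetical helper: time a callable, optionally synchronizing
    # CUDA before and after so the measurement covers the full GPU
    # runtime rather than just the CPU-side kernel launch.
    if sync and torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn()
    if sync and torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 256, device=device)

# Unsynced: roughly the kernel-launch cost (what the unsynced C++ path sees).
_, launch_time = timed(lambda: x @ x, sync=False)
# Synced: launch plus actual kernel execution.
_, full_time = timed(lambda: x @ x, sync=True)
print(f"unsynced: {launch_time:.6f}s  synced: {full_time:.6f}s")
```

On a GPU, `launch_time` is typically much smaller than `full_time`, which is why unsynced statistics can still rank ops by cost (the launch time loosely tracks actual runtime) while leaving most wall-clock time unprofiled.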