Speed comparison with other frameworks #1130
-
Hello! I am moving from the mammoth repository (https://github.com/aimagelab/mammoth) to Avalanche. I have noticed that mammoth is generally faster than Avalanche under the same experimental settings, CPU, and GPU. To understand whether this comes from my use of Avalanche or from Avalanche itself, I was wondering whether a timing comparison has been made with other existing frameworks. Thank you!
Replies: 6 comments 2 replies
-
Dear @NareshGuru77, thanks for considering Avalanche for your work! From my understanding, Mammoth offers fewer functionalities than Avalanche and hence has a less complex codebase, so it makes sense that it may be faster for some vertical experiments. However, I'm not aware of any benchmarking analysis in terms of efficiency. If you can share your code or provide more details about the experiment you are running, we may gather more insight and help you optimize your code! Moving this to the discussion tab since it is not an issue.
-
Hi @vlomonaco, thank you for the quick response. Avalanche is definitely more extensive and offers more functionality. However, it would be nice if Avalanche could also match the experimentation time. For instance, I ran the Naive strategy in both Avalanche and mammoth on the CIFAR-10 dataset using the same GPU (2080Ti). The experimental settings are identical (ResNet-18, input size 32, batch size 32, 5 epochs, SGD optimizer, 5 tasks with 2 classes each). In this experiment, mammoth takes roughly 3 minutes while Avalanche takes roughly 7 minutes. I am not entirely sure whether this is a bug in the way I am using Avalanche, but from my understanding this is a simple strategy and I haven't made any major changes. I will try to add a self-contained example that reproduces this soon. Thank you!
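For concreteness, the class-incremental setup described above (5 tasks of 2 classes each) can be sketched in plain PyTorch. The toy tensors and the `class_incremental_split` helper below are hypothetical stand-ins for illustration, not the actual experiment code:

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Toy stand-in for CIFAR-10: 100 samples, 10 classes (hypothetical data).
images = torch.randn(100, 3, 32, 32)
labels = torch.arange(100) % 10
dataset = TensorDataset(images, labels)

def class_incremental_split(dataset, labels, n_tasks=5, classes_per_task=2):
    """Split a labeled dataset into tasks over disjoint groups of classes."""
    tasks = []
    for t in range(n_tasks):
        task_classes = set(range(t * classes_per_task, (t + 1) * classes_per_task))
        idx = [i for i, y in enumerate(labels.tolist()) if y in task_classes]
        tasks.append(Subset(dataset, idx))
    return tasks

# 5 tasks, each containing only samples from 2 of the 10 classes.
tasks = class_incremental_split(dataset, labels)
```

Avalanche's `SplitCIFAR10` classic benchmark produces this kind of stream out of the box; the sketch only shows what the split means.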
-
Hey @NareshGuru77, can you please try running the Avalanche training code with a higher number of dataloader workers, and do the same for evaluation if you are evaluating after each experience? It would be helpful to see how the runtime changes as the number of workers increases. Thank you.
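(The exact snippet from this reply wasn't preserved.) As far as I can tell, Avalanche forwards a `num_workers` keyword from `strategy.train(...)` and `strategy.eval(...)` to its dataloaders; a minimal plain-PyTorch sketch of what that knob controls, using hypothetical toy data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for CIFAR-10 (hypothetical shapes, 256 samples).
data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))

# num_workers > 0 loads batches in background processes, which can hide
# data-loading latency behind GPU compute instead of stalling the training loop.
loader = DataLoader(data, batch_size=32, shuffle=True, num_workers=2)

n_batches = 0
for x, y in loader:
    n_batches += 1  # a training step would go here
```

With 256 samples and batch size 32 this yields 8 batches per epoch; the only change when sweeping workers is how fast they are delivered, not what they contain.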
-
Hey @HamedHemati, I ran with different numbers of workers. Please find the table below:
These numbers are rough values and they differ slightly from run to run. However, the speed difference is there irrespective of the number of workers. Thank you.
-
Thank you @HamedHemati. I did a more detailed comparison and found that the cause is the following settings, which I use for reproducibility in Avalanche:

```python
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```

Mammoth does not use these, and it turns out they slow down computation. Therefore, Avalanche does not have any speed issue. This discussion can be closed.
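For anyone landing here later: disabling cuDNN entirely (`cudnn.enabled = False`) is usually the dominant cost, since PyTorch then falls back to much slower convolution kernels. A sketch of a cheaper reproducibility setup that keeps cuDNN on (the `set_reproducible` helper name is mine, not from either library):

```python
import random

import torch

def set_reproducible(seed: int = 0) -> None:
    # Seeding the RNGs is essentially free and does not slow training down.
    random.seed(seed)
    torch.manual_seed(seed)
    # Keep cuDNN enabled; only constrain *which* kernels it picks.
    torch.backends.cudnn.enabled = True
    # benchmark=True would auto-tune the fastest kernel per input shape
    # (fast but nondeterministic); False keeps kernel selection stable.
    torch.backends.cudnn.benchmark = False
    # Restrict cuDNN to deterministic kernel implementations.
    torch.backends.cudnn.deterministic = True

set_reproducible(0)
```

This trades some speed for run-to-run stability, but far less than turning cuDNN off altogether; the per-run variance mentioned earlier in the thread is exactly what the deterministic flags suppress.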