We are used to getting good performance on the MNIST dataset (often reaching >80% accuracy) independently of the scenario configuration, which allows for a good comparison of the contributivity methods implemented.
For the CIFAR-10 dataset, results are more variable, and we need to find a few sets of configurations that reach acceptable accuracy so that contributivity methods can be compared.
Example of a config leading to poor performance (early stopping after 5 epochs, max accuracy 34%):
```yaml
dataset_name:
  - 'cifar10'
partners_count:
  - 3
amounts_per_partner:
  - [0.3, 0.3, 0.4]
samples_split_option:
  - ['advanced', [[7, 'shared'], [6, 'shared'], [2, 'specific']]]
multi_partner_learning_approach:
  - 'seqavg'
aggregation_weighting:
  - 'data_volume'
epoch_count:
  - 38
minibatch_count:
  - 20
gradient_updates_per_pass_count:
  - 8
```
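As an aside on how such a config is usually read: each key lists candidate values, and scenarios are generated from the cross-product of those lists. A minimal sketch of that expansion (the `expand_grid` helper is hypothetical, not part of the repo):

```python
from itertools import product

def expand_grid(config):
    """Expand a {key: [candidate values]} config into a list of
    one-value-per-key scenario dicts (cross-product of all lists)."""
    keys = list(config)
    return [dict(zip(keys, combo))
            for combo in product(*(config[k] for k in keys))]

# Toy example: one key with two candidate values yields two scenarios.
config = {
    "dataset_name": ["cifar10"],
    "partners_count": [3],
    "amounts_per_partner": [[0.3, 0.3, 0.4]],
    "epoch_count": [38, 20],  # two candidates -> two scenarios
}
scenarios = expand_grid(config)
print(len(scenarios))                   # -> 2
print(scenarios[0]["epoch_count"])      # -> 38
```

With every key holding a single value, as in the config above, the grid collapses to exactly one scenario.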
Another possibility would be to relax the early stopping conditions, e.g. by increasing PATIENCE?
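For context, increasing patience means training only stops after the monitored metric has failed to improve for that many consecutive epochs. A minimal sketch of the patience logic, assuming validation accuracy is the monitored metric (illustrative only, not the repo's actual implementation):

```python
def epochs_run(val_accuracies, patience):
    """Return how many epochs actually run before early stopping
    triggers: stop once `patience` consecutive epochs pass without
    a new best validation accuracy."""
    best, wait, epochs = float("-inf"), 0, 0
    for acc in val_accuracies:
        epochs += 1
        if acc > best:
            best, wait = acc, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                break  # no improvement for `patience` epochs: stop
    return epochs

history = [0.20, 0.30, 0.31, 0.30, 0.29, 0.32, 0.33]
print(epochs_run(history, patience=2))  # -> 5 (stops before reaching 0.32)
print(epochs_run(history, patience=3))  # -> 7 (survives the dip, runs all epochs)
```

As the toy history shows, a small patience can cut off a run during a temporary plateau, which may explain the 5-epoch stop at 34% accuracy observed with CIFAR-10.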