What would you like to be added/modified:
This issue aims to enable federated incremental learning with sparse labeled samples in KubeEdge-Ianvs, combining the advantages of existing federated semi-supervised learning and federated incremental learning methods. Work includes, but is not limited to:
- Implement a benchmark for federated semi-supervised incremental learning in KubeEdge-Ianvs, using the CIFAR-100 and ILSVRC2012 datasets and metrics such as accuracy and forgetting rate.
- Propose a federated semi-supervised incremental learning method and report its benchmark results.
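As a rough illustration of the forgetting-rate metric mentioned above, here is a minimal sketch using one common definition from the class-incremental learning literature: for each previously seen task, the drop from its best historical accuracy to its accuracy after the final training stage. The function name and data layout are illustrative assumptions, not part of the Ianvs API.

```python
def average_forgetting(acc_history):
    """Average forgetting over old tasks.

    acc_history[t][k] = accuracy on task k measured after training task t
    (so row t has t+1 entries).
    """
    final = acc_history[-1]
    num_old = len(final) - 1  # exclude the task learned last
    if num_old <= 0:
        return 0.0
    drops = []
    for k in range(num_old):
        # Best accuracy task k ever reached across training stages.
        best = max(acc_history[t][k] for t in range(k, len(acc_history)))
        drops.append(best - final[k])
    return sum(drops) / num_old

# Example: accuracy on task 0 fell from 0.9 to 0.6 after learning task 1.
history = [[0.9], [0.6, 0.8]]
print(round(average_forgetting(history), 4))  # 0.3
```

A benchmark would typically report this value alongside the final average accuracy across all seen classes.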
Why is this needed:
In edge environments, data continuously arrives at edge devices over time, and the number of categories it contains grows steadily. Because labeling is costly, only a small fraction of this data is labeled. To leverage the data for model optimization, edge devices can train a model collaboratively through federated learning. However, traditional federated learning assumes supervised learning on static data, so it cannot effectively train on dynamically changing, sparsely labeled datasets.
This issue aims to fully utilize the streaming, sparsely labeled data on different edge devices by using federated learning to train models in a distributed manner. The approach should mitigate catastrophic forgetting in class-incremental learning scenarios, thereby enhancing the model's generalization ability.
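The aggregation step implied by the paragraph above can be sketched with classic federated averaging: each edge device trains locally, then the server averages the client parameters weighted by each client's sample count. This is a pure-Python stand-in under assumed names; a real Ianvs algorithm would aggregate actual model parameters rather than plain lists.

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors.

    client_weights: list of equal-length parameter lists, one per client.
    client_sizes:   number of (labeled) samples each client trained on.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients holding 10 and 30 labeled samples respectively: the larger
# client pulls the global parameters toward its local solution.
global_w = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
print(global_w)  # [2.5, 3.5]
```

A semi-supervised incremental variant would additionally decide how unlabeled samples contribute to `client_sizes` (for example, counting only pseudo-labeled data above a confidence threshold).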
Recommended Skills:
Deep learning, Python, KubeEdge-Ianvs
If anyone has questions regarding this issue, please feel free to leave a message here. We would also appreciate it if new members could introduce themselves to the community.
Useful links:
- Introduction to Ianvs
- Quick Start
- How to test algorithms with Ianvs
- [NeurIPS'22] SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
- [CVPR'22] Federated Class-Incremental Learning