Bump version to V0.2.9 with new mixup augmentations and various optimizers.
- Support new mixup augmentation methods, including AdAutoMix and SnapMix. Config files, models, and logs are provided and are being updated.
- Support more backbone architectures, including UniRepLKNet, TransNeXt, StarNet, etc., and fix some bugs.
- Support classical self-supervised method DINO with ViT-Base on ImageNet-1K.
- Support more PyTorch optimizer implementations, including Adam variants (e.g., AdaBelief, AdaFactor) and SGD variants (e.g., SGDP).
- Support evaluation tools for mixup augmentations, including robustness testing (corruption and adversarial attack robustness) and calibration evaluation.
- Provide more config files for self-supervised learning methods on small-scale datasets (CIFAR-100 and STL-10).
- Support Sharpness-Aware Minimization (SAM) optimizer variants for small-scale datasets.
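
The SAM item above refers to the standard two-step sharpness-aware update; below is a minimal PyTorch sketch of that scheme (a hypothetical wrapper class, not OpenMixup's actual optimizer API).

```python
import torch

class SAM(torch.optim.Optimizer):
    """Minimal Sharpness-Aware Minimization sketch (hypothetical wrapper,
    not OpenMixup's optimizer API).

    first_step() perturbs the weights toward the local worst case inside an
    L2 ball of radius rho; second_step() restores them and lets the wrapped
    base optimizer apply the sharpness-aware update.
    """

    def __init__(self, params, base_optimizer_cls=torch.optim.SGD, rho=0.05, **kwargs):
        super().__init__(params, dict(rho=rho, **kwargs))
        self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

    @torch.no_grad()
    def first_step(self):
        for group in self.param_groups:
            grads = [p.grad for p in group["params"] if p.grad is not None]
            grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
            scale = group["rho"] / (grad_norm + 1e-12)
            for p in group["params"]:
                if p.grad is None:
                    continue
                e_w = p.grad * scale
                p.add_(e_w)                       # climb to the worst-case weights
                self.state[p]["e_w"] = e_w

    @torch.no_grad()
    def second_step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    p.sub_(self.state[p]["e_w"])  # restore the original weights
        self.base_optimizer.step()                # sharpness-aware update


# Per-iteration usage (two forward/backward passes):
#   criterion(model(x), y).backward(); opt.first_step(); opt.zero_grad()
#   criterion(model(x), y).backward(); opt.second_step(); opt.zero_grad()
```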
Bump version to V0.2.8 with new features in MMPreTrain.
- Support more backbone architectures, including MobileNetV3, EfficientNetV2, HRNet, CSPNet, LeViT, MobileViT, DaViT, and MobileOne, etc.
- Support CIFAR-100 benchmarks of Metaformer architectures and Mixup variants with Transformers, detailed in cifar100/advanced and cifar100/mixups. Models and logs of various CIFAR-100 mixup benchmarks are being updated.
- Support regression tasks with relevant datasets, metrics, and configs. Datasets include AgeDB, IMDB-WIKI, and RCFMNIST.
- Support Switch EMA in image classification, contrastive learning (BYOL, MoCo variants), and regression tasks.
- Support optimizers implemented in timm, including AdaBelief, AdaFactor, Lion, etc.
- Update formats of awesome lists in Awesome Mixups and Awesome MIM and provide the latest methods (updated to 30/09/2023).
- Fix the `by_epoch` setting in `CustomSchedulerHook` and update `DecoupleMix` in `soft_mix_cross_entropy` to support label smoothing settings (a minimal sketch follows this list).
- Fix bugs of Vision Transformers in `cls_mixup_head` and `reg_head`.
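
To illustrate the label smoothing setting mentioned above, here is a minimal sketch of cross-entropy on mixed soft targets with label smoothing (a hypothetical helper, not the exact `soft_mix_cross_entropy` implementation).

```python
import torch
import torch.nn.functional as F

def soft_mixup_cross_entropy(logits, y_a, y_b, lam, smoothing=0.1):
    """Cross-entropy on a mixed soft target with label smoothing.

    Hypothetical helper: the target is lam * onehot(y_a) + (1 - lam) * onehot(y_b),
    smoothed toward the uniform distribution before -sum(target * log_softmax).
    """
    num_classes = logits.size(-1)
    target = lam * F.one_hot(y_a, num_classes).float() \
        + (1.0 - lam) * F.one_hot(y_b, num_classes).float()
    target = target * (1.0 - smoothing) + smoothing / num_classes
    log_prob = F.log_softmax(logits, dim=-1)
    return -(target * log_prob).sum(dim=-1).mean()
```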
Bump version to V0.2.7 with new features as #35. Update new features of OpenMixup v0.2.7 as issue #36.
- Refactor `openmixup.core` (instead of `openmixup.hooks`) and `openmixup.models.augments` (containing the mixup augmentation methods originally implemented in `openmixup.models.utils`). After this refactoring, the macro design of `OpenMixup` is similar to most MMLab projects.
- Support deployment of `ONNX` and `TorchScript` in `openmixup.core.export` and `tools/deployment`. We refactored the abstract class `BaseModel` (implemented in `openmixup/models/classifiers/base_model.py`) to support `forward_inference` (for custom inference and visualization), and also refactored `openmixup.models.heads` and `openmixup.models.losses` to support `forward_inference`. You can deploy the classification models in `OpenMixup` according to the deployment tutorials (a minimal export sketch follows this list).
- Support testing API methods in `openmixup/apis/test.py` for evaluation and deployment of classification models.
- Refactor `openmixup.core.optimizers` to separate optimizers from builders and support the latest Adan optimizer.
- Refactor `mixup_classification.py` to support label mixup methods, add `return_mask` for mixup methods in `augments`, and add `return_attn` in the ViT backbone.
- Refactor `ValidateHook` to support new features of `EvalHook` in mmcv, e.g., `save_best="auto"` during training.
- Refactor `ClsHead` with `BaseClsHead` to support MLP classification head variants in modern network architectures.
- Support detailed usage instructions in the README of config files for image classification methods in `configs/classification`, e.g., mixups on ImageNet. READMEs of other methods in `configs/selfsup` and `configs/semisup` will also be updated.
- Refine the organization of README files according to README-Template.
- Support the new mixup augmentation method (AlignMix) and provide the relevant config files in various datasets.
- Refine the setup for the local installation and PyPI release in `setup.py` and `setup.cfg`. View the PyPI project of OpenMixup.
- Support a new mixup method TransMix and provide config files in mixups/deit.
- Update config files. Provide full config files of mixup methods based on ViT-T/S/B on ImageNet and update RSB A3 config files for popular backbones.
- Update `target_generators` to support the latest MIM pre-training methods (fixed requirements).
- Update config files and scripts for SSL downstream task benchmarks (classification, detection, and segmentation).
- Update and fix bugs in visualization tools (vis_loss_landscape). Fix model converters tools.
- Support Semantic-Softmax loss and ImageNet-21K-P (Winter) pre-training.
- Support more backbone architectures, including BEiT, MetaFormer, ConvNeXtV2, VanillaNet, and CoC.
- Update documents of mixup benchmarks on ImageNet in Model_Zoo_sup.md. Update config files for supported mixup methods.
- Update formats (figures, introductions and content tables) of awesome lists in Awesome Mixups and Awesome MIM and provide the latest methods (updated to 18/03/2023).
- Update `api`, which describes the overall code structures, in `docs/en/api` for the Read the Docs page.
- Reorganize and update tutorials for SSL downstream task benchmarks (classification, detection, and segmentation).
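
As a rough illustration of the deployment support noted above, the following sketch exports a stand-in classifier to TorchScript and ONNX with plain PyTorch calls; the actual OpenMixup workflow goes through `openmixup.core.export` and `tools/deployment`.

```python
import torch
import torch.nn as nn

# Stand-in classifier; a real workflow would build a trained OpenMixup model
# from its config system and use the entry points in tools/deployment.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()
dummy = torch.randn(1, 3, 224, 224)

# TorchScript export via tracing
torch.jit.trace(model, dummy).save("classifier.pt")

# ONNX export with a dynamic batch dimension
torch.onnx.export(
    model, dummy, "classifier.onnx",
    input_names=["input"], output_names=["logits"], opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```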
Bump version to V0.2.6 with new features as #20. Update new features and documents of OpenMixup v0.2.6 as issue #24, and fix relevant issues #25, #26, #27, #31, and #33.
- Support new backbone architectures (EdgeNeXt, EfficientFormer, HorNet, MogaNet, MViT.V2, ShuffleNet.V1, DeiT-3), and provide the relevant network modules in `models/utils/layers`. Config files and README.md are updated.
- Support the new self-supervised method BEiT with ViT-Base on ImageNet-1K, and fix bugs of CAE, MaskFeat, and SimMIM in `Dataset`, `Model`, and `Head`. Note that we added the `HOG` feature implementation borrowed from the original repo for MaskFeat. Update pre-training and fine-tuning config files and documents for the relevant masked image modeling (MIM) methods (BEiT, MaskFeat, CAE, and A2MIM). Support more fine-tuning settings on ImageNet for MIM pre-training based on various backbones (e.g., ViTs, ResNets, ConvNeXts).
- Fix the updated arXiv.V2 version of VAN by adding architecture configurations.
- Support ArcFace loss for metric learning and the relevant `NormLinearClsHead` (a minimal sketch follows this list), and support SeeSaw loss for long-tail classification tasks.
- Update the issue template with more relevant links and emojis.
- Support the Grad-CAM visualization tool vis_cam.py for supported architectures.
- Update our `OpenMixup` tech report on arXiv, which provides more technical details and benchmark results.
- Update the self-supervised learning Model_Zoo_selfsup.md, and update documents of the new backbone and self-supervised methods.
- Update supervised learning Model_Zoo_sup.md as provided in AutoMix and support more mixup benchmark results.
- Update the template and add the latest paper lists of mixup and MIM methods in Awesome Mixups and Awesome MIM. We provide teaser figures of most papers as illustrations.
- Update documents of `tools`.
- Fix the error notification raised for `torch.fft` with PyTorch 1.6 or lower versions in backbones and heads.
- Fix `README.md` (new icons, fixing typos) and support pytest in `tests`.
- Fix the classification heads and update implementations and config files of AlexNet and InceptionV3.
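
For reference, the ArcFace loss noted above adds an angular margin to the target-class logit on normalized features; a minimal, hypothetical sketch (not the exact `NormLinearClsHead`) is shown below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    """Minimal ArcFace sketch: additive angular margin on normalized logits
    (hypothetical module, not the exact NormLinearClsHead)."""

    def __init__(self, in_features, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        # cosine similarity between L2-normalized features and class weights
        cosine = F.linear(F.normalize(feats), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin m only on the target-class angle
        target_logit = torch.cos(theta + self.m)
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.s * (one_hot * target_logit + (1.0 - one_hot) * cosine)
        return F.cross_entropy(logits, labels)
```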
Bump version to V0.2.5 with new features and updated documents as #10. Update features and fix bugs in V0.2.5 as #17. Update features and documents in V0.2.5 as #18 and #19.
- Support new attention mechanisms in backbone architectures (Anti-Oversmoothing, `FlowAttention` in FlowFormer, and `PoolAttention` in MViTv2).
- Update code integration testing in tests.
- Reorganize `README` files for various methods.
- Update Awesome Mixups and Awesome MIM.
- Update get_started.md and Tutorials for better usage of `OpenMixup`.
- Update mixup benchmarks in model_zoos, providing configs, weights, and more details.
- Update latest methods in Awesome Mixups and Awesome MIM.
- Update `README.md` and fix `auto_train_mixups.py` for various datasets.
- Fix visualization of the reconstruction results in `MAE`.
- Fix the normalization bug in config files and `plot_torch.py` as mentioned in #16.
- Fix the random seeds in `tools/train.py` as mentioned in #14.
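
A minimal seeding helper in the spirit of the random-seed fix above (illustrative only, not the exact code in `tools/train.py`):

```python
import random
import numpy as np
import torch

def set_random_seed(seed, deterministic=False):
    """Seed Python, NumPy, and PyTorch RNGs; optionally force deterministic cuDNN."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
```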
Update new features and fix bugs as #7.
- Support new backbone architectures (LITv2).
- Refactor code structures of weight initialization in various network modules (using `BaseModule` in `mmcv`).
- Refactor code structures of `openmixup.models.utils.layers` to support more network structures.
- Fix bugs that cause degenerate performance of pure Transformer backbones (DeiT and Swin) in `OpenMixup`. The main reason might be the old versions of the `auto_fp16` and `DistOptimizerHook` implementations, since `PyTorch>=1.6.0` has better support of fp16 training than `mmcv` (a minimal native fp16 sketch follows this list).
- Fix the bug of ViT fine-tuning for MIM methods (e.g., MAE, SimMIM). The original `MIMVisionTransformer` in `openmixup.models.mim_vit` froze all the backbone parameters during fine-tuning.
- Fix the weight initialization of Transformer-based architectures (e.g., ViT, Swin) to reproduce the train-from-scratch performance. Update weight initialization, parameter-wise weight decay, and fp16 settings in the relevant config files.
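
The fp16 point above relies on PyTorch's native mixed-precision path (available since 1.6). A minimal native AMP loop, assuming a CUDA device and a toy model (not the repo's `DistOptimizerHook`), looks like this:

```python
import torch
import torch.nn as nn

# Toy model and data; assumes a CUDA device is available.
model = nn.Linear(16, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

for _ in range(4):
    x = torch.randn(8, 16, device="cuda")
    y = torch.randint(0, 10, (8,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # run the forward pass in mixed precision
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```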
Support new features as #6.
- Support the online document of OpenMixup (built on Read the Docs).
- Provide README and update configs for self-supervised and supervised methods.
- Support new Masked Image Modeling (MIM) methods (A2MIM, CAE).
- Support new backbone networks (DenseNet, ResNeSt, PoolFormer, UniFormer).
- Support the new fine-tuning method (HCR).
- Support new mixup augmentation methods (SmoothMix, GridMix).
- Support more regression losses (Focal L1/L2 loss, Balanced L1 loss, Balanced MSE loss).
- Support more regression metrics (regression errors and correlations) and the regression dataset.
- Support more re-weighting classification losses (Gradient Harmonized loss, Varifocal Focal Loss) from MMDetection.
- Refactor code structures of `openmixup.models.utils` and support more network layers.
- Fix the bug of `DropPath` (using the stochastic depth rule) in `ResNet` for RSB A1/A2 training settings.
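
For context, `DropPath` with the stochastic depth rule randomly drops the residual branch per sample during training; a minimal timm-style sketch (not the exact OpenMixup module) is:

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Stochastic depth sketch: randomly drop the residual branch per sample
    (minimal timm-style version, not the exact OpenMixup module)."""

    def __init__(self, drop_prob=0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep_prob = 1.0 - self.drop_prob
        # one Bernoulli mask per sample, broadcast over the remaining dims
        shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        mask = x.new_empty(shape).bernoulli_(keep_prob)
        return x * mask / keep_prob  # rescale to keep the expectation unchanged
```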
Support new features and finish code refactoring as #5.
- Support more self-supervised methods (Barlow Twins and Masked Image Modeling methods).
- Support popular backbones (ConvMixer, MLPMixer, VAN) based on MMClassification.
- Support more regression losses (Charbonnier loss and Focal Frequency loss); a minimal Charbonnier sketch follows this list.
- Fix bugs in self-supervised classification benchmarks (configs and implementations of VisionTransformer).
- Update INSTALL.md. We suggest you install PyTorch 1.8 or higher and mmcv-full for better usage of this repo. PyTorch 1.8 has bugs in its AdamW optimizer (do not use PyTorch 1.8 to fine-tune ViT-based methods).
- Fix bugs in PreciseBNHook (update all BN stats) and RepeatSampler (set sync_random_seed).
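
As a quick reference for the regression losses above, here is a minimal Charbonnier loss sketch (a smooth, L1-like penalty; hypothetical helper, not the repo's implementation):

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss sketch: a smooth, L1-like penalty sqrt(diff^2 + eps^2)."""
    return torch.sqrt((pred - target) ** 2 + eps * eps).mean()
```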
Support new features and finish code refactoring as #4.
- Support masked image modeling (MIM) self-supervised methods (MAE, SimMIM, MaskFeat).
- Support visualization of reconstruction results in MIM methods.
- Support basic regression losses and metrics.
- Fix bugs in regression metrics, the MIM dataset, and benchmark configs. Notice that only `l1_loss` is supported by FP16 training; other regression losses (e.g., MSE and Smooth_L1 losses) will produce NaN when the target and prediction are not normalized in FP16 training (see the illustration after this list).
- We suggest you install PyTorch 1.8 or higher (required by some self-supervised methods) and `mmcv-full` for better usage of this repo. Do not use PyTorch 1.8 to fine-tune ViT-based methods; you can still use PyTorch 1.6 for supervised classification methods.
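
A tiny illustration of why non-normalized regression targets break FP16 training: squaring a large residual overflows float16 (max ~65504) and becomes inf, which then propagates NaN through the backward pass, whereas the L1 residual stays finite.

```python
import torch

pred = torch.tensor([300.0], dtype=torch.float16)
target = torch.tensor([0.0], dtype=torch.float16)
diff = pred - target
print(diff * diff)    # tensor([inf], dtype=torch.float16) -- squared loss overflows
print(diff.abs())     # tensor([300.], dtype=torch.float16) -- l1 stays finite
```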
Support new features and finish code refactoring as #3.
- Support various popular backbones (ConvNets and ViTs), various image datasets, popular mixup methods, and benchmarks for supervised learning. Config files are available.
- Support popular self-supervised methods (e.g., BYOL, MoCo.V3, MAE) on both large-scale and small-scale datasets, and self-supervised benchmarks (merged from MMSelfSup). Config files are available.
- Support analyzing tools for self-supervised learning (kNN/SVM/linear metrics and t-SNE/UMAP visualization).
- Convenient usage of configs: fast config generation by `auto_train.py` and config inheritance (MMCV).
- Support mixed-precision training (NVIDIA Apex or MMCV Apex) for all methods.
- Model Zoos and lists of Awesome Mixups have been released.
- Code refactoring is done following MMSelfSup and MMClassification.
- Refactor code structures for vision transformers and self-supervised methods (e.g., MoCo.V3 and MAE).
- Provide online analysis of self-supervised methods (kNN metric and t-SNE/UMAP visualization); a weighted-kNN sketch appears after these notes.
- More results are provided in Model Zoos.
- Fix bugs in config reuse, ViTs, visualization tools, etc. This requires rebuilding OpenMixup (install mmcv-full).
- Refactor code structures according to MMSelfSup to fit higher versions of mmcv and PyTorch.
- Support self-supervised methods and optimize config structures.
- Support various popular backbones (ConvNets and ViTs) and update config files.
- Support various handcrafted and optimization-based methods (e.g., PuzzleMix, AutoMix, SAMix, DecoupleMix, etc.). Config file generation for mixup methods is supported.
- Provide supervised image classification benchmarks in model_zoo and results (being updated).
- Fix bugs of new mixup methods (e.g., gco for PuzzleMix, etc.).
- Support various popular backbones (ConvNets and ViTs).
- Support mixed precision training (NVIDIA Apex or MMCV Apex).
- Support supervised, self- & semi-supervised learning methods and benchmarks.
- Support fast config generation from a basic config file by `auto_train.py`.
- Fix bugs of code refactoring (backbones, fp16 training, etc.).
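
A minimal weighted-kNN evaluation on frozen features, in the spirit of the kNN metric mentioned above (hypothetical helper that assumes pre-extracted features, not the exact OpenMixup analysis tool):

```python
import torch
import torch.nn.functional as F

def knn_classify(train_feats, train_labels, test_feats, k=20, T=0.07, num_classes=10):
    """Weighted kNN on frozen features: cosine similarity, temperature-weighted votes."""
    train_feats = F.normalize(train_feats, dim=1)
    test_feats = F.normalize(test_feats, dim=1)
    sim = test_feats @ train_feats.t()            # (N_test, N_train) cosine similarity
    topk_sim, topk_idx = sim.topk(k, dim=1)
    topk_labels = train_labels[topk_idx]          # (N_test, k) neighbor labels
    weights = (topk_sim / T).exp()
    votes = torch.zeros(test_feats.size(0), num_classes)
    votes.scatter_add_(1, topk_labels, weights)   # accumulate weighted class votes
    return votes.argmax(dim=1)
```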
This repo is originally built on OpenSelfSup (the old version of MMSelfSup) and borrows some implementations from MMClassification.
- Mixed Precision Training (based on NVIDIA Apex for PyTorch 1.6).
- The improved GaussianBlur implementation doubles the training speed of MoCo V2, SimCLR, and BYOL.
- More benchmarking results, including benchmarks on Places, VOC, COCO, and linear/semi-supervised benchmarks.
- Fix bugs in MoCo V2 and BYOL so that the reported results are reproducible.
- Provide benchmarking results and model download links.
- Support updating the network every several iterations (gradient accumulation; see the sketch after this list).
- Support LARS and LAMB optimizer with Nesterov (LAMB from MMClassification).
- Support excluding specific parameter-wise settings from optimizer updates.
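
A minimal illustration of gradient accumulation (a toy loop with an assumed linear model and random data, not the repo's hook): the optimizer steps once every `accum_steps` iterations so small per-GPU batches mimic a larger effective batch.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
accum_steps = 4

optimizer.zero_grad()
for step in range(16):
    x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
    loss = criterion(model(x), y) / accum_steps  # scale so accumulated grads average
    loss.backward()
    if (step + 1) % accum_steps == 0:            # update once every accum_steps iterations
        optimizer.step()
        optimizer.zero_grad()
```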