-
Since the gap is still relevant, why don't we try to reproduce the result on Imagenette?
-
@LukeWood @ianstenbit
-
Sorry for the delay on this thread. After tuning our training script a bit, we've now been able to match the expected performance of ResNet50 using KerasCV, and our latest weights now beat the Keras Applications weights with a validation accuracy of 75.5%. We'll be continuing to train KerasCV models using this ImageNet training script for now, but contributions are still welcome.
-
Hello community! @ianstenbit and I have been working on creating high-quality training scripts for KerasCV models. You can see the current state of this at https://github.com/keras-team/keras-cv/blob/master/examples/training/classification/imagenet/basic_training.py
Currently, our training script achieves 63% accuracy using a ResNet50. This is clearly suboptimal, as the keras.applications model achieves 74.9% accuracy: https://keras.io/api/applications/
We'd like to open the training script up to PRs that generally improve the results for basic_training.py, as well as architecture-specific training scripts. More information in:
https://github.com/keras-team/keras-cv/blob/master/.github/CONTRIBUTING.md#contributing-training-scripts
@sayakpaul @quantumalaviya @atuleu @AdityaKane2001 @innat @bhack may be interested in this!
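For anyone who wants to experiment locally before touching the full ImageNet pipeline, here is a minimal, hypothetical sketch of the kind of classification setup the training script builds. It is not the official basic_training.py: it constructs a randomly initialized ResNet50 from keras.applications, compiles it with SGD + momentum (a common choice for ImageNet training, assumed here, not confirmed by this thread), and sanity-checks one training step on synthetic data standing in for preprocessed ImageNet/Imagenette batches.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch only -- not the official KerasCV training script.
# Build a randomly initialized ResNet50; classes=10 matches Imagenette.
model = tf.keras.applications.ResNet50(weights=None, classes=10)

# SGD with momentum is a common baseline optimizer for ImageNet-style
# training; the actual script's hyperparameters may differ.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Synthetic batch standing in for real preprocessed images and labels.
x = np.random.rand(4, 224, 224, 3).astype("float32")
y = np.random.randint(0, 10, size=(4,))

# One training step as a smoke test that the graph runs end to end.
model.train_on_batch(x, y)

preds = model.predict(x, verbose=0)
print(preds.shape)  # one softmax vector of 10 class scores per image
```

To actually reproduce the 63% vs. 74.9% gap discussed above, you would swap the synthetic batch for a real tf.data pipeline over ImageNet (or Imagenette for a quicker iteration loop) and train for a full schedule.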