
Releases: keras-team/keras-hub

v0.18.1

06 Dec 00:53
2fd531c

Summary

  • Minor bug fix point release.
    • Removed einops code from the Flux model.
    • Fixed specifying dtype during task from_preset().
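With the fix, a dtype passed to a task's from_preset() is applied as expected. A minimal sketch of that call (the preset name and precision are illustrative):

```python
import keras_hub

# Load a task model at reduced precision; the dtype argument
# is respected during task from_preset() after this fix.
classifier = keras_hub.models.TextClassifier.from_preset(
    "bert_base_en",  # illustrative preset name
    num_classes=2,
    dtype="bfloat16",
)
```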

What's Changed

Full Changelog: v0.18.0...v0.18.1

v0.18.0

05 Dec 06:03
1f11e3f

Summary

  • New models.
    • PaliGemma 2: Better-performing PaliGemma release based on Gemma 2.
    • SegFormer: Introduced the SegFormer architecture for semantic segmentation.
    • CLIP: Added the CLIP model.
    • EfficientNet: Added EfficientNet presets, including the Edge and lite0 variants.
    • RetinaNet: Added an object detection task model.
    • Stable Diffusion: Added SD3.5 large and large turbo presets and flash attention support.
  • Hugging Face integration.
    • All Keras team presets are now on both the Kaggle and Hugging Face hubs.

Breaking Changes

  • Updated initialization parameters for SD3, replacing height and width with image_shape.
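Code constructed with the old height/width arguments needs a one-line update. A hedged sketch of the new call, assuming the stable_diffusion_3_medium preset and a 512x512 output:

```python
import keras_hub

# v0.17: height=512, width=512 were passed separately.
# v0.18: a single image_shape tuple replaces them.
text_to_image = keras_hub.models.StableDiffusion3TextToImage.from_preset(
    "stable_diffusion_3_medium",
    image_shape=(512, 512, 3),
)
```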

What's Changed

Full Changelog: v0.17.0...v0.18.0

v0.17.0

22 Oct 02:08
e76807b

Summary

  • 📢 KerasNLP and KerasCV are now becoming KerasHub 📢. KerasCV and KerasNLP have been consolidated into the KerasHub package.
  • Models now available in KerasHub: albert, bart, bert, bloom, clip, csp_darknet, deberta_v3, deeplab_v3, densenet, distil_bert, efficientnet, electra, f_net, falcon, gemma, gpt2, gpt_neo_x, llama, llama3, mistral, mit, mobilenet, opt, pali_gemma, phi3, resnet, retinanet, roberta, sam, stable_diffusion_3, t5, vae, vgg, vit_det, whisper, xlm_roberta and xlnet.
  • A new preprocessor flow has been added for vision and audio models.
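The new preprocessor flow mirrors the existing text flow; a hedged sketch of building a vision preprocessor from a preset (the preset name is illustrative):

```python
import keras_hub

# Build an image preprocessor from a preset, the same way
# text preprocessors have been built from presets.
preprocessor = keras_hub.models.ImageClassifierPreprocessor.from_preset(
    "resnet_50_imagenet",  # illustrative preset name
)
```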

What's Changed


v0.16.0.dev0

22 Oct 00:30
Pre-release

Summary

  • 📢 KerasNLP and KerasCV are now becoming KerasHub 📢. KerasCV and KerasNLP have been consolidated into the KerasHub package.
  • Models now available in KerasHub: albert, bart, bert, bloom, clip, csp_darknet, deberta_v3, deeplab_v3, densenet, distil_bert, efficientnet, electra, f_net, falcon, gemma, gpt2, gpt_neo_x, llama, llama3, mistral, mit, mobilenet, opt, pali_gemma, phi3, resnet, retinanet, roberta, sam, stable_diffusion_3, t5, vae, vgg, vit_det, whisper, xlm_roberta and xlnet.
  • A new preprocessor flow has been added for vision and audio models.

What's Changed


v0.15.1

19 Sep 16:54
e307389

Summary

Bug fix patch release.

  • Always run tf preprocessing on CPU.
  • Fix running preprocessing outside the main python thread.
  • Fix loading classifiers with the "old name" of XXClassifier as XXTextClassifier.
  • Restore support for passing bytestrings to tokenizers and other preprocessing layers.

What's Changed

Full Changelog: v0.15.0...v0.15.1

v0.15.0

13 Sep 19:26
99df05b

Summary

📢 KerasNLP is becoming KerasHub 📢, read more about it here.

This release contains a number of feature improvements:

  • Added int8 quantization support.
    • Use the quantize() method to quantize any model.
    • Llama 2 and Llama 3 pre-quantized presets are available.
  • PaliGemmaCausalLM will automatically resize input images during preprocessing.
  • Added more converters for huggingface/transformers checkpoints.
    • Gemma 2, PaliGemma, GPT2, Bert, Albert, DistilBert, Bart.
  • Class detection for huggingface/transformers checkpoints.
    • Call from_preset() on a base class, and we will find the correct subclass to create.
  • Added Vicuna presets.
  • Alias Classifier as TextClassifier, BertClassifier as BertTextClassifier.
  • Added tokenizer.special_tokens and tokenizer.special_token_ids as convenient properties to view all special tokens on a pretrained tokenizer.

# Quantize an unquantized model.
lm = keras_nlp.models.CausalLM.from_preset(
    "gemma2_instruct_2b_en",
    dtype="bfloat16",
)
lm.quantize("int8")

# Load a pre-quantized model.
lm = keras_nlp.models.CausalLM.from_preset(
    "llama3_instruct_8b_en_int8",
    dtype="bfloat16",
)

# Convert a BERT model in the huggingface/transformers format.
classifier = keras_nlp.models.TextClassifier.from_preset(
    "hf://google-bert/bert-base-uncased",
    num_classes=2,
)

# View all special tokens.
print(classifier.preprocessor.tokenizer.special_tokens)
print(classifier.preprocessor.tokenizer.special_token_ids)

Breaking changes

  • On all backends, string and ragged output will be returned as Python strings or Python lists, respectively.
    • This includes preprocessing methods like tokenize() and detokenize().
    • This may break code that depended on tf.Tensor output on the TensorFlow backend, but it leads to consistent output across all backends, which we believe is an overall improvement.
    • Preprocessing layers can still always be included in a tf.data preprocessing pipeline, on any backend.
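The behavior change can be seen directly on a tokenizer; a hedged sketch, assuming the bert_base_en preset:

```python
import keras_nlp

tokenizer = keras_nlp.models.BertTokenizer.from_preset("bert_base_en")

# tokenize() now returns a plain Python list of ints on every
# backend, rather than a tf.Tensor on the TensorFlow backend.
token_ids = tokenizer.tokenize("The quick brown fox.")

# detokenize() likewise returns a plain Python string.
text = tokenizer.detokenize(token_ids)
```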

What's Changed

New Contributors

Full Changelog: v0.14.4...v0.15.0

v0.14.4

06 Aug 18:01
4601d88

Summary

  • Fix issues with Gemma 2 sliding window.
  • Fix TensorFlow backend Gemma 2 generation.

What's Changed

Full Changelog: v0.14.3...v0.14.4

v0.14.3

02 Aug 18:43
4d1659e

Summary

  • Short names for ShieldGemma checkpoints.
keras_nlp.models.GemmaCausalLM.from_preset("shieldgemma_2b_en")

What's Changed

Full Changelog: v0.14.2...v0.14.3

v0.14.2

31 Jul 04:03
016f79c

Summary

  • Add Gemma 2 2b.
  • Fixes for logit softcapping.

What's Changed

Full Changelog: v0.14.1...v0.14.2

v0.14.1

26 Jul 21:44
7e56dbd

Summary

  • Update Gemma 2 9b to fix minor config error.

What's Changed

Full Changelog: v0.14.0...v0.14.1