[models] update docs and version
pkelaita committed Sep 30, 2024
1 parent cee2cef commit c4281af
Showing 3 changed files with 55 additions and 29 deletions.
26 changes: 25 additions & 1 deletion CHANGELOG.md
@@ -1,9 +1,33 @@
# Changelog

-_Current version: 0.0.33_
+_Current version: 0.0.34_

[PyPi link](https://pypi.org/project/l2m2/)

### 0.0.34 - September 30, 2024

> [!CAUTION]
> This release has breaking changes! Please read the changelog carefully.

#### Added

- New supported models `gemma-2-9b`, `llama-3.2-1b`, and `llama-3.2-3b` via Groq.

#### Changed

- In order to be more consistent with l2m2's naming scheme, the following model IDs have been updated:
  - `llama3-8b` → `llama-3-8b`
  - `llama3-70b` → `llama-3-70b`
  - `llama3.1-8b` → `llama-3.1-8b`
  - `llama3.1-70b` → `llama-3.1-70b`
  - `llama3.1-405b` → `llama-3.1-405b`
- **This is a breaking change!!!** Calls using the old `model_id`s (`llama3-8b`, etc.) will fail.
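
For code migrating across this release, the renames above can be applied mechanically. A hypothetical helper (not part of l2m2 itself), built from the mapping listed above:

```python
# Hypothetical migration helper -- not part of l2m2 itself.
# Maps pre-0.0.34 model IDs to their renamed 0.0.34+ equivalents.
RENAMED_MODEL_IDS = {
    "llama3-8b": "llama-3-8b",
    "llama3-70b": "llama-3-70b",
    "llama3.1-8b": "llama-3.1-8b",
    "llama3.1-70b": "llama-3.1-70b",
    "llama3.1-405b": "llama-3.1-405b",
}


def migrate_model_id(model_id: str) -> str:
    """Return the 0.0.34+ model ID; IDs that were not renamed pass through."""
    return RENAMED_MODEL_IDS.get(model_id, model_id)
```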

#### Removed

- Provider `octoai` has been removed as they have [been acquired](https://www.geekwire.com/2024/chip-giant-nvidia-acquires-octoai-a-seattle-startup-that-helps-companies-run-ai-models/) and are shutting down their cloud platform. **This is a breaking change!!!** Calls using the `octoai` provider will fail.
- All previous OctoAI supported models (`mixtral-8x22b`, `mixtral-8x7b`, `mistral-7b`, `llama-3-70b`, `llama-3.1-8b`, `llama-3.1-70b`, and `llama-3.1-405b`) are still available via Mistral, Groq, and/or Replicate.

### 0.0.33 - September 11, 2024

#### Changed
56 changes: 29 additions & 27 deletions README.md
@@ -1,12 +1,12 @@
# L2M2: A Simple Python LLM Manager 💬👍

-[![Tests](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml/badge.svg?timestamp=1726092692)](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml) [![codecov](https://codecov.io/github/pkelaita/l2m2/graph/badge.svg?token=UWIB0L9PR8)](https://codecov.io/github/pkelaita/l2m2) [![PyPI version](https://badge.fury.io/py/l2m2.svg?timestamp=1726092692)](https://badge.fury.io/py/l2m2)
+[![Tests](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml/badge.svg?timestamp=1727732688)](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml) [![codecov](https://codecov.io/github/pkelaita/l2m2/graph/badge.svg?token=UWIB0L9PR8)](https://codecov.io/github/pkelaita/l2m2) [![PyPI version](https://badge.fury.io/py/l2m2.svg?timestamp=1727732688)](https://badge.fury.io/py/l2m2)

**L2M2** ("LLM Manager" → "LLMM" → "L2M2") is a tiny and very simple LLM manager for Python that exposes many models through a unified API. This is useful for evaluations, demos, and production applications that need to be model-agnostic.

### Features

-- <!--start-count-->22<!--end-count--> supported models (see below) – regularly updated and with more on the way.
+- <!--start-count-->25<!--end-count--> supported models (see below) – regularly updated and with more on the way.
- Session chat memory – even across multiple models or with concurrent memory streams.
- JSON mode
- Prompt loading tools
@@ -23,30 +23,33 @@ L2M2 currently supports the following models:

<!--start-model-table-->

-| Model Name          | Provider(s)                                                                                          | Model Version(s)                                                                   |
-| ------------------- | ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
-| `gpt-4o`            | [OpenAI](https://openai.com/product)                                                                 | `gpt-4o-2024-08-06`                                                                |
-| `gpt-4o-mini`       | [OpenAI](https://openai.com/product)                                                                 | `gpt-4o-mini-2024-07-18`                                                           |
-| `gpt-4-turbo`       | [OpenAI](https://openai.com/product)                                                                 | `gpt-4-turbo-2024-04-09`                                                           |
-| `gpt-3.5-turbo`     | [OpenAI](https://openai.com/product)                                                                 | `gpt-3.5-turbo-0125`                                                               |
-| `gemini-1.5-pro`    | [Google](https://ai.google.dev/)                                                                     | `gemini-1.5-pro`                                                                   |
-| `gemini-1.0-pro`    | [Google](https://ai.google.dev/)                                                                     | `gemini-1.0-pro`                                                                   |
-| `claude-3.5-sonnet` | [Anthropic](https://www.anthropic.com/api)                                                           | `claude-3-5-sonnet-20240620`                                                       |
-| `claude-3-opus`     | [Anthropic](https://www.anthropic.com/api)                                                           | `claude-3-opus-20240229`                                                           |
-| `claude-3-sonnet`   | [Anthropic](https://www.anthropic.com/api)                                                           | `claude-3-sonnet-20240229`                                                         |
-| `claude-3-haiku`    | [Anthropic](https://www.anthropic.com/api)                                                           | `claude-3-haiku-20240307`                                                          |
-| `command-r`         | [Cohere](https://docs.cohere.com/)                                                                   | `command-r`                                                                        |
-| `command-r-plus`    | [Cohere](https://docs.cohere.com/)                                                                   | `command-r-plus`                                                                   |
-| `mistral-large-2`   | [Mistral](https://mistral.ai/)                                                                       | `mistral-large-latest`                                                             |
-| `mixtral-8x22b`     | [Mistral](https://mistral.ai/), [OctoAI](https://octoai.cloud/)                                      | `open-mixtral-8x22b`, `mixtral-8x22b-instruct`                                     |
-| `mixtral-8x7b`      | [Mistral](https://mistral.ai/), [OctoAI](https://octoai.cloud/), [Groq](https://wow.groq.com/)       | `open-mixtral-8x7b`, `mixtral-8x7b-instruct`, `mixtral-8x7b-32768`                 |
-| `mistral-7b`        | [Mistral](https://mistral.ai/), [OctoAI](https://octoai.cloud/)                                      | `open-mistral-7b`, `mistral-7b-instruct`                                           |
-| `gemma-7b`          | [Groq](https://wow.groq.com/)                                                                        | `gemma-7b-it` ,                                                                    |
-| `llama-3-8b`        | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/)                                   | `llama-3-8b-8192`, `meta/meta-llama-3-8b-instruct`                                 |
-| `llama-3-70b`       | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/), [OctoAI](https://octoai.cloud/)  | `llama-3-70b-8192`, `meta/meta-llama-3-70b-instruct`, `meta-llama-3-70b-instruct`  |
-| `llama-3.1-8b`      | [OctoAI](https://octoai.cloud/)                                                                      | `meta-llama-3.1-8b-instruct`                                                       |
-| `llama-3.1-70b`     | [OctoAI](https://octoai.cloud/)                                                                      | `meta-llama-3.1-70b-instruct`                                                      |
-| `llama-3.1-405b`    | [Replicate](https://replicate.com/), [OctoAI](https://octoai.cloud/)                                 | `meta/meta-llama-3.1-405b-instruct`, `meta-llama-3.1-405b-instruct`                |
+| Model Name          | Provider(s)                                                        | Model Version(s)                                    |
+| ------------------- | ------------------------------------------------------------------ | --------------------------------------------------- |
+| `gpt-4o`            | [OpenAI](https://openai.com/product)                               | `gpt-4o-2024-08-06`                                 |
+| `gpt-4o-mini`       | [OpenAI](https://openai.com/product)                               | `gpt-4o-mini-2024-07-18`                            |
+| `gpt-4-turbo`       | [OpenAI](https://openai.com/product)                               | `gpt-4-turbo-2024-04-09`                            |
+| `gpt-3.5-turbo`     | [OpenAI](https://openai.com/product)                               | `gpt-3.5-turbo-0125`                                |
+| `gemini-1.5-pro`    | [Google](https://ai.google.dev/)                                   | `gemini-1.5-pro`                                    |
+| `gemini-1.0-pro`    | [Google](https://ai.google.dev/)                                   | `gemini-1.0-pro`                                    |
+| `claude-3.5-sonnet` | [Anthropic](https://www.anthropic.com/api)                         | `claude-3-5-sonnet-20240620`                        |
+| `claude-3-opus`     | [Anthropic](https://www.anthropic.com/api)                         | `claude-3-opus-20240229`                            |
+| `claude-3-sonnet`   | [Anthropic](https://www.anthropic.com/api)                         | `claude-3-sonnet-20240229`                          |
+| `claude-3-haiku`    | [Anthropic](https://www.anthropic.com/api)                         | `claude-3-haiku-20240307`                           |
+| `command-r`         | [Cohere](https://docs.cohere.com/)                                 | `command-r`                                         |
+| `command-r-plus`    | [Cohere](https://docs.cohere.com/)                                 | `command-r-plus`                                    |
+| `mistral-large-2`   | [Mistral](https://mistral.ai/)                                     | `mistral-large-latest`                              |
+| `mixtral-8x22b`     | [Mistral](https://mistral.ai/)                                     | `open-mixtral-8x22b`                                |
+| `mixtral-8x7b`      | [Mistral](https://mistral.ai/), [Groq](https://wow.groq.com/)      | `open-mixtral-8x7b`, `mixtral-8x7b-32768`           |
+| `mistral-7b`        | [Mistral](https://mistral.ai/)                                     | `open-mistral-7b`                                   |
+| `gemma-7b`          | [Groq](https://wow.groq.com/)                                      | `gemma-7b-it`                                       |
+| `gemma-2-9b`        | [Groq](https://wow.groq.com/)                                      | `gemma2-9b-it`                                      |
+| `llama-3-8b`        | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/) | `llama3-8b-8192`, `meta/meta-llama-3-8b-instruct`   |
+| `llama-3-70b`       | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/) | `llama3-70b-8192`, `meta/meta-llama-3-70b-instruct` |
+| `llama-3.1-8b`      | [Groq](https://wow.groq.com/)                                      | `llama-3.1-8b-instant`                              |
+| `llama-3.1-70b`     | [Groq](https://wow.groq.com/)                                      | `llama-3.1-70b-versatile`                           |
+| `llama-3.1-405b`    | [Replicate](https://replicate.com/)                                | `meta/meta-llama-3.1-405b-instruct`                 |
+| `llama-3.2-1b`      | [Groq](https://wow.groq.com/)                                      | `llama-3.2-1b-preview`                              |
+| `llama-3.2-3b`      | [Groq](https://wow.groq.com/)                                      | `llama-3.2-3b-preview`                              |

<!--end-model-table-->

@@ -101,7 +104,6 @@ To activate any of the providers, set the provider's API key in the corresponding environment variable:
| Google | `GOOGLE_API_KEY` |
| Groq | `GROQ_API_KEY` |
| Replicate | `REPLICATE_API_TOKEN` |
-| OctoAI                  | `OCTOAI_TOKEN`        |
| Mistral (La Plateforme) | `MISTRAL_API_KEY` |
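
For example, the keys can be exported in the shell before running, or set in-process before the client is constructed. A minimal sketch (the key values are placeholders, not real credentials):

```python
import os

# Placeholder keys -- substitute real credentials. L2M2 reads these
# environment variables when the corresponding provider is activated.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
os.environ["GROQ_API_KEY"] = "gsk-placeholder"

# Sanity check: list the providers whose env vars are now populated.
active_providers = [
    name
    for name, var in [("OpenAI", "OPENAI_API_KEY"), ("Groq", "GROQ_API_KEY")]
    if os.environ.get(var)
]
```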

Additionally, you can activate providers programmatically through the client API.
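
The code sample elided from this diff presumably resembles the sketch below; `LLMClient` and `add_provider(name, api_key)` are assumptions about l2m2's public API based on this README, so check the current docs before relying on the exact signature:

```python
# Sketch only: assumes l2m2 exposes LLMClient.add_provider(name, api_key).
try:
    from l2m2.client import LLMClient  # requires `pip install l2m2`

    client = LLMClient()
    client.add_provider("openai", "sk-placeholder")  # placeholder key
    activated = True
except ImportError:
    # l2m2 is not installed in this environment; nothing to activate.
    activated = False
```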
2 changes: 1 addition & 1 deletion l2m2/__init__.py
@@ -1 +1 @@
-__version__ = "0.0.33"
+__version__ = "0.0.34"
