diff --git a/README.md b/README.md
index e23d47b..ac31713 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # L2M2: A Simple Python LLM Manager 💬👍

-[![Tests](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml/badge.svg?timestamp=1715761013)](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml) [![codecov](https://codecov.io/github/pkelaita/l2m2/graph/badge.svg?token=UWIB0L9PR8)](https://codecov.io/github/pkelaita/l2m2) [![PyPI version](https://badge.fury.io/py/l2m2.svg?timestamp=1715761013)](https://badge.fury.io/py/l2m2)
+[![Tests](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml/badge.svg?timestamp=1715836447)](https://github.com/pkelaita/l2m2/actions/workflows/tests.yml) [![codecov](https://codecov.io/github/pkelaita/l2m2/graph/badge.svg?token=UWIB0L9PR8)](https://codecov.io/github/pkelaita/l2m2) [![PyPI version](https://badge.fury.io/py/l2m2.svg?timestamp=1715836447)](https://badge.fury.io/py/l2m2)

 **L2M2** ("LLM Manager" → "LLMM" → "L2M2") is a very simple LLM manager for Python that exposes lots of models through a unified API. This is useful for evaluation, demos, and other apps that need to easily be model-agnostic.

@@ -16,22 +16,22 @@
 L2M2 currently supports the following models:

-| Model Name        | Provider(s)                                                          | Model Version(s)                                   |
-| ----------------- | -------------------------------------------------------------------- | -------------------------------------------------- |
-| `gpt-4o`          | [OpenAI](https://openai.com/product)                                 | `gpt-4o-2024-05-13`                                |
-| `gpt-4-turbo`     | [OpenAI](https://openai.com/product)                                 | `gpt-4-turbo-2024-04-09`                           |
-| `gpt-3.5-turbo`   | [OpenAI](https://openai.com/product)                                 | `gpt-3.5-turbo-0125`                               |
-| `gemini-1.5-pro`  | [Google](https://ai.google.dev/)                                     | `gemini-1.5-pro-latest`                            |
-| `gemini-1.0-pro`  | [Google](https://ai.google.dev/)                                     | `gemini-1.0-pro-latest`                            |
-| `claude-3-opus`   | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-opus-20240229`                           |
-| `claude-3-sonnet` | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-sonnet-20240229`                         |
-| `claude-3-haiku`  | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-haiku-20240307`                          |
-| `command-r`       | [Cohere](https://docs.cohere.com/)                                   | `command-r`                                        |
-| `command-r-plus`  | [Cohere](https://docs.cohere.com/)                                   | `command-r-plus`                                   |
-| `mixtral-8x7b`    | [Groq](https://wow.groq.com/)                                        | `mixtral-8x7b-32768`                               |
-| `gemma-7b`        | [Groq](https://wow.groq.com/)                                        | `gemma-7b-it`                                      |
-| `llama3-8b`       | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/)   | `llama3-8b-8192`, `meta/meta-llama-3-8b-instruct`  |
-| `llama3-70b`      | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/)   | `llama3-70b-8192`, `meta/meta-llama-3-8b-instruct` |
+| Model Name        | Provider(s)                                                          | Model Version(s)                                    |
+| ----------------- | -------------------------------------------------------------------- | --------------------------------------------------- |
+| `gpt-4o`          | [OpenAI](https://openai.com/product)                                 | `gpt-4o-2024-05-13`                                 |
+| `gpt-4-turbo`     | [OpenAI](https://openai.com/product)                                 | `gpt-4-turbo-2024-04-09`                            |
+| `gpt-3.5-turbo`   | [OpenAI](https://openai.com/product)                                 | `gpt-3.5-turbo-0125`                                |
+| `gemini-1.5-pro`  | [Google](https://ai.google.dev/)                                     | `gemini-1.5-pro-latest`                             |
+| `gemini-1.0-pro`  | [Google](https://ai.google.dev/)                                     | `gemini-1.0-pro-latest`                             |
+| `claude-3-opus`   | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-opus-20240229`                            |
+| `claude-3-sonnet` | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-sonnet-20240229`                          |
+| `claude-3-haiku`  | [Anthropic](https://www.anthropic.com/api)                           | `claude-3-haiku-20240307`                           |
+| `command-r`       | [Cohere](https://docs.cohere.com/)                                   | `command-r`                                         |
+| `command-r-plus`  | [Cohere](https://docs.cohere.com/)                                   | `command-r-plus`                                    |
+| `mixtral-8x7b`    | [Groq](https://wow.groq.com/)                                        | `mixtral-8x7b-32768`                                |
+| `gemma-7b`        | [Groq](https://wow.groq.com/)                                        | `gemma-7b-it`                                       |
+| `llama3-8b`       | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/)   | `llama3-8b-8192`, `meta/meta-llama-3-8b-instruct`   |
+| `llama3-70b`      | [Groq](https://wow.groq.com/), [Replicate](https://replicate.com/)   | `llama3-70b-8192`, `meta/meta-llama-3-70b-instruct` |
diff --git a/l2m2/__init__.py b/l2m2/__init__.py
index f18e5d0..b2d3da7 100644
--- a/l2m2/__init__.py
+++ b/l2m2/__init__.py
@@ -1 +1 @@
-__version__ = "0.0.18"
+__version__ = "0.0.18.post1"
diff --git a/l2m2/model_info.py b/l2m2/model_info.py
index c0538be..2e48413 100644
--- a/l2m2/model_info.py
+++ b/l2m2/model_info.py
@@ -298,7 +298,7 @@ class ModelEntry(TypedDict):
             },
         },
         "replicate": {
-            "model_id": "meta/meta-llama-3-8b-instruct",
+            "model_id": "meta/meta-llama-3-70b-instruct",
             "params": {
                 "temperature": {
                     "default": PROVIDER_DEFAULT,
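
For context on why the `model_id` fix matters, here is a minimal sketch of calling `llama3-70b` through Replicate with L2M2. It assumes the `LLMClient` interface (`add_provider`, `call`) documented in the package's README; none of these names are introduced by this patch, and the prompt and environment variable below are illustrative only.

```python
import os

from l2m2.client import LLMClient

# Assumed usage sketch: with only Replicate registered, a call to "llama3-70b"
# should resolve to the Replicate model_id, which this patch corrects from
# "meta/meta-llama-3-8b-instruct" to "meta/meta-llama-3-70b-instruct".
client = LLMClient()
client.add_provider("replicate", os.environ["REPLICATE_API_TOKEN"])

response = client.call(
    model="llama3-70b",
    prompt="In one sentence, what changed between Llama 3 8B and 70B?",
)
print(response)
```

Before the fix, the same call would have silently run the 8B-instruct model on Replicate while reporting itself as `llama3-70b`; the Groq mapping for this model was already correct.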