Releases: jackmpcollins/magentic
v0.26.0
What's Changed
- Return usage stats on AssistantMessage by @jackmpcollins in #214
Example of a non-streamed response, where usage is immediately available:
from magentic import OpenaiChatModel, UserMessage
chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")])
print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
Example of a streamed response, where usage only becomes available after the stream has been processed:
from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr
chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])
print(message.usage)
# > None because the stream has not been processed yet
# Process the stream (convert StreamedStr to str)
str(message.content)
print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
Full Changelog: v0.25.0...v0.26.0
v0.25.0
What's Changed
- Switch AnthropicChatModel to use streaming by @jackmpcollins in #215
StreamedStr now streams correctly, but object streaming is waiting on Anthropic support for streaming array responses.
from magentic import prompt, StreamedStr
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel

@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
def tell_me_about(topic: str) -> StreamedStr: ...

for chunk in tell_me_about("chocolate"):
    print(chunk, end="", flush=True)
- add optional custom_llm_provider param for litellm by @entropi in #221 (see the sketch after this list)
- Add tests for LiteLLM async callbacks by @jackmpcollins in #223
- Tidy up: Combine openai streamed_tool_call functions by @jackmpcollins in #225
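A minimal sketch of the new parameter, assuming LitellmChatModel forwards custom_llm_provider through to litellm; the model name and endpoint here are illustrative:
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

@prompt(
    "Say hello to {name}.",
    # custom_llm_provider tells litellm which provider's API format to use
    # when it cannot infer this from the model name (assumed usage).
    model=LitellmChatModel(
        "my-hosted-model",  # illustrative model name
        api_base="http://localhost:8000",  # illustrative OpenAI-compatible endpoint
        custom_llm_provider="openai",
    ),
)
def say_hello(name: str) -> str: ...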
New Contributors
- @entropi made their first contribution in #221
Full Changelog: v0.24.0...v0.25.0
v0.25.0a0
v0.24.0
Warning
The default model for magentic is now gpt-4o instead of gpt-4-turbo. See Configuration for how to change this.
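For example, a minimal sketch of changing the default, assuming the environment-variable and context-manager options described in the Configuration docs:
from magentic import OpenaiChatModel, prompt

@prompt("Say hello!")
def say_hello() -> str: ...

# Globally, via environment variables (assumed names from the Configuration docs):
#   MAGENTIC_BACKEND=openai
#   MAGENTIC_OPENAI_MODEL=gpt-4-turbo

# Or for a specific block of code, using the chat model as a context manager:
with OpenaiChatModel("gpt-4-turbo"):
    say_hello()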
What's Changed
- docs: update README.md by @eltociear in #206
- Make GPT-4o the default OpenAI model by @jackmpcollins in #212
- Skip validation for message serialization by @jackmpcollins in #213
New Contributors
- @eltociear made their first contribution in #206
Full Changelog: v0.23.0...v0.24.0
v0.23.0
What's Changed
- 🦙 Ollama can now return structured outputs / function calls (it takes a little prompting to make it reliable).
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel
@prompt(
    "Count to {n}. Use the tool to return in the format [1, 2, 3, ...]",
    model=LitellmChatModel("ollama_chat/llama2", api_base="http://localhost:11434"),
)
def count_to(n: int) -> list[int]: ...
count_to(5)
# > [1, 2, 3, 4, 5]
PRs
- poetry update by @jackmpcollins in #202
- Support ollama structured outputs / function calling by @jackmpcollins in #204
Full Changelog: v0.22.0...v0.23.0
v0.22.0
What's Changed
- 🚀 Forced function calling using the new tool_choice: "required" argument from OpenAI. This means no more StructuredOutputError caused by the model returning a string when it was not in the return annotation (for prompt-functions with a union return type; single return types were already forced).
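For illustration, a sketch of a prompt-function with a union return annotation that previously could raise this error (the function name and prompt here are made up):
from pydantic import BaseModel

from magentic import prompt

class Superhero(BaseModel):
    name: str
    power: str

# str is not in the return annotation, so with tool_choice "required"
# the model must call one of the generated tools instead of replying
# with plain text.
@prompt("Create a superhero based on: {description}")
def create_superhero_or_powers(description: str) -> Superhero | list[str]: ...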
PRs
- Use tool_choice required for OpenaiChatModel by @jackmpcollins in #201
- Bump tqdm from 4.66.2 to 4.66.3 by @dependabot in #200
- Bump mkdocs from 1.5.3 to 1.6.0 by @dependabot in #198
- Bump pytest from 8.1.1 to 8.2.0 by @dependabot in #197
- Bump mypy from 1.9.0 to 1.10.0 by @dependabot in #196
Full Changelog: v0.21.1...v0.22.0
v0.21.1
What's Changed
- Remove duplicate chat prompting section from readme by @jackmpcollins in #193
- Bump ruff from 0.3.0 to 0.4.1 by @dependabot in #192
- Bump openai from 1.17.1 to 1.23.2 by @dependabot in #190
- Bump pydantic from 2.6.3 to 2.7.0 by @dependabot in #191
- Include generated text in error message for "string not expected" by @jackmpcollins in #195
When the model returns a string that was not expected, the error message now contains the start of the returned string. For example:
StructuredOutputError: String was returned by model but not expected. You may need to update your prompt to encourage the model to return a specific type. Model output: '{ "name": "return_list_of_int", "arguments": { "properties": { "value": { "items": [1, 2, 3], [...]'
Full Changelog: v0.21.0...v0.21.1
v0.21.0
What's Changed
- Improve function calling docs by @jackmpcollins in #186
- Add vision example: renaming screenshots by @jackmpcollins in #187
- Improve RAG example notebook using GitHub search by @jackmpcollins in #188
- Add Mistral backend by @jackmpcollins in #189
Mistral API now supported natively 🚀 with full support for StreamedStr, ParallelFunctionCall, etc. Example:
from magentic import prompt
from magentic.chat_model.mistral_chat_model import MistralChatModel
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]

@prompt(
    """Create a Superhero named {name}""",
    model=MistralChatModel("mistral-large-latest"),
)
def create_superhero(name: str) -> Superhero: ...

create_superhero("Garden Man")
# Superhero(name='Garden Man', age=35, power='Plant control', enemies=['Smog', 'Deforestator'])
Full Changelog: v0.20.1...v0.21.0
v0.20.1
What's Changed
Example of passing litellm metadata:
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

@prompt(
    "Create a Superhero named {name}.",
    model=LitellmChatModel("gpt-4", metadata={"foo": "bar"}),
)
def create_superhero(name: str) -> Superhero: ...  # Superhero as defined in the v0.21.0 example above
Full Changelog: v0.20.0...v0.20.1
v0.20.0
Warning
The default model for magentic is now gpt-4-turbo instead of gpt-3.5-turbo. See Configuration for how to change this.
What's Changed
- Tidy up docs by @jackmpcollins in #181
- Set default LLM to gpt-4-turbo. Update vision docs. by @jackmpcollins in #183
- Bump anthropic from 0.23.1 to 0.25.1 by @dependabot in #180
- Bump peaceiris/actions-gh-pages from 3 to 4 by @dependabot in #178
- Bump aiohttp from 3.9.3 to 3.9.4 by @dependabot in #182
- Bump idna from 3.6 to 3.7 by @dependabot in #176
- Bump openai from 1.14.2 to 1.17.1 by @dependabot in #177
- Bump litellm from 1.34.0 to 1.35.5 by @dependabot in #179
- Includes a fix for Anthropic responses that contain both content and tool_calls
Full Changelog: v0.19.0...v0.20.0
Having a default of gpt-4-turbo enables using vision with function calling by default:
from pydantic import BaseModel, Field
from magentic import chatprompt, UserMessage
from magentic.vision import UserImageMessage
IMAGE_URL_WOODEN_BOARDWALK = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
class ImageDetails(BaseModel):
    description: str = Field(description="A brief description of the image.")
    name: str = Field(description="A short name.")

@chatprompt(
    UserMessage("Describe the following image in one sentence."),
    UserImageMessage(IMAGE_URL_WOODEN_BOARDWALK),
)
def describe_image() -> ImageDetails: ...
image_details = describe_image()
print(image_details.name)
# 'Wooden Boardwalk in Green Wetland'
print(image_details.description)
# 'A serene wooden boardwalk meanders through a lush green wetland under a blue sky dotted with clouds.'