Commit

Merge branch 'main' into devin/1732421325-poetry-to-uv
jxnl authored Dec 3, 2024
2 parents 2d01d9b + 4c4a1ce commit eba5e13
Showing 4 changed files with 185 additions and 1 deletion.
4 changes: 3 additions & 1 deletion docs/blog/posts/writer-support.md
@@ -17,11 +17,13 @@ tags:

# Structured Outputs with Writer now supported

We're excited to announce that `instructor` now supports [Writer](https://writer.com)'s enterprise-grade LLMs, including their latest Palmyra X 004 model. This integration enables structured outputs and enterprise AI workflows with Writer's powerful language models.

## Getting Started

- First, make sure that you've signed up for an account on [Writer](https://writer.com) and obtained an API key. Once you've done so, install `instructor` with Writer support by running `pip install instructor[writer]` in your terminal.
+ First, make sure that you've signed up for an account on [Writer](https://app.writer.com/aistudio/signup?utm_campaign=devrel) and obtained an API key using this [quickstart guide](https://dev.writer.com/api-guides/quickstart). Once you've done so, install `instructor` with Writer support by running `pip install instructor[writer]` in your terminal.

Make sure to set the `WRITER_API_KEY` environment variable with your Writer API key or pass it as an argument to the `Writer` constructor.
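
A minimal sketch of both options, assuming the `writerai` SDK's `Writer` client and instructor's `from_writer` helper:

```python
import os

import instructor
from writerai import Writer  # assumes the writerai SDK installed via instructor[writer]

# Option 1: set the environment variable (or `export WRITER_API_KEY=...` in your shell)
os.environ["WRITER_API_KEY"] = "your-api-key"
client = instructor.from_writer(Writer())

# Option 2: pass the key directly to the Writer constructor
client = instructor.from_writer(Writer(api_key="your-api-key"))
```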

180 changes: 180 additions & 0 deletions docs/integrations/cortex.md
@@ -0,0 +1,180 @@
---
title: "Structured outputs with Cortex, a complete guide w/ instructor"
description: "Learn how to use Cortex with Instructor for structured outputs. Complete guide with examples and best practices."
---

# Structured outputs with Cortex

Cortex.cpp is a runtime that helps you run open-source LLMs out of the box. It supports a wide variety of models and powers the [Jan](https://jan.ai) platform. This guide provides a quickstart on using Cortex with `instructor` for structured outputs.

## Quick Start

Instructor comes with support for the OpenAI client out of the box, so you don't need to install anything extra.

```bash
pip install "instructor"
```

Once you've done so, make sure to pull the model that you'd like to use. In this example, we'll be using a quantized llama3.2 model.

```bash
cortex run llama3.2:3b-gguf-q4-km
```

Let's start by initializing the client below. Note that we need to provide a base URL and an API key here. The API key itself doesn't matter; it just needs to be present so the OpenAI client doesn't throw an error.

```python
import openai
from instructor import from_openai

# The placeholder API key keeps the OpenAI client happy; Cortex ignores it
client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)
```

## Simple User Example (Sync)

```python
from instructor import from_openai
from pydantic import BaseModel
import openai

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class User(BaseModel):
    name: str
    age: int


resp = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[{"role": "user", "content": "Ivan is 27 and lives in Singapore"}],
    response_model=User,
)

print(resp)
#> name='Ivan' age=27
```

## Simple User Example (Async)

```python
import asyncio

import openai
from instructor import from_openai
from pydantic import BaseModel

# Initialize the async client against the local Cortex server
client = from_openai(
    openai.AsyncOpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class User(BaseModel):
    name: str
    age: int


async def extract_user():
    user = await client.chat.completions.create(
        model="llama3.2:3b-gguf-q4-km",
        messages=[
            {"role": "user", "content": "Extract: Jason is 25 years old"},
        ],
        response_model=User,
    )
    return user


# Run the async function
user = asyncio.run(extract_user())
print(user)
#> name='Jason' age=25
```

## Nested Example

```python
from instructor import from_openai
from pydantic import BaseModel
import openai

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class Address(BaseModel):
    street: str
    city: str
    country: str


class User(BaseModel):
    name: str
    age: int
    addresses: list[Address]


user = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[
        {
            "role": "user",
            "content": """
                Extract: Jason is 25 years old.
                He lives at 123 Main St, New York, USA
                and has a summer house at 456 Beach Rd, Miami, USA
            """,
        },
    ],
    response_model=User,
)

print(user)
#> name='Jason' age=25 addresses=[Address(street='123 Main St', city='New York',
#> country='USA'), Address(street='456 Beach Rd', city='Miami', country='USA')]
```

In this tutorial we've seen how to run local models with Cortex while `instructor`'s interface handles retries and function calling for us.
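
For example, pydantic validators let `instructor` catch bad outputs and automatically re-ask the model. A minimal sketch reusing the client from above (the uppercase rule is just an illustrative constraint):

```python
from typing import Annotated

import openai
from instructor import from_openai
from pydantic import AfterValidator, BaseModel

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


def must_be_uppercase(v: str) -> str:
    # Failing this assertion sends the error back to the model as a re-ask
    assert v.isupper(), "Name must be in uppercase"
    return v


class User(BaseModel):
    name: Annotated[str, AfterValidator(must_be_uppercase)]
    age: int


resp = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[{"role": "user", "content": "Extract: jason is 25 years old"}],
    response_model=User,
    max_retries=3,  # re-ask the model up to 3 times on validation failure
)
print(resp)
#> name='JASON' age=25
```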

We'll be publishing a lot more content on Cortex and working with local models going forward, so keep an eye out for that.

## Related Resources

- [Cortex Documentation](https://cortex.so/docs/)
- [Instructor Core Concepts](../concepts/index.md)
- [Type Validation Guide](../concepts/validation.md)
- [Advanced Usage Examples](../examples/index.md)

## Updates and Compatibility

Instructor maintains compatibility with the latest OpenAI API versions and models. Check the [changelog](https://github.com/jxnl/instructor/blob/main/CHANGELOG.md) for updates.
1 change: 1 addition & 0 deletions docs/integrations/index.md
@@ -16,6 +16,7 @@ Instructor supports a wide range of AI model providers, each with their own capa
- [Ollama](./ollama.md) - Run open-source models locally
- [llama-cpp-python](./llama-cpp-python.md) - Python bindings for llama.cpp
- [Together AI](./together.md) - Host and run open source models
- [Cortex](./cortex.md) - Run open source models with Cortex

### Cloud AI Providers

1 change: 1 addition & 0 deletions mkdocs.yml
@@ -213,6 +213,7 @@ nav:
- Azure OpenAI: 'integrations/azure.md'
- Cerebras: 'integrations/cerebras.md'
- Cohere: 'integrations/cohere.md'
- Cortex: 'integrations/cortex.md'
- Fireworks: 'integrations/fireworks.md'
- Gemini: 'integrations/google.md'
- Groq: 'integrations/groq.md'
