feat: added cortex documentation (#1225)
ivanleomk authored Nov 28, 2024
1 parent 73df68f commit 4c4a1ce
Showing 3 changed files with 182 additions and 0 deletions.
180 changes: 180 additions & 0 deletions docs/integrations/cortex.md
@@ -0,0 +1,180 @@
---
title: "Structured outputs with Cortex, a complete guide with Instructor"
description: "Learn how to use Cortex with Instructor for structured outputs. Complete guide with examples and best practices."
---

# Structured outputs with Cortex

Cortex.cpp is a local runtime that lets you run open-source LLMs out of the box. It supports a wide variety of models and powers the [Jan](https://jan.ai) desktop platform. This guide is a quickstart on using Cortex with Instructor for structured outputs.

## Quick Start

Cortex exposes an OpenAI-compatible API, and Instructor supports the OpenAI client out of the box, so there's nothing extra to install.

```bash
pip install "instructor"
```

Once you've done so, pull the model you'd like to use. In this example, we'll use a quantized Llama 3.2 3B model.

```bash
cortex run llama3.2:3b-gguf-q4-km
```

Let's start by initializing the client below. Note that we need to provide a base URL and an API key here. The API key's value doesn't matter; it just needs to be set so the OpenAI client doesn't raise an error.

```python
from instructor import from_openai
import openai

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)
```
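
If you'd like to sanity-check that the server is up before making structured calls, you can list the models Cortex is serving. This is a minimal sketch, assuming Cortex exposes the standard OpenAI-compatible `/v1/models` endpoint on its default port:

```python
import openai

# Plain OpenAI client pointed at the local Cortex server; the key is unused.
raw_client = openai.OpenAI(
    base_url="http://localhost:39281/v1",
    api_key="unused",
)

# Assumes the OpenAI-compatible /v1/models endpoint is available.
for model in raw_client.models.list():
    print(model.id)
```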

## Simple User Example (Sync)

```python
from instructor import from_openai
from pydantic import BaseModel
import openai

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class User(BaseModel):
    name: str
    age: int


resp = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[{"role": "user", "content": "Ivan is 27 and lives in Singapore"}],
    response_model=User,
)

print(resp)
#> name='Ivan' age=27
```
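
Instructor can also stream the structured object as it's generated, which is handy with slower local models. The sketch below reuses the `client` and `User` from the example above; `create_partial` yields progressively more complete objects, and the printed outputs here are illustrative:

```python
# Each iteration yields a progressively more complete User object.
for partial_user in client.chat.completions.create_partial(
    model="llama3.2:3b-gguf-q4-km",
    messages=[{"role": "user", "content": "Ivan is 27 and lives in Singapore"}],
    response_model=User,
):
    print(partial_user)
    #> name=None age=None
    #> name='Ivan' age=None
    #> name='Ivan' age=27
```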

## Simple User Example (Async)

```python
import asyncio

from instructor import from_openai
from pydantic import BaseModel
import openai

client = from_openai(
    openai.AsyncOpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class User(BaseModel):
    name: str
    age: int


async def extract_user():
    user = await client.chat.completions.create(
        model="llama3.2:3b-gguf-q4-km",
        messages=[
            {"role": "user", "content": "Extract: Jason is 25 years old"},
        ],
        response_model=User,
    )
    return user


# Run the async function
user = asyncio.run(extract_user())
print(user)
#> name='Jason' age=25
```
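
Since the client is async, you can also fan out several extractions concurrently with `asyncio.gather`. Here's a small sketch reusing the `client` and `User` defined above; the input texts are made up for illustration:

```python
async def extract_users(texts: list[str]) -> list[User]:
    # Fire off all extractions concurrently and wait for the results.
    return await asyncio.gather(
        *(
            client.chat.completions.create(
                model="llama3.2:3b-gguf-q4-km",
                messages=[{"role": "user", "content": f"Extract: {text}"}],
                response_model=User,
            )
            for text in texts
        )
    )


users = asyncio.run(extract_users(["Jason is 25 years old", "Sarah is 30 years old"]))
print(users)
#> [User(name='Jason', age=25), User(name='Sarah', age=30)]
```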

## Nested Example

```python
from instructor import from_openai
from pydantic import BaseModel
import openai

client = from_openai(
    openai.OpenAI(
        base_url="http://localhost:39281/v1",
        api_key="this is a fake api key that doesn't matter",
    )
)


class Address(BaseModel):
    street: str
    city: str
    country: str


class User(BaseModel):
    name: str
    age: int
    addresses: list[Address]


user = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[
        {
            "role": "user",
            "content": """
            Extract: Jason is 25 years old.
            He lives at 123 Main St, New York, USA
            and has a summer house at 456 Beach Rd, Miami, USA
            """,
        },
    ],
    response_model=User,
)

print(user)
#> name='Jason' age=25 addresses=[Address(street='123 Main St', city='New York',
#> country='USA'), Address(street='456 Beach Rd', city='Miami', country='USA')]
```
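
A large part of what Instructor adds on top of the raw client is automatic retrying when validation fails. As a sketch reusing the `client` from above: attach a validator to a field and pass `max_retries`, and Instructor will feed the validation error back to the model and re-ask until the output passes or the retry budget runs out. The uppercase rule here is contrived, purely to trigger a retry:

```python
from pydantic import BaseModel, field_validator


class ValidatedUser(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_is_uppercase(cls, v: str) -> str:
        # If the model returns lowercase, this error is sent back and retried.
        if not v.isupper():
            raise ValueError("name must be in uppercase, e.g. 'JASON'")
        return v


user = client.chat.completions.create(
    model="llama3.2:3b-gguf-q4-km",
    messages=[{"role": "user", "content": "Extract: Jason is 25 years old"}],
    response_model=ValidatedUser,
    max_retries=3,  # number of times Instructor re-asks after a validation error
)
print(user)
#> name='JASON' age=25
```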

In this guide, we've seen how to run local models with Cortex while our simple interface handles retries and function calling for you.

We'll be publishing more content on Cortex and working with local models going forward, so keep an eye out for that.

## Related Resources

- [Cortex Documentation](https://cortex.so/docs/)
- [Instructor Core Concepts](../concepts/index.md)
- [Type Validation Guide](../concepts/validation.md)
- [Advanced Usage Examples](../examples/index.md)

## Updates and Compatibility

Instructor maintains compatibility with the latest OpenAI API versions and models. Check the [changelog](https://github.com/jxnl/instructor/blob/main/CHANGELOG.md) for updates.
1 change: 1 addition & 0 deletions docs/integrations/index.md
@@ -16,6 +16,7 @@ Instructor supports a wide range of AI model providers, each with their own capabilities
- [Ollama](./ollama.md) - Run open-source models locally
- [llama-cpp-python](./llama-cpp-python.md) - Python bindings for llama.cpp
- [Together AI](./together.md) - Host and run open source models
- [Cortex](./cortex.md) - Run open source models with Cortex

### Cloud AI Providers

1 change: 1 addition & 0 deletions mkdocs.yml
@@ -213,6 +213,7 @@ nav:
- Azure OpenAI: 'integrations/azure.md'
- Cerebras: 'integrations/cerebras.md'
- Cohere: 'integrations/cohere.md'
- Cortex: 'integrations/cortex.md'
- Fireworks: 'integrations/fireworks.md'
- Gemini: 'integrations/google.md'
- Groq: 'integrations/groq.md'
