fix: update type hints and comment formatting for Python 3.7 compatibility
devin-ai-integration[bot] committed Nov 24, 2024
1 parent 16ff46d commit ed21cbd
Showing 2 changed files with 6 additions and 15 deletions.
12 changes: 1 addition & 11 deletions docs/blog/posts/introducing-structured-outputs.md
````diff
@@ -126,17 +126,7 @@ with client.beta.chat.completions.stream(
 # > {"name":"Jason","age":
 # > {"name":"Jason","age":25
 # > {"name":"Jason","age":25}
-# > {"
-# > {"name
-# > {"name":"
-# > {"name":"Jason
-# > {"name":"Jason","
-# > {"name":"Jason","age
-# > {"name":"Jason","age":
-# > {"name":"Jason","age":25
-# > {"name":"Jason","age":25}
-
-### Unpredictable Latency Spikes
+```
 
 In order to benchmark the two modes, we made 200 identical requests to OpenAI and noted the time taken for each request to complete. The results are summarized in the following table:
 
````
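The benchmark described in the surviving context line ("200 identical requests, noting the time taken for each") is straightforward to approximate. Below is a minimal sketch of such a timing loop using instructor's OpenAI integration; the model name, prompt, percentile choices, and `User` schema are illustrative stand-ins, not taken from the post.

```python
import time

import instructor
import openai
from pydantic import BaseModel


class User(BaseModel):  # illustrative schema, similar to the post's example
    name: str
    age: int


client = instructor.from_openai(openai.OpenAI())

# Time 200 identical requests and record per-request latency.
# Absolute numbers depend on network conditions and model load.
latencies = []
for _ in range(200):
    start = time.perf_counter()
    client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=User,
        messages=[{"role": "user", "content": "Extract: Jason is 25 years old"}],
    )
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"p50: {latencies[99]:.2f}s  p99: {latencies[197]:.2f}s")
```

Running the same loop once per mode (structured outputs versus tool calling) yields the per-request latency distribution that the post tabulates.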
9 changes: 5 additions & 4 deletions docs/blog/posts/version-1.md
````diff
@@ -62,11 +62,12 @@ Now, whenever you call `client.chat.completions.create` the `model` and `tempera
 When I first started working on this project, my goal was to ensure that we weren't introducing any new standards. Instead, our focus was on maintaining compatibility with existing ones. By creating our own client, we can seamlessly proxy OpenAI's `chat.completions.create` and Anthropic's `messages.create` methods. This approach allows us to provide a smooth upgrade path for your client, enabling support for all the latest models and features as they become available. Additionally, this strategy safeguards us against potential downstream changes.
 
 ```python
+from __future__ import annotations
 import openai
 import anthropic
 import litellm
 import instructor
-from typing import TypeVar
+from typing import TypeVar, Type
 
 T = TypeVar("T")
 
@@ -77,9 +78,9 @@ client = instructor.from_litellm(litellm.completion)
 
 # all of these will route to the same underlying create function
 # allow you to add instructor to try it out, while easily removing it
-def create(model: str, response_model: typing.Type[T]) -> T: ... # type: ignore
-def chat_completions_create(model: str, response_model: typing.Type[T]) -> T: ... # type: ignore
-def messages_create(model: str, response_model: typing.Type[T]) -> T: ... # type: ignore
+def create(model: str, response_model: Type[T]) -> T: ... # type: ignore
+def chat_completions_create(model: str, response_model: Type[T]) -> T: ... # type: ignore
+def messages_create(model: str, response_model: Type[T]) -> T: ... # type: ignore
 
 ## Type are infered correctly
 
````
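This second file carries the "type hints" half of the commit: the original stubs referenced `typing.Type[T]` while only `TypeVar` had been imported from `typing`, so the annotations would raise `NameError` when the snippet was actually run. Importing `Type` by name fixes that, and together with `from __future__ import annotations` it keeps the snippet valid on Python 3.7, where the builtin `type[T]` generic is unavailable. A minimal, self-contained sketch of the corrected pattern follows; the `User` model and the final call are illustrative assumptions, not part of the diff.

```python
from __future__ import annotations  # defers annotation evaluation on 3.7+

from typing import Type, TypeVar

from pydantic import BaseModel

T = TypeVar("T")


class User(BaseModel):  # illustrative response model, not from the diff
    name: str
    age: int


# Same shape as the patched stubs: Type[T] now resolves because Type is
# imported by name; the old `typing.Type[T]` raised NameError at runtime,
# since the `typing` module itself was never imported.
def create(model: str, response_model: Type[T]) -> T: ...  # type: ignore


# A type checker infers `user: User` from the response_model argument.
# (At runtime the stub returns None; it exists only to show the typing.)
user = create("gpt-4", response_model=User)
```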
