Request for Custom API Endpoint Support #14
Comments
@jonasendc provided this comment:
import openai

client = openai.AzureOpenAI(
    api_version="2024-03-01-preview",
    azure_endpoint="",   # set to your Azure OpenAI resource endpoint
    api_key=api_key,     # assumes api_key is defined elsewhere
)
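For context on what the Azure client above actually targets, the sketch below composes the REST URL the `AzureOpenAI` client sends chat requests to. The endpoint host and deployment name used in the example are placeholders, not values from this thread:

```python
# Sketch: the REST URL an AzureOpenAI chat.completions call resolves to.
# The endpoint and deployment name below are illustrative placeholders.
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = azure_chat_url(
    "https://example.openai.azure.com", "gpt-4o", "2024-03-01-preview"
)
```

This is why the client needs both `azure_endpoint` and a deployment name (passed as `model`), unlike the plain OpenAI client.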
@Mingzefei provided this comment:
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

response = client.chat.completions.create(
    model="llama3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
print(response.choices[0].message.content)
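The Ollama example above only changes the `base_url`; the same pattern applies to any locally served, OpenAI-compatible API. A tiny helper (the function name is hypothetical) makes the URL convention explicit:

```python
# Sketch: compose the base_url for a locally served OpenAI-compatible API.
# The "/v1" suffix and the default port follow the Ollama example above.
def local_openai_base_url(host: str = "localhost", port: int = 11434) -> str:
    return f"http://{host}:{port}/v1"
```

For example, `local_openai_base_url("myhost", 8000)` would target a server such as vLLM or LM Studio running on another port; only the host and port differ.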
A temporary solution is as follows:
That should be OK for OpenAI-style LLMs.
As noted in the PR linked above, that may or may not work with any allegedly OpenAI-compatible endpoint. A little more discovery is needed to decide how well we can support "arbitrary" endpoints in general.
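One way to do the discovery mentioned above is to probe the server's `/models` route, which OpenAI-style APIs expose as a list object. The helper below is a hypothetical sketch, not part of the project's code:

```python
import json
import urllib.request

# Sketch: probe whether a server exposes an OpenAI-style /models route.
# Hypothetical helper; any failure (unreachable host, non-JSON reply,
# unexpected shape) is treated as "not compatible".
def looks_openai_compatible(base_url: str, api_key: str = "unused") -> bool:
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            data = json.load(resp)
    except Exception:
        return False
    # OpenAI-style list responses carry a "data" array of model entries.
    return isinstance(data, dict) and isinstance(data.get("data"), list)
```

A passing probe is necessary but not sufficient; some servers list models yet diverge on chat-completion parameters, which is why further discovery is still needed.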
With the Azure and Ollama support now in
per toshiakit/MatGPT#30
This refers to the use of custom assistants:
https://community.openai.com/t/custom-assistant-api-endpoints/567107