OpenAI provides access to powerful language models and various AI capabilities through its API. This document outlines how to use the OpenAI API within the OneSDK framework, offering a seamless integration for a wide range of AI tasks including text generation, embeddings, image generation, audio processing, and fine-tuning.
To use the OpenAI API, initialize the OneSDK with your OpenAI API key:
```python
from llm_onesdk import OneSDK

openai_sdk = OneSDK("openai", {
    "api_key": "your_api_key_here",
    "api_url": "https://api.openai.com/"  # Optional: override the default base URL
})
```
Alternatively, you can set the API key in the `OPENAI_API_KEY` environment variable, and the SDK will use it automatically.
To get a list of available models:
```python
models = openai_sdk.list_models()
print(models)
```
To get information about a specific model:
```python
model_info = openai_sdk.get_model("gpt-3.5-turbo")
print(model_info)
```
To generate text, use the `generate` method. Specify the model and provide a list of messages:
```python
model = "gpt-3.5-turbo"  # Or another available OpenAI model
messages = [{"role": "user", "content": "Explain the concept of machine learning."}]

response = openai_sdk.generate(model, messages)
print(response['choices'][0]['message']['content'])
```
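The `messages` list follows the OpenAI chat format, so it can also carry a system prompt and prior conversation turns. A sketch of a multi-turn exchange (the `system`/`user`/`assistant` role names are OpenAI's standard ones; the conversation content is invented):

```python
# A multi-turn conversation: each turn is a dict with a role and content.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Explain the concept of machine learning."},
    {"role": "assistant", "content": "Machine learning lets systems learn patterns from data."},
    {"role": "user", "content": "Give one everyday example."},
]

# Passed to generate() exactly like a single-message list:
# response = openai_sdk.generate("gpt-3.5-turbo", messages)
```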
For longer responses, or to receive partial results as they are generated, use the `stream_generate` method:
```python
for chunk in openai_sdk.stream_generate(model, messages):
    # The final chunk's delta may carry no 'content' key, so use .get()
    print(chunk['choices'][0]['delta'].get('content', ''), end='', flush=True)
```
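If you need the full text afterwards, accumulate the deltas as they arrive. The pattern can be exercised with a stand-in generator (`fake_stream` below is purely illustrative; it mimics the chunk shape of the streamed response):

```python
def fake_stream():
    # Stand-in for openai_sdk.stream_generate(model, messages).
    for word in ["Machine ", "learning ", "is ", "fun."]:
        yield {"choices": [{"delta": {"content": word}}]}
    yield {"choices": [{"delta": {}}]}  # final chunk may carry no content

parts = []
for chunk in fake_stream():
    delta = chunk["choices"][0]["delta"]
    parts.append(delta.get("content", ""))  # tolerate chunks without content

full_text = "".join(parts)
print(full_text)  # Machine learning is fun.
```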
OpenAI supports creating embeddings for text:
```python
model = "text-embedding-ada-002"  # OpenAI's embedding model
input_text = "Hello, world!"

embeddings = openai_sdk.create_embedding(model, input_text)
print(embeddings)
```
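Embeddings are typically compared with cosine similarity. A minimal sketch using only the standard library (the vectors here are toy values, not real embedding output; with the API you would read each vector from the response's embedding data):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vec_a = [0.1, 0.2, 0.3]
vec_b = [0.1, 0.2, 0.3]
vec_c = [-0.3, 0.1, -0.2]

print(cosine_similarity(vec_a, vec_b))  # ~1.0 for identical vectors
print(cosine_similarity(vec_a, vec_c))  # lower for dissimilar vectors
```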
OpenAI's DALL-E models can generate and manipulate images:
```python
# Generate an image
prompt = "A surrealist painting of a cat playing chess with a robot"
image_response = openai_sdk.create_image(prompt, n=1, size="1024x1024")

# Edit an image
with open("image.png", "rb") as image_file, open("mask.png", "rb") as mask_file:
    edit_response = openai_sdk.create_edit(image_file, mask_file, "Add a hat to the person")

# Create image variations
with open("image.png", "rb") as image_file:
    variation_response = openai_sdk.create_variation(image_file, n=3)
```
OpenAI provides audio transcription and translation capabilities:
```python
# Transcribe audio
with open("audio.mp3", "rb") as audio_file:
    transcription = openai_sdk.create_transcription(audio_file, "whisper-1")

# Translate audio
with open("foreign_audio.mp3", "rb") as audio_file:
    translation = openai_sdk.create_translation(audio_file, "whisper-1")

# Generate speech
speech_audio = openai_sdk.create_speech("tts-1", "Hello, world!", "alloy")
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(speech_audio)
```
Use OpenAI's content moderation:
```python
moderation_result = openai_sdk.create_moderation("Text to be moderated")
print(moderation_result)
```
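The moderation response lists, per input, a `flagged` boolean plus per-category booleans and scores. A sketch of inspecting such a result (the `sample_result` dict below is hand-written to mirror the OpenAI moderation response shape, not real API output):

```python
# Hand-written stand-in for a moderation response.
sample_result = {
    "results": [
        {
            "flagged": True,
            "categories": {"harassment": True, "violence": False},
            "category_scores": {"harassment": 0.91, "violence": 0.02},
        }
    ]
}

# In real use: result = moderation_result["results"][0]
result = sample_result["results"][0]
if result["flagged"]:
    triggered = [name for name, hit in result["categories"].items() if hit]
    print("Flagged categories:", triggered)
```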
OpenAI allows file uploads for certain use cases:
```python
# Upload a file
with open("data.jsonl", "rb") as file:
    file_response = openai_sdk.upload_file(file, purpose="fine-tune")

# List files
files = openai_sdk.list_files()

# Get file info
file_info = openai_sdk.get_file_info(file_response['id'])

# Get file content
file_content = openai_sdk.get_file_content(file_response['id'])

# Delete a file
openai_sdk.delete_file(file_response['id'])
```
Create and manage fine-tuning jobs:
```python
# Create a fine-tuning job
job = openai_sdk.create_fine_tuning_job(training_file="file-abc123", model="gpt-3.5-turbo")

# List fine-tuning jobs
jobs = openai_sdk.list_fine_tuning_jobs()

# Get fine-tuning job info
job_info = openai_sdk.get_fine_tuning_job(job['id'])

# Cancel a fine-tuning job
openai_sdk.cancel_fine_tuning_job(job['id'])

# List fine-tuning events
events = openai_sdk.list_fine_tuning_events(job['id'])
```
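Fine-tuning chat models expects a JSONL training file in which each line is a JSON object with a `messages` array. A sketch of writing such a file with the standard library (the example conversations are invented):

```python
import json

# Each training example is one conversation in the chat message format.
examples = [
    {"messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]},
    {"messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris"},
    ]},
]

with open("data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")  # one JSON object per line

# The file can then be uploaded with purpose="fine-tune" as shown earlier.
```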
The SDK raises `InvokeError` or one of its subclasses for various error conditions. Always wrap your API calls in try-except blocks:
```python
from llm_onesdk.utils.error_handler import (
    InvokeError, InvokeConnectionError, InvokeServerUnavailableError,
    InvokeRateLimitError, InvokeAuthorizationError, InvokeBadRequestError
)

try:
    response = openai_sdk.generate(model, messages)
except InvokeConnectionError as e:
    print(f"Connection error: {str(e)}")
except InvokeServerUnavailableError as e:
    print(f"Server unavailable: {str(e)}")
except InvokeRateLimitError as e:
    print(f"Rate limit exceeded: {str(e)}")
except InvokeAuthorizationError as e:
    print(f"Authorization error: {str(e)}")
except InvokeBadRequestError as e:
    print(f"Bad request: {str(e)}")
except InvokeError as e:
    print(f"An error occurred: {str(e)}")
```
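Note that the order of the except clauses matters: the subclasses must be caught before the base `InvokeError`, or the base clause will shadow them. The behaviour can be checked with stand-in classes that mirror the SDK's hierarchy (these definitions are illustrative, not the SDK's own):

```python
class InvokeError(Exception):
    # Stand-in for the SDK's base error class.
    pass

class InvokeRateLimitError(InvokeError):
    # Stand-in subclass: a rate-limit error is also an InvokeError.
    pass

def classify(exc):
    # Specific handlers first, generic handler last.
    try:
        raise exc
    except InvokeRateLimitError:
        return "rate-limited: back off and retry"
    except InvokeError:
        return "generic invoke error"

print(classify(InvokeRateLimitError()))  # rate-limited: back off and retry
print(classify(InvokeError()))           # generic invoke error
```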
The SDK uses Python's logging module. To enable debug logging:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
This will print detailed information about API requests and responses, which can be helpful for troubleshooting.
- Choose the appropriate model for your specific task (e.g., GPT-4 for complex reasoning, Ada for simpler tasks).
- Implement proper error handling and retries for production applications.
- Be mindful of rate limits and implement appropriate backoff strategies.
- Keep your API key secure and never expose it in client-side code.
- Use environment variables for API keys in production environments.
- When working with large responses, use the streaming API to improve responsiveness.
- For file operations, ensure you're using the correct 'purpose' parameter.
- When fine-tuning models, carefully prepare your training data for best results.
- For image and audio tasks, pay attention to file format and size requirements.
- Regularly update the SDK to benefit from the latest features and bug fixes.
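For the rate-limit point above, a common backoff strategy is exponential delay between retries. A minimal sketch (the `call` parameter stands in for any SDK method; `InvokeRateLimitError` here is a stand-in class, and the delays are shortened for illustration):

```python
import time

class InvokeRateLimitError(Exception):
    # Stand-in for llm_onesdk's rate-limit error class.
    pass

def with_backoff(call, max_retries=5, base_delay=0.01, sleep=time.sleep):
    """Retry `call` with exponentially growing delays on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except InvokeRateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

# Demo: a call that fails twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise InvokeRateLimitError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky_call)
print(result)  # ok
```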
To use a proxy for API calls:
```python
openai_sdk.set_proxy("http://your-proxy-url:port")
```
For more detailed information about available models, specific features, and API updates, please refer to the official OpenAI API documentation.