Getting Started with Azure OpenAI and LangChain Cookbook

Overview

This notebook is a guide to setting up and using Azure OpenAI with the LangChain framework in Python. It walks through obtaining the necessary credentials, creating models, and working through practical examples of features such as prompting, chaining, and embeddings. It is aimed at developers who want to integrate Azure OpenAI capabilities with LangChain.

Prerequisites

- Python 3.7+
- Jupyter Notebook or JupyterLab
- An Azure account with access to Azure OpenAI resources
- Basic familiarity with Python and virtual environments

Contents

1. Obtaining Keys and Endpoints

Instructions on how to sign up for Azure and set up OpenAI resources. Information on storing credentials in a .env file for secure access.
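
As a rough illustration, a .env file for this notebook might look like the following. The variable names are the ones langchain-openai typically reads from the environment; the values are placeholders for your own resource details.

```
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
OPENAI_API_VERSION=2024-06-01
```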

2. Setting Environment Variables

Guidance on setting up environment variables for API keys, endpoints, and model details. Options for using dotenv to manage environment variables.
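
A minimal sketch of loading these variables with python-dotenv, assuming the .env file above is in the working directory and the python-dotenv package is installed:

```python
import os
from dotenv import load_dotenv

# Read key/value pairs from a local .env file into the process environment
load_dotenv()

# langchain-openai picks these up automatically; reading them here just confirms they are set
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
api_key = os.environ["AZURE_OPENAI_API_KEY"]
api_version = os.environ["OPENAI_API_VERSION"]
```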

3. Using Azure OpenAI with LangChain

Examples of creating and configuring an AzureChatOpenAI model. Utilizing SystemMessage and HumanMessage for structured input.
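
A minimal sketch of creating the chat model and sending structured messages. The deployment name is a placeholder; the endpoint, key, and API version are assumed to come from the environment variables above.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name

messages = [
    SystemMessage(content="You are a helpful assistant that answers concisely."),
    HumanMessage(content="What is Azure OpenAI?"),
]

response = model.invoke(messages)
print(response.content)
```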

4. Prompt Engineering

How to create prompts with PromptTemplate and ChatPromptTemplate. Using FewShotPromptTemplate to provide examples for better model responses.
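
A rough sketch of the three template types; the example data is invented for illustration:

```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotPromptTemplate,
    PromptTemplate,
)

# Single-string template with named placeholders
prompt = PromptTemplate.from_template("Translate the following text into {language}: {text}")

# Chat-style template with explicit system/human roles
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a translator who translates into {language}."),
    ("human", "{text}"),
])

# Few-shot template: worked examples are prepended before the real input
example_prompt = PromptTemplate.from_template("English: {english}\nFrench: {french}")
few_shot = FewShotPromptTemplate(
    examples=[
        {"english": "cat", "french": "chat"},
        {"english": "dog", "french": "chien"},
    ],
    example_prompt=example_prompt,
    suffix="English: {input}\nFrench:",
    input_variables=["input"],
)
print(few_shot.format(input="house"))
```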

5. Working with Embeddings

Using AzureOpenAIEmbeddings to transform text into vector representations. Performing vector-based search using cosine similarity. Organizing embeddings with pandas DataFrames.
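
A minimal sketch of embedding a few documents, keeping the vectors in a pandas DataFrame, and ranking them against a query with cosine similarity. The deployment name and sample texts are placeholders.

```python
import numpy as np
import pandas as pd
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-ada-002")  # placeholder deployment

docs = [
    "Azure OpenAI provides hosted language models.",
    "LangChain chains prompts, models and parsers together.",
    "The cat sat on the mat.",
]

# Keep the texts and their vectors side by side in a DataFrame
df = pd.DataFrame({"text": docs, "vector": embeddings.embed_documents(docs)})

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vector = embeddings.embed_query("Which service hosts language models?")
df["similarity"] = df["vector"].apply(lambda v: cosine_similarity(v, query_vector))

print(df.sort_values("similarity", ascending=False)[["text", "similarity"]])
```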

6. Streaming and Chaining

Streaming chat responses for real-time interaction. Chaining LangChain components for seamless model integration using Runnable.
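
A minimal sketch of composing a chain with the Runnable `|` operator and streaming its output as it is generated; the deployment name is a placeholder:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])

# prompt | model | parser composes the components into a single Runnable
chain = prompt | model | StrOutputParser()

# stream() yields chunks as they arrive instead of waiting for the full answer
for chunk in chain.stream({"question": "Explain embeddings in one sentence."}):
    print(chunk, end="", flush=True)
```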

7. Tools and Structured Outputs

Defining custom tools for structured responses using Pydantic models. Generating JSON-based outputs for API integration. Examples of using images and audio data with models.
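
A rough sketch of binding a Pydantic schema to the model so it returns validated, JSON-ready objects. The schema and deployment name are invented for illustration.

```python
from pydantic import BaseModel, Field
from langchain_openai import AzureChatOpenAI

class WeatherReport(BaseModel):
    """Structured summary of a weather description."""
    city: str = Field(description="City the report refers to")
    temperature_c: float = Field(description="Temperature in degrees Celsius")
    conditions: str = Field(description="Short description, e.g. 'light rain'")

model = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name
structured_model = model.with_structured_output(WeatherReport)

report = structured_model.invoke("It is 18 degrees and raining lightly in Wellington.")
print(report.model_dump())  # plain dict, ready to serialize as JSON for an API
```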

8. Handling Images

Retrieve images using HTTP requests and convert them into base64 format for compatibility with the API. Feed base64-encoded images to models along with instructions for analysis, such as object detection or description generation. Example: Converting an image URL into base64, sending it to the model, and interpreting the model's output.
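
A minimal sketch of that workflow; the image URL and deployment name are placeholders, and the deployment must be vision-capable:

```python
import base64
import requests
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder vision-capable deployment

# Download the image and encode it as base64 so it can travel inside the request body
image_url = "https://example.com/sample.jpg"  # placeholder URL
image_b64 = base64.b64encode(requests.get(image_url, timeout=30).content).decode("utf-8")

message = HumanMessage(content=[
    {"type": "text", "text": "Describe the objects visible in this image."},
    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
])

print(model.invoke([message]).content)
```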

9. Audio Processing with Whisper API

Transcribing audio using the Whisper API and integrating transcriptions with LangChain models. Example: Converting audio files into text and using the transcribed text as input for AzureChatOpenAI. Integration with langchain_community for easy access to the Whisper transcription capabilities. Workflow for processing audio data, converting it into text, and using it for further analysis or conversation.
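
As a rough sketch of the overall workflow (transcribe the audio, then chat over the transcript), the snippet below calls a Whisper deployment through the openai SDK directly rather than the langchain_community integration used in the notebook. The file name and deployment names are placeholders.

```python
import os
from openai import AzureOpenAI
from langchain_openai import AzureChatOpenAI

# Client for the transcription call, configured from the environment variables set earlier
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["OPENAI_API_VERSION"],
)

# Transcribe a local audio file with a Whisper deployment
with open("meeting.mp3", "rb") as audio_file:  # placeholder audio file
    transcript = client.audio.transcriptions.create(
        model="whisper",  # placeholder: the name of your Whisper deployment
        file=audio_file,
    )

# Feed the transcribed text into a chat model for further analysis
chat = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name
summary = chat.invoke(f"Summarise this transcript in three bullet points:\n\n{transcript.text}")
print(summary.content)
```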

Example Use Cases

- Translation tasks using AzureChatOpenAI.
- Building interactive chatbots with role-based responses.
- Advanced search queries with embeddings to find relevant content.
- Transcribing audio with the Whisper API and integrating the text into chat models.
- Image analysis and interpretation using base64-encoded image data.

Additional Information

- Monitoring Usage: Tips for tracking token usage and managing costs with get_openai_callback() (see the sketch below).
- Rate Limit Management: Recommendations for handling rate limits with delays between API requests.
- Further Reading: Explore the LangChain Documentation (https://python.langchain.com/docs/introduction/) for more advanced features.
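
A minimal sketch of tracking token usage and estimated cost with get_openai_callback; the deployment name is a placeholder:

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name

# Everything invoked inside the context manager is counted by the callback
with get_openai_callback() as cb:
    model.invoke("Write a haiku about vector search.")

print(f"Prompt tokens:     {cb.prompt_tokens}")
print(f"Completion tokens: {cb.completion_tokens}")
print(f"Total cost (USD):  {cb.total_cost}")
```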

Acknowledgments

- Azure OpenAI
- LangChain
