diff --git a/Instructions/Exercises/01-get-started-azure-openai.md b/Instructions/Exercises/01-get-started-azure-openai.md index 0254dcdb..3ff5e973 100644 --- a/Instructions/Exercises/01-get-started-azure-openai.md +++ b/Instructions/Exercises/01-get-started-azure-openai.md @@ -5,7 +5,7 @@ lab: # Get started with Azure OpenAI service -Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure OpenAI Studio to deploy and explore generative AI models. +Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure AI Studio to deploy and explore generative AI models. In the scenario for this exercise, you will perform the role of a software developer who has been tasked to implement an AI agent that can use generative AI to help a marketing organization improve its effectiveness at reaching customers and advertising new products. The techniques used in the exercise can be applied to any scenario where an organization wants to use generative AI models to help employees be more effective and productive. @@ -39,37 +39,33 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure ## Deploy a model -Azure OpenAI service provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. 
You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model. +Azure provides a web-based portal named **Azure AI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model. -> **Note**: As you use Azure OpenAI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise. +> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise. -1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab. - - After the new tab opens, you can close any banner notifications for new preview services that are displayed at the top of the Azure OpenAI Studio page. - -1. In Azure OpenAI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: +1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**. +1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. 
If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: - **Deployment name**: *A unique name of your choice* - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)* - - **Model version**: Auto-update to default + - **Model version**: *Use default version* - **Deployment type**: Standard - **Tokens per minute rate limit**: 5K\* - **Content filter**: Default - - **Enable dynamic quota**: Enabled + - **Enable dynamic quota**: Disabled > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription. ## Use the Chat playground -Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure OpenAI Studio provides a chatbot interface for GPT 3.5 and higher models. +Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure AI Studio provides a chatbot interface for GPT 3.5 and higher models. > **Note:** The *Chat* playground uses the *ChatCompletions* API rather than the older *Completions* API that is used by the *Completions* playground. The Completions playground is provided for compatibility with older models. -1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution): - - **Setup** - used to set the context for the model's responses. +1. In the **Playground** section, select the **Chat** page. 
The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution): + - **Configuration** - used to select your deployment, define the system message, and set parameters for interacting with your deployment. - **Chat session** - used to submit chat messages and view responses. - - **Configuration** - used to configure settings for the model deployment. -1. In the **Configuration** panel, ensure that your gpt-35-turbo-16k model deployment is selected. -1. In the **Setup** panel, review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses; setting expectations about how an AI agent based on the model should interact with the user. +1. Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected. +1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model and provides context for the model's responses, setting expectations about how an AI agent based on the model should interact with the user. 1. In the **Chat session** panel, enter the user query `How can I use generative AI to help me market a new product?` > **Note**: You may receive a response that the API deployment is not yet ready. If so, wait for a few minutes and try again. @@ -84,7 +80,7 @@ Now that you've deployed a model, you can use it to generate responses based on So far, you've engaged in a chat conversation with your model based on the default system message. You can customize the system setup to have more control over the kinds of responses generated by your model. -1. 
In the **Setup** panel, under **Use a system message template**, select the **Marketing Writing Assistant** template and confirm that you want to update the system message. +1. In the main toolbar, select **Prompt samples** and use the **Marketing Writing Assistant** prompt template. 1. Review the new system message, which describes how an AI agent should use the model to respond. 1. In the **Chat session** panel, enter the user query `Create an advertisement for a new scrubbing brush`. 1. Review the response, which should include advertising copy for a scrubbing brush. The copy may be quite extensive and creative. @@ -96,7 +92,7 @@ So far, you've engaged in a chat conversation with your model based on the defau The response should now be more useful, but to have even more control over the output from the model, you can provide one or more *few-shot* examples on which responses should be based. -1. In the **Setup** panel, under **Examples**, select **Add**. Then type the following message and response in the designated boxes: +1. Under the **System message** text box, expand the dropdown for **Add section** and select **Examples**. Then type the following message and response in the designated boxes: **User**: @@ -139,7 +135,7 @@ You've explored how the system message, examples, and prompts can help refine th ## Deploy your model to a web app -Now that you've explored some of the capabilities of a generative AI model in the Azure OpenAI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model. +Now that you've explored some of the capabilities of a generative AI model in the Azure AI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model. 1. At the top right of the **Chat** playground page, in the **Deploy to** menu, select **A new web app**. 1. 
In the **Deploy to a web app** dialog box, create a new web app with the following settings: @@ -162,7 +158,7 @@ Now that you've explored some of the capabilities of a generative AI model in th > **Note**: You deployed the *model* to a web app, but this deployment doesn't include the system settings and parameters you set in the playground; so the response may not reflect the examples you specified in the playground. In a real scenario, you would add logic to your application to modify the prompt so that it includes the appropriate contextual data for the kinds of response you want to generate. This kind of customization is beyond the scope of this introductory-level exercise, but you can learn about prompt engineering techniques and Azure OpenAI APIs in other exercises and product documentation. -1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure OpenAI Studio. +1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure AI Studio. ## Clean up diff --git a/Instructions/Exercises/02-natural-language-azure-openai.md b/Instructions/Exercises/02-natural-language-azure-openai.md index aca8f3c1..889a9326 100644 --- a/Instructions/Exercises/02-natural-language-azure-openai.md +++ b/Instructions/Exercises/02-natural-language-azure-openai.md @@ -39,17 +39,19 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure ## Deploy a model -Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model. +Azure provides a web-based portal named **Azure AI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model. -1. 
On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab. -2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: +> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise. + +1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**. +1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: - **Deployment name**: *A unique name of your choice* - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)* - - **Model version**: Auto-update to default + - **Model version**: *Use default version* - **Deployment type**: Standard - **Tokens per minute rate limit**: 5K\* - **Content filter**: Default - - **Enable dynamic quota**: Enabled + - **Enable dynamic quota**: Disabled > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription. @@ -95,7 +97,7 @@ Applications for both C# and Python have been provided. Both apps feature the sa 4. Update the configuration values to include: - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal) - - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio). 
+ - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio). 5. Save the configuration file. ## Add code to use the Azure OpenAI service diff --git a/Instructions/Exercises/03-prompt-engineering.md b/Instructions/Exercises/03-prompt-engineering.md index 02f6bd49..7c2a4bcc 100644 --- a/Instructions/Exercises/03-prompt-engineering.md +++ b/Instructions/Exercises/03-prompt-engineering.md @@ -39,17 +39,19 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure ## Deploy a model -Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model. +Azure provides a web-based portal named **Azure AI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model. -1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab. -2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: +> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise. + +1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**. +1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. 
If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: - **Deployment name**: *A unique name of your choice* - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)* - - **Model version**: Auto-update to default + - **Model version**: *Use default version* - **Deployment type**: Standard - **Tokens per minute rate limit**: 5K\* - **Content filter**: Default - - **Enable dynamic quota**: Enabled + - **Enable dynamic quota**: Disabled > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription. @@ -57,12 +59,11 @@ Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you Let's start by exploring some prompt engineering techniques in the Chat playground. -1. In **Azure OpenAI Studio** at `https://oai.azure.com`, in the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main sections: - - **Setup** - used to set the context for the model's responses. +1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution): + - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment. - **Chat session** - used to submit chat messages and view responses. - - **Configuration** - used to configure settings for the model deployment. -2. In the **Configuration** section, ensure that your model deployment is selected. -3. In the **Setup** area, select the default system message template to set the context for the chat session. The default system message is *You are an AI assistant that helps people find information*. +2. 
Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected. +1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.* 4. In the **Chat session**, submit the following query: ```prompt @@ -79,9 +80,9 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou The response provides a description of the article. However, suppose you want a more specific format for article categorization. -5. In the **Setup** section change the system message to `You are a news aggregator that categorizes news articles.` +5. In the **Configuration** section change the system message to `You are a news aggregator that categorizes news articles.` -6. Under the new system message, in the **Examples** section, select the **Add** button. Then add the following example. +6. Under the new system message, select the **Add section** button, and choose **Examples**. Then add the following example. **User:** @@ -126,7 +127,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou Entertainment ``` -8. Use the **Apply changes** button at the top of the **Setup** section to update the system message. +8. Use the **Apply changes** button at the top of the **Configuration** section to save your changes. 9. In the **Chat session** section, resubmit the following prompt: @@ -144,7 +145,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou The combination of a more specific system message and some examples of expected queries and responses results in a consistent format for the results. -10. In the **Setup** section, change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes. +10. 
Change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes. 11. In the **Chat session** section, submit the following prompt: @@ -209,7 +210,7 @@ Applications for both C# and Python have been provided, and both apps feature th 4. Update the configuration values to include: - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal) - - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio). + - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio). 5. Save the configuration file. ## Add code to use the Azure OpenAI service @@ -300,7 +301,7 @@ Now you're ready to use the Azure OpenAI SDK to consume your deployed model. Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice the only difference between the different options is the content of the prompt, all other parameters (such as token count and temperature) remain the same for each request. -1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interations, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message. +1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interactions, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message. 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application. 
- **C#**: `dotnet run` diff --git a/Instructions/Exercises/04-code-generation.md b/Instructions/Exercises/04-code-generation.md index 2a347e1b..7f91508b 100644 --- a/Instructions/Exercises/04-code-generation.md +++ b/Instructions/Exercises/04-code-generation.md @@ -39,17 +39,19 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure ## Deploy a model -Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model. +Azure provides a web-based portal named **Azure AI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model. -1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab. -2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: +> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise. + +1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**. +1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. 
If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: - **Deployment name**: *A unique name of your choice* - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)* - - **Model version**: Auto-update to default + - **Model version**: *Use default version* - **Deployment type**: Standard - **Tokens per minute rate limit**: 5K\* - **Content filter**: Default - - **Enable dynamic quota**: Enabled + - **Enable dynamic quota**: Disabled > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription. @@ -57,13 +59,12 @@ Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you Before using in your app, examine how Azure OpenAI can generate and explain code in the chat playground. -1. In the **Azure OpenAI Studio** at `https://oai.azure.com`, in the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main sections: - - **Setup** - used to set the context for the model's responses. +1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution): + - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment. - **Chat session** - used to submit chat messages and view responses. - - **Configuration** - used to configure settings for the model deployment. -2. In the **Configuration** section, ensure that your model deployment is selected. -3. In the **Setup** area, set the system message to `You are a programming assistant helping write code` and apply the changes. -4. In the **Chat session**, submit the following query: +1. 
Under **Deployments**, ensure that your model deployment is selected. +1. In the **System message** area, set the system message to `You are a programming assistant helping write code` and apply the changes. +1. In the **Chat session**, submit the following query: ``` Write a function in python that takes a character and a string as input, and returns how many times the character appears in the string @@ -71,11 +72,11 @@ Before using in your app, examine how Azure OpenAI can generate and explain code The model will likely respond with a function, with some explanation of what the function does and how to call it. -5. Next, send the prompt `Do the same thing, but this time write it in C#`. +1. Next, send the prompt `Do the same thing, but this time write it in C#`. The model likely responded very similarly as the first time, but this time coding in C#. You can ask it again for a different language of your choice, or a function to complete a different task such as reversing the input string. -6. Next, let's explore using AI to understand code. Submit the following prompt as the user message. +1. Next, let's explore using AI to understand code. Submit the following prompt as the user message. ``` What does the following function do? @@ -153,7 +154,7 @@ Applications for both C# and Python have been provided, as well as a sample text 4. Update the configuration values to include: - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal) - - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio). + - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio). 5. Save the configuration file. 
## Add code to use your Azure OpenAI service model diff --git a/Instructions/Exercises/05-generate-images.md b/Instructions/Exercises/05-generate-images.md index 699e487f..ea5d4826 100644 --- a/Instructions/Exercises/05-generate-images.md +++ b/Instructions/Exercises/05-generate-images.md @@ -16,7 +16,7 @@ This exercise will take approximately **25** minutes. Before you can use Azure OpenAI to generate images, you must provision an Azure OpenAI resource in your Azure subscription. The resource must be in a region where DALL-E models are supported. 1. Sign into the **Azure portal** at `https://portal.azure.com`. -2. Create an **Azure OpenAI** resource with the following settings: +1. Create an **Azure OpenAI** resource with the following settings: - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service, including DALL-E* - **Resource group**: *Choose or create a resource group* - **Region**: *Choose either **East US** or **Sweden Central***\* @@ -25,21 +25,29 @@ Before you can use Azure OpenAI to generate images, you must provision an Azure > \* DALL-E 3 models are only available in Azure OpenAI service resources in the **East US** and **Sweden Central** regions. -3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal. +1. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal. +1. On the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**. +1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. 
If you don't already have one for DALL-E 3, create a new deployment of the **dall-e-3** model with the following settings: + - **Deployment name**: dalle3 + - **Model version**: *Use default version* + - **Deployment type**: Standard + - **Capacity units**: 1K + - **Content filter**: Default + - **Enable dynamic quota**: Disabled +1. Once deployed, navigate back to the **Images** page in the left pane. -## Explore image-generation in the DALL-E playground +## Explore image generation in the Images playground -You can use the DALL-E playground in **Azure OpenAI Studio** to experiment with image-generation. +You can use the Images playground in **Azure AI Studio** to experiment with image generation. -1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, use the **Explore** button to open Azure OpenAI Studio in a new browser tab. Alternatively, navigate to [Azure OpenAI Studio](https://oai.azure.com) directly at `https://oai.azure.com`. -2. In the **Playground** section, select the **DALL-E** playground. A deployment of the DALL-E model named *Dalle3* will be created automatically. -3. In the **Prompt** box, enter a description of an image you'd like to generate. For example, `An elephant on a skateboard` Then select **Generate** and view the image that is generated. +1. In the **Images playground** section, your deployment of DALL-E 3 should be automatically selected. If not, select it from the deployment dropdown. +1. In the **Prompt** box, enter a description of an image you'd like to generate. For example, `An elephant on a skateboard`. Then select **Generate** and view the image that is generated. - ![The DALL-E Playground in Azure OpenAI Studio with a generated image.](../media/dall-e-playground.png) + ![The Images Playground in Azure AI Studio with a generated image.](../media/images-playground.png) -4. Modify the prompt to provide a more specific description. For example `An elephant on a skateboard in the style of Picasso`. 
Then generate the new image and review the results. +1. Modify the prompt to provide a more specific description. For example `An elephant on a skateboard in the style of Picasso`. Then generate the new image and review the results. - ![The DALL-E Playground in Azure OpenAI Studio with two generated images.](../media/dall-e-playground-new-image.png) + ![The Images Playground in Azure AI Studio with two generated images.](../media/images-playground-new-style.png) ## Use the REST API to generate images @@ -87,6 +95,8 @@ Now you're ready to explore the code used to call the REST API and generate an i - The code makes an https request to the endpoint for your service, including the key for your service in the header. Both of these values are obtained from the configuration file. - The request includes some parameters, including the prompt from on the image should be based, the number of images to generate, and the size of the generated image(s). - The response includes a revised prompt that the DALL-E model extrapolated from the user-provided prompt to make it more descriptive, and the URL for the generated image. + + > **Important**: If you named your deployment anything other than the recommended *dalle3*, you'll need to update the code to use the name of your deployment. ### Run the app diff --git a/Instructions/Exercises/06-use-own-data.md b/Instructions/Exercises/06-use-own-data.md index 2009a90c..fba1eb2d 100644 --- a/Instructions/Exercises/06-use-own-data.md +++ b/Instructions/Exercises/06-use-own-data.md @@ -7,13 +7,17 @@ lab: The Azure OpenAI Service enables you to use your own data with the intelligence of the underlying LLM. You can limit the model to only use your data for pertinent topics, or blend it with results from the pre-trained model. -In the scenario for this exercise, you will perform the role of a software developer working for Margie's Travel Agency. You will explore how to use generative AI to make coding tasks easier and more efficient. 
The techniques used in the exercise can be applied to other code files, programming languages, and use cases. +In the scenario for this exercise, you will perform the role of a software developer working for Margie's Travel Agency. You will explore how to use Azure AI Search to index your own data and use it with Azure OpenAI to augment prompts. -This exercise will take approximately **20** minutes. +This exercise will take approximately **30** minutes. -## Provision an Azure OpenAI resource +## Provision Azure resources -If you don't already have one, provision an Azure OpenAI resource in your Azure subscription. +To complete this exercise, you'll need: + +- An Azure OpenAI resource. +- An Azure AI Search resource. +- An Azure Storage Account resource. 1. Sign into the **Azure portal** at `https://portal.azure.com`. 2. Create an **Azure OpenAI** resource with the following settings: @@ -35,111 +39,86 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region. -3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal. - -## Deploy a model - -Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model. - -1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab. -2. 
In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings: - - **Deployment name**: *A unique name of your choice* - - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)* - - **Model version**: Auto-update to default - - **Deployment type**: Standard - - **Tokens per minute rate limit**: 5K\* - - **Content filter**: Default - - **Enable dynamic quota**: Enabled - - > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription. - -## Observe normal chat behavior without adding your own data - -Before connecting Azure OpenAI to your data, let's first observe how the base model responds to queries without any grounding data. - -1. In **Azure OpenAI Studio** at `https://oai.azure.com`, in the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main sections: - - **Setup** - used to set the context for the model's responses. - - **Chat session** - used to submit chat messages and view responses. - - **Configuration** - used to configure settings for the model deployment. -2. In the **Configuration** section, ensure that your model deployment is selected. -3. In the **Setup** area, select the default system message template to set the context for the chat session. The default system message is *You are an AI assistant that helps people find information*. -4. In the **Chat session**, submit the following queries, and review the responses: - - ```prompt - I'd like to take a trip to New York. Where should I stay? - ``` - - ```prompt - What are some facts about New York? - ``` - - Try similar questions about tourism and places to stay for other locations that will be included in our grounding data, such as London, or San Francisco. 
You'll likely get complete responses about areas or neighborhoods, and some general facts about the city. - -## Connect your data in the chat playground - -Now you'll add some data for a fictional travel agent company named *Margie's Travel*. Then you'll see how the Azure OpenAI model responds when using the brochures from Margie's Travel as grounding data. - -1. In a new browser tab, download an archive of brochure data from `https://aka.ms/own-data-brochures`. Extract the brochures to a folder on your PC. -1. In Azure OpenAI Studio, in the **Chat** playground, in the **Setup** section, select **Add your data**. -1. Select **Add a data source** and choose **Upload files**. -1. You'll need to create a storage account and Azure AI Search resource. Under the dropdown for the storage resource, select **Create a new Azure Blob storage resource**, and create a storage account with the following settings. Anything not specified leave as the default. - - - **Subscription**: *Your Azure subscription* - - **Resource group**: *Select the same resource group as your Azure OpenAI resource* - - **Storage account name**: *Enter a unique name* - - **Region**: *Select the same region as your Azure OpenAI resource* - - **Redundancy**: Locally-redundant storage (LRS) - -1. While the storage account resource is being created, return to Azure OpenAI Studio and select **Create a new Azure AI Search resource** with the following settings. Anything not specified leave as the default. - - - **Subscription**: *Your Azure subscription* - - **Resource group**: *Select the same resource group as your Azure OpenAI resource* - - **Service name**: *Enter a unique name* - - **Location**: *Select the same location as your Azure OpenAI resource* +3. 
While the Azure OpenAI resource is being provisioned, create an **Azure AI Search** resource with the following settings: + - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource* + - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource* + - **Service name**: *A unique name of your choice* + - **Location**: *The region in which you provisioned your Azure OpenAI resource* - **Pricing tier**: Basic +4. While the Azure AI Search resource is being provisioned, create a **Storage account** resource with the following settings: + - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource* + - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource* + - **Storage account name**: *A unique name of your choice* + - **Region**: *The region in which you provisioned your Azure OpenAI resource* + - **Performance**: Standard + - **Redundancy**: Locally redundant storage (LRS) +5. After all three of the resources have been successfully deployed in your Azure subscription, review them in the Azure portal and gather the following information (which you'll need later in the exercise): + - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal) + - The endpoint for your Azure AI Search service (the **Url** value on the overview page for your Azure AI Search resource in the Azure portal). + - A **primary admin key** for your Azure AI Search resource (available in the **Keys** page for your Azure AI Search resource in the Azure portal). -1. Wait until your search resource has been deployed, then switch back to the Azure AI Studio. -1. In the **Add data**, enter the following values for your data source, then select **Next**. 
- - - **Select data source**: Upload files - - **Subscription**: Your Azure subscription - - **Select Azure Blob storage resource**: *Use the **Refresh** button to repopulate the list, and then choose the storage resource you created* - - Turn on CORS when prompted - - **Select Azure AI Search resource**: *Use the **Refresh** button to repopulate the list, and then choose the search resource you created* - - **Enter the index name**: `margiestravel` - - **Add vector search to this search resource**: unchecked - - **I acknowledge that connecting to an Azure AI Search account will incur usage to my account** : checked - -1. On the **Upload files** page, upload the PDFs you downloaded, and then select **Next**. -1. On the **Data management** page select the **Keyword** search type from the drop-down, and then select **Next**. -1. On the **Data Connection** page select **API Key**. -1. On the **Review and finish** page select **Save and close**, which will add your data. This may take a few minutes, during which you need to leave your window open. Once complete, you'll see the data source, search resource, and index specified in the **Setup** section. - - > **Tip**: Occasionally the connection between your new search index and Azure OpenAI Studio takes too long. If you've waited for a few minutes and it still hasn't connected, check your AI Search resources in Azure portal. If you see the completed index, you can disconnect the data connection in Azure OpenAI Studio and re-add it by specifying an Azure AI Search data source and selecting your new index. - -## Chat with a model grounded in your data +## Upload your data -Now that you've added your data, ask the same questions as you did previously, and see how the response differs. +You're going to ground the prompts you use with a generative AI model by using your own data. In this exercise, the data consists of a collection of travel brochures from the fictional *Margies Travel* company. 
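If you prefer to script the upload instead of using the portal's **Storage browser**, a minimal sketch with the `azure-storage-blob` package looks like the following. The connection string and local folder name are placeholders for your own values; the helper simply gathers the .pdf brochures to upload to the `margies-travel` container.

```python
from pathlib import Path

def local_brochures(folder):
    """Return the .pdf brochure files in a folder, sorted for a predictable order."""
    return sorted(Path(folder).glob("*.pdf"))

if __name__ == "__main__":
    # Requires `pip install azure-storage-blob`. The connection string and
    # folder name below are placeholders for your own values.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<STORAGE_CONNECTION_STRING>")
    container = service.get_container_client("margies-travel")
    for pdf in local_brochures("brochures"):
        with pdf.open("rb") as data:
            container.upload_blob(name=pdf.name, data=data, overwrite=True)
```

Either route works; the portal steps below achieve the same result without any code.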
-```prompt
-I'd like to take a trip to New York. Where should I stay?
-```
+1. In a new browser tab, download an archive of brochure data from `https://aka.ms/own-data-brochures`. Extract the brochures to a folder on your PC.
+1. In the Azure portal, navigate to your storage account and view the **Storage browser** page.
+1. Select **Blob containers** and then add a new container named `margies-travel`.
+1. Select the **margies-travel** container, and then upload the .pdf brochures you extracted previously to the root folder of the blob container.

-```prompt
-What are some facts about New York?
-```
+## Deploy AI models

-You'll notice a very different response this time, with specifics about certain hotels and a mention of Margie's Travel, as well as references to where the information provided came from. If you open the PDF reference listed in the response, you'll see the same hotels as the model provided.
+You're going to use two AI models in this exercise:

-Try asking it about other cities included in the grounding data, which are Dubai, Las Vegas, London, and San Francisco.
+- A text embedding model to *vectorize* the text in the brochures so it can be indexed efficiently for use in grounding prompts.
+- A GPT model that your application can use to generate responses to prompts that are grounded in your data.

-> **Note**: **Add your data** is still in preview and might not always behave as expected for this feature, such as giving the incorrect reference for a city not included in the grounding data.
+To deploy these models, you'll use AI Studio.

-## Connect your app to your own data
+1. In the Azure portal, navigate to your Azure OpenAI resource. Then use the link to open your resource in **Azure AI Studio**.
+1. In Azure AI Studio, on the **Deployments** page, view your existing model deployments. 
Then create a new base model deployment of the **text-embedding-ada-002** model with the following settings:
+    - **Deployment name**: text-embedding-ada-002
+    - **Model version**: *The default version*
+    - **Deployment type**: Standard
+    - **Tokens per minute rate limit**: 5K\*
+    - **Content filter**: Default
+    - **Enable dynamic quota**: Enabled
+1. After the text embedding model has been deployed, return to the **Deployments** page and create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
+    - **Deployment name**: gpt-35-turbo-16k
+    - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
+    - **Model version**: *The default version*
+    - **Deployment type**: Standard
+    - **Tokens per minute rate limit**: 5K\*
+    - **Content filter**: Default
+    - **Enable dynamic quota**: Enabled

    > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.

-Next, let's explore how to connect your app to use your own data.

-### Prepare to develop an app in Visual Studio Code
+## Create an index
+
+To make it easy to use your own data in a prompt, you'll index it using Azure AI Search. You'll use the text embedding model you deployed previously during the indexing process to *vectorize* the text data (which results in each text token in the index being represented by numeric vectors, making it compatible with the way a generative AI model represents text).
+
+1. In the Azure portal, navigate to your Azure AI Search resource.
+1. On the **Overview** page, select **Import and vectorize data**.
+1. In the **Setup your data connection** page, select **Azure Blob Storage** and configure the data source with the following settings:
+    - **Subscription**: The Azure subscription in which you provisioned your storage account.
+    - **Blob storage account**: The storage account you created previously. 
+    - **Blob container**: margies-travel
+    - **Blob folder**: *Leave blank*
+    - **Enable deletion tracking**: Unselected
+    - **Authenticate using managed identity**: Unselected
+1. On the **Vectorize your text** page, select the following settings:
+    - **Kind**: Azure OpenAI
+    - **Subscription**: The Azure subscription in which you provisioned your Azure OpenAI service.
+    - **Azure OpenAI Service**: Your Azure OpenAI Service resource
+    - **Model deployment**: text-embedding-ada-002
+    - **Authentication type**: API key
+    - **I acknowledge that connecting to an Azure OpenAI service will incur additional costs to my account**: Selected
+1. On the next page, do not select the options to vectorize images or extract data with AI skills.
+1. On the next page, enable semantic ranking and schedule the indexer to run once.
+1. On the final page, set the **Objects name prefix** to `margies-index` and then create the index.
+
+## Prepare to develop an app in Visual Studio Code

Now let's explore the use of your own data in an app that uses the Azure OpenAI service SDK. You'll develop your app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.

@@ -181,11 +160,11 @@ Applications for both C# and Python have been provided, and both apps feature th
4. Update the configuration values to include:
    - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
-    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio).
+    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio).
    - The endpoint for your search service (the **Url** value on the overview page for your search resource in the Azure portal). 
- A **key** for your search resource (available in the **Keys** page for your search resource in the Azure portal - you can use either of the admin keys) - - The name of the search index (which should be `margiestravel`). -1. Save the configuration file. + - The name of the search index (which should be `margies-index`). +5. Save the configuration file. ### Add code to use the Azure OpenAI service @@ -242,4 +221,4 @@ Now that your app has been configured, run it to send your request to your model ## Clean up -When you're done with your Azure OpenAI resource, remember to delete the resource in the **Azure portal** at `https://portal.azure.com`. Be sure to also include the storage account and search resource, as those can incur a relatively large cost. +When you're done with your Azure OpenAI resource, remember to delete the resources in the **Azure portal** at `https://portal.azure.com`. Be sure to also include the storage account and search resource, as those can incur a relatively large cost. diff --git a/Instructions/media/dall-e-playground-new-image.png b/Instructions/media/dall-e-playground-new-image.png deleted file mode 100644 index 8aa023a6..00000000 Binary files a/Instructions/media/dall-e-playground-new-image.png and /dev/null differ diff --git a/Instructions/media/dall-e-playground.png b/Instructions/media/dall-e-playground.png deleted file mode 100644 index 8b618015..00000000 Binary files a/Instructions/media/dall-e-playground.png and /dev/null differ diff --git a/Instructions/media/images-playground-new-style.png b/Instructions/media/images-playground-new-style.png new file mode 100644 index 00000000..3f78017a Binary files /dev/null and b/Instructions/media/images-playground-new-style.png differ diff --git a/Instructions/media/images-playground.png b/Instructions/media/images-playground.png new file mode 100644 index 00000000..edc89208 Binary files /dev/null and b/Instructions/media/images-playground.png differ
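Once the configuration values above are in place, the grounding itself can be sketched with the `openai` Python package. This is a hedged sketch rather than the repo's exact code: the `data_sources` payload follows the Azure OpenAI chat extensions contract for Azure AI Search, while the endpoint, keys, deployment name, and `api-version` below are placeholders to replace with your own values.

```python
def build_search_data_source(search_endpoint, search_key, index_name):
    """Build the extra_body payload that grounds chat completions in an Azure AI Search index."""
    return {
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": search_endpoint,
                "index_name": index_name,
                "authentication": {"type": "api_key", "key": search_key},
            },
        }]
    }

if __name__ == "__main__":
    # Requires `pip install openai`; all values below are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://my-openai.openai.azure.com",
        api_key="<YOUR_KEY>",
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="gpt-35-turbo-16k",  # the deployment name you chose earlier
        messages=[{"role": "user", "content": "Where should I stay in New York?"}],
        extra_body=build_search_data_source(
            "https://my-search.search.windows.net", "<SEARCH_KEY>", "margies-index"),
    )
    print(response.choices[0].message.content)
```

Because the search index is supplied with the request, the model's answer is grounded in the brochure content rather than only its training data.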