Ai studio update (#82)
* Updated

* updates for lab 1

* updates for AI studio move

---------

Co-authored-by: Graeme Malcolm <[email protected]>
ivorb and GraemeMalcolm authored Aug 12, 2024
1 parent 98157b8 commit a6c0c1e
Showing 10 changed files with 157 additions and 168 deletions.
36 changes: 16 additions & 20 deletions Instructions/Exercises/01-get-started-azure-openai.md
@@ -5,7 +5,7 @@ lab:

# Get started with Azure OpenAI service

Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure OpenAI Studio to deploy and explore generative AI models.
Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure AI Studio to deploy and explore generative AI models.

In the scenario for this exercise, you will perform the role of a software developer who has been tasked to implement an AI agent that can use generative AI to help a marketing organization improve its effectiveness at reaching customers and advertising new products. The techniques used in the exercise can be applied to any scenario where an organization wants to use generative AI models to help employees be more effective and productive.

@@ -39,37 +39,33 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure

## Deploy a model

Azure OpenAI service provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.

> **Note**: As you use Azure OpenAI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.

After the new tab opens, you can close any banner notifications for new preview services that are displayed at the top of the Azure OpenAI Studio page.

1. In Azure OpenAI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
- **Deployment name**: *A unique name of your choice*
- **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
- **Model version**: Auto-update to default
- **Model version**: *Use default version*
- **Deployment type**: Standard
- **Tokens per minute rate limit**: 5K\*
- **Content filter**: Default
- **Enable dynamic quota**: Enabled
- **Enable dynamic quota**: Disabled

> \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
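> The 5K tokens-per-minute limit is shared with anyone else using the same subscription, so client code calling the deployment can occasionally receive HTTP 429 responses. The sketch below is one hedged way to retry on that condition with the `openai` Python package; the endpoint, key, deployment name, and API version are placeholders, not values from this lab.

```python
# Minimal sketch: retrying a chat completion when the shared TPM rate limit is hit.
# Endpoint, key, deployment name, and API version are placeholders for your own values.
import time

from openai import AzureOpenAI, RateLimitError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

def chat_with_retry(messages, deployment="<your-deployment-name>", retries=3):
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model=deployment, messages=messages)
        except RateLimitError:
            # 429s are expected when the shared TPM quota is exhausted; back off and retry.
            time.sleep(2 ** attempt)
    raise RuntimeError("Deployment is still rate limited after several retries.")
```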
## Use the Chat playground

Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure OpenAI Studio provides a chatbot interface for GPT 3.5 and higher models.
Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure AI Studio provides a chatbot interface for GPT 3.5 and higher models.

> **Note:** The *Chat* playground uses the *ChatCompletions* API rather than the older *Completions* API that is used by the *Completions* playground. The Completions playground is provided for compatibility with older models.
1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
- **Setup** - used to set the context for the model's responses.
1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
- **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
- **Chat session** - used to submit chat messages and view responses.
- **Configuration** - used to configure settings for the model deployment.
1. In the **Configuration** panel, ensure that your gpt-35-turbo-16k model deployment is selected.
1. In the **Setup** panel, review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses; setting expectations about how an AI agent based on the model should interact with the user.
1. Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected.
1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses; setting expectations about how an AI agent based on the model should interact with the user.
1. In the **Chat session** panel, enter the user query `How can I use generative AI to help me market a new product?`

> **Note**: You may receive a response that the API deployment is not yet ready. If so, wait for a few minutes and try again.
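> The playground interaction above maps directly onto the *ChatCompletions* API mentioned earlier. The following is a rough Python equivalent of the same exchange, assuming the `openai` package and placeholder values for the endpoint, key, and deployment name; it is a sketch, not the lab's code.

```python
# Sketch of the same exchange via the ChatCompletions API (placeholder credentials).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment name, not the underlying model name
    messages=[
        # The system message sets context, just as the setup panel in the playground does.
        {"role": "system", "content": "You are an AI assistant that helps people find information."},
        {"role": "user", "content": "How can I use generative AI to help me market a new product?"},
    ],
)

print(response.choices[0].message.content)
```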
@@ -84,7 +80,7 @@ Now that you've deployed a model, you can use it to generate responses based on

So far, you've engaged in a chat conversation with your model based on the default system message. You can customize the system setup to have more control over the kinds of responses generated by your model.

1. In the **Setup** panel, under **Use a system message template**, select the **Marketing Writing Assistant** template and confirm that you want to update the system message.
1. In the main toolbar, select the **Prompt samples**, and use the **Marketing Writing Assistant** prompt template.
1. Review the new system message, which describes how an AI agent should use the model to respond.
1. In the **Chat session** panel, enter the user query `Create an advertisement for a new scrubbing brush`.
1. Review the response, which should include advertising copy for a scrubbing brush. The copy may be quite extensive and creative.
@@ -96,7 +92,7 @@ So far, you've engaged in a chat conversation with your model based on the defau

The response should now be more useful, but to have even more control over the output from the model, you can provide one or more *few-shot* examples on which responses should be based.

1. In the **Setup** panel, under **Examples**, select **Add**. Then type the following message and response in the designated boxes:
1. Under the **System message** text box, expand the dropdown for **Add section** and select **Examples**. Then type the following message and response in the designated boxes:

**User**:

@@ -139,7 +135,7 @@ You've explored how the system message, examples, and prompts can help refine th
## Deploy your model to a web app
Now that you've explored some of the capabilities of a generative AI model in the Azure OpenAI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model.
Now that you've explored some of the capabilities of a generative AI model in the Azure AI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model.
1. At the top right of the **Chat** playground page, in the **Deploy to** menu, select **A new web app**.
1. In the **Deploy to a web app** dialog box, create a new web app with the following settings:
@@ -162,7 +158,7 @@ Now that you've explored some of the capabilities of a generative AI model in th
> **Note**: You deployed the *model* to a web app, but this deployment doesn't include the system settings and parameters you set in the playground; so the response may not reflect the examples you specified in the playground. In a real scenario, you would add logic to your application to modify the prompt so that it includes the appropriate contextual data for the kinds of response you want to generate. This kind of customization is beyond the scope of this introductory-level exercise, but you can learn about prompt engineering techniques and Azure OpenAI APIs in other exercises and product documentation.
1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure OpenAI Studio.
1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure AI Studio.
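> The note above says a real application would add logic to modify the prompt with contextual data before sending it to the model. A hypothetical sketch of that kind of prompt augmentation is shown below; the company name, grounding snippets, and helper function are invented for illustration, and the credentials are placeholders.

```python
# Hypothetical sketch of prompt augmentation: prepend contextual data before calling the model.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

def build_messages(user_question: str, context_snippets: list[str]) -> list[dict]:
    """Combine a fixed system message with retrieved context and the user's question."""
    context = "\n".join(context_snippets)
    return [
        {"role": "system", "content": "You are a marketing assistant for Contoso. Answer using the context provided."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

# Example call with made-up context; a real app might pull this from a search index.
messages = build_messages(
    "Write a tagline for our new scrubbing brush.",
    ["The brush has an ergonomic handle.", "It is made from recycled plastic."],
)
response = client.chat.completions.create(model="<your-deployment-name>", messages=messages)
print(response.choices[0].message.content)
```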
## Clean up
14 changes: 8 additions & 6 deletions Instructions/Exercises/02-natural-language-azure-openai.md
@@ -39,17 +39,19 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure

## Deploy a model

Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.

1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
- **Deployment name**: *A unique name of your choice*
- **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
- **Model version**: Auto-update to default
- **Model version**: *Use default version*
- **Deployment type**: Standard
- **Tokens per minute rate limit**: 5K\*
- **Content filter**: Default
- **Enable dynamic quota**: Enabled
- **Enable dynamic quota**: Disabled

> \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
@@ -95,7 +97,7 @@ Applications for both C# and Python have been provided. Both apps feature the sa
4. Update the configuration values to include:
- The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
- The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio).
- The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI studio).
5. Save the configuration file.
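> For the Python version of the app, configuration values like these typically live in a `.env`-style file and are read at startup. The sketch below shows one common way to load them; the variable names, file layout, and API version are assumptions for illustration, not necessarily those used by the lab's starter project.

```python
# Sketch of reading the three configuration values; variable names are placeholders,
# not necessarily the ones used by the lab's starter code.
import os

from dotenv import load_dotenv  # pip install python-dotenv
from openai import AzureOpenAI

load_dotenv()  # loads the endpoint, key, and deployment name from a .env file

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OAI_KEY"),
    api_version="2024-02-01",
)
deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
```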
## Add code to use the Azure OpenAI service
33 changes: 17 additions & 16 deletions Instructions/Exercises/03-prompt-engineering.md
@@ -39,30 +39,31 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure

## Deploy a model

Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.

1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
- **Deployment name**: *A unique name of your choice*
- **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
- **Model version**: Auto-update to default
- **Model version**: *Use default version*
- **Deployment type**: Standard
- **Tokens per minute rate limit**: 5K\*
- **Content filter**: Default
- **Enable dynamic quota**: Enabled
- **Enable dynamic quota**: Disabled

> \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
## Explore prompt engineering techniques

Let's start by exploring some prompt engineering techniques in the Chat playground.

1. In **Azure OpenAI Studio** at `https://oai.azure.com`, in the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main sections:
- **Setup** - used to set the context for the model's responses.
1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
- **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
- **Chat session** - used to submit chat messages and view responses.
- **Configuration** - used to configure settings for the model deployment.
2. In the **Configuration** section, ensure that your model deployment is selected.
3. In the **Setup** area, select the default system message template to set the context for the chat session. The default system message is *You are an AI assistant that helps people find information*.
2. Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected.
1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.*
4. In the **Chat session**, submit the following query:

```prompt
@@ -79,9 +80,9 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
The response provides a description of the article. However, suppose you want a more specific format for article categorization.
5. In the **Setup** section change the system message to `You are a news aggregator that categorizes news articles.`
5. In the **Configuration** section change the system message to `You are a news aggregator that categorizes news articles.`
6. Under the new system message, in the **Examples** section, select the **Add** button. Then add the following example.
6. Under the new system message, select the **Add section** button, and choose **Examples**. Then add the following example.
**User:**
@@ -126,7 +127,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
Entertainment
```
8. Use the **Apply changes** button at the top of the **Setup** section to update the system message.
8. Use the **Apply changes** button at the top of the **Configuration** section to save your changes.
9. In the **Chat session** section, resubmit the following prompt:
@@ -144,7 +145,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
The combination of a more specific system message and some examples of expected queries and responses results in a consistent format for the results.
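In the API, the playground's *Examples* correspond to extra user/assistant message pairs placed ahead of the real prompt. A hedged sketch of the same news-categorization setup is shown below; the example article text is invented for illustration, and the endpoint, key, and deployment name are placeholders.

```python
# Sketch of few-shot prompting via prior user/assistant messages
# (placeholder credentials, invented example article text).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": "You are a news aggregator that categorizes news articles."},
        # One-shot example: a hypothetical article and the single-word category expected back.
        {"role": "user", "content": "A major film studio announced the sequel to its hit superhero movie."},
        {"role": "assistant", "content": "Entertainment"},
        # The article actually being categorized.
        {"role": "user", "content": "What category of article is the following? <paste article text here>"},
    ],
)

print(response.choices[0].message.content)
```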
10. In the **Setup** section, change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes.
10. Change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes.
11. In the **Chat session** section, submit the following prompt:
@@ -209,7 +210,7 @@ Applications for both C# and Python have been provided, and both apps feature th
4. Update the configuration values to include:
- The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
- The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio).
- The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio).
5. Save the configuration file.
## Add code to use the Azure OpenAI service
@@ -300,7 +301,7 @@ Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice the only difference between the different options is the content of the prompt, all other parameters (such as token count and temperature) remain the same for each request.
1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interations, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message.
1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interactions, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message.
1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
- **C#**: `dotnet run`
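A rough Python sketch of the iteration loop described above is shown below: reread `system.txt` before each request, combine it with the user prompt, and call the deployment with fixed parameters such as temperature and token count. The credentials, parameter values, and file layout are placeholders rather than the lab's exact starter code.

```python
# Rough sketch of the iteration loop: the system message is reloaded from system.txt
# each time, while temperature and max_tokens stay constant across iterations.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

while True:
    prompt = input("Enter a prompt (or 'quit'): ")
    if prompt.lower() == "quit":
        break

    # Reread system.txt so edits to the system message take effect between iterations.
    with open("system.txt", encoding="utf-8") as f:
        system_message = f.read()

    response = client.chat.completions.create(
        model="<your-deployment-name>",
        temperature=0.7,   # placeholder; held constant for each request
        max_tokens=800,    # placeholder; held constant for each request
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": prompt},
        ],
    )
    print(response.choices[0].message.content)
```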