Commit 8bf90bf (merge of d91890c and 42d3c1b), committed by ivorb on Nov 2, 2023
Showing 12 changed files with 1,262 additions and 27 deletions.

**Instructions/Exercises/01-get-started-azure-openai.md** (new file, 115 additions)

---
lab:
title: 'Get started with Azure OpenAI'
---

# Get started with Azure OpenAI Service

Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure OpenAI Studio to deploy and explore OpenAI models.

This exercise takes approximately **30** minutes.

## Provision an Azure OpenAI resource

If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.

1. Sign into the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`.
2. Create an **Azure OpenAI** resource with the following settings:
- **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
- **Resource group**: *Choose or create a resource group*
- **Region**: *Make a random choice from any of the available regions*\*
- **Name**: *A unique name of your choice*
- **Pricing tier**: Standard S0

> \* Azure OpenAI resources are constrained by regional quotas. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.

## Deploy a model

Azure OpenAI provides a web-based portal named **Azure OpenAI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.

1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo** model with the following settings:
- **Model**: gpt-35-turbo
- **Model version**: Auto-update to default
- **Deployment name**: *A unique name of your choice*
- **Advanced options**
- **Content filter**: Default
- **Tokens per minute rate limit**: 5K\*
- **Enable dynamic quota**: Enabled

> \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
## Use the Chat playground

The *Chat* playground provides a chatbot interface for GPT 3.5 and higher models.

> **Note:** The *Chat* playground uses the *ChatCompletions* API rather than the older *Completions* API that is used by the *Completions* playground. The Completions playground is provided for compatibility with older models.
1. In the **Playground** section, select the **Chat** page, and ensure that the model deployment you created is selected in the configuration pane on the right.
2. In the **Assistant setup** section, in the **System message** box, replace the current text with the following statement: `The system is an AI teacher that helps people learn about AI`.

3. Below the **System message** box, select **Add an example**, and type the following message and response in the designated boxes:

- **User**: `What are different types of artificial intelligence?`
- **Assistant**: `There are three main types of artificial intelligence: Narrow or Weak AI (such as virtual assistants like Siri or Alexa, image recognition software, and spam filters), General or Strong AI (AI designed to be as intelligent as a human being. This type of AI does not currently exist and is purely theoretical), and Artificial Superintelligence (AI that is more intelligent than any human being and can perform tasks that are beyond human comprehension. This type of AI is also purely theoretical and has not yet been developed).`

> **Note**: Few-shot examples are used to provide the model with examples of the types of responses that are expected. The model will attempt to reflect the tone and style of the examples in its own responses.
4. Save the changes to start a new session and set the behavioral context of the chat system.
5. In the query box at the bottom of the page, enter the user query `What is artificial intelligence?`

> **Note**: You may receive a response that the API deployment is not yet ready. If so, wait for a few minutes and try again.
6. Review the response and then submit the following message to continue the conversation: `How is it related to machine learning?`
7. Review the response, noting that context from the previous interaction is retained (so the model understands that "it" refers to artificial intelligence).
8. Use the **View Code** button to view the code for the interaction. The prompt consists of the *system* message, the few-shot examples of *user* and *assistant* messages, and the sequence of *user* and *assistant* messages in the chat session so far.
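
For example, a Python rendering of this interaction might resemble the following sketch. It's a minimal illustration using the legacy 0.28-style `openai` package (the same style used by this lab's sample code), not the playground's exact output; the endpoint, key, and deployment name are placeholders.

```python
import openai

# Azure OpenAI connection settings (placeholders; use your own resource values)
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = "YOUR-API-KEY"

response = openai.ChatCompletion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # the deployment name you chose
    messages=[
        # System message that sets the behavioral context
        {"role": "system", "content": "The system is an AI teacher that helps people learn about AI"},
        # Few-shot example of a user message and the expected assistant response
        {"role": "user", "content": "What are different types of artificial intelligence?"},
        {"role": "assistant", "content": "There are three main types of artificial intelligence: Narrow or Weak AI, General or Strong AI, and Artificial Superintelligence."},
        # The chat session so far
        {"role": "user", "content": "What is artificial intelligence?"}
    ]
)

print(response.choices[0].message.content)
```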

## Explore prompts and parameters

You can craft the prompt and adjust the parameters to maximize the likelihood of generating the response you need.

1. In the **Parameters** pane, set the following parameter values:
- **Temperature**: 0
- **Max length (tokens)**: 500

2. Submit the following message:

```
Write three multiple choice questions based on the following text, indicating the correct answers.
Most computer vision solutions are based on machine learning models that can be applied to visual input from cameras, videos, or images.
- Image classification involves training a machine learning model to classify images based on their contents. For example, in a traffic monitoring solution you might use an image classification model to classify images based on the type of vehicle they contain, such as taxis, buses, cyclists, and so on.
- Object detection machine learning models are trained to classify individual objects within an image, and identify their location with a bounding box. For example, a traffic monitoring solution might use object detection to identify the location of different classes of vehicle.
- Semantic segmentation is an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong. For example, a traffic monitoring solution might overlay traffic images with "mask" layers to highlight different vehicles using specific colors.
```
3. Review the results, which should consist of multiple-choice questions that a teacher could use to test students on the computer vision topics in the prompt. The total response should be smaller than the maximum length you specified as a parameter.

    Observe the following about the prompt and parameters you used:

    - The prompt specifically states that the desired output should be three multiple-choice questions.
    - The parameters include *Temperature*, which controls the degree to which response generation includes an element of randomness. The value of **0** used in your submission minimizes randomness, resulting in stable, predictable responses.

## Explore code generation

In addition to generating natural language responses, you can use GPT models to generate code.

1. In the **Assistant setup** pane, select the **Empty Example** template to reset the system message.
2. Enter the system message: `You are a Python developer.` and save the changes.
3. In the **Chat session** pane, select **Clear chat** to clear the chat history and start a new session.
4. Submit the following user message:

```
Write a Python function named Multiply that multiplies two numeric parameters.
```

5. Review the response, which should include sample Python code that meets the requirement in the prompt; a sketch of what that might look like follows this list.
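
The generated code varies from run to run, but it should be functionally similar to this minimal sketch:

```python
def Multiply(a, b):
    """Return the product of two numeric parameters."""
    return a * b

print(Multiply(6, 7))  # 42
```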

## Clean up

When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the [Azure portal](https://portal.azure.com).

**Instructions/Exercises/02-natural-language-azure-openai.md** (new file, 206 additions)

---
lab:
title: 'Integrate Azure OpenAI into your app'
---

# Integrate Azure OpenAI into your app

With the Azure OpenAI Service, developers can create chatbots, language models, and other applications that excel at understanding natural human language. Azure OpenAI provides access to pre-trained AI models, as well as a suite of APIs and tools for customizing and fine-tuning these models to meet the specific requirements of your application. In this exercise, you'll learn how to deploy a model in Azure OpenAI and use it in your own application to summarize text.

This exercise will take approximately **30** minutes.

## Provision an Azure OpenAI resource

If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.

1. Sign into the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`.
2. Create an **Azure OpenAI** resource with the following settings:
- **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
- **Resource group**: *Choose or create a resource group*
- **Region**: *Make a random choice from any of the available regions*\*
- **Name**: *A unique name of your choice*
- **Pricing tier**: Standard S0

> \* Azure OpenAI resources are constrained by regional quotas. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.

## Deploy a model

Azure OpenAI provides a web-based portal named **Azure OpenAI Studio** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.

1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo** model with the following settings:
- **Model**: gpt-35-turbo
- **Model version**: Auto-update to default
- **Deployment name**: *A unique name of your choice*
- **Advanced options**
- **Content filter**: Default
- **Tokens per minute rate limit**: 5K\*
- **Enable dynamic quota**: Enabled

> \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
## Set up an application in Cloud Shell

To show how to integrate with an Azure OpenAI model, we'll use a short command-line application that runs in Cloud Shell on Azure. Open up a new browser tab to work with Cloud Shell.

1. In the [Azure portal](https://portal.azure.com?azure-portal=true), select the **[>_]** (*Cloud Shell*) button at the top of the page to the right of the search box. A Cloud Shell pane will open at the bottom of the portal.

![Screenshot of starting Cloud Shell by clicking on the icon to the right of the top search box.](../media/cloudshell-launch-portal.png#lightbox)

2. The first time you open the Cloud Shell, you may be prompted to choose the type of shell you want to use (*Bash* or *PowerShell*). Select **Bash**. If you don't see this option, skip the step.

3. If you're prompted to create storage for your Cloud Shell, select **Show advanced settings** and select the following settings:
- **Subscription**: Your subscription
- **Cloud shell regions**: Choose any available region
    - **Show VNET isolation settings**: Unselected
- **Resource group**: Use the existing resource group where you provisioned your Azure OpenAI resource
- **Storage account**: Create a new storage account with a unique name
- **File share**: Create a new file share with a unique name

Then wait a minute or so for the storage to be created.

> **Note**: If you already have a cloud shell set up in your Azure subscription, you may need to use the **Reset user settings** option in the ⚙️ menu to ensure the latest versions of Python and the .NET Framework are installed.
4. Make sure the type of shell indicated on the top left of the Cloud Shell pane is *Bash*. If it's *PowerShell*, switch to *Bash* by using the drop-down menu.

5. Once the terminal starts, enter the following command to download the sample application and save it to a folder called `azure-openai`.

```bash
rm -r azure-openai -f
git clone https://github.com/MicrosoftLearning/mslearn-openai azure-openai
```

6. The files are downloaded to a folder named **azure-openai**. Navigate to the lab files for this exercise using the following command.

```bash
cd azure-openai/Labfiles/02-nlp-azure-openai
```

    Applications for both C# and Python have been provided, as well as a sample text file you'll use to test the summarization. Both apps feature the same functionality; they read the sample text file and ask your model to summarize it, as shown in the sketch after the next command.

    Open the built-in code editor, and observe the text file located at `text-files/sample-text.txt` that you'll be summarizing with your model. Use the following command to open the lab files in the code editor.

```bash
code .
```
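
For reference, both sample apps read this text file into a `text` variable before sending it to the model. A minimal Python sketch of that step (not necessarily the app's exact code) looks like this:

```python
# Read the sample text to be summarized into a variable
with open("text-files/sample-text.txt", encoding="utf-8") as sample_file:
    text = sample_file.read()

print("Text to summarize:\n" + text)
```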

## Configure your application

For this exercise, you'll complete some key parts of the application to enable it to use your Azure OpenAI resource.

1. In the code editor, expand the **CSharp** or **Python** folder, depending on your language preference.
2. Open the configuration file for your language:
    - C#: `appsettings.json`
    - Python: `.env`
3. Update the configuration values to include the **endpoint** and **key** from the Azure OpenAI resource you created, as well as the name of the model deployment you created (for example, `text-turbo`). Save the file. (A sketch of how the Python app loads these values appears after this list.)
4. Navigate to the folder for your preferred language and install the necessary packages:
**C#**
```bash
cd CSharp
dotnet add package Azure.AI.OpenAI --prerelease
```
**Python**
```bash
cd Python
pip install python-dotenv
pip install openai
```
5. Navigate to your preferred language folder, select the code file, and add the necessary libraries.
**C#**
```csharp
// Add Azure OpenAI package
using Azure.AI.OpenAI;
```
**Python**
```python
# Add OpenAI import
import openai
```
6. Open the application code for your language and add the necessary code for building the request, which specifies the various parameters for your model such as `prompt` and `temperature`.
**C#**
```csharp
// Initialize the Azure OpenAI client
OpenAIClient client = new OpenAIClient(new Uri(oaiEndpoint), new AzureKeyCredential(oaiKey));

// Build completion options object
ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatMessage(ChatRole.System, "You are a helpful assistant. Summarize the following text in 60 words or less."),
        new ChatMessage(ChatRole.User, text),
    },
    MaxTokens = 120,
    Temperature = 0.7f,
};

// Send request to Azure OpenAI model
ChatCompletions response = client.GetChatCompletions(
    deploymentOrModelName: oaiModelName,
    chatCompletionsOptions);

string completion = response.Choices[0].Message.Content;
Console.WriteLine("Summary: " + completion + "\n");
```
**Python**
```python
# Set OpenAI configuration settings
openai.api_type = "azure"
openai.api_base = azure_oai_endpoint
openai.api_version = "2023-03-15-preview"
openai.api_key = azure_oai_key

# Send request to Azure OpenAI model
print("Sending request for summary to Azure OpenAI endpoint...\n\n")
response = openai.ChatCompletion.create(
    engine=azure_oai_model,
    temperature=0.7,
    max_tokens=120,
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Summarize the following text in 60 words or less."},
        {"role": "user", "content": text}
    ]
)

print("Summary: " + response.choices[0].message.content + "\n")
```
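
For reference, the Python app loads the configuration values from the `.env` file before building the request. A minimal sketch of that step (not necessarily the app's exact code; the environment variable names are assumptions) might look like this:

```python
import os
from dotenv import load_dotenv

# Load settings from the .env configuration file into environment variables
load_dotenv()

# Variable names are assumptions; match them to the keys in your .env file
azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
azure_oai_key = os.getenv("AZURE_OAI_KEY")
azure_oai_model = os.getenv("AZURE_OAI_MODEL")
```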

## Run your application

Now that your app has been configured, run it to send your request to your model and observe the response.

1. In the Cloud Shell bash terminal, navigate to the folder for your preferred language.
1. Run the application.
    - **C#**: `dotnet run`
    - **Python**: `python test-openai-model.py`
1. Observe the summarization of the sample text file.
1. Navigate to your code file for your preferred language, and change the *temperature* value to `1`. Save the file.
1. Run the application again, and observe the output.

    Increasing the temperature often causes the summary to vary, even when provided the same text, due to the increased randomness. You can run it several times to see how the output may change. Try using different values for your temperature with the same input.

## Clean up

When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the [Azure portal](https://portal.azure.com?azure-portal=true).