Module: Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service
Lab/Demo: Instructions/Exercises/06-use-own-data.md
Task: Add code to use the Azure OpenAI service
Step: 1
Description of issue
I'm curious about the intent of the task above from a course-design perspective. Is the task meant to retrieve documents with keyword search in Azure AI Search, or with vector search? If the intent is vector search, using the vector index of the Margie's Travel brochures that students created in the previous steps, then I believe the current code does not perform vector retrieval. This can be verified from the metrics of the Azure OpenAI (AOAI) service.

With the current code, the metrics show that only the gpt-35-turbo-16k model receives requests when a user query is sent via ownData.py. This implies the user query is not being embedded with the same embedding model that was used to embed the Margie's Travel brochures.

With the proposed code change applied (see the sketch below), sending a user query via ownData.py makes the metrics show requests going to both gpt-35-turbo-16k and text-embedding-ada-002. This implies the user query is now being embedded with the same text-embedding-ada-002 model that was used to embed the source data (the Margie's Travel brochures), so vector search is actually performed.

Repro steps:
1. Follow the current steps, with the current code, in the Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service lab.
2. Apply the code change below to ownData.py and compare the AOAI metrics described above.
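For reference, the proposed change amounts to adding a vector query type and an embedding dependency to the Azure AI Search data source that ownData.py passes in `extra_body`. The snippet below is only an illustrative sketch, assuming the openai Python SDK v1.x, the GA `data_sources` / `azure_search` schema, and placeholder environment-variable names; the exact keys in the lab depend on the API version it targets (older preview versions use `dataSources` / `AzureCognitiveSearch` with `queryType` and `embeddingDeploymentName`).

```python
import os
from openai import AzureOpenAI

# Placeholder configuration values; in the lab these come from the .env file.
azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
azure_oai_key = os.getenv("AZURE_OAI_KEY")
azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")   # e.g. gpt-35-turbo-16k
azure_search_endpoint = os.getenv("AZURE_SEARCH_ENDPOINT")
azure_search_key = os.getenv("AZURE_SEARCH_KEY")
azure_search_index = os.getenv("AZURE_SEARCH_INDEX")

client = AzureOpenAI(
    azure_endpoint=azure_oai_endpoint,
    api_key=azure_oai_key,
    api_version="2024-02-01",
)

# Azure AI Search data source for the "on your data" extension.
# query_type and embedding_dependency are the additions that make the service
# embed the user query and run a vector search instead of a keyword search.
extension_config = {
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                "endpoint": azure_search_endpoint,
                "index_name": azure_search_index,
                "authentication": {
                    "type": "api_key",
                    "key": azure_search_key,
                },
                "query_type": "vector",
                "embedding_dependency": {
                    "type": "deployment_name",
                    "deployment_name": "text-embedding-ada-002",
                },
            },
        }
    ]
}

response = client.chat.completions.create(
    model=azure_oai_deployment,
    temperature=0.5,
    max_tokens=1000,
    messages=[
        {"role": "system", "content": "You are a helpful travel assistant."},
        {"role": "user", "content": "Where can I stay in New York?"},
    ],
    extra_body=extension_config,
)
print(response.choices[0].message.content)
```

With a change along these lines, the AOAI metrics show traffic to both the chat deployment and the text-embedding-ada-002 deployment, which is how the behavior described above can be confirmed. If the course design prefers to combine keyword and vector retrieval rather than replace one with the other, `query_type` could instead be set to a hybrid option such as `vector_simple_hybrid`.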