From 1db3b104f6c1d62327111b477d7ccf9e585c87ed Mon Sep 17 00:00:00 2001
From: Nitya Narasimhan
Date: Fri, 10 Nov 2023 06:50:03 -0500
Subject: [PATCH] Updating filenames in translation

---
 04-prompt-engineering-fundamentals/README.md | 2 +-
 .../translations/cn/README.md                | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/04-prompt-engineering-fundamentals/README.md b/04-prompt-engineering-fundamentals/README.md
index 65921f843..067fe5153 100644
--- a/04-prompt-engineering-fundamentals/README.md
+++ b/04-prompt-engineering-fundamentals/README.md
@@ -120,7 +120,7 @@ Let's see this in action in the OpenAI or Azure OpenAI Playground:
 ### Fabrications Example
 
-In this course, we use the term **"fabrication"** to reference the phenomenon where LLMs sometimes generate factually incorrect information due to limitations in their training or other constraints. You may also have heard this referred to as _"hallucinations"_ in popular articles or research papers. However, we strongly recommend using _"fabrication"_ as the term so we don't accidentally anthropomorphize the behavior by attributing a human-like trait to a machine-driven outcome. This also reinforces [Responsible AI guidelines](https://www.microsoft.com/ai/responsible-ai) from a terminology perspective, removing terms that may also be considered offensive or non-inclusive in some contexts.
+In this course, we use the term **"fabrication"** to reference the phenomenon where LLMs sometimes generate factually incorrect information due to limitations in their training or other constraints. You may also have heard this referred to as _"hallucinations"_ in popular articles or research papers. However, we strongly recommend using _"fabrication"_ as the term so we don't accidentally anthropomorphize the behavior by attributing a human-like trait to a machine-driven outcome. This also reinforces [Responsible AI guidelines](https://www.microsoft.com/ai/responsible-ai?WT.mc_id=academic-105485-koreyst) from a terminology perspective, removing terms that may also be considered offensive or non-inclusive in some contexts.
 
 Want to get a sense of how fabrications work? Think of a prompt that instructs the AI to generate content for a non-existent topic (to ensure it is not found in the training dataset). For example - I tried this prompt:
 
 > **Prompt:** generate a lesson plan on the Martian War of 2076.
diff --git a/04-prompt-engineering-fundamentals/translations/cn/README.md b/04-prompt-engineering-fundamentals/translations/cn/README.md
index 6c37476c8..8c0965306 100644
--- a/04-prompt-engineering-fundamentals/translations/cn/README.md
+++ b/04-prompt-engineering-fundamentals/translations/cn/README.md
@@ -138,15 +138,15 @@ But what if the user wanted to see something specific that met some criteria or
 
 > **响应 1**: OpenAI Playground (GPT-35)
 
-![Response 1](../../images/04-hallucination-oai.png?WT.mc_id=academic-105485-koreyst)
+![Response 1](../../images/04-fabrication-oai.png?WT.mc_id=academic-105485-koreyst)
 
 > **响应 2**: Azure OpenAI Playground (GPT-35)
 
-![Response 2](../../images/04-hallucination-aoai.png?WT.mc_id=academic-105485-koreyst)
+![Response 2](../../images/04-fabrication-aoai.png?WT.mc_id=academic-105485-koreyst)
 
 > **响应 3**: : Hugging Face Chat Playground (LLama-2)
 
-![Response 3](../../images/04-hallucination-huggingchat.png?WT.mc_id=academic-105485-koreyst)
+![Response 3](../../images/04-fabrication-huggingchat.png?WT.mc_id=academic-105485-koreyst)
 
 正如预期的那样,由于随机行为和模型能力变化,每个模型(或模型版本)都会产生略有不同的响应。 例如,一个模型针对八年级受众,而另一个模型则假设高中生。 但所有三个模型确实生成了可以让不知情的用户相信该事件是真实的响应
 