diff --git a/tutorials/groq-gradio/groq-gradio-tutorial.ipynb b/tutorials/groq-gradio/groq-gradio-tutorial.ipynb
index 71e1bca..30a2ccf 100644
--- a/tutorials/groq-gradio/groq-gradio-tutorial.ipynb
+++ b/tutorials/groq-gradio/groq-gradio-tutorial.ipynb
@@ -8,14 +8,14 @@
     "\n",
     "In this tutorial, we'll build a voice-powered AI application using Groq for realtime speech recognition and text generation, Gradio for creating an interactive web interface, and Hugging Face Spaces for hosting our application.\n",
     "\n",
-    "[Groq](groq.com) is known for insanely fast inference speed that is very well-suited for realtime AI applications, providing multiple Large Language Models (LLMs) and speech-to-text models via Groq API. In this tutorial, we will use the [Distil-Whisper English](https://huggingface.co/distil-whisper/distil-large-v3) and [Llama 3.1 70B](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md) models for speech-to-text and text-to-text. \n",
+    "[Groq](https://groq.com) is known for insanely fast inference, which makes it well-suited for realtime AI applications, and it provides multiple Large Language Models (LLMs) and speech-to-text models via the Groq API. In this tutorial, we will use the [Distil-Whisper English](https://huggingface.co/distil-whisper/distil-large-v3) and [Llama 3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) models for speech-to-text and text-to-text.\n",
     "\n",
     "[Gradio](https://www.gradio.app/) is an open-source Python library that makes it easy to prototype and deploy interactive demos without needing to write frontend code for a nice User Interface (UI), which is great if you're a developer like me who doesn't know much about frontend Bob Ross-ery. 🖌️\n",
     "\n",
     "By combining models powered by Groq with Gradio's user-friendly interface creation, we will:\n",
     "\n",
-    "- Use Distil-Whisper English powered by Groq transcribe audio input in realtime.\n",
-    "- Use Llama 3.1 70B powered by Groq to generate instant responses based on the transcription.\n",
+    "- Use Distil-Whisper English powered by Groq to transcribe audio input in realtime.\n",
+    "- Use Llama 3 70B powered by Groq to generate instant responses based on the transcription.\n",
     "- Create a Gradio interface to handle audio input and display results on a nice UI.\n",
     "\n",
     "Let's get started!"
    ]
   },
   {
@@ -107,7 +107,7 @@
    "source": [
     "## Step 4: Implement Response Generation\n",
     "\n",
-    "Now, let's build a function to take the transcribed text and generate a response using Llama 3.1 70B (`llama-3.1-70b-versatile`) powered by Groq:"
+    "Now, let's build a function to take the transcribed text and generate a response using Llama 3 70B (`llama3-70b-8192`) powered by Groq:"
    ]
   },
   {
@@ -123,9 +123,9 @@
     "    client = groq.Client(api_key=api_key)\n",
     "    \n",
     "    try:\n",
-    "        # Use Llama 3.1 70B powered by Groq for text generation\n",
+    "        # Use Llama 3 70B powered by Groq for text generation\n",
     "        completion = client.chat.completions.create(\n",
-    "            model=\"llama-3.1-70b-versatile\",\n",
+    "            model=\"llama3-70b-8192\",\n",
     "            messages=[\n",
     "                {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
     "                {\"role\": \"user\", \"content\": transcription}\n",
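For reference, after this patch the request the notebook sends looks roughly like the sketch below. The `build_chat_request` helper is hypothetical (the notebook inlines this dict directly into `client.chat.completions.create(...)`); it is shown here only to illustrate the payload shape with the new model id:

```python
def build_chat_request(transcription: str) -> dict:
    """Assemble the chat-completion payload used by the patched cell.

    Hypothetical helper for illustration; the tutorial passes these
    arguments directly to client.chat.completions.create(...).
    """
    return {
        "model": "llama3-70b-8192",  # Llama 3 70B on Groq, per this diff
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": transcription},
        ],
    }
```

Only the `model` string changes relative to the old cell; the `messages` structure is untouched, so no other call sites in the notebook need updating.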