From 85f2427be7c45d95b2d4016260458a1495016e99 Mon Sep 17 00:00:00 2001
From: saudsami <saudsami@gmail.com>
Date: Thu, 26 Sep 2024 16:06:47 +0500
Subject: [PATCH] feedback updates

---
 .../overview/product-overview.mdx             |   5 +-
 shared/common/core-concepts/agora-console.mdx |  15 +-
 .../common/core-concepts/app-certificate.mdx  |   2 +-
 shared/common/core-concepts/app-id.mdx        |   4 +-
 shared/common/core-concepts/channel.mdx       |   8 +-
 shared/common/core-concepts/open-ai-intro.mdx |   8 +-
 shared/common/core-concepts/real-time-stt.mdx |   5 +-
 shared/common/core-concepts/sd-rtn.mdx        |   7 +-
 shared/common/core-concepts/token.mdx         |   6 +-
 shared/common/core-concepts/user-id.mdx       |   4 +-
 shared/common/core-concepts/video-sdk.mdx     |   2 +-
 shared/common/prerequisites/index.mdx         |   2 -
 shared/open-ai-integration/quickstart.mdx     | 177 +++++++++---------
 .../project-implementation/python.mdx         |   5 +-
 .../get-started-sdk/project-setup/python.mdx  |   7 +-
 .../get-started-sdk/project-test/python.mdx   |   6 +-
 16 files changed, 123 insertions(+), 140 deletions(-)

diff --git a/open-ai-integration/overview/product-overview.mdx b/open-ai-integration/overview/product-overview.mdx
index 1ea1e3a97..6115ab28c 100644
--- a/open-ai-integration/overview/product-overview.mdx
+++ b/open-ai-integration/overview/product-overview.mdx
@@ -47,10 +47,9 @@ description: >
       link: '',
     },
   ]}
-/>
+>
 
 Integrating Agora’s real-time audio communication with OpenAI’s Large Language Models (LLMs) unlocks the potential for powerful, interactive voice-based applications. By combining Agora’s robust real-time audio streaming capabilities with the conversational intelligence of OpenAI’s LLMs, you can create seamless voice-enabled experiences, such as voice-powered AI assistants or interactive dialogue systems. This integration enables dynamic, responsive audio interactions, enhancing user engagement across a broad range of use cases—from customer support bots to collaborative voice-driven applications.
 
-Most importantly, By combining Agora’s and OpenAI’s strengths this integration finally enables the most natural form of language interaction, lowering the barrier for users to leverage the power of AI and making advanced technologies more accessible than ever before.
-
+Most importantly, by combining the strengths of Agora and OpenAI, this integration enables the most natural form of language interaction, lowering the barrier for users to harness the power of AI and making advanced technologies more accessible than ever before.
 </ProductOverview>
diff --git a/shared/common/core-concepts/agora-console.mdx b/shared/common/core-concepts/agora-console.mdx
index 01a0f1b6a..0ef193e2c 100644
--- a/shared/common/core-concepts/agora-console.mdx
+++ b/shared/common/core-concepts/agora-console.mdx
@@ -1,26 +1,17 @@
 <ProductWrapper notAllowed={['interactive-whiteboard', 'cloud-recording', 'agora-analytics', 'extensions-marketplace']}>
-  <Link to="{{Global.AGORA_CONSOLE_URL}}">
-    <Vg k="CONSOLE" />
-  </Link>{' '}
-  is the main dashboard where you manage your <Vg k="COMPANY" /> projects and services. Before you can use <Vg k="COMPANY" />
-  's SDKs, you must first create a project in the <Vg k="CONSOLE" />. See [Agora account management](../get-started/manage-agora-account) for
+<Link to="{{Global.AGORA_CONSOLE_URL}}"><Vg k="CONSOLE" /></Link> is the main dashboard where you manage your <Vg k="COMPANY" /> projects and services. Before you can use <Vg k="COMPANY" />'s SDKs, you must first create a project in the <Vg k="CONSOLE" />. See [Agora account management](../get-started/manage-agora-account) for
   details.
 </ProductWrapper>
 
 <ProductWrapper product={['interactive-whiteboard', 'cloud-recording', 'agora-analytics', 'extensions-marketplace']}>
-  To use <Vg k="COMPANY" /> <Vpd k="NAME" />, create a project in the <Vg k="CONSOLE" /> first.
+To use <Vg k="COMPANY" /> <Vpd k="NAME" />, create a project in the <Vg k="CONSOLE" /> first.
 </ProductWrapper>
 
 <ProductWrapper notAllowed="interactive-whiteboard">![Create project in Agora Console](/images/common/create-project.svg)</ProductWrapper>
 
 #### <Vg k="CONSOLE" />
 
-<Link to="{{Global.AGORA_CONSOLE_URL}}">
-  <Vg k="CONSOLE" />
-</Link> provides an intuitive interface for developers to query and manage their <Vg k="COMPANY" /> account. After registering an <Link to="{{Global.AGORA_CONSOLE_URL}}">
-  Agora Account
-</Link>
-, you use the <Vg k="CONSOLE" /> to perform the following tasks:
+<Link to="{{Global.AGORA_CONSOLE_URL}}"><Vg k="CONSOLE" /></Link> provides an intuitive interface for developers to query and manage their <Vg k="COMPANY" /> account. After registering an <Link to="{{Global.AGORA_CONSOLE_URL}}">Agora Account</Link>, you use the <Vg k="CONSOLE" /> to perform the following tasks:
 
 - Manage the account
 - Create and configure <Vg k="COMPANY" /> projects and services
diff --git a/shared/common/core-concepts/app-certificate.mdx b/shared/common/core-concepts/app-certificate.mdx
index 2ef839809..3be9d5353 100644
--- a/shared/common/core-concepts/app-certificate.mdx
+++ b/shared/common/core-concepts/app-certificate.mdx
@@ -2,4 +2,4 @@
 
 An App Certificate is a unique key generated by the <Vg k="CONSOLE" /> to secure projects through token authentication. It is required, along with the App ID, to generate a token that proves authorization between your systems and <Vg k="COMPANY" />'s network. App Certificates are used to generate <Vg k="VSDK" /> or <Vg k="MESS" /> authentication tokens.
 
-App Certificates should be stored securely in your backend systems. In the event that your App Certificate is no longer secure or to enable compliance with security requirements, certificates can be invalidated and new ones can be created through <Vg k="CONSOLE" />.
+App Certificates should be stored securely in your backend systems. If your App Certificate is compromised, or to meet security compliance requirements, you can invalidate a certificate and create a new one through the <Vg k="CONSOLE" />.
diff --git a/shared/common/core-concepts/app-id.mdx b/shared/common/core-concepts/app-id.mdx
index a646356c5..af9b69fa4 100644
--- a/shared/common/core-concepts/app-id.mdx
+++ b/shared/common/core-concepts/app-id.mdx
@@ -7,13 +7,13 @@ The App ID is a unique key generated by <Vg k="COMPANY" />'s platform to identif
 App IDs are stored on the front-end client and do not provide access control. Projects using only an App ID allow any user with the App ID to join voice and video streams.
 
 <ProductWrapper notAllowed={["extensions-marketplace","agora-analytics","video-calling", "voice-calling",
-  "interactive-live-streaming", "broadcast-streaming","signaling"]}>
+  "interactive-live-streaming", "broadcast-streaming","signaling","open-ai-integration"]}>
 
 For applications requiring access controls, such as those in production environments, choose an **App ID + Token** mechanism for [user authentication](../get-started/authentication-workflow) when creating a new project. Without an authentication token, your environment is open to anyone with access to your App ID.
 
 </ProductWrapper>
 
-<ProductWrapper product="agora-analytics">
+<ProductWrapper product="agora-analytics, open-ai-integration">
 
 For applications requiring access controls, such as those in production environments, choose an **App ID + Token** mechanism for user authentication when creating a new project. Without an authentication token, your environment is open to anyone with your App ID.
 
diff --git a/shared/common/core-concepts/channel.mdx b/shared/common/core-concepts/channel.mdx
index b157cb391..585721d44 100644
--- a/shared/common/core-concepts/channel.mdx
+++ b/shared/common/core-concepts/channel.mdx
@@ -8,9 +8,9 @@ In <Vg k="MESS" />, channels serve as a data transfer management mechanism for p
 
 <Vg k="MESS" /> supports the following channel types:
 
-| Channel Type | Main Features                                                                                                                                                                                                                                                                                                | Applicable Scenarios                                                                                                                                                            |
-| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Message      | Follows the industry-standard pub/sub model. Channels do not need to be created in advance, and there is no upper limit on the number of publishers and subscribers in a channel.                                                                                                                            | Multi-device management and command exchange in the IoT industry, location tracking in smart devices, etc.                                                                      |
+| Channel Type | Main Features   | Applicable Scenarios  |
+| ------------ | ------------------- | --------------------------------- |
+| Message      | Follows the industry-standard pub/sub model. Channels do not need to be created in advance, and there is no upper limit on the number of publishers and subscribers in a channel.                | Multi-device management and command exchange in the IoT industry, location tracking in smart devices, etc.  |
 | Stream       | Follows the chat room model. Users need to join the channel to send and receive event notifications. Messages are managed and delivered through topics, and a single channel allows up to 1,000 users to join simultaneously. Supports channel sharing and synchronous transmission of audio and video data. | High-frequency and large concurrent data transmission or co-channel and synchronous transmission with audio and video data, such as in metaverse and cloud gaming applications. |
 
 </ProductWrapper>
@@ -24,6 +24,6 @@ Channels are created by calling the methods for transmitting real-time data. <Vg
 
 These channels are independent of each other.
 
-Additional services provided by <Vg k="COMPANY" />, such as Cloud Recording and Real-Time Transcription, join the <Vg k="VSDK" /> channel to provide real-time recording, transmission acceleration, media playback, and content moderation.
+Additional services provided by <Vg k="COMPANY" />, such as Cloud Recording and Real-Time Speech-to-Text, join the <Vg k="VSDK" /> channel to provide real-time recording, transmission acceleration, media playback, and content moderation.
 
 </ProductWrapper>
diff --git a/shared/common/core-concepts/open-ai-intro.mdx b/shared/common/core-concepts/open-ai-intro.mdx
index 22c6ea81a..8f8af22bf 100644
--- a/shared/common/core-concepts/open-ai-intro.mdx
+++ b/shared/common/core-concepts/open-ai-intro.mdx
@@ -6,11 +6,9 @@ import Channel from './channel.mdx';
 import UserId from './user-id.mdx';
 import SD_RTN from './sd-rtn.mdx';
 
-<Vg k="COMPANY" />
-'s platform enables you to transcribe audio streams from users in real-time, providing live transcription to power features such as live closed
-captions (CC) for improved accessibility.
+Combining Agora’s real-time audio communication with OpenAI’s Large Language Models (LLMs) opens up new possibilities for creating powerful, interactive voice-driven applications.
 
-This article introduces the key processes and concepts you need to know to use <Vg k="COMPANY" />'s platform effectively.
+This guide introduces the key processes and concepts you need to know to use <Vg k="COMPANY" />'s platform effectively.
 
 ## Using the <Vg k="CONSOLE" />
 
@@ -27,4 +25,4 @@ This article introduces the key processes and concepts you need to know to use <
 
 ## RESTful APIs
 
-<Vg k="COMPANY" /> offers RESTful APIs across many of its products. For the full list, see [Agora API reference](../../../api-reference?platform=rest&product=all).
+<Vg k="COMPANY" /> offers RESTful APIs across many of its products. For details, see [RESTful API](/video-calling/channel-management-api/overview).
diff --git a/shared/common/core-concepts/real-time-stt.mdx b/shared/common/core-concepts/real-time-stt.mdx
index 937272ba7..92eb320a0 100644
--- a/shared/common/core-concepts/real-time-stt.mdx
+++ b/shared/common/core-concepts/real-time-stt.mdx
@@ -6,10 +6,9 @@ import Channel from './channel.mdx';
 import UserId from './user-id.mdx';
 import SD_RTN from './sd-rtn.mdx';
 
-<Vg k="COMPANY" />
-'s <Vpd k="NAME" /> enables you to transcribe audio of each host to provide live closed captions (CC) and transcription for improved accessibility.
+<Vg k="COMPANY" />'s <Vpd k="NAME" /> enables you to transcribe audio of each host to provide live closed captions (CC) and transcription for improved accessibility.
 
-This article introduces the key processes and concepts you need to know to use <Vpd k="NAME" />.
+This guide introduces the key processes and concepts you need to know to use <Vpd k="NAME" />.
 
 ## Using the <Vg k="CONSOLE" />
 
diff --git a/shared/common/core-concepts/sd-rtn.mdx b/shared/common/core-concepts/sd-rtn.mdx
index 4a7108406..c353d810d 100644
--- a/shared/common/core-concepts/sd-rtn.mdx
+++ b/shared/common/core-concepts/sd-rtn.mdx
@@ -1,12 +1,7 @@
 <a name="agora-sd-rtn"></a>
 #### <Vg k="AGORA_BACKEND" />
 
-<Vg k="COMPANY" />
-'s core engagement services are powered by its Software-Defined Real-time Network (SD-RTN™), which is accessible and available anytime, anywhere
-around the world. Unlike traditional networks, the software-defined network is not confined by device, phone numbers, or a telecommunication
-provider's coverage area. <Vg k="AGORA_BACKEND" /> has data centers globally, covering over 200 countries and regions. The network delivers sub-second
-latency and high availability of real-time video and audio anywhere on the globe. With <Vg k="AGORA_BACKEND" />, <Vg k="COMPANY" /> can
-deliver live user engagement experiences in the form of real-time communication (RTC) with the following advantages:
+<Vg k="COMPANY" />'s core engagement services are powered by its Software-Defined Real-time Network (SD-RTN™), which is accessible and available anytime, anywhere around the world. Unlike traditional networks, the software-defined network is not confined by device, phone numbers, or a telecommunication provider's coverage area. <Vg k="AGORA_BACKEND" /> has data centers globally, covering over 200 countries and regions. The network delivers sub-second latency and high availability of real-time video and audio anywhere on the globe. With <Vg k="AGORA_BACKEND" />, <Vg k="COMPANY" /> can deliver live user engagement experiences in the form of real-time communication (RTC) with the following advantages:
 
 - Unmatched quality of service
 - High availability and accessibility
diff --git a/shared/common/core-concepts/token.mdx b/shared/common/core-concepts/token.mdx
index 3c564fb53..b60bcc13d 100644
--- a/shared/common/core-concepts/token.mdx
+++ b/shared/common/core-concepts/token.mdx
@@ -6,17 +6,15 @@ Tokens are generated on your server and passed to the client for use in the <Vg
 
 For testing and during development, use the <Vg k="CONSOLE" /> to generate temporary tokens. For production environments, implement a token server as part of your security infrastructure to control access to your channels.
 
-For more information, see [Secure authentication with tokens](../get-started/authentication-workflow).
-
 <ProductWrapper notAllowed={["extensions-marketplace","agora-analytics","video-calling", "voice-calling",
   "interactive-live-streaming", "broadcast-streaming","signaling"]}>
 
-For information on setting up a token server for generating and managing tokens, refer to the guide on [creating and running a token server](../get-started/authentication-workflow).
+For information on setting up a token server for generating and managing tokens, refer to the guide on [Secure authentication with tokens](/video-calling/get-started/authentication-workflow).
 
 </ProductWrapper>
 
 <ProductWrapper product={["video-calling", "voice-calling", "interactive-live-streaming", "broadcast-streaming","signaling"]}>
 
-For information on setting up a token server for generating and managing tokens, refer to the guide on [creating and running a token server](../get-started/authentication-workflow).
+For information on setting up a token server for generating and managing tokens, refer to the guide on [Secure authentication with tokens](../get-started/authentication-workflow).
 
 </ProductWrapper>
diff --git a/shared/common/core-concepts/user-id.mdx b/shared/common/core-concepts/user-id.mdx
index 657420831..1eb667457 100644
--- a/shared/common/core-concepts/user-id.mdx
+++ b/shared/common/core-concepts/user-id.mdx
@@ -2,7 +2,7 @@
 
 <ProductWrapper notAllowed="signaling">
 
-In <Vg k="COMPANY" />'s platform, the UID is an integer value that is a unique identifier assigned to each user within the context of a specific channel. When joining a channel, you have the choice to either assign a specific UID to the user or pass null and allow <Vg k="COMPANY" />'s platform to automatically generate and assign a UID for the user. If two users attempt to join the same channel with the same UID, it can lead to unexpected behavior.
+In <Vg k="COMPANY" />'s platform, the UID is an integer value that is a unique identifier assigned to each user within the context of a specific channel. When joining a channel, you have the choice to either assign a specific UID to the user or pass `0` or `null` and allow <Vg k="COMPANY" />'s platform to automatically generate and assign a UID for the user. If two users attempt to join the same channel with the same UID, it can lead to unexpected behavior.
 
 The UID is used by <Vg k="COMPANY" />'s services and components to identify and manage users within a channel. Developers should ensure that UIDs are properly assigned to prevent conflicts.
 
@@ -12,7 +12,7 @@ The UID is used by <Vg k="COMPANY" />'s services and components to identify and
 
 In <Vg k="MESS" />, the UID is a string that is a unique identifier and required along with an App ID to initialize the SDK. It is used to identify the user when logging in to <Vg k="MESS" /> and throughout their session. Users can join channels by providing just the channel name, as the UID is already associated with the user during initialization.
 
-The same UID cannot log in to <Vg k="AGORA_BACKEND" /> from multiple devices at the same time. If <Vpl k="CLIENT"/>s with the same UID logs in to <Vg k="AGORA_BACKEND" />, the <Vplk="CLIENT" /> previously logged in client is disconnected and sent an event notification.
+The same UID cannot log in to <Vg k="AGORA_BACKEND" /> from multiple devices at the same time. If a <Vpl k="CLIENT" /> with the same UID logs in to <Vg k="AGORA_BACKEND" />, the previously logged-in <Vpl k="CLIENT" /> is disconnected and sent an event notification.
 
 The UID is used for billing and online status notifications.
 
diff --git a/shared/common/core-concepts/video-sdk.mdx b/shared/common/core-concepts/video-sdk.mdx
index 8db497ab3..b11931002 100644
--- a/shared/common/core-concepts/video-sdk.mdx
+++ b/shared/common/core-concepts/video-sdk.mdx
@@ -17,7 +17,7 @@ RTC (Real-Time Communication) refers to real-time communication technology, whic
 
 <Vg k="COMPANY" /> SDKs provide real-time audio and video interaction services, with multi-platform and multi-device support. This includes high-definition video calls, voice-only calls, interactive live streaming, as well as one-on-one and multi-group chats.
 
-This article introduces the key processes and concepts you need to know to use <Vg k="COMPANY" /> SDKs.
+This guide introduces the key processes and concepts you need to know to use <Vg k="COMPANY" /> SDKs.
 
 ## Using the <Vg k="CONSOLE" />
 
diff --git a/shared/common/prerequisites/index.mdx b/shared/common/prerequisites/index.mdx
index 4c4c41e8a..b151313d6 100644
--- a/shared/common/prerequisites/index.mdx
+++ b/shared/common/prerequisites/index.mdx
@@ -1,7 +1,6 @@
 import Android from './android.mdx';
 import Ios from './ios.mdx';
 import MacOS from './macos.mdx';
-import Python from './python.mdx';
 import Web from './web.mdx';
 import ReactNative from './react-native.mdx';
 import ReactJS from './react-js.mdx';
@@ -15,7 +14,6 @@ import Unreal from './unreal.mdx';
 <Android />
 <Ios />
 <MacOS />
-<Python />
 <Web />
 <ReactNative />
 <ReactJS />
diff --git a/shared/open-ai-integration/quickstart.mdx b/shared/open-ai-integration/quickstart.mdx
index a72c82f45..ea4a0e671 100644
--- a/shared/open-ai-integration/quickstart.mdx
+++ b/shared/open-ai-integration/quickstart.mdx
@@ -20,40 +20,41 @@ Follow these steps to set up your Python integration project:
 
 1. Create a new folder for the project.
 
-   ```bash
-   mkdir realtime-agent
-   cd realtime-agent/
+    ```bash
+    mkdir realtime-agent
+    cd realtime-agent/
 
-   ```
+    ```
 
 1. Create the following structure for your project:
 
-   > Note: This project uses the OpenAI [`realtimeapi-examples`](https://openai.com/api/) package.
-   > Download the project and unzip it into your `realtime-agent` folder.
-
-   ```
-   /realtime-agent
-    ├── __init__.py
-    ├── .env
-    ├── agent.py
-    ├── agora
-    │   ├── __init__.py
-    │   ├── requirements.txt
-    │   └── rtc.py
-    └── realtimeapi
+    ```
+    /realtime-agent
         ├── __init__.py
-        ├── client.py
-        ├── messages.py
-        └── util.py
-   ```
-
-To provide some context for the files within the project:
-
-- `agent.py`: The primary script responsible for executing the `RealtimeKitAgent`. It integrates Agora's functionality from the `agora/rtc.py` module and OpenAI's capabilities from the `realtimeapi` package.
-- `agora/rtc.py`: Contains an implementation of the server-side Agora Python Voice SDK.
-- `realtimeapi/`: Contains the classes and methods that interact with OpenAI’s Realtime API.
-
-The [Complete code](#complete-integration-code) code for `agent.py` and `rtc.py` are provided at the bottom of this page.
+        ├── .env
+        ├── agent.py
+        ├── agora
+        │   ├── __init__.py
+        │   ├── requirements.txt
+        │   └── rtc.py
+        └── realtimeapi
+            ├── __init__.py
+            ├── client.py
+            ├── messages.py
+            └── util.py
+    ```
+
+    <Admonition type="info" title="Note">
+    This project uses the OpenAI [`realtimeapi-examples`](https://openai.com/api/) package. Download the project and unzip it into your `realtime-agent` folder.
+    </Admonition>
+
+    The following descriptions provide an overview of the key files in the project:
+
+    - `agent.py`: The primary script responsible for executing the `RealtimeKitAgent`. It integrates Agora's functionality from the `agora/rtc.py` module and OpenAI's capabilities from the `realtimeapi` package.
+    - `agora/rtc.py`: Contains an implementation of the server-side Agora Python Voice SDK.
+    - `realtimeapi/`: Contains the classes and methods that interact with OpenAI’s Realtime API.
+
+    The [Complete code](#complete-integration-code) for `agent.py` and `rtc.py` is provided at the bottom of this page.
 
 1. Open your `.env` file and add the following keys:
 
@@ -82,7 +83,9 @@ The `RealtimeKitAgent` class integrates Agora's audio communication capabilities
 
 The `setup_and_run_agent` method sets up the `RealtimeKitAgent` by connecting to an Agora channel using the provided `RtcEngine` and initializing a session with the OpenAI Realtime API client. It sends configuration messages to set up the session and define conversation parameters, such as the system message and output audio format, before starting the agent's operations. The method uses asynchronous execution to handle both listening for the session start and sending conversation configuration updates concurrently. It ensures that the connection is properly managed and cleaned up after use, even in cases of exceptions, early exits, or shutdowns.
 
-> **Note**: UIDs in the Python SDK are set using a string value. Agora recommends using only numerical values for UID strings to ensure compatibility with all Agora products and extensions.
+<Admonition type="info" title="Note">
+UIDs in the Python SDK are set using a string value. Agora recommends using only numerical values for UID strings to ensure compatibility with all Agora products and extensions.
+</Admonition>
 
 ```python
 @classmethod
@@ -107,11 +110,6 @@ async def setup_and_run_agent(
             await client.send_message(
                 messages.UpdateSessionConfig(
                     session=messages.SessionResource(),
-                    # The following options are commented out, possibly for future use
-                    # turn_detection=inference_config.turn_detection,
-                    # transcribe_input=False,
-                    # input_audio_format=messages.AudioFormats.PCM16,
-                    # vads=messages.VADConfig(),
                 )
             )
 
@@ -157,7 +155,7 @@ async def setup_and_run_agent(
 
 ### Initialize the RealtimeKitAgent
 
-The `RealtimeKitAgent` class constructor accepts an OpenAI `RealtimeApiClient`, an optional `ToolContext` for function registration, and an Agora channel for managing audio communication. This setup initializes the agent to process audio streams, register tools (if provided), and interact with the AI model.
+The `RealtimeKitAgent` class constructor accepts an OpenAI `RealtimeApiClient`, an optional `ToolContext` for function registration, and an Agora channel for managing audio communication. This setup initializes the agent to process audio streams, register tools (if provided), and interact with the AI model.
 
 ```python
 def __init__(
@@ -361,14 +359,14 @@ logger = logging.getLogger(**name**)
 
 @dataclass(frozen=True, kw_only=True)
 class InferenceConfig:
-"""Configuration for the inference process."""
-system_message: str | None = None
-turn_detection: messages.TurnDetectionTypes | None = None
-voice: messages.Voices | None = None
+    """Configuration for the inference process."""
+    system_message: str | None = None
+    turn_detection: messages.TurnDetectionTypes | None = None
+    voice: messages.Voices | None = None
 
 @dataclass(frozen=True, kw_only=True)
 class LocalFunctionToolDeclaration:
-"""Declaration of a tool that can be called by the model, and runs a function locally on the tool context."""
+    """Declaration of a tool that can be called by the model, and runs a function locally on the tool context."""
 
     name: str
     description: str
@@ -387,7 +385,7 @@ class LocalFunctionToolDeclaration:
 
 @dataclass(frozen=True, kw_only=True)
 class PassThroughFunctionToolDeclaration:
-"""Declaration of a tool that can be called by the model, and is passed through the LiveKit client."""
+    """Declaration of a tool that can be called by the model, and is passed through the LiveKit client."""
 
     name: str
     description: str
@@ -409,19 +407,19 @@ ToolDeclaration = LocalFunctionToolDeclaration | PassThroughFunctionToolDeclarat
 
 @dataclass(frozen=True, kw_only=True)
 class LocalToolCallExecuted:
-json_encoded_output: str
+    json_encoded_output: str
 
 @dataclass(frozen=True, kw_only=True)
 class ShouldPassThroughToolCall:
-decoded_function_args: dict[str, Any]
+    decoded_function_args: dict[str, Any]
 
 # Type alias for tool execution results
 
 ExecuteToolCallResult = LocalToolCallExecuted | ShouldPassThroughToolCall
 
 class ToolContext(abc.ABC):
-"""Abstract base class for managing tool declarations and executions."""
-\_tool_declarations: dict[str, ToolDeclaration]
+    """Abstract base class for managing tool declarations and executions."""
+    _tool_declarations: dict[str, ToolDeclaration]
 
     def __init__(self) -> None:
         # TODO: This should be an ordered dict
@@ -479,18 +477,18 @@ class ToolContext(abc.ABC):
         return [v.model_description() for v in self._tool_declarations.values()]
 
 class ClientToolCallResponse(BaseModel):
-tool_call_id: str
-result: dict[str, Any] | str | float | int | bool | None = None
+    tool_call_id: str
+    result: dict[str, Any] | str | float | int | bool | None = None
 
 class RealtimeKitAgent:
-"""Main agent class for handling real-time communication and processing."""
-engine: RtcEngine
-channel: Channel
-client: RealtimeApiClient
-audio_queue: asyncio.Queue[bytes] = asyncio.Queue()
-message_queue: asyncio.Queue[messages.ResonseAudioTranscriptionDelta] = asyncio.Queue()
-message_done_queue: asyncio.Queue[messages.ResonseAudioTranscriptionDone] = asyncio.Queue()
-tools: ToolContext | None = None
+    """Main agent class for handling real-time communication and processing."""
+    engine: RtcEngine
+    channel: Channel
+    client: RealtimeApiClient
+    audio_queue: asyncio.Queue[bytes] = asyncio.Queue()
+    message_queue: asyncio.Queue[messages.ResonseAudioTranscriptionDelta] = asyncio.Queue()
+    message_done_queue: asyncio.Queue[messages.ResonseAudioTranscriptionDone] = asyncio.Queue()
+    tools: ToolContext | None = None
 
     _client_tool_futures: dict[str, asyncio.Future[ClientToolCallResponse]]
 
@@ -612,9 +610,8 @@ tools: ToolContext | None = None
         asyncio.create_task(self._stream_input_audio_to_model()).add_done_callback(
             log_exception
         )
-        asyncio.create_task(
-            self._stream_audio_queue_to_audio_output()
-        ).add_done_callback(log_exception)
+        asyncio.create_task(self._stream_audio_queue_to_audio_output()).add_done_callback(
+            log_exception
+        )
 
         asyncio.create_task(self._process_model_messages()).add_done_callback(
             log_exception
@@ -689,41 +686,41 @@ tools: ToolContext | None = None
                     logger.warning(f"Unhandled message type: {message=}")
 
 async def shutdown(loop, signal=None):
-"""Gracefully shut down the application."""
-if signal:
-print(f"Received exit signal {signal.name}...")
-
-    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
-
-    print(f"Cancelling {len(tasks)} outstanding tasks")
-    for task in tasks:
-        task.cancel()
-
-    await asyncio.gather(*tasks, return_exceptions=True)
-    loop.stop()
-
-if **name** == "**main**": # Load environment variables and run the agent
-load_dotenv()
-asyncio.run(
-RealtimeKitAgent.entry_point(
-engine=RtcEngine(appid="aab8b8f5a8cd4469a63042fcfafe7063"),
-inference_config=InferenceConfig(
-system_message="""\
-You are a helpful assistant. If asked about the weather, make sure to use the provided tool to get that information. \
-If you are asked a question that requires a tool, say something like "working on that" and don't provide a concrete response \
-until you have received the response to the tool call.\
+    """Gracefully shut down the application."""
+    if signal:
+        print(f"Received exit signal {signal.name}...")
+
+    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
+
+    print(f"Cancelling {len(tasks)} outstanding tasks")
+    for task in tasks:
+        task.cancel()
+
+    await asyncio.gather(*tasks, return_exceptions=True)
+    loop.stop()
+
+if __name__ == "__main__":  # Load environment variables and run the agent
+    load_dotenv()
+    asyncio.run(
+        RealtimeKitAgent.entry_point(
+            engine=RtcEngine(appid="aab8b8f5a8cd4469a63042fcfafe7063"),
+            inference_config=InferenceConfig(
+                system_message="""\\
+You are a helpful assistant. If asked about the weather, make sure to use the provided tool to get that information. \\
+If you are asked a question that requires a tool, say something like "working on that" and don't provide a concrete response \\
+until you have received the response to the tool call.\\
 """,
-voice=messages.Voices.Alloy,
-turn_detection=messages.TurnDetectionTypes.SERVER_VAD,
-),
-)
-)
+                voice=messages.Voices.Alloy,
+                turn_detection=messages.TurnDetectionTypes.SERVER_VAD,
+            ),
+        )
+    )
 `}
 
 </CodeBlock>
 </details>
 
-The `agent.py` imports key classes from `rtc.py`, which serves as a wrapper around the Agora Python Voice SDK, facilitating communication and managing audio streams. For SDK setup and dependencies, refer to [Voice calling quickstart](/voice-calling/get-started/get-started-sdk?platform=python).
+The `agent.py` script imports key classes from `rtc.py`, which implements the server-side Agora Python Voice SDK, facilitating communication and managing audio streams. For SDK setup and dependencies, refer to [Voice calling quickstart](/voice-calling/get-started/get-started-sdk?platform=python).
 
 Below is the complete code for `rtc.py`.
 
@@ -732,7 +729,7 @@ Below is the complete code for `rtc.py`.
   <CodeRtcPy />
 </details>
 
-## **Test your code**
+## Test the code
 
 1. **Update the values for** `AGORA_APP_ID` **and** `OPENAI_API_KEY` **in the project's** `.env` **file**.  
    This step ensures that the necessary credentials for Agora and OpenAI are correctly configured in your project.
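+
+   The following is a minimal sketch of the `.env` file, using placeholder values for the two keys named above; your file may contain additional keys depending on your setup:
+
+   ```
+   AGORA_APP_ID=your_agora_app_id
+   OPENAI_API_KEY=your_openai_api_key
+   ```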
@@ -745,7 +742,7 @@ Below is the complete code for `rtc.py`.
 
    This command launches the `agent.py` script, initializing the Agora channel and the OpenAI API connection.
 
-## **Reference**
+## Reference
 
 This section contains additional information or links to relevant documentation that complements the current page or explains other aspects of the product.
 
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-implementation/python.mdx b/shared/video-sdk/get-started/get-started-sdk/project-implementation/python.mdx
index 27fa285a6..7c956f7f6 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-implementation/python.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-implementation/python.mdx
@@ -23,7 +23,6 @@ from agora.rtc.local_user import LocalUser
 from agora.rtc.local_user_observer import IRTCLocalUserObserver
 from agora.rtc.rtc_connection import RTCConnection, RTCConnInfo
 from agora.rtc.rtc_connection_observer import IRTCConnectionObserver
-from pyee.asyncio import AsyncIOEventEmitter
 ```
 
 ### Initialize the engine
@@ -52,6 +51,10 @@ class RtcEngine:
 
 To asynchronously join a channel, implement a `Channel` class. When you create an instance of the class, the initializer sets up the necessary components for joining a channel. It takes an instance of `RtcEngine`, a `channelId`, and a `uid` as parameters. During initialization, the code creates an event emitter, configures the connection for broadcasting, and registers an event observer for channel events. It also sets up the local user’s audio configuration to enable audio streaming.
 
+<Admonition type="info" title="Note">
+UIDs in the Python SDK are set using a string value. Agora recommends using only numerical values for UID strings to ensure compatibility with all Agora products and extensions.
+</Admonition>
+
 ```python
 class Channel():
     def __init__(
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-setup/python.mdx b/shared/video-sdk/get-started/get-started-sdk/project-setup/python.mdx
index d7bae4b00..564569bdf 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-setup/python.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-setup/python.mdx
@@ -12,10 +12,15 @@
     pip3 install pyee
     ```
 
-1. Install the <Vg k="COMPANY" /> <Vpl k="NAME" /> SDK.
+1. Install the <Vg k="COMPANY" /> server side <Vpl k="NAME" /> SDK.
 
     ```
     pip3 install agora-python-server-sdk
     ```
 
+    <Admonition type="info" title="Note">
+    The Python SDK is a server-side SDK, designed to run on your backend rather than on client devices.
+    </Admonition>
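+
+    To confirm that the SDK is available to your Python environment, you can run a quick import check. This is a minimal sketch; the module path matches the imports used in the implementation section of this guide:
+
+    ```python
+    # check_install.py: verify that the server-side SDK is importable
+    from agora.rtc.rtc_connection import RTCConnection, RTCConnInfo
+
+    print("Agora Python server-side SDK imported successfully")
+    ```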
+
 </PlatformWrapper>
\ No newline at end of file
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-test/python.mdx b/shared/video-sdk/get-started/get-started-sdk/project-test/python.mdx
index 2ca2505aa..a684aea31 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-test/python.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-test/python.mdx
@@ -2,9 +2,9 @@
 
 Follow these steps to test the demo code:
 
-1. Create a file named `rtc.py` and paste the [complete source code](#complete-source-code) into this file.
+1. Create a file named `rtc.py` and paste the [complete source code](#complete-code) into this file.
 
-1. Create a file named `test_rtc.py` in the same folder as `rtc.py` and copy the following code to the file:
+1. Create a file named `main.py` in the same folder as `rtc.py` and copy the following code to the file:
 
     ```python
     import asyncio
@@ -38,7 +38,7 @@ Follow these steps to test the demo code:
 1. To run the app, execute the following command in your terminal:
 
     ```bash
-    python3 run_rtc.py
+    python3 main.py
     ```
 
     You see output similar to the following: