The variables represent the following information:
`sessionid`: The recording ID, the unique identifier of the current recording.
`sessiondate`: The time when the recording starts. The time zone is UTC+0, and the variable value consists of year, month, day, hour, minute, second, and millisecond. For example, 20190611073246073 represents 7:32:46.073, June 11, 2019.
|
+
+An example of the `payload`:
+
+```json
+{
+ "eventName": "new_record_start",
+ "entryPoint": "live",
+ "streamName": "test_stream",
+ "fileName": "live/test_stream/536694d6ef4b06c137b0e8a0a3f225a7_20211122094718109.m3u8"
+}
+```
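+
+The `fileName` in the example above embeds the `sessionid` and `sessiondate` variables described earlier. The following is a minimal, illustrative sketch, not part of the notification service, showing how a webhook consumer might split them out:
+
+```cpp
+// Illustrative only: parse a recording file name such as
+// "live/test_stream/{sessionid}_{sessiondate}.m3u8".
+#include <iostream>
+#include <string>
+
+int main() {
+    std::string fileName =
+        "live/test_stream/536694d6ef4b06c137b0e8a0a3f225a7_20211122094718109.m3u8";
+    std::string base = fileName.substr(fileName.find_last_of('/') + 1); // drop the entry point and stream name
+    base = base.substr(0, base.find_last_of('.'));                      // drop the ".m3u8" extension
+    std::string sessionId = base.substr(0, base.find('_'));             // the recording ID
+    std::string sessionDate = base.substr(base.find('_') + 1);          // UTC start time, yyyyMMddHHmmssSSS
+    std::cout << sessionId << " started at " << sessionDate << "\n";
+    return 0;
+}
+```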
+
+
+
+### 7 new_record_end
+
+`eventType` 7 indicates that the custom recording ends, and the `payload` contains the following fields:
+
+| Field | Data type | Meaning |
+| :----------- | :-------- | :----------------------------------------------------------- |
+| `eventName` | String | The event name, that is, `new_record_end`. |
+| `entryPoint` | String | The entry point name. |
+| `streamName` | String | The stream name. |
+| `fileName` | String | The recording file name.
The variables represent the following information:
`sessionid`: The recording ID, the unique identifier of the current recording.
`sessiondate`: The time when the recording starts. The time zone is UTC+0, and the variable value consists of year, month, day, hour, minute, second, and millisecond. For example, 20190611073246073 represents 7:32:46.073, June 11, 2019.
|
+
+An example of the `payload`:
+
+```json
+{
+ "eventName": "new_record_end",
+ "entryPoint": "live",
+ "streamName": "test_stream",
+ "fileName": "live/test_stream/536694d6ef4b06c137b0e8a0a3f225a7_20211122094718109.m3u8"
+}
+```
+
### 101 new_standard_record_file
`eventType` 101 indicates that the new standard record file is generated, and the `payload` contains the following fields:
@@ -309,3 +360,26 @@ An example of the `payload`:
"suggestion": "pass"
}
```
+
+
+### 106 new_encrypt_record_file
+
+`eventType` 106 indicates that the DRM-encrypted recording file is generated, and the `payload` contains the following fields:
+
+| Field | Data type | Meaning |
+| :----------- | :-------- | :----------------------------------------------------------- |
+| `eventName` | String | The event name, that is, `new_encrypt_record_file`. |
+| `entryPoint` | String | The entry point name. |
+| `streamName` | String | The stream name. |
+| `fileName` | String | The name of the DRM-encrypted recording file.
The variables represent the following information:
`sessionid`: The recording ID, the unique identifier of the current recording.
`sessiondate`: The time when the recording starts. The time zone is UTC+0, and the variable value consists of year, month, day, hour, minute, second, and millisecond. For example, 20190611073246073 represents 7:32:46.073, June 11, 2019.
|
+
+An example of the `payload`:
+
+```json
+{
+ "eventName": "new_encrypt_record_file",
+ "entryPoint": "live",
+ "streamName": "test_stream",
+ "fileName": "encrypt/live/test_stream/536694d6ef4b06c137b0e8a0a3f225a7_20211122094718109.m3u8"
+}
+```
diff --git a/shared/common/no-uikit.mdx b/shared/common/no-uikit.mdx
index 24a00e19b..63a0c4972 100644
--- a/shared/common/no-uikit.mdx
+++ b/shared/common/no-uikit.mdx
@@ -9,4 +9,6 @@ import * as data from '@site/data/variables';
**Currently, there is no for this platform.**
-
+
+**Currently, there is no for this platform.**
+
diff --git a/shared/common/prerequities.mdx b/shared/common/prerequities.mdx
index 03570b8f3..98d4fa5b3 100644
--- a/shared/common/prerequities.mdx
+++ b/shared/common/prerequities.mdx
@@ -16,6 +16,10 @@
- A device running Windows 7 or higher.
- Microsoft Visual Studio 2017 or higher with [Desktop development with C++](https://devblogs.microsoft.com/cppblog/windows-desktop-development-with-c-in-visual-studio/) support.
+
+- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher with C++ and desktop development support.
+- [Unreal Engine 4.27](https://www.unrealengine.com/en-US/download) or higher.
+
- A [supported browser](../reference/supported-platforms#browsers).
- Physical media input devices, such as a camera and a microphone.
diff --git a/shared/extensions-marketplace/_use-an-extension.mdx b/shared/extensions-marketplace/_use-an-extension.mdx
index 2c8836f33..9a627eddb 100644
--- a/shared/extensions-marketplace/_use-an-extension.mdx
+++ b/shared/extensions-marketplace/_use-an-extension.mdx
@@ -25,6 +25,10 @@ An extension accesses voice and video data when it is captured from the user's l
A typical transmission pipeline consists of a chain of procedures, including capture, pre-processing, encoding, transmitting, decoding, post-processing, and play. Audio or video extensions are inserted into either the pre-processing or post-processing procedure, in order to modify the voice or video data in the transmission pipeline.
+
+**This functionality is not supported for Unreal Engine.**
+
+
## Prerequisites
In order to follow this procedure you must have:
@@ -82,4 +86,6 @@ To ensure that you have integrated the extension in your :
This section contains information that completes the information in this page, or points you to documentation that explains other aspects to this product.
-
\ No newline at end of file
+
+
+
\ No newline at end of file
diff --git a/shared/extensions-marketplace/activefence/reference/index.mdx b/shared/extensions-marketplace/activefence/reference/index.mdx
index ac69a9556..10685649e 100644
--- a/shared/extensions-marketplace/activefence/reference/index.mdx
+++ b/shared/extensions-marketplace/activefence/reference/index.mdx
@@ -4,13 +4,14 @@ import Ios from './ios.mdx';
import Macos from './macos.mdx';
import ReactNative from './react-native.mdx';
import Web from './web.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
-
+
\ No newline at end of file
diff --git a/shared/extensions-marketplace/ai-noise-suppression.mdx b/shared/extensions-marketplace/ai-noise-suppression.mdx
index fda5b873f..9efd5fbbf 100644
--- a/shared/extensions-marketplace/ai-noise-suppression.mdx
+++ b/shared/extensions-marketplace/ai-noise-suppression.mdx
@@ -30,6 +30,10 @@ In the pre-processing stage, uses deep learning noise reductio
![](/images/extensions-marketplace/ai-noise-suppression.png)
+
+ **This functionality is not supported for Unreal Engine.**
+
+
## Prerequisites
@@ -160,3 +164,5 @@ Currently, has the following limitations:
- Although supports Safari v14.1 and greater, there are performance issues. Best practice is to not support Safari.
- does not support browsers on mobile devices.
+
+
\ No newline at end of file
diff --git a/shared/extensions-marketplace/virtual-background.mdx b/shared/extensions-marketplace/virtual-background.mdx
index 3b20282bb..6a638837c 100644
--- a/shared/extensions-marketplace/virtual-background.mdx
+++ b/shared/extensions-marketplace/virtual-background.mdx
@@ -36,11 +36,6 @@ A typical transmission pipeline in the Agora Web SDK consists of a chain of proc
![](https://web-cdn.agora.io/docs-files/1647326674232)
-
-**Support for is not yet available for .**
-
-
-
## Project setup
@@ -60,4 +55,3 @@ This section contains information that completes the information in this page, o
-
\ No newline at end of file
diff --git a/shared/variables/global.js b/shared/variables/global.js
index c14b1de05..cfc7f2fea 100644
--- a/shared/variables/global.js
+++ b/shared/variables/global.js
@@ -10,7 +10,6 @@ export const VSDK_UNITY_FLUTTER_RN_RELEASE_API = '4.0.0.beta-2'
export const VSDK_WEB_RELEASE_API = '4.13.0'
export const MAJOR_VERSION = '4.x'
export const VSDK_FLUTTER_PREVIOUS_RELEASE = '5.x';
-
export const API_ROOT = 'https://api-ref.agora.io/en';
//export const API_ROOT = 'https://api-ref-staging.agora.io/en';
export const API_REF_ROOT = `${API_ROOT}/video-sdk`;
@@ -30,6 +29,8 @@ export const API_REF_IOS_ROOT_VOICE_SDK = `${API_REF_ROOT_VOICE_SDK}/ios/${VSDK_
export const API_REF_IOS_ROOT_RTC_KIT_VOICE_SDK = `${API_REF_IOS_ROOT_VOICE_SDK}/agorartckit`;
export const API_REF_IOS_ROOT_RTC_ENGINE_KIT_VOICE_SDK = `${API_REF_IOS_ROOT_RTC_KIT_VOICE_SDK}/agorartcenginekit`;
export const API_REF_RN_ROOT = `${API_REF_ROOT}/react-native/${MAJOR_VERSION}/API`;
+
+export const API_REF_UE_ROOT = `${API_REF_ROOT}/unreal-engine/${MAJOR_VERSION}/API`;
export const API_REF_RN_PREVIOUS_ROOT = `${API_REF_ROOT}/react-native/${VSDK_PREVIOUS_RELEASE_API}`;
export const API_REF_RN_ROOT_VOICE = `${API_REF_ROOT_VOICE_SDK}/react-native/${MAJOR_VERSION}/API`;
export const API_REF_MACOS_ROOT = `${API_REF_ROOT}/macos/${VSDK_RELEASE_API}/documentation`;
@@ -240,7 +241,6 @@ export const DEMO_BASIC_VIDEO_CALL_URL =
export const DEMO_PAGE_LINK = ` web demo`;
-
export const AGORA_DYNAMIC_KEY_CODE_BASE_URL =
'https://github.com/AgoraIO/Tools/tree/master/DynamicKey/AgoraDynamicKey';
diff --git a/shared/variables/platform.js b/shared/variables/platform.js
index 71f046872..897d8f1b5 100644
--- a/shared/variables/platform.js
+++ b/shared/variables/platform.js
@@ -68,6 +68,12 @@ const data = {
NAME: 'Linux C',
PATH: 'linux-c',
CLIENT: 'app'
+ },
+
+ 'unreal': {
+ NAME: 'Unreal Engine',
+ PATH: 'unreal',
+ CLIENT: 'game'
}
};
diff --git a/shared/video-sdk/_get-started-uikit.mdx b/shared/video-sdk/_get-started-uikit.mdx
index d65688db4..9b818ae28 100644
--- a/shared/video-sdk/_get-started-uikit.mdx
+++ b/shared/video-sdk/_get-started-uikit.mdx
@@ -12,6 +12,7 @@ import NoUIKit from '@docs/shared/common/no-uikit.mdx';
+
This page outlines the minimum code you need to integrate high-quality, low-latency functionality into your with a customizable UI.
@@ -75,3 +76,4 @@ This section contains information that completes the information in this page, o
+
diff --git a/shared/video-sdk/authentication-workflow/project-implementation/index.mdx b/shared/video-sdk/authentication-workflow/project-implementation/index.mdx
index 48b167c2d..8e2c7afda 100644
--- a/shared/video-sdk/authentication-workflow/project-implementation/index.mdx
+++ b/shared/video-sdk/authentication-workflow/project-implementation/index.mdx
@@ -8,6 +8,7 @@ import Flutter from './flutter.mdx';
import MacOs from './macos.mdx'
import Windows from './windows.mdx'
import LinuxC from './linux-c.mdx';
+import Unreal from './unreal.mdx'
@@ -20,4 +21,5 @@ import LinuxC from './linux-c.mdx';
+
diff --git a/shared/video-sdk/authentication-workflow/project-implementation/unreal.mdx b/shared/video-sdk/authentication-workflow/project-implementation/unreal.mdx
new file mode 100644
index 000000000..875b962b5
--- /dev/null
+++ b/shared/video-sdk/authentication-workflow/project-implementation/unreal.mdx
@@ -0,0 +1,207 @@
+
+
+1. **Add the necessary dependencies**
+
+ In order to make HTTPS calls to a token server and interpret the JSON return parameters, integrate the `HTTP` module into your Unreal project. In `AgoraImplementation.Build.cs`, update `PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore"});` with the following line:
+
+ ``` cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "Json", "AgoraPlugin", "HTTP" });
+ ```
+
+3. **Enable the user to specify a channel name**
+
+ To get the channel name from the user, you need a text box in the UI. To add a text box, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following property specifiers after `UButton* LeaveBtn = nullptr;`:
+
+ ``` cpp
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UEditableTextBox* channelTextBox = nullptr;
+ ```
+
+ 2. In Unreal Editor, go to **Content Browser** and open `NewBlueprint`, then do the following:
+
+ 1. Drag **Text Box** from the **Input** section of the **Palette** to the canvas panel. A text box appears on the canvas.
+
+ 1. In **Details**, rename **EditableTextBox_0** to `channelTextBox` (the name must match the property bound with `BindWidget`), then change the following properties:
+
+ * **Position X** - 799
+ * **Position Y** - 192
+ * **Size X** - 257
+ * **Size Y** - 43
+
+ Click **Compile** to save and compile the newly added widget.
+
+4. **Add the required header files**
+
+ In `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ``` cpp
+ #include "Components/EditableTextBox.h"
+ #include "Http.h"
+ #include <string>
+ using namespace std;
+ ```
+
+5. **Add variables for your connection to the token server**
+
+ Declare the variables you need to specify the local user uid, token server URL, and the token expire time. In `MyUserWidget.h`, add the following declarations to the `UMyUserWidget` class:
+
+ ``` cpp
+ std::string serverUrl = ""; // The base URL to your token server, for example, "https://agora-token-service-production-92ff.up.railway.app".
+ int tokenExpireTime = 40; // Expire time in Seconds.
+ int localUid = 1;
+ ```
+
+ Make sure you specify the token server URL in exactly the same format as shown in the example.
+
+7. **Retrieve a token from the server**
+
+ Use a GET request to retrieve an authentication token for a specific channel from the token server, then decode the return parameters. To implement this workflow, do the following:
+
+
+ 1. Set up a method to fetch a token. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void fetchToken();
+ ```
+
+ 2. Add the logic of fetching a token from the server to the `fetchToken` method. In `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+
+ ```cpp
+ void UMyUserWidget::fetchToken()
+ {
+ // Set up an HTTP GET request to fetch a token from the token server.
+ FHttpRequestRef Request = FHttpModule::Get().CreateRequest();
+ Request->OnProcessRequestComplete().BindUObject(this, &UMyUserWidget::OnResponseReceived);
+ std::string serverUrlString = serverUrl + "/rtc/";
+ serverUrlString += channelName;
+ // Concatenate token type, uid, and expire time with the server URL string.
+ serverUrlString += "/1/uid/";
+ serverUrlString += to_string(localUid) + "/?expiry=";
+ serverUrlString += to_string(tokenExpireTime);
+ // Convert the server URL string to an FString.
+ FString Furl(serverUrlString.c_str());
+ UE_LOG(LogTemp, Warning, TEXT("%s"), *Furl);
+ // Set the request URL.
+ Request->SetURL(Furl);
+ // Set the request type.
+ Request->SetVerb("GET");
+ // Process the request to retrieve a token.
+ Request->ProcessRequest();
+ }
+ ```
+
+
+ ```cpp
+ void UMyUserWidget::fetchToken()
+ {
+ // Set up an HTTP GET request to fetch a token from the token server.
+ FHttpRequestRef Request = FHttpModule::Get().CreateRequest();
+ Request->OnProcessRequestComplete().BindUObject(this, &UMyUserWidget::OnResponseReceived);
+ std::string serverUrlString = serverUrl + "/rtc/";
+ serverUrlString += channelName;
+ // Concatenate token type, uid, and expire time with the server URL string.
+ if (userRole == "Host")
+ {
+ serverUrlString += "/1/uid/";
+ }
+ else
+ {
+ serverUrlString += "/2/uid/";
+ }
+ serverUrlString += to_string(localUid) + "/?expiry=";
+ serverUrlString += to_string(tokenExpireTime);
+ // Convert the server URL string to an FString.
+ FString Furl(serverUrlString.c_str());
+ UE_LOG(LogTemp, Warning, TEXT("%s"), *Furl);
+ // Set the request URL.
+ Request->SetURL(Furl);
+ // Set the request type.
+ Request->SetVerb("GET");
+ // Process the request to retrieve a token.
+ Request->ProcessRequest();
+ }
+ ```
+
+
+8. **Update the `joinChannel` method to fetch a token**
+
+ To retrieve a fresh token from the token server, call `fetchToken` before you join a channel. In `MyUserWidget.cpp`, locate `OnJoinButtonClick` and add the following code after `agoraEngine->setupLocalVideo(videoCanvas);`:
+
+ ``` cpp
+ FString channelname = channelTextBox->GetText().ToString();
+ if (channelname.IsEmpty() || serverUrl == "")
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Please make sure you pass a valid server URL and channel name"));
+ }
+ else
+ {
+ channelName = std::string(TCHAR_TO_UTF8(*channelname));
+ fetchToken();
+ }
+ ```
+ When a response arrives from the server, `OnResponseReceived` is called. You use this callback to parse the token and join the channel. To implement this, do the following:
+
+ 1. Set up `OnResponseReceived`. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void OnResponseReceived(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bConnectedSuccessfully);
+ ```
+ 2. Parse the token from the request response. In `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnResponseReceived(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bConnectedSuccessfully)
+ {
+ // Parse the response to retrieve the token.
+ TSharedPtr<FJsonObject> ResponseObj;
+ TSharedRef<TJsonReader<TCHAR>> Reader = TJsonReaderFactory<>::Create(Response->GetContentAsString());
+ FJsonSerializer::Deserialize(Reader, ResponseObj);
+ UE_LOG(LogTemp, Display, TEXT("Response %s"), *Response->GetContentAsString());
+ UE_LOG(LogTemp, Display, TEXT("rtcToken: %s"), *ResponseObj->GetStringField("rtcToken"));
+ }
+ ```
+
+ 3. Move the `joinChannel` call to the `OnResponseReceived` method. In `MyUserWidget.cpp`, locate `OnJoinButtonClick` and remove the following lines:
+
+ ```cpp
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", 0);
+ isJoin = true;
+ ```
+
+ 4. In `MyUserWidget.cpp`, locate `OnResponseReceived` and add the following after `FJsonSerializer::Deserialize(Reader, ResponseObj);`:
+
+ ```cpp
+ token = std::string(TCHAR_TO_UTF8(*ResponseObj->GetStringField("rtcToken")));
+ if (isJoin == true)
+ {
+ agoraEngine->renewToken(token.c_str());
+ }
+ else
+ {
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", localUid);
+ isJoin = true;
+ }
+ ```
+9. **Handle the event triggered when the token is about to expire**
+
+ A token expires after the `expireTime` specified in the call to the token server, or after 24 hours if the time is not specified. The `onTokenPrivilegeWillExpire` callback fires when the current token is about to expire, so that a fresh token can be retrieved and used. To implement `onTokenPrivilegeWillExpire`, take the following steps:
+
+ 1. Set up the `onTokenPrivilegeWillExpire` callback. In `MyUserWidget.h`, add the following method to `UMyUserWidget`:
+
+ ```cpp
+ void onTokenPrivilegeWillExpire(const char* expiredToken);
+ ```
+
+ 2. Call `fetchToken` when `onTokenPrivilegeWillExpire` is triggered. In `MyUserWidget.cpp`, add the following before `OnJoinButtonClick`:
+
+ ```cpp
+ void UMyUserWidget::onTokenPrivilegeWillExpire(const char* expiredToken)
+ {
+ UE_LOG(LogTemp, Display, TEXT("Token expired: Retrieving a token from the server...."));
+ fetchToken();
+ }
+ ```
+
+
diff --git a/shared/video-sdk/authentication-workflow/project-test/index.mdx b/shared/video-sdk/authentication-workflow/project-test/index.mdx
index 9d30bf9d3..7238e1835 100644
--- a/shared/video-sdk/authentication-workflow/project-test/index.mdx
+++ b/shared/video-sdk/authentication-workflow/project-test/index.mdx
@@ -7,6 +7,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import MacOs from './macos.mdx'
import Windows from './windows.mdx'
+import Unreal from './unreal.mdx'
import LinuxC from './linux-c.mdx';
@@ -18,4 +19,5 @@ import LinuxC from './linux-c.mdx';
+
diff --git a/shared/video-sdk/authentication-workflow/project-test/unreal.mdx b/shared/video-sdk/authentication-workflow/project-test/unreal.mdx
new file mode 100644
index 000000000..1ec916c94
--- /dev/null
+++ b/shared/video-sdk/authentication-workflow/project-test/unreal.mdx
@@ -0,0 +1,17 @@
+
+3. Set the variables in your :
+ 1. Update `appId` in the declarations to the value from .
+
+ 1. Set `token` to an empty string in the declarations.
+
+ 1. Update `serverUrl` in the declarations to the base address of your token server, for example, `https://agora-token-service-production-92ff.up.railway.app`.
+
+
+1. In Unreal Editor, click **Play**. A moment later you see the project running on your development device.
+
+ If this is the first time you run the project, grant microphone and camera access to your app.
+
+1. Enter the same channel name in the UI text box that you used to connect to the web demo.
+
+1. Click **Join** to connect your Unreal Engine app to the web demo.
+
diff --git a/shared/video-sdk/authentication-workflow/reference/index.mdx b/shared/video-sdk/authentication-workflow/reference/index.mdx
index 92cb5b55d..2c12f3d8c 100644
--- a/shared/video-sdk/authentication-workflow/reference/index.mdx
+++ b/shared/video-sdk/authentication-workflow/reference/index.mdx
@@ -6,6 +6,7 @@ import ReactNative from './react-native.mdx'
import Unity from './unity.mdx';
import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
+import Unreal from './unreal.mdx';
import LinuxC from './linux-c.mdx';
import Windows from './windows.mdx'
@@ -17,5 +18,6 @@ import Windows from './windows.mdx'
+
diff --git a/shared/video-sdk/authentication-workflow/reference/unreal.mdx b/shared/video-sdk/authentication-workflow/reference/unreal.mdx
new file mode 100644
index 000000000..9c1b543e0
--- /dev/null
+++ b/shared/video-sdk/authentication-workflow/reference/unreal.mdx
@@ -0,0 +1,9 @@
+
+
+### API reference
+
+- renewToken
+
+- onTokenPrivilegeWillExpire
+
+
diff --git a/shared/video-sdk/develop/_custom-video-and-audio.mdx b/shared/video-sdk/develop/_custom-video-and-audio.mdx
index d1a78f0de..9a4920b3b 100644
--- a/shared/video-sdk/develop/_custom-video-and-audio.mdx
+++ b/shared/video-sdk/develop/_custom-video-and-audio.mdx
@@ -41,6 +41,7 @@ The following figure shows the workflow you need to implement to stream a custom
![Process custom audio](/images/voice-sdk/custom-source-audio.svg)
+
## Prerequisites
To follow this procedure you must have implemented the [](../get-started/get-started-sdk) project for .
diff --git a/shared/video-sdk/develop/_migration-guide.mdx b/shared/video-sdk/develop/_migration-guide.mdx
index a8274f9f8..2314d667b 100644
--- a/shared/video-sdk/develop/_migration-guide.mdx
+++ b/shared/video-sdk/develop/_migration-guide.mdx
@@ -9,6 +9,8 @@ import Electron from '@docs/shared/video-sdk/develop/migration-guide/electron.md
import Windows from '@docs/shared/video-sdk/develop/migration-guide/windows.mdx';
import Unity from '@docs/shared/video-sdk/develop/migration-guide/unity.mdx';
import ReactNative from '@docs/shared/video-sdk/develop/migration-guide/react.mdx';
+import Unreal from '@docs/shared/video-sdk/develop/migration-guide/unreal.mdx';
+
@@ -18,4 +20,5 @@ import ReactNative from '@docs/shared/video-sdk/develop/migration-guide/react.md
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/android.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/android.mdx
index 9e2b0747f..d2f7eed50 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/android.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/android.mdx
@@ -146,7 +146,7 @@ To add sound and voice effect to your , take the following ste
audioEffectManager.preloadEffect(soundEffectId, soundEffectFilePath);
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
When a user presses the button, the sound effect is played. When they press the button again, the effect is paused and resumed alternately. To do this, add the following method to the `MainActivity` class:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/electron.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/electron.mdx
index db1f4673c..f01588234 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/electron.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/electron.mdx
@@ -84,7 +84,7 @@ To add sound and voice effect to your , take the following ste
}
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
When a user presses the button, the sound effect is played. When they press the button again, the effect is paused and resumed alternately. To do this, in `preload.js`, add the following method before `document.getElementById("join").onclick = async function ()`:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/flutter.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/flutter.mdx
index 10a03c4f9..9b56c6c02 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/flutter.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/flutter.mdx
@@ -114,7 +114,7 @@ To add sound and voice effect to your , take the following ste
);
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
When a user presses the sound effect button, the sound effect starts playing. When they press the button again, the effect is paused or resumed alternately. To do this, add the following method to the `_MyAppState` class:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/index.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/index.mdx
index be501f963..ca4830ea9 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -16,4 +17,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/ios.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/ios.mdx
index a6dbbc8f2..7a64163f2 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/ios.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/ios.mdx
@@ -136,7 +136,7 @@ To add sound and voice effect to your , take the following ste
agoraEngine.preloadEffect(soundEffectId, filePath: soundEffectFilePath)
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
The first time a user presses the button, the sound effect is played. The next time, the effect is paused. To enable this functionality, add the following function to the `ViewController` class:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/swift.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/swift.mdx
index ffd1d6fb2..9b94770a9 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/swift.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/swift.mdx
@@ -99,7 +99,7 @@ To add sound and voice effect to your , take the following ste
agoraEngine.preloadEffect(soundEffectId, filePath: soundEffectFilePath)
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
The first time a user presses the button, the sound effect is played. The next time, the effect is paused. To enable this functionality, add the following function to the `ViewController` class:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unity.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unity.mdx
index 523494ba0..f8a87b247 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unity.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unity.mdx
@@ -141,7 +141,7 @@ To add sound and voice effect to your , take the following ste
RtcEngine.PreloadEffect(soundEffectId, soundEffectFilePath);
```
-1. **Play, pause, or resume playing the sound effect**
+1. **Play, pause, or resume playing a sound effect**
When a user presses the button, the sound effect is played. When they press the button again, the effect is paused and resumed alternately. To implement this workflow, in your script file, add the following method to `NewBehaviourScript`:
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx
new file mode 100644
index 000000000..015ae6f0d
--- /dev/null
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx
@@ -0,0 +1,266 @@
+
+
+### Implement the user interface
+
+To enable the user to play sound and voice effects and modify the audio route, add the following elements to the user interface:
+
+ * A button to start and stop audio mixing.
+ * A button to play the sound effect.
+ * A button to apply various voice effects.
+ * Three text widgets for the buttons.
+
+To implement this UI, take the following steps:
+
+1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A button control appears on the canvas panel.
+
+1. In **Details**, rename **Button_0** to `voiceEffectButton`, then change the following coordinates:
+
+ * **Position X** - 650
+ * **Position Y** - 960
+ * **Size X** - 338
+ * **Size Y** - 60
+
+1. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `voiceEffectButton`.
+
+ A text widget is added to `voiceEffectButton`.
+
+1. In **Details**, rename the text widget to `voiceEffectBtnText`, then change the **Text** field to `Apply Voice Effect`.
+
+1. Use the same procedure to create two more buttons called `playEffectButton` and `audioMixingButton`.
+
+ Do not rename their text widgets. Only the text widget of the voice effect button is accessed from code.
+
+1. Select `playEffectButton` and change the following coordinates in **Details**:
+
+ * **Position X** - 360
+ * **Position Y** - 960
+ * **Size X** - 275
+ * **Size Y** - 60
+
+1. Select `audioMixingButton` and change the following coordinates in **Details**:
+
+ * **Position X** - 1296
+ * **Position Y** - 960
+ * **Size X** - 184
+ * **Size Y** - 60
+
+1. Select the **Text** widget of `playEffectButton`, then in **Details** change the **Text** field to `Play Audio Effect`.
+
+1. Select the **Text** widget of `audioMixingButton`, then in **Details** change the **Text** field to `Mix Audio`.
+
+
+### Handle the system logic
+
+This section describes the steps required to set up access to the UI elements.
+
+1. **Define variables to manage audio effects and access the UI elements**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ int soundEffectId = 1; // Unique identifier for the sound effect file
+ std::string soundEffectFilePath = "https://www.soundjay.com/human/applause-01.mp3"; // URL or path to the sound effect
+ int soundEffectStatus = 0;
+ int voiceEffectIndex = 0;
+ bool audioPlaying = false; // Manage the audio mixing state
+ std::string audioFilePath = "https://www.kozco.com/tech/organfinale.mp3"; // URL or path to the audio mixing file
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* playEffectButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* voiceEffectButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* audioMixingButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UTextBlock* voiceEffectBtnText = nullptr;
+ ```
+
+
+1. **Import the required UI library**
+
+ To access the text widget from the UI, in `MyUserWidget.h`, add the following include before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/TextBlock.h"
+ ```
+
+
+1. **Set up event listeners for the UI elements**
+
+ To add event listeners for the UI elements, do the following:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnAudioMixingButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnVoiceEffectButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnPlayEffectButtonClick();
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ audioMixingButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnAudioMixingButtonClick);
+ voiceEffectButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnVoiceEffectButtonClick);
+ playEffectButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnPlayEffectButtonClick);
+ ```
+
+### Implement sound and voice effects
+
+To add sound and voice effect to your , take the following steps:
+
+1. **Enable the user to start and stop audio mixing**
+
+ When the user presses **Mix Audio**, the audio file specified by `audioFilePath` is fetched and mixed with the local audio stream, so both remote and local users hear it playing together with the audio stream. To enable audio mixing, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnAudioMixingButtonClick()
+ {
+ audioPlaying = !audioPlaying;
+ if (audioPlaying)
+ {
+ agoraEngine->startAudioMixing(audioFilePath.c_str(), false, 1, 0);
+ }
+ else
+ {
+ agoraEngine->stopAudioMixing();
+ }
+ }
+ ```
+
+1. **Pre-load sound effects**
+
+ To set up playing voice effects, call `preloadEffect` to pre-load the sound effects. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ // Pre-load sound effects to improve performance
+ agoraEngine->preloadEffect(soundEffectId, soundEffectFilePath.c_str());
+ ```
+
+1. **Play, pause, or resume playing a sound effect**
+
+ When a user presses **Play Audio Effect**, the sound effect is played. When they press the button again, the effect is paused and resumed alternately. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnPlayEffectButtonClick()
+ {
+ if (soundEffectStatus == 0)
+ {
+ // Play effect
+ agoraEngine->playEffect(
+ soundEffectId, // The ID of the sound effect file.
+ soundEffectFilePath.c_str(), // The path of the sound effect file.
+ 0, // The number of sound effect loops. -1 means an infinite loop. 0 means once.
+ 1, // The pitch of the audio effect. 1 represents the original pitch.
+ 0.0, // The spatial position of the audio effect. 0.0 represents that the audio effect plays in the front.
+ 100, // The volume of the audio effect. 100 represents the original volume.
+ true,// Whether to publish the audio effect to remote users.
+ 0 // The playback starting position of the audio effect file in ms.
+ );
+ soundEffectStatus = 1;
+ }
+ else if (soundEffectStatus == 1)
+ {
+ // Pause effect.
+ agoraEngine->pauseEffect(soundEffectId);
+ soundEffectStatus = 2;
+ }
+ else if (soundEffectStatus == 2)
+ {
+ // Resume effect,
+ agoraEngine->resumeEffect(soundEffectId);
+ soundEffectStatus = 1;
+ }
+ }
+ ```
+
+1. **Inform the user when the effect finishes playing**
+
+ When the sound effect finishes playing, the `onAudioEffectFinished` event is fired. You use this callback to stop the effect and reset its playing status. To implement this logic, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void onAudioEffectFinished(int soundId) override;
+ ```
+
+ 1. In `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::onAudioEffectFinished(int soundId)
+ {
+ agoraEngine->stopEffect(soundId);
+ soundEffectStatus = 0; // Stopped
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onAudioEffectFinished:: Sound effect %u has been finished"), soundId);
+ }
+ ```
+
+1. **Set an audio profile**
+
+ For applications where special audio performance is required, you set a suitable audio profile and audio scenario. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ // Specify the audio scenario and audio profile
+ agoraEngine->setAudioProfile(AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO);
+ agoraEngine->setAudioScenario(AUDIO_SCENARIO_GAME_STREAMING);
+ ```
+
+1. **Apply various voice and audio effects**
+
+ When a user presses **Apply Voice Effect**, apply a new voice effect and change the text on the button to describe the effect. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnVoiceEffectButtonClick()
+ {
+ voiceEffectIndex++;
+ // Turn off all previous effects
+ agoraEngine->setVoiceBeautifierPreset(VOICE_BEAUTIFIER_OFF);
+ agoraEngine->setAudioEffectPreset(AUDIO_EFFECT_OFF);
+ agoraEngine->setVoiceConversionPreset(VOICE_CONVERSION_OFF);
+ if (voiceEffectIndex == 1)
+ {
+ agoraEngine->setVoiceBeautifierPreset(CHAT_BEAUTIFIER_MAGNETIC);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Chat Beautifier"));
+ }
+ else if (voiceEffectIndex == 2)
+ {
+ agoraEngine->setVoiceBeautifierPreset(SINGING_BEAUTIFIER);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Singing Beautifier"));
+ }
+ else if (voiceEffectIndex == 3)
+ {
+ agoraEngine->setAudioEffectPreset(VOICE_CHANGER_EFFECT_HULK);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Hulk"));
+ }
+ else if (voiceEffectIndex == 4)
+ {
+ agoraEngine->setVoiceConversionPreset(VOICE_CHANGER_BASS);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Voice Changer"));
+ }
+ else if (voiceEffectIndex == 5)
+ {
+ // Sets the local voice equalization.
+ // The first parameter sets the band frequency. The value ranges between 0 and 9.
+ // Each value represents the center frequency of the band:
+ // 31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, and 16k Hz.
+ // The second parameter sets the gain of each band between -15 and 15 dB.
+ // The default value is 0.
+ agoraEngine->setLocalVoiceEqualization(AUDIO_EQUALIZATION_BAND_FREQUENCY::AUDIO_EQUALIZATION_BAND_4K, 3);
+ agoraEngine->setLocalVoicePitch(0.5);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Voice Equalization"));
+ }
+ else if (voiceEffectIndex > 5)
+ {
+ // Remove all effects
+ voiceEffectIndex = 0;
+ agoraEngine->setLocalVoicePitch(1.0);
+ agoraEngine->setLocalVoiceEqualization(AUDIO_EQUALIZATION_BAND_FREQUENCY::AUDIO_EQUALIZATION_BAND_4K, 0);
+ voiceEffectBtnText->SetText(FText::FromString("Apply voice effect"));
+ }
+ }
+ ```
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-test/index.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-test/index.mdx
index 20aad8e12..8223865ef 100644
--- a/shared/video-sdk/develop/audio-and-voice-effects/project-test/index.mdx
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-test/index.mdx
@@ -6,6 +6,7 @@ import Unity from './unity.mdx';
import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -18,3 +19,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/project-test/unreal.mdx b/shared/video-sdk/develop/audio-and-voice-effects/project-test/unreal.mdx
new file mode 100644
index 000000000..d156a5879
--- /dev/null
+++ b/shared/video-sdk/develop/audio-and-voice-effects/project-test/unreal.mdx
@@ -0,0 +1,28 @@
+
+
+3. In `MyUserWidget.cpp`, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the project running on your development device.
+
+ If this is the first time you run the project, grant microphone and camera access to your app.
+
+
+5. To join as a host, select **Host** and click **Join**.
+
+
+
+5. To connect to a channel, click **Join**.
+
+
+6. Press **Mix Audio**.
+
+ You hear the audio file played in the channel mixed with the microphone audio. Press the button again to stop audio mixing.
+
+7. Press **Play Audio Effect**.
+
+ You hear the audio file being played. Press the button again to pause and resume the audio. When the audio has finished playing, you see a message in **Output Log**.
+
+8. Press **Apply Voice Effect**.
+
+ Put on headphones connected to the device running the web demo and speak into the microphone of your development device. You hear your voice with the applied effect through the headphones, and the name of the effect is displayed on the button. Press the button again to test each of the implemented voice effects in turn.
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/audio-and-voice-effects/reference/unreal.mdx b/shared/video-sdk/develop/audio-and-voice-effects/reference/unreal.mdx
new file mode 100644
index 000000000..c0e736e0c
--- /dev/null
+++ b/shared/video-sdk/develop/audio-and-voice-effects/reference/unreal.mdx
@@ -0,0 +1,69 @@
+
+
+### Audio route change workflow
+
+The audio route is changed in the following ways:
+
+- User: Add or remove an external device such as headphones or a Bluetooth audio device.
+- Developer:
+ - `setDefaultAudioRoutetoSpeakerphone` - change the default audio route.
+ - `setEnableSpeakerphone` - temporarily change audio route.
+
+The principles for audio route change are:
+
+- **User behaviour has the highest priority**:
+
+ - When a user connects an external device, the audio route automatically changes to the external device.
+ - If the user connects multiple external devices in sequence, the audio route automatically changes to the last connected device.
+
+- **Developers can implement the following functionality**:
+
+ - Call `setDefaultAudioRoutetoSpeakerphone` to change the default and current setting:
+
+ The workflow is:
+ 1. The app calls `setDefaultAudioRoutetoSpeakerphone(true)`.
+ The audio route changes to the speakerphone.
+ 2. The user plugs in headphones.
+ The audio route changes to the headphones.
+ 3. The app calls `setDefaultAudioRoutetoSpeakerphone(true)`.
+ The audio route remains the headphones, because `setDefaultAudioRoutetoSpeakerphone` works on the audio route of the device only.
+ 4. The user unplugs the headphones.
+ The audio route changes to the speakerphone.
+
+ - Call `setEnableSpeakerphone` to temporarily set the audio route to the speakerphone or earpiece. Because `setEnableSpeakerphone` is a transient API, any user behaviour or audio-related API call might change the current audio route.
+
+ The workflow is:
+ 1. A user joins an interactive live streaming channel.
+ The audio route is the speakerphone.
+ 2. The user plugs in headphones.
+ The audio route changes to the headphones.
+ 3. The app calls `setEnableSpeakerphone(true)`.
+ On Android, the audio route changes to the speakerphone. On iOS, the audio route remains the headphones, because once the mobile device is connected to headphones or a Bluetooth audio device, you cannot change the audio route to the speakerphone.
+
+ - Any change to the audio route triggers the `onAudioRouteChanged` (Android) or `didAudioRouteChanged` (iOS) callback. You can use this callback to get the current audio route.
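+
+A minimal sketch of the second workflow, assuming the `agoraEngine` pointer used in the implementation pages and the method names exactly as listed above (check the API reference below for the exact C++ signatures; the route-setting calls take effect on mobile devices only):
+
+```cpp
+// Illustrative only: set the default route before joining, then change it temporarily.
+agoraEngine->setDefaultAudioRoutetoSpeakerphone(true); // default audio route: speakerphone
+// ... the user joins the channel and plugs in headphones,
+// so the audio route changes to the headphones automatically ...
+agoraEngine->setEnableSpeakerphone(true); // transient: user actions or other audio-related calls may override it
+```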
+
+### API reference
+
+* setLocalVoiceEqualization
+
+* setAudioEffectPreset
+
+* setAudioEffectParameters
+
+* setVoiceBeautifierPreset
+
+* setVoiceConversionPreset
+
+* setVoiceBeautifierParameters
+
+* setLocalVoiceReverb
+
+* setLocalVoiceReverbPreset
+
+* setLocalVoicePitch
+
+* setAudioProfile
+
+* setAudioScenario
+
+
diff --git a/shared/video-sdk/develop/cloud-proxy/project-implementation/index.mdx b/shared/video-sdk/develop/cloud-proxy/project-implementation/index.mdx
index 7b1ffffef..fcb6bf78f 100644
--- a/shared/video-sdk/develop/cloud-proxy/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/cloud-proxy/project-implementation/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx'
import Unity from './unity.mdx'
+import Unreal from './unreal.mdx'
import Windows from './windows.mdx';
@@ -17,4 +18,5 @@ import Windows from './windows.mdx';
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/cloud-proxy/project-implementation/unreal.mdx b/shared/video-sdk/develop/cloud-proxy/project-implementation/unreal.mdx
new file mode 100644
index 000000000..7ce43ee56
--- /dev/null
+++ b/shared/video-sdk/develop/cloud-proxy/project-implementation/unreal.mdx
@@ -0,0 +1,22 @@
+
+1. **Set to connect to before you join a channel**
+
+ To access in a restricted network environment, call `setCloudProxy` and pass `NONE_PROXY` to select the automatic mode for transmission. The `setCloudProxy` method returns `0` upon successful initiation of the cloud proxy service.
+
+ To enable the service in your , in `MyUserWidget.cpp`, locate `setupVideoSDKEngine` and add the following code after `agoraEngine->initialize(context);`:
+
+ ``` cpp
+ int proxyStatus = agoraEngine->setCloudProxy(CLOUD_PROXY_TYPE::NONE_PROXY);
+ // Start cloud proxy service and set automatic transmission mode.
+ if (proxyStatus == 0)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Proxy service started successfully"));
+ }
+ else
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Proxy service failed with error %u:"), proxyStatus);
+ }
+ ```
+
+
diff --git a/shared/video-sdk/develop/cloud-proxy/project-test/index.mdx b/shared/video-sdk/develop/cloud-proxy/project-test/index.mdx
index 7b1ffffef..dc89897f1 100644
--- a/shared/video-sdk/develop/cloud-proxy/project-test/index.mdx
+++ b/shared/video-sdk/develop/cloud-proxy/project-test/index.mdx
@@ -7,6 +7,7 @@ import Flutter from './flutter.mdx';
import MacOS from './macos.mdx'
import Unity from './unity.mdx'
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx'
@@ -17,4 +18,5 @@ import Windows from './windows.mdx';
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/cloud-proxy/project-test/unreal.mdx b/shared/video-sdk/develop/cloud-proxy/project-test/unreal.mdx
new file mode 100644
index 000000000..f7361e279
--- /dev/null
+++ b/shared/video-sdk/develop/cloud-proxy/project-test/unreal.mdx
@@ -0,0 +1,21 @@
+
+
+3. In `MyUserWidget.h`, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, grant microphone and camera access to your app.
+
+
+6. Select an option and click **Join** to start a session.
+ * When you join as a **Host**, the local video is published and played in the .
+ * When you join as **Audience**, the remote stream is subscribed and played.
+
+
+
+6. Click **Join** to start a call.
+
+
+ Your app starts the proxy service and connects successfully, which was not possible before in a restricted network environment.
+
+
diff --git a/shared/video-sdk/develop/cloud-proxy/reference/index.mdx b/shared/video-sdk/develop/cloud-proxy/reference/index.mdx
index 7ae892546..d2d46263c 100644
--- a/shared/video-sdk/develop/cloud-proxy/reference/index.mdx
+++ b/shared/video-sdk/develop/cloud-proxy/reference/index.mdx
@@ -7,6 +7,7 @@ import Flutter from './flutter.mdx';
import MacOS from './macos.mdx'
import Unity from './unity.mdx'
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx'
@@ -16,4 +17,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/cloud-proxy/reference/unreal.mdx b/shared/video-sdk/develop/cloud-proxy/reference/unreal.mdx
new file mode 100644
index 000000000..69d4f2e36
--- /dev/null
+++ b/shared/video-sdk/develop/cloud-proxy/reference/unreal.mdx
@@ -0,0 +1,7 @@
+
+
+### API reference
+
+* setCloudProxy
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/custom-video-and-audio/project-implementation/index.mdx b/shared/video-sdk/develop/custom-video-and-audio/project-implementation/index.mdx
index 952cc3e60..4e5ff70c7 100644
--- a/shared/video-sdk/develop/custom-video-and-audio/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/custom-video-and-audio/project-implementation/index.mdx
@@ -6,6 +6,8 @@ import Electron from './electron.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx';
@@ -16,4 +18,5 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/custom-video-and-audio/project-implementation/unreal.mdx b/shared/video-sdk/develop/custom-video-and-audio/project-implementation/unreal.mdx
new file mode 100644
index 000000000..b9d8d530f
--- /dev/null
+++ b/shared/video-sdk/develop/custom-video-and-audio/project-implementation/unreal.mdx
@@ -0,0 +1,285 @@
+
+
+### Implement a custom video source
+
+In this section you create the basic framework required to push video frames from a custom source. Depending on the type of your source, you add your own code to this framework that converts the source data to `VideoFrame` data. To create the basic framework, take the following steps:
+
+1. **Define variables to push the video frames**
+
+ In `MyUserWidget.h`, add the following declarations to the `UMyUserWidget` class:
+
+ ```cpp
+ agora::media::IMediaEngine* MediaEngine;
+ FString VIDEO_FILE = "";
+ ```
+
+1. **Add a raw video file to the project**
+
+ In this example, you use a video file as the source of your custom video data. To add the video file to your project:
+ 1. Create a folder called `Video` in the `/Content` directory.
+ 1. Add a sample video file in `*.mp4` format to this folder.
+ 1. Update the value of the `VIDEO_FILE` variable to the video file name.
+
+1. **Setup an instance of the media engine**
+
+ provides `IMediaEngine` for video source customization. To push the customized video frames in the channel, initialize an instance of the media engine. To implement this workflow, in `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ agoraEngine->queryInterface(AGORA_IID_MEDIA_ENGINE, (void**)&MediaEngine);
+ ```
+
+1. **Setup a method to push video frames**
+
+ To push the customized video frames in the channel, your :
+
+ 1. Loads the video file from the project directory into a `TArray`.
+ 2. Calls `createCustomVideoTrack` to create a custom video track.
+ 3. Calls `setExternalVideoSource` to set up the source of the external video stream.
+ 4. Creates an external video frame and passes the video file data to the frame buffer.
+ 5. Updates the channel media options and publishes the custom video track.
+ 6. Calls `pushVideoFrame` to push the external video frame into the channel.
+
+ To implement this logic, do the following:
+
+ 1. In `MyUserWidget.h`, add the following method declaration before `void NativeConstruct();`:
+
+ ```cpp
+ void startPushVideo();
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following method definition before `setupVideosSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::startPushVideo()
+ {
+ FString LoadDir = FPaths::ProjectContentDir() / TEXT("Video/");
+ LoadDir += VIDEO_FILE;
+ TArray<uint8> result;
+ FFileHelper::LoadFileToArray(result, *LoadDir, 0);
+ if (result.IsEmpty())
+ {
+ UE_LOG(LogTemp, Warning, TEXT("File is empty"));
+ }
+ agora::rtc::video_track_id_t trackID = agoraEngine->createCustomVideoTrack();
+ SenderOptions sendoptions;
+ MediaEngine->setExternalVideoSource(true, false, agora::media::EXTERNAL_VIDEO_SOURCE_TYPE::VIDEO_FRAME, sendoptions);
+ agora::media::base::ExternalVideoFrame externalVideoFrame;
+ externalVideoFrame.buffer = FMemory::Malloc(result.Num() * sizeof(uint8));
+ FMemory::Memcpy(externalVideoFrame.buffer, result.GetData(), result.Num() * sizeof(uint8));
+ externalVideoFrame.type = agora::media::base::ExternalVideoFrame::VIDEO_BUFFER_RAW_DATA;
+ externalVideoFrame.format = agora::media::base::VIDEO_PIXEL_RGBA;
+ std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds> tp = std::chrono::time_point_cast<std::chrono::milliseconds>(std::chrono::system_clock::now());
+ externalVideoFrame.timestamp = tp.time_since_epoch().count();
+ ChannelMediaOptions options;
+ options.clientRoleType = CLIENT_ROLE_BROADCASTER;
+ options.autoSubscribeAudio = true;
+ options.autoSubscribeVideo = true;
+ options.publishCameraTrack = false; // Disable publishing video track.
+ options.publishCustomVideoTrack = true; // Enable publishing custom video track.
+ options.publishMicrophoneTrack = false;
+ agoraEngine->updateChannelMediaOptions(options);
+ MediaEngine->pushVideoFrame(&externalVideoFrame, trackID);
+ }
+ ```
+
+### Implement a custom audio source
+
+To push audio from a custom source to a channel, take the following steps:
+
+1. **Import the required libraries**
+
+ To read and push the audio data, in `MyUserWidget.h`, add the following statements before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ // Multithreading library.
+ #include "HAL/Runnable.h"
+ // TimeStamp library.
+ #include <chrono>
+ ```
+
+1. **Define variables to push the audio frames**
+
+ In `MyUserWidget.h`, add the following declarations to the `UMyUserWidget` class:
+
+ ```cpp
+ TArray<uint8> RecordingBuffer;
+ agora::media::IMediaEngine* MediaEngine;
+ int AudioDataLength;
+ int sampleRate = 48000;
+ int numOfChannel = 2;
+ FRunnable* Runnable;
+ FString AUDIO_FILE = "";
+ void startPushAudio();
+ ```
+
+1. **Add a raw audio file to the project**
+
+ In this example, you use an audio file as the source of your custom audio data. To add the audio file to your project:
+
+ 1. Create a folder called `Audio` in the `/Content` directory
+ 1. Add a [sample audio file](/files/applause.wav) in `*.wav` or `*.raw` format to this folder.
+ 1. Update the value of the `AUDIO_FILE` variable to show the audio file name.
+
+1. **Enable custom audio track publishing**
+
+ When a user presses **Join**, you set the `ChannelMediaOptions` to disable the microphone audio track and enable the custom audio track. You also enable custom audio local playback and set the external audio source. To implement this workflow, in `MyUserWidget.cpp`, add the following at the end of `OnJoinButtonClick`:
+
+ ```cpp
+ ChannelMediaOptions options;
+ options.publishMicrophoneTrack = false; // Disable publishing microphone audio
+ options.publishCustomAudioTrack = true; // Enable publishing custom audio
+ options.enableAudioRecordingOrPlayout = true;
+ agoraEngine->enableCustomAudioLocalPlayback(0, true);
+ MediaEngine->setExternalAudioSource(true, sampleRate, numOfChannel, 1);
+ agoraEngine->updateChannelMediaOptions(options);
+ ```
+
+1. **Setup a class to push audio frames**
+
+ To push the audio frame, you use a runnable class. To add this class in your code, in `MyUserWidget.h`, add the following after `UMyUserWidget`:
+
+ ```cpp
+ DECLARE_DYNAMIC_MULTICAST_DELEGATE(FAgoraOnCompleteSignature);
+ class FAgoraCaptureRunnable : public FRunnable
+ {
+ public:
+ FAgoraCaptureRunnable(agora::media::IMediaEngine* MediaEngine, const uint8* audioData, int dataLength);
+ virtual uint32 Run() override; // Override the Run method to push the audio frames from the custom source.
+ virtual void Exit() override; // Clean up the resources when the thread exits.
+ FAgoraOnCompleteSignature OnCompleteDelegate;
+ protected:
+ // Required variables to customize the audio source.
+ TArray ProcessedNumbers;
+ bool bStopThread = false; // A boolean variable to track the thread state.
+ agora::media::IMediaEngine* MediaEngine; // An instance of IMediaEngine to push the audio frames.
+ uint8* audioData; // A pointer to the audio data.
+ int dataLength; // The audio file length in bytes.
+ int numOfChannel = 2; // Number of channels
+ int sampleRate = 48000; // Sample rate of the audio frames.
+ int PUSH_FREQ_PER_SEC = 100; // The number of frames you push in a second.
+ void* sendByte;
+ };
+ ```
+1. **Setup an instance of the media engine**
+
+ provides `IMediaEngine` for audio source customization. To push the audio frames, initialize an instance of the media engine and pass the audio file data and the media engine to `FAgoraCaptureRunnable` via its constructor. To implement this workflow, do the following:
+
+ 1. In `MyUserWidget.cpp`, add the following at the end of `setupVideosSDKEngine`:
+
+ ```cpp
+ agoraEngine->queryInterface(AGORA_IID_MEDIA_ENGINE, (void**)&MediaEngine);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following before `setupVideosSDKEngine`:
+
+ ```cpp
+ FAgoraCaptureRunnable::FAgoraCaptureRunnable(agora::media::IMediaEngine* MediaEngine, const uint8* audioData, int dataLength)
+ {
+ sendByte = nullptr;
+ this->MediaEngine = MediaEngine;
+ this->audioData = new uint8[dataLength];
+ this->dataLength = dataLength;
+ FMemory::Memcpy(this->audioData, audioData, dataLength * sizeof(uint8));
+ }
+ ```
+
+1. **Start the task to push audio frames**
+
+    When a user successfully joins a channel, you run the thread that pushes audio frames. To implement this logic, in `MyUserWidget.cpp`, add the following before `setupVideosSDKEngine`:
+ ```cpp
+ void UMyUserWidget::startPushAudio()
+ {
+ MediaEngine->setExternalAudioSource(true, sampleRate, numOfChannel, 1);
+ FString LoadDir = FPaths::ProjectContentDir() / TEXT("Audio/");
+ LoadDir += AUDIO_FILE;
+ UE_LOG(LogTemp, Warning, TEXT("%s"), *LoadDir);
+ TArray result;
+ FFileHelper::LoadFileToArray(result, *LoadDir, 0);
+ Runnable = new FAgoraCaptureRunnable(MediaEngine, result.GetData(), result.Num() * sizeof(uint8));
+ FRunnableThread* RunnableThread = FRunnableThread::Create(Runnable, TEXT("Agora"));
+ }
+
+ ```
+ You execute this method after joining a channel. In `MyUserWidget.cpp`, add the following line at the end of `OnJoinButtonClick`:
+
+ ```cpp
+ startPushAudio();
+ ```
+
+1. **Push the audio frames**
+
+    To push the audio frames into the channel, your :
+
+    * Dynamically allocates a buffer using `FMemory`.
+    * Copies the next chunk of audio data from the file into the buffer and wraps it in an external audio frame.
+    * Calls `pushAudioFrame` to push the frame into the channel.
+    * Advances the read pointer past the bytes it has sent, and repeats until it reaches the last byte of the audio data.
+
+ To implement this workflow, in `MyUserWidget.cpp`, add the following method before `setupVideosSDKEngine`:
+
+ ```cpp
+ uint32 FAgoraCaptureRunnable::Run()
+ {
+        auto tp = std::chrono::time_point_cast<std::chrono::milliseconds>(std::chrono::system_clock::now());
+ auto tic = tp.time_since_epoch().count();
+ bStopThread = false;
+        const uint8* Ptr = reinterpret_cast<const uint8*>(audioData);
+ while (!bStopThread)
+ {
+ if (MediaEngine == nullptr)
+ {
+ break;
+ }
+ auto toc = getTimeStamp();
+ if ((toc - tic) >= 10)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("FAgoraCaptureRunnable TimeStamp ====== %d"), toc - tic);
+ if (dataLength != 0)
+ {
+ if (dataLength < 0)
+ {
+ OnCompleteDelegate.Broadcast();
+ break;
+ }
+ if (sendByte == nullptr)
+ {
+ sendByte = FMemory::Malloc(sampleRate / PUSH_FREQ_PER_SEC * agora::rtc::BYTES_PER_SAMPLE::TWO_BYTES_PER_SAMPLE * numOfChannel);
+ }
+ FMemory::Memcpy(sendByte, Ptr, sampleRate / PUSH_FREQ_PER_SEC * agora::rtc::BYTES_PER_SAMPLE::TWO_BYTES_PER_SAMPLE * numOfChannel);
+ agora::media::IAudioFrameObserverBase::AudioFrame externalAudioFrame;
+ externalAudioFrame.bytesPerSample = BYTES_PER_SAMPLE::TWO_BYTES_PER_SAMPLE;
+ externalAudioFrame.type = agora::media::IAudioFrameObserver::FRAME_TYPE_PCM16;
+ externalAudioFrame.samplesPerChannel = sampleRate / PUSH_FREQ_PER_SEC;
+ externalAudioFrame.samplesPerSec = sampleRate;
+ externalAudioFrame.channels = numOfChannel;
+ externalAudioFrame.buffer = (void*)sendByte;
+ externalAudioFrame.renderTimeMs = 10;
+ int ret = MediaEngine->pushAudioFrame(agora::media::AUDIO_PLAYOUT_SOURCE, &externalAudioFrame);
+ // UE_LOG(LogTemp, Warning, TEXT("FAgoraCaptureRunnable pushAudioFrame ====== %d"), ret);
+ Ptr += sampleRate / PUSH_FREQ_PER_SEC * 2 * numOfChannel;
+ dataLength -= sampleRate / PUSH_FREQ_PER_SEC * 2 * numOfChannel;
+ tic = toc;
+ }
+ }
+ FPlatformProcess::Sleep(0.001f);
+ }
+ return 0;
+ }
+ ```
+
+1. **Clean up the resources**
+
+ When the is closed, you clean up the resources that you allocated in the `Run` method. To do this, in `MyUserWidget.cpp`, add the following before `setupVideosSDKEngine`:
+
+    ```cpp
+    void FAgoraCaptureRunnable::Exit()
+    {
+        bStopThread = true;
+        // Free the audio buffer before clearing the pointer to avoid leaking it.
+        FMemory::Free(sendByte);
+        sendByte = nullptr;
+    }
+ ```
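+
+    The runnable only exits on its own once the whole file has been pushed. If you also want to stop pushing when the user leaves the channel, one option is to override `FRunnable::Stop`, which `FRunnableThread::Kill` calls to request a graceful shutdown. This is a minimal sketch and assumes you also add `virtual void Stop() override;` to the `FAgoraCaptureRunnable` declaration:
+
+    ```cpp
+    void FAgoraCaptureRunnable::Stop()
+    {
+        // Signal the Run loop to finish; the loop checks bStopThread on every iteration.
+        bStopThread = true;
+    }
+    ```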
+
+
diff --git a/shared/video-sdk/develop/custom-video-and-audio/project-test/index.mdx b/shared/video-sdk/develop/custom-video-and-audio/project-test/index.mdx
index f7cd6ae71..180a4e4f4 100644
--- a/shared/video-sdk/develop/custom-video-and-audio/project-test/index.mdx
+++ b/shared/video-sdk/develop/custom-video-and-audio/project-test/index.mdx
@@ -7,6 +7,8 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/custom-video-and-audio/project-test/unreal.mdx b/shared/video-sdk/develop/custom-video-and-audio/project-test/unreal.mdx
new file mode 100644
index 000000000..0e480e812
--- /dev/null
+++ b/shared/video-sdk/develop/custom-video-and-audio/project-test/unreal.mdx
@@ -0,0 +1,30 @@
+
+
+3. In Unreal Editor, open `MyUserWidget.h`, and update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In Unreal Editor, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, grant microphone and camera access to your app.
+
+5. **Test the custom video source**
+
+    Press **Join**. You see the video file streamed to the web demo app.
+
+    To use this code for streaming data from your particular custom video source, modify `externalVideoFrame.buffer` to read the video data from your source instead of from a raw video file.
+
+6. **Test the custom audio source**
+
+ Press **Join**. You hear the audio file streamed to the web demo app.
+
+    To use this code for streaming data from your particular custom audio source, modify `externalAudioFrame.buffer` to read the audio data from your source instead of from a raw audio file.
+
+
+
+5. **Test the custom video source**
+
+    Press **Join**. You see the video file streamed to the web demo app.
+
+    To use this code for streaming data from your particular custom video source, modify `externalVideoFrame.buffer` to read the video data from your source instead of from a raw video file.
+
+
+
diff --git a/shared/video-sdk/develop/custom-video-and-audio/reference/index.mdx b/shared/video-sdk/develop/custom-video-and-audio/reference/index.mdx
index 938ab3ecd..1e4a6c7d1 100644
--- a/shared/video-sdk/develop/custom-video-and-audio/reference/index.mdx
+++ b/shared/video-sdk/develop/custom-video-and-audio/reference/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -17,3 +18,4 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/custom-video-and-audio/reference/unreal.mdx b/shared/video-sdk/develop/custom-video-and-audio/reference/unreal.mdx
new file mode 100644
index 000000000..3a5925070
--- /dev/null
+++ b/shared/video-sdk/develop/custom-video-and-audio/reference/unreal.mdx
@@ -0,0 +1,12 @@
+
+
+### API reference
+
+
+- setExternalVideoSource
+- pushVideoFrame
+
+- setExternalAudioSource
+- pushAudioFrame
+
+
diff --git a/shared/video-sdk/develop/encrypt-media-streams/project-implementation/index.mdx b/shared/video-sdk/develop/encrypt-media-streams/project-implementation/index.mdx
index 9e8c5a5cf..08f2ae6ad 100644
--- a/shared/video-sdk/develop/encrypt-media-streams/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/encrypt-media-streams/project-implementation/index.mdx
@@ -7,6 +7,7 @@ import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
@@ -16,4 +17,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/encrypt-media-streams/project-implementation/unreal.mdx b/shared/video-sdk/develop/encrypt-media-streams/project-implementation/unreal.mdx
new file mode 100644
index 000000000..6037b8bcc
--- /dev/null
+++ b/shared/video-sdk/develop/encrypt-media-streams/project-implementation/unreal.mdx
@@ -0,0 +1,58 @@
+
+
+1. **Add the required variables**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ // In a production environment, you retrieve the key and salt from
+ // an authentication server. For this code example you generate them locally.
+
+ // A 32-byte string for encryption.
+ std::string encryptionKey = "";
+ // A 32-byte string in Base64 format for encryption.
+ std::string encryptionSaltBase64 = "";
+ ```
+
+3. **Add the media stream encryption method**
+
+ To enable media stream encryption in your , create an `EncryptionConfig` object and specify a key, salt, and encryption mode. Call `enableEncryption` and pass the `EncryptionConfig` object as a parameter. To implement this logic, take the following steps:
+
+ 1. In `MyUserWidget.cpp`, add the following method before `CheckAndroidPermission`:
+
+ ```cpp
+ void UMyUserWidget::enableEncryption()
+ {
+ if (encryptionSaltBase64 == "" || encryptionKey == "")
+ return;
+ //Set encrypt mode and encrypt secret
+ EncryptionConfig config;
+ config.encryptionMode = AES_256_GCM2;
+ config.encryptionKey = encryptionKey.c_str();
+ memcpy(config.encryptionKdfSalt, encryptionSaltBase64.c_str(), 32);
+ // Call the method to enable media encryption.
+ if (agoraEngine->enableEncryption(true, config) == 0)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Encryption Enabled"));
+ }
+ }
+ ```
+
+ 2. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void enableEncryption();
+ ```
+
+4. **Start media encryption before joining a channel**
+
+ In `MyUserWidget.cpp`, add the following code at the end of `SetupVoiceSDKEngine`:
+
+
+
+ In `MyUserWidget.cpp`, add the following code at the end of `SetupVideoSDKEngine`:
+
+ ``` cpp
+ enableEncryption();
+ ```
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/encrypt-media-streams/project-test/index.mdx b/shared/video-sdk/develop/encrypt-media-streams/project-test/index.mdx
index 32b10774b..f43715939 100644
--- a/shared/video-sdk/develop/encrypt-media-streams/project-test/index.mdx
+++ b/shared/video-sdk/develop/encrypt-media-streams/project-test/index.mdx
@@ -6,6 +6,7 @@ import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx'
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -16,4 +17,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/encrypt-media-streams/project-test/unreal.mdx b/shared/video-sdk/develop/encrypt-media-streams/project-test/unreal.mdx
new file mode 100644
index 000000000..315cbc1ed
--- /dev/null
+++ b/shared/video-sdk/develop/encrypt-media-streams/project-test/unreal.mdx
@@ -0,0 +1,26 @@
+
+
+4. In `MyUserWidget.h`, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+1. In Unreal Editor, click **Play**. A moment later you see the project running on your development device.
+
+
+6. Select an option and click **Join** to start a session.
+ - When you join as a **Host**, the local video is published and played in the .
+ - When you join as **Audience**, the remote stream is subscribed and played.
+
+ If this is the first time you run the project, you need to grant microphone and camera access to your .
+
+
+
+
+
+6. Click **Join** to start .
+
+ If this is the first time you run the project, you need to grant microphone and camera access to your .
+
+
+
+7. Open another instance of your on a test device and update `appId`, `channelName` and `token` with your values, then click **Join**.
+
+
diff --git a/shared/video-sdk/develop/encrypt-media-streams/reference/index.mdx b/shared/video-sdk/develop/encrypt-media-streams/reference/index.mdx
index 0be29d57a..f1741f5bd 100644
--- a/shared/video-sdk/develop/encrypt-media-streams/reference/index.mdx
+++ b/shared/video-sdk/develop/encrypt-media-streams/reference/index.mdx
@@ -6,6 +6,7 @@ import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx'
import Windows from './windows.mdx';
@@ -17,4 +18,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/encrypt-media-streams/reference/unreal.mdx b/shared/video-sdk/develop/encrypt-media-streams/reference/unreal.mdx
new file mode 100644
index 000000000..8086b73ca
--- /dev/null
+++ b/shared/video-sdk/develop/encrypt-media-streams/reference/unreal.mdx
@@ -0,0 +1,7 @@
+
+
+ ### API reference
+
+ - enableEncryption
+ - EncryptionConfig
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/ensure-channel-quality/project-implementation/index.mdx b/shared/video-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
index a1b4b72e9..9512f9dab 100644
--- a/shared/video-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
@@ -7,6 +7,8 @@ import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx'
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx b/shared/video-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx
new file mode 100644
index 000000000..d77313322
--- /dev/null
+++ b/shared/video-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx
@@ -0,0 +1,354 @@
+
+
+### Implement the user interface
+
+To implement call quality features in your , you need the following elements in the user interface:
+
+* A button widget to switch video quality.
+
+* A button widget to start and stop the echo test.
+
+* A text widget to display last-mile network quality.
+
+To add these elements to the UI, take the following steps:
+
+1. In Unreal Editor, open the project you created for Get Started with .
+
+1. In **Content Browser**, navigate to the `Content` folder and double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+1. Drag **Text** from the **Common** section of the **Palette** to the canvas panel.
+
+ A text widget appears on the canvas.
+
+1. In **Details**, rename it to `networkStatus`, then change the following properties:
+
+ * **Position X** - 956
+ * **Position Y** - 280
+ * **Size X** - 315
+ * **Size Y** - 40
+
+1. Drag **Button** from **Palette** > **Common** to the canvas panel.
+
+ A button appears on the canvas.
+
+1. In **Details**, rename **Button_0** to `videoQuality`, then change the following properties:
+
+ * **Position X** - 1004
+ * **Position Y** - 880
+ * **Size X** - 226
+ * **Size Y** - 60
+
+1. Use the same procedure to create a button called `echoTest`, then change the following properties in **Details**:
+
+ * **Position X** - 1252
+ * **Position Y** - 880
+ * **Size X** - 160
+ * **Size Y** - 60
+
+1. Add a text widget for the echo test button. Drag **Text** from **Palette** > **Common** to **Hierarchy** and drop it over `echoTest`. Then, in **Details**, change the **Text** field to `Echo Test`.
+
+1. Use the same procedure and add a text widget for the video quality button where the **Text** field says `Video Quality`.
+
+1. **Declare the variables you need**
+
+ To manage the test workflow and video quality state, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ``` cpp
+ bool highQuality = true; // Quality of the remote video stream being played
+ bool isEchoTestRunning = false; // Keeps track of the echo test
+ ```
+
+1. **Reference the UI elements**
+
+    1. In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* echoTest = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* videoQuality = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UTextBlock* networkStatus = nullptr;
+ ```
+
+    2. To set up access to the text widget, include the text widget header file. In `MyUserWidget.h`, add the following include before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/TextBlock.h"
+ ```
+
+1. **Set up event listeners for the buttons**
+
+ 1. In `MyUserWidget.h`, add the following method declarations after `UTextBlock* networkStatus = nullptr;`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnEchoTestButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnVideoQualityButtonClick();
+ ```
+
+ 2. Attach the event listener methods to the buttons. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ``` cpp
+ echoTest->OnClicked.AddDynamic(this, &UMyUserWidget::OnEchoTestButtonClick);
+ videoQuality->OnClicked.AddDynamic(this, &UMyUserWidget::OnVideoQualityButtonClick);
+ ```
+
+1. **Update the network status indication**
+
+ To show the network quality result visually to the user, do the following:
+
+    1. Set up a method to update the network status indicator. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ void updateNetworkStatus(int quality);
+ ```
+
+
+ 2. In `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::updateNetworkStatus(int quality)
+ {
+ if (quality > 0 && quality < 3)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Green);
+ networkStatus->SetText(FText::FromString("Network Quality : Excellent"));
+ });
+ }
+ else if (quality <= 4)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Yellow);
+ networkStatus->SetText(FText::FromString("Network Quality : Good"));
+ });
+ }
+ else if (quality <= 6)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Red);
+ networkStatus->SetText(FText::FromString("Network Quality : Poor"));
+ });
+ }
+ else
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::White);
+ networkStatus->SetText(FText::FromString("Network Quality : Bad"));
+ });
+ }
+ }
+ ```
+
+### Implement features to ensure quality
+
+To implement the call quality features, take the following steps:
+
+1. **Enable the user to test the network**
+
+ When the launches, call `startLastmileProbeTest` with a `LastmileProbeConfig` object to check the last-mile uplink and downlink quality. To implement this workflow, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ void startProbeTest();
+ ```
+
+ 1. Add the network test logic to `startProbeTest`. In `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::startProbeTest()
+ {
+ // Configure a LastmileProbeConfig instance.
+ LastmileProbeConfig config;
+ // Probe the uplink network quality.
+ config.probeUplink = true;
+ // Probe the downlink network quality.
+ config.probeDownlink = true;
+ // The expected uplink bitrate (bps). The value range is [100000,5000000].
+ config.expectedUplinkBitrate = 100000;
+ // The expected downlink bitrate (bps). The value range is [100000,5000000].
+ config.expectedDownlinkBitrate = 100000;
+ // Start probe test.
+ agoraEngine->startLastmileProbeTest(config);
+ }
+ ```
+
+1. **Implement best practices for app startup**
+
+ When a user starts your , is initialized in `setupVideoSDKEngine`. After initialization, do the following:
+
+ * _Enable dual stream mode_: Required for multi-user scenarios.
+ * _Set an audio profile and audio scenario_: Setting an audio profile is optional and only required if you have special requirements such as streaming music.
+    * _Set the video profile_: Setting a video profile is also optional. It is useful when you want to change one or more of `mirrorMode`, `frameRate`, `bitrate`, `dimensions`, `orientationMode`, or `degradationPreference` from their default settings to custom values.
+ * _Start the network probe test_: A quick test at startup to gauge network quality.
+
+    To implement these features, in `MyUserWidget.cpp`, add the following code to `setupVideoSDKEngine` after `agoraEngine->initialize(context);`:
+
+ ```cpp
+ // Enable the dual stream mode
+ agoraEngine->enableDualStreamMode(true);
+ // Set audio profile and audio scenario.
+ agoraEngine->setAudioProfile(AUDIO_PROFILE_DEFAULT, AUDIO_SCENARIO_GAME_STREAMING);
+ // Set the video profile
+ VideoEncoderConfiguration videoConfig;
+ // Set mirror mode
+ videoConfig.mirrorMode = VIDEO_MIRROR_MODE_AUTO;
+ // Set framerate
+ videoConfig.frameRate = FRAME_RATE_FPS_10;
+ // Set bitrate
+ videoConfig.bitrate = STANDARD_BITRATE;
+ // Set dimensions
+ videoConfig.dimensions = VideoDimensions(360,360);
+ // Set orientation mode
+ videoConfig.orientationMode = ORIENTATION_MODE_ADAPTIVE;
+ // Set degradation preference
+ videoConfig.degradationPreference = MAINTAIN_BALANCED;
+ // Apply the configuration
+ agoraEngine->setVideoEncoderConfiguration(videoConfig);
+ // Start the probe test
+ startProbeTest();
+ ```
+
+3. **Test the user's hardware**
+
+ The echo test checks that the user's hardware is working properly. To implement the echo test logic, in `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnEchoTestButtonClick()
+ {
+ if (isJoin)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Leave the channel first to start echo test!"));
+ return;
+ }
+ if (!isEchoTestRunning)
+ {
+ EchoTestConfiguration echoConfig;
+ echoConfig.enableAudio = true;
+ echoConfig.enableVideo = true;
+ echoConfig.token = token.c_str();
+ echoConfig.channelId = channelName.c_str();
+ echoConfig.view = localView;
+ VideoCanvas canvas;
+ canvas.uid = 0;
+ canvas.sourceType = VIDEO_SOURCE_CAMERA;
+ canvas.view = localView;
+ agoraEngine->setupLocalVideo(canvas);
+ agoraEngine->startEchoTest(echoConfig);
+ isEchoTestRunning = true;
+ UE_LOG(LogTemp, Warning, TEXT("Echo test started"));
+ }
+ else
+ {
+ agoraEngine->stopEchoTest();
+ isEchoTestRunning = false;
+ UE_LOG(LogTemp, Warning, TEXT("Echo test stopped!"));
+ }
+ }
+ ```
+
+4. **Listen to events to receive state change notifications and quality statistics**
+
+ Add the following event handlers to receive state change notifications and quality statistics:
+
+ * `onLastmileQuality`: Receives the network quality result.
+ * `onLastmileProbeResult`: Receives detailed probe test results.
+ * `onNetworkQuality`: Receives statistics on network quality.
+ * `onRtcStats`: Receives the stats.
+ * `onRemoteVideoStateChanged`: Receives notification regarding any change in the state of the remote video.
+ * `onRemoteVideoStats`: Receives stats about the remote videos.
+
+ To implement these callbacks, take the following steps:
+
+    1. In `MyUserWidget.h`, add the following callbacks after `void onUserJoined(uid_t uid, int elapsed) override;`:
+
+ ```cpp
+ void onLastmileQuality(int quality) override;
+ void onLastmileProbeResult(const LastmileProbeResult& result) override;
+ void onNetworkQuality(uid_t uid, int txQuality, int rxQuality) override;
+ void onRtcStats(const RtcStats& stats) override;
+        void onRemoteVideoStateChanged(uid_t uid, REMOTE_VIDEO_STATE state, REMOTE_VIDEO_STATE_REASON reason, int elapsed) override;
+        void onRemoteVideoStats(const RemoteVideoStats& stats) override;
+ ```
+ 1. Provide definitions for the callbacks you declared in `UMyUserWidget`. In `MyUserWidget.cpp`, add the following before `updateNetworkStatus`:
+
+ ```cpp
+ void UMyUserWidget::onLastmileQuality(int quality)
+ {
+ updateNetworkStatus(quality);
+ }
+ void UMyUserWidget::onLastmileProbeResult(const LastmileProbeResult& result)
+ {
+ agoraEngine->stopLastmileProbeTest();
+ // The result object contains the detailed test results that help you
+ // manage call quality, for example, the downlink jitter.
+ UE_LOG(LogTemp, Warning, TEXT("Downlink jitter: %u") , result.downlinkReport.jitter);
+ }
+ void UMyUserWidget::onNetworkQuality(uid_t uid, int txQuality, int rxQuality)
+ {
+ updateNetworkStatus(txQuality);
+ }
+ void UMyUserWidget::onRtcStats(const RtcStats& stats)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("User(s): %d"), stats.userCount);
+ UE_LOG(LogTemp, Warning, TEXT("Packet loss rate: %d"), stats.rxPacketLossRate);
+ }
+ void UMyUserWidget::onRemoteVideoStateChanged(uid_t uid, REMOTE_VIDEO_STATE state, REMOTE_VIDEO_STATE_REASON reason, int elapsed)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Remote video state changed: \n Uid %d"), uid);
+ UE_LOG(LogTemp, Warning, TEXT("NewState: %d "), state);
+ UE_LOG(LogTemp, Warning, TEXT("reason: %d "), reason);
+ }
+ void UMyUserWidget::onRemoteVideoStats(const RemoteVideoStats& stats)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Remote Video Stats : \n User id = %d"), stats.uid);
+ UE_LOG(LogTemp, Warning, TEXT("Received bitrate : %d "), stats.receivedBitrate);
+ UE_LOG(LogTemp, Warning, TEXT("Total frozen time: %d"), stats.totalFrozenTime);
+ }
+ ```
+ Each event reports the statistics of the audio and video streams from each remote user and host.
+
+
+5. **Switch stream quality when the user taps the remote video**
+
+ To take advantage of dual-stream mode and switch remote video quality to high or low, in `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnVideoQualityButtonClick()
+ {
+ if (highQuality)
+ {
+ agoraEngine->setRemoteVideoStreamType(remoteUId, VIDEO_STREAM_TYPE::VIDEO_STREAM_LOW);
+ highQuality = false;
+ UE_LOG(LogTemp, Warning, TEXT("Switching to low-quality video"));
+ }
+ else
+ {
+ agoraEngine->setRemoteVideoStreamType(remoteUId, VIDEO_STREAM_TYPE::VIDEO_STREAM_HIGH);
+ highQuality = true;
+ UE_LOG(LogTemp, Warning, TEXT("Switching to high-quality video"));
+ }
+ }
+ ```
+6. **Configure the log file**
+
+    To customize the location, content, and size of log files, in `MyUserWidget.cpp`, add the following code to `setupVideoSDKEngine` before `agoraEngine->initialize(context);`:
+
+ ```cpp
+ context.logConfig.filePath = R"(C:\Users\\AppData\Local\Agora\AgoraImplementation\agorasdk.log)";
+ context.logConfig.fileSizeInKB = 256;
+ context.logConfig.level = agora::commons::LOG_LEVEL::LOG_LEVEL_WARN;
+ ```
+
+ Make sure you replace the `` in `filePath` with the user name of your development device.
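+
+    Alternatively, to avoid a hard-coded, user-specific path, you can build the log file path from the project directory. The following is a minimal sketch; it assumes the `logPath` string stays alive until `agoraEngine->initialize(context);` has been called:
+
+    ```cpp
+    // Write the SDK log to the project's Saved directory instead of a fixed Windows path.
+    std::string logPath = TCHAR_TO_UTF8(*(FPaths::ProjectSavedDir() / TEXT("agorasdk.log")));
+    context.logConfig.filePath = logPath.c_str();
+    context.logConfig.fileSizeInKB = 256;
+    context.logConfig.level = agora::commons::LOG_LEVEL::LOG_LEVEL_WARN;
+    ```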
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/ensure-channel-quality/project-test/index.mdx b/shared/video-sdk/develop/ensure-channel-quality/project-test/index.mdx
index 452d18042..5359aeb4f 100644
--- a/shared/video-sdk/develop/ensure-channel-quality/project-test/index.mdx
+++ b/shared/video-sdk/develop/ensure-channel-quality/project-test/index.mdx
@@ -7,6 +7,8 @@ import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx'
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/ensure-channel-quality/project-test/unreal.mdx b/shared/video-sdk/develop/ensure-channel-quality/project-test/unreal.mdx
new file mode 100644
index 000000000..f69b86e96
--- /dev/null
+++ b/shared/video-sdk/develop/ensure-channel-quality/project-test/unreal.mdx
@@ -0,0 +1,40 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+5. When the starts, it does the following:
+
+ * Enables the dual-stream mode
+
+ * Sets the audio profile
+
+ * Sets the video profile
+
+ * Starts the network probe test.
+
+    * Sets up the log file path and configuration.
+
+ You see the result of the network probe test displayed in the network status label.
+
+
+6. Select a role using the check boxes and click **Join** to start a call.
+
+
+
+6. Click **Join** to start a call.
+
+
+7. After joining a channel, you see messages in **Output Log** informing you of some selected call statistics, including:
+
+ * The number of users in the channel
+ * Packet loss rate
+ * Remote video stats
+ * Remote video state changes
+
+8. You see the network status indicator updated periodically based on the result of the `onNetworkQuality` callback.
+
+9. Click **Video Quality**. You see the remote video switch from high quality to low quality. Click the same button again to switch back to high-quality video.
+
+
diff --git a/shared/video-sdk/develop/geofencing/project-implementation/index.mdx b/shared/video-sdk/develop/geofencing/project-implementation/index.mdx
index df6b4f50d..089880b77 100644
--- a/shared/video-sdk/develop/geofencing/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/geofencing/project-implementation/index.mdx
@@ -6,6 +6,8 @@ import ReactNative from './react-native.mdx';
import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx';
@@ -16,4 +18,5 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/geofencing/project-implementation/unreal.mdx b/shared/video-sdk/develop/geofencing/project-implementation/unreal.mdx
new file mode 100644
index 000000000..bef5816aa
--- /dev/null
+++ b/shared/video-sdk/develop/geofencing/project-implementation/unreal.mdx
@@ -0,0 +1,20 @@
+import * as data from '@site/data/variables';
+
+
+
+To enable geofencing in your , set the `areaCode` property of `RtcEngineContext` to a region for geofencing. In `MyUserWidget.cpp`, locate `setupVideoSDKEngine` and add the following after `context.eventHandler = this;`:
+
+
+To enable geofencing in your , set the `areaCode` property of `RtcEngineContext` to a region for geofencing. In `MyUserWidget.cpp`, locate `setupVoiceSDKEngine` and add the following after `context.eventHandler = this;`:
+
+```cpp
+// Your app will only connect to Agora SD-RTN located in North America.
+context.areaCode = AREA_CODE::AREA_CODE_NA;
+```
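+
+The `areaCode` field is a bitmask, so you can also allow more than one region. For example, here is a minimal sketch (assuming the `AREA_CODE_EU` value is available in your SDK version) that restricts connections to North America and Europe:
+
+```cpp
+// Connect only to Agora SD-RTN nodes located in North America or Europe.
+context.areaCode = AREA_CODE::AREA_CODE_NA | AREA_CODE::AREA_CODE_EU;
+```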
+If your fails to connect to the specified region of , an error is thrown instead of connecting to another region. If a firewall is deployed in your network environment, ensure that you:
+
+* Whitelist certain domains
+* Allow all IP addresses
+* Open the firewall ports defined in Use Cloud Proxy
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/geofencing/project-test/index.mdx b/shared/video-sdk/develop/geofencing/project-test/index.mdx
index df6b4f50d..180db61be 100644
--- a/shared/video-sdk/develop/geofencing/project-test/index.mdx
+++ b/shared/video-sdk/develop/geofencing/project-test/index.mdx
@@ -7,6 +7,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/geofencing/project-test/unreal.mdx b/shared/video-sdk/develop/geofencing/project-test/unreal.mdx
new file mode 100644
index 000000000..6eee285ff
--- /dev/null
+++ b/shared/video-sdk/develop/geofencing/project-test/unreal.mdx
@@ -0,0 +1,36 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+1. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, you need to grant microphone and camera access to your .
+
+
+
+6. Select an option and click **Join** to start a session.
+
+ * When you join as a **Host**, the local video is published and played in the .
+ * When you join as **Audience**, the remote stream is subscribed and played.
+    You see your connect to the specified region and open a channel.
+
+
+
+6. Click **Join** to start a call.
+
+    Now you can see yourself and the remote user on the device screen. This means your has connected to the specified region of .
+
+
+7. Try excluding a region of .
+
+ In `MyUserWidget.cpp`, locate the `setupVideoSDKEngine` method and replace `context.areaCode = AREA_CODE::AREA_CODE_NA;` with the following code:
+
+
+ In `MyUserWidget.cpp`, locate the `setupVoiceSDKEngine` method and replace `context.areaCode = AREA_CODE::AREA_CODE_NA;` with the following code:
+
+ ```cpp
+ // Exclude Mainland China from the regions for connection
+ context.areaCode = AREA_CODE::AREA_CODE_GLOB ^ AREA_CODE::AREA_CODE_CN;
+ ```
+
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-implementation/index.mdx b/shared/video-sdk/develop/integrate-token-generation/project-implementation/index.mdx
index 872efb6cf..3c9a0f896 100644
--- a/shared/video-sdk/develop/integrate-token-generation/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/integrate-token-generation/project-implementation/index.mdx
@@ -7,7 +7,9 @@ import ReactNative from './react-native.mdx'
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-implementation/unreal.mdx b/shared/video-sdk/develop/integrate-token-generation/project-implementation/unreal.mdx
new file mode 100644
index 000000000..e1ea003f5
--- /dev/null
+++ b/shared/video-sdk/develop/integrate-token-generation/project-implementation/unreal.mdx
@@ -0,0 +1,5 @@
+import Csharp from './csharp.mdx';
+
+
+
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-setup/index.mdx b/shared/video-sdk/develop/integrate-token-generation/project-setup/index.mdx
index fca8cfd52..6b94fb260 100644
--- a/shared/video-sdk/develop/integrate-token-generation/project-setup/index.mdx
+++ b/shared/video-sdk/develop/integrate-token-generation/project-setup/index.mdx
@@ -6,7 +6,9 @@ import ReactNative from './react-native.mdx'
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-setup/unreal.mdx b/shared/video-sdk/develop/integrate-token-generation/project-setup/unreal.mdx
new file mode 100644
index 000000000..dc5a8bf5c
--- /dev/null
+++ b/shared/video-sdk/develop/integrate-token-generation/project-setup/unreal.mdx
@@ -0,0 +1,4 @@
+
+
+
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-test/index.mdx b/shared/video-sdk/develop/integrate-token-generation/project-test/index.mdx
index aec7934e6..f84d9cab8 100644
--- a/shared/video-sdk/develop/integrate-token-generation/project-test/index.mdx
+++ b/shared/video-sdk/develop/integrate-token-generation/project-test/index.mdx
@@ -7,7 +7,9 @@ import ReactNative from './react-native.mdx'
import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/project-test/unreal.mdx b/shared/video-sdk/develop/integrate-token-generation/project-test/unreal.mdx
new file mode 100644
index 000000000..e1ea003f5
--- /dev/null
+++ b/shared/video-sdk/develop/integrate-token-generation/project-test/unreal.mdx
@@ -0,0 +1,5 @@
+import Csharp from './csharp.mdx';
+
+
+
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/reference/index.mdx b/shared/video-sdk/develop/integrate-token-generation/reference/index.mdx
index e9e9c1096..bf3c4442f 100644
--- a/shared/video-sdk/develop/integrate-token-generation/reference/index.mdx
+++ b/shared/video-sdk/develop/integrate-token-generation/reference/index.mdx
@@ -4,7 +4,9 @@ import Web from './web.mdx';
import Electron from './electron.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
diff --git a/shared/video-sdk/develop/integrate-token-generation/reference/unreal.mdx b/shared/video-sdk/develop/integrate-token-generation/reference/unreal.mdx
new file mode 100644
index 000000000..dc5a8bf5c
--- /dev/null
+++ b/shared/video-sdk/develop/integrate-token-generation/reference/unreal.mdx
@@ -0,0 +1,4 @@
+
+
+
+
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/index.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/index.mdx
index 81030fc64..dcb292476 100644
--- a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/index.mdx
@@ -7,6 +7,7 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
import Flutter from './flutter.mdx'
+import Unreal from './unreal.mdx'
@@ -17,3 +18,5 @@ import Flutter from './flutter.mdx'
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/unreal.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/unreal.mdx
new file mode 100644
index 000000000..3d6baa21b
--- /dev/null
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-implementation/unreal.mdx
@@ -0,0 +1,256 @@
+
+
+This section shows you how to implement the following methods of multi-channel streaming:
+
+- [Channel media relay](#channel-media-relay)
+
+- [Join multiple channels](#implement-joining-multiple-channels)
+
+Choose the method that best suits your scenario and follow the step-by-step procedure.
+
+### Channel media relay
+
+In this example, you use a single `Button` to start and stop channel media relay.
+
+#### Implement the user interface
+
+To enable your users to start and stop relaying to another channel, add a `Button` and a `Text` to the user interface. To add these elements, take the following steps:
+
+ 1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A button control appears on the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `channelMediaButton`, then change the following coordinates:
+
+ * **Position X** - 689
+ * **Position Y** - 960
+ * **Size X** - 284
+ * **Size Y** - 60
+
+ 1. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `channelMediaButton`.
+
+ You see a text widget is added to `channelMediaButton`.
+
+ 1. In **Details**, rename the text widget to `channelMediaBtnTxt`, then change the **Text** field to `Start Media Relay`.
+
+
+#### Handle the system logic
+
+In your project, declare the required variables and reference the channel media relay button:
+
+1. **Declare the variables you need**
+
+ To store source and destination channel settings and manage channel relay, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ``` cpp
+ std::string destChannelName = "";
+ std::string destSourceChannelToken = "";
+ int destUid = 100; // User ID that the user uses in the destination channel.
+ bool mediaRelaying = false;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UButton* channelMediaButton = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UTextBlock* channelMediaBtnTxt;
+ ```
+
+1. **Import the required UI library**
+
+ To access the text widget from UI, in `MyUserWidget.h`, add the following library before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/TextBlock.h"
+ ```
+
+2. **Set up an event listener for the channel relay button**
+
+ To add an event listener method to the button, do the following:
+
+    1. Set up a method to attach to the button. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnMediaRelayButtonClick();
+ ```
+ 2. Attach the method to the button. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ channelMediaButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnMediaRelayButtonClick);
+ ```
+
+#### Implement channel media relay
+
+To enable users to relay channel media to a destination channel, take the following steps:
+
+1. **Start or stop channel media relay**
+
+ When a user presses the button, the starts relaying media from the source channel to the destination channel. To implement this logic, in `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::OnMediaRelayButtonClick()
+ {
+ if (mediaRelaying)
+ {
+ agoraEngine->stopChannelMediaRelay();
+ mediaRelaying = false;
+ }
+ else
+ {
+ // Configure the source channel information.
+ ChannelMediaInfo* srcChannelInfo = new ChannelMediaInfo();
+ srcChannelInfo->channelName = channelName.c_str();
+ srcChannelInfo->token = token.c_str();
+ ChannelMediaRelayConfiguration mediaRelayConfiguration;
+ mediaRelayConfiguration.srcInfo = srcChannelInfo;
+ // Configure the destination channel information.
+            ChannelMediaInfo* destChannelInfo = new ChannelMediaInfo();
+            destChannelInfo->channelName = destChannelName.c_str();
+            destChannelInfo->token = destSourceChannelToken.c_str();
+            destChannelInfo->uid = destUid;
+            mediaRelayConfiguration.destInfos = destChannelInfo;
+            // The number of destination channels; this example relays to one channel.
+            mediaRelayConfiguration.destCount = 1;
+ // Start relaying media streams across channels.
+ agoraEngine->startChannelMediaRelay(mediaRelayConfiguration);
+ mediaRelaying = true;
+ }
+ }
+ ```
+
+2. **Monitor the channel media relay state**
+
+ To receive the state change notifications sent during media relay, you implement the `onChannelMediaRelayStateChanged` callback in your . To setup this callback, take the following steps:
+
+ 1. Declare `onChannelMediaRelayStateChanged` in the `UMyUserWidget` class. In `MyUserWidget.h`, add the following code after `void onLeaveChannel(const RtcStats& stats) override;`:
+
+ ```cpp
+ void onChannelMediaRelayStateChanged(CHANNEL_MEDIA_RELAY_STATE state, CHANNEL_MEDIA_RELAY_ERROR code);
+ ```
+
+ 2. Update the channel media relay button text to indicate the current state of relaying when triggers `onChannelMediaRelayStateChanged`. In `MyUserWidget.cpp`, add the following code before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::onChannelMediaRelayStateChanged(CHANNEL_MEDIA_RELAY_STATE state, CHANNEL_MEDIA_RELAY_ERROR code)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onChannelMediaRelayStateChanged"));
+ // This example changes the media relay button label when the relay state changes,
+ // a production level app needs to handle state change properly.
+ switch (state)
+ {
+ case RELAY_STATE_CONNECTING: // RELAY_STATE_CONNECTING:
+ channelMediaBtnTxt->SetText(FText::FromString("Connecting..."));
+ break;
+ case RELAY_STATE_RUNNING: // RELAY_STATE_RUNNING:
+ mediaRelaying = true;
+ channelMediaBtnTxt->SetText(FText::FromString("Stop Media Relay"));
+ break;
+ case RELAY_STATE_FAILURE: // RELAY_STATE_FAILURE:
+ mediaRelaying = false;
+ channelMediaBtnTxt->SetText(FText::FromString("Start Media Relay"));
+ break;
+ }
+ });
+ }
+ ```
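+
+If you need to add a destination channel while a relay is already running, you can call `updateChannelMediaRelay` with an updated configuration. The following is a minimal sketch; it assumes you declare and fill a second destination channel name and token (`destChannelName2` and `destChannelToken2`) yourself:
+
+```cpp
+// Update a running relay so that it relays to two destination channels.
+ChannelMediaInfo srcInfo;
+srcInfo.channelName = channelName.c_str();
+srcInfo.token = token.c_str();
+srcInfo.uid = 0;
+ChannelMediaInfo destInfos[2];
+destInfos[0].channelName = destChannelName.c_str();
+destInfos[0].token = destSourceChannelToken.c_str();
+destInfos[0].uid = destUid;
+destInfos[1].channelName = destChannelName2.c_str(); // Hypothetical second destination channel
+destInfos[1].token = destChannelToken2.c_str();      // Hypothetical token for that channel
+destInfos[1].uid = 0;
+ChannelMediaRelayConfiguration updatedConfig;
+updatedConfig.srcInfo = &srcInfo;
+updatedConfig.destInfos = destInfos;
+updatedConfig.destCount = 2;
+agoraEngine->updateChannelMediaRelay(updatedConfig);
+```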
+
+### Join multiple channels
+
+The alternate approach to multi-channel live streaming is joining multiple channels. In this section, you learn how to implement joining a second channel in your .
+
+#### Implement the user interface
+
+In this example, you use a single `Button` to join and leave a second channel. To add the `Button`, take the following steps:
+
+ 1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A button control appears on the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `secondChannelButton`, then change the following coordinates:
+
+ * **Position X** - 643
+ * **Position Y** - 960
+ * **Size X** - 341
+ * **Size Y** - 60
+
+ 1. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `secondChannelButton`.
+
+ You see a text widget is added to `secondChannelButton`.
+
+#### Handle the system logic
+
+In your project, import the relevant libraries and declare the required variables.
+
+1. **Declare the variables you need**
+
+ To join and manage a second channel, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ``` cpp
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UButton* secondChannelButton = nullptr;
+ std::string secondChannelName = "";
+ int secondChannelUid = 123; // Uid for the second channel
+ std::string secondChannelToken = "";
+    bool isSecondChannelJoined = false; // Track connection state of the second channel
+    agora::rtc::RtcConnection rtcSecondConnection; // Connection information for the second channel
+ ```
+
+2. **Set up an event listener for the multi-channel button**
+
+ To add an event listener method to the button, do the following:
+
+    1. Set up a method to attach to the button. In `MyUserWidget.h`, add the following declaration to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnSecondChannelButtonClick();
+ ```
+ 2. Attach the method to the button. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ secondChannelButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnSecondChannelButtonClick);
+ ```
+
+#### Implement joining multiple channels
+
+When a user presses the button, the joins a second channel. If the is already connected to a second channel, it leaves the channel. To implement this workflow, in `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnSecondChannelButtonClick()
+ {
+ if (isSecondChannelJoined)
+ {
+            ((agora::rtc::IRtcEngineEx*)agoraEngine)->leaveChannelEx(rtcSecondConnection);
+            // Reset the flag so that the next button press joins the second channel again.
+            isSecondChannelJoined = false;
+            UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget:: leaveChannelEx ======"));
+ }
+ else
+ {
+ ChannelMediaOptions mediaOptions;
+ mediaOptions.autoSubscribeAudio = true;
+ mediaOptions.autoSubscribeVideo = true;
+ if (userRole == "audience")
+ {
+ // Audience Role
+ mediaOptions.publishCameraTrack = false;
+ mediaOptions.publishMicrophoneTrack = true;
+ mediaOptions.clientRoleType = CLIENT_ROLE_AUDIENCE;
+ }
+ else
+ {
+ // Host Role
+ mediaOptions.publishCameraTrack = true;
+ mediaOptions.publishMicrophoneTrack = true;
+ mediaOptions.channelProfile = CHANNEL_PROFILE_LIVE_BROADCASTING;
+ mediaOptions.clientRoleType = CLIENT_ROLE_BROADCASTER;
+ }
+ rtcSecondConnection.channelId = secondChannelName.c_str();
+ rtcSecondConnection.localUid = secondChannelUid;
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ ((agora::rtc::IRtcEngineEx*)agoraEngine)->joinChannelEx(secondChannelToken.c_str(), rtcSecondConnection, mediaOptions, this);
+ });
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget:: joinChannelEx"));
+ isSecondChannelJoined = true;
+ }
+ }
+ ```
+
+
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/index.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/index.mdx
index 49bccb086..96b874ff2 100644
--- a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/index.mdx
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/index.mdx
@@ -7,6 +7,8 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
import Flutter from './flutter.mdx'
+import Unreal from './unreal.mdx'
+
@@ -17,3 +19,5 @@ import Flutter from './flutter.mdx'
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/unreal.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/unreal.mdx
new file mode 100644
index 000000000..ee46421a1
--- /dev/null
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/project-test/unreal.mdx
@@ -0,0 +1,53 @@
+
+
+### Test channel media Relay
+
+1. In Unreal Editor, open `MyUserWidget.cpp` and update `appId`, `channelName` and `destChannelName`.
+
+2. [Generate a temporary token](../reference/manage-agora-account#generate-a-temporary-token) in using `appId` and `channelName`. Use it to update `token` in `MyUserWidget.h`.
+
+3. Generate a second token in using `appId` and `destChannelName`. Use it to update `destSourceChannelToken` in `MyUserWidget.h`.
+
+4. In your browser, navigate to the web demo and update _App ID_, _Channel_, and _Token_ with the values for your temporary token, then click **Join**.
+
+5. Repeat the previous step on a second device, but this time use `appId`, `destChannelName`, and `destSourceChannelToken` to **Join** the channel.
+
+7. In Unreal Editor, click **Play**. A moment later, you see the game running on your development device.
+
+ If this is the first time you run your app, grant camera and microphone permissions.
+
+8. Select **Host** and press **Join**. You see the video from the web browser demo app connected to `channelName` in the remote view of your development device.
+
+9. Press **Start Media Relay**. You see the video from the web browser demo app connected to `channelName` relayed to the web browser demo app connected to `destChannelName`.
+
+10. Press **Stop Media Relay**. The media relaying is stopped.
+
+### Test joining multiple channels
+
+1. In Unreal Editor, open `MyUserWidget.h` and update `appId`, `channelName` and `secondChannelName`.
+
+2. [Generate a temporary token](../reference/manage-agora-account#generate-a-temporary-token) in using `appId` and `channelName`. Use it to update `token` in `MyUserWidget.h`.
+
+3. Generate a second token in using `appId` and `secondChannelName`. Use it to update `secondChannelToken` in `MyUserWidget.h`.
+
+4. In your browser, navigate to the web demo and update _App ID_, _Channel_, and _Token_ with the values for your temporary token, then click **Join**.
+
+5. Repeat the previous step on a second device, but this time use `appId`, `secondChannelName`, and `secondChannelToken` to **Join** the channel.
+
+7. In Unreal Editor, click **Play**. A moment later, you see the game running on your development device.
+
+ If this is the first time you run your app, grant camera and microphone permissions.
+
+8. Select **Audience**
+
+ 1. Press **Join**. You see the video from the web browser demo app connected to `channelName` in the remote view of your development device.
+ 2. Press **Join Second Channel**. You see the video from the web browser demo app connected to `secondChannelName` in the remote view of your development device.
+ 3. Press the same button and then **Leave** to exit both channels.
+
+9. Select **Host**
+
+ 1. Press **Join**. You see the local video in the local view of your development device. The web browser demo app connected to `channelName` shows the video from your development device.
+ 2. Press **Join Second Channel**. You see the local video in the local view of your development device. The web browser demo app connected to `secondChannelName` shows the video from your development device.
+ 3. Press the same button and then **Leave** to exit both channels.
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/index.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/index.mdx
index 4379dcab1..73a831b23 100644
--- a/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/index.mdx
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/index.mdx
@@ -7,6 +7,7 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
import Flutter from './flutter.mdx'
+import Unreal from './unreal.mdx'
@@ -16,4 +17,5 @@ import Flutter from './flutter.mdx'
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/unreal.mdx b/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/unreal.mdx
new file mode 100644
index 000000000..3f420e63d
--- /dev/null
+++ b/shared/video-sdk/develop/live-streaming-over-multiple-channels/reference/unreal.mdx
@@ -0,0 +1,17 @@
+
+
+
+### API reference
+
+* RtcEngineEx
+* joinChannelEx
+* leaveChannelEx
+* RtcConnection
+* startChannelMediaRelay
+* stopChannelMediaRelay
+* updateChannelMediaRelay
+* pauseAllChannelMediaRelay
+* resumeAllChannelMediaRelay
+* onChannelMediaRelayStateChanged
+* onChannelMediaRelayEvent
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/migration-guide/unreal.mdx b/shared/video-sdk/develop/migration-guide/unreal.mdx
new file mode 100644
index 000000000..03428b9d2
--- /dev/null
+++ b/shared/video-sdk/develop/migration-guide/unreal.mdx
@@ -0,0 +1,5 @@
+
+
+ for Unreal Engine v4.x is a new product. There are no migration steps.
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/play-media/project-implementation/index.mdx b/shared/video-sdk/develop/play-media/project-implementation/index.mdx
index 95f43012c..6b071573e 100644
--- a/shared/video-sdk/develop/play-media/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/play-media/project-implementation/index.mdx
@@ -6,6 +6,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx';
@@ -17,4 +19,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/play-media/project-implementation/unreal.mdx b/shared/video-sdk/develop/play-media/project-implementation/unreal.mdx
new file mode 100644
index 000000000..66bd0b289
--- /dev/null
+++ b/shared/video-sdk/develop/play-media/project-implementation/unreal.mdx
@@ -0,0 +1,360 @@
+
+
+### Implement the user interface
+
+In a real-world application, you provide several buttons to enable a user to open, play, pause, and stop playing files in the media player. In this simple example, you use a single `Button` to demonstrate the basic media player functions. You also add a `Slider` to display the play progress to the user.
+
+To add the UI elements, take the following steps:
+
+1. **Add a slider control to show media progress**
+
+ In **Content Browser**:
+
+ 1. Navigate to the content folder, then double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+ 1. Drag **Slider** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A slider appears on the canvas panel.
+
+ 1. In **Details**, rename **Slider_0** to `mediaProgressBar`, then change the following coordinates:
+
+ * **Position X** - 449
+ * **Position Y** - 272
+ * **Size X** - 869
+ * **Size Y** - 40
+
+
+2. **Add a Button to the UI**
+
+ 1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `mediaButton`, then change the following coordinates:
+
+ * **Position X** - 595
+ * **Position Y** - 960
+ * **Size X** - 378
+ * **Size Y** - 60
+
+ 1. Add a text widget for the media button. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it on `mediaButton`.
+
+ 1. In **Details**, rename **Text_0** to `mediaButtonText`, then change the **Text** field to `Open Media File`.
+
+### Handle the system logic
+
+To set up your project to use the media player APIs and access the UI elements, take the following steps:
+
+1. **Add the required libraries**
+
+ To import the required and user widget libraries, in `MyUserWidget.h`, add the following header files before `#include "MyUserWidget.generated.h"`:
+
+ ``` cpp
+ #include
+ #include
+ using namespace agora::base;
+ #include "Components/Slider.h"
+ #include "Components/TextBlock.h"
+ ```
+
+
+ ``` cpp
+ #include
+ #include
+ using namespace agora::base;
+ #include "Components/Slider.h"
+ ```
+
+
+2. **Declare the variables you need**
+
+ To create and manage an instance of the media player and access the UI elements, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ``` cpp
+ public:
+    agora_refptr<IMediaPlayer> mediaPlayer; // Instance of the media player
+ bool isMediaPlaying = false;
+ long mediaDuration = 0;
+ class MediaPlayerEventHandler* handler;
+ // In a real world app, you declare the media location variable with an empty string
+ // and update it when a user chooses a media file from a local or remote source.
+ std::string mediaLocation =
+ "https://www.appsloveworld.com/wp-content/uploads/2018/10/640.mp4";
+ UPROPERTY(VisibleAnywhere, meta = (BindWidget))
+ UButton* mediaButton = nullptr;
+ UPROPERTY(VisibleAnywhere, meta = (BindWidget))
+ USlider* mediaProgressBar;
+ UPROPERTY(VisibleAnywhere, meta = (BindWidget))
+ UTextBlock* mediaButtonText;
+ ```
+
+3. **Set up an event listener for the button**
+
+    To set up an event listener for the media player button, do the following:
+
+ 1. In `MyUserWidget.h`, add the following methods to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnMediaPlayerButtonClick();
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ mediaButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnMediaPlayerButtonClick);
+ ```
+
+### Implement media player functions
+
+To implement playing and publishing media files in your , take the following steps:
+
+1. **Open, play and pause media files**
+
+ When a user presses the button for the first time, you create an instance of the media player, set its `mediaPlayerObserver` to receive the callbacks, and open the media file. When the user presses the button again, you play the media file. On subsequent button presses, you pause or resume playing the media file alternately. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::OnMediaPlayerButtonClick()
+ {
+ // Initialize the mediaPlayer and open a media file.
+ if (mediaPlayer == nullptr)
+ {
+ // Create an instance of the media player
+ mediaPlayer = agoraEngine->createMediaPlayer().get();
+ // Set the mediaPlayerObserver to receive callbacks
+ handler = new MediaPlayerEventHandler(this);
+ mediaPlayer->registerPlayerSourceObserver(handler);
+ // Open the media file.
+ mediaPlayer->open(mediaLocation.c_str(), 0);
+ // Update the UI.
+ mediaButtonText->SetText(FText::FromString("Opening Media File..."));
+ return;
+ }
+ // Set up the local video container to handle the media player output
+ // or the camera stream, alternately.
+ isMediaPlaying = !isMediaPlaying;
+ // Set the stream publishing options.
+ updateChannelPublishOptions(isMediaPlaying);
+ // Display the stream locally.
+ setupLocalVideo(isMediaPlaying);
+ if (isMediaPlaying)
+ {
+ // Start or resume playing media
+ if (mediaPlayer->getState() == media::base::PLAYER_STATE_OPEN_COMPLETED)
+ {
+ mediaPlayer->play();
+ }
+ else if (mediaPlayer->getState() == media::base::PLAYER_STATE_PAUSED)
+ {
+ mediaPlayer->resume();
+ }
+ mediaButtonText->SetText(FText::FromString("Pause Playing Media"));
+ }
+ else
+ {
+ if (mediaPlayer->getState() == media::base::PLAYER_STATE_PLAYING)
+ {
+ // Pause media file
+ mediaPlayer->pause();
+ mediaButtonText->SetText(FText::FromString("Resume Playing Media"));
+ }
+ }
+ }
+ ```
+
+2. **Manage media player callbacks**
+
+    The `IMediaPlayerSourceObserver` implements the media player callbacks. You create an instance of `IMediaPlayerSourceObserver` and register it with the media player instance. When the player state changes, you take appropriate actions to update the UI in `onPlayerSourceStateChanged`. You use the `onPositionChanged` callback to update the progress bar. To set up an instance of `IMediaPlayerSourceObserver`, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following class after `UMyUserWidget`:
+
+ ``` cpp
+ class MediaPlayerEventHandler : public IMediaPlayerSourceObserver
+ {
+ public:
+ MediaPlayerEventHandler(UMyUserWidget* MediaPlayer)
+ {
+ this->agoraImplementation = MediaPlayer;
+ }
+ // Required to implement IMediaPlayerObserver.
+ void onPlayerSourceStateChanged(media::base::MEDIA_PLAYER_STATE state, media::base::MEDIA_PLAYER_ERROR ec) override;
+ void onPlayerEvent(media::base::MEDIA_PLAYER_EVENT eventCode, int64_t elapsedTime, const char* message) override;
+ void onPositionChanged(int64_t position_ms) override;
+ void onMetaData(const void* data, int length) override;
+ void onPlayBufferUpdated(int64_t playCachedBuffer) override;
+ void onPreloadEvent(const char* src, media::base::PLAYER_PRELOAD_EVENT event) override;
+ void onCompleted() override;
+ void onAgoraCDNTokenWillExpire() override;
+ void onPlayerSrcInfoChanged(const media::base::SrcInfo& from, const media::base::SrcInfo& to) override;
+ void onPlayerInfoUpdated(const media::base::PlayerUpdatedInfo& info) override;
+ void onAudioVolumeIndication(int volume) override;
+ private:
+ UMyUserWidget* agoraImplementation;
+ };
+ ```
+
+ 1. In `MyUserWidget.cpp`, add the following callbacks before `setupVideoSDKEngine`:
+
+ ``` cpp
+ void MediaPlayerEventHandler::onPlayerSourceStateChanged(media::base::MEDIA_PLAYER_STATE state, media::base::MEDIA_PLAYER_ERROR ec)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ if (state == media::base::MEDIA_PLAYER_STATE::PLAYER_STATE_OPEN_COMPLETED)
+ {
+ // Media file opened successfully
+ int64_t mediaDuration;
+ // Get the file duration.
+ agoraImplementation->mediaPlayer->getDuration(mediaDuration);
+ // Update the UI
+ agoraImplementation->mediaButtonText->SetText(FText::FromString("Play Media File"));
+ // Set the maximum value of the control slider to mediaDuration.
+ agoraImplementation->mediaProgressBar->SetMaxValue(mediaDuration);
+ agoraImplementation->mediaProgressBar->SetMinValue(0);
+ }
+ else if (state == media::base::MEDIA_PLAYER_STATE::PLAYER_STATE_PLAYBACK_ALL_LOOPS_COMPLETED)
+ {
+ agoraImplementation->isMediaPlaying = false;
+ // Media file finished playing.
+ agoraImplementation->mediaButtonText->SetText(FText::FromString("Load Media File"));
+ // Reset the progress bar.
+ agoraImplementation->mediaProgressBar->SetValue(0);
+ // Restore the local video view.
+ agoraImplementation->setupLocalVideo(false);
+ // Re-publish the local audio and video in the channel.
+ agoraImplementation->updateChannelPublishOptions(false);
+ // Clean up
+ agoraImplementation->mediaPlayer->Release();
+ agoraImplementation->mediaPlayer = nullptr;
+ }
+ });
+ }
+ void MediaPlayerEventHandler::onPositionChanged(int64_t position_ms)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ agoraImplementation->mediaProgressBar->SetValue(position_ms);
+ });
+ }
+ void MediaPlayerEventHandler::onMetaData(const void* data, int length)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onPlayBufferUpdated(int64_t playCachedBuffer)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onPreloadEvent(const char* src, media::base::PLAYER_PRELOAD_EVENT event)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onCompleted()
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onAgoraCDNTokenWillExpire()
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onPlayerSrcInfoChanged(const media::base::SrcInfo& from, const media::base::SrcInfo& to)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onPlayerInfoUpdated(const media::base::PlayerUpdatedInfo& info)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onAudioVolumeIndication(int volume)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ void MediaPlayerEventHandler::onPlayerEvent(media::base::MEDIA_PLAYER_EVENT eventCode, int64_t elapsedTime, const char* message)
+ {
+ // Required to implement IMediaPlayerObserver
+ }
+ ```
+
+3. **Configure to publish the media player stream**
+
+    You use `ChannelMediaOptions` and the `updateChannelMediaOptions` method to specify the type of stream to publish. To switch between publishing the media player stream and the camera and microphone streams, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget` after `void OnMediaPlayerButtonClick();`
+
+ ``` cpp
+    // Declare a method to switch publishing between the local camera and microphone streams and the media file stream.
+ void updateChannelPublishOptions(bool publishMediaPlayer);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following after `setupVideoSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::updateChannelPublishOptions(bool publishMediaPlayer)
+ {
+ // You use ChannelMediaOptions to change channel media options.
+ ChannelMediaOptions channelOptions;
+ channelOptions.publishMediaPlayerAudioTrack = publishMediaPlayer;
+ channelOptions.publishMediaPlayerVideoTrack = publishMediaPlayer;
+ channelOptions.publishMicrophoneTrack = !publishMediaPlayer;
+ channelOptions.publishCameraTrack = !publishMediaPlayer;
+ if (publishMediaPlayer)
+ {
+ channelOptions.publishMediaPlayerId = mediaPlayer->getMediaPlayerId();
+ }
+ agoraEngine->updateChannelMediaOptions(channelOptions);
+ }
+ ```
+
+4. **Display media player output locally**
+
+    To display the media file, you call `enableLocalVideo(false)` to disable the local video capturer, set up a video canvas with its `sourceType` property set to `VIDEO_SOURCE_MEDIA_PLAYER`, and then call `setupLocalVideo` with this canvas to preview the media stream. To implement this logic, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget` after `void OnMediaPlayerButtonClick();`
+
+ ``` cpp
+ // Declare a method to switch between the local video and media file output.
+ void setupLocalVideo(bool forMediaPlayer);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the method after `setupLocalVideo`:
+
+ ``` cpp
+ void UMyUserWidget::setupLocalVideo(bool forMediaPlayer)
+ {
+ // Setup a video canvas to switch between streams.
+ if (!forMediaPlayer)
+ {
+ agoraEngine->enableLocalVideo(true);
+ // Preview the local video track..
+ VideoCanvas videoCanvas;
+ videoCanvas.view = localView;
+ videoCanvas.uid = 0;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_CAMERA;
+ agoraEngine->setupLocalVideo(videoCanvas);
+ }
+ else
+ {
+ agoraEngine->enableLocalVideo(false);
+ // Preview the media file track.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = localView;
+ videoCanvas.uid = mediaPlayer->getMediaPlayerId();
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_MEDIA_PLAYER;
+ agoraEngine->setupLocalVideo(videoCanvas);
+ }
+ }
+ ```
+
+5. **Clean up when you close the **
+
+ To free up resources when you exit the , in `MyUserWidget.cpp` add the following lines to `NativeDestruct` before `agoraEngine->unregisterEventHandler(this);`:
+
+ ``` cpp
+ agoraEngine->destroyMediaPlayer(mediaPlayer);
+ mediaPlayer.reset();
+ if (handler != nullptr)
+ {
+ delete handler;
+ handler = nullptr;
+ }
+ ```
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/play-media/project-test/index.mdx b/shared/video-sdk/develop/play-media/project-test/index.mdx
index 1d2425015..9699232bf 100644
--- a/shared/video-sdk/develop/play-media/project-test/index.mdx
+++ b/shared/video-sdk/develop/play-media/project-test/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -17,4 +18,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/play-media/project-test/unreal.mdx b/shared/video-sdk/develop/play-media/project-test/unreal.mdx
new file mode 100644
index 000000000..794622dad
--- /dev/null
+++ b/shared/video-sdk/develop/play-media/project-test/unreal.mdx
@@ -0,0 +1,34 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+1. In Unreal Editor, click **Play**. A moment later, you see the project running on your development device.
+
+ If this is the first time you run the , grant camera and microphone permissions.
+
+
+6. Click **Join** to start a call.
+
+
+
+6. Select **Host** and press **Join** to start a call.
+
+
+7. Press **Open Media File**.
+
+    After a short while, the button text changes to `Play Media File`, confirming that the media file has been opened successfully.
+
+1. Press **Play Media File**
+
+ You see the media file played both locally and in the web demo app. The progress bar indicates the play progress.
+
+1. Press **Pause Playing Media**
+
+ Media publishing is paused and the camera and microphone publishing is resumed.
+
+1. Press **Resume Playing Media**
+
+ You see that the media file resumes playing.
+
+1. Wait for the media file to finish playing. When the progress bar reaches the end, the media player publishing is stopped and camera and microphone publishing is resumed.
+
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/android.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/android.mdx
index 05d5c363d..c2c1ef945 100644
--- a/shared/video-sdk/develop/product-workflow/project-implementation/android.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/android.mdx
@@ -3,7 +3,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want to control, you typically add a volume control UI element such as a SeekBar to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a checkBox to the interface for each user. In this example, you add a `SeekBar` and a `CheckBox` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing.
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a SeekBar to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a checkBox to the interface for each user. In this example, you add a `SeekBar` and a `CheckBox` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing.
To add the UI elements, in `/app/res/layout/activity_main.xml`, add the following code before ``:
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/flutter.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/flutter.mdx
index 93ee46b89..b5c6a743b 100644
--- a/shared/video-sdk/develop/product-workflow/project-implementation/flutter.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/flutter.mdx
@@ -2,7 +2,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want to control, you typically add a volume control UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `Checkbox` to the interface for each user. In this example, you add a `Slider` and a `Checkbox` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing.
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `Checkbox` to the interface for each user. In this example, you add a `Slider` and a `Checkbox` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing.
To add the UI elements, in `/lib/main.dart`, add the following code to the `build` method after `ListView(... children: [`:
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/index.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/index.mdx
index 500ff1145..290a49c6d 100644
--- a/shared/video-sdk/develop/product-workflow/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/index.mdx
@@ -7,6 +7,7 @@ import ReactNative from './react-native.mdx'
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
@@ -17,3 +18,5 @@ import Windows from './windows.mdx';
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/swift.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/swift.mdx
index dfea602ab..35f07649a 100644
--- a/shared/video-sdk/develop/product-workflow/project-implementation/swift.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/swift.mdx
@@ -7,7 +7,7 @@ To implement this functionality for a UIKit based app, retrieve an Agora Engine
### Implement the user interface
-In a real-world application, for each volume setting you want to control, you typically add a volume control UI element such as a UISlider to the audio configuration panel. To enable the user to mute local or remote audio, you add a UISwitch to the interface for each user. In this example, you add a `UISlider` and a `UISwitch` to the UI to test different volume settings. For screen sharing, you add a `UIButton` to start and stop sharing.
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a UISlider to the audio configuration panel. To enable the user to mute local or remote audio, you add a UISwitch to the interface for each user. In this example, you add a `UISlider` and a `UISwitch` to the UI to test different volume settings. For screen sharing, you add a `UIButton` to start and stop sharing.
To create this user interface, in the `ViewController` class:
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/unity.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/unity.mdx
index 02e5cc35b..3abd2e5ae 100644
--- a/shared/video-sdk/develop/product-workflow/project-implementation/unity.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/unity.mdx
@@ -2,7 +2,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want to control, you typically add a volume control UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add Toggle to the interface for each user. In this example, you add a `Button` and a `Toggle` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing. To implement this UI, take the following steps:
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add Toggle to the interface for each user. In this example, you add a `Button` and a `Toggle` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing. To implement this UI, take the following steps:
1. Right-click **Sample Scene**, then click **Game Object** > **UI** > **Button - TextMeshPro**. A button appears in the **Scene** Canvas.
diff --git a/shared/video-sdk/develop/product-workflow/project-implementation/unreal.mdx b/shared/video-sdk/develop/product-workflow/project-implementation/unreal.mdx
new file mode 100644
index 000000000..715532b6f
--- /dev/null
+++ b/shared/video-sdk/develop/product-workflow/project-implementation/unreal.mdx
@@ -0,0 +1,276 @@
+
+
+### Implement the user interface
+
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a Slider Control to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a Check Box to the interface for each user. In this example, you add a `Slider Control` and a `Check Box` to the UI to test different volume settings. For screen sharing, you add a `Button` to start and stop sharing. To implement this UI, take the following steps:
+
+1. **Add a slider control**
+
+ To add a slider control to the UI, take the following steps:
+
+ 1. In **Content Browser**, navigate to the `Content` folder, then double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+ 1. Drag **Slider** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A slider control appears on the canvas panel.
+
+ 1. In **Details**, rename **Slider_0** to `volumeSlider`, then change the following properties:
+
+ * **Position X** - 664
+ * **Position Y** - 816
+ * **Size X** - 250
+ * **Size Y** - 50
+
+2. **Add a button to start and stop screen sharing**
+
+ 1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `shareScreenBtn`, then change the following properties:
+
+ * **Position X** - 759
+ * **Position Y** - 960
+ * **Size X** - 222
+ * **Size Y** - 60
+
+ 1. Add a **Text** for the button. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `shareScreenBtn`. Then, in **Details**, update the **Text** field to `Share Screen`.
+
+3. **Add a Check Box**
+
+ 1. Drag **Check Box** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ 1. In **Details**, rename **CheckBox_0** to `muteCheckBox`, then change the following properties in **Details**.
+
+ * **Pos X** - 1040
+ * **Pos Y** - 868
+
+ 1. Add a text widget for `muteCheckBox`. Drag **Text** from **Palette Panel** to **Hierarchy** and drop it over `muteCheckBox`, and then in **Details** change the **Text** field to `Mute`.
+
+
+### Handle the system logic
+
+To set up access to the UI elements and declare the variables you need, take the following steps:
+
+1. **Declare the variables you need**
+
+ To access and use the UI elements and apply workflow settings, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ // A variable to access the share screen button.
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* shareScreenBtn = nullptr;
+ // A variable to access Mute check box.
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UCheckBox* muteCheckBox = nullptr;
+ // A variable to access the volume slider .
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ USlider* volumeSlider;
+ // Boolean variable to track the screen sharing state.
+ bool isScreenShare = false;
+ // A Boolean variable to track remote user mute and un-mute state.
+ bool isChecked = false;
+ ```
+
+1. **Add the required header files**
+
+
+ To setup access to the UI elements, in `MyUserWidget.h`, add the following header files before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Slider.h"
+ #include "Components/CheckBox.h"
+ ```
+
+
+
+ To setup access to the UI elements, in `MyUserWidget.h`, add the following header file before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Slider.h"
+ ```
+
+
+1. **Set up event listeners for the UI elements**
+
+    To set up event listeners for the screen share button, mute check box, and volume slider, do the following:
+
+    1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnScreenShareButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnMuteCheckboxChanged(bool bIsChecked);
+ UFUNCTION(BlueprintCallable)
+ void OnSliderValueChanged(float volume);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ shareScreenBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnScreenShareButtonClick);
+ muteCheckBox->OnCheckStateChanged.AddDynamic(this, &UMyUserWidget::OnMuteCheckboxChanged);
+ volumeSlider->OnValueChanged.AddDynamic(this, &UMyUserWidget::OnSliderValueChanged);
+ volumeSlider->SetMaxValue(100);
+ volumeSlider->SetMinValue(0);
+ ```
+
+### Implement screen sharing, volume control, and mute
+
+To implement these features in your , take the following steps:
+
+1. **Adjust or mute the volume**
+
+    To adjust the recording signal volume, you set up an event listener for the slider control and call `adjustRecordingSignalVolume` whenever the user drags the slider left or right. To implement this workflow, in `MyUserWidget.cpp`, add the following method before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnSliderValueChanged(float volume)
+ {
+ // Use the slider value to adjust the recording volume.
+ agoraEngine->adjustRecordingSignalVolume(volume);
+ }
+ ```
+
+1. **Mute and un-mute the remote user**
+
+    To mute and un-mute the remote user, you set up an event listener for the mute check box and call `muteRemoteAudioStream` with `remoteUId` when the user selects or deselects the mute checkbox. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnMuteCheckboxChanged(bool isMute)
+ {
+ // Mute and un-mute the remote user.
+ if (remoteUId == NULL)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("No remote user in the channel"));
+ return;
+ }
+ agoraEngine->muteRemoteAudioStream(remoteUId, isMute);
+ }
+ ```
+
+1. **Configure to publish the screen stream**
+
+ You use `ChannelMediaOptions` and the `updateChannelMediaOptions` method to specify the type of stream to publish. To switch between publishing screen and camera stream, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following method to `UMyUserWidget`:
+
+ ``` cpp
+ // Declare a method to switch between the local video and screen tracks.
+ void updateChannelPublishOptions(bool value);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the method after `setupVideoSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::updateChannelPublishOptions(bool publishScreenTrack)
+ {
+ // You use ChannelMediaOptions to change channel media options.
+ ChannelMediaOptions channelOptions;
+ channelOptions.publishScreenTrack = publishScreenTrack;
+ channelOptions.publishMicrophoneTrack = publishScreenTrack;
+ channelOptions.publishCameraTrack = !publishScreenTrack;
+ agoraEngine->updateChannelMediaOptions(channelOptions);
+ }
+ ```
+
+1. **Preview the screen track**
+
+ Create a `VideoCanvas` and use it in the `setupLocalVideo` method of the to switch between displaying the screen track and the camera stream. To implement this workflow, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ``` cpp
+ // Declare a method to preview the screen track.
+ void setupLocalVideo(bool value);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the method before `setupVideoSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::setupLocalVideo(bool screenTrack)
+ {
+ // Setup a canvas to switch between the local video and screen preview.
+ VideoCanvas canvas;
+ canvas.renderMode = media::base::RENDER_MODE_FIT;
+ canvas.uid = 0;
+ canvas.view = localView;
+ if (screenTrack)
+ {
+ // Disable the local video capturer.
+ agoraEngine->enableLocalVideo(false);
+ // Select a video source.
+ canvas.sourceType = VIDEO_SOURCE_SCREEN_PRIMARY;
+ // Use the video canvas to setup a local view for the screen track.
+ agoraEngine->setupLocalVideo(canvas);
+ // Preview the screen track.
+ agoraEngine->startPreview();
+ }
+ else
+ {
+ // Enable the local video capturer.
+ agoraEngine->enableLocalVideo(true);
+ // Select a video source.
+ canvas.sourceType = VIDEO_SOURCE_CAMERA;
+ // Use the video canvas to setup a local video view.
+ agoraEngine->setupLocalVideo(canvas);
+ // Preview the local video.
+ agoraEngine->startPreview();
+ }
+ }
+ ```
+
+1. **Start or stop screen sharing**
+
+ When the user presses **Share Screen**, your does the following:
+
+    - Calls `getScreenCaptureSources` to get the list of available screens and windows.
+    - Calls `getSourceInfo` to retrieve the first source in the list.
+    - Calls `startScreenCaptureByDisplayId` or `startScreenCaptureByWindowId`, depending on the source type, and passes the ID of the selected source.
+    - Calls `updateChannelPublishOptions` and passes `true` as an argument to publish the screen track.
+ - Calls `setupLocalVideo` and passes `true` as an argument to preview the screen track.
+
+ When the user presses the button again, your :
+
+ - Calls `stopScreenCapture` to stop screen sharing.
+    - Calls `updateChannelPublishOptions` and passes `false` to resume publishing the local video track.
+ - Calls `setupLocalVideo` and passes `false` as an argument to restore the local video preview.
+
+ To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnScreenShareButtonClick()
+ {
+ if (!isScreenShare)
+ {
+
+ ScreenCaptureParameters capParam;
+ const agora::rtc::Rectangle regionRect;
+ IScreenCaptureSourceList* infos = agoraEngine->getScreenCaptureSources(SIZE(), SIZE(), true);
+ if (infos == nullptr)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("GetScreenDisplay is null"));
+ return;
+ }
+ ScreenCaptureSourceInfo info = infos->getSourceInfo(0);
+ if (info.type == ScreenCaptureSourceType_Screen)
+ {
+ agoraEngine->startScreenCaptureByDisplayId((uint64)(info.sourceId), regionRect, capParam);
+ }
+ else if (info.type == ScreenCaptureSourceType_Window)
+ {
+ agoraEngine->startScreenCaptureByWindowId(info.sourceId, regionRect, capParam);
+ }
+ isScreenShare = !isScreenShare;
+ setupLocalVideo(isScreenShare);
+ updateChannelPublishOptions(isScreenShare);
+ }
+ else
+ {
+ agoraEngine->stopScreenCapture();
+ isScreenShare = !isScreenShare;
+ setupLocalVideo(isScreenShare);
+ updateChannelPublishOptions(isScreenShare);
+            UE_LOG(LogTemp, Warning, TEXT("Screen sharing stopped"));
+ }
+ }
+ ```
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/product-workflow/project-test/index.mdx b/shared/video-sdk/develop/product-workflow/project-test/index.mdx
index 6a9076b7d..852041e5b 100644
--- a/shared/video-sdk/develop/product-workflow/project-test/index.mdx
+++ b/shared/video-sdk/develop/product-workflow/project-test/index.mdx
@@ -6,6 +6,7 @@ import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx';
import Unity from './unity.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -18,3 +19,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/develop/product-workflow/project-test/unreal.mdx b/shared/video-sdk/develop/product-workflow/project-test/unreal.mdx
new file mode 100644
index 000000000..bf0a37078
--- /dev/null
+++ b/shared/video-sdk/develop/product-workflow/project-test/unreal.mdx
@@ -0,0 +1,50 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run your app, grant camera and microphone permissions.
+
+
+5. Select a role using the check boxes.
+
+6. Click **Join** to start a call. Now, you can see yourself on the device screen and talk to the remote user using your .
+
+
+
+5. Click **Join** to start a call. Now, you can see yourself on the device screen and talk to the remote user using your .
+
+
+7. **Test volume control**
+
+ 1. Speak into your development device as you move the slider to the left and then right. You notice the volume decrease and then increase in the web demo app as the recording volume changes.
+
+ 1. Tap the **Mute** `CheckBox` while you speak into the microphone connected to the web demo app.
+    You notice that the remote audio is muted on your development device.
+
+ 1. To test other volume control methods, in `OnSliderValueChanged` replace the `adjustRecordingSignalVolume` call with one of the following:
+
+ ```cpp
+ agoraEngine->adjustPlaybackSignalVolume(volume);
+ agoraEngine->adjustUserPlaybackSignalVolume(remoteUId,volume);
+ agoraEngine->adjustAudioMixingVolume(volume);
+ agoraEngine->adjustAudioMixingPlayoutVolume(volume);
+ agoraEngine->adjustAudioMixingPublishVolume(volume);
+ agoraEngine->setInEarMonitoringVolume(volume);
+ ```
+ Run the app again and use the volume slider to test the change in the corresponding volume setting.
+
+ 1. To test other mute methods, in `OnMuteCheckboxChanged` replace the `muteRemoteAudioStream` call with one of the following:
+
+ ```cpp
+ agoraEngine->muteAllRemoteAudioStreams(isMute);
+ agoraEngine->muteLocalAudioStream(isMute);
+ ```
+ Run the app again and tap the `CheckBox` to test the effect of these mute methods.
+
+1. **Test screen sharing**
+
+    Press **Share Screen**. You see your device screen shared in the web demo app and a preview on your development device. Press the same button again; screen sharing stops and the camera stream is restored in the web demo app.
+
+
diff --git a/shared/video-sdk/develop/spatial-audio/project-implementation/index.mdx b/shared/video-sdk/develop/spatial-audio/project-implementation/index.mdx
index c957605ac..79d2e34e4 100644
--- a/shared/video-sdk/develop/spatial-audio/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/spatial-audio/project-implementation/index.mdx
@@ -5,6 +5,7 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
import Web from './web.mdx';
+import Unreal from './unreal.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
@@ -17,3 +18,4 @@ import ReactNative from './react-native.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/spatial-audio/project-implementation/unreal.mdx b/shared/video-sdk/develop/spatial-audio/project-implementation/unreal.mdx
new file mode 100644
index 000000000..1e66174c5
--- /dev/null
+++ b/shared/video-sdk/develop/spatial-audio/project-implementation/unreal.mdx
@@ -0,0 +1,182 @@
+
+
+
+### Implement the user interface
+
+In a real-world application, you report your local spatial position to a server in your environment and retrieve the positions of remote users in the channel from your server. In this simple example, you use a single `Button` to set the spatial position of a remote user.
+
+To add the button to the UI, do the following:
+
+ 1. In **Unreal Editor**, go to **Content Browser** and open the `Content` folder, then double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+ 1. Drag **Button** from the **Common** section of the **Palette** to the canvas panel.
+
+ A button appears on the canvas.
+
+ 1. In **Details**, rename **Button_0** to `spatialAudio`, then change the following properties:
+
+ * **Position X** - 1000
+ * **Position Y** - 888
+ * **Size X** - 210
+ * **Size Y** - 60
+
+ 1. Add a text widget for the spatial audio button. Drag **Text** from **Palette** > **Common** to **Hierarchy** and drop it over `spatialAudio`. Then, in **Details**, change the **Text** field to `Spatial Audio`.
+
+ 1. Save the blueprint and click **Compile**.
+
+    The question mark on the **Compile** button changes to a green check mark, indicating that you have successfully added and compiled the new widget.
+
+### Handle the system logic
+
+1. **Add the required libraries**
+
+ To set up and configure an instance of the spatial audio engine, in `MyUserWidget.h`, add the following header file before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "IAgoraSpatialAudio.h"
+ ```
+1. **Reference the spatial audio button**
+
+ To access the spatial audio button from code, in `MyUserWidget.h`, add the following property specifier to `UMyUserWidget`:
+
+ ```cpp
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* spatialAudio = nullptr;
+ ```
+
+1. **Set up an event listener for the button**
+
+    To bind an event listener to the spatial audio button, do the following:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnSpatialAudioButtonClick();
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ spatialAudio->OnClicked.AddDynamic(this, &UMyUserWidget::OnSpatialAudioButtonClick);
+ ```
+
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ spatialAudio->OnClicked.AddDynamic(this, &UMyUserWidget::OnSpatialAudioButtonClick);
+ ```
+
+
+### Implement spatial audio
+
+To implement these features in your , take the following steps:
+
+1. **Declare the variables you need**
+
+ You create an instance of `ILocalSpatialAudioEngine` to configure spatial audio and set self and remote user positions. In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ ILocalSpatialAudioEngine *localSpatial = nullptr;
+ bool isSpatialAudio = false;
+ ```
+
+1. **Instantiate and configure the spatial audio engine**
+
+    To configure an instance of the spatial audio engine in your , do the following:
+
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+ ```cpp
+ void configureSpatialAudioEngine();
+ ```
+
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget` before `void setupVoiceSDKEngine();`:
+
+ ```cpp
+ void configureSpatialAudioEngine();
+ ```
+
+
+ 1. When your launches, you create an instance of `ILocalSpatialAudioEngine`, configure it and update the user's self position. To do this, in `MyUserWidget.cpp`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ void UMyUserWidget::configureSpatialAudioEngine()
+ {
+ LocalSpatialAudioConfig localSpatialAudioConfig;
+ localSpatialAudioConfig.rtcEngine = agoraEngine;
+ agoraEngine->queryInterface(AGORA_IID_LOCAL_SPATIAL_AUDIO, (void**)&localSpatial);
+ localSpatial->initialize(localSpatialAudioConfig);
+ // By default Agora subscribes to the audio streams of all remote users.
+ // Unsubscribe all remote users; otherwise, the audio reception range you set
+ // is invalid.
+ localSpatial->muteLocalAudioStream(true);
+ localSpatial->muteAllRemoteAudioStreams(true);
+
+ // Set the audio reception range, in meters, of the local user
+ localSpatial->setAudioRecvRange(50);
+ // Set the length, in meters, of unit distance
+ localSpatial->setDistanceUnit(1);
+
+ // Update self position
+ float pos[] = {0.0F, 0.0F, 0.0F};
+ float forward[] = {1.0F, 0.0F, 0.0F};
+ float right[] = {0.0F, 1.0F, 0.0F};
+ float up[] = {0.0F, 0.0F, 1.0F};
+ localSpatial->updateSelfPosition(pos, forward, right, up);
+ }
+ ```
+
+ 1. To execute this method at startup, in `MyUserWidget.cpp`, add the following line at the end of `NativeConstruct` method:
+
+ ```cpp
+ configureSpatialAudioEngine();
+ ```
+
+1. **Set the spatial position of a remote user**
+
+ To update the spatial position of a remote user, define the `RemoteVoicePositionInfo` and call `updateRemotePosition`. To implement this workflow, in `MyUserWidget.cpp`, add the following method to `UMyUserWidget`:
+
+ ```cpp
+ void UMyUserWidget::OnSpatialAudioButtonClick()
+ {
+ if (remoteUId == NULL)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("No remote user in the channel"));
+ return;
+ }
+ if (!isSpatialAudio)
+ {
+ RemoteVoicePositionInfo positionInfo;
+            // Set the coordinates of the remote user in the world coordinate system.
+            // This parameter is an array of length 3; the three values represent
+            // the front, right, and top coordinates, in that order.
+            positionInfo.position[0] = 2.0;
+            positionInfo.position[1] = 4.0;
+            positionInfo.position[2] = 0.0;
+            // Set the unit vector of the x axis in the coordinate system.
+            // This parameter is an array of length 3; the three values represent
+            // the front, right, and top coordinates, in that order.
+ positionInfo.forward[0] = 1.0;
+ positionInfo.forward[1] = 0.0;
+ positionInfo.forward[2] = 0.0;
+ // Update the spatial position of the specified remote user
+ localSpatial->updateRemotePosition(remoteUId, positionInfo);
+ UE_LOG(LogTemp, Warning, TEXT("spatial position of the remote user updated."));
+ isSpatialAudio = true;
+ }
+ else
+ {
+ localSpatial->removeRemotePosition(remoteUId);
+ isSpatialAudio = false;
+ }
+ }
+ ```
+
+Spatial audio is currently experimental. Contact sales-us@agora.io to activate this function. If you need further support, please contact [ technical support](mailto:support@agora.io).
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/spatial-audio/project-test/index.mdx b/shared/video-sdk/develop/spatial-audio/project-test/index.mdx
index b8ecbb302..4673ca6aa 100644
--- a/shared/video-sdk/develop/spatial-audio/project-test/index.mdx
+++ b/shared/video-sdk/develop/spatial-audio/project-test/index.mdx
@@ -7,6 +7,7 @@ import Windows from './windows.mdx';
import Web from './web.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
+import Unreal from './unreal.mdx';
@@ -17,3 +18,5 @@ import ReactNative from './react-native.mdx';
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/spatial-audio/project-test/unreal.mdx b/shared/video-sdk/develop/spatial-audio/project-test/unreal.mdx
new file mode 100644
index 000000000..55953895a
--- /dev/null
+++ b/shared/video-sdk/develop/spatial-audio/project-test/unreal.mdx
@@ -0,0 +1,38 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run your app, grant camera and microphone permissions.
+
+
+6. Select a user role. Press **Join** to connect to the same channel as your web demo.
+
+
+
+6. Press **Join** to connect to the same channel as your web demo.
+
+
+7. Test spatial audio effects for users
+
+ 1. Press **Spatial Audio**. Your updates the position of the remote user in the spatial audio engine.
+
+ 1. Listen to the audio of the remote user. You feel that the location of the remote user has shifted.
+
+8. Test spatial audio effects for media player
+
+    1. To set up the spatial audio position of your media player, add [Media Playing](/video-calling/develop/play-media) to your .
+
+ 1. Replace the call to `localSpatial->updateRemotePosition` in `OnSpatialAudioButtonClick` with the following:
+
+ ```cpp
+    // To configure spatial audio for the media player, use the following statement.
+    localSpatial->updatePlayerPositionInfo(mediaPlayer->getMediaPlayerId(), positionInfo);
+ ```
+ 1. Press **Spatial Audio**
+
+ Your updates the spatial position of the media player in the spatial audio engine. Listen to the audio of the media player. You feel that the location of the media player has shifted.
+
+
+
diff --git a/shared/video-sdk/develop/spatial-audio/reference/index.mdx b/shared/video-sdk/develop/spatial-audio/reference/index.mdx
index 9173475dc..6fb9c7a37 100644
--- a/shared/video-sdk/develop/spatial-audio/reference/index.mdx
+++ b/shared/video-sdk/develop/spatial-audio/reference/index.mdx
@@ -4,6 +4,7 @@ import MacOS from './macos.mdx';
import Electron from './electron.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
import Web from './web.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
@@ -16,4 +17,5 @@ import ReactNative from './react-native.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/spatial-audio/reference/unreal.mdx b/shared/video-sdk/develop/spatial-audio/reference/unreal.mdx
new file mode 100644
index 000000000..83ba03b1a
--- /dev/null
+++ b/shared/video-sdk/develop/spatial-audio/reference/unreal.mdx
@@ -0,0 +1,33 @@
+
+
+
+### API Reference
+
+- ILocalSpatialAudioEngine
+
+- updateSelfPosition
+
+- updateRemotePosition
+
+- removeRemotePosition
+
+- clearRemotePositions
+
+- RemoteVoicePositionInfo
+
+
+
+- ILocalSpatialAudioEngine
+
+- updateSelfPosition
+
+- updateRemotePosition
+
+- removeRemotePosition
+
+- clearRemotePositions
+
+- RemoteVoicePositionInfo
+
+
+
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/index.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/index.mdx
index 0d715ff95..1bdfc3d6d 100644
--- a/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/index.mdx
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/index.mdx
@@ -7,6 +7,8 @@ import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import Windows from './windows.mdx'
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,5 @@ import Windows from './windows.mdx'
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/unreal.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/unreal.mdx
new file mode 100644
index 000000000..2b8f23530
--- /dev/null
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/project-implementation/unreal.mdx
@@ -0,0 +1,308 @@
+
+
+
+### Handle the system logic
+
+This section describes the steps required to use the relevant libraries, declare the necessary variables, and set up access to the UI elements.
+
+1. **Import the required Unreal library**
+
+ To add the texture library for the local and remote video rendering, in `MyUserWidget.h`, add the following statement before `#include "MyUserWidget.generated.h"`:
+
+ ``` cpp
+ #include "Engine/Texture2D.h"
+ ```
+
+2. **Define the variables to manage audio and video processing**
+
+    In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+    ``` cpp
+ UTexture2D* LocalRenderTexture;
+ FSlateBrush LocalRenderBrush;
+ UTexture2D* RemoteRenderTexture;
+ FSlateBrush RemoteRenderBrush;
+ agora::media::IMediaEngine* MediaEngine;
+ class VideoFrameEventHandler* handler;
+    TUniquePtr<FUpdateTextureRegion2D> UpdateTextureRegionProxy;
+ class AudioFrameEventHandler* audioHandler;
+ ```
+
+### Implement processing of raw video and audio data
+
+To register and use video and audio frame observers in your , take the following steps:
+
+1. **Set up the audio frame observer**
+
+    `IAudioFrameObserver` gives you access to each audio frame after it is captured and to each audio frame before it is played back. To set up `IAudioFrameObserver`, do the following:
+
+ 1. In `MyUserWidget.h`, add the following class after `UMyUserWidget`:
+
+ ```cpp
+ class AudioFrameEventHandler : public agora::media::IAudioFrameObserver
+ {
+ public:
+ AudioFrameEventHandler(UMyUserWidget* customAudioAndVideo)
+ {
+ agoraImplementation = customAudioAndVideo;
+ }
+ ~AudioFrameEventHandler() {}
+ bool onPlaybackAudioFrameBeforeMixing(const char* channelId, agora::rtc::uid_t uid, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onRecordAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onPlaybackAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onMixedAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onEarMonitoringAudioFrame(AudioFrame& audioFrame);
+ AudioParams getEarMonitoringAudioParams();
+ int getObservedAudioFramePosition() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getPlaybackAudioParams() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getRecordAudioParams() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getMixedAudioParams() override;
+ private:
+ UMyUserWidget* agoraImplementation;
+ agora::media::IAudioFrameObserverBase::AudioParams audioParams;
+ };
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following callbacks before `setupVideoSDKEngine`:
+
+ ```cpp
+ bool AudioFrameEventHandler::onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onEarMonitoringAudioFrame(AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getEarMonitoringAudioParams()
+ {
+        return audioParams;
+ }
+ int AudioFrameEventHandler::getObservedAudioFramePosition()
+ {
+ return (int)(AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_PLAYBACK |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_RECORD |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_BEFORE_MIXING |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_MIXED);
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getPlaybackAudioParams()
+ {
+        return audioParams;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getRecordAudioParams()
+ {
+        return audioParams;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getMixedAudioParams()
+ {
+        return audioParams;
+ }
+ ```
+
+2. **Set up the video frame observer**
+
+    `IVideoFrameObserver` gives you access to each local video frame after it is captured and to each remote video frame before it is played back. In this example, you read the captured video frames to create textures that you use to render the local and remote video frames. To set up `IVideoFrameObserver` in your , do the following:
+
+ 1. In `MyUserWidget.h`, add the following class after `UMyUserWidget`:
+
+ ``` cpp
+ class VideoFrameEventHandler : public agora::media::IVideoFrameObserver
+ {
+ public:
+ VideoFrameEventHandler(UMyUserWidget *customAudioAndVideo)
+ {
+ agoraImplementation = customAudioAndVideo;
+ }
+ ~VideoFrameEventHandler() {}
+ virtual bool onCaptureVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onPreEncodeVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onSecondaryCameraCaptureVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onSecondaryPreEncodeCameraVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onScreenCaptureVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onPreEncodeScreenVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onMediaPlayerVideoFrame(VideoFrame& videoFrame, int mediaPlayerId) override;
+ virtual bool onSecondaryScreenCaptureVideoFrame(VideoFrame& videoFrame) override;
+ virtual bool onSecondaryPreEncodeScreenVideoFrame(VideoFrame& videoFrame) override;
+ virtual agora::media::IVideoFrameObserver::VIDEO_FRAME_PROCESS_MODE getVideoFrameProcessMode() override;
+ virtual agora::media::base::VIDEO_PIXEL_FORMAT getVideoFormatPreference() override;
+ virtual bool getRotationApplied() override;
+ virtual bool getMirrorApplied() override;
+ virtual uint32_t getObservedFramePosition() override;
+ virtual bool isExternal() override;
+ virtual bool onRenderVideoFrame(const char* channelId, rtc::uid_t remoteUid, VideoFrame& videoFrame) override;
+ virtual bool onTranscodedVideoFrame(VideoFrame& videoFrame) override;
+ private:
+ UMyUserWidget* agoraImplementation;
+ };
+ ```
+ 2. In `MyUserWidget.cpp`, add the following callbacks before `NativeDestruct`:
+
+ ```cpp
+ bool VideoFrameEventHandler::onPreEncodeVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onPreEncodeVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onSecondaryCameraCaptureVideoFrame(VideoFrame& videoFrame)
+ {
+        UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onSecondaryCameraCaptureVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onSecondaryPreEncodeCameraVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onSecondaryPreEncodeCameraVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onScreenCaptureVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onScreenCaptureVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onPreEncodeScreenVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onPreEncodeScreenVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onMediaPlayerVideoFrame(VideoFrame& videoFrame, int mediaPlayerId)
+ {
+ return true;
+ }
+ bool VideoFrameEventHandler::onSecondaryScreenCaptureVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onSecondaryScreenCaptureVideoFrame"));
+ return true;
+ }
+ bool VideoFrameEventHandler::onSecondaryPreEncodeScreenVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onSecondaryPreEncodeScreenVideoFrame"));
+ return true;
+ }
+ agora::media::IVideoFrameObserver::VIDEO_FRAME_PROCESS_MODE VideoFrameEventHandler::getVideoFrameProcessMode()
+ {
+ return PROCESS_MODE_READ_WRITE;
+ }
+ agora::media::base::VIDEO_PIXEL_FORMAT VideoFrameEventHandler::getVideoFormatPreference()
+ {
+ return agora::media::base::VIDEO_PIXEL_RGBA;
+ }
+ bool VideoFrameEventHandler::getRotationApplied()
+ {
+ return true;
+ }
+ bool VideoFrameEventHandler::getMirrorApplied()
+ {
+ return true;
+ }
+ uint32_t VideoFrameEventHandler::getObservedFramePosition()
+ {
+ return agora::media::base::POSITION_POST_CAPTURER | agora::media::base::POSITION_PRE_RENDERER;
+ }
+ bool VideoFrameEventHandler::isExternal()
+ {
+ return true;
+ }
+ bool VideoFrameEventHandler::onRenderVideoFrame(const char* channelId, rtc::uid_t remoteUid, VideoFrame& videoFrame)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onRenderVideoFrame"));
+ if (agoraImplementation->RemoteRenderTexture == nullptr || !agoraImplementation->RemoteRenderTexture->IsValidLowLevel())
+ {
+ agoraImplementation->RemoteRenderTexture = UTexture2D::CreateTransient(videoFrame.width, videoFrame.height, PF_R8G8B8A8);
+ }
+ else
+ {
+ int yStride = videoFrame.yStride;
+                agoraImplementation->UpdateTextureRegionProxy = MakeUnique<FUpdateTextureRegion2D>(0, 0, 0, 0, videoFrame.width, videoFrame.height);
+                UTexture2D* tex = (UTexture2D*)agoraImplementation->RemoteRenderTexture;
+                // Copy the RGBA frame data into the texture's first mip level.
+                uint8* raw = (uint8*)tex->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
+                memcpy(raw, videoFrame.yBuffer, videoFrame.height * videoFrame.width * 4);
+                tex->GetPlatformData()->Mips[0].BulkData.Unlock();
+                // The mip data is owned by the texture; do not delete it here.
+                tex->UpdateTextureRegions(0, 1, agoraImplementation->UpdateTextureRegionProxy.Get(), yStride, (uint32)4, static_cast<uint8*>(raw));
+ agoraImplementation->RemoteRenderBrush.SetResourceObject(tex);
+ if (agoraImplementation->remoteView != nullptr)
+ {
+ agoraImplementation->remoteView->SetBrush(agoraImplementation->RemoteRenderBrush);
+ }
+ }
+ });
+ return true;
+ }
+ bool VideoFrameEventHandler::onCaptureVideoFrame(VideoFrame& videoFrame)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::onCaptureVideoFrame"));
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ if (agoraImplementation->LocalRenderTexture == nullptr || !agoraImplementation->LocalRenderTexture->IsValidLowLevel())
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::CreateTransient"));
+ agoraImplementation->LocalRenderTexture = UTexture2D::CreateTransient(videoFrame.width, videoFrame.height, PF_R8G8B8A8);
+ }
+ else
+ {
+ int yStride = videoFrame.yStride;
+                agoraImplementation->UpdateTextureRegionProxy = MakeUnique<FUpdateTextureRegion2D>(0, 0, 0, 0, videoFrame.width, videoFrame.height);
+                UTexture2D* tex = (UTexture2D*)agoraImplementation->LocalRenderTexture;
+                // Copy the RGBA frame data into the texture's first mip level.
+                uint8* raw = (uint8*)tex->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
+                memcpy(raw, videoFrame.yBuffer, videoFrame.height * videoFrame.width * 4);
+                tex->GetPlatformData()->Mips[0].BulkData.Unlock();
+                // The mip data is owned by the texture; do not delete it here.
+                tex->UpdateTextureRegions(0, 1, agoraImplementation->UpdateTextureRegionProxy.Get(), yStride, (uint32)4, static_cast<uint8*>(raw));
+ agoraImplementation->LocalRenderBrush.SetResourceObject(tex);
+ if (agoraImplementation->localView != nullptr)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("VideoFrameEventHandler::New brush applied"));
+ agoraImplementation->localView->SetBrush(agoraImplementation->LocalRenderBrush);
+ }
+ }
+ });
+ return true;
+ }
+ ```
+ Note that you must set the return value in `getVideoFrameProcessMode` to `PROCESS_MODE_READ_WRITE` in order for your raw data changes to take effect.
+
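+    For example, here is a minimal, hypothetical sketch of extra processing you could add at the top of `onCaptureVideoFrame`, before the frame is copied to the texture. It assumes the RGBA format requested in `getVideoFormatPreference` and simply dims every captured pixel; because the observer runs in read-write mode, the modified frame is what the SDK encodes and sends:
+
+    ``` cpp
+    // Hypothetical extra processing for onCaptureVideoFrame: dim the captured frame in place.
+    // Assumes getVideoFormatPreference() returns VIDEO_PIXEL_RGBA and
+    // getVideoFrameProcessMode() returns PROCESS_MODE_READ_WRITE.
+    uint8_t* rgba = videoFrame.yBuffer; // For RGBA frames, the pixel data is in yBuffer.
+    for (int row = 0; row < videoFrame.height; ++row)
+    {
+        uint8_t* line = rgba + row * videoFrame.yStride;
+        for (int col = 0; col < videoFrame.width * 4; col += 4)
+        {
+            line[col]     /= 2; // R
+            line[col + 1] /= 2; // G
+            line[col + 2] /= 2; // B
+            // line[col + 3] is the alpha channel; leave it unchanged.
+        }
+    }
+    ```
+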
+3. **Register the video and audio frame observers**
+
+ To receive callbacks declared in `IVideoFrameObserver` and `IAudioFrameObserver`, you must register the video and audio frame observers with the before joining a channel. To specify the format of audio frames captured by each `IAudioFrameObserver` callback, use the `setRecordingAudioFrameParameters`, `setMixedAudioFrameParameters` and `setPlaybackAudioFrameParameters` methods. To do this, in `MyUserWidget.cpp`, add the following at the end of `setupVideoSDKEngine`:
+
+ ``` cpp
+ agoraEngine->queryInterface(AGORA_IID_MEDIA_ENGINE, (void**)&MediaEngine);
+ handler = new VideoFrameEventHandler(this);
+ int res = MediaEngine->registerVideoFrameObserver(handler);
+ audioHandler = new AudioFrameEventHandler(this);
+ MediaEngine->registerAudioFrameObserver(audioHandler);
+ // Set the format of the captured raw audio data.
+ int SAMPLE_RATE = 16000, SAMPLE_NUM_OF_CHANNEL = 1, SAMPLES_PER_CALL = 1024;
+ agoraEngine->setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+ agoraEngine->setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+ agoraEngine->setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, SAMPLES_PER_CALL);
+ ```
+
+4. **Unregister the video and audio observers when you close the **
+
+ When you close the , you unregister the frame observers by calling the register frame observer method again with a `null` pointer. To do this, in `MyUserWidget.cpp`, add the following lines to `NativeDestruct` before `agoraEngine->unregisterEventHandler(this);`:
+
+ ``` cpp
+ MediaEngine->registerAudioFrameObserver(nullptr);
+ MediaEngine->registerVideoFrameObserver(nullptr);
+ ```
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/index.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/index.mdx
index 0d715ff95..c0518276c 100644
--- a/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/index.mdx
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/index.mdx
@@ -6,6 +6,8 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx'
@@ -16,4 +18,5 @@ import Windows from './windows.mdx'
+
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/unreal.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/unreal.mdx
new file mode 100644
index 000000000..f63dc43c3
--- /dev/null
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/project-test/unreal.mdx
@@ -0,0 +1,30 @@
+
+
+1. Generate a temporary token in .
+
+1. In your browser, navigate to the web demo and update _App ID_, _Channel_, and _Token_ with the values for your temporary token, then click **Join**.
+
+1. In Unreal Editor, open `MyUserWidget.h` and update `appId`, `channelName`, and `token` with the values for your temporary token.
+
+1. In Unreal Editor, click **Play**. A moment later you see the running on your device.
+
+ If this is the first time you run the project, grant microphone and camera access to your app.
+
+1. Press **Join** to see the video feed from the web app.
+
+ You see the local video in the local view and remote video in the remote view. This means your is rendering the captured video frames that you receive in the following callbacks:
+
+ - `onRenderVideoFrame`: Gets the remote user video frame data.
+
+ - `onCaptureVideoFrame`: Gets the captured video frame data.
+
+
+1. Test processing of raw audio data.
+
+ Edit the `IAudioFrameObserver` callbacks by adding code that processes the raw audio data you receive in the following callbacks:
+
+ - `onRecordAudioFrame`: Gets the captured audio frame data.
+
+ - `onPlaybackAudioFrame`: Gets the audio frame for playback.
+
+
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/reference/index.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/reference/index.mdx
index 67f66825f..256decef6 100644
--- a/shared/video-sdk/develop/stream-raw-audio-and-video/reference/index.mdx
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/reference/index.mdx
@@ -6,6 +6,8 @@ import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx'
@@ -16,4 +18,5 @@ import Windows from './windows.mdx'
+
diff --git a/shared/video-sdk/develop/stream-raw-audio-and-video/reference/unreal.mdx b/shared/video-sdk/develop/stream-raw-audio-and-video/reference/unreal.mdx
new file mode 100644
index 000000000..802a6b8ad
--- /dev/null
+++ b/shared/video-sdk/develop/stream-raw-audio-and-video/reference/unreal.mdx
@@ -0,0 +1,15 @@
+import * as data from '@site/data/variables';
+
+
+
+### API reference
+
+* IAudioFrameObserver
+
+* IVideoFrameObserver
+
+* registerAudioFrameObserver
+
+* registerVideoFrameObserver
+
+
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-implementation/index.mdx b/shared/video-sdk/get-started/get-started-sdk/project-implementation/index.mdx
index c08dcd230..cf75a4bd3 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-implementation/index.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-implementation/index.mdx
@@ -7,6 +7,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx b/shared/video-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx
new file mode 100644
index 000000000..64de0d8b6
--- /dev/null
+++ b/shared/video-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx
@@ -0,0 +1,789 @@
+
+### Implement the user interface
+
+For a basic , you need the following user widgets:
+
+* Two image widgets: To display local and remote video.
+
+* Two button widgets: To `Join` and `Leave` the channel.
+
+
+* One selector to choose to join as the host or the audience
+
+
+To implement this user interface, you create a `UserWidget` class and add a blueprint based on it. Take the following steps:
+
+1. **Add a user widget class**
+
+ In Unreal Editor:
+
+ 1. Go to **Tools** > **New C++ Class**.
+
+ The **Add C++ Class** window opens.
+ 1. Click **All Classes** and input `UserWidget` in the **Search** field.
+ 1. From the search results, select `UserWidget` and click **Next**, then click **Create Class**.
+
+ A new C++ class is added to your project and the class source code opens in Visual Studio IDE.
+
+2. **Create a blueprint for the user widget class**
+
+ In **Content Browser**:
+ 1. Go to **Add** > **Blueprint Class**.
+
+ The **Pick Parent Class** window opens.
+ 1. Expand the **ALL CLASSES** dropdown and input `MyUserWidget` in the **Search** field.
+ 1. From the search results, select `MyUserWidget` and click **Select**.
+
+ You see a new blueprint in the content folder called `NewBlueprint`.
+
+3. **Add the join and leave buttons**
+
+ In **Content Browser**:
+
+ 1. Navigate to the content folder, then double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+ 1. Add a canvas panel. Drag **Canvas Panel** from the **Panel** section of the **Palette Panel** to the **Graph**.
+
+ A canvas panel appears in **Graph**.
+
+ 1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A button appears on the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `JoinBtn`, then change the following coordinates:
+
+ * **Position X** - 1000
+ * **Position Y** - 960
+ * **Size X** - 130
+ * **Size Y** - 60
+
+ 1. Use the same procedure to create a button called `LeaveBtn` and change the following properties in **Details**:
+
+ * **Position X** - 1150
+ * **Position Y** - 960
+ * **Size X** - 130
+ * **Size Y** - 60
+
+ 1. Add a label to `JoinBtn`:
+
+ 1. Drag **Text** from the **Common** section of **Palette Panel** to the canvas panel.
+ 1. Change the following properties in **Details**:
+
+ * **Position X** - 1030
+ * **Position Y** - 972
+ * **Size X** - -15
+ * **Size Y** - -15
+ * **Text** - `Join`
+
+ 1. Use the same procedure to update `LeaveBtn` with the following properties:
+
+ * **Position X** - 1168
+ * **Position Y** - 972
+ * **Size X** - -15
+ * **Size Y** - -15
+ * **Text** - `Leave`
+
+2. **Add local and remote video views**
+
+ You use **Image Widget** to display the local and remote video streams. To add image widgets in your , take the following steps:
+
+ 1. For local video, drag **Image** from **Palette Panel** > **Common** to the canvas panel.
+
+ 2. In **Details**, rename **Image_0** to **localView**, then change the following coordinates:
+
+ * **Position X** - 442
+ * **Position Y** - 336
+ * **Size X** - 400
+ * **Size Y** - 400
+
+ 3. For remote video, use the same procedure to create an **Image** called **remoteView** with the following coordinates:
+
+ * **Pos X** - 924
+ * **Pos Y** - 336
+ * **Size X** - 400
+ * **Size Y** - 400
+
+
+3. **Enable the user to join a channel as the host or the audience**
+
+ 1. Drag **Check Box** from **Palette Panel** > **Common** to the canvas panel, then change the following properties in **Details**.
+
+ * **Pos X** - 1040
+ * **Pos Y** - 772
+
+ 1. In **Details**, rename **CheckBox_0** to `hostToggle`.
+
+ 1. Use the same procedure to create a **Check Box** called **audienceToggle**, then change the following coordinates in **Details**:
+
+ * **Pos X** - 1040
+ * **Pos Y** - 820
+
+ 1. Add a text widget for `hostToggle`. Drag **Text** from **Palette Panel** to the canvas panel, then change the following properties in **Details**:
+
+ * **Pos X** - 1070
+ * **Pos Y** - 772
+ * **Text** - `Host`
+
+
+ 1. Use the same procedure to add a text widget for `audienceToggle` with the following properties:
+
+ * **Pos X** - 1070
+ * **Pos Y** - 820
+ * **Text** - `Audience`
+
+
+
+
+5. **Build your project**
+
+ Click **Compile**. The question mark on the **Compile** button turns to a green tick. This means you have successfully added new widgets to the blueprint.
+
+1. **Add the widget blueprint to viewport**
+
+ Once you have created and laid out your widget blueprint, in order to display it in your game, call it by using the **Create Widget** and **Add to Viewport** nodes inside **Level Blueprint**. To implement this workflow, take the following steps:
+
+ 1. In the list of world blueprints, click **Open Level Blueprint**.
+
+ The **Event Graph** window opens.
+
+ 1. Right-click in the **Event Graph** window and input `create widget` in the **Search** field. From the search results, choose **Create Widget**.
+
+ A node with title **Construct NONE** appears in **Level Blueprint**.
+
+ 1. Add the **Add to Viewport** node to **Level Blueprint**:
+
+ 1. Drag the right pin of the **Construct NONE** node to the level editor. At the dragged pin end a search box appears. Search for **Add to Viewport** and select it from search result list.
+
+ This puts **Add to Viewport** in the level editor with its left pin connected to the right pin of **Construct NONE**.
+
+ 1. Connect the `Return Value` pin of **Construct NONE** to the `Target` pin of **Add to Viewport**.
+
+ 1. Connect **Event BeginPlay** to the left pin of the **Construct None** node.
+
+ 1. Set the **Class** field of the **Construct NONE** node to `NewBlueprint`.
+
+ 1. Click **Compile** in the **Event Graph** window.
+
+ The yellow question mark turns green.
+
+ 1. Save the level blueprint.
+
+ Your **Event Graph** looks like this:
+
+ ![image](/images/video-sdk/unreal-blueprint.png)
+
+To check that you successfully added `NewBlueprint` to the viewport, in **Content Browser**, navigate to the `Content` folder and double-click `Untitled`. The new game level opens. Click **Play** to see the following UI:
+
+![image](/images/video-sdk/unreal-video-calling-ui.png)
+
+
+![image](/images/video-sdk/unreal-ILS-ui.png)
+
+
+### Handle the system logic
+
+Import the C++ libraries, set up your to run on Android, and request permissions for the camera and microphone.
+
+1. **Reference the user widgets**
+
+
+ 1. In Visual Studio, add the following property specifiers to `UMyUserWidget` class after `GENERATED_BODY()` in `MyUserWidget.h`:
+
+ ```cpp
+ protected:
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UImage* remoteView = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UImage* localView = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* JoinBtn = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* LeaveBtn = nullptr;
+ ```
+
+ 1. To access the user widgets from the blueprint, in `MyUserWidget.h`, include the following headers before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Image.h"
+ #include "Components/Button.h"
+ ```
+
+
+
+
+ 1. In Visual Studio, add the following property specifiers to `MyUserWidget.h` after `GENERATED_BODY()`:
+
+ ```cpp
+ protected:
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UImage* remoteView = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UImage* localView = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* JoinBtn = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* LeaveBtn = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UCheckBox* hostToggle = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UCheckBox* audienceToggle = nullptr;
+ ```
+
+ 1. To access the user widgets from the blueprint, in `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Image.h"
+ #include "Components/Button.h"
+ #include "Components/CheckBox.h"
+ ```
+
+
+
+
+
+1. **Set up event listeners for the buttons**
+
+ In `MyUserWidget.h`, add the following to `UMyUserWidget` after `UButton* LeaveBtn = nullptr;`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnLeaveButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnJoinButtonClick();
+ ```
+
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnLeaveButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnJoinButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnHostCheckboxChanged(bool bIsChecked);
+ UFUNCTION(BlueprintCallable)
+ void OnAudienceCheckboxChanged(bool bIsChecked);
+ ```
+
+
+3. **Manage Android permissions**
+
+ 1. Add the Unreal Android libraries.
+
+ In `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #if PLATFORM_ANDROID
+ #include "AndroidPermission/Classes/AndroidPermissionFunctionLibrary.h"
+ #endif
+ ```
+
+ 2. Set up a function to check that the permissions are granted.
+
+ In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget` after `GENERATED_BODY()` (`private` access by default):
+
+ ``` cpp
+ void CheckAndroidPermission();
+ ```
+
+ 3. Add the logic of requesting permissions to `CheckAndroidPermission`.
+
+ In `MyUserWidget.cpp`, add the following method after `#include "MyUserWidget.h"`:
+
+ ```cpp
+ void UMyUserWidget::CheckAndroidPermission()
+ {
+ #if PLATFORM_ANDROID
+ FString pathfromName = UGameplayStatics::GetPlatformName();
+ if (pathfromName == "Android")
+ {
+ TArray<FString> AndroidPermission;
+ AndroidPermission.Add(FString("android.permission.CAMERA"));
+ AndroidPermission.Add(FString("android.permission.RECORD_AUDIO"));
+ AndroidPermission.Add(FString("android.permission.READ_PHONE_STATE"));
+ AndroidPermission.Add(FString("android.permission.WRITE_EXTERNAL_STORAGE"));
+ UAndroidPermissionFunctionLibrary::AcquirePermissions(AndroidPermission);
+ }
+ #endif
+ }
+ ```
+
+### Implement the channel logic
+
+The following figure shows the API call sequence of implementing .
+
+
+ ![image](/images/video-sdk/video-call-logic-unreal.svg)
+
+
+
+ ![image](/images/video-sdk/ils-call-logic-unreal.svg)
+
+
+To implement this logic, take the following steps:
+
+1. **Import the library**
+
+ 1. In `MyUserWidget.h`, add the following header file before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "AgoraPluginInterface.h"
+ ```
+
+ 1. Import the namespaces.
+
+ Add the following before `UCLASS()`:
+
+ ```cpp
+ using namespace agora::rtc;
+ using namespace agora;
+ ```
+
+1. **Import dependency module**
+
+ In `AgoraImplementation.Build.cs`, replace the following line:
+
+ ```cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore"});
+ ```
+
+ With:
+
+ ```cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "AgoraPlugin"});
+ ```
+
+1. **Declare the variables that you use to create and join a channel**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+
+ ```cpp
+ protected:
+ IRtcEngine* agoraEngine;
+ std::string appId = "";
+ std::string channelName = "";
+ std::string token = "";
+ bool isJoin = false;
+ int remoteUId;
+ ```
+
+
+
+ ``` cpp
+ IRtcEngine* agoraEngine;
+ std::string appId = "";
+ std::string channelName = "";
+ std::string token = "";
+ bool isJoin = false;
+ int remoteUId;
+ std::string userRole;
+ ```
+
+
+1. **Set up **
+
+ To set up an instance, take the following steps:
+
+ 1. Declare the function that creates an instance.
+
+ In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget` after `void CheckAndroidPermission();`:
+
+ ```cpp
+ void setupVideoSDKEngine();
+ ```
+
+ 2. Set up the instance.
+
+ Add the following method to `MyUserWidget.cpp`:
+
+
+ ``` cpp
+ void UMyUserWidget::setupVideoSDKEngine()
+ {
+ // Create an engine instance.
+ agoraEngine = agora::rtc::ue::createAgoraRtcEngine();
+ // Specify a context for the engine.
+ RtcEngineContext context;
+ context.appId = appId.c_str();
+ context.eventHandler = this;
+ // Choose the communication profile for video calling.
+ context.channelProfile = CHANNEL_PROFILE_TYPE::CHANNEL_PROFILE_COMMUNICATION;
+ // Initialize the engine instance with the context.
+ agoraEngine->initialize(context);
+ // Enable the local audio capture to init the local audio stream.
+ agoraEngine->enableAudio();
+ // Enable the local video capture to init the local video stream.
+ agoraEngine->enableVideo();
+ // Attach event listener functions to the button.
+ LeaveBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnLeaveButtonClick);
+ JoinBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnJoinButtonClick);
+ }
+ ```
+
+
+ ``` cpp
+ void UMyUserWidget::setupVideoSDKEngine()
+ {
+ // Create an engine instance.
+ agoraEngine = agora::rtc::ue::createAgoraRtcEngine();
+ // Specify a context for the engine.
+ RtcEngineContext context;
+ context.appId = appId.c_str();
+ context.eventHandler = this;
+ // Select the live broadcasting profile for interactive live streaming.
+ context.channelProfile = CHANNEL_PROFILE_TYPE::CHANNEL_PROFILE_LIVE_BROADCASTING;
+ // Initialize the engine instance with the context.
+ agoraEngine->initialize(context);
+ // Enable the local audio capture to init the local audio stream.
+ agoraEngine->enableAudio();
+ // Enable the local video capture to init the local video stream.
+ agoraEngine->enableVideo();
+ LeaveBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnLeaveButtonClick);
+ JoinBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnJoinButtonClick);
+ }
+ ```
+
+
+1. **Handle and respond to events**
+
+ To register the callbacks, inherit the `UMyUserWidget` with the `IRtcEngineEventHandler` class, then implement the callbacks.
+
+ 1. In `MyUserWidget.h`, modify the `UMyUserWidget` class declaration `class AGORAIMPLEMENTATION_API UMyUserWidget : public UUserWidget` as follows:
+
+ ```cpp
+ class AGORAIMPLEMENTATION_API UMyUserWidget : public UUserWidget, public IRtcEngineEventHandler
+ ```
+
+ 1. Override the callback definitions.
+
+ In `MyUserWidget.h`, add the following callbacks to `UMyUserWidget` after `void setupVideoSDKEngine();`:
+
+ ```cpp
+ // Occurs when a remote user joins the channel.
+ void onUserJoined(uid_t uid, int elapsed) override;
+ // Occurs when a local user joins the channel.
+ void onJoinChannelSuccess(const char* channel, uid_t uid, int elapsed) override;
+ // Occurs when you leave the channel.
+ void onLeaveChannel(const RtcStats& stats) override;
+ // Occurs when the remote user drops offline.
+ void onUserOffline(uid_t uid, USER_OFFLINE_REASON_TYPE reason) override;
+ ```
+ 1. Add your logic to the callbacks you declared in `UMyUserWidget`.
+
+ In `MyUserWidget.cpp`, add the following before function definition of `setupVideoSDKEngine()`:
+
+ ```cpp
+ void UMyUserWidget::onLeaveChannel(const RtcStats& stats)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onLeaveChannel"));
+ // Clean up the local view when the local user leaves the channel.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = nullptr;
+ videoCanvas.uid = 0;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_CAMERA;
+ if (agoraEngine != nullptr)
+ {
+ agoraEngine->setupLocalVideo(videoCanvas);
+ }
+ isJoin = false;
+ });
+ }
+
+ void UMyUserWidget::onJoinChannelSuccess(const char* channel, uid_t uid, int elapsed)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("JoinChannelSuccess uid: %u"), uid);
+ });
+ }
+ ```
+
+1. **Join a channel to start **
+
+ When the user clicks **Join**, the `OnJoinButtonClick()` method is called. This method securely connects the local user to a channel using the authentication token. In `MyUserWidget.cpp`, add the following before the function definition of `setupVideoSDKEngine()`:
+
+ ``` cpp
+ void UMyUserWidget::OnJoinButtonClick()
+ {
+ // Setup a video canvas to render the local video.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = localView;
+ videoCanvas.uid = 0;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_CAMERA;
+ agoraEngine->setupLocalVideo(videoCanvas);
+ // Set the user role to Host.
+ agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER);
+ // Join the channel.
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", 0);
+ isJoin = true;
+ }
+ ```
+
+
+ ``` cpp
+ void UMyUserWidget::OnJoinButtonClick()
+ {
+ if(userRole == "")
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Select a role to join the channel"));
+ return;
+ }
+ // Setup a video canvas to render the local video.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = localView;
+ videoCanvas.uid = 0;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_CAMERA;
+ agoraEngine->setupLocalVideo(videoCanvas);
+ // Set the user role to Host.
+ agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER);
+ // Join the channel.
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", 0);
+ isJoin = true;
+ }
+ ```
+
+
+1. **View the remote user who joins the channel**
+
+ When a remote user joins the channel, fires the `OnUserJoined` callback event. To catch this callback and start remote video, in `MyUserWidget.cpp`, add the following code after the function definition of `OnJoinChannelSuccess()`:
+
+ ``` cpp
+ void UMyUserWidget::onUserJoined(uid_t uid, int elapsed)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onUserJoined uid: %u"), uid);
+ // Setup a canvas to render the remote video.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = remoteView;
+ videoCanvas.uid = uid;
+ remoteUId = uid;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_REMOTE;
+ RtcConnection connection;
+ connection.channelId = channelName.c_str();
+ // Render the remote video.
+ ((IRtcEngineEx*)agoraEngine)->setupRemoteVideoEx(videoCanvas, connection);
+ });
+ }
+ ```
+
+
+
+ ``` cpp
+ void UMyUserWidget::onUserJoined(uid_t uid, int elapsed)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onUserJoined uid: %u"), uid);
+ // Setup a canvas to render the remote video.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = remoteView;
+ videoCanvas.uid = uid;
+ remoteUId = uid;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_REMOTE;
+ RtcConnection connection;
+ connection.channelId = channelName.c_str();
+ // Render the remote video.
+ ((IRtcEngineEx*)agoraEngine)->setupRemoteVideoEx(videoCanvas, connection);
+ });
+ }
+ ```
+
+
+
+8. **Manage the user role**
+
+ 1. Change the `localView` and `remoteView` visibility.
+ When the user selects or deselects the host or the audience check box, the `OnCheckStateChanged` event is fired. The catches the event and invokes `OnHostCheckboxChanged` or `OnAudienceCheckboxChanged` to change the `localView` and `remoteView` visibility.
+
+ To implement this workflow, take the following steps:
+
+ 1. Setup callback functions for the host and audience check boxes.
+
+ In `MyUserWidget.cpp`, add the following code at the end of `setupVideoSDKEngine`:
+
+ ```cpp
+ // When the host check box state changes, you call OnHostCheckboxChanged.
+ hostToggle->OnCheckStateChanged.AddDynamic(this, &UMyUserWidget::OnHostCheckboxChanged);
+ // When the audience check box state changes, you call OnAudienceCheckboxChanged.
+ audienceToggle->OnCheckStateChanged.AddDynamic(this, &UMyUserWidget::OnAudienceCheckboxChanged);
+ ```
+
+ 2. Add the `remoteView` and `localView` visibility logic to `OnAudienceCheckboxChanged` and `OnHostCheckboxChanged`.
+
+ In `MyUserWidget.cpp`, add the following before `setupVideoSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnHostCheckboxChanged(bool bIsChecked)
+ {
+ if (bIsChecked)
+ {
+ if (isJoin == true && remoteUId != NULL)
+ {
+ agoraEngine->muteRemoteVideoStream(remoteUId, true);
+ }
+ userRole = "Host";
+ agoraEngine->enableLocalVideo(true);
+ agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER);
+ audienceToggle->SetCheckedState(ECheckBoxState::Unchecked);
+ UE_LOG(LogTemp, Warning, TEXT("User role changed to host"));
+ }
+ }
+ void UMyUserWidget::OnAudienceCheckboxChanged(bool bIsChecked)
+ {
+ if (bIsChecked)
+ {
+ if (isJoin == true && remoteUId != NULL)
+ {
+ agoraEngine->muteRemoteVideoStream(remoteUId, false);
+ }
+ userRole = "Audience";
+ agoraEngine->enableLocalVideo(false);
+ agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_AUDIENCE);
+ hostToggle->SetCheckedState(ECheckBoxState::Unchecked);
+ UE_LOG(LogTemp, Warning, TEXT("User role changed to Audience"));
+ }
+ }
+ ```
+
+8. **Stop the remote video when a remote user leaves the channel**
+
+ When a remote user leaves the channel or drops offline, fires the `OnUserOffline` callback event. To catch this callback and stop remote video, in `MyUserWidget.cpp`, add the following callback method after the function definition of `OnJoinChannelSuccess()`:
+
+ ``` cpp
+ // This callback is triggered when a remote user leaves the channel or drops offline.
+ void UMyUserWidget::onUserOffline(uid_t uid, USER_OFFLINE_REASON_TYPE reason)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onUserOffline uid: %u"), uid);
+ // Clean up the remote video view.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = nullptr;
+ videoCanvas.uid = uid;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_REMOTE;
+ RtcConnection connection;
+ connection.channelId = channelName.c_str();
+ if (agoraEngine != nullptr)
+ {
+ ((IRtcEngineEx*)agoraEngine)->setupRemoteVideoEx(videoCanvas, connection);
+ }
+ });
+ }
+ ```
+
+
+
+ ``` cpp
+ // This callback is triggered when a remote user leaves the channel or drops offline.
+ void UMyUserWidget::onUserOffline(uid_t uid, USER_OFFLINE_REASON_TYPE reason)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onUserOffline uid: %u"), uid);
+ // Clean up the remote video view.
+ VideoCanvas videoCanvas;
+ videoCanvas.view = nullptr;
+ videoCanvas.uid = uid;
+ videoCanvas.sourceType = VIDEO_SOURCE_TYPE::VIDEO_SOURCE_REMOTE;
+ RtcConnection connection;
+ connection.channelId = channelName.c_str();
+ if (agoraEngine != nullptr)
+ {
+ ((IRtcEngineEx*)agoraEngine)->setupRemoteVideoEx(videoCanvas, connection);
+ }
+ if(userRole == "Host")
+ {
+ agoraEngine->muteRemoteVideoStream(remoteUId, true);
+ }
+ });
+ }
+ ```
+
+
+1. **Leave the channel when a user ends the call**
+
+ When the user clicks **Leave**, the `OnLeaveButtonClick()` method is called to exit the channel. In `MyUserWidget.cpp`, add the following before the function definition of `onUserJoined()`:
+ ``` cpp
+ void UMyUserWidget::OnLeaveButtonClick()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget:: OnLeaveButtonClick ======"));
+ agoraEngine->leaveChannel();
+ remoteUId = NULL;
+ isJoin = false;
+ }
+ ```
+
+### Start and stop your
+
+In this implementation, you initiate and remove when the app opens and closes. The local user joins and leaves a channel using the same instance. In order to send video and audio streams to , you need to ensure that the local user gives permission to access the camera and microphone on the local device. To implement this functionality:
+
+1. **Check that the has the correct permissions to start**
+
+ For Android, call `CheckAndroidPermission` to check whether the required permissions are granted. To execute this check at startup, call `CheckAndroidPermission` from the `NativeConstruct()` override of `UMyUserWidget`. To implement this workflow, take the following steps:
+
+ 1. Override `NativeConstruct()` in `UMyUserWidget`.
+
+ In `MyUserWidget.h`, add the following before `void setupVideoSDKEngine();`:
+
+ ```cpp
+ void NativeConstruct();
+ ```
+
+ 2. Call `CheckAndroidPermission` in `NativeConstruct()`.
+
+ In `MyUserWidget.cpp`, add the following code before the function definition of `setupVideoSDKEngine()`:
+
+ ```cpp
+ void UMyUserWidget::NativeConstruct()
+ {
+ Super::NativeConstruct();
+ CheckAndroidPermission();
+ }
+ ```
+
+ 3. After checking the permissions, call `setupVideoSDKEngine()` to create an engine instance.
+
+ In `MyUserWidget.cpp`, add the following at the end of `NativeConstruct()`:
+
+ ```cpp
+ setupVideoSDKEngine();
+ ```
+
+3. **Clean up the resources used by your **
+
+ When a user closes the , use `NativeDestruct()` to clean up the resources created in `setupVideoSDKEngine()`. To implement this workflow, take the following steps:
+
+ 1. Override `NativeDestruct()` in the `UMyUserWidget` class.
+
+ In `MyUserWidget.h`, add the following before `void NativeConstruct();`:
+
+ ```cpp
+ void NativeDestruct();
+ ```
+
+ 2. Add the resource cleanup logic to `NativeDestruct()`.
+
+ In `MyUserWidget.cpp`, add the following before `NativeConstruct()`:
+
+ ```cpp
+ void UMyUserWidget::NativeDestruct()
+ {
+ Super::NativeDestruct();
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::NativeDestruct"));
+ if (agoraEngine != nullptr)
+ {
+ agoraEngine->unregisterEventHandler(this);
+ agoraEngine->release();
+ delete agoraEngine;
+ agoraEngine = nullptr;
+ }
+ }
+ ```
+
+
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-setup/index.mdx b/shared/video-sdk/get-started/get-started-sdk/project-setup/index.mdx
index 9488597b3..01386318b 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-setup/index.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-setup/index.mdx
@@ -7,6 +7,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-setup/unreal.mdx b/shared/video-sdk/get-started/get-started-sdk/project-setup/unreal.mdx
new file mode 100644
index 000000000..9795bd53e
--- /dev/null
+++ b/shared/video-sdk/get-started/get-started-sdk/project-setup/unreal.mdx
@@ -0,0 +1,29 @@
+
+1. **Create a new project**
+
+ 1. In Unreal Editor, select the **Games** new project category.
+
+ 1. From **Unreal Project Browser**, select a **Blank template** and choose the following settings from **Project Defaults**:
+
+ 1. Select **C++**.
+
+ 1. From the **Target Platform** dropdown, select **Desktop**.
+
+ 1. Clear **Starter Content**.
+
+ 1. In the **Project Name** field, input `AgoraImplementation` and click **Create**.
+
+ Your project opens in Unreal Editor.
+
+1. **Integrate **
+
+ 1. Create a folder called `Plugins` in the root directory of your project folder.
+
+ 1. Unzip the latest version of the [](/sdks) to `/Plugins`.
+
+ 1. In Solution Explorer, right-click your project, then click **Properties**. The **AgoraImplementation Property Pages** window opens. Go to the **VC++ Directories** menu and add the following string in the **External Include Directories** field, then click **OK**:
+
+ ```
+ $(SolutionDir)Plugins\AgoraPlugin\Source\AgoraPlugin\Public;
+ ```
+
\ No newline at end of file
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-test/index.mdx b/shared/video-sdk/get-started/get-started-sdk/project-test/index.mdx
index a722df263..14fc39f72 100644
--- a/shared/video-sdk/get-started/get-started-sdk/project-test/index.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/project-test/index.mdx
@@ -7,6 +7,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
+
@@ -17,3 +19,4 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/video-sdk/get-started/get-started-sdk/project-test/unreal.mdx b/shared/video-sdk/get-started/get-started-sdk/project-test/unreal.mdx
new file mode 100644
index 000000000..69b1a1e0c
--- /dev/null
+++ b/shared/video-sdk/get-started/get-started-sdk/project-test/unreal.mdx
@@ -0,0 +1,20 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, you need to grant microphone and camera access to your .
+
+
+
+5. Select a role using the check boxes.
+
+6. Click **Join** to start a call. Now, you can see yourself on the device screen and talk to the remote user using your .
+
+
+
+5. Click **Join** to start a call. Now, you can see yourself on the device screen and talk to the remote user using your .
+
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/get-started/get-started-sdk/reference/index.mdx b/shared/video-sdk/get-started/get-started-sdk/reference/index.mdx
index feb51890c..012fe944b 100644
--- a/shared/video-sdk/get-started/get-started-sdk/reference/index.mdx
+++ b/shared/video-sdk/get-started/get-started-sdk/reference/index.mdx
@@ -7,6 +7,7 @@ import Unity from './unity.mdx';
import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
@@ -17,3 +18,4 @@ import Windows from './windows.mdx';
+
diff --git a/shared/video-sdk/get-started/get-started-sdk/reference/unreal.mdx b/shared/video-sdk/get-started/get-started-sdk/reference/unreal.mdx
new file mode 100644
index 000000000..05e838dcb
--- /dev/null
+++ b/shared/video-sdk/get-started/get-started-sdk/reference/unreal.mdx
@@ -0,0 +1,11 @@
+
+
+### API reference
+
+* createAgoraRtcEngine
+
+* joinChannel
+
+* leaveChannel
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/reference/_release-notes.mdx b/shared/video-sdk/reference/_release-notes.mdx
index f3d0e0472..955756b30 100644
--- a/shared/video-sdk/reference/_release-notes.mdx
+++ b/shared/video-sdk/reference/_release-notes.mdx
@@ -9,6 +9,7 @@ import ReactNative from '@docs/shared/video-sdk/reference/release-notes/react-na
import Electron from '@docs/shared/video-sdk/reference/release-notes/electron.mdx';
import Macos from '@docs/shared/video-sdk/reference/release-notes/macos.mdx';
import Windows from '@docs/shared/video-sdk/reference/release-notes/windows.mdx';
+import Unreal from '@docs/shared/video-sdk/reference/release-notes/unreal.mdx';
import NCS from '@docs/shared/video-sdk/reference/release-notes/ncs-release-notes.mdx';
import AINS from '@docs/shared/extensions-marketplace/reference/_ains.mdx';
import VB from '@docs/shared/extensions-marketplace/reference/_vb.mdx';
@@ -22,6 +23,7 @@ This page provides the release notes for .
## Video SDK
+
diff --git a/shared/video-sdk/reference/app-size-optimization/manual-install/index.mdx b/shared/video-sdk/reference/app-size-optimization/manual-install/index.mdx
index be501f963..b2c8f5695 100644
--- a/shared/video-sdk/reference/app-size-optimization/manual-install/index.mdx
+++ b/shared/video-sdk/reference/app-size-optimization/manual-install/index.mdx
@@ -7,6 +7,7 @@ import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
@@ -16,4 +17,5 @@ import Windows from './windows.mdx';
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/video-sdk/reference/app-size-optimization/manual-install/unreal.mdx b/shared/video-sdk/reference/app-size-optimization/manual-install/unreal.mdx
new file mode 100644
index 000000000..41bfc4761
--- /dev/null
+++ b/shared/video-sdk/reference/app-size-optimization/manual-install/unreal.mdx
@@ -0,0 +1,17 @@
+
+
+To manually install :
+
+1. Create a folder called `Plugins` in the root directory of your project folder.
+
+1. Unzip the latest version of the [](/sdks) to `/Plugins`.
+
+1. In Solution Explorer, right-click your project, then click **Properties**. The **`` Property Pages** window opens. Go to the **VC++ Directories** menu and add the following string in the **External Include Directories** field, then click **OK**:
+
+ ```
+ $(SolutionDir)Plugins\AgoraPlugin\Source\AgoraPlugin\Public;
+ ```
+
+
+
+
diff --git a/shared/video-sdk/reference/release-notes/unreal.mdx b/shared/video-sdk/reference/release-notes/unreal.mdx
new file mode 100644
index 000000000..fc3d11f18
--- /dev/null
+++ b/shared/video-sdk/reference/release-notes/unreal.mdx
@@ -0,0 +1,183 @@
+
+
+## v4.1.0
+
+v4.1.0 was released on January 17, 2023.
+
+
+This release is the first version of RTC Unreal SDK, including the following features:
+
+**1. Multiple media tracks**
+
+This release supports one `IRtcEngine` instance collecting multiple audio and video sources at the same time and publishing them to the remote users by setting `RtcEngineEx` and `ChannelMediaOptions`.
+
+- After calling `joinChannel` to join the first channel, call `joinChannelEx` multiple times to join multiple channels, and publish the specified stream to different channels through different user ID (`localUid`) and `ChannelMediaOptions` settings.
+- You can simultaneously publish multiple sets of video streams captured by multiple cameras or screen sharing by setting `publishSecondaryCameraTrack` and `publishSecondaryScreenTrack` in `ChannelMediaOptions`.
+
+This release adds `createCustomVideoTrack` method to implement video custom capture. You can refer to the following steps to publish multiple custom captured video in the channel:
+
+1. Create a custom video track: Call this method to create a video track, and get the video track ID.
+2. Set the custom video track to be published in the channel: In each channel's `ChannelMediaOptions`, set the `customVideoTrackId` parameter to the ID of the video track you want to publish, and set `publishCustomVideoTrack` to `true`.
+3. Pushing an external video source: Call `pushVideoFrame`, and specify `videoTrackId` as the ID of the custom video track in step 2 in order to publish the corresponding custom video source in multiple channels.
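+
+A minimal sketch of these three steps, assuming an initialized `agoraEngine` (`IRtcEngine*`), a `mediaEngine` (`IMediaEngine*`) obtained through `queryInterface`, and `token`/`channelId` strings that you provide:
+
+```cpp
+// 1. Create a custom video track and keep its ID.
+video_track_id_t customTrackId = agoraEngine->createCustomVideoTrack();
+
+// 2. Publish that track in the channel you join.
+ChannelMediaOptions options;
+options.customVideoTrackId = customTrackId;
+options.publishCustomVideoTrack = true;
+agoraEngine->joinChannel(token, channelId, 0, options);
+
+// 3. Push externally captured frames to the track.
+agora::media::base::ExternalVideoFrame frame;
+// ... fill in the frame buffer, format, stride, height, and timestamp ...
+mediaEngine->pushVideoFrame(&frame, customTrackId);
+```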
+
+You can also experience the following features with the multi-channel capability:
+
+- Publish multiple sets of audio and video streams to the remote users through different user IDs (`uid`).
+- Mix multiple audio streams and publish to the remote users through a user ID (`uid`).
+- Combine multiple video streams and publish them to the remote users through a user ID (`uid`).
+
+**2. Ultra HD resolution (Beta)**
+
+In order to improve the interactive video experience, the SDK optimizes the whole process of video capture, encoding, decoding and rendering, and now supports 4K resolution. The improved FEC (Forward Error Correction) algorithm enables adaptive switches according to the frame rate and number of video frame packets, which further reduces the video stuttering rate in 4K scenes.
+
+Additionally, you can set the encoding resolution to 4K (3840 × 2160) and the frame rate to 60 fps when calling `SetVideoEncoderConfiguration`. The SDK supports automatic fallback to the appropriate resolution and frame rate if your device does not support 4K.
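+
+For example, a minimal sketch (assuming an initialized `agoraEngine` instance):
+
+```cpp
+// Request 4K at 60 fps; the SDK falls back automatically if the device cannot support it.
+VideoEncoderConfiguration config;
+config.dimensions = VideoDimensions(3840, 2160);
+config.frameRate = FRAME_RATE_FPS_60;
+agoraEngine->setVideoEncoderConfiguration(config);
+```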
+
+> **Note**: This feature has certain requirements with regards to device performance and network bandwidth, and the supported upstream and downstream frame rates vary on different platforms. To experience this feature, contact technical support.
+
+**3. Built-in media player**
+
+To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the `createMediaPlayer` method to create a media player object, you can then call the methods in the `IMediaPlayer` class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
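+
+A minimal sketch of creating a player and opening a media file, assuming an initialized `agoraEngine` instance; the URL is a placeholder and the exact observer callback you wait on depends on your integration:
+
+```cpp
+// Create a media player and open an online media file.
+agora_refptr<IMediaPlayer> mediaPlayer = agoraEngine->createMediaPlayer();
+mediaPlayer->open("https://example.com/sample.mp4", 0);
+// Call play() once the player reports that the media source is opened,
+// for example from the onPlayerSourceStateChanged callback.
+mediaPlayer->play();
+```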
+
+
+**4. Screen sharing**
+
+This release optimizes the screen sharing function. You can enable this function in the following ways.
+
+- Call the `StartScreenCaptureByDisplayId` method before joining a channel, and then call `JoinChannel` [2/2] to join a channel and set `publishScreenTrack` or `publishSecondaryScreenTrack` as true.
+- Call the `StartScreenCaptureByDisplayId` method after joining a channel, and then call `UpdateChannelMediaOptions` to set `publishScreenTrack` or `publishSecondaryScreenTrack` as true.
+
+
+**5. Spatial audio**
+
+ > **Note**: This feature is in experimental status. To enable this feature, contact sales@agora.io. Contact technical support if needed.
+
+You can set the spatial audio for the remote user as follows:
+
+- Local Cartesian Coordinate System Calculation: This solution uses the `ILocalSpatialAudioEngine` class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call `updateSelfPosition` and `updateRemotePosition` to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.
+ ![img](https://web-cdn.agora.io/docs-files/1656645542473)
+
+You can also set the spatial audio for the media player as follows:
+
+- Local Cartesian Coordinate System Calculation: This solution uses the `ILocalSpatialAudioEngine` class to implement spatial audio. You need to call `updateSelfPosition` and `updatePlayerPositionInfo` to update the spatial coordinates of the local user and media player, respectively, so that the local user can hear the spatial audio effect of media player.
+ ![img](https://web-cdn.agora.io/docs-files/1656646829637)
+
+This release also adds the following features applicable to spatial audio effect scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.
+
+- Sound insulation area: You can set a sound insulation area and sound attenuation parameter by calling `setZones`. When the sound source (which can be a user or the media player) and the listener are on opposite sides of the sound insulation area, the listener experiences an attenuation effect similar to that of sound encountering a building partition in the real environment. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling `setPlayerAttenuation` and `setRemoteAudioAttenuation`, and specify whether to use that setting to force an override of the sound attenuation parameter in `setZones`.
+- Doppler sound: You can enable Doppler sound by setting the `enable_doppler` parameter in `SpatialAudioParams`, and the receiver experiences noticeable tonal changes in the event of a high-speed relative displacement between the sound source and receiver (such as in a racing game scenario).
+- Headphone equalizer: You can use a preset headphone equalization effect by calling the `setHeadphoneEQPreset` method to improve the listening experience with headphones.
+
+**6. Media Stream Encryption**
+
+This release adds support for media stream encryption, which encrypts your app’s audio and video streams with a unique key and salt controlled by the app developer. While not every use case requires media encryption, Agora provides the option to guarantee data confidentiality during transmission.
+
+
+ **7. Media push**
+
+This release adds support for sending the audio and video of your channel to other RTMP servers through the CDN:
+- Starts or stops publishing at any time.
+- Adds or removes an address while continuously publishing the stream.
+- Adjusts the picture-in-picture layout.
+- To send a live stream to WeChat or Weibo.
+- To allow more people to watch the live stream when the number of audience members in the channel reaches the limit.
+
+
+**8. Brand-new AI noise reduction**
+
+The SDK supports a new version of AI noise reduction (in comparison to the basic AI noise reduction in v3.7.0). The new AI noise reduction has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option.
+> **Note**: To experience this feature, contact sales@agora.io.
+
+
+**9. Real-time chorus**
+
+This release gives real-time chorus the following abilities:
+
+- Two or more choruses are supported.
+- Each singer is independent of each other. If one singer fails or quits the chorus, the other singers can continue to sing.
+- Very low latency experience. Each singer can hear each other in real time, and the audience can also hear each singer in real time.
+
+This release adds the `AUDIO_SCENARIO_CHORUS` enumeration in `AUDIO_SCENARIO_TYPE`. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.
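+
+For example, a minimal sketch (assuming an initialized `agoraEngine` instance):
+
+```cpp
+// Use the chorus audio scenario for ultra-low-latency singing scenarios.
+agoraEngine->setAudioScenario(AUDIO_SCENARIO_TYPE::AUDIO_SCENARIO_CHORUS);
+```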
+
+**10. Extensions from the Agora extensions marketplace**
+
+In order to enhance the real-time audio and video interactive activities based on the Agora SDK, this release supports the one-stop solution for the extensions from the [Agora extensions marketplace](https://www.agora.io/en/marketplace):
+
+- Easy to integrate: The integration of modular functions can be achieved simply by calling an API, and the integration efficiency is improved by nearly 95%.
+- Extensibility design: The modular and extensible SDK design style endows the Agora SDK with good extensibility, which enables developers to quickly build real-time interactive apps based on the Agora extensions marketplace ecosystem.
+- Build an ecosystem: A community of real-time audio and video apps has developed that can accommodate a wide range of developers, offering a variety of extension combinations. After integrating the extensions, developers can build richer real-time interactive functions.
+- Become a vendor: Vendors can integrate their products with Agora SDK in the form of extensions, display and publish them in the Agora extensions marketplace, and build a real-time interactive ecosystem for developers together with Agora. See the extensions marketplace documentation for details on how to develop and publish extensions.
+
+**11. Enhanced channel management**
+
+To meet the channel management requirements of various business scenarios, this release adds the following functions to the `ChannelMediaOptions` structure:
+
+- Sets or switches the publishing of multiple audio and video sources.
+- Sets or switches channel profile and user role.
+- Sets or switches the stream type of the subscribed video.
+- Controls audio publishing delay.
+
+Set `ChannelMediaOptions` when calling `joinChannel` or `joinChannelEx` to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call `updateChannelMediaOptions` to update the settings in `ChannelMediaOptions` at any time, for example, to switch the published audio and video sources.
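+
+For example, a minimal sketch of joining with explicit publish and subscribe options, assuming an initialized `agoraEngine` instance and `token`/`channelId` strings that you provide:
+
+```cpp
+ChannelMediaOptions options;
+options.channelProfile = CHANNEL_PROFILE_TYPE::CHANNEL_PROFILE_LIVE_BROADCASTING;
+options.clientRoleType = CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER;
+options.publishCameraTrack = true;       // Publish the camera video track.
+options.publishMicrophoneTrack = true;   // Publish the microphone audio track.
+options.autoSubscribeAudio = true;
+options.autoSubscribeVideo = true;
+agoraEngine->joinChannel(token, channelId, 0, options);
+// Later, switch the published sources or the role at any time:
+// agoraEngine->updateChannelMediaOptions(options);
+```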
+
+
+**12. Subscription allowlists and blocklists**
+
+This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add a user ID that you want to subscribe to in your allowlist, or add a user ID for the streams you do not wish to receive to your blocklist. You can experience this feature through the following APIs; in scenarios that involve multiple channels, you can call the corresponding methods in the `IRtcEngineEx` interface:
+
+- `SetSubscribeAudioBlacklist`: Set the audio subscription blocklist.
+- `SetSubscribeAudioWhitelist`: Set the audio subscription allowlist.
+- `SetSubscribeVideoBlacklist`: Set the video subscription blocklist.
+- `SetSubscribeVideoWhitelist`: Set the video subscription allowlist.
+
+If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.
+
+
+**13. Replace video feeds with images**
+
+This release supports replacing video feeds with images when publishing video streams. You can call the `enableVideoImageSource` method to enable this function and choose your own images through the `options` parameter. If you disable this function, the remote users see the video feeds that you publish.
+
+**14. Local video mixing**
+
+This release adds a series of APIs supporting local video mixing functions. You can mix multiple video streams into one video stream locally. Common scenarios are as follows:
+
+- In interactive live streaming scenarios with cohosts or when using the Media Push function, you can merge the screens of multiple hosts into one view locally.
+- In scenarios where you capture multiple local video streams (for example, video captured by cameras, screen sharing streams, video files or pictures), you can merge them into one video stream and then publish the mixed video stream in the channel.
+
+You can call the `startLocalVideoTranscoder` method to start local video mixing and call the `stopLocalVideoTranscoder` method to stop local video mixing. After the local video mixing starts, you can call `updateLocalTranscoderConfiguration` to update the local video mixing configuration.
+
+**15. Video device management**
+
+Video capture devices can support multiple video formats, each supporting a different combination of video frame width, video frame height, and frame rate.
+
+This release adds the `numberOfCapabilities` and `getCapability` methods for getting the number of video formats supported by the video capture device and the details of the video frames in the specified video format. When calling the `startPrimaryCameraCapture` or `startSecondaryCameraCapture` method to capture video using the camera, you can use the specified video format.
+
+> **Note**: The SDK automatically selects the best video format for the video capture device based on your settings in `VideoEncoderConfiguration`, so normally you should not need to use these new methods.
+
+**16. In-ear monitoring**
+
+
+This release adds support for in-ear monitoring. You can call `enableInEarMonitoring` to enable the in-ear monitoring function.
+
+After successfully enabling the in-ear monitoring function, you can call `registerAudioFrameObserver` to register the audio observer, and the SDK triggers the `onEarMonitoringAudioFrame` callback to report the audio frame data. You can use your own audio effect processing module to pre-process the audio frame data of the in-ear monitoring to implement custom audio effects. Agora recommends that you choose one of the following two methods to set the audio data format of the in-ear monitoring:
+
+- Call the `setEarMonitoringAudioFrameParameters` method to set the audio data format of in-ear monitoring. The SDK calculates the sampling interval based on the parameters in this method, and triggers the `onEarMonitoringAudioFrame` callback based on the sampling interval.
+- Set the audio data format in the return value of the `getEarMonitoringAudioParams` callback. The SDK calculates the sampling interval based on the return value of the callback, and triggers the `onEarMonitoringAudioFrame` callback based on the sampling interval.
+
+> **Note**: To adjust the in-ear monitoring volume, you can call `setInEarMonitoringVolume`.
+
+
+**17. Audio stream filter**
+
+This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and transports 3 audio streams with the highest volumes to the receivers by default. The number of audio streams to be transported can be adjusted; you can contact support@agora.io to adjust this number according to your scenarios.
+
+Meanwhile, Agora allows publishers to choose whether or not the audio streams being published are filtered based on volume. Streams that are not filtered bypass this filter mechanism and are transported directly to the receivers. In scenarios with a large number of publishers, enabling this function helps reduce the bandwidth and device system pressure for the receivers.
+
+
+> **Note**: To enable this function, contact technical support.
+
+**18. MPUDP (MultiPath UDP) (Beta)**
+
+As of this release, the SDK supports MPUDP protocol, which enables you to connect and use multiple paths to maximize the use of channel resources based on the UDP protocol. You can use different physical NICs on both mobile and desktop and aggregate them to effectively combat network jitter and improve transmission quality.
+
+> **Note**: To enable this function, contact sales@agora.io.
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/authentication-workflow/project-implementation/index.mdx b/shared/voice-sdk/authentication-workflow/project-implementation/index.mdx
index 828ea258b..246fd278f 100644
--- a/shared/voice-sdk/authentication-workflow/project-implementation/index.mdx
+++ b/shared/voice-sdk/authentication-workflow/project-implementation/index.mdx
@@ -1,9 +1,11 @@
import Android from './android.mdx';
import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
-import ReactNative from './react-native.mdx';
-import Electron from './electron.mdx';
-import Web from './web.mdx';
+import ReactNative from './react-native.mdx'
+import Electron from './electron.mdx'
+import Web from './web.mdx'
+import Unreal from './unreal.mdx'
+
@@ -12,3 +14,4 @@ import Web from './web.mdx';
+
\ No newline at end of file
diff --git a/shared/voice-sdk/authentication-workflow/project-implementation/unreal.mdx b/shared/voice-sdk/authentication-workflow/project-implementation/unreal.mdx
new file mode 100644
index 000000000..1a81dfb28
--- /dev/null
+++ b/shared/voice-sdk/authentication-workflow/project-implementation/unreal.mdx
@@ -0,0 +1,173 @@
+
+
+1. **Add the necessary dependencies**
+
+ In order to make HTTPS calls to a token server and interpret the JSON return parameters, integrate the `HTTP` module into your Unreal project. In `AgoraImplementation.Build.cs`, update `PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore"});` with the following line:
+
+ ``` cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "Json", "AgoraPlugin", "HTTP" });
+ ```
+
+3. **Enable the user to specify a channel name**
+
+ To get the channel name from the user, you need a text box in the UI. To add a text box, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following property specifiers after `UButton* LeaveBtn = nullptr;`:
+
+ ``` cpp
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UEditableTextBox* channelTextBox = nullptr;
+ ```
+
+ 2. In Unreal Editor, go to **Content Browser** and open `NewBlueprint`, then do the following:
+
+ 1. Drag **Text Box** from the **Input** section of the **Palette** to the canvas panel. A text box appears on the canvas.
+
+ 1. In **Details**, rename **EditableTextBox_0** to `channelTextBox`, then change the following properties:
+
+ * **Position X** - 799
+ * **Position Y** - 192
+ * **Size X** - 257
+ * **Size Y** - 43
+
+ Click **Compile** to save and compile the newly added widget.
+
+4. **Add the required header files**
+
+ In `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ``` cpp
+ #include "Components/EditableTextBox.h"
+ #include "Http.h"
+ #include <string>
+ using namespace std;
+ ```
+
+5. **Add variables for your connection to the token server**
+
+ Declare the variables you need to specify the local user uid, token server URL, and the token expire time. In `MyUserWidget.h`, add the following declarations to the `UMyUserWidget` class:
+
+ ``` cpp
+ std::string serverUrl = ""; // The base URL to your token server, for example, "https://agora-token-service-production-92ff.up.railway.app".
+ int tokenExpireTime = 40; // Token expiry time in seconds.
+ int localUid = 1;
+ ```
+
+ Make sure you specify the token server URL in exactly the same format as shown in the example.
+
+7. **Retrieve a token from the server**
+
+ Use a GET request to retrieve an authentication token for a specific channel from the token server, then decode the return parameters. To implement this workflow, do the following:
+
+
+ 1. Set up a method to fetch a token. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void fetchToken();
+ ```
+
+ 2. Add the logic of fetching a token from the server to the `fetchToken` method. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::fetchToken()
+ {
+ // Setup a Http get request to fetch a token from the token server.
+ FHttpRequestRef Request = FHttpModule::Get().CreateRequest();
+ Request->OnProcessRequestComplete().BindUObject(this, &UMyUserWidget::OnResponseReceived);
+ std::string serverUrlString = serverUrl + "/rtc/";
+ serverUrlString += channelName;
+ // Concatenate token type, uid, and expire time with the server URL string.
+ serverUrlString += "/1/uid/";
+ serverUrlString += to_string(localUid) + "/?expiry=";
+ serverUrlString += to_string(tokenExpireTime);
+ // Convert the server URL string to an FString.
+ FString Furl(serverUrlString.c_str());
+ UE_LOG(LogTemp, Warning, TEXT("%s"), *Furl);
+ // Set the request URL.
+ Request->SetURL(Furl);
+ // Set the request type.
+ Request->SetVerb("GET");
+ // Process the request to retrieve a token.
+ Request->ProcessRequest();
+ }
+ ```
+
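+ As a concrete illustration of what `fetchToken` produces, the comment block below shows the request it assembles and the response the token server is expected to return. The channel name `demo_channel` and the token value are made-up examples; only the `rtcToken` field name is relied on by the parsing code later on this page.
+
+ ```cpp
+ // Request assembled by fetchToken, given the declarations above and channelName = "demo_channel":
+ //   GET https://agora-token-service-production-92ff.up.railway.app/rtc/demo_channel/1/uid/1/?expiry=40
+ //
+ // Expected response body; OnResponseReceived (implemented below) reads the "rtcToken" field:
+ //   {"rtcToken": "007eJx...illustrative-value..."}
+ ```
+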
+8. **Update the `joinChannel` method to fetch a token**
+
+ To retrieve a fresh token from the token server, call `fetchToken` before you join a channel. In `MyUserWidget.cpp`, locate `OnJoinButtonClick` and add the following code after `agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER);`:
+
+ ``` cpp
+ FString channelname = channelTextBox->GetText().ToString();
+ if (channelname.IsEmpty() || serverUrl == "")
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Please make sure you entered a valid server URL and channel name"));
+ }
+ else
+ {
+ channelName = std::string(TCHAR_TO_UTF8(*channelname));
+ fetchToken();
+ }
+ ```
+ When your receives a response from the server, it calls `OnResponseReceived`. You use this function to parse the token and join the channel. To implement this, do the following:
+
+ 1. Set up `OnResponseReceived` in your . In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void OnResponseReceived(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bConnectedSuccessfully);
+ ```
+ 2. Parse the token from the request response. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine` (a guard for failed requests is sketched after this list):
+
+ ```cpp
+ void UMyUserWidget::OnResponseReceived(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bConnectedSuccessfully)
+ {
+ // Parse the response to retrieve the token.
+ TSharedPtr<FJsonObject> ResponseObj;
+ TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(Response->GetContentAsString());
+ FJsonSerializer::Deserialize(Reader, ResponseObj);
+ UE_LOG(LogTemp, Display, TEXT("Response %s"), *Response->GetContentAsString());
+ UE_LOG(LogTemp, Display, TEXT("rtcToken: %s"), *ResponseObj->GetStringField("rtcToken"));
+ }
+ ```
+
+ 3. Shift the `joinChannel` API call to the `OnResponseReceived` method. In `MyUserWidget.cpp`, locate `OnJoinButtonClick` and remove the following lines:
+
+ ```cpp
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", 0);
+ isJoin = true;
+ ```
+
+ 3. In `MyUserWidget.cpp`, locate `OnResponseReceived` and add the following after `FJsonSerializer::Deserialize(Reader, ResponseObj);`:
+
+ ```cpp
+ token = std::string(TCHAR_TO_UTF8(*ResponseObj->GetStringField("rtcToken")));
+ if (isJoin == true)
+ {
+ agoraEngine->renewToken(token.c_str());
+ }
+ else
+ {
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", localUid);
+ isJoin = true;
+ }
+ ```
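+
+ The `OnResponseReceived` implementation above assumes the HTTP request succeeded. A minimal guard, assuming you simply want to log and return on failure, could be placed at the top of the method:
+
+ ```cpp
+ // Skip parsing if the HTTP request failed or returned no response.
+ if (!bConnectedSuccessfully || !Response.IsValid())
+ {
+     UE_LOG(LogTemp, Error, TEXT("Token request failed; check the server URL and your network connection"));
+     return;
+ }
+ ```
+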
+9. **Handle the event triggered by when the token is about to expire**
+
+ A token expires after the `expireTime` specified in the call to the token server or expires after 24 hours, if the time is not specified. The `onTokenPrivilegeWillExpire` event receives a callback when the current token is about to expire so that a fresh token may be retrieved and used. To implement `onTokenPrivilegeWillExpire` in your , take the following steps:
+
+ 1. Set up the `onTokenPrivilegeWillExpire` callback. In `MyUserWidget.h`, add the following method to `UMyUserWidget`:
+
+ ```cpp
+ void onTokenPrivilegeWillExpire(const char* expiredToken);
+ ```
+
+ 2. Call `fetchToken` when the triggers `onTokenPrivilegeWillExpire`. In `MyUserWidget.cpp`, add the following before `OnJoinButtonClick`:
+
+ ```cpp
+ void UMyUserWidget::onTokenPrivilegeWillExpire(const char* expiredToken)
+ {
+ UE_LOG(LogTemp, Display, TEXT("Token expired: Retrieving a token from the server...."));
+ fetchToken();
+ }
+ ```
+
+
diff --git a/shared/voice-sdk/authentication-workflow/project-test/unreal.mdx b/shared/voice-sdk/authentication-workflow/project-test/unreal.mdx
new file mode 100644
index 000000000..a4b3646a1
--- /dev/null
+++ b/shared/voice-sdk/authentication-workflow/project-test/unreal.mdx
@@ -0,0 +1,17 @@
+
+3. Set the variables in your :
+ 1. Update `appId` in the declarations to the value from .
+
+ 1. Set `token` to an empty string in the declarations.
+
+ 1. Update `serverUrl` in the declarations to the base address of your token server, for example, `https://agora-token-service-production-92ff.up.railway.app`.
+
+
+1. In Unreal Editor, click **Play**. A moment later you see the project running on your development device.
+
+ If this is the first time you run the project, grant microphone access to your app.
+
+1. Enter the same channel name in the UI text box that you used to connect to the web demo.
+
+1. Click **Join** to connect your Unreal to the web demo app.
+
diff --git a/shared/voice-sdk/authentication-workflow/reference/index.mdx b/shared/voice-sdk/authentication-workflow/reference/index.mdx
index 0c153a378..0113a56b7 100644
--- a/shared/voice-sdk/authentication-workflow/reference/index.mdx
+++ b/shared/voice-sdk/authentication-workflow/reference/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx'
import Web from './web.mdx'
import Ios from './ios.mdx';
import Macos from './macos.mdx';
+import Unreal from './unreal.mdx'
@@ -15,3 +16,4 @@ import Macos from './macos.mdx';
+
diff --git a/shared/voice-sdk/authentication-workflow/reference/unreal.mdx b/shared/voice-sdk/authentication-workflow/reference/unreal.mdx
new file mode 100644
index 000000000..0a1f2d73f
--- /dev/null
+++ b/shared/voice-sdk/authentication-workflow/reference/unreal.mdx
@@ -0,0 +1,9 @@
+
+
+### API reference
+
+- renewToken
+
+- onTokenPrivilegeWillExpire
+
+
diff --git a/shared/voice-sdk/develop/_stream-raw-audio.mdx b/shared/voice-sdk/develop/_stream-raw-audio.mdx
index 8fd16e458..ace4017b2 100644
--- a/shared/voice-sdk/develop/_stream-raw-audio.mdx
+++ b/shared/voice-sdk/develop/_stream-raw-audio.mdx
@@ -5,10 +5,7 @@ import ProjectTest from '@docs/shared/voice-sdk/develop/stream-raw-audio/project
import Reference from '@docs/shared/voice-sdk/develop/stream-raw-audio/reference/index.mdx';
In some scenarios, raw audio captured through the microphone must be processed to achieve the desired functionality or to enhance the user experience. enables you to pre and post process the captured audio for implementation of custom playback effects.
-
- This page is coming soon for Windows,
-
-
+
In Web SDK, provides extensions like AI Denoiser for pre-process and post-process phases.
@@ -64,4 +61,3 @@ This section contains information that completes the information in this page, o
-
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx b/shared/voice-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx
new file mode 100644
index 000000000..54dc3f7f3
--- /dev/null
+++ b/shared/voice-sdk/develop/audio-and-voice-effects/project-implementation/unreal.mdx
@@ -0,0 +1,265 @@
+
+
+### Implement the user interface
+
+To enable the user to play sound and voice effects and modify the audio route, add the following elements to the user interface:
+
+ * A button to start and stop audio mixing.
+ * A button to play the sound effect.
+ * A button to apply various voice effects.
+ * Three text widgets for the buttons.
+
+To implement this UI, take the following steps:
+
+1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A button control appears on the canvas panel.
+
+1. In **Details**, rename **Button_0** to `voiceEffectButton`, then change the following coordinates:
+
+ * **Position X** - 650
+ * **Position Y** - 960
+ * **Size X** - 338
+ * **Size Y** - 60
+
+1. Drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `voiceEffectButton`.
+
+ A text widget is added to `voiceEffectButton`.
+
+1. In **Details**, rename the text widget to `voiceEffectBtnText`, then change the **Text** field to `Apply Voice Effect`.
+
+1. Use the same procedure and create two buttons called `playEffectButton` and `audioMixingButton`.
+
+ Do not change the text widget names. The code only accesses the text widget of the voice effect button.
+
+1. Select `playEffectButton` and change the following coordinates in **Details**:
+
+ * **Position X** - 360
+ * **Position Y** - 960
+ * **Size X** - 275
+ * **Size Y** - 60
+
+1. Select `audioMixingButton` and change the following coordinates in **Details**:
+
+ * **Position X** - 1296
+ * **Position Y** - 960
+ * **Size X** - 184
+ * **Size Y** - 60
+
+1. Select the **Text** widget of `playEffectButton`, then in **Details** change the **Text** field to `Play Audio Effect`.
+
+1. Select the **Text** widget of `audioMixingButton`, then in **Details** change the **Text** field to `Mix Audio`.
+
+
+### Handle the system logic
+
+This section describes the steps required to set up access to the UI elements.
+
+1. **Define variables to manage audio effects and access the UI elements**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ int soundEffectId = 1; // Unique identifier for the sound effect file
+ std::string soundEffectFilePath = "https://www.soundjay.com/human/applause-01.mp3"; // URL or path to the sound effect
+ int soundEffectStatus = 0;
+ int voiceEffectIndex = 0;
+ bool audioPlaying = false; // Manage the audio mixing state
+ std::string audioFilePath = "https://www.kozco.com/tech/organfinale.mp3"; // URL or path to the audio mixing file
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* playEffectButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* voiceEffectButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* audioMixingButton = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UTextBlock* voiceEffectBtnText = nullptr;
+ ```
+
+
+1. **Import the required UI library**
+
+ To access the text widget from UI, in `MyUserWidget.h`, add the following library before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/TextBlock.h"
+ ```
+
+1. **Set up event listeners for the UI elements**
+
+ To add event listeners for the UI elements, do the following:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget` after `void setupVoiceSDKEngine();`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnAudioMixingButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnVoiceEffectButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnPlayEffectButtonClick();
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ audioMixingButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnAudioMixingButtonClick);
+ voiceEffectButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnVoiceEffectButtonClick);
+ playEffectButton->OnClicked.AddDynamic(this, &UMyUserWidget::OnPlayEffectButtonClick);
+ ```
+
+### Implement sound and voice effects
+
+To add sound and voice effects to your , take the following steps:
+
+1. **Enable the user to start and stop audio mixing**
+
+ When the user presses **Mix Audio**, the fetches an audio file using `audioFilePath` and mixes it with the local audio stream. The remote and local users hear the audio file playing with the audio stream. To enable audio mixing in your , in `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnAudioMixingButtonClick()
+ {
+ audioPlaying = !audioPlaying;
+ if (audioPlaying)
+ {
+ agoraEngine->startAudioMixing(audioFilePath.c_str(), false, 1, 0);
+ }
+ else
+ {
+ agoraEngine->stopAudioMixing();
+ }
+ }
+ ```
+
+1. **Pre-load sound effects**
+
+ To set up playing voice effects, call `preloadEffect` to pre-load the sound effects. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ // Pre-load sound effects to improve performance
+ agoraEngine->preloadEffect(soundEffectId, soundEffectFilePath.c_str());
+ ```
+
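+ When the pre-loaded effect is no longer needed, you can release it from memory. A minimal sketch, assuming you do this during widget teardown (for example, in `NativeDestruct`):
+
+ ```cpp
+ // Release the pre-loaded sound effect from memory.
+ agoraEngine->unloadEffect(soundEffectId);
+ ```
+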
+1. **Play, pause, or resume playing a sound effect**
+
+ When a user presses **Play Audio Effect**, the sound effect is played. When they press the button again, the effect is paused and resumed alternately. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnPlayEffectButtonClick()
+ {
+ if (soundEffectStatus == 0)
+ {
+ // Play effect
+ agoraEngine->playEffect(
+ soundEffectId, // The ID of the sound effect file.
+ soundEffectFilePath.c_str(), // The path of the sound effect file.
+ 0, // The number of sound effect loops. -1 means an infinite loop. 0 means once.
+ 1, // The pitch of the audio effect. 1 represents the original pitch.
+ 0.0, // The spatial position of the audio effect. 0.0 represents that the audio effect plays in the front.
+ 100, // The volume of the audio effect. 100 represents the original volume.
+ true,// Whether to publish the audio effect to remote users.
+ 0 // The playback starting position of the audio effect file in ms.
+ );
+ soundEffectStatus = 1;
+ }
+ else if (soundEffectStatus == 1)
+ {
+ // Pause effect
+ agoraEngine->pauseEffect(soundEffectId);
+ soundEffectStatus = 2;
+ }
+ else if (soundEffectStatus == 2)
+ {
+ // Resume effect
+ agoraEngine->resumeEffect(soundEffectId);
+ soundEffectStatus = 1;
+ }
+ }
+ ```
+
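+ The `playEffect` call above uses a fixed volume of 100. If you also want to change effect volume at run time, the SDK provides separate effect volume setters; a minimal sketch, where the value 80 is only an illustration:
+
+ ```cpp
+ // Set the volume of all sound effects (range 0 to 100).
+ agoraEngine->setEffectsVolume(80);
+ // Set the volume of a single sound effect (range 0 to 100).
+ agoraEngine->setVolumeOfEffect(soundEffectId, 80);
+ ```
+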
+1. **Inform the user when the effect finishes playing**
+
+ When has finished playing the sound effect, the `onAudioEffectFinished` event is fired. Your listens for this callback, stops the audio effect, and resets its play state. To implement this logic, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget` after `void setupVoiceSDKEngine();`:
+
+ ```cpp
+ void onAudioEffectFinished(int soundId) override;
+ ```
+
+ 1. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::onAudioEffectFinished(int soundId)
+ {
+ agoraEngine->stopEffect(soundId);
+ soundEffectStatus = 0; // Stopped
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::onAudioEffectFinished: Sound effect %u has finished playing"), soundId);
+ }
+ ```
+
+1. **Set an audio profile**
+
+ For applications with special audio performance requirements, set a suitable audio profile and audio scenario. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ // Specify the audio scenario and audio profile
+ agoraEngine->setAudioProfile(AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO);
+ agoraEngine->setAudioScenario(AUDIO_SCENARIO_GAME_STREAMING);
+ ```
+
+1. **Apply various voice and audio effects**
+
+ When a user presses **Apply Voice Effect**, apply a new voice effect and change the text on the button to describe the effect. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnVoiceEffectButtonClick()
+ {
+ voiceEffectIndex++;
+ // Turn off all previous effects
+ agoraEngine->setVoiceBeautifierPreset(VOICE_BEAUTIFIER_OFF);
+ agoraEngine->setAudioEffectPreset(AUDIO_EFFECT_OFF);
+ agoraEngine->setVoiceConversionPreset(VOICE_CONVERSION_OFF);
+ if (voiceEffectIndex == 1)
+ {
+ agoraEngine->setVoiceBeautifierPreset(CHAT_BEAUTIFIER_MAGNETIC);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Chat Beautifier"));
+ }
+ else if (voiceEffectIndex == 2)
+ {
+ agoraEngine->setVoiceBeautifierPreset(SINGING_BEAUTIFIER);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Singing Beautifier"));
+ }
+ else if (voiceEffectIndex == 3)
+ {
+ agoraEngine->setAudioEffectPreset(VOICE_CHANGER_EFFECT_HULK);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Hulk"));
+ }
+ else if (voiceEffectIndex == 4)
+ {
+ agoraEngine->setVoiceConversionPreset(VOICE_CHANGER_BASS);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Voice Changer"));
+ }
+ else if (voiceEffectIndex == 5)
+ {
+ // Sets the local voice equalization.
+ // The first parameter sets the band frequency. The value ranges between 0 and 9.
+ // Each value represents the center frequency of the band:
+ // 31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, and 16k Hz.
+ // The second parameter sets the gain of each band between -15 and 15 dB.
+ // The default value is 0.
+ agoraEngine->setLocalVoiceEqualization(AUDIO_EQUALIZATION_BAND_FREQUENCY::AUDIO_EQUALIZATION_BAND_4K, 3);
+ agoraEngine->setLocalVoicePitch(0.5);
+ voiceEffectBtnText->SetText(FText::FromString("Effect: Voice Equalization"));
+ }
+ else if (voiceEffectIndex > 5)
+ {
+ // Remove all effects
+ voiceEffectIndex = 0;
+ agoraEngine->setLocalVoicePitch(1.0);
+ agoraEngine->setLocalVoiceEqualization(AUDIO_EQUALIZATION_BAND_FREQUENCY::AUDIO_EQUALIZATION_BAND_4K, 0);
+ voiceEffectBtnText->SetText(FText::FromString("Apply voice effect"));
+ }
+ }
+ ```
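+
+ Some audio effect presets accept extra parameters through `setAudioEffectParameters`, which is listed on the reference page for this document. As a hedged sketch, applying pitch correction with an illustrative tonal mode and tonic pitch could look like this:
+
+ ```cpp
+ // Apply the pitch correction preset, then tune it:
+ // the second argument is the tonal mode (1-3), the third is the tonic pitch (1-12).
+ agoraEngine->setAudioEffectPreset(PITCH_CORRECTION);
+ agoraEngine->setAudioEffectParameters(PITCH_CORRECTION, 1, 2);
+ ```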
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/audio-and-voice-effects/reference/unreal.mdx b/shared/voice-sdk/develop/audio-and-voice-effects/reference/unreal.mdx
new file mode 100644
index 000000000..bf74f2e8e
--- /dev/null
+++ b/shared/voice-sdk/develop/audio-and-voice-effects/reference/unreal.mdx
@@ -0,0 +1,23 @@
+
+
+### API reference
+
+* setLocalVoiceEqualization
+
+* setAudioEffectPreset
+
+* setAudioEffectParameters
+
+* setVoiceBeautifierPreset
+
+* setVoiceConversionPreset
+
+* setVoiceBeautifierParameters
+
+* setLocalVoiceReverb
+
+* setLocalVoiceReverbPreset
+
+* setLocalVoicePitch
+
+
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/index.mdx b/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
index 11576ca11..f39242f80 100644
--- a/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
+++ b/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/index.mdx
@@ -6,6 +6,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx'
+import Unreal from './unreal.mdx'
+
@@ -15,3 +17,4 @@ import MacOS from './macos.mdx'
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx b/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx
new file mode 100644
index 000000000..7d1d31cdf
--- /dev/null
+++ b/shared/voice-sdk/develop/ensure-channel-quality/project-implementation/unreal.mdx
@@ -0,0 +1,265 @@
+
+
+### Implement the user interface
+
+To implement call quality features in your , you need the following elements in the user interface:
+
+* A button widget to start and stop the echo test.
+
+* A text widget to display last-mile network quality.
+
+To add these elements to the UI, take the following steps:
+
+1. In **Content Browser**, navigate to the `Content` folder and double-click `NewBlueprint`. The blueprint opens in the editor.
+
+1. Drag **Text** from the **Common** section of the **Palette** to the canvas panel. A text widget appears on the canvas.
+
+1. In **Details**, rename it to `networkStatus`, then change the following properties:
+
+ * **Position X** - 956
+ * **Position Y** - 280
+ * **Size X** - 315
+ * **Size Y** - 40
+
+1. Drag **Button** from the **Common** section of the **Palette** to the canvas panel. A button appears on the canvas.
+
+1. In **Details**, rename **Button_0** to `echoTest`, then change the following properties:
+
+ * **Position X** - 801
+ * **Position Y** - 960
+ * **Size X** - 177
+ * **Size Y** - 60
+
+1. Add a text widget for the echo test button. Drag **Text** from the **Common** section of the **Palette** to **Hierarchy** and drop it over `echoTest`.
+
+
+### Handle the system logic
+
+1. **Declare the variable**
+
+ To manage the test workflow , in `MyUserWidget.h`, add the following declaration to `UMyUserWidget`:
+
+ ``` cpp
+ bool isEchoTestRunning = false; // Keeps track of the echo test
+ ```
+
+1. **Reference the UI elements**
+
+ 1. In `MyUserWidget.h`, add the following property specifiers to `UMyUserWidget`:
+
+ ```cpp
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* echoTest = nullptr;
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UTextBlock* networkStatus = nullptr;
+ ```
+
+ 2. To set up access to the text widget, include the text widget header file in the sample code. In `MyUserWidget.h`, add the following include before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/TextBlock.h"
+ ```
+
+1. **Set up an event listener for the echo test button**
+
+ 1. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnEchoTestButtonClick();
+ ```
+
+ 2. Attach the event listener method to the button. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ``` cpp
+ echoTest->OnClicked.AddDynamic(this, &UMyUserWidget::OnEchoTestButtonClick);
+ ```
+
+1. **Update the network status indication**
+
+ To show the network quality result visually to the user, add the following to `MyUserWidget.cpp` before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::updateNetworkStatus(int quality)
+ {
+ if (quality > 0 && quality < 3)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Green);
+ networkStatus->SetText(FText::FromString("Network Quality : Excellent"));
+ });
+ }
+ else if (quality <= 4)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Yellow);
+ networkStatus->SetText(FText::FromString("Network Quality : Good"));
+ });
+ }
+ else if (quality <= 6)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::Red);
+ networkStatus->SetText(FText::FromString("Network Quality : Poor"));
+ });
+ }
+ else
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ networkStatus->SetColorAndOpacity(FLinearColor::White);
+ networkStatus->SetText(FText::FromString("Network Quality : Bad"));
+ });
+ }
+ }
+ ```
+
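+ The definition above needs a matching declaration. If `updateNetworkStatus` is not already declared in your header, a minimal sketch of the declaration to add to `UMyUserWidget` in `MyUserWidget.h` is:
+
+ ```cpp
+ // Updates the network status text widget with the reported quality value.
+ void updateNetworkStatus(int quality);
+ ```
+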
+### Implement features to ensure quality
+
+To implement the call quality features, take the following steps:
+
+1. **Enable the user to test the network**
+
+ When the launches, you call `startLastmileProbeTest` with a `LastmileProbeConfig` object to check the last-mile uplink and downlink quality. To implement this workflow, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ void startProbeTest();
+ ```
+
+ 1. Add the network test logic to `startProbeTest`. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::startProbeTest()
+ {
+ // Configure a LastmileProbeConfig instance.
+ LastmileProbeConfig config;
+ // Probe the uplink network quality.
+ config.probeUplink = true;
+ // Probe the downlink network quality.
+ config.probeDownlink = true;
+ // The expected uplink bitrate (bps). The value range is [100000,5000000].
+ config.expectedUplinkBitrate = 100000;
+ // The expected downlink bitrate (bps). The value range is [100000,5000000].
+ config.expectedDownlinkBitrate = 100000;
+ // Start probe test.
+ agoraEngine->startLastmileProbeTest(config);
+ }
+ ```
+
+1. **Implement best practice for app initiation**
+
+ When a user starts your , the is initialized in `setupVoiceSDKEngine`. After initialization, do the following:
+
+ * _Set an audio profile and audio scenario_: Setting an audio profile is optional and only required if you have special requirements such as streaming music.
+ * _Start the network probe test_: A quick test at startup to gauge network quality.
+
+ To implement these features, in `MyUserWidget.cpp`, add the following code to `setupVoiceSDKEngine` after `agoraEngine->initialize(context);`:
+
+ ```cpp
+ // Set audio profile and audio scenario.
+ agoraEngine->setAudioProfile(AUDIO_PROFILE_DEFAULT, AUDIO_SCENARIO_GAME_STREAMING);
+ // Start the probe test
+ startProbeTest();
+ ```
+
+3. **Test the user's hardware**
+
+ The echo test checks that the user's hardware is working properly. To implement the echo test logic, in `MyUserWidget.cpp`, add the following method before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnEchoTestButtonClick()
+ {
+ if (isJoin)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Leave the channel first to start echo test!"));
+ return;
+ }
+ if (!isEchoTestRunning)
+ {
+ EchoTestConfiguration echoConfig;
+ echoConfig.enableVideo = true;
+ echoConfig.token = token.c_str();
+ echoConfig.channelId = channelName.c_str();
+ agoraEngine->startEchoTest(echoConfig);
+ isEchoTestRunning = true;
+ UE_LOG(LogTemp, Warning, TEXT("Echo test started"));
+ }
+ else
+ {
+ agoraEngine->stopEchoTest();
+ isEchoTestRunning = false;
+ UE_LOG(LogTemp, Warning, TEXT("Echo test stopped!"));
+ }
+ }
+ ```
+
+4. **Listen to events to receive state change notifications and quality statistics**
+
+ Add the following event handlers to receive state change notifications and quality statistics:
+
+ * `onLastmileQuality`: Receives the network quality result.
+ * `onLastmileProbeResult`: Receives detailed probe test results.
+ * `onNetworkQuality`: Receives statistics on network quality.
+ * `onRtcStats`: Receives the stats.
+ * `onRemoteAudioStateChanged`: Reports changes in the audio state of remote users.
+
+ To implement these callbacks, take the following steps:
+
+ 1. In `MyUserWidget.h`, add the following callbacks after ` void onUserJoined(uid_t uid, int elapsed) override;`:
+
+ ```cpp
+ void onLastmileQuality(int quality) override;
+ void onLastmileProbeResult(const LastmileProbeResult& result) override;
+ void onNetworkQuality(uid_t uid, int txQuality, int rxQuality) override;
+ void onRtcStats(const RtcStats& stats) override;
+ void onRemoteAudioStateChanged(uid_t uid, REMOTE_AUDIO_STATE state, REMOTE_AUDIO_STATE_REASON reason, int elapsed) override;
+ ```
+ 1. Provide definitions for the callbacks you declared in `UMyUserWidget`. In `MyUserWidget.cpp`, add the following before `updateNetworkStatus`:
+
+ ```cpp
+ void UMyUserWidget::onLastmileQuality(int quality)
+ {
+ updateNetworkStatus(quality);
+ }
+ void UMyUserWidget::onLastmileProbeResult(const LastmileProbeResult& result)
+ {
+ agoraEngine->stopLastmileProbeTest();
+ // The result object contains the detailed test results that help you
+ // manage call quality, for example, the downlink jitter.
+ UE_LOG(LogTemp, Warning, TEXT("Downlink jitter: %u") , result.downlinkReport.jitter);
+ }
+ void UMyUserWidget::onNetworkQuality(uid_t uid, int txQuality, int rxQuality)
+ {
+ updateNetworkStatus(txQuality);
+ }
+ void UMyUserWidget::onRtcStats(const RtcStats& stats)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("User(s): %d"), stats.userCount);
+ UE_LOG(LogTemp, Warning, TEXT("Packet loss rate: %d"), stats.rxPacketLossRate);
+ }
+ void UMyUserWidget::onRemoteAudioStateChanged(uid_t uid, REMOTE_AUDIO_STATE state, REMOTE_AUDIO_STATE_REASON reason, int elapsed)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("Remote user %u audio state changed"), uid);
+ UE_LOG(LogTemp, Warning, TEXT("Current state : %d"), state);
+ }
+ ```
+ Each event reports the statistics of the audio stream from each remote user and host.
+
+6. **Configure the log file**
+
+ To customize the location, content and size of log files, in `MyUserWidget.cpp`, add the following code at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ context.logConfig.filePath = R"(C:\Users\\AppData\Local\Agora\AgoraImplementation\agorasdk.log)";
+ context.logConfig.fileSizeInKB = 256;
+ context.logConfig.level = agora::commons::LOG_LEVEL::LOG_LEVEL_WARN;
+ ```
+
+ Make sure you replace the `` in `filePath` with the user name of your development device.
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/project-test/index.mdx b/shared/voice-sdk/develop/ensure-channel-quality/project-test/index.mdx
index 11576ca11..26402622a 100644
--- a/shared/voice-sdk/develop/ensure-channel-quality/project-test/index.mdx
+++ b/shared/voice-sdk/develop/ensure-channel-quality/project-test/index.mdx
@@ -6,6 +6,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx'
+import Unreal from './unreal.mdx'
+
@@ -15,3 +17,4 @@ import MacOS from './macos.mdx'
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/project-test/unreal.mdx b/shared/voice-sdk/develop/ensure-channel-quality/project-test/unreal.mdx
new file mode 100644
index 000000000..f370ab31f
--- /dev/null
+++ b/shared/voice-sdk/develop/ensure-channel-quality/project-test/unreal.mdx
@@ -0,0 +1,37 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, grant microphone and camera access to your .
+
+1. When the starts, it does the following:
+
+ * Sets the audio profile
+ * Sets the log file configuration
+ * Starts the network probe test
+
+ You see the result of the network probe test displayed in the network status indicator.
+
+1. Run the echo test.
+
+ 1. Press **Echo Test**.
+
+ 1. Speak into the device microphone. You hear the recorded audio after a short delay.
+
+ This test confirms that the user's hardware is working properly.
+
+ 1. Press **Echo Test** again to end the test before joining a channel.
+
+1. Press **Join** to connect to the same channel as your web demo.
+
+1. After joining a channel, you see messages in **Output Log** informing you of some selected call statistics, including:
+
+ * The number of users in the channel
+ * Packet loss rate
+ * Remote audio state changes
+
+9. You see the network status indicator updated periodically based on the result of the `onNetworkQuality` callback.
+
+
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/reference/index.mdx b/shared/voice-sdk/develop/ensure-channel-quality/reference/index.mdx
index 11576ca11..f39242f80 100644
--- a/shared/voice-sdk/develop/ensure-channel-quality/reference/index.mdx
+++ b/shared/voice-sdk/develop/ensure-channel-quality/reference/index.mdx
@@ -6,6 +6,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx'
+import Unreal from './unreal.mdx'
+
@@ -15,3 +17,4 @@ import MacOS from './macos.mdx'
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/ensure-channel-quality/reference/unreal.mdx b/shared/voice-sdk/develop/ensure-channel-quality/reference/unreal.mdx
new file mode 100644
index 000000000..64dd42af1
--- /dev/null
+++ b/shared/voice-sdk/develop/ensure-channel-quality/reference/unreal.mdx
@@ -0,0 +1,27 @@
+
+
+### API reference
+
+- setAudioProfile
+
+- LastmileProbeConfig
+
+- startLastmileProbeTest
+
+- onLocalAudioStats
+
+- onRemoteAudioStats
+
+- onRtcStats
+
+- onNetworkQuality
+
+- onRemoteAudioStateChanged
+
+- startEchoTest
+
+- stopEchoTest
+
+- setLocalAccessPoint
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/android.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/android.mdx
index 5d9788600..ca94c3d3b 100644
--- a/shared/voice-sdk/develop/product-workflow/project-implementation/android.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/android.mdx
@@ -2,7 +2,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want the user to control, you typically add a volume control UI element such as a SeekBar to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `CheckBox` to the interface for each user. In this example, you add a `SeekBar` and a `CheckBox` to the UI to test different volume settings.
+In a real-world application, for each volume setting you want the user to control, you typically add a UI element such as a SeekBar to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `CheckBox` to the interface for each user. In this example, you add a `SeekBar` and a `CheckBox` to the UI to test different volume settings.
To add the UI elements, in `/app/res/layout/activity_main.xml`, add the following code before ``:
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/flutter.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/flutter.mdx
index 8e8e8eb16..73c4ad84f 100644
--- a/shared/voice-sdk/develop/product-workflow/project-implementation/flutter.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/flutter.mdx
@@ -2,7 +2,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want to control, you typically add a volume control UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `Checkbox` to the interface for each user. In this example, you add a `Slider` and a `Checkbox` to the UI to test different volume settings.
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a `Checkbox` to the interface for each user. In this example, you add a `Slider` and a `Checkbox` to the UI to test different volume settings.
To add the UI elements, in `/lib/main.dart`, add the following code to the `build` method after `ListView(... children: [`:
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/index.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/index.mdx
index 918fbe34a..72e75b81f 100644
--- a/shared/voice-sdk/develop/product-workflow/project-implementation/index.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/index.mdx
@@ -7,6 +7,8 @@ import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx';
import Windows from './windows.mdx';
import Unity from './unity.mdx';
+import Unreal from './unreal.mdx';
+
@@ -16,4 +18,5 @@ import Unity from './unity.mdx';
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/swift.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/swift.mdx
index f162969d9..e92fe0d4c 100644
--- a/shared/voice-sdk/develop/product-workflow/project-implementation/swift.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/swift.mdx
@@ -4,7 +4,7 @@ import AdjustMuteVolume from '@docs/assets/code/voice-sdk/product-workflow/swift
### Implement the user interface
-In a real-world application, for each volume setting you want the user to control, you typically add a volume control UI element such as a UISlider to the audio configuration panel. To enable the user to mute local or remote audio, you add a UISwitch to the interface for each user. In this example, you add a `UISlider` and a `UISwitch` to test different volume settings.
+In a real-world application, for each volume setting you want the user to control, you typically add a UI element such as a UISlider to the audio configuration panel. To enable the user to mute local or remote audio, you add a UISwitch to the interface for each user. In this example, you add a `UISlider` and a `UISwitch` to test different volume settings.
To create this user interface, in the `ViewController` class:
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/unity.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/unity.mdx
index 868180a92..7d04d5628 100644
--- a/shared/voice-sdk/develop/product-workflow/project-implementation/unity.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/unity.mdx
@@ -2,7 +2,7 @@
### Implement the user interface
-In a real-world application, for each volume setting you want the user to control, you typically add a volume control UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a `Toggle` to the interface for each user. In this example, you add a `Toggle` and a `Slider` to the UI to test different volume settings.
+In a real-world application, for each volume setting you want the user to control, you typically add a UI element such as a Slider to the audio configuration panel. To enable the user to mute local or remote audio, you add a `Toggle` to the interface for each user. In this example, you add a `Toggle` and a `Slider` to the UI to test different volume settings.
1. Right-click **Sample Scene**, then click **Game Object** > **UI** > **Slider**. A Slider appears in the **Scene** Canvas.
diff --git a/shared/voice-sdk/develop/product-workflow/project-implementation/unreal.mdx b/shared/voice-sdk/develop/product-workflow/project-implementation/unreal.mdx
new file mode 100644
index 000000000..14ccee4c3
--- /dev/null
+++ b/shared/voice-sdk/develop/product-workflow/project-implementation/unreal.mdx
@@ -0,0 +1,118 @@
+
+
+### Implement the user interface
+
+In a real-world application, for each volume setting you want to control, you typically add a UI element such as a Slider Control to the audio configuration panel. To enable the user to mute local or remote audio, you add a switch or a Check Box to the interface for each user. In this example, you add a `Slider Control` and a `Check Box` to the UI to test different volume settings. To implement this UI, take the following steps:
+
+1. **Add a slider control**
+
+ To add a slider control to the UI, take the following steps:
+
+ 1. In **Content Browser**, navigate to the `Content` folder, then double-click `NewBlueprint`.
+
+ The blueprint opens in the editor.
+
+ 1. Drag **Slider** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ A slider control appears on the canvas panel.
+
+ 1. In **Details**, rename **Slider_0** to `volumeSlider`, then change the following properties:
+
+ * **Position X** - 664
+ * **Position Y** - 816
+ * **Size X** - 250
+ * **Size Y** - 50
+
+3. **Add a Check Box**
+
+ 1. Drag **Check Box** from the **Common** section of the **Palette Panel** to the canvas panel.
+
+ 1. In **Details**, rename **CheckBox_0** to `muteCheckBox`, then change the following properties in **Details**.
+
+ * **Pos X** - 1040
+ * **Pos Y** - 868
+
+ 1. Add a text widget for `muteCheckBox`. Drag **Text** from **Palette Panel** to **Hierarchy** and drop it over `muteCheckBox`. Then, in **Details**, change the **Text** field to `Mute`.
+
+### Handle the system logic
+
+### Implement volume control logic
+
+To implement volume control in your , take the following steps:
+
+1. **Declare the variables you need**
+
+ To access and use the UI elements and apply workflow settings, in `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ // A variable to access Mute check box.
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ UCheckBox* muteCheckBox = nullptr;
+ // A variable to access the volume slider.
+ UPROPERTY(BlueprintReadWrite, meta = (BindWidget))
+ USlider* volumeSlider;
+ // A Boolean variable to track remote user mute and un-mute state.
+ bool isChecked = false;
+ ```
+
+1. **Add the required header files**
+
+ To setup access to the UI elements, in `MyUserWidget.h`, add the following header files before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Slider.h"
+ #include "Components/CheckBox.h"
+ ```
+
+
+1. **Setup event listeners for the UI elements**
+
+ To setup event listeners for mute check box and the volume slider, do the following:
+
+ 1. In `MyUserWidget.h`, add the following to `UMyUserWidget`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnMuteCheckboxChanged(bool bIsChecked);
+ UFUNCTION(BlueprintCallable)
+ void OnSliderValueChanged(float volume);
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ```cpp
+ muteCheckBox->OnCheckStateChanged.AddDynamic(this, &UMyUserWidget::OnMuteCheckboxChanged);
+ volumeSlider->OnValueChanged.AddDynamic(this, &UMyUserWidget::OnSliderValueChanged);
+ volumeSlider->SetMaxValue(100);
+ volumeSlider->SetMinValue(0);
+ ```
+
+1. **Adjust or mute the volume**
+
+ To adjust the recording signal volume, you set up an event listener for the slider control and call `adjustRecordingSignalVolume` whenever the user drags the slider left or right. To implement this workflow, in `MyUserWidget.cpp`, add the following method before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnSliderValueChanged(float volume)
+ {
+ // Use the slider value to adjust the recording volume.
+ agoraEngine->adjustRecordingSignalVolume(volume);
+ }
+ ```
+
+1. **Mute and un-mute the remote user**
+
+ To mute and un-mute the remote user, you set up an event listener for the mute check box and call `muteRemoteAudioStream` with `remoteUId` when the user selects or deselects the check box. To implement this workflow, in `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::OnMuteCheckboxChanged(bool isMute)
+ {
+ // Mute and un-mute the remote user.
+ if (remoteUId == NULL)
+ {
+ UE_LOG(LogTemp, Warning, TEXT("No remote user in the channel"));
+ return;
+ }
+ agoraEngine->muteRemoteAudioStream(remoteUId, isMute);
+ }
+ ```
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/product-workflow/project-test/index.mdx b/shared/voice-sdk/develop/product-workflow/project-test/index.mdx
index f448e1f83..e3112cd5d 100644
--- a/shared/voice-sdk/develop/product-workflow/project-test/index.mdx
+++ b/shared/voice-sdk/develop/product-workflow/project-test/index.mdx
@@ -6,6 +6,8 @@ import Flutter from './flutter.mdx';
import MacOS from './macos.mdx';
import Unity from './unity.mdx';
import ReactNative from './react-native.mdx';
+import Unreal from './unreal.mdx';
+
import Windows from './windows.mdx';
@@ -16,4 +18,5 @@ import Windows from './windows.mdx';
+
diff --git a/shared/voice-sdk/develop/product-workflow/project-test/unreal.mdx b/shared/voice-sdk/develop/product-workflow/project-test/unreal.mdx
new file mode 100644
index 000000000..d0dbd6015
--- /dev/null
+++ b/shared/voice-sdk/develop/product-workflow/project-test/unreal.mdx
@@ -0,0 +1,38 @@
+
+
+1. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+1. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run your app, grant camera and microphone permissions.
+
+1. Press **Join** to connect to the same channel as your web demo.
+
+1. **Test volume control**
+
+ 1. Speak into your development device as you move the slider to the left and then right. You notice the volume decrease and then increase in the web demo app as the recording volume changes.
+
+ 1. Tap the **Mute** `CheckBox` while you speak into the microphone connected to the web demo app.
+ You notice that the remote audio is muted on your development device.
+
+ 1. To test other volume control methods, in `OnSliderValueChanged` replace the `adjustRecordingSignalVolume` call with one of the following:
+
+ ```cpp
+ agoraEngine->adjustPlaybackSignalVolume(volume);
+ agoraEngine->adjustUserPlaybackSignalVolume(remoteUId,volume);
+ agoraEngine->adjustAudioMixingVolume(volume);
+ agoraEngine->adjustAudioMixingPlayoutVolume(volume);
+ agoraEngine->adjustAudioMixingPublishVolume(volume);
+ agoraEngine->setInEarMonitoringVolume(volume);
+ ```
+ Run the app again and use the volume slider to test the change in the corresponding volume setting.
+
+ 1. To test other mute methods, in `OnMuteCheckboxChanged` replace the `muteRemoteAudioStream` call with one of the following:
+
+ ```cpp
+ agoraEngine->muteAllRemoteAudioStreams(isMute);
+ agoraEngine->muteLocalAudioStream(isMute);
+ ```
+ Run the app again and tap the `CheckBox` to test the effect of these mute methods.
+
+
diff --git a/shared/voice-sdk/develop/product-workflow/reference/index.mdx b/shared/voice-sdk/develop/product-workflow/reference/index.mdx
index c6f09a116..9c5b266ae 100644
--- a/shared/voice-sdk/develop/product-workflow/reference/index.mdx
+++ b/shared/voice-sdk/develop/product-workflow/reference/index.mdx
@@ -6,6 +6,7 @@ import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
import MacOS from './macos.mdx';
import Unity from './unity.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx';
@@ -17,4 +18,5 @@ import Windows from './windows.mdx';
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/product-workflow/reference/unreal.mdx b/shared/voice-sdk/develop/product-workflow/reference/unreal.mdx
new file mode 100644
index 000000000..76681d8c2
--- /dev/null
+++ b/shared/voice-sdk/develop/product-workflow/reference/unreal.mdx
@@ -0,0 +1,19 @@
+
+
+### API reference
+
+- adjustRecordingSignalVolume
+
+- adjustPlaybackSignalVolume
+
+- adjustUserPlaybackSignalVolume
+
+- adjustAudioMixingVolume
+
+- adjustAudioMixingPlayoutVolume
+
+- adjustAudioMixingPublishVolume
+
+- setInEarMonitoringVolume
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-implementation/index.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/index.mdx
index c88c9a18c..f068c485e 100644
--- a/shared/voice-sdk/develop/stream-raw-audio/project-implementation/index.mdx
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/index.mdx
@@ -6,6 +6,8 @@ import ReactNative from './react-native.mdx';
import MacOs from './macos.mdx';
import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
+import Unreal from './unreal.mdx';
+import Windows from './windows.mdx';
@@ -15,3 +17,5 @@ import Flutter from './flutter.mdx';
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-implementation/unreal.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/unreal.mdx
new file mode 100644
index 000000000..8602699ee
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/unreal.mdx
@@ -0,0 +1,130 @@
+
+
+
+### Handle the system logic
+
+This section describes the steps required to use the relevant libraries and declare the necessary variables.
+
+
+1. **Define the variables to manage audio processing**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ``` cpp
+ agora::media::IMediaEngine* MediaEngine;
+ class AudioFrameEventHandler* audioHandler;
+ // Audio frame parameters returned by the observer's getter callbacks.
+ agora::media::IAudioFrameObserverBase::AudioParams audioParams;
+ ```
+
+### Implement processing of raw audio
+
+To register and use an audio frame observer in your , take the following steps:
+
+1. **Set up the audio frame observer**
+
+ `IAudioFrameObserver` gives you access to each audio frame after it is captured and to each audio frame before it is played back. To set up `IAudioFrameObserver`, do the following:
+
+ 1. In `MyUserWidget.h`, add the following class after `UMyUserWidget`:
+
+ ```cpp
+ class AudioFrameEventHandler : public agora::media::IAudioFrameObserver
+ {
+ public:
+ AudioFrameEventHandler(UMyUserWidget* customAudioAndVideo)
+ {
+ agoraImplementation = customAudioAndVideo;
+ }
+ ~AudioFrameEventHandler() {}
+ bool onPlaybackAudioFrameBeforeMixing(const char* channelId, agora::rtc::uid_t uid, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onRecordAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onPlaybackAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onMixedAudioFrame(const char* channelId, agora::media::IAudioFrameObserverBase::AudioFrame& audioFrame) override;
+ bool onEarMonitoringAudioFrame(AudioFrame& audioFrame) override;
+ AudioParams getEarMonitoringAudioParams() override;
+ int getObservedAudioFramePosition() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getPlaybackAudioParams() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getRecordAudioParams() override;
+ agora::media::IAudioFrameObserverBase::AudioParams getMixedAudioParams() override;
+ private:
+ UMyUserWidget* agoraImplementation;
+ agora::media::IAudioFrameObserverBase::AudioParams audioParams;
+ };
+ ```
+
+ 2. In `MyUserWidget.cpp`, add the following callbacks before `setupVoiceSDKEngine`:
+
+ ```cpp
+ bool AudioFrameEventHandler::onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ bool AudioFrameEventHandler::onEarMonitoringAudioFrame(AudioFrame& audioFrame)
+ {
+ return true;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getEarMonitoringAudioParams()
+ {
+ return agoraImplementation->audioParams;
+ }
+ int AudioFrameEventHandler::getObservedAudioFramePosition()
+ {
+ return (int)(AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_PLAYBACK |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_RECORD |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_BEFORE_MIXING |
+ AUDIO_FRAME_POSITION::AUDIO_FRAME_POSITION_MIXED);
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getPlaybackAudioParams()
+ {
+ return agoraImplementation->audioParams;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getRecordAudioParams()
+ {
+ return agoraImplementation->audioParams;
+ }
+ agora::media::IAudioFrameObserverBase::AudioParams AudioFrameEventHandler::getMixedAudioParams()
+ {
+ return agoraImplementation->audioParams;
+ }
+ ```
+
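+ The callbacks above return `true` without touching the audio data. To illustrate where custom processing goes, the following hedged sketch shows what the body of `onRecordAudioFrame` could look like if you halve the volume of each captured frame. It assumes 16-bit PCM samples, which matches the `setRecordingAudioFrameParameters` call made later on this page:
+
+ ```cpp
+ bool AudioFrameEventHandler::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+     // Interpret the raw buffer as 16-bit PCM samples.
+     int16_t* samples = reinterpret_cast<int16_t*>(audioFrame.buffer);
+     int totalSamples = audioFrame.samplesPerChannel * audioFrame.channels;
+     // Halve the volume of the captured audio before it is sent to the channel.
+     for (int i = 0; i < totalSamples; ++i)
+     {
+         samples[i] = samples[i] / 2;
+     }
+     return true;
+ }
+ ```
+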
+
+3. **Register the audio frame observer**
+
+ To receive the callbacks declared in `IAudioFrameObserver`, you must register the audio frame observer with the before joining a channel. To specify the format of audio frames captured by each `IAudioFrameObserver` callback, use the `setRecordingAudioFrameParameters`, `setMixedAudioFrameParameters` and `setPlaybackAudioFrameParameters` methods. To do this, in `MyUserWidget.cpp`, add the following at the end of `setupVoiceSDKEngine`:
+
+ ``` cpp
+ agoraEngine->queryInterface(AGORA_IID_MEDIA_ENGINE, (void**)&MediaEngine);
+ audioHandler = new AudioFrameEventHandler(this);
+ MediaEngine->registerAudioFrameObserver(audioHandler);
+ // Set the format of the captured raw audio data.
+ int SAMPLE_RATE = 16000, SAMPLE_NUM_OF_CHANNEL = 1, SAMPLES_PER_CALL = 1024;
+ agoraEngine->setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+ agoraEngine->setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+ agoraEngine->setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, SAMPLES_PER_CALL);
+ ```
+
+4. **Unregister the audio observer when you close the **
+
+ When you close the , you unregister the frame observer by calling `registerAudioFrameObserver` again with a null pointer. To do this, in `MyUserWidget.cpp`, add the following line to `NativeDestruct` before `agoraEngine->unregisterEventHandler(this);`:
+
+ ``` cpp
+ MediaEngine->registerAudioFrameObserver(nullptr);
+ ```
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-implementation/windows.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/windows.mdx
new file mode 100644
index 000000000..8dc3506c8
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-implementation/windows.mdx
@@ -0,0 +1,166 @@
+
+
+
+### Implement processing of raw audio data
+
+To register and use audio frame observers in your , provides the `IAudioFrameObserver` interface for real-time audio processing. This interface enables you to capture and process raw audio data before it is sent over the network. To implement it, take the following steps:
+
+1. **Set up the audio frame observer**
+ 1. Implement the `IAudioFrameObserver` audio frame observer interface.
+
+ Put the following code in `AgoraImplementationDlg.h` before the `AgoraEventHandler` class declaration:
+
+ ```cpp
+ class CustomAudioFrameObserver: public IAudioFrameObserver {
+ virtual bool onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame)override;
+ virtual bool onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)override;
+ virtual bool onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame)override;
+ virtual bool onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame)override;
+ virtual bool onEarMonitoringAudioFrame(AudioFrame& audioFrame)override;
+ virtual int getObservedAudioFramePosition()override;
+ virtual AudioParams getPlaybackAudioParams()override;
+ virtual AudioParams getRecordAudioParams()override;
+ virtual AudioParams getMixedAudioParams()override;
+ virtual AudioParams getEarMonitoringAudioParams()override;
+ };
+ ```
+
+ 1. Add the audio frame observer:
+
+ In `AgoraImplementationDlg.h`, add the following code after `AgoraEventHandler agoraEventHandler;`
+
+ ```cpp
+ // Audio frame observer object
+ CustomAudioFrameObserver* audio_frame_observer = new CustomAudioFrameObserver();
+ ```
+ 1. Provide definitions for the member functions you declared in `CustomAudioFrameObserver`.
+
+
+ 2. Implement audio frame observer callbacks:
+
+ `IAudioFrameObserver` provides access to each captured or played back audio frame, allowing you to process the audio frames according to your needs. To implement the `IAudioFrameObserver` callbacks, add the following to `AgoraImplementationDlg.cpp`:
+
+ ```cpp
+ // IAudioFrameObserver Implementation
+ // User can provide implementation for "onRecordAudioFrame()", "onPlaybackAudioFrame()" and "onMixedAudioFrame()" as per scenario.
+ bool CustomAudioFrameObserver::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ // Gets the captured audio frame.
+ // Add code here to process the recorded audio.
+ return true;
+ }
+
+ bool CustomAudioFrameObserver::onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ // Gets the audio frame for playback.
+ // Add code here to process the playback audio.
+ return true;
+ }
+
+ bool CustomAudioFrameObserver::onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame)
+ {
+ // Retrieves the mixed captured and playback audio frame.
+ // Add code here to process the mixed captured playback audio
+ return true;
+ }
+
+ // The remaining functions are dummy "do-nothing" implementations, provided only to override the pure virtual functions of the interface.
+ bool CustomAudioFrameObserver::onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame)
+ {
+ return false;
+ }
+
+ bool CustomAudioFrameObserver::onEarMonitoringAudioFrame(AudioFrame& audioFrame)
+ {
+ return false;
+ }
+
+ int CustomAudioFrameObserver::getObservedAudioFramePosition()
+ {
+ return 0;
+ }
+
+ IAudioFrameObserver::AudioParams CustomAudioFrameObserver::getPlaybackAudioParams()
+ {
+ return AudioParams();
+ }
+
+ IAudioFrameObserver::AudioParams CustomAudioFrameObserver::getRecordAudioParams()
+ {
+ return AudioParams();
+ }
+
+    IAudioFrameObserver::AudioParams CustomAudioFrameObserver::getMixedAudioParams()
+ {
+ return AudioParams();
+ }
+
+ IAudioFrameObserver::AudioParams CustomAudioFrameObserver::getEarMonitoringAudioParams()
+ {
+ return AudioParams();
+ }
+ ```
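+
+    For example, here is a minimal sketch of in-place processing in `onRecordAudioFrame`, assuming the captured frames contain 16-bit PCM samples (the SDK's raw audio format). It simply halves the captured volume; replace the stub above with logic that suits your scenario:
+
+    ```cpp
+    bool CustomAudioFrameObserver::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
+    {
+        // Interpret the raw buffer as 16-bit PCM samples.
+        int16_t* samples = reinterpret_cast<int16_t*>(audioFrame.buffer);
+        int totalSamples = audioFrame.samplesPerChannel * audioFrame.channels;
+        // Halve the volume of the captured frame in place.
+        for (int i = 0; i < totalSamples; ++i)
+        {
+            samples[i] = samples[i] / 2;
+        }
+        return true;
+    }
+    ```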
+
+2. **Register the audio frame observers**
+
+ To receive the callbacks declared in `IAudioFrameObserver`, you register the audio frame observers with the before joining a channel. To specify the format of audio frames captured by each `IAudioFrameObserver` callback, use the `setRecordingAudioFrameParameters`, `setPlaybackAudioFrameParameters` and `setMixedAudioFrameParameters` methods.
+
+ To register the observers, do the following:
+
+    1. Set up a function to register and unregister the audio frame observers. In `AgoraImplementationDlg.h`, add the following code after `void setupVideoSDKEngine();`:
+
+ ```cpp
+ bool EnableAudioVideoCapture(bool bEnable);
+ ```
+
+ 1. Define `EnableAudioVideoCapture()`. Add the following in `AgoraImplementationDlg.cpp`, before `CAboutDlg()`:
+
+ ```cpp
+ bool CAgoraImplementationDlg::EnableAudioVideoCapture(bool bEnable)
+ {
+    agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
+ //query interface agora::AGORA_IID_MEDIA_ENGINE in the engine.
+ mediaEngine.queryInterface(agoraEngine, agora::rtc::AGORA_IID_MEDIA_ENGINE);
+ int nRet = 0;
+ agora::base::AParameter apm(agoraEngine);
+ if (mediaEngine.get() == NULL)
+ return FALSE;
+ if (bEnable)
+ {
+ // Register the audio frame observer
+ nRet = mediaEngine->registerAudioFrameObserver(audio_frame_observer);
+
+ // Set the format of the captured raw audio data.
+ int SAMPLE_RATE = 16000, SAMPLE_NUM_OF_CHANNEL = 1, SAMPLES_PER_CALL = 1024;
+
+ agoraEngine->setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+
+ agoraEngine->setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
+ RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
+
+ agoraEngine->setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, SAMPLES_PER_CALL);
+ }
+ else
+ {
+ nRet = mediaEngine->registerAudioFrameObserver(NULL);
+ }
+ return nRet == 0 ? TRUE : FALSE;
+ }
+ ```
+
+ 1. To execute this method before joining the channel, add the following before `agoraEngine->enableAudio()`:
+
+ ```cpp
+    // Enable raw audio frame capture by registering the audio frame observer.
+ EnableAudioVideoCapture(TRUE);
+ ```
+3. **Unregister the audio frame observer when you leave a channel**
+
+    When you leave a channel, you unregister the frame observers by calling `EnableAudioVideoCapture()` with `FALSE`. To do this, add the following to the `Leave` button event listener, `OnBnClickedButton1()`, after `agoraEngine->disableAudio();`:
+
+ ```cpp
+ // Unregister audio & video frame.
+ EnableAudioVideoCapture(FALSE);
+ ```
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-setup/index.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-setup/index.mdx
index 20ae2b869..eb7289580 100644
--- a/shared/voice-sdk/develop/stream-raw-audio/project-setup/index.mdx
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-setup/index.mdx
@@ -6,6 +6,7 @@ import ReactNative from './react-native.mdx';
import MacOs from './macos.mdx';
import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
+import Windows from './windows.mdx';
@@ -15,4 +16,5 @@ import Flutter from './flutter.mdx';
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-setup/windows.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-setup/windows.mdx
new file mode 100644
index 000000000..fc4a8f531
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-setup/windows.mdx
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-test/index.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-test/index.mdx
index c88c9a18c..f068c485e 100644
--- a/shared/voice-sdk/develop/stream-raw-audio/project-test/index.mdx
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-test/index.mdx
@@ -6,6 +6,8 @@ import ReactNative from './react-native.mdx';
import MacOs from './macos.mdx';
import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
+import Unreal from './unreal.mdx';
+import Windows from './windows.mdx';
@@ -15,3 +17,5 @@ import Flutter from './flutter.mdx';
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-test/unreal.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-test/unreal.mdx
new file mode 100644
index 000000000..cdc41c84f
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-test/unreal.mdx
@@ -0,0 +1,23 @@
+
+
+1. Generate a temporary token in .
+
+1. In your browser, navigate to the web demo and update _App ID_, _Channel_, and _Token_ with the values for your temporary token, then click **Join**.
+
+1. In `MyUserWidget.h`, update `appId`, `channelName`, and `token` with the values for your temporary token.
+
+1. In Unreal Editor, click **Play**. A moment later you see the  running on your device.
+
+    If this is the first time you run the project, grant microphone access to your app.
+
+1. Press **Join**.
+
+1. Test processing of raw audio data.
+
+ Edit the `IAudioFrameObserver` callbacks by adding code that processes the raw audio data you receive in the following callbacks:
+
+ - `onRecordAudioFrame`: Gets the captured audio frame data
+
+ - `onPlaybackAudioFrame`: Gets the audio frame for playback
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/project-test/windows.mdx b/shared/voice-sdk/develop/stream-raw-audio/project-test/windows.mdx
new file mode 100644
index 000000000..9e7b83a62
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/project-test/windows.mdx
@@ -0,0 +1,33 @@
+
+ 1. Generate a temporary token in .
+
+ 2. In your browser, navigate to the web demo and update _App ID_, _Channel_, and _Token_ with the values for your temporary token, then click **Join**.
+
+    3. In `AgoraImplementationDlg.h`, update `appId`, `channelName`, and `token` with the values for your temporary token.
+
+ 4. In Visual Studio, click **Local Windows Debugger**. A moment later you see the project running on your development device.
+
+        If this is the first time you run the project, you need to grant microphone access to your .
+
+
+ 5. Select an option and click **Join** to start a session.
+ - When you join as a **Host**, the local video is published and played in the .
+ - When you join as **Audience**, the remote stream is subscribed and played.
+
+
+
+    5. Click **Join** to start a call. Now, you can talk to the remote user using your .
+
+
+ 7. Test processing of raw audio data.
+
+    Edit the `IAudioFrameObserver` definition by adding code that processes the raw audio data you receive in the following callbacks:
+
+ - `onRecordAudioFrame`: Gets the captured audio frame data
+
+ - `onPlaybackAudioFrame`: Gets the audio frame for playback
+
+    - `onMixedAudioFrame`: Retrieves the mixed captured and playback audio frame
+
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/develop/stream-raw-audio/reference/index.mdx b/shared/voice-sdk/develop/stream-raw-audio/reference/index.mdx
index ec266381f..8ce9a1489 100644
--- a/shared/voice-sdk/develop/stream-raw-audio/reference/index.mdx
+++ b/shared/voice-sdk/develop/stream-raw-audio/reference/index.mdx
@@ -6,6 +6,8 @@ import MacOs from './macos.mdx';
import Unity from './unity.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx';
+import Unreal from './unreal.mdx';
+import Windows from './windows.mdx';
@@ -15,3 +17,5 @@ import ReactNative from './react-native.mdx';
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/reference/unreal.mdx b/shared/voice-sdk/develop/stream-raw-audio/reference/unreal.mdx
new file mode 100644
index 000000000..e1c57ad7b
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/reference/unreal.mdx
@@ -0,0 +1,11 @@
+import * as data from '@site/data/variables';
+
+
+
+### API reference
+
+* IAudioFrameObserver
+
+* registerAudioFrameObserver
+
+
diff --git a/shared/voice-sdk/develop/stream-raw-audio/reference/windows.mdx b/shared/voice-sdk/develop/stream-raw-audio/reference/windows.mdx
new file mode 100644
index 000000000..06261eb5e
--- /dev/null
+++ b/shared/voice-sdk/develop/stream-raw-audio/reference/windows.mdx
@@ -0,0 +1,26 @@
+
+
+- For a more complete example, see the [open source example project](https://github.com/AgoraIO/API-Examples/tree/master/windows) on GitHub.
+
+### API Reference
+
+
+- registerVideoFrameObserver
+
+- IAudioFrameObserver
+
+- IAudioFrameObserverBase
+
+- registerAudioFrameObserver
+
+- setRecordingAudioFrameParameters
+
+- setPlaybackAudioFrameParameters
+
+- setMixedAudioFrameParameters
+
+
+
+
+
+
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-implementation/index.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-implementation/index.mdx
index 619043985..deb600bf6 100644
--- a/shared/voice-sdk/get-started/get-started-sdk/project-implementation/index.mdx
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-implementation/index.mdx
@@ -6,6 +6,7 @@ import ReactNative from './react-native.mdx'
import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
+import Unreal from './unreal.mdx';
import Windows from './windows.mdx'
@@ -17,3 +18,4 @@ import Windows from './windows.mdx'
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx
new file mode 100644
index 000000000..f00ad97a4
--- /dev/null
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-implementation/unreal.mdx
@@ -0,0 +1,365 @@
+
+### Implement the user interface
+
+For a basic , you need two button widgets: one to join the channel and one to leave it. To implement this user interface, you need a `UserWidget` class. You use this class to create a blueprint that you use to add the required user widgets. To implement this workflow, do the following:
+
+1. **Add a user widget class**
+
+ In Unreal Editor, go to **Tools** > **New C++ Class**. The **Add C++ Class** window opens. Click **All Classes** and input `UserWidget` in the **Search** field. From the search results, select `UserWidget` and click **Next**, then click **Create Class**. A new C++ class is added to your project and the class source code opens in Visual Studio IDE.
+
+2. **Create a blueprint for the user widget class**
+
+ In **Content Browser**, go to **Add** > **Blueprint Class**. The **Pick Parent Class** window opens. Expand the **ALL CLASSES** dropdown and input `MyUserWidget` in the **Search** field. From the search results, select `MyUserWidget` and press **Select**. You see a new blueprint in the content folder called `NewBlueprint`.
+
+3. **Add the join and leave buttons**
+
+ In **Content Browser**, navigate to the content folder and double-click `NewBlueprint`. The blueprint opens in the editor. To add buttons to the UI, take the following steps:
+
+    1. Add a canvas panel. Drag **Canvas Panel** from the **Panel** section of the **Palette Panel** to the **Graph**. You see a canvas panel appear in the **Graph**.
+
+    1. Drag **Button** from the **Common** section of the **Palette Panel** to the canvas panel. You see a button appear on the canvas panel.
+
+ 1. In **Details**, rename **Button_0** to `JoinBtn`, then change the following coordinates:
+
+ * **Position X** - 1000
+ * **Position Y** - 960
+ * **Size X** - 130
+ * **Size Y** - 60
+
+ 1. Use the same procedure to create a button called `LeaveBtn` and change the following properties in **Details**.
+
+ * **Position X** - 1150
+ * **Position Y** - 960
+ * **Size X** - 130
+ * **Size Y** - 60
+
+ The sample code uses **Text** as a button label. You need two **Text** widgets, one for the join button and one for the leave button.
+
+ 5. To add a label to the join button, drag **Text** from the **Common** section of the **Palette Panel** to **Hierarchy** and drop it over `JoinBtn`.
+
+ 1. In **Details**, change the **Text** field to `Join`.
+
+ 1. Use the same procedure and add a **Text** widget for `LeaveBtn` where the **Text** field says `Leave`.
+
+    Click **Compile**. You see the question mark on the **Compile** button turn into a green tick. This means you have successfully added new widgets to the blueprint.
+
+4. **Add the widget blueprint to the viewport**
+
+ Once you have created and laid out your widget blueprint, in order for it to be displayed in your game, you need to call it by using the **Create Widget** and **Add to Viewport** nodes inside **Level Blueprint**. To implement this workflow, take the following steps:
+
+ 1. In the list of world blueprints, click **Open Level Blueprint**. The **Event Graph** window opens.
+
+ 1. Right-click in the **Event Graph** window and input `create widget` in the **Search** field. From the search results, choose **Create Widget**. You see a node appears in **Level Blueprint**.
+
+ 1. Use the same procedure and add the **Add to Viewport** node to **Level Blueprint**.
+
+    1. Connect **Event BeginPlay** to the left pins of the **Construct NONE** node.
+
+ 1. Connect the two right pins of the **Construct NONE** node to the **Add to Viewport** left pins.
+
+ 1. Set the **Class** field on the **Construct NONE** node to `NewBlueprint`.
+
+ 1. Click **Compile** in the **Event Graph** window. The yellow question mark turns green.
+
+ 1. Press **Ctrl + S** and save the level blueprint with the default name.
+
+ Your **Event Graph** looks like:
+
+ ![image](/images/video-sdk/unreal-blueprint.png)
+
+To check that you successfully added `NewBlueprint` to the viewport, in **Content Browser**, navigate to the `Content` folder and double-click `Untitled`. You see the new game level open. Click **Play** and you see the following UI:
+
+![image](/images/voice-sdk/unreal-voice-calling-ui.png)
+
+### Handle the system logic
+
+Import the necessary C++ libraries, set up your  to run on Android, and request permission to access the microphone.
+
+1. **Reference the user widgets**
+
+ In Visual Studio, open `MyUserWidget.h` and add the following property specifiers to `UMyUserWidget`:
+
+ ```cpp
+ protected:
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* JoinBtn = nullptr;
+ UPROPERTY(VisibleAnywhere, BlueprintReadWrite, meta = (BindWidget))
+ UButton* LeaveBtn = nullptr;
+ ```
+ To access the user widgets from the blueprint, in `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "Components/Button.h"
+ ```
+
+1. **Set up event listeners for the buttons**
+
+ In `MyUserWidget.h`, add the following to `UMyUserWidget` after `UButton* LeaveBtn = nullptr;`:
+
+ ```cpp
+ UFUNCTION(BlueprintCallable)
+ void OnLeaveButtonClick();
+ UFUNCTION(BlueprintCallable)
+ void OnJoinButtonClick();
+ ```
+
+3. **Manage Android permissions**
+
+ 1. Add the Unreal Android libraries. In `MyUserWidget.h`, add the following before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #if PLATFORM_ANDROID
+ #include "AndroidPermission/Classes/AndroidPermissionFunctionLibrary.h"
+ #endif
+ ```
+
+    2. Set up a function to check whether the required permissions are granted. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ``` cpp
+ void CheckAndroidPermission();
+ ```
+
+ 3. Add the logic of requesting permissions to `CheckAndroidPermission`. In `MyUserWidget.cpp`, add the following method after `#include "MyUserWidget.h"`:
+
+ ```cpp
+ void UMyUserWidget::CheckAndroidPermission()
+ {
+ #if PLATFORM_ANDROID
+ FString pathfromName = UGameplayStatics::GetPlatformName();
+ if (pathfromName == "Android")
+ {
+    TArray<FString> AndroidPermission;
+ AndroidPermission.Add(FString("android.permission.RECORD_AUDIO"));
+ AndroidPermission.Add(FString("android.permission.READ_PHONE_STATE"));
+ AndroidPermission.Add(FString("android.permission.WRITE_EXTERNAL_STORAGE"));
+ UAndroidPermissionFunctionLibrary::AcquirePermissions(AndroidPermission);
+ }
+ #endif
+ }
+ ```
+
+### Implement the channel logic
+
+The following figure shows the API call sequence of implementing .
+
+ ![image](/images/voice-sdk/voice-call-logic-unity.svg)
+
+To implement this logic, take the following steps:
+
+1. **Import the library**
+
+ In `MyUserWidget.h`, add the following header file before `#include "MyUserWidget.generated.h"`:
+
+ ```cpp
+ #include "AgoraPluginInterface.h"
+ ```
+
+ To import the required namespaces, in `MyUserWidget.h`, add the following before `UCLASS()`:
+
+ ```cpp
+ using namespace agora::rtc;
+ using namespace agora;
+ ```
+
+1. **Import dependency module**
+
+ In `AgoraImplementation.Build.cs`, update the following line:
+
+ ```cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "Json"});
+ ```
+
+ With the following line:
+
+ ```cpp
+ PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "Json","AgoraPlugin"});
+ ```
+
+1. **Declare the required variables**
+
+ In `MyUserWidget.h`, add the following declarations to `UMyUserWidget`:
+
+ ```cpp
+ protected:
+ IRtcEngine* agoraEngine;
+ std::string appId = "";
+ std::string channelName = "";
+ std::string token = "";
+ bool isJoin = false;
+ int remoteUId;
+ ```
+
+1. **Set up **
+
+    To set up an instance of , take the following steps:
+
+    1. Set up a function that contains the logic for creating an engine instance. In `MyUserWidget.h`, add the following method declaration to `UMyUserWidget`:
+
+ ```cpp
+ void setupVoiceSDKEngine();
+ ```
+
+    2. In `MyUserWidget.cpp`, add the definition of `setupVoiceSDKEngine`:
+
+ ``` cpp
+ void UMyUserWidget::setupVoiceSDKEngine()
+ {
+ // Create an engine instance.
+ agoraEngine = agora::rtc::ue::createAgoraRtcEngine();
+ // Specify a context for the engine.
+ RtcEngineContext context;
+ context.appId = appId.c_str();
+ context.eventHandler = this;
+ // Choose the communication profile for voice calling.
+ context.channelProfile = CHANNEL_PROFILE_TYPE::CHANNEL_PROFILE_COMMUNICATION;
+ // Initialize the engine instance with the context.
+ agoraEngine->initialize(context);
+ // Enable the local audio capture to init the local audio stream.
+ agoraEngine->enableAudio();
+ // Attach event listener functions to the button.
+ LeaveBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnLeaveButtonClick);
+ JoinBtn->OnClicked.AddDynamic(this, &UMyUserWidget::OnJoinButtonClick);
+ }
+ ```
+
+1. **Handle and respond to events**
+
+ To register the callbacks, inherit the `UMyUserWidget` with the `IRtcEngineEventHandler` class. In `MyUserWidget.h`, add the following after `class AGORAIMPLEMENTATION_API UMyUserWidget : public UUserWidget`:
+
+ ```cpp
+ , public IRtcEngineEventHandler
+ ```
+
+ To implement the required callbacks in your , take the following steps:
+
+    1. Override the necessary callbacks. In `MyUserWidget.h`, add the following callbacks to `UMyUserWidget` after `void setupVoiceSDKEngine();`:
+
+ ```cpp
+ // Occurs when a remote user joins the channel.
+ void onUserJoined(uid_t uid, int elapsed) override;
+ // Occurs when a local user joins the channel.
+ void onJoinChannelSuccess(const char* channel, uid_t uid, int elapsed) override;
+ // Occurs when you leave the channel.
+ void onLeaveChannel(const RtcStats& stats) override;
+ // Occurs when the remote user drops offline.
+ void onUserOffline(uid_t uid, USER_OFFLINE_REASON_TYPE reason) override;
+ ```
+    2. Add your logic to the callbacks you declared in `UMyUserWidget`. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+ ```cpp
+ void UMyUserWidget::onLeaveChannel(const RtcStats& stats)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("onLeaveChannel: You left the channel"));
+ });
+ }
+ void UMyUserWidget::onJoinChannelSuccess(const char* channel, uid_t uid, int elapsed)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("JoinChannelSuccess uid: %u"), uid);
+ });
+ }
+ void UMyUserWidget::onUserJoined(uid_t uid, int elapsed)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("onUserJoined uid: %u"), uid);
+ remoteUId = uid;
+ });
+ }
+ void UMyUserWidget::onUserOffline(uid_t uid, USER_OFFLINE_REASON_TYPE reason)
+ {
+ AsyncTask(ENamedThreads::GameThread, [=]()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("onUserOffline uid: %u"), uid);
+ });
+ }
+ ```
+
+1. **Join a channel to start **
+
+    When the user clicks **Join**, you call the `OnJoinButtonClick()` method. This method securely connects the local user to a channel using the authentication token. In `MyUserWidget.cpp`, add the following before `setupVoiceSDKEngine`:
+
+    ```cpp
+ void UMyUserWidget::OnJoinButtonClick()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget:: OnJoinButtonClick ======"));
+ // Set the user role to Host.
+ agoraEngine->setClientRole(CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER);
+ // Join the channel.
+ agoraEngine->joinChannel(token.c_str(), channelName.c_str(), "", 0);
+ isJoin = true;
+ }
+ ```
+
+1. **Leave the channel when a user ends the call**
+
+ When a user clicks **Leave**, you call `OnLeaveButtonClick()` to exit the channel. In `MyUserWidget.cpp`, add the following before `onUserJoined`:
+    ```cpp
+ void UMyUserWidget::OnLeaveButtonClick()
+ {
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget:: OnLeaveButtonClick ======"));
+ agoraEngine->leaveChannel();
+ remoteUId = NULL;
+ isJoin = false;
+ }
+ ```
+
+### Start and stop your
+
+In this implementation, you initiate and remove  when the app opens and closes. The local user joins and leaves a channel using the same instance. In order to send an audio stream to , you need to ensure that the local user grants permission to access the microphone on the local device. To implement this functionality:
+
+1. **Check that the has the correct permissions to start**
+
+    For Android, call `CheckAndroidPermission` to ensure that the required permissions are granted. To execute this check at startup, call `CheckAndroidPermission` from `NativeConstruct` of the `UMyUserWidget` class. To implement this workflow, take the following steps:
+
+    1. Declare `NativeConstruct` in `UMyUserWidget`. In `MyUserWidget.h`, add the following before `void setupVoiceSDKEngine();`:
+
+ ```cpp
+ void NativeConstruct();
+ ```
+
+    2. Call `CheckAndroidPermission` in `NativeConstruct`. In `MyUserWidget.cpp`, add the following code before `setupVoiceSDKEngine`:
+
+ ```cpp
+    void UMyUserWidget::NativeConstruct()
+    {
+        // Call the base class implementation first.
+        Super::NativeConstruct();
+        CheckAndroidPermission();
+    }
+ ```
+ 3. After you check permissions, you call `setupVoiceSDKEngine` to create an engine instance. In `MyUserWidget.cpp`, add the following at the end of `NativeConstruct`:
+
+ ```cpp
+ setupVoiceSDKEngine();
+ ```
+
+2. **Clean up the resources used by your **
+
+    When a user closes the , use `NativeDestruct` to clean up the resources you created in `setupVoiceSDKEngine`. To implement this workflow, take the following steps:
+
+    1. Declare `NativeDestruct` in the `UMyUserWidget` class. In `MyUserWidget.h`, add the following before `void NativeConstruct();`:
+
+ ```cpp
+ void NativeDestruct();
+ ```
+
+ 2. Add the resource clean up logic to `NativeDestruct`. In `MyUserWidget.cpp`, add the following before `NativeConstruct`:
+
+ ```cpp
+ void UMyUserWidget::NativeDestruct()
+ {
+ Super::NativeDestruct();
+ UE_LOG(LogTemp, Warning, TEXT("UMyUserWidget::NativeDestruct"));
+ if (agoraEngine != nullptr)
+ {
+ agoraEngine->unregisterEventHandler(this);
+ agoraEngine->release();
+ delete agoraEngine;
+ agoraEngine = nullptr;
+ }
+ }
+ ```
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-setup/index.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-setup/index.mdx
index 9ed94461a..7e25db626 100644
--- a/shared/voice-sdk/get-started/get-started-sdk/project-setup/index.mdx
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-setup/index.mdx
@@ -7,6 +7,8 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx'
+import Unreal from './unreal.mdx';
+
@@ -16,4 +18,5 @@ import Windows from './windows.mdx'
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-setup/unreal.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-setup/unreal.mdx
new file mode 100644
index 000000000..0e28274a2
--- /dev/null
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-setup/unreal.mdx
@@ -0,0 +1,31 @@
+
+1. **Create a new project**
+
+ 1. In Unreal Editor, select the **Games** new project category.
+
+    1. From the **Unreal Project Browser**, select the **Blank** template and choose the following settings under **Project Defaults**:
+
+ 1. Select **C++**.
+
+ 1. From the **Target Platform** dropdown, select **Desktop**.
+
+ 1. Make sure the **Starter Content** check box is unchecked.
+
+ 1. In the **Project Name** field, input `AgoraImplementation` and click **Next**.
+
+    Your project opens in Unreal Editor.
+
+1. **Integrate **
+
+    1. Go to [SDKs](/sdks) and download the latest version of the Agora .
+
+ 1. Create a folder called `Plugins` in the root directory of your project folder.
+
+ 1. Unzip the downloaded SDK to `/Plugins`.
+
+    1. In Solution Explorer, right-click your project, then click **Properties**. The **AgoraImplementation Property Pages** window opens. Go to the **VC++ Directories** menu, add the following string to the **External Include Directories** field, then click **OK**:
+
+ ```
+ $(SolutionDir)Plugins\AgoraPlugin\Source\AgoraPlugin\Public;
+ ```
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-test/index.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-test/index.mdx
index e2416aa7c..c6b2641a3 100644
--- a/shared/voice-sdk/get-started/get-started-sdk/project-test/index.mdx
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-test/index.mdx
@@ -7,6 +7,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import Unity from './unity.mdx';
import Windows from './windows.mdx'
+import Unreal from './unreal.mdx';
@@ -16,4 +17,5 @@ import Windows from './windows.mdx'
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/project-test/unreal.mdx b/shared/voice-sdk/get-started/get-started-sdk/project-test/unreal.mdx
new file mode 100644
index 000000000..394146953
--- /dev/null
+++ b/shared/voice-sdk/get-started/get-started-sdk/project-test/unreal.mdx
@@ -0,0 +1,11 @@
+
+
+3. In **MyUserWidget.h**, update `appId`, `channelName` and `token` with the values for your temporary token.
+
+4. In **Unreal Editor**, click **Play**. A moment later you see the running on your development device.
+
+ If this is the first time you run the project, grant microphone access to your .
+
+5. Click **Join** to start a call. Now, you can talk to the remote user using your .
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/reference/index.mdx b/shared/voice-sdk/get-started/get-started-sdk/reference/index.mdx
index ef80d2868..baeb45f2c 100644
--- a/shared/voice-sdk/get-started/get-started-sdk/reference/index.mdx
+++ b/shared/voice-sdk/get-started/get-started-sdk/reference/index.mdx
@@ -7,6 +7,7 @@ import Electron from './electron.mdx';
import Flutter from './flutter.mdx';
import ReactNative from './react-native.mdx'
import Windows from './windows.mdx'
+import Unreal from './unreal.mdx'
@@ -16,4 +17,5 @@ import Windows from './windows.mdx'
-
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/shared/voice-sdk/get-started/get-started-sdk/reference/unreal.mdx b/shared/voice-sdk/get-started/get-started-sdk/reference/unreal.mdx
new file mode 100644
index 000000000..97dd6291b
--- /dev/null
+++ b/shared/voice-sdk/get-started/get-started-sdk/reference/unreal.mdx
@@ -0,0 +1,9 @@
+
+
+### API reference
+
+- joinChannel
+
+- leaveChannel
+
+
diff --git a/shared/voice-sdk/reference/release-notes/index.mdx b/shared/voice-sdk/reference/release-notes/index.mdx
index ec733a44a..9530ce6f5 100644
--- a/shared/voice-sdk/reference/release-notes/index.mdx
+++ b/shared/voice-sdk/reference/release-notes/index.mdx
@@ -6,6 +6,7 @@ import Electron from './electron.mdx';
import Ios from './ios.mdx';
import Macos from './macos.mdx';
import Windows from './windows.mdx';
+import Unreal from './unreal.mdx';
import Web from '@docs/shared/video-sdk/reference/release-notes/web.mdx';
import NCS from '@docs/shared/video-sdk/reference/release-notes/ncs-release-notes.mdx';
import AINS from '@docs/shared/extensions-marketplace/reference/_ains.mdx';
@@ -26,6 +27,7 @@ This page provides the release notes for the
+
+## v4.1.0
+
+v4.1.0 was released on February 14, 2023.
+
+
+This release is the first version of RTC Unreal SDK, including the following features:
+
+**1. Multiple media tracks**
+
+This release supports using one `IRtcEngine` instance to capture multiple audio and video sources at the same time and publish them to the remote users by setting `RtcEngineEx` and `ChannelMediaOptions`.
+
+- After calling `joinChannel` to join the first channel, call `joinChannelEx` multiple times to join multiple channels, and publish the specified stream to different channels through different user IDs (`localUid`) and `ChannelMediaOptions` settings.
+- You can simultaneously publish multiple sets of video streams captured by multiple cameras or screen sharing by setting `publishSecondaryCameraTrack` and `publishSecondaryScreenTrack` in `ChannelMediaOptions`.
+
+This release adds the `createCustomVideoTrack` method to implement custom video capture. You can refer to the following steps to publish multiple custom captured video streams in the channel (a minimal sketch follows the list):
+
+1. Create a custom video track: Call this method to create a video track, and get the video track ID.
+2. Set the custom video track to be published in the channel: In each channel's `ChannelMediaOptions`, set the `customVideoTrackId` parameter to the ID of the video track you want to publish, and set `publishCustomVideoTrack` to `true`.
+3. Push an external video source: Call `pushVideoFrame` and specify `videoTrackId` as the ID of the custom video track from step 2 to publish the corresponding custom video source in multiple channels.
+
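+A minimal sketch of these steps, assuming an existing `agoraEngine` instance and hypothetical `token`, channel name, and user ID values (exact field and method names may vary slightly between SDK versions):
+
+```cpp
+// Step 1: create a custom video track and get its ID.
+video_track_id_t trackId = agoraEngine->createCustomVideoTrack();
+
+// Step 2: publish the custom track in the target channel.
+ChannelMediaOptions options;
+options.clientRoleType = CLIENT_ROLE_TYPE::CLIENT_ROLE_BROADCASTER;
+options.publishCustomVideoTrack = true;
+options.customVideoTrackId = trackId;
+
+RtcConnection connection;
+connection.channelId = "second_channel"; // hypothetical channel name
+connection.localUid = 1001;              // hypothetical local user ID
+
+// joinChannelEx is exposed by the IRtcEngineEx interface.
+static_cast<IRtcEngineEx*>(agoraEngine)->joinChannelEx(token, connection, options, nullptr);
+
+// Step 3: push external frames through IMediaEngine::pushVideoFrame(&frame, trackId)
+// so that the frames are published on the custom video track.
+```
+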
+You can also experience the following features with the multi-channel capability:
+
+- Publish multiple sets of audio and video streams to the remote users through different user IDs (`uid`).
+- Mix multiple audio streams and publish them to the remote users through one user ID (`uid`).
+- Combine multiple video streams and publish them to the remote users through a user ID (`uid`).
+
+**2. Ultra HD resolution (Beta)**
+
+In order to improve the interactive video experience, the SDK optimizes the whole process of video capture, encoding, decoding and rendering, and now supports 4K resolution. The improved FEC (Forward Error Correction) algorithm enables adaptive switches according to the frame rate and number of video frame packets, which further reduces the video stuttering rate in 4K scenes.
+
+Additionally, you can set the encoding resolution to 4K (3840 × 2160) and the frame rate to 60 fps when calling `SetVideoEncoderConfiguration`. The SDK supports automatic fallback to the appropriate resolution and frame rate if your device does not support 4K.
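+
+For example, a minimal sketch of requesting 4K at 60 fps, assuming an existing `agoraEngine` instance:
+
+```cpp
+// Request 4K at 60 fps; the SDK falls back automatically if the device cannot support it.
+VideoEncoderConfiguration config;
+config.dimensions = VideoDimensions(3840, 2160);
+config.frameRate = 60;
+agoraEngine->setVideoEncoderConfiguration(config);
+```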
+
+> **Note**: This feature has certain requirements with regards to device performance and network bandwidth, and the supported upstream and downstream frame rates vary on different platforms. To experience this feature, contact technical support.
+
+**3. Built-in media player**
+
+To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the `createMediaPlayer` method to create a media player object, you can then call the methods in the `IMediaPlayer` class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
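+
+A minimal usage sketch, assuming an existing `agoraEngine` instance and a hypothetical media URL:
+
+```cpp
+// Create a media player and open an online media file.
+agora_refptr<IMediaPlayer> mediaPlayer = agoraEngine->createMediaPlayer();
+mediaPlayer->open("https://example.com/sample.mp3", 0); // URL and start position in ms
+// Call mediaPlayer->play() once onPlayerSourceStateChanged reports PLAYER_STATE_OPEN_COMPLETED.
+```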
+
+
+**4. Screen sharing**
+
+This release optimizes the screen sharing function. You can enable this function in the following ways.
+
+- Call the `StartScreenCaptureByDisplayId` method before joining a channel, and then call `JoinChannel` [2/2] to join a channel and set `publishScreenTrack` or `publishSecondaryScreenTrack` to `true`.
+- Call the `StartScreenCaptureByDisplayId` method after joining a channel, and then call `UpdateChannelMediaOptions` to set `publishScreenTrack` or `publishSecondaryScreenTrack` to `true`.
+
+
+**5. Spatial audio**
+
+ > **Note**: This feature is in experimental status. To enable this feature, contact sales@agora.io. Contact technical support if needed.
+
+You can set the spatial audio for the remote user as follows:
+
+- Local Cartesian Coordinate System Calculation: This solution uses the `ILocalSpatialAudioEngine` class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call `updateSelfPosition` and `updateRemotePosition` to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.
+ ![img](https://web-cdn.agora.io/docs-files/1656645542473)
+
+You can also set the spatial audio for the media player as follows:
+
+- Local Cartesian Coordinate System Calculation: This solution uses the `ILocalSpatialAudioEngine` class to implement spatial audio. You need to call `updateSelfPosition` and `updatePlayerPositionInfo` to update the spatial coordinates of the local user and the media player, respectively, so that the local user can hear the spatial audio effect of the media player.
+ ![img](https://web-cdn.agora.io/docs-files/1656646829637)
+
+This release also adds the following features for spatial audio scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.
+
+- Sound insulation area: You can set a sound insulation area and a sound attenuation parameter by calling `setZones`. When the sound source (which can be a user or the media player) and the listener are on opposite sides of a sound insulation area, the listener experiences an attenuation effect similar to that of a sound encountering a building partition in the real environment. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling `setPlayerAttenuation` and `setRemoteAudioAttenuation`, and specify whether to use that setting to force an override of the sound attenuation parameter in `setZones`.
+- Doppler sound: You can enable Doppler sound by setting the `enable_doppler` parameter in `SpatialAudioParams`, and the receiver experiences noticeable tonal changes in the event of a high-speed relative displacement between the sound source and the receiver (such as in a racing game scenario).
+- Headphone equalizer: You can use a preset headphone equalization effect by calling the `setHeadphoneEQPreset` method to improve the listening experience with headphones.
+
+**6. Media Stream Encryption**
+
+This release adds support for media stream encryption, which encrypts your app’s audio and video streams with a unique key and salt controlled by the app developer. While not every use case requires media encryption, Agora provides the option to guarantee data confidentiality during transmission.
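+
+A hedged sketch of enabling built-in encryption before joining a channel; the key is a placeholder, and the 32-byte salt must come from your own server:
+
+```cpp
+EncryptionConfig config;
+config.encryptionMode = ENCRYPTION_MODE::AES_128_GCM2;
+config.encryptionKey = "your-32-byte-key"; // placeholder key managed by your app
+// Copy the 32-byte salt from your server into config.encryptionKdfSalt before enabling.
+agoraEngine->enableEncryption(true, config);
+```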
+
+
+**7. Media push**
+
+This release adds support for sending the audio and video of your channel to other RTMP servers through the CDN:
+- Start or stop publishing the stream at any time.
+- Add or remove an address while continuously publishing the stream.
+- Adjust the picture-in-picture layout.
+- Send a live stream to WeChat or Weibo.
+- Allow more people to watch the live stream when the number of audience members in the channel reaches the limit.
+
+
+**8. Brand-new AI noise reduction**
+
+The SDK supports a new version of AI noise reduction (in comparison to the basic AI noise reduction in v3.7.0). The new AI noise reduction has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option.
+> **Note**: To experience this feature, contact sales@agora.io.
+
+
+**9. Real-time chorus**
+
+This release gives real-time chorus the following abilities:
+
+- Two or more choruses are supported.
+- The singers are independent of each other. If one singer fails or quits the chorus, the other singers can continue to sing.
+- Very low latency experience. Each singer can hear each other in real time, and the audience can also hear each singer in real time.
+
+This release adds the `AUDIO_SCENARIO_CHORUS` enumeration in `AUDIO_SCENARIO_TYPE`. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.
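+
+A minimal sketch, assuming an existing `agoraEngine` instance:
+
+```cpp
+// Use the chorus scenario for ultra-low-latency, multi-singer interaction.
+agoraEngine->setAudioScenario(AUDIO_SCENARIO_TYPE::AUDIO_SCENARIO_CHORUS);
+```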
+
+**10. Extensions from the Agora extensions marketplace**
+
+In order to enhance the real-time audio and video interactive activities based on the Agora SDK, this release supports the one-stop solution for the extensions from the [Agora extensions marketplace](https://www.agora.io/en/marketplace):
+
+- Easy to integrate: The integration of modular functions can be achieved simply by calling an API, and the integration efficiency is improved by nearly 95%.
+- Extensibility design: The modular and extensible SDK design style endows the Agora SDK with good extensibility, which enables developers to quickly build real-time interactive apps based on the Agora extensions marketplace ecosystem.
+- Build an ecosystem: A community of real-time audio and video apps has developed that can accommodate a wide range of developers, offering a variety of extension combinations. After integrating the extensions, developers can build richer real-time interactive functions.
+- Become a vendor: Vendors can integrate their products with Agora SDK in the form of extensions, display and publish them in the Agora extensions marketplace, and build a real-time interactive ecosystem for developers together with Agora. For details on how to develop and publish extensions.
+
+**11. Enhanced channel management**
+
+To meet the channel management requirements of various business scenarios, this release adds the following functions to the `ChannelMediaOptions` structure:
+
+- Sets or switches the publishing of multiple audio and video sources.
+- Sets or switches channel profile and user role.
+- Sets or switches the stream type of the subscribed video.
+- Controls audio publishing delay.
+
+Set `ChannelMediaOptions` when calling `joinChannel` or `joinChannelEx` to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call `updateChannelMediaOptions` to update the settings in `ChannelMediaOptions` at any time, for example, to switch the published audio and video sources.
+
+
+**12. Subscription allowlists and blocklists**
+
+This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add the user IDs you want to subscribe to in your allowlist, or add the user IDs of the streams you do not wish to receive to your blocklist. You can experience this feature through the following APIs; in scenarios that involve multiple channels, you can call the corresponding methods in the `IRtcEngineEx` interface:
+
+- `SetSubscribeAudioBlacklist`: Sets the audio subscription blocklist.
+- `SetSubscribeAudioWhitelist`: Sets the audio subscription allowlist.
+- `SetSubscribeVideoBlacklist`: Sets the video subscription blocklist.
+- `SetSubscribeVideoWhitelist`: Sets the video subscription allowlist.
+
+If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.
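+
+For example, a hedged sketch of blocking the audio of one remote user (the user ID is hypothetical, and method-name casing follows the C++ core API):
+
+```cpp
+uid_t blockedUsers[] = { 12345 };
+agoraEngine->setSubscribeAudioBlacklist(blockedUsers, 1);
+```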
+
+
+**13. Replace video feeds with images**
+
+This release supports replacing video feeds with images when publishing video streams. You can call the `enableVideoImageSource` method to enable this function and choose your own images through the `options` parameter. If you disable this function, the remote users see the video feeds that you publish.
+
+**14. Local video mixing**
+
+This release adds a series of APIs supporting local video mixing functions. You can mix multiple video streams into one video stream locally. Common scenarios are as follows:
+
+- In interactive live streaming scenarios with cohosts or when using the Media Push function, you can merge the screens of multiple hosts into one view locally.
+- In scenarios where you capture multiple local video streams (for example, video captured by cameras, screen sharing streams, video files or pictures), you can merge them into one video stream and then publish the mixed video stream in the channel.
+
+You can call the `startLocalVideoTranscoder` method to start local video mixing and call the `stopLocalVideoTranscoder` method to stop local video mixing. After the local video mixing starts, you can call `updateLocalTranscoderConfiguration` to update the local video mixing configuration.
+
+**15. Video device management**
+
+Video capture devices can support multiple video formats, each supporting a different combination of video frame width, video frame height, and frame rate.
+
+This release adds the `numberOfCapabilities` and `getCapability` methods for getting the number of video formats supported by the video capture device and the details of the video frames in the specified video format. When calling the `startPrimaryCameraCapture` or `startSecondaryCameraCapture` method to capture video using the camera, you can use the specified video format.
+
+> **Note**: The SDK automatically selects the best video format for the video capture device based on your settings in `VideoEncoderConfiguration`, so normally you should not need to use these new methods.
+
+**16. In-ear monitoring**
+
+
+This release adds support for in-ear monitoring. You can call `enableInEarMonitoring` to enable the in-ear monitoring function.
+
+After successfully enabling the in-ear monitoring function, you can call `registerAudioFrameObserver` to register the audio observer, and the SDK triggers the `onEarMonitoringAudioFrame` callback to report the audio frame data. You can use your own audio effect processing module to pre-process the audio frame data of the in-ear monitoring to implement custom audio effects. Agora recommends that you choose one of the following two methods to set the audio data format of the in-ear monitoring:
+
+- Call the `setEarMonitoringAudioFrameParameters` method to set the audio data format of in-ear monitoring. The SDK calculates the sampling interval based on the parameters in this method, and triggers the `onEarMonitoringAudioFrame` callback based on the sampling interval.
+- Set the audio data format in the return value of the `getEarMonitoringAudioParams` callback. The SDK calculates the sampling interval based on the return value of the callback, and triggers the `onEarMonitoringAudioFrame` callback based on the sampling interval.
+
+> **Note**: To adjust the in-ear monitoring volume, you can call `setInEarMonitoringVolume`.
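+
+A hedged sketch of enabling in-ear monitoring and choosing the monitoring frame format (parameter values are illustrative):
+
+```cpp
+// Enable in-ear monitoring without extra audio filters.
+agoraEngine->enableInEarMonitoring(true, EAR_MONITORING_FILTER_NONE);
+// Ask the SDK to deliver 48 kHz stereo frames to onEarMonitoringAudioFrame.
+agoraEngine->setEarMonitoringAudioFrameParameters(48000, 2, RAW_AUDIO_FRAME_OP_MODE_READ_ONLY, 1024);
+// Adjust the in-ear monitoring volume (0-100).
+agoraEngine->setInEarMonitoringVolume(80);
+```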
+
+
+**17. Audio stream filter**
+
+This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and transports the three audio streams with the highest volume to the receivers by default. The number of audio streams to be transported can be adjusted; you can contact support@agora.io to adjust this number according to your scenarios.
+
+Meanwhile, Agora allows publishers to choose whether or not the audio streams they publish are filtered based on volume. Streams that are not filtered bypass this filter mechanism and are transported directly to the receivers. In scenarios with a large number of publishers, enabling this function helps reduce the bandwidth and device load for the receivers.
+
+
+> **Note**: To enable this function, contact technical support.
+
+**18. MPUDP (MultiPath UDP) (Beta)**
+
+As of this release, the SDK supports the MPUDP (MultiPath UDP) protocol, which enables you to connect and use multiple network paths to maximize the use of channel resources based on UDP. You can use different physical NICs on both mobile and desktop devices and aggregate them to effectively combat network jitter and improve transmission quality.
+
+> **Note**: To enable this function, contact sales@agora.io.
+
+
\ No newline at end of file
diff --git a/video-calling/develop/ai-noise-suppression.mdx b/video-calling/develop/ai-noise-suppression.mdx
new file mode 100644
index 000000000..479c47ffa
--- /dev/null
+++ b/video-calling/develop/ai-noise-suppression.mdx
@@ -0,0 +1,14 @@
+---
+title: 'AI Noise Suppression (beta)'
+sidebar_position: 16
+type: docs
+description: >
+ Suppress hundreds of types of noise and reduce distortion for human voice
+---
+
+import AIDenoiser from '@docs/shared/extensions-marketplace/ai-noise-suppression.mdx';
+
+export const toc = [{}];
+
+
+
diff --git a/video-calling/develop/virtual-background.mdx b/video-calling/develop/virtual-background.mdx
new file mode 100644
index 000000000..12d629731
--- /dev/null
+++ b/video-calling/develop/virtual-background.mdx
@@ -0,0 +1,13 @@
+---
+title: 'Virtual background (beta)'
+sidebar_position: 15
+type: docs
+description: >
+ Blur the background or replace it with a solid color or an image.
+---
+
+import VirtualBackground from '@docs/shared/extensions-marketplace/virtual-background.mdx';
+
+export const toc = [{}];
+
+
diff --git a/voice-calling/get-started/get-started-sdk.mdx b/voice-calling/get-started/get-started-sdk.mdx
index 234a3b7ed..75613dbbd 100644
--- a/voice-calling/get-started/get-started-sdk.mdx
+++ b/voice-calling/get-started/get-started-sdk.mdx
@@ -1,5 +1,6 @@
---
title: 'SDK quickstart'
+sidebar_position: 1
weight: 1
type: docs
description: >