[doc] daily update 2024-11-28 #2115

Closed
wants to merge 1 commit into from
30 changes: 15 additions & 15 deletions lib/src/agora_base.dart
@@ -692,7 +692,7 @@ enum QualityType {
@JsonValue(7)
qualityUnsupported,

- /// 8: Detecting the network quality.
+ /// 8: The last-mile network probe test is in progress.
@JsonValue(8)
qualityDetecting,
}
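For context, `qualityDetecting` is what `onLastmileQuality` reports while the probe started by `startLastmileProbeTest` is still running. The sketch below is a minimal example assuming the agora_rtc_engine 6.x API; the helper name and bitrate values are illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> probeLastmileQuality(RtcEngine engine) async {
  // qualityDetecting (8) means the last-mile probe test is still in progress.
  engine.registerEventHandler(RtcEngineEventHandler(
    onLastmileQuality: (QualityType quality) {
      if (quality == QualityType.qualityDetecting) {
        print('Last-mile probe still running...');
      } else {
        print('Last-mile quality: $quality');
      }
    },
  ));

  // Start the probe before joining a channel (illustrative bitrates).
  await engine.startLastmileProbeTest(const LastmileProbeConfig(
    probeUplink: true,
    probeDownlink: true,
    expectedUplinkBitrate: 100000,
    expectedDownlinkBitrate: 100000,
  ));
}
```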
@@ -930,7 +930,7 @@ extension OrientationModeExt on OrientationMode {
/// Video degradation preferences when the bandwidth is a constraint.
@JsonEnum(alwaysCreate: true)
enum DegradationPreference {
- /// 0: (Default) Prefers to reduce the video frame rate while maintaining video resolution during video encoding under limited bandwidth. This degradation preference is suitable for scenarios where video quality is prioritized.
+ /// 0: Prefers to reduce the video frame rate while maintaining video resolution during video encoding under limited bandwidth. This degradation preference is suitable for scenarios where video quality is prioritized. Deprecated: This enumerator is deprecated. Use other enumerations instead.
@JsonValue(0)
maintainQuality,

@@ -1623,7 +1623,7 @@ enum CompressionPreference {
@JsonValue(0)
preferLowLatency,

- /// 1: (Default) High quality preference. The SDK compresses video frames while maintaining video quality. This preference is suitable for scenarios where video quality is prioritized.
+ /// 1: High quality preference. The SDK compresses video frames while maintaining video quality. This preference is suitable for scenarios where video quality is prioritized.
@JsonValue(1)
preferQuality,
}
@@ -1859,7 +1859,7 @@ class VideoEncoderConfiguration {
@JsonKey(name: 'frameRate')
final int? frameRate;

- /// The encoding bitrate (Kbps) of the video. This parameter does not need to be set; keeping the default value standardBitrate is sufficient. The SDK automatically matches the most suitable bitrate based on the video resolution and frame rate you have set. For the correspondence between video resolution and frame rate, see. standardBitrate (0): (Recommended) Standard bitrate mode. compatibleBitrate (-1): Adaptive bitrate mode. In general, Agora suggests that you do not use this value.
+ /// The encoding bitrate (Kbps) of the video.. This parameter does not need to be set; keeping the default value standardBitrate is sufficient. The SDK automatically matches the most suitable bitrate based on the video resolution and frame rate you have set. For the correspondence between video resolution and frame rate, see. standardBitrate (0): (Recommended) Standard bitrate mode. compatibleBitrate (-1): Adaptive bitrate mode. In general, Agora suggests that you do not use this value.
@JsonKey(name: 'bitrate')
final int? bitrate;
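The fields touched in the hunks above all live in `VideoEncoderConfiguration`. Below is a minimal sketch of how they fit together, assuming the agora_rtc_engine 6.x API (including the `advanceOptions` field for the compression preference); the resolution and frame rate values are illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> configureEncoder(RtcEngine engine) async {
  await engine.setVideoEncoderConfiguration(const VideoEncoderConfiguration(
    dimensions: VideoDimensions(width: 1280, height: 720),
    frameRate: 15,
    // 0 (standardBitrate) lets the SDK pick a bitrate matching the
    // resolution/frame rate; usually there is no need to set this.
    bitrate: 0,
    // Keep the frame rate smooth; resolution may drop under weak networks.
    degradationPreference: DegradationPreference.maintainFramerate,
    // Compression preference is set through advanceOptions (assumed 6.x field).
    advanceOptions: AdvanceOptions(
      compressionPreference: CompressionPreference.preferQuality,
    ),
  ));
}
```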

@@ -2933,7 +2933,7 @@ enum LocalVideoStreamReason {
@JsonValue(8)
localVideoStreamReasonDeviceNotFound,

- /// 9: (macOS only) The video capture device currently in use is disconnected (such as being unplugged).
+ /// 9: (macOS and Windows only) The video capture device currently in use is disconnected (such as being unplugged).
@JsonValue(9)
localVideoStreamReasonDeviceDisconnected,

@@ -2974,7 +2974,7 @@ enum LocalVideoStreamReason {
@JsonValue(20)
localVideoStreamReasonScreenCaptureWindowNotSupported,

- /// 21: (Windows only) The screen has not captured any data available for window sharing.
+ /// 21: (Windows and Android only) The currently captured window has no data.
@JsonValue(21)
localVideoStreamReasonScreenCaptureFailure,
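Reason codes such as the two above arrive through `onLocalVideoStateChanged`. The sketch below assumes the 6.x event signature (source type, state, reason); the handler body is illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

void watchLocalVideoState(RtcEngine engine) {
  engine.registerEventHandler(RtcEngineEventHandler(
    onLocalVideoStateChanged: (VideoSourceType source,
        LocalVideoStreamState state, LocalVideoStreamReason reason) {
      if (reason ==
          LocalVideoStreamReason.localVideoStreamReasonDeviceDisconnected) {
        print('Capture device unplugged; prompt the user to reconnect it.');
      } else if (reason ==
          LocalVideoStreamReason.localVideoStreamReasonScreenCaptureFailure) {
        print('The captured window produced no data; restart screen sharing.');
      }
    },
  ));
}
```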

@@ -3980,7 +3980,7 @@ class LiveTranscoding {
@JsonKey(name: 'height')
final int? height;

- /// The encoding bitrate (Kbps) of the video. This parameter does not need to be set; keeping the default value standardBitrate is sufficient. The SDK automatically matches the most suitable bitrate based on the video resolution and frame rate you have set. For the correspondence between video resolution and frame rate, see.
+ /// The encoding bitrate (Kbps) of the video.. This parameter does not need to be set; keeping the default value standardBitrate is sufficient. The SDK automatically matches the most suitable bitrate based on the video resolution and frame rate you have set. For the correspondence between video resolution and frame rate, see.
@JsonKey(name: 'videoBitrate')
final int? videoBitrate;
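`videoBitrate` is part of the CDN transcoding settings. A minimal sketch follows, assuming the 6.x `startRtmpStreamWithTranscoding` API with named parameters; the RTMP URL, UID, and layout values are placeholders.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> startCdnStreaming(RtcEngine engine) async {
  final transcoding = LiveTranscoding(
    width: 640,
    height: 360,
    // Illustrative bitrate (Kbps); match it to the resolution and frame rate.
    videoBitrate: 400,
    transcodingUsers: const [
      TranscodingUser(uid: 12345, x: 0, y: 0, width: 640, height: 360),
    ],
    userCount: 1,
  );

  await engine.startRtmpStreamWithTranscoding(
    url: 'rtmp://example.com/live/stream-key', // placeholder URL
    transcoding: transcoding,
  );
}
```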

@@ -4893,11 +4893,11 @@ class VideoDenoiserOptions {
/// @nodoc
const VideoDenoiserOptions({this.mode, this.level});

- /// Video noise reduction mode.
+ /// Video noise reduction mode..
@JsonKey(name: 'mode')
final VideoDenoiserMode? mode;

- /// Video noise reduction level.
+ /// Video noise reduction level..
@JsonKey(name: 'level')
final VideoDenoiserLevel? level;

@@ -4934,18 +4934,18 @@ extension VideoDenoiserModeExt on VideoDenoiserMode {
}
}

- /// The video noise reduction level.
+ /// Video noise reduction level.
@JsonEnum(alwaysCreate: true)
enum VideoDenoiserLevel {
/// 0: (Default) Promotes video quality during video noise reduction. balances performance consumption and video noise reduction quality. The performance consumption is moderate, the video noise reduction speed is moderate, and the overall video quality is optimal.
@JsonValue(0)
videoDenoiserLevelHighQuality,

- /// 1: Promotes reducing performance consumption during video noise reduction. prioritizes reducing performance consumption over video noise reduction quality. The performance consumption is lower, and the video noise reduction speed is faster. To avoid a noticeable shadowing effect (shadows trailing behind moving objects) in the processed video, Agora recommends that you use this settinging when the camera is fixed.
+ /// 1: Promotes reducing performance consumption during video noise reduction. It prioritizes reducing performance consumption over video noise reduction quality. The performance consumption is lower, and the video noise reduction speed is faster. To avoid a noticeable shadowing effect (shadows trailing behind moving objects) in the processed video, Agora recommends that you use this setting when the camera is fixed.
@JsonValue(1)
videoDenoiserLevelFast,

- /// 2: Enhanced video noise reduction. prioritizes video noise reduction quality over reducing performance consumption. The performance consumption is higher, the video noise reduction speed is slower, and the video noise reduction quality is better. If videoDenoiserLevelHighQuality is not enough for your video noise reduction needs, you can use this enumerator.
+ /// @nodoc
@JsonValue(2)
videoDenoiserLevelStrength,
}
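A minimal sketch of enabling the denoiser with these options, assuming the 6.x `setVideoDenoiserOptions` method and enum member names:

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> enableDenoiser(RtcEngine engine) async {
  await engine.setVideoDenoiserOptions(
    enabled: true,
    options: const VideoDenoiserOptions(
      // Let the SDK decide when to apply noise reduction.
      mode: VideoDenoiserMode.videoDenoiserAuto,
      // Favor lower CPU cost; recommended when the camera is fixed.
      level: VideoDenoiserLevel.videoDenoiserLevelFast,
    ),
  );
}
```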
@@ -5006,7 +5006,7 @@ class VirtualBackgroundSource {
@JsonKey(name: 'source')
final String? source;

- /// The degree of blurring applied to the custom background image. This parameter takes effect only when the type of the custom background image is backgroundBlur.
+ /// The degree of blurring applied to the custom background image.. This parameter takes effect only when the type of the custom background image is backgroundBlur.
@JsonKey(name: 'blur_degree')
final BackgroundBlurDegree? blurDegree;
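A sketch of where `blurDegree` is applied, assuming the 6.x `enableVirtualBackground` signature (including the `segproperty` parameter):

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> blurBackground(RtcEngine engine) async {
  await engine.enableVirtualBackground(
    enabled: true,
    backgroundSource: const VirtualBackgroundSource(
      // blurDegree only takes effect with the backgroundBlur type.
      backgroundSourceType: BackgroundSourceType.backgroundBlur,
      blurDegree: BackgroundBlurDegree.blurDegreeHigh,
    ),
    segproperty: const SegmentationProperty(),
  );
}
```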

@@ -5988,7 +5988,7 @@ class ChannelMediaRelayConfiguration {

/// The information of the target channel ChannelMediaInfo. It contains the following members: channelName : The name of the target channel. token : The token for joining the target channel. It is generated with the channelName and uid you set in destInfos.
/// If you have not enabled the App Certificate, set this parameter as the default value NULL, which means the SDK applies the App ID.
- /// If you have enabled the App Certificate, you must use the token generated with the channelName and uid. If the token of any target channel expires, the whole media relay stops; hence Agora recommends that you specify the same expiration time for the tokens of all the target channels. uid : The unique user ID to identify the relay stream in the target channel. The value ranges from 0 to (2 32 -1). To avoid user ID conflicts, this user ID must be different from any other user ID in the target channel. The default value is 0, which means the SDK generates a random user ID.
+ /// If you have enabled the App Certificate, you must use the token generated with the channelName and uid. If the token of any target channel expires, the whole media relay stops; hence Agora recommends that you specify the same expiration time for the tokens of all the target channels. uid : The unique user ID to identify the relay stream in the target channel. The value ranges from 0 to (2 32 -1). To avoid user ID conflicts, this user ID must be different from any other user ID in the target channel. The default value is 0, which means the SDK generates a random UID.
@JsonKey(name: 'destInfos')
final List<ChannelMediaInfo>? destInfos;
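A sketch of filling `destInfos` for a single target channel, assuming the 6.x relay API (`startOrUpdateChannelMediaRelay`); channel names and tokens are placeholders.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> relayToTargetChannel(RtcEngine engine) async {
  final config = ChannelMediaRelayConfiguration(
    // uid 0 in srcInfo lets the SDK pick a random UID for the relay stream.
    srcInfo: const ChannelMediaInfo(
        channelName: 'source-channel', token: '<source token>', uid: 0),
    destInfos: const [
      ChannelMediaInfo(
          channelName: 'target-channel', token: '<target token>', uid: 0),
    ],
    destCount: 1,
  );

  await engine.startOrUpdateChannelMediaRelay(config);
}
```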

@@ -6518,7 +6518,7 @@ class ScreenVideoParameters {
@JsonKey(name: 'bitrate')
final int? bitrate;

- /// The content hint for screen sharing.
+ /// The content hint for screen sharing..
@JsonKey(name: 'contentHint')
final VideoContentHint? contentHint;
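A sketch of passing a content hint when starting screen capture, assuming the 6.x `startScreenCapture(ScreenCaptureParameters2 ...)` API; dimensions and frame rate are illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> shareScreenForDocuments(RtcEngine engine) async {
  await engine.startScreenCapture(const ScreenCaptureParameters2(
    captureAudio: false,
    captureVideo: true,
    videoParams: ScreenVideoParameters(
      dimensions: VideoDimensions(width: 1280, height: 720),
      frameRate: 15,
      // Hint that the shared content is mostly static text/details.
      contentHint: VideoContentHint.contentHintDetails,
    ),
  ));
}
```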

14 changes: 8 additions & 6 deletions lib/src/agora_media_base.dart
@@ -632,11 +632,11 @@ extension VideoPixelFormatExt on VideoPixelFormat {
/// Video display modes.
@JsonEnum(alwaysCreate: true)
enum RenderModeType {
- /// 1: Hidden mode. Uniformly scale the video until one of its dimension fits the boundary (zoomed to fit). One dimension of the video may have clipped contents.
+ /// 1: Hidden mode. The priority is to fill the window. Any excess video that does not match the window size will be cropped.
@JsonValue(1)
renderModeHidden,

- /// 2: Fit mode. Uniformly scale the video until one of its dimension fits the boundary (zoomed to fit). Areas that are not filled due to disparity in the aspect ratio are filled with black.
+ /// 2: Fit mode. The priority is to ensure that all video content is displayed. Any areas of the window that are not filled due to the mismatch between video size and window size will be filled with black.
@JsonValue(2)
renderModeFit,
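The render mode is normally chosen per view through `VideoCanvas`. The sketch below assumes the 6.x `AgoraVideoView`/`VideoViewController` widgets; the helper is illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';
import 'package:flutter/widgets.dart';

Widget localPreview(RtcEngine engine) {
  return AgoraVideoView(
    controller: VideoViewController(
      rtcEngine: engine,
      // Fit mode shows the whole frame and fills the leftover area with black.
      canvas: const VideoCanvas(
          uid: 0, renderMode: RenderModeType.renderModeFit),
    ),
  );
}
```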

@@ -970,15 +970,17 @@ class VideoFrame {
@JsonKey(name: 'matrix')
final List<double>? matrix;

- /// The alpha channel data output by using portrait segmentation algorithm. This data matches the size of the video frame, with each pixel value ranging from [0,255], where 0 represents the background and 255 represents the foreground (portrait). By setting this parameter, you can render the video background into various effects, such as transparent, solid color, image, video, etc. In custom video rendering scenarios, ensure that both the video frame and alphaBuffer are of the Full Range type; other types may cause abnormal alpha data rendering.
+ /// The alpha channel data output by using portrait segmentation algorithm. This data matches the size of the video frame, with each pixel value ranging from [0,255], where 0 represents the background and 255 represents the foreground (portrait). By setting this parameter, you can render the video background into various effects, such as transparent, solid color, image, video, etc.
+ /// In custom video rendering scenarios, ensure that both the video frame and alphaBuffer are of the Full Range type; other types may cause abnormal alpha data rendering.
+ /// Make sure that alphaBuffer is exactly the same size as the video frame (width × height), otherwise it may cause the app to crash.
@JsonKey(name: 'alphaBuffer', ignore: true)
final Uint8List? alphaBuffer;

/// @nodoc
@JsonKey(name: 'pixelBuffer', ignore: true)
final Uint8List? pixelBuffer;

- /// The meta information in the video frame. To use this parameter, please contact.
+ /// The meta information in the video frame. To use this parameter, contact.
@VideoFrameMetaInfoConverter()
@JsonKey(name: 'metaInfo')
final VideoFrameMetaInfo? metaInfo;
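As a sketch of the size check called out above, a raw-frame callback might validate `alphaBuffer` before compositing; the callback signature is assumed from the 6.x `VideoFrameObserver`.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

// Validates the alpha channel before rendering a custom background.
final videoFrameObserver = VideoFrameObserver(
  onCaptureVideoFrame: (VideoSourceType sourceType, VideoFrame frame) {
    final alpha = frame.alphaBuffer;
    final width = frame.width ?? 0;
    final height = frame.height ?? 0;
    // alphaBuffer must be exactly width * height bytes, or rendering may crash.
    if (alpha != null && alpha.length == width * height) {
      // 0 = background, 255 = foreground (portrait); composite as needed.
    }
  },
);
```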
@@ -1068,7 +1070,7 @@ class AudioPcmFrameSink {
///
/// After registering the audio frame observer, the callback occurs every time the player receives an audio frame, reporting the detailed information of the audio frame.
///
- /// * [frame] The audio frame information. See AudioPcmFrame.
+ /// * [frame] The audio frame information.. See AudioPcmFrame.
final void Function(AudioPcmFrame frame)? onFrame;
}
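A minimal sketch of constructing the sink; the `AudioPcmFrame` fields read here (`samplesPerChannel`, `sampleRateHz`) are assumed from the 6.x API, and the sink still has to be registered as the media player's audio frame observer.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

// Receives every PCM frame decoded by the media player once the sink is
// registered as the player's audio frame observer.
final pcmSink = AudioPcmFrameSink(
  onFrame: (AudioPcmFrame frame) {
    print('PCM frame: ${frame.samplesPerChannel} samples/channel '
        'at ${frame.sampleRateHz} Hz');
  },
);
```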

@@ -1393,7 +1395,7 @@ class AudioSpectrumObserver {
///
/// After successfully calling registerAudioSpectrumObserver to implement the onRemoteAudioSpectrum callback in the AudioSpectrumObserver and calling enableAudioSpectrumMonitor to enable audio spectrum monitoring, the SDK will trigger the callback as the time interval you set to report the received remote audio data spectrum.
///
- /// * [spectrums] The audio spectrum information of the remote user, see UserAudioSpectrumInfo. The number of arrays is the number of remote users monitored by the SDK. If the array is null, it means that no audio spectrum of remote users is detected.
+ /// * [spectrums] The audio spectrum information of the remote user. See UserAudioSpectrumInfo. The number of arrays is the number of remote users monitored by the SDK. If the array is null, it means that no audio spectrum of remote users is detected.
/// * [spectrumNumber] The number of remote users.
final void Function(
List<UserAudioSpectrumInfo> spectrums, int spectrumNumber)?
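A sketch of how the remote-spectrum callback above is wired up, assuming the 6.x `registerAudioSpectrumObserver` and `enableAudioSpectrumMonitor` methods; the 100 ms interval is illustrative.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> monitorRemoteSpectrum(RtcEngine engine) async {
  engine.registerAudioSpectrumObserver(AudioSpectrumObserver(
    onRemoteAudioSpectrum:
        (List<UserAudioSpectrumInfo> spectrums, int spectrumNumber) {
      // One entry per monitored remote user; empty when nothing is detected.
      print('Received spectra for $spectrumNumber remote user(s).');
    },
  ));

  // Report spectrum data every 100 ms (illustrative interval).
  await engine.enableAudioSpectrumMonitor(intervalInMS: 100);
}
```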
17 changes: 2 additions & 15 deletions lib/src/agora_media_engine.dart
@@ -48,13 +48,7 @@ abstract class MediaEngine {

/// Registers a raw video frame observer object.
///
- /// If you want to obtain the original video data of some remote users (referred to as group A) and the encoded video data of other remote users (referred to as group B), you can refer to the following steps:
- /// Call registerVideoFrameObserver to register the raw video frame observer before joining the channel.
- /// Call registerVideoEncodedFrameObserver to register the encoded video frame observer before joining the channel.
- /// After joining the channel, get the user IDs of group B users through onUserJoined, and then call setRemoteVideoSubscriptionOptions to set the encodedFrameOnly of this group of users to true.
- /// Call muteAllRemoteVideoStreams (false) to start receiving the video streams of all remote users. Then:
- /// The raw video data of group A users can be obtained through the callback in VideoFrameObserver, and the SDK renders the data by default.
- /// The encoded video data of group B users can be obtained through the callback in VideoEncodedFrameObserver. If you want to observe raw video frames (such as YUV or RGBA format), Agora recommends that you implement one VideoFrameObserver class with this method. When calling this method to register a video observer, you can register callbacks in the VideoFrameObserver class as needed. After you successfully register the video frame observer, the SDK triggers the registered callbacks each time a video frame is received.
+ /// If you want to observe raw video frames (such as YUV or RGBA format), Agora recommends that you implement one VideoFrameObserver class with this method. When calling this method to register a video observer, you can register callbacks in the VideoFrameObserver class as needed. After you successfully register the video frame observer, the SDK triggers the registered callbacks each time a video frame is received.
///
/// * [observer] The observer instance. See VideoFrameObserver.
///
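A minimal registration sketch assuming the 6.x `MediaEngine` accessor (`engine.getMediaEngine()`); as the docs note, register the observer before joining the channel.

```dart
import 'package:agora_rtc_engine/agora_rtc_engine.dart';

void observeRawFrames(RtcEngine engine) {
  final mediaEngine = engine.getMediaEngine();

  // Register before joining the channel; callbacks fire for every video frame.
  mediaEngine.registerVideoFrameObserver(VideoFrameObserver(
    onCaptureVideoFrame: (VideoSourceType sourceType, VideoFrame frame) {
      // Inspect or modify the local raw frame (e.g., YUV/RGBA) here.
    },
    onRenderVideoFrame: (String channelId, int remoteUid, VideoFrame frame) {
      // Inspect remote users' raw frames before they are rendered.
    },
  ));
}
```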
@@ -65,14 +59,7 @@ abstract class MediaEngine {

/// Registers a receiver object for the encoded video image.
///
- /// If you only want to observe encoded video frames (such as h.264 format) without decoding and rendering the video, Agora recommends that you implement one VideoEncodedFrameObserver class through this method. If you want to obtain the original video data of some remote users (referred to as group A) and the encoded video data of other remote users (referred to as group B), you can refer to the following steps:
- /// Call registerVideoFrameObserver to register the raw video frame observer before joining the channel.
- /// Call registerVideoEncodedFrameObserver to register the encoded video frame observer before joining the channel.
- /// After joining the channel, get the user IDs of group B users through onUserJoined, and then call setRemoteVideoSubscriptionOptions to set the encodedFrameOnly of this group of users to true.
- /// Call muteAllRemoteVideoStreams (false) to start receiving the video streams of all remote users. Then:
- /// The raw video data of group A users can be obtained through the callback in VideoFrameObserver, and the SDK renders the data by default.
- /// The encoded video data of group B users can be obtained through the callback in VideoEncodedFrameObserver.
- /// Call this method before joining a channel.
+ /// If you only want to observe encoded video frames (such as H.264 format) without decoding and rendering the video, Agora recommends that you implement one VideoEncodedFrameObserver class through this method. Call this method before joining a channel.
///
/// * [observer] The video frame observer object. See VideoEncodedFrameObserver.
///
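A sketch of receiving encoded frames only, assuming the 6.x observer callback and `setRemoteVideoSubscriptionOptions` API; the remote UID is a placeholder.

```dart
import 'dart:typed_data';

import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> observeEncodedFrames(RtcEngine engine, int remoteUid) async {
  // Register before joining the channel.
  engine.getMediaEngine().registerVideoEncodedFrameObserver(
    VideoEncodedFrameObserver(
      onEncodedVideoFrameReceived: (int uid, Uint8List imageBuffer, int length,
          EncodedVideoFrameInfo videoEncodedFrameInfo) {
        // Handle the encoded (e.g., H.264) frame without decoding/rendering.
      },
    ),
  );

  // After onUserJoined reports this user, receive only their encoded stream.
  await engine.setRemoteVideoSubscriptionOptions(
    uid: remoteUid,
    options: const VideoSubscriptionOptions(encodedFrameOnly: true),
  );
}
```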