Class ITRTCCloud
Module: ITRTCCloud @ TXLiteAVSDK
SDK VERSION 6.0
Function: TRTC main API classes
Terms[1]: primary stream - the channel that carries camera video.
Terms[2]: substream - the channel that carries screen sharing or VOD playback video.
Terms[3]: VOD play - TRTC for Windows can stream a local video file; this feature is known as VOD play.
Namespace: trtc
Assembly: cs.temp.dll.dll
Syntax
public abstract class ITRTCCloud
Examples
Sample code for creating, using, and terminating an ITRTCCloud object:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
if (trtcCloud != null)
{
    string version = trtcCloud.getSDKVersion();
}
Release the ITRTCCloud singleton when the object is no longer needed or before the process exits:
ITRTCCloud.destroyTRTCShareInstance();
trtcCloud = null;
Methods
addCallback(ITRTCCloudCallback)
Set ITRTCCloudCallback
You can use `ITRTCCloudCallback` to receive different status notifications from the SDK. For details, please see the definitions in `TRTCCloudCallback.cs`.
Declaration
public abstract void addCallback(ITRTCCloudCallback callback)
Parameters
Type | Name | Description |
---|---|---|
ITRTCCloudCallback | callback | Event callback |
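For instance, a minimal registration sketch; `MyTRTCCallback` is a hypothetical class of your own that implements `ITRTCCloudCallback` (see `TRTCCloudCallback.cs` for the members to implement):
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
ITRTCCloudCallback myCallback = new MyTRTCCallback();   // MyTRTCCallback: your own ITRTCCloudCallback implementation
trtcCloud.addCallback(myCallback);
// ... later, when the notifications are no longer needed ...
trtcCloud.removeCallback(myCallback);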
callExperimentalAPI(String)
Call experimental APIs
Declaration
public abstract void callExperimentalAPI(string jsonStr)
Parameters
Type | Name | Description |
---|---|---|
System.String | jsonStr | JSON-string API name and parameter description |
Remarks
This API is used to call experimental APIs.
connectOtherRoom(String)
Request a cross-room call (anchor competition)
The cross-room call feature allows two anchors from different rooms to call and compete with each other without having to exit their own rooms.
Declaration
public abstract void connectOtherRoom(string jsonParams)
Parameters
Type | Name | Description |
---|---|---|
System.String | jsonParams | JSON string parameters, which must contain at least the `roomId` and `userId` fields (see Remarks) |
Remarks
You can use a JSON library such as LitJson (which provides `JsonData` and `JsonMapper`) to build the parameter string:
JsonData jsonObj = new JsonData();
jsonObj["roomId"] = 1908;
jsonObj["userId"] = "345";
string jsonData = JsonMapper.ToJson(jsonObj);
mTRTCCloud.connectOtherRoom(jsonData);
Examples
For example, after anchor A in room 001 uses `connectOtherRoom()` to successfully call anchor B in room 002, all users in room 001 will receive the `onUserEnter(B)` and `onUserVideoAvailable(B,true)` callbacks, and all users in room 002 will receive the `onUserEnter(A)` and `onUserVideoAvailable(A,true)` callbacks.
In essence, a cross-room call is sharing the audio and video of two anchors in different rooms to each other so that the audiences in both rooms can see two anchors.
 | Room 001 | Room 002 |
---|---|---|
Before cross-room call | Anchor A; audience U, V, and W | Anchor B; audience X, Y, and Z |
After cross-room call | Anchors A and B; audience U, V, and W | Anchors B and A; audience X, Y, and Z |
To ensure the compatibility of extended parameters for the cross-room call API, the JSON format is used for the parameters, which must contain at least two fields:
`roomId`: If anchor A in room 001 wants to call anchor B in room 002, he or she must set `roomId` to `002` when calling `connectOtherRoom()`.
`userId`: If anchor A in room 001 wants to call anchor B in room 002, he or she must set `userId` to the user ID of anchor B when calling `connectOtherRoom()`.
The result is returned through the `onConnectOtherRoom()` callback in `ITRTCCloudCallback`.
destroyTRTCShareInstance()
Release an ITRTCCloud
singleton object
Declaration
public static void destroyTRTCShareInstance()
disconnectOtherRoom()
End a cross-room call
The result is returned through the `onDisconnectOtherRoom()` callback in `ITRTCCloudCallback`.
Declaration
public abstract void disconnectOtherRoom()
enableAudioVolumeEvaluation(UInt32)
Enable/Disable the volume reminder
After this feature is enabled, the SDK will return its evaluation of the volume of each stream via `onUserVoiceVolume()`.
The volume bar in our demo is implemented using this API.
To enable the volume reminder, please call this API before `startLocalAudio()`.
Declaration
public abstract void enableAudioVolumeEvaluation(uint interval)
Parameters
Type | Name | Description |
---|---|---|
System.UInt32 | interval | Interval (ms) at which the `onUserVoiceVolume()` callback is triggered |
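For example, a small sketch of enabling the volume reminder before starting audio capturing; the 300 ms interval is an illustrative choice, and `quality` stands for whichever `TRTCAudioQuality` value you use:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.enableAudioVolumeEvaluation(300);   // request onUserVoiceVolume() callbacks roughly every 300 ms
trtcCloud.startLocalAudio(quality);           // quality: the TRTCAudioQuality value you want to use
// Volume levels now arrive via onUserVoiceVolume() in ITRTCCloudCallback.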
enableCustomAudioCapture(Boolean)
Enable/Disable custom audio capturing (not supported on Android)
After custom audio capturing is enabled, the SDK will skip audio capturing but will continue to encode and send audio data.
You need to keep feeding custom audio data to the SDK using `sendCustomAudioData()`.
Declaration
public abstract void enableCustomAudioCapture(bool enable)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | enable | Whether to enable custom audio capturing. Default value: `false` |
enableCustomVideoCapture(Boolean)
Enable/Disable custom video capturing
After custom video capturing is enabled, the SDK will skip video capturing but will continue to encode and send video data.
You need to keep feeding custom video data to the SDK using `sendCustomVideoData()`.
Declaration
public abstract void enableCustomVideoCapture(bool enable)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | enable | Whether to enable custom video capturing. Default value: `false` |
enableSmallVideoStream(Boolean, ref TRTCVideoEncParam)
Enable/Disable dual-channel (big and small images) encoding
Declaration
public abstract void enableSmallVideoStream(bool enable, ref TRTCVideoEncParam smallVideoParam)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | enable | Whether to enable dual-channel encoding. Default value: `false` |
TRTCVideoEncParam | smallVideoParam | Parameters of the small image |
Remarks
You can enable the dual-channel encoding mode for users in the major role (such as anchor, teacher, or host) and using PC or Mac. In this mode, a user will output one audio stream and two video streams: an **HD** stream and a **smooth** stream. This mode consumes more bandwidth and CPU computing resources.
For the audience in the room:
Select the **HD** stream to watch if the user’s downstream network conditions are good.
Select the **smooth** stream to watch if the user’s downstream network conditions are poor.
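A hedged sketch of turning on dual-channel encoding; the small-image parameters must be filled in using the fields defined in `TRTCVideoEncParam` (field names are not shown here because they are version-specific):
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
TRTCVideoEncParam smallVideoParam = new TRTCVideoEncParam();
// Fill in a low resolution, frame rate, and bitrate for the "smooth" stream here,
// using the fields defined in TRTCVideoEncParam.
trtcCloud.enableSmallVideoStream(true, ref smallVideoParam);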
enterRoom(ref TRTCParams, TRTCAppScene)
1.1 Enter a room
You will receive the `onEnterRoom(result)` callback in `ITRTCCloudCallback`:
- If room entry succeeds, `result` will be a positive integer (`result` > 0), which indicates the time (ms) it takes to enter the room.
- If room entry fails, `result` will be a negative integer (`result` < 0), which represents the error code.
For more information on room entry error codes, please see [Error Codes](https://cloud.tencent.com/document/product/647/32257).
TRTCAppSceneVideoCall:
Video calls support 720p and 1080p HD video. Each room allows up to 300 concurrent users, and up to 50 of them can speak simultaneously.
Use cases: [one-to-one video call], [video conferencing with up to 300 participants], [online medical consultation], [video chat], [video interview], etc.
TRTCAppSceneAudioCall:
Audio calls support 48 kHz stereo audio. Each room allows up to 300 concurrent users, and up to 50 of them can speak simultaneously.
Use cases: [one-to-one audio call], [audio conferencing with up to 300 participants], [online Werewolf playing], [audio chat room], etc.
TRTCAppSceneLIVE:
Interactive video streaming allows smooth mic on/off without waiting. Anchor latency is lower than 300 ms and playback latency lower than 1,000 ms. Up to 100,000 users can play the anchor’s video at the same time.
Use cases: [interactive classroom], [interactive live streaming], [video dating], [remote training], [large-scale conferencing], etc.
TRTCAppSceneVoiceChatRoom:
Interactive audio streaming allows smooth mic on/off without waiting. Anchor latency is lower than 300 ms and playback latency lower than 1,000 ms. Up to 100,000 users can play the anchor’s audio at the same time.
Use cases: [audio chat room], [karaoke], [FM radio], etc.
Declaration
public abstract void enterRoom(ref TRTCParams param, TRTCAppScene scene)
Parameters
Type | Name | Description |
---|---|---|
TRTCParams | param | Room entry parameters. For details, please see the definition of `TRTCParams` |
TRTCAppScene | scene | Application scenario. TRTC currently supports four scenarios: video call (`TRTCAppSceneVideoCall`), audio call (`TRTCAppSceneAudioCall`), interactive video streaming (`TRTCAppSceneLIVE`), and interactive audio streaming (`TRTCAppSceneVoiceChatRoom`) |
Remarks
- If `scene` is set to `TRTCAppSceneLIVE` or `TRTCAppSceneVoiceChatRoom`, you must select a role for the current user by specifying the `role` field in `TRTCParams`.
- After you call `enterRoom`, regardless of whether room entry succeeds, you must call `exitRoom` before calling `enterRoom` again; otherwise, an unexpected error will occur.
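As an illustration, a hedged room entry sketch for a video call; the `TRTCParams` field names below (`sdkAppId`, `userId`, `userSig`, `roomId`, `role`) are assumed from the fields referenced elsewhere in this document, so verify them against the actual `TRTCParams` definition:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
TRTCParams trtcParams = new TRTCParams();
trtcParams.sdkAppId = 1400000000;        // placeholder application ID
trtcParams.userId = "user_001";
trtcParams.userSig = "your_user_sig";    // generated by your server
trtcParams.roomId = 1908;
// trtcParams.role is required only for TRTCAppSceneLIVE / TRTCAppSceneVoiceChatRoom.
trtcCloud.enterRoom(ref trtcParams, TRTCAppScene.TRTCAppSceneVideoCall);
// The result arrives via onEnterRoom(result) in ITRTCCloudCallback.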
exitRoom()
1.2 Leave a room
Calling `exitRoom()` will trigger the execution of room exit-related logic, including releasing resources such as audio/video devices and codecs.
After all the resources are released, the SDK will send you the `onExitRoom()` callback in `ITRTCCloudCallback`.
If you need to call `enterRoom()` again or switch to another audio/video SDK, please wait until you receive the `onExitRoom()` callback.
Otherwise, you may encounter problems such as the camera or mic being occupied.
Declaration
public abstract void exitRoom()
getAudioCaptureVolume()
Get the capturing volume of the SDK
Declaration
public abstract int getAudioCaptureVolume()
Returns
Type | Description |
---|---|
System.Int32 | Capturing volume of the SDK |
getAudioEffectManager()
Get the audio effect management class (supported on Android, iOS, and macOS)
Declaration
public abstract ITXAudioEffectManager getAudioEffectManager()
Returns
Type | Description |
---|---|
ITXAudioEffectManager | Audio effect management class |
Remarks
This is the SDK’s audio effect management module. It supports the following features:
In-ear monitoring: play the audio captured by the mic through headphones in real time
Reverb effect: add reverb effects such as karaoke, room, hall, deep, and resonant
Voice changing effect: add voice changing effects such as little girl, middle-aged man, metal, and punk
Background music: play online or local music (support speed and pitch adjustment, playback with and without vocals, as well as looping)
Short audio effect: add short audio effects such as applause and laughter. For files shorter than 10 seconds, set the `isShortFile` parameter to `YES`.
getAudioPlayoutVolume()
Get the playback volume of the SDK
Declaration
public abstract int getAudioPlayoutVolume()
Returns
Type | Description |
---|---|
System.Int32 | Playback volume of the SDK |
getDeviceManager()
Get the device management module (supported on Android and iOS)
Declaration
public abstract ITXDeviceManager getDeviceManager()
Returns
Type | Description |
---|---|
ITXDeviceManager | Device management class |
getScreenCaptureSources()
9.5 Enumerate shareable sources (preferably called before `startScreenCapture`)
You can use this API to get the ID, type, and name of shareable sources.
Declaration
public abstract TRTCScreenCaptureSourceInfo[] getScreenCaptureSources()
Returns
Type | Description |
---|---|
TRTCScreenCaptureSourceInfo[] | Only screen-type sources are returned currently. |
Remarks
Information of shareable sources
getSDKVersion()
Get the SDK version
Declaration
public abstract string getSDKVersion()
Returns
Type | Description |
---|---|
System.String | UTF-8-encoded version number |
getTRTCShareInstance()
Get an ITRTCCloud
singleton object
Declaration
public static ITRTCCloud getTRTCShareInstance()
Returns
Type | Description |
---|---|
ITRTCCloud | An `ITRTCCloud` singleton object |
GetVideoRenderData(String, ref Int32, ref Int32, ref Int32, ref Int32, Boolean)
Declaration
public abstract IntPtr GetVideoRenderData(string userId, ref int rotation, ref int width, ref int height, ref int length, bool isNeedDestroy)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | |
System.Int32 | rotation | |
System.Int32 | width | |
System.Int32 | height | |
System.Int32 | length | |
System.Boolean | isNeedDestroy |
Returns
Type | Description |
---|---|
IntPtr |
muteAllRemoteAudio(Boolean)
4.6 Mute/Unmute all users' audio
Declaration
public abstract void muteAllRemoteAudio(bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | mute | `true`: stop receiving and playing all remote users’ audio; `false`: resume |
Remarks
If `mute` is `true`, the SDK will stop receiving and playing all remote users’ audio; if `mute` is `false`, it will start receiving and playing all remote users’ audio.
muteAllRemoteVideoStreams(Boolean)
Pause/Resume receiving the videos of all remote users
Declaration
public abstract void muteAllRemoteVideoStreams(bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | mute | Whether to pause receiving video |
muteLocalAudio(Boolean)
4.4 Mute/Unmute local audio
After local audio is muted, other users in the room will receive the `onUserAudioAvailable(userId, false)` callback.
After local audio is unmuted, other users in the room will receive the `onUserAudioAvailable(userId, true)` callback.
Different from `stopLocalAudio`, `muteLocalAudio(true)` does not stop the sending of audio or video data. Data packets continue to be sent, although without audio and at extremely low bitrate.
As MP4 and other video formats have high requirements on audio continuity, `stopLocalAudio` may make an MP4 recording file fail to be played smoothly.
`muteLocalAudio` has less impact on the compatibility of MP4 recording files. Therefore, you are advised to use `muteLocalAudio` in scenarios with high requirements on recording quality.
Declaration
public abstract void muteLocalAudio(bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | mute | `true`: mute local audio; `false`: unmute local audio |
muteLocalVideo(Boolean)
3.3 Pause/Resume publishing local video
After a user pauses publishing local video, other users in the room will receive the `onUserVideoAvailable(userId, false)` callback.
After a user resumes publishing local video, other users in the room will receive the `onUserVideoAvailable(userId, true)` callback.
Declaration
public abstract void muteLocalVideo(bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | mute | `true`: pause publishing local video; `false`: resume publishing local video |
muteRemoteAudio(String, Boolean)
4.5 Mute/Unmute a remote user's audio
Declaration
public abstract void muteRemoteAudio(string userId, bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | ID of the remote user |
System.Boolean | mute | `true`: stop receiving and playing the user’s audio; `false`: resume |
Remarks
If `mute` is `true`, the SDK will stop receiving and playing the remote user’s audio; if `mute` is `false`, it will start receiving and playing the remote user’s audio.
muteRemoteVideoStream(String, Boolean)
Pause/Resume receiving the video of a remote user
This API will pause/resume receiving the video of the specified remote user, but will not release the resources used to display the video. After pause, the last frame of video before the pause will be displayed.
Declaration
public abstract void muteRemoteVideoStream(string userId, bool mute)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | User ID of the remote user |
System.Boolean | mute | Whether to pause receiving video |
pauseScreenCapture()
9.3 Pause screen sharing
Declaration
public abstract void pauseScreenCapture()
removeCallback(ITRTCCloudCallback)
Remove event callbacks
Declaration
public abstract void removeCallback(ITRTCCloudCallback callback)
Parameters
Type | Name | Description |
---|---|---|
ITRTCCloudCallback | callback | Event callback |
resumeScreenCapture()
9.4 Resume screen sharing
Declaration
public abstract void resumeScreenCapture()
selectScreenCaptureTarget(TRTCScreenCaptureSourceInfo, Rect, TRTCScreenCaptureProperty)
9.6 Set screen sharing parameters (can be called during screen sharing)
During screen sharing, you can call this API to switch the screen to share. You don’t need to start screen sharing again.
You can:
-Share an entire screen: select a source whose `type` is `Screen` from `sourceInfoList` and set `captureRect` to `{0, 0, 0, 0}`
-Share a portion of a screen: select a source whose `type` is `Screen` from `sourceInfoList` and set `captureRect` to a non-empty rectangle, e.g., `{100, 100, 300, 300}`
Declaration
public abstract void selectScreenCaptureTarget(TRTCScreenCaptureSourceInfo source, Rect captureRect, TRTCScreenCaptureProperty property)
Parameters
Type | Name | Description |
---|---|---|
TRTCScreenCaptureSourceInfo | source | The source to share |
Rect | captureRect | The portion of the screen to share |
TRTCScreenCaptureProperty | property | Screen sharing properties, such as whether to enable mouse cursor capturing or show a bright border around the shared content. For details, please see the definition of `TRTCScreenCaptureProperty` |
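Putting the screen sharing APIs together, a hedged sketch of sharing an entire screen via the substream; the default-constructed `Rect` (all fields 0, i.e., the `{0, 0, 0, 0}` case above) and the default `TRTCScreenCaptureProperty` are assumptions:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
// 1. Enumerate shareable sources; each entry carries the source ID, type, and name.
TRTCScreenCaptureSourceInfo[] sources = trtcCloud.getScreenCaptureSources();
if (sources != null && sources.Length > 0)
{
    TRTCScreenCaptureSourceInfo screenSource = sources[0];                 // pick the screen you want to share
    Rect fullScreen = new Rect();                                          // all fields left at 0 => share the entire screen
    TRTCScreenCaptureProperty property = new TRTCScreenCaptureProperty();  // default capture properties (assumed defaults)
    trtcCloud.selectScreenCaptureTarget(screenSource, fullScreen, property);
    // 2. Start sharing on the substream with the desired encoding parameters.
    TRTCVideoEncParam encParam = new TRTCVideoEncParam();                  // fill in resolution/FPS/bitrate as needed
    trtcCloud.startScreenCapture(TRTCVideoStreamType.TRTCVideoStreamTypeSub, ref encParam);
}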
sendCustomAudioData(TRTCAudioFrame)
Send custom audio data to the SDK (not supported on Android)
We recommend the following settings for the parameters of `TRTCAudioFrame` (other parameters can be left empty).
-audioFormat: only `LiteAVAudioFrameFormatPCM` is supported.
-data: audio frame buffer
-length: audio frame size. We recommend a frame duration of 20 ms. Assuming PCM format, a 48000 Hz sample rate, and mono audio, the size of one audio frame is 48000 × 0.02 s × 1 × 16 bits = 15360 bits = 1920 bytes.
-sampleRate: sample rate, which can only be `48000`
-channel: number of sound channels. Valid values: `1`: mono-channel; `2`: dual-channel. If dual channels are used, data will be interleaved.
-timestamp: keep timestamps evenly spaced; otherwise, an audio-to-video sync error will occur and the quality of MP4 recording files will be severely compromised.
For more information, please see [Custom Capturing and Rendering](https://cloud.tencent.com/document/product/647/34066).
Declaration
public abstract void sendCustomAudioData(TRTCAudioFrame frame)
Parameters
Type | Name | Description |
---|---|---|
TRTCAudioFrame | frame | Audio frame. Currently, only mono audio, a 48 kHz sample rate, and the `LiteAVAudioFrameFormatPCM` format are supported |
Remarks
You can leave the setting of timestamps to the SDK by setting `timestamp` to `0`, but to avoid choppy audio, make sure that you call `sendCustomAudioData` at regular intervals.
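For reference, a hedged sketch of feeding one 20 ms PCM frame to the SDK; it assumes `TRTCAudioFrame` exposes the fields listed above (`audioFormat`, `data`, `length`, `sampleRate`, `channel`, `timestamp`), that `data` is a byte array, and that the format enum is named `TRTCAudioFrameFormat`:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.enableCustomAudioCapture(true);    // the SDK stops capturing audio itself and waits for your data
// Then, every 20 ms, wrap one frame of 48 kHz mono 16-bit PCM (1920 bytes, as computed above) and send it:
TRTCAudioFrame frame = new TRTCAudioFrame();
frame.audioFormat = TRTCAudioFrameFormat.LiteAVAudioFrameFormatPCM;   // assumed enum type name
frame.data = pcm20ms;                        // pcm20ms: your byte[] holding one 20 ms PCM frame
frame.length = 1920;
frame.sampleRate = 48000;
frame.channel = 1;
frame.timestamp = 0;                         // 0 lets the SDK assign timestamps; keep the send interval steady
trtcCloud.sendCustomAudioData(frame);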
sendCustomCmdMsg(Int32, Byte[], Int32, Boolean, Boolean)
Send a custom message to all users in the room
This API allows you to broadcast custom data to other users in the room through the audio/video data channel. Due to the use of this channel, it’s important that you control the message sending frequency and message size to avoid affecting the quality control logic of audio/video data.
Declaration
public abstract bool sendCustomCmdMsg(int cmdId, byte[] data, int dataSize, bool reliable, bool ordered)
Parameters
Type | Name | Description |
---|---|---|
System.Int32 | cmdId | Message ID. Value range: 1-10 |
System.Byte[] | data | Message to send, which cannot exceed 1 KB |
System.Int32 | dataSize | Size of the data to send |
System.Boolean | reliable | Whether to enable reliable messaging, which may cause latency as the recipient needs to retain the data for a while in case resending is required. |
System.Boolean | ordered | Whether to enable ordered messaging, i.e., whether to require that data arrive in the same order as it is sent. This may cause latency as the recipient needs to retain the data for a while to sort it. |
Returns
Type | Description |
---|---|
System.Boolean |
|
Remarks
This API has the following limitations:
You can send at most 30 messages per second to all users in the room (not supported on web and WeChat Mini Program).
A data packet must not exceed 1 KB, or it may be dropped by an intermediate router or the server.
Each client can send up to 8 KB of data per second.
You must set `reliable` and `ordered` to the same value (both `true` or both `false`).
We strongly recommend that you use different `cmdId` values for messages of different types. This can reduce message latency when ordered messaging is enabled.
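For example, a sketch of broadcasting a short text message over the audio/video channel (the `cmdId` value `1` and the UTF-8 payload are illustrative):
using System.Text;   // for Encoding
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
byte[] payload = Encoding.UTF8.GetBytes("hello from the anchor");                 // keep it well under 1 KB
bool sent = trtcCloud.sendCustomCmdMsg(1, payload, payload.Length, true, true);   // reliable and ordered set to the same value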
sendCustomVideoData(TRTCVideoFrame)
Send custom video data to the SDK
We recommend the following settings for the parameters of `TRTCVideoFrame` (other parameters can be left empty).
-pixelFormat: only `LiteAVVideoPixelFormat_I420` is supported.
-bufferType: only `LiteAVVideoBufferType_Buffer` is supported.
-data: video frame buffer
-length: video frame size, whose value in I420 format is `width × height × 3/2`
-width: video width
-height: video height
-timestamp: keep timestamps evenly spaced; otherwise, an audio-to-video sync error will occur and the quality of MP4 recording files will be severely compromised.
For more information, please see [Custom Capturing and Rendering](https://cloud.tencent.com/document/product/647/34066).
Declaration
public abstract void sendCustomVideoData(TRTCVideoFrame frame)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoFrame | frame | Video data in I420 format |
Remarks
- The SDK has an internal frame rate control logic. It drops frames if the frame rate is higher than the target specified in `setVideoEncoderParam` and inserts frames if the frame rate is lower than the target.
- You can leave the setting of timestamps to the SDK by setting `timestamp` to `0`, but to avoid unstable frame rate, make sure that you call `sendCustomVideoData` at regular intervals.
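For reference, a hedged sketch of pushing one 640×480 I420 frame; it assumes `TRTCVideoFrame` exposes the fields listed above (`pixelFormat`, `bufferType`, `data`, `length`, `width`, `height`, `timestamp`) and that the enum type/member names match the literals in the text:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.enableCustomVideoCapture(true);    // the SDK stops capturing video itself and waits for your frames
TRTCVideoFrame frame = new TRTCVideoFrame();
frame.pixelFormat = TRTCVideoPixelFormat.LiteAVVideoPixelFormat_I420;   // assumed enum type/member names
frame.bufferType = TRTCVideoBufferType.LiteAVVideoBufferType_Buffer;
frame.data = i420Buffer;                     // i420Buffer: your byte[] of size width × height × 3/2
frame.length = 640 * 480 * 3 / 2;
frame.width = 640;
frame.height = 480;
frame.timestamp = 0;                         // 0 lets the SDK assign timestamps; keep the send interval steady
trtcCloud.sendCustomVideoData(frame);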
sendSEIMsg(Byte[], Int32, Int32)
Insert small-volume custom data into video frames
Unlike `sendCustomCmdMsg`, `sendSEIMsg` inserts data directly into the header of video data. As a result, the data is retained even after the video frames are relayed to live streaming CDNs. However, the size of inserted data should be kept small, preferably several bytes.
The most common practice is using `sendSEIMsg` to insert custom timestamps into video frames, which can ensure that the messages and video images are in sync.
Declaration
public abstract bool sendSEIMsg(byte[] data, int dataSize, int repeatCount)
Parameters
Type | Name | Description |
---|---|---|
System.Byte[] | data | Data to send, which cannot exceed 1 KB |
System.Int32 | dataSize | Size of the data to send |
System.Int32 | repeatCount | Number of times to send the data |
Returns
Type | Description |
---|---|
System.Boolean |
|
Remarks
This API has the following limitations:
-The data is not sent the moment the API is called, but inserted into the next video frame.
-You can send at most 30 messages per second to all users in the room. This limit also applies to `sendCustomCmdMsg`.
-A data packet must not exceed 1 KB. Data packets too large may reduce video quality or cause video stutter. This limit also applies to `sendCustomCmdMsg`.
-Each client can send up to 8 KB of data per second. This limit also applies to `sendCustomCmdMsg`.
-If you send a message multiple times (`repeatCount` > 1), the data will be inserted into multiple subsequent frames (whose number equals `repeatCount`). This will drive up the video bitrate.
-If `repeatCount` is greater than 1, the same message may be returned via the `onRecvSEIMsg` callback multiple times, making deduplication necessary.
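For example, a sketch of inserting a small custom payload into the next video frame (the JSON-style timestamp string is illustrative; any format works as long as it stays small):
using System.Text;   // for Encoding
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
byte[] sei = Encoding.UTF8.GetBytes("{\"ts\":1617181920}");   // a few bytes is ideal; stay well under 1 KB
trtcCloud.sendSEIMsg(sei, sei.Length, 1);                     // repeatCount = 1: insert into a single frame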
setAudioCaptureVolume(Int32)
Set the capturing volume of the SDK
Declaration
public abstract void setAudioCaptureVolume(int volume)
Parameters
Type | Name | Description |
---|---|---|
System.Int32 | volume | Volume. Value range: 0-100 |
setAudioPlayoutVolume(Int32)
Set the playback volume of the SDK
Declaration
public abstract void setAudioPlayoutVolume(int volume)
Parameters
Type | Name | Description |
---|---|---|
System.Int32 | volume | Volume. Value range: 0-150; default: `100` |
setBeautyStyle(TRTCBeautyStyle, UInt32, UInt32, UInt32)
Set the strength of the beauty, skin brightening, and rosy skin filters (not supported on Windows)
The SDK has two built-in skin smoothing algorithms. One is "smooth", which features a more obvious smoothing effect and is designed for showrooms. The other is "natural", which retains more facial details and looks more natural.
Declaration
public abstract void setBeautyStyle(TRTCBeautyStyle style, uint beauty, uint white, uint ruddiness)
Parameters
Type | Name | Description |
---|---|---|
TRTCBeautyStyle | style | Beauty style, which may be smooth or natural. The former features more obvious skin smoothing effect and is suitable for entertainment scenarios. |
System.UInt32 | beauty | Strength of the beauty (skin smoothing) filter. Value range: 0-9. `0` disables the filter |
System.UInt32 | white | Strength of the skin brightening filter. Value range: 0-9. `0` disables the filter |
System.UInt32 | ruddiness | Strength of the rosy skin filter. Value range: 0-9. `0` disables the filter |
setConsoleEnabled(Boolean)
Enable/Disable console log printing
Declaration
public abstract void setConsoleEnabled(bool enabled)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | enabled | Whether to enable console log printing. It is disabled by default. |
setDefaultStreamRecvMode(Boolean, Boolean)
Set the audio/video data receiving mode
It must be set before room entry to take effect.
Declaration
public abstract void setDefaultStreamRecvMode(bool autoRecvAudio, bool autoRecvVideo)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | autoRecvAudio | Whether to automatically receive audio data |
System.Boolean | autoRecvVideo | Whether to automatically receive video data |
Remarks
To ensure instant streaming, the SDK automatically receives audio/video upon successful room entry. This means you will receive the audio and video data of all remote users right after you enter a room. If you do not call `startRemoteView`, video data will be automatically canceled after the timeout period elapses. If your application scenario involves only audio (e.g., audio chat), you can use this API to disable the automatic receiving mode for video to reduce your costs.
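For instance, an audio-only chat room could keep automatic audio receiving but turn off automatic video receiving before entering the room:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.setDefaultStreamRecvMode(true, false);   // auto-receive audio, do not auto-receive video
// ... then call enterRoom(...)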
setLocalVideoRenderCallback(TRTCVideoStreamType, TRTCVideoPixelFormat, TRTCVideoBufferType, ITRTCVideoRenderCallback)
Configure the custom rendering of local video
Declaration
public abstract int setLocalVideoRenderCallback(TRTCVideoStreamType streamType, TRTCVideoPixelFormat pixelFormat, TRTCVideoBufferType bufferType, ITRTCVideoRenderCallback callback)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoStreamType | streamType | Stream type |
TRTCVideoPixelFormat | pixelFormat | Pixel format of the data returned |
TRTCVideoBufferType | bufferType | Buffer type of the data returned |
ITRTCVideoRenderCallback | callback | Custom rendering callback |
Returns
Type | Description |
---|---|
System.Int32 |
|
Remarks
After this API is called, the SDK will return the captured video data via a callback, which is then rendered by Unity’s `Texture2D`.
Call `setLocalVideoRenderCallback(TRTCVideoPixelFormat_Unknown, TRTCVideoBufferType_Unknown, null)` to disable the callback.
iOS, macOS, and Windows support only video frames in the pixel format of `TRTCVideoPixelFormat_BGRA32`. Unity can render data in BGRA32 format. Method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.BGRA32, false);
Android supports only video frames in the pixel format of `TRTCVideoPixelFormat_RGBA32`. Unity can render data in RGBA32 format. Method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.RGBA32, false);
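As a sketch, registering a custom renderer for the local primary stream on Windows might look like the lines below; `myRenderCallback` is a hypothetical object implementing `ITRTCVideoRenderCallback` (see `Demo/TRTCVideoRender.cs`), and the `TRTCVideoBufferType` member name is an assumption:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.setLocalVideoRenderCallback(TRTCVideoStreamType.TRTCVideoStreamTypeBig,
    TRTCVideoPixelFormat.TRTCVideoPixelFormat_BGRA32,    // BGRA32 on iOS, macOS, and Windows (see above)
    TRTCVideoBufferType.TRTCVideoBufferType_Buffer,      // assumed member name; frames arrive as raw buffers
    myRenderCallback);                                   // myRenderCallback: your ITRTCVideoRenderCallback implementation
// Render target on the Unity side, matching the pixel format above:
Texture2D tex = new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.BGRA32, false);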
setLogCallback(ITRTCLogCallback)
Set the log callback
Declaration
public abstract void setLogCallback(ITRTCLogCallback callback)
Parameters
Type | Name | Description |
---|---|---|
ITRTCLogCallback | callback | Log callback |
setLogCompressEnabled(Boolean)
Enable/Disable local log compression
Compression can significantly reduce log size, but compressed logs can be read only after being decompressed by the Python script provided by Tencent Cloud.
If compression is disabled, logs will be stored in plaintext and will take up more storage space, but they can be read directly in Notepad.
Declaration
public abstract void setLogCompressEnabled(bool enabled)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | enabled | Whether to enable local log compression. It is disabled by default. |
setLogDirPath(String)
Set the path to save logs
Declaration
public abstract void setLogDirPath(string path)
Parameters
Type | Name | Description |
---|---|---|
System.String | path | Path to save log files, e.g., "D:\Log", which must be converted to UTF-8 |
Remarks
Log files are stored in "C:/Users/[system username]/AppData/Roaming/Tencent/liteav/log" ("%appdata%/Tencent/liteav/log") by default. To change the path, you need to call this API before calling other APIs.
setLogLevel(TRTCLogLevel)
Set the log output level
Declaration
public abstract void setLogLevel(TRTCLogLevel level)
Parameters
Type | Name | Description |
---|---|---|
TRTCLogLevel | level | Log output level. For details, please see the definition of `TRTCLogLevel` |
setMixTranscodingConfig(Nullable<TRTCTranscodingConfig>)
Set On-Cloud MixTranscoding parameters (not supported on Windows)
Declaration
public abstract void setMixTranscodingConfig(TRTCTranscodingConfig? config)
Parameters
Type | Name | Description |
---|---|---|
System.Nullable<TRTCTranscodingConfig> | config | For more information, please see the description of `TRTCTranscodingConfig`. Pass `null` to cancel On-Cloud MixTranscoding |
Remarks
Notes:
On-Cloud MixTranscoding will increase the delay of CDN live streaming by about 1-2 seconds.
If you call this API, the streams of co-anchors will be mixed into your stream or the stream whose `streamId` is specified in `config`.
If you are still in the room but no longer need to mix streams, make sure that you pass in `null` to cancel On-Cloud MixTranscoding. The On-Cloud MixTranscoding module starts working the moment you enable it, and you may incur additional costs if you do not cancel it in a timely manner.
When you leave the room, mixing will be canceled automatically.
Examples
If you enable relayed push on the "Function Configuration" page of the TRTC console, each stream in a room will have a default CDN playback address.
There may be multiple anchors in a room, each sending their own video and audio, but CDN audience needs only one live stream. Therefore, you need to mix the multiple audio/video streams into one standard live stream, which requires On-Cloud MixTranscoding.
When you call the `setMixTranscodingConfig()` API, the SDK will send a command to the Tencent Cloud transcoding server to mix multiple audio/video streams in the room into one stream. You can use the `mixUsers` parameter to set the position of each channel of image and specify whether to mix only audio. You can also set the encoding parameters of the mixed stream, including `videoWidth`, `videoHeight`, and `videoBitrate`.
**Image 1** => decoding ==\
**Image 2** => decoding ===> image mixing => encoding => **mixed image**
**Image 3** => decoding ==/
**Audio 1** => decoding ==\
**Audio 2** => decoding ===> audio mixing => encoding => **mixed audio**
**Audio 3** => decoding ==/
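A hedged sketch of enabling a simple mix; only fields named in this section (`videoWidth`, `videoHeight`, `videoBitrate`, `mixUsers`) appear below; check the actual definition of `TRTCTranscodingConfig` for their exact types and any additional required fields:
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
TRTCTranscodingConfig config = new TRTCTranscodingConfig();
config.videoWidth = 720;            // canvas size of the mixed stream (assumed field names)
config.videoHeight = 1280;
config.videoBitrate = 1500;         // kbps
// config.mixUsers = ...;           // position each channel of image on the canvas; see TRTCTranscodingConfig
trtcCloud.setMixTranscodingConfig(config);
// When mixing is no longer needed (but you are still in the room), cancel it explicitly:
trtcCloud.setMixTranscodingConfig(null);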
setNetworkQosParam(ref TRTCNetworkQosParam)
Set QoS parameters
The setting determines the SDK’s QoS policy under different network conditions, for example, whether to prioritize clarity or smoothness under poor network conditions.
Declaration
public abstract void setNetworkQosParam(ref TRTCNetworkQosParam param)
Parameters
Type | Name | Description |
---|---|---|
TRTCNetworkQosParam | param | QoS parameters. For details, please see the definition of `TRTCNetworkQosParam` |
setRemoteAudioVolume(String, Int32)
Set the playback volume of a remote user
This API controls the volume of audio delivered to the system for playback. It affects the volume of local recording files, but not the volume of in-ear monitoring.
If you want to set the volume to a value greater than `100`, please contact technical support.
Declaration
public abstract void setRemoteAudioVolume(string userId, int volume)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | ID of the remote user |
System.Int32 | volume | Volume. Value range: 0-100 |
setRemoteVideoRenderCallback(String, TRTCVideoStreamType, TRTCVideoPixelFormat, TRTCVideoBufferType, ITRTCVideoRenderCallback)
Configure the custom rendering of remote video
This API is similar to `setLocalVideoRenderCallback`, but it sets the callback for remote video rather than local video.
Declaration
public abstract int setRemoteVideoRenderCallback(string userId, TRTCVideoStreamType streamType, TRTCVideoPixelFormat pixelFormat, TRTCVideoBufferType bufferType, ITRTCVideoRenderCallback callback)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | User ID |
TRTCVideoStreamType | streamType | Stream type |
TRTCVideoPixelFormat | pixelFormat | Pixel format of the data returned |
TRTCVideoBufferType | bufferType | Buffer type of the data returned |
ITRTCVideoRenderCallback | callback | Custom rendering callback |
Returns
Type | Description |
---|---|
System.Int32 |
|
Remarks
After this API is called, the SDK will return decoded remote video data via a callback, which is then rendered by Unity’s `Texture2D`.
Call `setRemoteVideoRenderCallback(userId, TRTCVideoPixelFormat_Unknown, TRTCVideoBufferType_Unknown, null)` to disable the callback.
iOS, macOS, and Windows support only video frames in the pixel format of `TRTCVideoPixelFormat_BGRA32`. Unity can render data in BGRA32 format. Method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.BGRA32, false);
Android supports only video frames in the pixel format of `TRTCVideoPixelFormat_RGBA32`. Unity can render data in RGBA32 format. Method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.RGBA32, false);
setRemoteVideoStreamType(String, TRTCVideoStreamType)
Specify whether to play a remote user’s big or small image
Declaration
public abstract void setRemoteVideoStreamType(string userId, TRTCVideoStreamType type)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | ID of the remote user |
TRTCVideoStreamType | type | Type of the remote user’s video stream (big or small image) to play. Default value: the big image (`TRTCVideoStreamTypeBig`) |
Remarks
For this API to work, the remote user must first call `enableSmallVideoStream` to enable dual-channel encoding.
setSubStreamEncoderParam(ref TRTCVideoEncParam)
9.7 Set encoding parameters for screen sharing
Declaration
public abstract void setSubStreamEncoderParam(ref TRTCVideoEncParam param)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoEncParam | param | Substream encoding parameters. For details, please see the definition of `TRTCVideoEncParam` |
Remarks
You need to use this API to set encoding parameters for screen sharing even if you use the primary stream for screen sharing (by setting `type` to `TRTCVideoStreamTypeBig` when calling `startScreenCapture`).
setVideoEncoderMirror(Boolean)
Set the mirror mode of encoded video (not supported on Windows)
Declaration
public abstract void setVideoEncoderMirror(bool mirror)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | mirror | Whether to mirror video for remote users. |
Remarks
This API does not change the mirror mode of local video preview, but affects the video presented to remote users and recorded by the server.
setVideoEncoderParam(ref TRTCVideoEncParam)
3.9 Set video encoder parameters (not supported on Android and iOS)
Encoder parameters determine the quality of video watched by remote users and recorded in the cloud.
Declaration
public abstract void setVideoEncoderParam(ref TRTCVideoEncParam param)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoEncParam | param | Video encoding parameters. For details, please see the definition of `TRTCVideoEncParam` |
setWaterMark(TRTCVideoStreamType, String, TRTCWaterMarkSrcType, UInt32, UInt32, Single, Single, Single)
Set a watermark (not supported on Android and Windows)
Declaration
public abstract void setWaterMark(TRTCVideoStreamType streamType, string srcData, TRTCWaterMarkSrcType srcType, uint nWidth, uint nHeight, float xOffset, float yOffset, float fWidthRatio)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoStreamType | streamType | Type of stream to watermark. Valid values: `TRTCVideoStreamTypeBig` (primary stream) and `TRTCVideoStreamTypeSub` (substream) |
System.String | srcData | Source of the watermark image. |
TRTCWaterMarkSrcType | srcType | Data type of the watermark source |
System.UInt32 | nWidth | Pixel width of the watermark. This parameter is ignored if the watermark source is a file path. |
System.UInt32 | nHeight | Pixel height of the watermark. This parameter is ignored if the watermark source is a file path. |
System.Single | xOffset | X-axis offset of the top-left corner of the watermark image. Value range: 0-1 (floating point number) |
System.Single | yOffset | Y-axis offset of the top-left corner of the watermark image. Value range: 0-1 (floating point number) |
System.Single | fWidthRatio | Ratio of the width of the watermark image to that of the video (the watermark image will be scaled by this ratio). Value range: 0-1 (floating point number) |
startAudioRecording(ref TRTCAudioRecordingParams)
Start audio recording
After this API is called, the SDK will record all audio of a call, including local audio, remote audio, and background music, into a file. This API works regardless of whether the user is in a room. When `exitRoom` is called, audio recording will stop automatically.
Declaration
public abstract int startAudioRecording(ref TRTCAudioRecordingParams audioRecordingParams)
Parameters
Type | Name | Description |
---|---|---|
TRTCAudioRecordingParams | audioRecordingParams | Audio recording parameters. For details, please see `TRTCAudioRecordingParams` |
Returns
Type | Description |
---|---|
System.Int32 |
|
startLocalAudio(TRTCAudioQuality)
4.2 Enable local audio capturing and publishing
This API will start mic capturing and send audio data to other users in the room.
The SDK does not publish local audio automatically, so if you do not call this API, other users in the room will not hear your audio.
Declaration
public abstract void startLocalAudio(TRTCAudioQuality quality)
Parameters
Type | Name | Description |
---|---|---|
TRTCAudioQuality | quality | Audio quality. For details, please see the definition of `TRTCAudioQuality` |
Remarks
The TRTC SDK does not enable local mic capturing automatically.
startLocalPreview(Boolean, Object)
3.1 Enable local video preview
Declaration
public abstract void startLocalPreview(bool frontCamera, object rendObj)
Parameters
Type | Name | Description |
---|---|---|
System.Boolean | frontCamera | Whether to use the front camera. `true`: front camera; `false`: rear camera |
System.Object | rendObj | Only custom rendering is supported currently. Set this parameter to `null` |
Remarks
Only custom video rendering is supported currently. You need to call `setLocalVideoRenderCallback` first and then `startLocalPreview` to publish the local stream. The video frames will be returned via the `onRenderFrame` callback.
Rendering method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.BGRA32, false);
For more information, refer to `Demo/TRTCVideoRender.cs`.
startLocalRecording(ref TRTCLocalRecordingParams)
Start local recording
This API records the audio and video data during live streaming into a file and saves it locally. Use cases:
- If no streams are published, you can record local audio and video into a file after calling `startLocalPreview`.
- If streams are published, you can record the entire live streaming session into a file and save it locally.
Declaration
public abstract void startLocalRecording(ref TRTCLocalRecordingParams localRecordingParams)
Parameters
Type | Name | Description |
---|---|---|
TRTCLocalRecordingParams | localRecordingParams | Recording parameters. For details, please see `TRTCLocalRecordingParams` |
startPublishCDNStream(TRTCPublishCDNParam)
2.3 Start publishing to the live streaming CDN of a non-Tencent Cloud vendor (not supported on Windows)
The `startPublishCDNStream()` API is similar to `startPublishing()`, but it supports relaying to the live streaming CDN of a non-Tencent Cloud vendor.
Declaration
public abstract void startPublishCDNStream(TRTCPublishCDNParam param)
Parameters
Type | Name | Description |
---|---|---|
TRTCPublishCDNParam | param | CDN relaying parameters. For details, please see the definition of `TRTCPublishCDNParam` |
startPublishing(String, TRTCVideoStreamType)
2.1 Start publishing to Tencent Cloud’s live streaming CDN
When calling this API, you need to specify a `StreamId` for the current user in Tencent Cloud’s CDN, which is used to splice the CDN playback address of the user.
For example, if you use the code below to set the `StreamId` of the current user's primary stream to `user_stream_001`, the CDN playback address of the user’s primary stream will be:
"http://yourdomain/live/user_stream_001.flv", where `yourdomain` is the domain name you use for playback.
You can configure your playback domain name in the [CSS console](https://console.cloud.tencent.com/live). Tencent Cloud doesn't provide a default playback domain name.
You can also specify `streamId` in `TRTCParams` when calling `enterRoom`. This method is recommended.
Declaration
public abstract void startPublishing(string streamId, TRTCVideoStreamType type)
Parameters
Type | Name | Description |
---|---|---|
System.String | streamId | Custom stream ID |
TRTCVideoStreamType | type | Only `TRTCVideoStreamTypeBig` is supported currently |
Remarks
To play streams via CDNs, you need to enable relayed push on the "Function Configuration" page of the TRTC console.
- If you select "Specified stream for relayed push", you can use this API to publish a stream to Tencent Cloud’s CDN and specify a stream ID for it.
- If you select "Global auto-relayed push", you can use this API to change the default stream ID.
Examples
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
trtcCloud.enterRoom(ref trtcParams, TRTCAppScene.TRTCAppSceneLIVE);   // trtcParams: a TRTCParams instance prepared beforehand
trtcCloud.startLocalPreview(true, null);
trtcCloud.startLocalAudio(quality);                                   // quality: the desired TRTCAudioQuality value
trtcCloud.startPublishing("user_stream_001", TRTCVideoStreamType.TRTCVideoStreamTypeBig);
startRemoteView(String, TRTCVideoStreamType, Object)
3.4 Start playing the video of a remote user
The `onUserVideoAvailable(userId, true)` callback indicates that a remote user has enabled video. After receiving this callback, call `startRemoteView(userId)` to load the user’s video. You can use a loading animation to improve user experience during the waiting period. When the first video frame of the remote user is rendered, you will receive the `onFirstVideoFrame(userId)` callback.
Declaration
public abstract void startRemoteView(string userId, TRTCVideoStreamType streamType, object rendObj)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | User ID of the remote user |
TRTCVideoStreamType | streamType | Type of the remote user’s stream to play: big image (`TRTCVideoStreamTypeBig`), small image (`TRTCVideoStreamTypeSmall`), or substream/screen sharing (`TRTCVideoStreamTypeSub`) |
System.Object | rendObj | Only custom rendering is supported currently. Set this parameter to `null` |
Remarks
Only custom video rendering is supported currently. You need to call `setRemoteVideoRenderCallback` first and then `startRemoteView` to pull the remote stream. The video frames will be returned via the `onRenderFrame` callback.
Rendering method: new Texture2D((int)_textureWidth, (int)_textureHeight, TextureFormat.BGRA32, false);
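Tying this to the callback flow described above, a hedged sketch of the typical handling when `onUserVideoAvailable(userId, true)` fires; `trtcCloud` is the singleton obtained earlier, `myRenderCallback` is your `ITRTCVideoRenderCallback` implementation, and the `TRTCVideoBufferType` member name is an assumption:
// When onUserVideoAvailable(userId, true) fires:
trtcCloud.setRemoteVideoRenderCallback(userId,
    TRTCVideoStreamType.TRTCVideoStreamTypeBig,
    TRTCVideoPixelFormat.TRTCVideoPixelFormat_BGRA32,    // BGRA32 on iOS, macOS, and Windows (see above)
    TRTCVideoBufferType.TRTCVideoBufferType_Buffer,      // assumed member name
    myRenderCallback);
trtcCloud.startRemoteView(userId, TRTCVideoStreamType.TRTCVideoStreamTypeBig, null);   // rendObj: null (custom rendering only)
// When onUserVideoAvailable(userId, false) fires:
trtcCloud.stopRemoteView(userId, TRTCVideoStreamType.TRTCVideoStreamTypeBig);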
For more information, refer to `Demo/TRTCVideoRender.cs`.
startScreenCapture(TRTCVideoStreamType, ref TRTCVideoEncParam)
9.1 Start screen sharing
Declaration
public abstract void startScreenCapture(TRTCVideoStreamType type, ref TRTCVideoEncParam param)
Parameters
Type | Name | Description |
---|---|---|
TRTCVideoStreamType | type | Type of stream to use for screen sharing. Valid values: `TRTCVideoStreamTypeBig` (primary stream) and `TRTCVideoStreamTypeSub` (substream, default) |
TRTCVideoEncParam | param | Encoding parameters for screen sharing images |
Remarks
A user can publish at most one primary stream (`TRTCVideoStreamTypeBig`) and one substream (`TRTCVideoStreamTypeSub`) at the same time.
By default, the screen is shared via the substream. If you use the primary stream, we recommend that you stop camera capturing (`stopLocalPreview`) first to avoid conflicts.
startSpeedTest(Int32, String, String)
Start network speed testing (which should be avoided during video calls to ensure call quality)
The test result can be used to optimize the SDK's server selection policy, so you are advised to run the test before the first call, which will help the SDK select the optimal server.
In addition, if the test result is not satisfactory, you can show a UI message asking users to change to a better network.
Declaration
public abstract void startSpeedTest(int sdkAppId, string userId, string userSig)
Parameters
Type | Name | Description |
---|---|---|
System.Int32 | sdkAppId | Application ID |
System.String | userId | User ID |
System.String | userSig | User signature |
stopAllRemoteView()
Stop playing and pulling the videos of all remote users
Declaration
public abstract void stopAllRemoteView()
Remarks
The playing and pulling of the screen sharing stream, if any, will stop too.
stopAudioRecording()
Stop audio recording
When `exitRoom` is called, audio recording will stop automatically.
Declaration
public abstract void stopAudioRecording()
stopLocalAudio()
4.3 Disable local audio capturing and publishing
After local audio capturing and publishing are disabled, other users in the room will receive the `onUserAudioAvailable(false)` callback.
Declaration
public abstract void stopLocalAudio()
stopLocalPreview()
3.2 Stop local video capturing and preview
Declaration
public abstract void stopLocalPreview()
stopLocalRecording()
Stop local recording
When `exitRoom` is called, recording will stop automatically.
Declaration
public abstract void stopLocalRecording()
stopPublishCDNStream()
2.4 Stop publishing to the live streaming CDN of a non-Tencent Cloud vendor (not supported on Windows)
Declaration
public abstract void stopPublishCDNStream()
stopPublishing()
2.2 Stop publishing to Tencent Cloud’s live streaming CDN
Declaration
public abstract void stopPublishing()
stopRemoteView(String, TRTCVideoStreamType)
3.5 Stop playing and pulling the video of a remote user
After this API is called, the SDK will stop receiving the video of the specified user and release the resources used to display the video.
Declaration
public abstract void stopRemoteView(string userId, TRTCVideoStreamType streamType)
Parameters
Type | Name | Description |
---|---|---|
System.String | userId | User ID of the remote user |
TRTCVideoStreamType | streamType | Type of the remote user’s stream to stop playing: big image (`TRTCVideoStreamTypeBig`), small image (`TRTCVideoStreamTypeSmall`), or substream/screen sharing (`TRTCVideoStreamTypeSub`) |
stopScreenCapture()
9.2 Stop screen sharing
Declaration
public abstract void stopScreenCapture()
stopSpeedTest()
Stop network speed testing
Declaration
public abstract void stopSpeedTest()
switchRole(TRTCRoleType)
1.3 Switch roles. This API works only in live streaming scenarios (`TRTCAppSceneLIVE` and `TRTCAppSceneVoiceChatRoom`).
In live streaming scenarios, a user may need to switch between the “anchor” and “audience” roles.
You can set the role during room entry by specifying `role` in `TRTCParams`. You can also call `switchRole` to switch the role after room entry.
Declaration
public abstract void switchRole(TRTCRoleType role)
Parameters
Type | Name | Description |
---|---|---|
TRTCRoleType | role | Role, which is “anchor” by default. Valid values: anchor (`TRTCRoleAnchor`) and audience (`TRTCRoleAudience`) |
switchRoom(TRTCSwitchRoomConfig)
Switch rooms
After calling this API, the user will leave the current room and immediately enter the room specified in `TRTCSwitchRoomConfig`. This API is easier to use than `exitRoom` + `enterRoom`.
It does not stop the user’s video capturing or preview. The result is returned through the `onSwitchRoom(errCode, errMsg)` callback in `ITRTCCloudCallback`.
Declaration
public abstract void switchRoom(TRTCSwitchRoomConfig config)
Parameters
Type | Name | Description |
---|---|---|
TRTCSwitchRoomConfig | config | Room switching parameters. For details, please see `TRTCSwitchRoomConfig` |
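A minimal sketch, assuming `TRTCSwitchRoomConfig` exposes a `roomId` field (check the actual definition; a new user signature may also be required depending on your setup):
ITRTCCloud trtcCloud = ITRTCCloud.getTRTCShareInstance();
TRTCSwitchRoomConfig config = new TRTCSwitchRoomConfig();
config.roomId = 1909;              // assumed field name: the room to switch into
trtcCloud.switchRoom(config);
// The result arrives via onSwitchRoom(errCode, errMsg) in ITRTCCloudCallback.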