Problem description
I would like to use a custom video source to live stream video via the WebRTC Android implementation. If I understand correctly, the existing implementation only supports the front- and back-facing cameras on Android phones. The following classes are relevant in this scenario:
- Camera1Enumerator.java
- VideoCapturer.java
- PeerConnectionFactory
- VideoSource.java
- VideoTrack.java
Currently, to use the front-facing camera on an Android phone, I'm doing the following steps:
// Enumerate cameras via the Camera1 API (captureToTexture = false).
CameraEnumerator enumerator = new Camera1Enumerator(false);
// Create a capturer for the chosen camera device.
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
// Create a video source (isScreencast = false) and hook the capturer to it.
VideoSource videoSource = peerConnectionFactory.createVideoSource(false);
videoCapturer.initialize(surfaceTextureHelper, this.getApplicationContext(), videoSource.getCapturerObserver());
// Wrap the source in a track that can be added to a peer connection.
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack(VideoTrackID, videoSource);
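For reference, the deviceName and surfaceTextureHelper used above are typically obtained along these lines; this is a minimal sketch, and the eglBase variable and thread name are illustrative:

// Sketch: create the SurfaceTextureHelper and pick the front-facing camera.
// EglBase, SurfaceTextureHelper and CameraEnumerator are from org.webrtc.
EglBase eglBase = EglBase.create();
SurfaceTextureHelper surfaceTextureHelper =
        SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());

String deviceName = null;
for (String name : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(name)) {
        deviceName = name;
        break;
    }
}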
My scenario
I have a callback handler that receives the video buffer as a byte array from the custom video source:
public void onReceive(byte[] videoBuffer, int size) {}
How would I be able to send this byte array buffer? I'm not sure about the solution, but I think I would have to implement a custom VideoCapturer?
This question might be relevant, though I'm not using the libjingle library, only the native WebRTC Android package.
Similar questions/articles
- For the iOS platform, but unfortunately the answers did not help in my case.
- For the native C++ platform.
- An article about the native implementation.
Recommended answer
There are two possible solutions to this problem:
- Implement a custom VideoCapturer and create a VideoFrame from the byte[] stream data in the onReceive handler. There actually exists a very good example, FileVideoCapturer, which implements VideoCapturer. A minimal sketch of this approach is shown after the example below.
- Simply construct a VideoFrame from an NV21Buffer, which is created from our byte array stream data. Then we only need to use our previously created VideoSource to capture this frame. Example:
public void onReceive(byte[] videoBuffer, int size, int width, int height) {
    // Capture timestamp in nanoseconds, as expected by VideoFrame.
    long timestampNS = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
    // Wrap the NV21 byte array; the last argument is an optional release callback.
    NV21Buffer buffer = new NV21Buffer(videoBuffer, width, height, null);
    // Rotation is 0 here; adjust if your source delivers rotated frames.
    VideoFrame videoFrame = new VideoFrame(buffer, 0, timestampNS);
    // Hand the frame to the VideoSource created earlier.
    videoSource.getCapturerObserver().onFrameCaptured(videoFrame);
    videoFrame.release();
}
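For the first option, a minimal sketch of a custom VideoCapturer that forwards externally supplied NV21 byte arrays might look like the following. The class name ByteArrayVideoCapturer and the pushFrame() helper are illustrative, not part of the WebRTC API:

import android.content.Context;
import android.os.SystemClock;
import java.util.concurrent.TimeUnit;
import org.webrtc.CapturerObserver;
import org.webrtc.NV21Buffer;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.VideoCapturer;
import org.webrtc.VideoFrame;

public class ByteArrayVideoCapturer implements VideoCapturer {
    private CapturerObserver capturerObserver;

    @Override
    public void initialize(SurfaceTextureHelper surfaceTextureHelper,
                           Context applicationContext,
                           CapturerObserver capturerObserver) {
        this.capturerObserver = capturerObserver;
    }

    @Override
    public void startCapture(int width, int height, int framerate) {
        // Tell WebRTC that capturing has started; frames are pushed by the source.
        capturerObserver.onCapturerStarted(true);
    }

    @Override
    public void stopCapture() {
        capturerObserver.onCapturerStopped();
    }

    @Override
    public void changeCaptureFormat(int width, int height, int framerate) {
        // Not needed for a push-based external source.
    }

    @Override
    public void dispose() {}

    @Override
    public boolean isScreencast() {
        return false;
    }

    // Call this from the external source's callback (e.g. onReceive).
    public void pushFrame(byte[] nv21Data, int width, int height) {
        long timestampNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        NV21Buffer buffer = new NV21Buffer(nv21Data, width, height, null);
        VideoFrame frame = new VideoFrame(buffer, 0 /* rotation */, timestampNs);
        capturerObserver.onFrameCaptured(frame);
        frame.release();
    }
}

It would be wired up exactly like the camera capturer in the question, i.e. videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver()), and the onReceive handler would simply call pushFrame(videoBuffer, width, height).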