This article looks at how to build Android org.webrtc.VideoRenderer.I420Frame arrays from an image; it should be a useful reference for anyone facing the same problem.

Problem Description

I keep hoping some code will appear on the internet, but so far no luck ;) I am running this GitHub example. The incoming WebRTC I420Frame object seems to have 3 arrays of yuvPlanes.

A typical Android camera app gets the PreviewCallback.onPreviewFrame() byte[] as a single array of bytes. My job is to stream an image as I420 at a regular time interval. Can someone help me generate the I420Frame yuvPlanes from a single byte[] array, such as a JPEG/PNG file?

This is pretty critical. All answers appreciated.

Recommended Answer

PreviewCallback.onPreviewFrame() will never return a JPEG or PNG stream. You should check your camera's getSupportedPreviewFormats() list (note that it may differ between the front and rear cameras). You are guaranteed to have NV21 in this list. If you are lucky, you can choose YV12, available since API level 12 (note that some devices, e.g. the Amazon Fire HD (2012), lie about this and cannot actually deliver a YV12 stream).
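
For instance, the check could look like the following (a minimal sketch against the legacy android.hardware.Camera API; choosePreviewFormat is just an illustrative helper name):

import android.graphics.ImageFormat;
import android.hardware.Camera;
import java.util.List;

static void choosePreviewFormat(Camera camera) {
    Camera.Parameters params = camera.getParameters();
    List<Integer> formats = params.getSupportedPreviewFormats();
    if (formats != null && formats.contains(ImageFormat.YV12)) {
        params.setPreviewFormat(ImageFormat.YV12); // planar; maps onto I420 with a U/V swap
    } else {
        params.setPreviewFormat(ImageFormat.NV21); // the format every device must support
    }
    camera.setParameters(params);
}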

It's easy to build an I420Frame from a YV12 byte array:

import android.graphics.ImageFormat;
import android.hardware.Camera;
import java.nio.ByteBuffer;
import org.webrtc.VideoRenderer;

private VideoRenderer.I420Frame mFrame;

void onPreviewFrame(byte[] yv12_data, Camera camera) {
    if (mFrame == null) {
        Camera.Parameters params = camera.getParameters(); // this is an expensive call, don't repeat it on every frame!
        assert(params.getPreviewFormat() == ImageFormat.YV12);
        int width = params.getPreviewSize().width;
        int stride_y = 16 + ((width - 1) / 16) * 16;         // Y rows are padded to a 16-byte alignment
        int stride_uv = 16 + ((stride_y / 2 - 1) / 16) * 16; // and so are the chroma rows
        int height = params.getPreviewSize().height;
        mFrame = new VideoRenderer.I420Frame(width, height, 0, new int[]{stride_y, stride_uv, stride_uv}, new ByteBuffer[3], 0);
    }

    mFrame.yuvPlanes[0] = ByteBuffer.wrap(yv12_data, 0, mFrame.yuvStrides[0] * mFrame.height); // Y
    mFrame.yuvPlanes[1] = ByteBuffer.wrap(yv12_data, mFrame.yuvStrides[0] * mFrame.height + mFrame.yuvStrides[2] * mFrame.height / 2, mFrame.yuvStrides[1] * mFrame.height / 2); // U
    mFrame.yuvPlanes[2] = ByteBuffer.wrap(yv12_data, mFrame.yuvStrides[0] * mFrame.height, mFrame.yuvStrides[2] * mFrame.height / 2); // V

    // ... do something with the frame
}
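
The offsets above follow the documented YV12 layout: the Y plane comes first, then the V (Cr) plane, then the U (Cb) plane, with each row padded to the 16-byte-aligned strides computed above. That is why the U plane is wrapped at offset yuvStrides[0]*height + yuvStrides[2]*height/2, while the V plane sits immediately after Y.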

For NV21, you must allocate the U and V planes yourself:

private VideoRenderer.I420Frame mFrame;

void onPreviewFrame(byte[] nv21_data, Camera camera) {
    if (mFrame == null) {
        Camera.Parameters params = camera.getParameters(); // this is an expensive call, don't repeat it on every frame!
        assert(params.getPreviewFormat() == ImageFormat.NV21);
        int width = params.getPreviewSize().width;
        int height = params.getPreviewSize().height;
        mFrame = new VideoRenderer.I420Frame(width, height, 0, new int[]{width, width / 2, width / 2}, new ByteBuffer[3], 0);
        mFrame.yuvPlanes[1] = ByteBuffer.wrap(new byte[width * height / 4]);
        mFrame.yuvPlanes[2] = ByteBuffer.wrap(new byte[width * height / 4]);
    }

    mFrame.yuvPlanes[0] = ByteBuffer.wrap(nv21_data, 0, mFrame.width * mFrame.height); // Y
    // NV21 stores chroma as interleaved VU pairs after the Y plane; de-interleave into separate U and V planes.
    for (int to = 0, from = mFrame.width * mFrame.height; from < mFrame.width * mFrame.height * 3 / 2; to++, from += 2) {
        mFrame.yuvPlanes[1].put(to, nv21_data[from + 1]); // U
        mFrame.yuvPlanes[2].put(to, nv21_data[from]);     // V
    }

    // ... do something with the frame
}
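
As for the original question's JPEG/PNG case, which the answer above does not cover: one possible approach (a rough sketch; the bitmapToI420 helper name and the BT.601 limited-range coefficients are illustrative assumptions) is to decode the file to a Bitmap and convert its ARGB pixels to tightly packed I420 planes:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.nio.ByteBuffer;

static ByteBuffer[] bitmapToI420(String path) {
    Bitmap bmp = BitmapFactory.decodeFile(path);
    int w = bmp.getWidth() & ~1, h = bmp.getHeight() & ~1; // force even dimensions for 2x2 chroma subsampling
    int[] argb = new int[w * h];
    bmp.getPixels(argb, 0, w, 0, 0, w, h);
    byte[] y = new byte[w * h];
    byte[] u = new byte[w * h / 4];
    byte[] v = new byte[w * h / 4];
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            int p = argb[j * w + i];
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            // BT.601 limited-range RGB -> YUV
            y[j * w + i] = (byte) (((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            if ((j & 1) == 0 && (i & 1) == 0) { // sample chroma once per 2x2 block
                int idx = (j / 2) * (w / 2) + i / 2;
                u[idx] = (byte) (((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                v[idx] = (byte) (((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
    return new ByteBuffer[]{ByteBuffer.wrap(y), ByteBuffer.wrap(u), ByteBuffer.wrap(v)};
}

The resulting planes would pair with strides {w, w/2, w/2} when building the I420Frame, as in the NV21 example above.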

