FFMPEG: is it possible to generate the result into a user-provided buffer when decoding video?

Problem description

In an ffmpeg video-decoding scenario, H.264 for example, we typically allocate an AVFrame and decode the compressed data, then read the result from the data and linesize members of the AVFrame, as in the following code:

// Input setting: data and size describe one H.264 packet.
AVPacket avpkt;
av_init_packet(&avpkt);
avpkt.data = const_cast<uint8_t*>(data);
avpkt.size = size;

// Decode video: H.264 ---> YUV420
int got_picture = 0;
AVFrame *picture = avcodec_alloc_frame();
int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);

We may use the result to perform other tasks, for example rendering with DirectX9. That is, we prepare buffers (DirectX9 textures) and copy the decoded result into them.

D3DLOCKED_RECT lrY;
D3DLOCKED_RECT lrU;
D3DLOCKED_RECT lrV;
textureY->LockRect(0, &lrY, NULL, 0);
textureU->LockRect(0, &lrU, NULL, 0);
textureV->LockRect(0, &lrV, NULL, 0);

// copy YUV420: picture->data ---> lr.pBits.
my_copy_image_function(picture->data[0], picture->linesize[0], lrY.pBits, lrY.Pitch, width, height);
my_copy_image_function(picture->data[1], picture->linesize[1], lrU.pBits, lrU.Pitch, width / 2, height / 2);
my_copy_image_function(picture->data[2], picture->linesize[2], lrV.pBits, lrV.Pitch, width / 2, height / 2);
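The helper my_copy_image_function is not shown in the question; it is a user-defined routine, and its name and signature here are inferred from the call sites above. A minimal sketch, assuming a plain row-by-row copy that honors both the source stride (linesize) and the destination pitch:

```cpp
#include <cstdint>
#include <cstring>

// Copy one image plane row by row. Both the source stride (linesize) and the
// destination pitch may be larger than the visible width, so a single memcpy
// of the whole plane would be wrong; each row must be copied separately.
void my_copy_image_function(const uint8_t *src, int src_stride,
                            void *dst, int dst_pitch,
                            int width, int height)
{
    uint8_t *d = static_cast<uint8_t*>(dst);
    for (int y = 0; y < height; ++y) {
        std::memcpy(d + y * dst_pitch, src + y * src_stride, width);
    }
}
```

Note that width here is a row length in bytes; for 8-bit YUV420 planes that equals the pixel width (and half of it for the chroma planes, as in the calls above).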

This process involves two copies: ffmpeg copies the decoded result into picture->data, and then picture->data is copied into the DirectX9 textures.

My question is: is it possible to reduce the process to only one copy? In other words, can we provide our own buffers (pBits, the buffers of the DirectX9 textures) to ffmpeg, so that the decode function writes its result directly into the DirectX9 texture buffers rather than into the buffers of the AVFrame?

Recommended answer

I found a solution.

AVCodecContext has a public member, get_buffer2, which is a callback function. When avcodec_decode_video2 is called, this callback is invoked; it is responsible for supplying buffers and some information to the AVFrame, and avcodec_decode_video2 then writes the result into the AVFrame's buffers.

The callback get_buffer2 is set to avcodec_default_get_buffer2 by default, but we can override it with our own function. For example:

void our_buffer_default_free(void *opaque, uint8_t *data)
{
    // Empty: the texture memory is owned by DirectX9, not by ffmpeg.
}
int our_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    assert(c->codec_type == AVMEDIA_TYPE_VIDEO);
    // Point the frame's planes at the locked texture memory
    // (pBits is void*, so a cast is required in C++).
    pic->data[0] = static_cast<uint8_t*>(lrY.pBits);
    pic->data[1] = static_cast<uint8_t*>(lrU.pBits);
    pic->data[2] = static_cast<uint8_t*>(lrV.pBits);
    pic->linesize[0] = lrY.Pitch;
    pic->linesize[1] = lrU.Pitch;
    pic->linesize[2] = lrV.Pitch;
    // Wrap the external buffers so ffmpeg's reference counting still works;
    // the no-op free callback prevents ffmpeg from releasing the texture memory.
    pic->buf[0] = av_buffer_create(pic->data[0], pic->linesize[0] * pic->height, our_buffer_default_free, NULL, 0);
    pic->buf[1] = av_buffer_create(pic->data[1], pic->linesize[1] * pic->height / 2, our_buffer_default_free, NULL, 0);
    pic->buf[2] = av_buffer_create(pic->data[2], pic->linesize[2] * pic->height / 2, our_buffer_default_free, NULL, 0);
    return 0;
}
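The sizes passed to av_buffer_create follow the YUV420 plane layout: the luma plane has height rows of linesize[0] bytes each, while each chroma plane has height/2 rows of its own (typically half-width) linesize. A quick sketch of that arithmetic, with illustrative numbers:

```cpp
// Compute YUV420 plane sizes from per-plane strides and the frame height.
// The linesize values are strides in bytes; chroma strides are typically
// about half the luma stride, and chroma planes have half the rows.
struct PlaneSizes { int y, u, v; };

PlaneSizes yuv420_plane_sizes(int linesizeY, int linesizeU, int linesizeV, int height)
{
    return {
        linesizeY * height,        // luma: full-height rows
        linesizeU * (height / 2),  // chroma U: half the rows
        linesizeV * (height / 2)   // chroma V: half the rows
    };
}
```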

Before decoding, we override the callback function:

context->get_buffer2 = our_get_buffer;

Then avcodec_decode_video2 will write the decoded result directly into our buffers.

By the way, for C++ programs that often implement these processes inside classes, we can record the this pointer first:

context->opaque = this;

And define the overriding callback as a static member function:

// Declared static in the class definition; the 'static' keyword must not be
// repeated on the out-of-class definition.
int myclass::my_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    auto this_pointer = static_cast<myclass*>(c->opaque);
    return this_pointer->my_get_buffer_real(c, pic, flags);
}
int myclass::my_get_buffer_real(struct AVCodecContext *c, AVFrame *pic, int flags)
{
    // Ditto with our_get_buffer above.
    // ...
}
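This opaque-pointer trampoline is a general pattern for bridging C-style callbacks into C++ member functions, not specific to ffmpeg. A self-contained illustration of the same idea (all names here are invented for the example):

```cpp
// A C-style context struct carrying an opaque user pointer and a plain
// function-pointer callback, mirroring AVCodecContext::opaque/get_buffer2.
struct Context {
    void *opaque;                      // stores the 'this' pointer
    int (*callback)(Context *c, int);  // C-style callback slot
};

class Decoder {
public:
    explicit Decoder(int base) : base_(base) {}

    void attach(Context *c) {
        c->opaque = this;              // record this pointer, as in context->opaque = this
        c->callback = &Decoder::trampoline;
    }

    // Static member: has no 'this', so it is a valid plain function pointer.
    static int trampoline(Context *c, int x) {
        auto self = static_cast<Decoder*>(c->opaque);
        return self->real_callback(x); // forward to the real member function
    }

private:
    int real_callback(int x) { return base_ + x; }
    int base_;
};
```

The key point is that a static member function has no hidden this argument, so its address is compatible with a plain C function pointer; the opaque field smuggles this back in.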
