How to map a decoded buffer from ffmpeg into a QVideoFrame?

Problem Description

I'm trying to put my decoded ffmpeg buffer into a QVideoFrame, so I can put this frame into a QAbstractVideoBuffer and then put this buffer into a QMediaPlayer.

Here's the code for the VideoSurface. According to Qt's documentation, I just have to implement these two functions: the constructor and bool present, which processes the frame into the QVideoFrame named frame.

QList<QVideoFrame::PixelFormat> VideoSurface::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const
{
    Q_UNUSED(handleType);

    // Return the formats you will support
    return QList<QVideoFrame::PixelFormat>() << QVideoFrame::Format_YUV420P;
}

bool VideoSurface::present(const QVideoFrame &frame)
{
    //Q_UNUSED(frame);
    std::cout << "VideoSurface processing 1 frame " << std::endl;

    QVideoFrame frametodraw(frame);

    if(!frametodraw.map(QAbstractVideoBuffer::ReadOnly))
    {
        setError(ResourceError);
        return false;
    }
    // Handle the frame and do your processing
    const size_t bufferSize = 398304;
    uint8_t frameBuffer[bufferSize];
    this->mediaStream->receiveFrame(frameBuffer, bufferSize);
    //Frame is now in frameBuffer, we must put into frametodraw, I guess
    // ------------What should I do here?-------------
    frametodraw.unmap();
    return true;
}

Look at this->mediaStream->receiveFrame(frameBuffer, bufferSize). This line decodes a new h264 frame into frameBuffer in the YUV420P format.

My idea was to use the map function and then try to get a buffer pointer using the frametodraw.bits() function and repoint it at my own buffer, but I don't think this is the way. I think I should copy the contents of frameBuffer to this pointer, but the pointer does not tell me its size, for example, so I guess this is also not the way.

So... how should I map my buffer into the QVideoFrame called frame?

I also noticed that, when I put my VideoSurface instance into my QMediaPlayer, present is never called, even with player->play(). I think something's wrong. This is important.

I also do not have the dimensions of the decoded image inside frameBuffer; I only have its total size. I think this should also be a problem.

I also noticed that QMediaPlayer is not a displayable element... so which widget will display my video? This seems important to me.

Answer

I think you are misunderstanding the role of each class. You are subclassing QAbstractVideoSurface, which is supposed to provide access to data that is ready for presentation. Inside the present method you are handed an already decoded QVideoFrame. If you want to display this onscreen, you need to implement that in your VideoSurface class.

You can set the VideoSurface on the QMediaPlayer, and the media player already handles decoding the video and negotiating the pixel format. The QVideoFrame you receive in the VideoSurface's present already carries the height/width and pixel format from the media player. The typical use of the media player is to have it load and decode the files and display them onscreen with a video widget.

If you need to use your own custom ffmpeg decoder, my advice is to convert the frame from yuv420 to rgb (libswscale?), create your own custom widget that you can pass the frame data to, and render it onscreen with QPainter after loading it into a QPixmap.
