This article looks at how to change the input capture resolution of a QCamera, presented as a question together with the recommended answer.

Problem Description

I have implemented camera capture using QCamera with QAbstractVideoSurface. I extended QAbstractVideoSurface in a derived class to marshal the captured frames into a buffer for later processing. Everything works fine, but I am having trouble changing the capture resolution of the input.

Calling setNativeResolution() does not seem to work.

Below is a brief outline of the code.

#ifndef _CAPTURE_BUFFER_H_
#define _CAPTURE_BUFFER_H_

#include <QMutex>
#include <QWidget>
#include <QImage>
#include <QVideoFrame>
#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>
#include <control/qcircularbuffer.h>

class CaptureBuffer: public QAbstractVideoSurface
{
    Q_OBJECT

public:
    CaptureBuffer(int size = 30);
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const;
    bool start(const QVideoSurfaceFormat& format);
    void stop();
    bool present(const QVideoFrame& frame);
    bool isEmpty() const;
    void pushBack(const QVideoFrame& new_frame);
    void popFront();
    bool top(QVideoFrame& frame);
    bool back(QVideoFrame& frame);

    const QImage::Format& image_format() const {return m_image_format;}
    const QSize& image_size() const {return m_image_size;}

protected:
    void setNativeResolution(const QSize & resolution);

private:
    QSize                        m_image_size;
    QImage::Format               m_image_format;
    QCircularBuffer<QVideoFrame> m_buffer;
    QMutex                       m_buffer_mutex;
};

#endif



CaptureBuffer::CaptureBuffer(int size) :
    m_image_format(QImage::Format_Invalid),   // no valid format until start() is called
    m_buffer(QCircularBuffer<QVideoFrame>(size))
{
}

QList<QVideoFrame::PixelFormat> CaptureBuffer::supportedPixelFormats(
        QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        return QList<QVideoFrame::PixelFormat>()
                << QVideoFrame::Format_RGB24
                << QVideoFrame::Format_RGB32
                << QVideoFrame::Format_ARGB32
                << QVideoFrame::Format_ARGB32_Premultiplied
                << QVideoFrame::Format_RGB565
                << QVideoFrame::Format_RGB555;
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}

bool CaptureBuffer::start(const QVideoSurfaceFormat& format)
{
    const QImage::Format image_format = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();

    if (image_format != QImage::Format_Invalid && !size.isEmpty()) {
        m_image_format = image_format;
        m_image_size = size;

        QAbstractVideoSurface::start(format);

        return true;
    } else {
        return false;
    }
}

void CaptureBuffer::stop()
{
    QAbstractVideoSurface::stop();
}

bool CaptureBuffer::present(const QVideoFrame& frame)
{
    pushBack(frame);
    return true;
}

bool CaptureBuffer::isEmpty() const
{
    return m_buffer.empty();
}

void CaptureBuffer::pushBack(const QVideoFrame& frame)
{
    m_buffer_mutex.lock();
    m_buffer.push_back(frame);
    m_buffer_mutex.unlock();
}

void CaptureBuffer::popFront()
{
    m_buffer_mutex.lock();
    m_buffer.pop_front();
    m_buffer_mutex.unlock();
}

bool CaptureBuffer::top(QVideoFrame& frame)
{
    // Take the lock before the empty() check so the buffer cannot be
    // drained by another thread between the check and the read.
    QMutexLocker locker(&m_buffer_mutex);
    if (m_buffer.empty())
        return false;

    frame = m_buffer.front();
    return true;
}

bool CaptureBuffer::back(QVideoFrame& frame)
{
    QMutexLocker locker(&m_buffer_mutex);
    if (m_buffer.empty())
        return false;

    frame = m_buffer.back();
    return true;
}

void CaptureBuffer::setNativeResolution( const QSize & resolution )
{
    QAbstractVideoSurface::setNativeResolution(resolution);
}
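
For context, here is a minimal sketch (not part of the original question; grabOldestFrame() is a hypothetical helper name) of how frames could be drained from this buffer and copied into a QImage for later processing:

// Hypothetical consumer: pops the oldest frame from a CaptureBuffer and
// returns it as a deep-copied QImage, using the format negotiated in start().
#include <QImage>
#include <QVideoFrame>
#include <QAbstractVideoBuffer>
// plus the header declaring CaptureBuffer shown above

QImage grabOldestFrame(CaptureBuffer& buffer)
{
    QVideoFrame frame;
    if (!buffer.top(frame))
        return QImage();                      // buffer is empty

    buffer.popFront();                        // drop it from the ring buffer

    if (!frame.map(QAbstractVideoBuffer::ReadOnly))
        return QImage();                      // mapping the frame data failed

    // Wrap the mapped bytes, then take a deep copy before unmapping.
    QImage wrapped(frame.bits(),
                   frame.width(),
                   frame.height(),
                   frame.bytesPerLine(),
                   buffer.image_format());
    QImage copy = wrapped.copy();

    frame.unmap();
    return copy;
}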

Here is how the QCamera is used and attached to the capture buffer:

m_camera = camera;
m_camera->setCaptureMode(QCamera::CaptureVideo);
m_camera->setViewfinder(m_capture_buffer);
m_camera->start();

Given that the web camera supports these resolutions, how do I change the input capture resolution from, say, 640 x 480 to 1280 x 720?

Recommended Answer

As of Qt 5.2.1 (from git), it looks like Digia has not completely finished QCamera for Windows. However, there are several options for working around the resolution-setting problem.

If portability is a must:

You can try GStreamer. As far as I can see, the necessary part of the GStreamer plugin is implemented.

I work on Windows, and GStreamer uses DirectShow on Windows, so I decided to use the DirectShow plugin directly.

Qt 5.2.1 has a functional DirectShow core plugin, but it is not fully wired into the Qt framework itself, because DSImageEncoderControl (which would derive from QImageEncoderControl) does not exist in the DirectShow plugin.

QAndroidImageEncoderControl and some other mobile implementations do exist; it seems Digia decided to push the Qt Mobility side of the business first.

In any case, the Qt documentation says:

QImageEncoderSettings imageSettings;
imageSettings.setCodec("image/jpeg");
imageSettings.setResolution(1600, 1200);
imageCapture->setEncodingSettings(imageSettings);

However, when you call QCameraImageCapture::setEncodingSettings, Qt tries to set the resolution through DSImageEncoderControl, and since that control is missing, this part of the code never runs. Most of the controls are not yet implemented in the DirectShow plugin.
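
As a quick runtime check (a sketch of mine, not from the original answer), you can ask the camera's media service whether the backend actually provides QImageEncoderControl; on the Qt 5.2.1 DirectShow plugin this is expected to come back empty:

#include <QCamera>
#include <QMediaService>
#include <QMediaControl>
#include <QImageEncoderControl>
#include <QDebug>

// Probe the multimedia backend for QImageEncoderControl.
void checkImageEncoderControl(QCamera *camera)
{
    QMediaService *service = camera->service();
    if (!service) {
        qDebug() << "no media service available";
        return;
    }

    QMediaControl *control = service->requestControl(QImageEncoderControl_iid);
    if (!control) {
        qDebug() << "QImageEncoderControl is not implemented by this backend";
        return;
    }

    qDebug() << "QImageEncoderControl is available";
    service->releaseControl(control);
}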

Another problem for me was that capturing a still image also requires setting up a surface, whereas I only want to use the image data for further processing, for example with OpenCV.
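
OpenCV is only mentioned in passing, but as an illustration (assuming a reasonably recent OpenCV; qimageToMat() is my own helper name), handing an RGB32 QImage over to OpenCV could look roughly like this:

#include <QImage>
#include <opencv2/opencv.hpp>

// Wrap an (A)RGB32 QImage in a cv::Mat header without copying, then let
// cvtColor produce a BGR deep copy that OpenCV owns on its own.
cv::Mat qimageToMat(const QImage& image)
{
    QImage converted = image.convertToFormat(QImage::Format_ARGB32);

    cv::Mat wrapper(converted.height(),
                    converted.width(),
                    CV_8UC4,
                    const_cast<uchar*>(converted.constBits()),
                    static_cast<size_t>(converted.bytesPerLine()));

    cv::Mat bgr;
    cv::cvtColor(wrapper, bgr, cv::COLOR_BGRA2BGR);   // drop alpha, deep copy
    return bgr;
}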

A possible solution:

If you do not need cross-platform support, you can use the code I borrowed from Qt's DirectShow plugin, which also lets you set the resolution and the pixel format. In my example I tried to follow Qt's naming conventions.
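
The borrowed code itself is only available from the download link below, but the core DirectShow step it performs is negotiating the capture format through IAMStreamConfig. A heavily simplified sketch of that idea (my own function names, error handling and graph construction omitted, link against strmiids.lib):

#include <dshow.h>

// Release an AM_MEDIA_TYPE obtained from GetStreamCaps.
static void freeMediaType(AM_MEDIA_TYPE *mt)
{
    if (!mt)
        return;
    if (mt->cbFormat)
        CoTaskMemFree(mt->pbFormat);
    if (mt->pUnk)
        mt->pUnk->Release();
    CoTaskMemFree(mt);
}

// Find IAMStreamConfig on the capture pin of 'source' and apply the first
// advertised media type that matches the requested width and height.
HRESULT setCaptureResolution(ICaptureGraphBuilder2 *builder,
                             IBaseFilter *source, int width, int height)
{
    IAMStreamConfig *config = 0;
    HRESULT hr = builder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                        source, IID_IAMStreamConfig,
                                        reinterpret_cast<void**>(&config));
    if (FAILED(hr))
        return hr;

    int count = 0, size = 0;
    config->GetNumberOfCapabilities(&count, &size);

    VIDEO_STREAM_CONFIG_CAPS caps;
    hr = E_FAIL;
    for (int i = 0; i < count; ++i) {
        AM_MEDIA_TYPE *mt = 0;
        if (SUCCEEDED(config->GetStreamCaps(i, &mt, reinterpret_cast<BYTE*>(&caps)))) {
            if (mt->formattype == FORMAT_VideoInfo && mt->pbFormat) {
                VIDEOINFOHEADER *vih = reinterpret_cast<VIDEOINFOHEADER*>(mt->pbFormat);
                if (vih->bmiHeader.biWidth == width && vih->bmiHeader.biHeight == height) {
                    hr = config->SetFormat(mt);   // e.g. switch to 1280 x 720
                    freeMediaType(mt);
                    break;
                }
            }
            freeMediaType(mt);
        }
    }

    config->Release();
    return hr;
}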

Another problem is that, at the moment, QImage only understands RGB formats, while some camera devices output their captured data in YUYV format, so a conversion is needed. I also added a simple YUYV-to-RGB24 converter (thanks to FourCC.org) to my code to test my laptop's camera, but I mainly use a Logitech C920 Pro HD camera, which outputs RGB24 directly, so no conversion is needed.
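
The converter referenced above ships with the downloadable code; a stand-alone sketch of the same idea (fixed-point BT.601 coefficients, full-range approximation, my own function names) could look like this:

#include <algorithm>

// Convert YUYV (YUY2) data to packed RGB24. Every 4-byte group Y0 U Y1 V
// encodes two horizontally adjacent pixels sharing the same U and V.
static inline unsigned char clampToByte(int v)
{
    return static_cast<unsigned char>(std::min(255, std::max(0, v)));
}

void yuyvToRgb24(const unsigned char *yuyv, unsigned char *rgb, int width, int height)
{
    const int pixelPairs = (width * height) / 2;
    for (int i = 0; i < pixelPairs; ++i) {
        const int y0 = yuyv[0];
        const int u  = yuyv[1] - 128;
        const int y1 = yuyv[2];
        const int v  = yuyv[3] - 128;

        // BT.601, fixed-point (scaled by 256) to avoid floating point.
        const int rAdd = (359 * v) >> 8;             // ~1.402 * V
        const int gSub = (88 * u + 183 * v) >> 8;    // ~0.344 * U + 0.714 * V
        const int bAdd = (454 * u) >> 8;             // ~1.772 * U

        rgb[0] = clampToByte(y0 + rAdd);
        rgb[1] = clampToByte(y0 - gSub);
        rgb[2] = clampToByte(y0 + bAdd);
        rgb[3] = clampToByte(y1 + rAdd);
        rgb[4] = clampToByte(y1 - gSub);
        rgb[5] = clampToByte(y1 + bAdd);

        yuyv += 4;
        rgb  += 6;
    }
}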

Download the code from here.

This concludes the discussion of changing the input resolution of a QCamera; hopefully the recommended answer is helpful.
