This article looks at how to overlay/merge two (or more) YUV images in OpenCV. It should be a useful reference for anyone facing the same problem.

Problem description



I investigated and stripped down my previous question (Is there a way to avoid conversion from YUV to BGR?). I want to overlay a few images (in YUV format) onto a resulting, bigger image (think of it as a canvas) and send it forward via a network library (OPAL) without converting it to BGR.

Here is the code:

    Mat tYUV;
    Mat tClonedYUV;
    Mat tBGR;
    Mat tMergedFrame;
    int tMergedFrameWidth = 1000;
    int tMergedFrameHeight = 800;
    int tMergedFrameHalfWidth = tMergedFrameWidth / 2;

    // Wrap the incoming frame: height * 3/2 rows of single-byte samples.
    tYUV = Mat(tHeader->height * 1.5f, tHeader->width, CV_8UC1, OPAL_VIDEO_FRAME_DATA_PTR(tHeader));
    tClonedYUV = tYUV.clone();

    // Black canvas of the same single-channel type; copy both frames onto it
    // side by side, clamping the ROIs to the canvas size.
    tMergedFrame = Mat(Size(tMergedFrameWidth, tMergedFrameHeight), tYUV.type(), cv::Scalar(0, 0, 0));
    tYUV.copyTo(tMergedFrame(cv::Rect(0, 0, tYUV.cols > tMergedFrameWidth ? tMergedFrameWidth : tYUV.cols, tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));
    tClonedYUV.copyTo(tMergedFrame(cv::Rect(tMergedFrameHalfWidth, 0, tYUV.cols > tMergedFrameHalfWidth ? tMergedFrameHalfWidth : tYUV.cols, tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));

    namedWindow("merged frame", 1);
    imshow("merged frame", tMergedFrame);
    waitKey(10);

The result of the above code looks like this:

I guess the image is not interpreted correctly, so the pictures stay black/white (the Y component), and below them we can see the U and V components. There are images that describe the problem well (http://en.wikipedia.org/wiki/YUV):

and: http://upload.wikimedia.org/wikipedia/en/0/0d/Yuv420.svg

Is there a way to read these values correctly? I guess I should not copy the whole images (their Y, U and V components) straight to the calculated positions. The U and V components should be placed below the Y plane, and in the proper order, am I right?

Solution

First, there are several YUV formats, so you need to be clear about which one you are using.
According to your image, it seems your YUV format is Y'UV420p.
Regardless, it is a lot simpler to convert to BGR, do the work there, and then convert back.
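
A minimal sketch of that round trip, assuming the frames are Y'UV420p (I420) as suggested above. overlayViaBGR, frameData and the size parameters are placeholder names (not the poster's code or the OPAL API), and the canvas dimensions are assumed to be even, as 4:2:0 subsampling requires:

    #include <opencv2/opencv.hpp>

    // Sketch: wrap an I420 buffer, convert to BGR, composite, convert back.
    // Assumes both overlays fit inside the canvas and all sizes are even.
    cv::Mat overlayViaBGR(unsigned char* frameData, int width, int height,
                          int canvasWidth, int canvasHeight)
    {
        // An I420 frame occupies height * 3/2 rows of width bytes.
        cv::Mat yuv(height * 3 / 2, width, CV_8UC1, frameData);

        cv::Mat bgr;
        cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);

        // Composite in BGR space, where ROIs behave the way you expect.
        cv::Mat canvas(canvasHeight, canvasWidth, CV_8UC3, cv::Scalar::all(0));
        bgr.copyTo(canvas(cv::Rect(0, 0, bgr.cols, bgr.rows)));
        bgr.copyTo(canvas(cv::Rect(canvasWidth / 2, 0, bgr.cols, bgr.rows)));

        // Convert the finished canvas back to I420 for the network path.
        cv::Mat canvasYUV;
        cv::cvtColor(canvas, canvasYUV, cv::COLOR_BGR2YUV_I420);
        return canvasYUV;
    }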

If that is not an option, you pretty much have to manage the ROIs yourself. YUV is commonly a planar format, where the channels are not (completely) multiplexed - and some planes have different sizes and depths. If you do not use the internal color conversions, then you will have to know the exact YUV format and manage the pixel-copying ROIs yourself.
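
For I420 in particular, one way to do this is to wrap the Y, U and V planes as three separate single-channel Mats and copy each plane's ROI, halving the coordinates for the chroma planes. Below is a minimal sketch under that assumption; I420Planes, wrapI420 and blitI420 are hypothetical helper names, not OpenCV or OPAL API:

    #include <opencv2/opencv.hpp>

    // Hypothetical helpers for an I420 buffer: a full-resolution Y plane
    // followed by quarter-resolution U and V planes, stored contiguously.
    struct I420Planes {
        cv::Mat y, u, v;
    };

    I420Planes wrapI420(unsigned char* data, int width, int height)
    {
        I420Planes p;
        unsigned char* yPtr = data;
        unsigned char* uPtr = yPtr + width * height;
        unsigned char* vPtr = uPtr + (width / 2) * (height / 2);
        p.y = cv::Mat(height,     width,     CV_8UC1, yPtr);
        p.u = cv::Mat(height / 2, width / 2, CV_8UC1, uPtr);
        p.v = cv::Mat(height / 2, width / 2, CV_8UC1, vPtr);
        return p;
    }

    // Copies a whole I420 source frame into an I420 canvas at (x, y).
    // x and y must be even so the chroma planes stay aligned, and the
    // source must fit inside the canvas at that position.
    void blitI420(const I420Planes& src, I420Planes& dst, int x, int y)
    {
        src.y.copyTo(dst.y(cv::Rect(x,     y,     src.y.cols, src.y.rows)));
        src.u.copyTo(dst.u(cv::Rect(x / 2, y / 2, src.u.cols, src.u.rows)));
        src.v.copyTo(dst.v(cv::Rect(x / 2, y / 2, src.v.cols, src.v.rows)));
    }

Note that a "black" I420 canvas needs Y = 0 but U = V = 128; zero-filling the whole buffer, as the code in the question does, produces a green tint rather than black.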

With a YUV image, the CV_8UC* format specifier does not mean much beyond the actual memory requirements. It certainly does not specify the pixel/channel muxing.

For example, if you only wanted to use the Y component, then Y is usually the first plane in the image, so the top part of the buffer (the first height rows) can simply be treated as a monochrome 8UC1 image. In this case, using ROIs is easy.
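
For instance, reusing tYUV and tHeader from the code in the question, the luma plane alone can be viewed and displayed roughly like this:

    // The wrapped buffer is (height * 3/2) rows tall, so the top height
    // rows are exactly the Y plane and can be shown as a grayscale image.
    cv::Mat tY = tYUV(cv::Rect(0, 0, tHeader->width, tHeader->height));
    cv::imshow("luma only", tY);
    cv::waitKey(10);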
