This article covers rendering H264 video frames in DirectX 11 and may serve as a useful reference for anyone facing the same problem.

Problem description



I am new to DirectX. I am trying to write a custom IP camera video player, for which I am using DirectX 11 to render the decoded images, with a WPF GUI as my front end.

I am a C# developer and previously used Managed DirectX, which is no longer updated by Microsoft, hence the move to WPF and DirectX 11.

All parts of my application up to the rendering of the frames are working fine.

I have managed to create a D3DImage source to be used in the WPF app, and successfully created my viewports and my device, including the shared resource, since D3DImage only works with DirectX 9. I am using SharpDX as the wrapper for the DirectX API.
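For reference, the Direct3D 9Ex / DirectX 11 shared-surface interop that D3DImage requires is usually set up along the lines below. This is a minimal sketch, not the poster's exact code; the `device11`, `device9`, and `d3dImage` variables are assumed to exist, and the key points are the `Shared` option flag on the D3D11 render target and the BGRA surface format that D3DImage expects.

```csharp
// Sketch: exposing a DirectX 11 render target to D3DImage through Direct3D 9Ex.
// Assumes a SharpDX.Direct3D11.Device (device11) and a SharpDX.Direct3D9.DeviceEx
// (device9) have already been created elsewhere.
var targetDesc = new SharpDX.Direct3D11.Texture2DDescription
{
    BindFlags = SharpDX.Direct3D11.BindFlags.RenderTarget |
                SharpDX.Direct3D11.BindFlags.ShaderResource,
    Format = SharpDX.DXGI.Format.B8G8R8A8_UNorm,   // D3DImage expects a BGRA surface
    Width = width,
    Height = height,
    MipLevels = 1,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = SharpDX.Direct3D11.ResourceUsage.Default,
    OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.Shared, // required for interop
    CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
    ArraySize = 1
};
var renderTarget = new SharpDX.Direct3D11.Texture2D(device11, targetDesc);

// Obtain the DXGI shared handle and open the same surface on the 9Ex device.
IntPtr sharedHandle = renderTarget.QueryInterface<SharpDX.DXGI.Resource>().SharedHandle;
var texture9 = new SharpDX.Direct3D9.Texture(device9, width, height, 1,
    SharpDX.Direct3D9.Usage.RenderTarget, SharpDX.Direct3D9.Format.A8R8G8B8,
    SharpDX.Direct3D9.Pool.Default, ref sharedHandle);

// The level-0 surface of the D3D9 texture becomes the D3DImage back buffer.
using (var surface = texture9.GetSurfaceLevel(0))
{
    d3dImage.Lock();
    d3dImage.SetBackBuffer(System.Windows.Interop.D3DResourceType.IDirect3DSurface9,
        surface.NativePointer);
    d3dImage.Unlock();
}
```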

Now my problem is that I can't find a way to create or update a texture from the decoded image bytes, or what the correct way would be to render the decoded image from the bytes received.

Any help on this would be great, or could someone point me in the right direction as to how this should be approached?

Thanks.

Solution

After nearly 2 weeks of searching and trying to find the solution to my stated problem, I have finally found it, as shown below.

It does display the image, though not as expected, but I believe it is a start, since the code below answers my original question.

    // Clear the render target before drawing the new frame.
    Device.ImmediateContext.ClearRenderTargetView(this.m_RenderTargetView, Color4.Black);

    Texture2DDescription colordesc = new Texture2DDescription
    {
        BindFlags = BindFlags.ShaderResource,
        Format = m_PixelFormat,
        Width = iWidth,
        Height = iHeight,
        MipLevels = 1,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Dynamic,
        OptionFlags = ResourceOptionFlags.None,
        CpuAccessFlags = CpuAccessFlags.Write,
        ArraySize = 1
    };

    Texture2D newFrameTexture = new Texture2D(this.Device, colordesc);

    DataStream dtStream = null;
    DataBox dBox = Device.ImmediateContext.MapSubresource(newFrameTexture, 0, MapMode.WriteDiscard, 0, out dtStream);
    if (dtStream != null)
    {
        int iRowPitch = dBox.RowPitch;

        for (int iHeightIndex = 0; iHeightIndex < iHeight; iHeightIndex++)
        {
            //Copy the image bytes to Texture
            dtStream.Position = iHeightIndex * iRowPitch;
             Marshal.Copy(decodedData, iHeightIndex * iWidth * 4, new IntPtr(dtStream.DataPointer.ToInt64() + iHeightIndex * iRowPitch), iWidth * 4);
        }
    }

    Device.ImmediateContext.UnmapSubresource(newFrameTexture, 0);


    // Copy the CPU-filled frame texture into the render target and bind it to the pixel shader.
    Device.ImmediateContext.CopySubresourceRegion(newFrameTexture, 0, null, this.RenderTarget, 0);
    var shaderRescVw = new ShaderResourceView(this.Device, this.RenderTarget);

    Device.ImmediateContext.PixelShader.SetShaderResource(0, shaderRescVw);

    Device.ImmediateContext.Draw(6, 0);
    Device.ImmediateContext.Flush();
    this.D3DSurface.InvalidateD3DImage();

    Disposer.SafeDispose(ref newFrameTexture);
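As a side note, allocating and disposing a `Texture2D` on every frame is relatively costly. A common alternative, assuming the width, height, and format stay constant, is to keep one dynamic texture alive and re-map it each frame; the sketch below uses the same SharpDX types as the code above, with `m_FrameTexture` as a hypothetical field initialized once with the same `Texture2DDescription`.

```csharp
// Sketch: reuse a single dynamic texture across frames instead of creating one per frame.
DataStream stream;
DataBox box = Device.ImmediateContext.MapSubresource(
    m_FrameTexture, 0, MapMode.WriteDiscard, MapFlags.None, out stream);

for (int row = 0; row < iHeight; row++)
{
    // RowPitch may be larger than iWidth * 4, so copy row by row.
    Marshal.Copy(decodedData, row * iWidth * 4,
        new IntPtr(box.DataPointer.ToInt64() + row * box.RowPitch), iWidth * 4);
}

Device.ImmediateContext.UnmapSubresource(m_FrameTexture, 0);
Device.ImmediateContext.CopySubresourceRegion(m_FrameTexture, 0, null, this.RenderTarget, 0);
```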

With the code above I am now able to populate the texture with the new image data I receive, but the images are not being rendered with the correct colors/pixels, as shown within the red box in the image below.

Screenshot of the rendered image:

The image bytes are received from the decoder in BGRA32 pixel format. Any suggestion to resolve this would be very helpful.
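Since the decoder delivers BGRA32, the most likely culprit is a mismatched `m_PixelFormat`: if the texture was created with an RGBA format, the red and blue channels come out swapped, which would produce exactly this kind of color distortion. A hedged fix, assuming SharpDX's DXGI format enum, is to match the texture format to the source byte order:

```csharp
// BGRA32 frames (B, G, R, A byte order) need a BGRA texture format;
// with R8G8B8A8_UNorm the sampler reads the blue byte as red and vice versa.
m_PixelFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;
```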
