Is there another way to export frames from ffmpeg to a Texture2D? My code works on Windows but not on Linux

Problem description

Sound works on Linux the same as it did on Windows, but the video is just a black screen, and when I attempt to save the frames as BMP files they are all corrupt/empty. I am using FFmpeg.AutoGen to interface with the libraries: https://github.com/Ruslan-B/FFmpeg.AutoGen. The file is VP8 and OGG in an MKV container, though the extension is AVI for some reason.

I tried messing with the order of the code a bit. I checked to make sure the build of FFmpeg on Linux had VP8. I searched online but had trouble finding another way to do what I am doing. This is a contribution to the OpenVIII project. My fork: https://github.com/Sebanisu/OpenVIII

This just preps the scaler to change the pixel format; otherwise people have blue faces.

private void PrepareScaler()
{
    if (MediaType != AVMediaType.AVMEDIA_TYPE_VIDEO)
    {
        return;
    }

    ScalerContext = ffmpeg.sws_getContext(
        Decoder.CodecContext->width, Decoder.CodecContext->height, Decoder.CodecContext->pix_fmt,
        Decoder.CodecContext->width, Decoder.CodecContext->height, AVPixelFormat.AV_PIX_FMT_RGBA,
        ffmpeg.SWS_ACCURATE_RND, null, null, null);
    Return = ffmpeg.sws_init_context(ScalerContext, null, null);

    CheckReturn();
}
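A note on the "blue faces": that symptom almost always means the red and blue channels are swapped, i.e. the buffer is BGRA but is being read as RGBA (GDI+'s `Format32bppArgb` stores pixels as B,G,R,A in memory on little-endian machines). As a hedged sketch, not code from the project, this is all a channel-order fix amounts to on a raw buffer:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Swap the R and B channels of a tightly packed 4-byte-per-pixel buffer.
 * "Blue faces" usually mean data in one channel order (e.g. BGRA) is
 * being interpreted in the other (RGBA); picking the matching
 * AV_PIX_FMT_* in sws_getContext avoids needing this at all. */
static void swap_r_b(uint8_t *pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i)
    {
        uint8_t tmp = pixels[i * 4 + 0];
        pixels[i * 4 + 0] = pixels[i * 4 + 2];
        pixels[i * 4 + 2] = tmp;
    }
}
```

Asking the scaler for the channel order the consumer actually expects is cheaper than a swap pass like this.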

Converts Frame to BMP. I am thinking this is where the problem is, because I added bitmap.Save to this and got empty BMPs.

public Bitmap FrameToBMP()
{
    Bitmap bitmap = null;
    BitmapData bitmapData = null;

    try
    {
        bitmap = new Bitmap(Decoder.CodecContext->width, Decoder.CodecContext->height, PixelFormat.Format32bppArgb);
        AVPixelFormat v = Decoder.CodecContext->pix_fmt;

        // lock the bitmap
        bitmapData = bitmap.LockBits(new Rectangle(0, 0, Decoder.CodecContext->width, Decoder.CodecContext->height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

        byte* ptr = (byte*)(bitmapData.Scan0);

        byte*[] srcData = { ptr, null, null, null };
        int[] srcLinesize = { bitmapData.Stride, 0, 0, 0 };

        // convert video frame to the RGB bitmap
        ffmpeg.sws_scale(ScalerContext, Decoder.Frame->data, Decoder.Frame->linesize, 0, Decoder.CodecContext->height, srcData, srcLinesize); //sws_scale broken on linux?
    }
    finally
    {
        if (bitmap != null && bitmapData != null)
        {
            bitmap.UnlockBits(bitmapData);
        }
    }
    return bitmap;
}

After I get a bitmap, we turn it into a Texture2D so we can draw it.

public Texture2D FrameToTexture2D()
{
    // Get Bitmap. There might be a way to skip this step.
    using (Bitmap frame = FrameToBMP())
    {
        //string filename = Path.Combine(Path.GetTempPath(), $"{Path.GetFileNameWithoutExtension(DecodedFileName)}_rawframe.{Decoder.CodecContext->frame_number}.bmp");
        //frame.Save(filename);
        BitmapData bmpdata = null;
        Texture2D frameTex = null;
        try
        {
            // Create texture; GC will collect frameTex.
            frameTex = new Texture2D(Memory.spriteBatch.GraphicsDevice, frame.Width, frame.Height, false, SurfaceFormat.Color);
            // Fill it with the bitmap.
            bmpdata = frame.LockBits(new Rectangle(0, 0, frame.Width, frame.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            byte[] texBuffer = new byte[bmpdata.Width * bmpdata.Height * 4];
            Marshal.Copy(bmpdata.Scan0, texBuffer, 0, texBuffer.Length);

            frameTex.SetData(texBuffer);
        }
        finally
        {
            if (bmpdata != null)
            {
                frame.UnlockBits(bmpdata);
            }
        }
        return frameTex;
    }
}
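One thing worth knowing about the `Marshal.Copy` above: it assumes `bmpdata.Stride == Width * 4`. For 32bpp bitmaps GDI+ rows happen to be tightly packed already, but strides can carry padding in general, and a row-by-row copy is the safe pattern. A hedged C sketch of that pattern (the names are mine, not the project's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy an image whose rows are src_stride bytes apart (possibly padded)
 * into a tightly packed buffer of width * bpp bytes per row. Copying
 * stride * height bytes in one go would drag the padding along. */
static void copy_rows_tight(const uint8_t *src, size_t src_stride,
                            uint8_t *dst, size_t width, size_t height,
                            size_t bpp)
{
    size_t tight = width * bpp;
    for (size_t y = 0; y < height; ++y)
    {
        memcpy(dst + y * tight, src + y * src_stride, tight);
    }
}
```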

I can post more if you want; it's pretty much all up on my fork.

Video will play back as it does in Windows. As smooth as 15 fps can be. :)

Answer

I ended up removing the bitmap part of the code, and it worked! Previously I would convert the frame to a bitmap and then copy the pixels out of the bitmap into the Texture2D. I looked closer and realized I could skip the bitmap step. I'm sorry for not being clear enough in my question.

/// <summary>
/// Converts Frame to Texture
/// </summary>
/// <returns>Texture2D</returns>
public Texture2D FrameToTexture2D()
{
    Texture2D frameTex = new Texture2D(Memory.spriteBatch.GraphicsDevice, Decoder.CodecContext->width, Decoder.CodecContext->height, false, SurfaceFormat.Color);
    const int bpp = 4;
    byte[] texBuffer = new byte[Decoder.CodecContext->width * Decoder.CodecContext->height * bpp];
    fixed (byte* ptr = &texBuffer[0])
    {
        byte*[] srcData = { ptr, null, null, null };
        int[] srcLinesize = { Decoder.CodecContext->width * bpp, 0, 0, 0 };
        // convert video frame to the RGB data
        ffmpeg.sws_scale(ScalerContext, Decoder.Frame->data, Decoder.Frame->linesize, 0, Decoder.CodecContext->height, srcData, srcLinesize);
    }
    frameTex.SetData(texBuffer);
    return frameTex;
}
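What makes this version work is that the destination buffer is tightly packed: the linesize handed to `sws_scale` is exactly `width * 4`, and the buffer holds `linesize * height` bytes, so `sws_scale` writes straight into the array `SetData` consumes. That invariant, as a tiny hedged sketch:

```c
#include <assert.h>
#include <stddef.h>

/* For a tightly packed RGBA destination there is no row padding:
 * the per-row linesize is width * bytes-per-pixel, and the whole
 * buffer is linesize * height bytes. */
static size_t tight_linesize(size_t width, size_t bpp)
{
    return width * bpp;
}

static size_t tight_buffer_size(size_t width, size_t height, size_t bpp)
{
    return tight_linesize(width, bpp) * height;
}
```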
