This article looks at a problem with the ffmpeg/libx264 C API: frames being dropped from the end of a short MP4.

Problem description

In my C++ application, I am taking a series of JPEG images, manipulating their data using FreeImage, and then encoding the bitmaps as H264 using the ffmpeg/libx264 C API. The output is an MP4 that shows the series of 22 images at 12fps. My code is adapted from the "muxing" example that comes with the ffmpeg C source code.

My problem: no matter how I tune the codec parameters, a certain number of frames at the end of the sequence passed to the encoder never appear in the final output. I've set the AVCodecContext parameters like this:

//set context params
ctx->codec_id = AV_CODEC_ID_H264;
ctx->bit_rate = 4000 * 1000;
ctx->width = _width;
ctx->height = _height;
ost->st->time_base = AVRational{ 1, 12 };
ctx->time_base = ost->st->time_base;
ctx->gop_size = 1;
ctx->pix_fmt = AV_PIX_FMT_YUV420P;

I have found that the higher the gop_size, the more frames are dropped from the end of the video. I can also see from the output that, with this gop size (where I'm essentially directing that all output frames be I frames), only 9 frames are written.

I'm not sure why this is occurring. I experimented with encoding duplicate frames and making a much longer video; that resulted in no frames being dropped. I know the ffmpeg command line tool has a concatenation command that accomplishes what I am trying to do, but I'm not sure how to accomplish the same goal using the C API.

Here's the output I'm getting from the console:

[libx264 @ 026d81c0] frame I:9     Avg QP:17.83  size:111058
[libx264 @ 026d81c0] mb I  I16..4:  1.9% 47.7% 50.5%
[libx264 @ 026d81c0] final ratefactor: 19.14
[libx264 @ 026d81c0] 8x8 transform intra:47.7%
[libx264 @ 026d81c0] coded y,uvDC,uvAC intra: 98.4% 96.9% 89.5%
[libx264 @ 026d81c0] i16 v,h,dc,p: 64%  6%  2% 28%
[libx264 @ 026d81c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 15%  9%  5%  5%  6%  8% 10% 10%
[libx264 @ 026d81c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 18%  7%  6%  8%  8%  8%  9%  8%
[libx264 @ 026d81c0] i8c dc,h,v,p: 43% 22% 25% 10%
[libx264 @ 026d81c0] kb/s:10661.53

The code is included below:

MP4Writer.h

#ifndef MPEG_WRITER
#define MPEG_WRITER

#include <iostream>
#include <string>
#include <vector>
#include <ImgData.h>
extern "C" {
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libswresample/swresample.h>
    #include <libswscale/swscale.h>
}

struct OutputStream
{
    AVStream *st;
    AVCodecContext *enc;

    //pts of the next frame that will be generated
    int64_t next_pts;
    int samples_count;

    AVFrame *frame;
    AVFrame *tmp_frame;

    float t, tincr, tincr2;

    struct SwsContext *sws_ctx;
    struct SwrContext *swr_ctx;
};

class MP4Writer {
    public:
        MP4Writer();
        void Init();
        int16_t SetOutput( const std::string & path );
        int16_t AddFrame( uint8_t * imgData );
        int16_t Write( std::vector<ImgData> & imgData );
        int16_t Finalize();
        void SetHeight( const int height ) { _height = _width = height; } //assuming 1:1 aspect ratio

    private:
        int16_t AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId );
        int16_t OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg );
        static AVFrame * AllocPicture( enum AVPixelFormat pixFmt, int width, int height );
        static AVFrame * GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height );
        static int WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet );

        int _width;
        int _height;
        OutputStream _ost;
        AVFormatContext * _formatCtx;
        AVDictionary * _dict;
};

#endif //MPEG_WRITER

MP4Writer.cpp

#include <MP4Writer.h>
#include <algorithm>

MP4Writer::MP4Writer()
{
    _width = 0;
    _height = 0;
}

void MP4Writer::Init()
{
    av_register_all();
}

/**
 sets up output stream for the specified path.
 note that the output format is deduced automatically from the file extension passed
 @param path: output file path
 @returns: -1 = output could not be deduced, -2 = invalid codec, -3 = error opening output file,
           -4 = error writing header
*/
int16_t MP4Writer::SetOutput( const std::string & path )
{
    int error;
    AVCodec * codec;
    AVOutputFormat * format;

    _ost = OutputStream{}; //TODO reset state in a more focused way?

    //allocate output media context
    avformat_alloc_output_context2( &_formatCtx, NULL, NULL, path.c_str() );
    if ( !_formatCtx ) {
        std::cout << "could not deduce output format from file extension.  aborting" << std::endl;
        return -1;
    }
    //set format
    format = _formatCtx->oformat;
    if ( format->video_codec != AV_CODEC_ID_NONE ) {
        AddStream( &_ost, _formatCtx, &codec, format->video_codec );
    }
    else {
        std::cout << "there is no video codec set.  aborting" << std::endl;
        return -2;
    }

    OpenVideo( _formatCtx, codec, &_ost, _dict );

    av_dump_format( _formatCtx, 0, path.c_str(), 1 );

    //open output file
    if ( !( format->flags & AVFMT_NOFILE )) {
        error = avio_open( &_formatCtx->pb, path.c_str(), AVIO_FLAG_WRITE );
        if ( error < 0 ) {
            std::cout << "there was an error opening output file " << path << ".  aborting" << std::endl;
            return -3;
        }
    }

    //write header
    error = avformat_write_header( _formatCtx, &_dict );
    if ( error < 0 ) {
        std::cout << "an error occurred writing header. aborting" << std::endl;
        return -4;
    }

    return 0;
}

/**
 initialize the output stream
 @param ost: the output stream
 @param formatCtx: the format context
 @param codec: the output codec
 @param codecId: the ffmpeg enumerated id of the codec
 @returns: -1 = encoder not found, -2 = stream could not be allocated, -3 = encoding context could not be allocated
*/
int16_t MP4Writer::AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId )
{
    AVCodecContext * ctx; //TODO not sure why this is here, could just set ost->enc directly
    int i;

    //detect the encoder
    *codec = avcodec_find_encoder( codecId );
    if ( (*codec) == NULL ) {
        std::cout << "could not find encoder.  aborting" << std::endl;
        return -1;
    }

    //allocate stream
    ost->st = avformat_new_stream( formatCtx, NULL );
    if ( ost->st == NULL ) {
        std::cout << "could not allocate stream.  aborting" << std::endl;
        return -2;
    }

    //allocate encoding context
    ost->st->id = formatCtx->nb_streams - 1;
    ctx = avcodec_alloc_context3( *codec );
    if ( ctx == NULL ) {
        std::cout << "could not allocate encoding context.  aborting" << std::endl;
        return -3;
    }

    ost->enc = ctx;

    //set context params
    ctx->codec_id = AV_CODEC_ID_H264;
    ctx->bit_rate = 4000 * 1000;
    ctx->width = _width;
    ctx->height = _height;
    ost->st->time_base = AVRational{ 1, 12 };
    ctx->time_base = ost->st->time_base;
    ctx->gop_size = 1;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    //if necessary, set stream headers and formats separately
    if ( formatCtx->oformat->flags & AVFMT_GLOBALHEADER ) {
        std::cout << "setting stream and headers to be separate" << std::endl;
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    return 0;
}

/**
 open the video for writing
 @param formatCtx: the format context
 @param codec: output codec
 @param ost: output stream
 @param optArg: dictionary
 @return: -1 = error opening codec, -2 = error allocating new frame, -3 = error copying stream params
*/
int16_t MP4Writer::OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg )
{
    int error;
    AVCodecContext * ctx = ost->enc;
    AVDictionary * dict = NULL;
    av_dict_copy( &dict, optArg, 0 );

    //open codec
    error = avcodec_open2( ctx, codec, &dict );
    av_dict_free( &dict );
    if ( error < 0 ) {
        std::cout << "there was an error opening the codec.  aborting" << std::endl;
        return -1;
    }

    //allocate new frame
    ost->frame = AllocPicture( ctx->pix_fmt, ctx->width, ctx->height );
    if ( ost->frame == NULL ) {
        std::cout << "there was an error allocating a new frame.  aborting" << std::endl;
        return -2;
    }

    //copy stream params
    error = avcodec_parameters_from_context( ost->st->codecpar, ctx );
    if ( error < 0 ) {
        std::cout << "could not copy stream parameters.  aborting" << std::endl;
        return -3;
    }

    return 0;
}

/**
 allocate a new frame
 @param pixFmt: ffmpeg enumerated pixel format
 @param width: output width
 @param height: output height
 @returns: an initialized frame
*/
AVFrame * MP4Writer::AllocPicture( enum AVPixelFormat pixFmt, int width, int height )
{
    AVFrame * picture;
    int error;

    //allocate the frame
    picture = av_frame_alloc();
    if ( picture == NULL ) {
        std::cout << "there was an error allocating the picture" << std::endl;
        return NULL;
    }

    picture->format = pixFmt;
    picture->width = width;
    picture->height = height;

    //allocate the frame's data buffer
    error = av_frame_get_buffer( picture, 32 );
    if ( error < 0 ) {
        std::cout << "could not allocate frame data" << std::endl;
        return NULL;
    }
    picture->pts = 0;
    return picture;
}

/**
 convert raw RGB buffer to YUV frame
 @return: frame that contains image data
*/
AVFrame * MP4Writer::GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height )
{
    int error;
    AVCodecContext * ctx = ost->enc;

    //prepare the frame
    error = av_frame_make_writable( ost->frame );
    if ( error < 0 ) {
        std::cout << "could not make frame writeable" << std::endl;
        return NULL;
    }

    //TODO set this context one time per run, or even better, one time at init
    //convert RGB to YUV
    struct SwsContext* fooContext = sws_getContext( width, height, AV_PIX_FMT_BGR24,
        width, height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );
    int inLinesize[1] = { 3 * width }; // BGR24 stride: 3 bytes per pixel
    uint8_t * inData[1] = { imgData };
    int sliceHeight = sws_scale( fooContext, inData, inLinesize, 0, height, ost->frame->data, ost->frame->linesize );
    sws_freeContext( fooContext );

    ost->frame->pts = ost->next_pts++;
    //TODO does the frame need to be returned here as it is available at the class level?
    return ost->frame;
}

/**
 write frame to file
 @param formatCtx: the output format context
 @param timeBase: the framerate
 @param stream: output stream
 @param packet: data packet
 @returns: see return values for av_interleaved_write_frame
*/
int MP4Writer::WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet )
{
    av_packet_rescale_ts( packet, *timeBase, stream->time_base );
    packet->stream_index = stream->index;

    //write compressed file to media file
    return av_interleaved_write_frame( formatCtx, packet );
}

int16_t MP4Writer::Write( std::vector<ImgData> & imgData )
{
    int16_t errorCount = 0;
    int16_t retVal = 0;
    bool countingUp = true;
    size_t i = 0;
    while ( true ) {
        //don't show first frame again when counting back down
        if ( !countingUp && i == 0 ) {
            break;
        }
        uint8_t * pixels = imgData[i].GetBits( imgData[i].mp4Input );
        AddFrame( pixels );

        //handle inc/dec without repeating last frame
        if ( countingUp ) {
            if ( i == imgData.size() -1 ) {
                countingUp = false;
                i--;
            }
            else {
                i++;
            }
        }
        else {
            i--;
        }
    }
    Finalize();
    return 0; //TODO return error code
}

/**
 add another frame to output video
 @param imgData: the raw image data
 @returns -1 = error encoding video frame, -2 = error writing frame
*/
int16_t MP4Writer::AddFrame( uint8_t * imgData )
{
    int error;
    AVCodecContext * ctx;
    AVFrame * frame;
    int gotPacket = 0;
    AVPacket pkt = { 0 };

    ctx = _ost.enc;
    av_init_packet( &pkt );

    frame = GetVideoFrame( imgData, &_ost, _width, _height );

    //encode the image
    error = avcodec_encode_video2( ctx, &pkt, frame, &gotPacket );
    if ( error < 0 ) {
        std::cout << "there was an error encoding the video frame" << std::endl;
        return -1;
    }

    //write the frame.  NOTE: this doesn't kick in until the encoder has received a certain number of frames
    if ( gotPacket ) {
        error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
        if ( error < 0 ) {
            std::cout << "the video frame could not be written" << std::endl;
            return -2;
        }
    }
    return 0;
}

/**
 finalize output video and cleanup
*/
int16_t MP4Writer::Finalize()
{
    av_write_trailer( _formatCtx );
    avcodec_free_context( &_ost.enc );
    av_frame_free( &_ost.frame);
    av_frame_free( &_ost.tmp_frame );
    avio_closep( &_formatCtx->pb );
    avformat_free_context( _formatCtx );
    sws_freeContext( _ost.sws_ctx );
    swr_free( &_ost.swr_ctx);
    return 0;
}

Usage

#include <FreeImage.h>
#include <MP4Writer.h>
#include <vector>

struct ImgData
{
    unsigned int width;
    unsigned int height;
    std::string path;
    FIBITMAP * mp4Input;

    uint8_t * GetBits( FIBITMAP * bmp ) { return FreeImage_GetBits( bmp ); }
};

int main()
{
     std::vector<ImgData> imgDataVec;
     //load images and push to imgDataVec
     MP4Writer mp4Writer;
     mp4Writer.SetHeight( 1200 ); //assumes 1:1 aspect ratio
     mp4Writer.Init();
     mp4Writer.SetOutput( "test.mp4" );
     mp4Writer.Write( imgDataVec );
}

Recommended answer

I don't see the codec being flushed anywhere in that code. You need to flush the encoder before writing the trailer, so that incomplete GOPs and frames delayed for whatever other reason are forced out of the codec. See any of the encoding examples included in the ffmpeg docs for the correct way to do it (e.g. https://github.com/FFmpeg/FFmpeg/blob/6d7192bcb7bbab17dc194e8dbb56c208bced0a92/doc/examples/encode_video.c#L166).
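The examples in the ffmpeg source flush by repeatedly calling the encode function with a NULL frame until it stops returning packets. As a rough illustration only, here is a minimal sketch of what that could look like in this class, reusing the same avcodec_encode_video2-based path as AddFrame; the Flush method is hypothetical (it is not part of the code above and would also need a declaration in MP4Writer.h), and it would be called at the top of Finalize, before av_write_trailer:

//hypothetical drain pass, called from Finalize() before av_write_trailer()
int16_t MP4Writer::Flush()
{
    AVCodecContext * ctx = _ost.enc;
    int gotPacket = 1;

    while ( gotPacket ) {
        AVPacket pkt = { 0 };
        av_init_packet( &pkt );

        //passing NULL instead of a frame puts the encoder into draining mode,
        //forcing out packets for frames it has buffered (lookahead, B-frames, etc.)
        int error = avcodec_encode_video2( ctx, &pkt, NULL, &gotPacket );
        if ( error < 0 ) {
            std::cout << "there was an error flushing the encoder" << std::endl;
            return -1;
        }

        if ( gotPacket ) {
            error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
            if ( error < 0 ) {
                std::cout << "a flushed packet could not be written" << std::endl;
                return -2;
            }
        }
    }
    return 0;
}

Once the loop exits with gotPacket == 0, every delayed frame has been written out and av_write_trailer can run as it does now. The newer avcodec_send_frame/avcodec_receive_packet API drains the same way, by sending a NULL frame and receiving packets until the encoder reports it is empty.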
