
Question


I am receiving H.264-encoded video data and G.711 PCM-encoded audio data from two different threads to mux / write into a MOV multimedia container.


The writer function signatures look like:

bool WriteAudio(const unsigned char *pEncodedData, size_t iLength);
bool WriteVideo(const unsigned char *pEncodedData, size_t iLength, bool const bIFrame);


And the function for adding audio and video streams looks like:

AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
    Log("Adding stream: %s.", avcodec_get_name(codecID));
    AVCodecContext* pCodecCtx;
    AVStream* pStream;

    /* find the encoder */
    AVCodec* codec = avcodec_find_encoder(codecID);
    if (!codec) {
        LogErr("Could not find encoder for %s", avcodec_get_name(codecID));
        return NULL;
    }

    pStream = avformat_new_stream(m_pFormatCtx, codec);
    if (!pStream) {
        LogErr("Could not allocate stream.");
        return NULL;
    }
    pStream->id = m_pFormatCtx->nb_streams - 1;
    pStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    pCodecCtx = pStream->codec;


    switch(codec->type) {
    case AVMEDIA_TYPE_VIDEO:
        pCodecCtx->codec_id = codecID;
        pCodecCtx->bit_rate = VIDEO_BIT_RATE;
        pCodecCtx->width = PICTURE_WIDTH;
        pCodecCtx->height = PICTURE_HEIGHT;
        pCodecCtx->gop_size = VIDEO_FRAME_RATE;
        pCodecCtx->pix_fmt = PIX_FMT_YUV420P;
        m_pVideoStream = pStream;
        break;

    case AVMEDIA_TYPE_AUDIO:
        pCodecCtx->codec_id = codecID;
        pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
        pCodecCtx->bit_rate = 64000;
        pCodecCtx->sample_rate = 8000;
        pCodecCtx->channels = 1;
        m_pAudioStream = pStream;
        break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
        pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return pStream;
}


Inside the WriteAudio(..) and WriteVideo(..) functions, I am creating an AVPacket using av_init_packet(...) and setting pEncodedData and iLength as packet.data and packet.size. I printed packet.pts and packet.dts, and both are equal to AV_NOPTS_VALUE.


Now, how do I calculate the PTS, DTS, and packet duration (packet.dts, packet.pts, and packet.duration) correctly for both audio and video data so that I can sync audio and video and play them properly? I saw many examples on the internet, but none of them make sense to me. I am new to FFmpeg, and my understanding may not be correct in some respects. I want to do this the appropriate way.

Thanks in advance!

Edit: In my video streams, there are no B-frames. So, I think PTS and DTS can be kept the same here.

Answer


PTS/DTS are timestamps; they should be set to the timestamps of the input data. I don't know where your data comes from, but any input has some form of timestamp associated with it: typically the timestamps of the input media file, or a metric derived from the system clock if you're recording from your sound card + webcam, and so on. You should convert these numbers into the stream's time base and then assign them to AVPacket.pts/dts.

