Cutting an MPEG-TS file via FFmpegWrapper?
I have MPEG-TS files on the device. I would like to cut a fairly-exact time off the start of the files on-device.
Using FFmpegWrapper as a base, I'm hoping to achieve this.
I'm a little lost on the C API of ffmpeg, however. Where do I start?
I tried just dropping all packets prior to a start PTS I was looking for, but this broke the video stream.
// rescale packet timestamps from the input stream's time base to the output stream's
packet->pts = av_rescale_q(packet->pts, inputStream.stream->time_base, outputStream.stream->time_base);
packet->dts = av_rescale_q(packet->dts, inputStream.stream->time_base, outputStream.stream->time_base);
// remember the first pts seen, so the cut offset is relative to the start of the file
if (startPts == 0) {
    startPts = packet->pts;
}
// drop every packet that lands before the requested cut point
if (packet->pts < cutTimeStartPts + startPts) {
    av_free_packet(packet);
    continue;
}
How do I cut off part of the start of the input file without destroying the video stream? When played back-to-back, I want the two cut segments to run together seamlessly.
ffmpeg -i time.ts -c:v libx264 -c:a copy -ss $CUT_POINT -map 0 -y after.ts
ffmpeg -i time.ts -c:v libx264 -c:a copy -to $CUT_POINT -map 0 -y before.ts
Seems to be what I need. I think the re-encode is needed so the video can start at an arbitrary point rather than at an existing keyframe. If there's a more efficient solution, that's great. If not, this is good enough.
EDIT: Here's my attempt. I'm cobbling together various pieces, copied from here, that I don't fully understand. I'm leaving off the "cutting" piece for now, to try to get audio + video encoded and written without layering on complexity. I get EXC_BAD_ACCESS in avcodec_encode_video2(...)
- (void)convertInputPath:(NSString *)inputPath outputPath:(NSString *)outputPath
options:(NSDictionary *)options progressBlock:(FFmpegWrapperProgressBlock)progressBlock
completionBlock:(FFmpegWrapperCompletionBlock)completionBlock {
dispatch_async(conversionQueue, ^{
FFInputFile *inputFile = nil;
FFOutputFile *outputFile = nil;
NSError *error = nil;
inputFile = [[FFInputFile alloc] initWithPath:inputPath options:options];
outputFile = [[FFOutputFile alloc] initWithPath:outputPath options:options];
[self setupDirectStreamCopyFromInputFile:inputFile outputFile:outputFile];
if (![outputFile openFileForWritingWithError:&error]) {
[self finishWithSuccess:NO error:error completionBlock:completionBlock];
return;
}
if (![outputFile writeHeaderWithError:&error]) {
[self finishWithSuccess:NO error:error completionBlock:completionBlock];
return;
}
AVRational default_timebase;
default_timebase.num = 1;
default_timebase.den = AV_TIME_BASE;
FFStream *outputVideoStream = outputFile.streams[0];
FFStream *inputVideoStream = inputFile.streams[0];
AVFrame *frame;
AVPacket inPacket, outPacket;
frame = avcodec_alloc_frame();
av_init_packet(&inPacket);
while (av_read_frame(inputFile.formatContext, &inPacket) >= 0) {
if (inPacket.stream_index == 0) {
int frameFinished;
avcodec_decode_video2(inputVideoStream.stream->codec, frame, &frameFinished, &inPacket);
// if (frameFinished && frame->pkt_pts >= starttime_int64 && frame->pkt_pts <= endtime_int64) {
if (frameFinished){
av_init_packet(&outPacket);
int output;
avcodec_encode_video2(outputVideoStream.stream->codec, &outPacket, frame, &output); // EXC_BAD_ACCESS is raised here
if (output) {
if (av_write_frame(outputFile.formatContext, &outPacket) != 0) {
fprintf(stderr, "convert(): error while writing video frame\n");
[self finishWithSuccess:NO error:nil completionBlock:completionBlock];
}
}
av_free_packet(&outPacket);
}
if (frame->pkt_pts > endtime_int64) {
break;
}
}
}
av_free_packet(&inPacket);
if (![outputFile writeTrailerWithError:&error]) {
[self finishWithSuccess:NO error:error completionBlock:completionBlock];
return;
}
[self finishWithSuccess:YES error:nil completionBlock:completionBlock];
});
}
The FFmpeg (libavformat/libavcodec, in this case) API maps the ffmpeg.exe command-line arguments pretty closely. To open a file, use avformat_open_input(); the last two arguments can be NULL. This fills in the AVFormatContext for you. Now you start reading frames using av_read_frame() in a loop. pkt.stream_index will tell you which stream each packet belongs to, and avformatcontext->streams[pkt.stream_index] is the accompanying stream information, which tells you which codec it uses, whether it's video or audio, and so on. Use avformat_close_input() to shut down.
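As a minimal sketch of just that demuxing side (using the same old-style API as the code later in this answer; "in.ts" is a placeholder path):

#include <stdio.h>
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *fmt = NULL;
    AVPacket pkt;

    av_register_all();
    if (avformat_open_input(&fmt, "in.ts", NULL, NULL) != 0)
        return 1;                          // could not open or parse the container
    avformat_find_stream_info(fmt, NULL);  // probe streams: codec, type, time_base

    while (av_read_frame(fmt, &pkt) >= 0) {
        AVStream *st = fmt->streams[pkt.stream_index];
        printf("stream %d (%s) pts=%lld\n", pkt.stream_index,
               st->codec->codec_type == AVMEDIA_TYPE_VIDEO ? "video" : "audio/other",
               (long long)pkt.pts);
        av_free_packet(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}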
For muxing, you use the inverse; see the muxing example for details. Basically it's allocate, avio_open2(), add a stream for each existing stream in the input file (basically context->streams[]), avformat_write_header(), av_interleaved_write_frame() in a loop, and av_write_trailer() to shut down (freeing the allocated context at the end).
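As a sketch, with the same caveats (no error handling, and the caller still drives the packet loop):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

// Mirror every input stream into a new output file, stream-copy style.
// Sketch only: infmt is an input context opened as in the previous snippet.
static AVFormatContext *open_output(AVFormatContext *infmt, const char *path) {
    AVFormatContext *outctx = avformat_alloc_context();
    outctx->oformat = av_guess_format(NULL, path, NULL);  // pick the muxer from the file name
    avio_open2(&outctx->pb, path, AVIO_FLAG_WRITE, NULL, NULL);

    for (unsigned n = 0; n < infmt->nb_streams; n++) {
        AVStream *outst = avformat_new_stream(outctx, NULL);
        avcodec_copy_context(outst->codec, infmt->streams[n]->codec);  // copy codec parameters
    }
    avformat_write_header(outctx, NULL);
    // ... av_interleaved_write_frame(outctx, &pkt) per packet ...
    // then av_write_trailer(outctx) and avformat_free_context(outctx)
    return outctx;
}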
Encoding/decoding of the video stream(s) is done using libavcodec. For each AVPacket you get from the demuxer, use avcodec_decode_video2(). Use avcodec_encode_video2() for encoding the output AVFrame. Note that both introduce delay, so the first few calls to each function will not return any data, and you need to flush the cached data by calling each function with NULL input data to get the tail packets/frames out of it. av_interleaved_write_frame() will interleave packets correctly so the video/audio streams do not desync (as in: video packets with a given timestamp occurring megabytes after the audio packets with the same timestamp in the .ts file).
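The EOF flush, for example, boils down to this pattern; a sketch only, where dec and enc are already-opened decoder/encoder contexts and the commented lines stand in for the normal encode/write path:

#include <libavcodec/avcodec.h>

// Drain the delay out of a video decoder and encoder at EOF. Sketch only.
static void flush_video(AVCodecContext *dec, AVCodecContext *enc) {
    AVFrame *frame = av_frame_alloc();
    AVPacket pkt, out;
    int gotFrame, gotPacket;

    av_init_packet(&pkt);
    pkt.data = NULL;  // a NULL/empty packet asks the decoder for its buffered frames
    pkt.size = 0;
    do {
        avcodec_decode_video2(dec, frame, &gotFrame, &pkt);
        // if (gotFrame) encode the frame as usual
    } while (gotFrame);

    do {
        av_init_packet(&out);
        out.data = NULL;  // let the encoder allocate the output buffer
        out.size = 0;
        avcodec_encode_video2(enc, &out, NULL, &gotPacket);  // NULL frame = drain
        // if (gotPacket) hand &out to av_interleaved_write_frame()
        av_free_packet(&out);
    } while (gotPacket);

    av_frame_free(&frame);
}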
If you need more detailed examples for avcodec_decode_video2, avcodec_encode_video2, av_read_frame or av_interleaved_write_frame, just Google "$function example" and you'll see full-fledged examples showing how to use them correctly. For x264 encoding, set the quality parameters when calling avcodec_open2() for the encoder. In the C API, you do that using an AVDictionary, e.g.:
AVDictionary *opts = NULL;
av_dict_set(&opts, "preset", "veryslow", 0);
// use either crf or b, not both! (both shown here only for illustration)
// See the link above on H264 encoding options
av_dict_set_int(&opts, "b", 1000, 0);   // target bitrate, in bits per second
av_dict_set_int(&opts, "crf", 10, 0);   // or constant-quality (CRF) mode
[edit] Oh, I forgot one part: the timestamping. Each AVPacket and AVFrame has a pts variable in its struct, and you can use that to decide whether to include the packet/frame in the output stream. So for audio you'd use AVPacket.pts from the demuxing step as a delimiter, and for video you'd use AVFrame.pts from the decoding step as a delimiter. Their respective documentation tells you what unit they are in.
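Tying that back to the question's cutting problem, the comparisons might look like this; a sketch, untested, where cut_seconds is the requested start time and st the source stream. The video test runs on decoded frames so the output can begin on an exact frame rather than on a keyframe:

#include <libavformat/avformat.h>

// Has this decoded video frame reached the cut point yet? Sketch only.
static int video_past_cut(const AVFrame *frame, const AVStream *st, double cut_seconds) {
    int64_t cut_pts = (int64_t)(cut_seconds / av_q2d(st->time_base));  // seconds -> pts units
    return frame->pkt_pts != AV_NOPTS_VALUE && frame->pkt_pts >= cut_pts;
}

// Audio is judged on demuxed packets instead, since it is copied rather than decoded.
static int audio_past_cut(const AVPacket *pkt, const AVStream *st, double cut_seconds) {
    int64_t cut_pts = (int64_t)(cut_seconds / av_q2d(st->time_base));
    return pkt->pts != AV_NOPTS_VALUE && pkt->pts >= cut_pts;
}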
[edit2] I see you're still having some issues without actual code, so here's a real (working) transcoder which re-codes video and re-muxes audio. It probably has tons of bugs, leaks and lacks proper error reporting, it also doesn't deal with timestamps (I'm leaving that to you as an exercise), but it does the basic things that you asked for:
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
static AVFormatContext *inctx, *outctx;
#define MAX_STREAMS 16
static AVCodecContext *inavctx[MAX_STREAMS];
static AVCodecContext *outavctx[MAX_STREAMS];
static int openInputFile(const char *file) {
int res;
inctx = NULL;
res = avformat_open_input(&inctx, file, NULL, NULL);
if (res != 0)
return res;
res = avformat_find_stream_info(inctx, NULL);
if (res < 0)
return res;
return 0;
}
static void closeInputFile(void) {
int n;
for (n = 0; n < inctx->nb_streams; n++)
if (inavctx[n]) {
avcodec_close(inavctx[n]);
avcodec_free_context(&inavctx[n]);
}
avformat_close_input(&inctx);
}
static int openOutputFile(const char *file) {
int res, n;
outctx = avformat_alloc_context();
outctx->oformat = av_guess_format(NULL, file, NULL);
if ((res = avio_open2(&outctx->pb, file, AVIO_FLAG_WRITE, NULL, NULL)) < 0)
return res;
for (n = 0; n < inctx->nb_streams; n++) {
AVStream *inst = inctx->streams[n];
AVCodecContext *inc = inst->codec;
if (inc->codec_type == AVMEDIA_TYPE_VIDEO) {
// video decoder
inavctx[n] = avcodec_alloc_context3(inc->codec);
avcodec_copy_context(inavctx[n], inc);
if ((res = avcodec_open2(inavctx[n], avcodec_find_decoder(inc->codec_id), NULL)) < 0)
return res;
// video encoder
AVCodec *encoder = avcodec_find_encoder_by_name("libx264");
AVStream *outst = avformat_new_stream(outctx, encoder);
outst->codec->width = inavctx[n]->width;
outst->codec->height = inavctx[n]->height;
outst->codec->pix_fmt = inavctx[n]->pix_fmt;
outst->codec->time_base = inst->time_base;  // the encoder will not open without a valid time base
AVDictionary *dict = NULL;
av_dict_set(&dict, "preset", "veryslow", 0);
av_dict_set_int(&dict, "crf", 10, 0);
outavctx[n] = avcodec_alloc_context3(encoder);
avcodec_copy_context(outavctx[n], outst->codec);
if ((res = avcodec_open2(outavctx[n], encoder, &dict)) < 0)
return res;
} else if (inc->codec_type == AVMEDIA_TYPE_AUDIO) {
// stream copy: mirror the audio stream and copy its codec parameters across
AVStream *outst = avformat_new_stream(outctx, inc->codec);
avcodec_copy_context(outst->codec, inc);
inavctx[n] = outavctx[n] = NULL;  // NULL marks this stream as "pass packets straight through"
} else {
fprintf(stderr, "Don’t know what to do with stream %d\n", n);
return -1;
}
}
if ((res = avformat_write_header(outctx, NULL)) < 0)
return res;
return 0;
}
static void closeOutputFile(void) {
int n;
av_write_trailer(outctx);
for (n = 0; n < outctx->nb_streams; n++)
if (outctx->streams[n]->codec)
avcodec_close(outctx->streams[n]->codec);
avformat_free_context(outctx);
}
static int encodeFrame(int stream_index, AVFrame *frame, int *gotOutput) {
AVPacket outPacket;
int res;
av_init_packet(&outPacket);
outPacket.data = NULL;  // let the encoder allocate the output buffer
outPacket.size = 0;
if ((res = avcodec_encode_video2(outavctx[stream_index], &outPacket, frame, gotOutput)) < 0) {
fprintf(stderr, "Failed to encode frame\n");
return res;
}
if (*gotOutput) {
outPacket.stream_index = stream_index;
if ((res = av_interleaved_write_frame(outctx, &outPacket)) < 0) {
fprintf(stderr, "Failed to write packet\n");
return res;
}
}
av_free_packet(&outPacket);
return 0;
}
static int decodePacket(int stream_index, AVPacket *pkt, AVFrame *frame, int *frameFinished) {
int res;
if ((res = avcodec_decode_video2(inavctx[stream_index], frame,
frameFinished, pkt)) < 0) {
fprintf(stderr, "Failed to decode frame\n");
return res;
}
if (*frameFinished) {
int hasOutput;
frame->pts = frame->pkt_pts;  // carry the demuxed packet's pts through to the encoder
return encodeFrame(stream_index, frame, &hasOutput);
} else {
return 0;
}
}
int main(int argc, char *argv[]) {
char *input = argv[1];
char *output = argv[2];
int res, n;
printf("Converting %s to %s\n", input, output);
av_register_all();
if ((res = openInputFile(input)) < 0) {
fprintf(stderr, "Failed to open input file %s\n", input);
return res;
}
if ((res = openOutputFile(output)) < 0) {
fprintf(stderr, "Failed to open output file %s\n", input);
return res;
}
AVFrame *frame = av_frame_alloc();
AVPacket inPacket;
av_init_packet(&inPacket);
while (av_read_frame(inctx, &inPacket) >= 0) {
if (inavctx[inPacket.stream_index] != NULL) {
int frameFinished;
if ((res = decodePacket(inPacket.stream_index, &inPacket, frame, &frameFinished)) < 0) {
return res;
}
av_free_packet(&inPacket);  // the decoder copies what it needs; free our reference
} else {
if ((res = av_interleaved_write_frame(outctx, &inPacket)) < 0) {
fprintf(stderr, "Failed to write packet\n");
return res;
}
}
}
for (n = 0; n < inctx->nb_streams; n++) {
if (inavctx[n]) {
// flush decoder
int frameFinished;
do {
inPacket.data = NULL;
inPacket.size = 0;
if ((res = decodePacket(n, &inPacket, frame, &frameFinished)) < 0)
return res;
} while (frameFinished);
// flush encoder
int gotOutput;
do {
if ((res = encodeFrame(n, NULL, &gotOutput)) < 0)
return res;
} while (gotOutput);
}
}
av_free_packet(&inPacket);
closeInputFile();
closeOutputFile();
return 0;
}
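To build the transcoder above against a contemporaneous FFmpeg, something like gcc transcode.c -o transcode $(pkg-config --cflags --libs libavformat libavcodec libavutil) should do, and it runs as ./transcode in.ts out.ts (transcode.c and the file names are placeholders; the program takes the input and output paths as its two arguments).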