I'm trying to control how videos made by my app are displayed in the Photos app on iOS. All of the videos I make start with a black frame and then things fade in and out, etc. After saving these to Photos, Apple takes the first frame (a black square) and uses it as the thumbnail in Photos. I want to change this so I can set my own thumbnail that lets people easily recognize the video.

Since I couldn't find any built-in API for this, I'm trying to hack it by adding a thumbnail I generate as the first frame of the video. I'm trying to use AVFoundation for this, but I'm running into some issues.
My code throws the following error: [AVAssetReaderTrackOutput copyNextSampleBuffer] cannot copy next sample buffer before adding this output to an instance of AVAssetReader (using -addOutput:) and calling -startReading on that asset reader, even though those methods are being called.
Here is my code:
    AVAsset *asset = [[AVURLAsset alloc] initWithURL:fileUrl options:nil];
    UIImage *frame = [self generateThumbnail:asset];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:640], AVVideoWidthKey,
                                   [NSNumber numberWithInt:360], AVVideoHeightKey,
                                   nil];

    AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:nil];
    AVAssetReaderOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[asset.tracks firstObject]
                                                                                   outputSettings:nil];
    [assetReader addOutput:readerOutput];

    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:path
                                                           fileType:AVFileTypeMPEG4
                                                              error:nil];
    NSParameterAssert(videoWriter);

    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    [assetReader startReading];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:frame.CGImage andSize:frame.size];

    BOOL append_ok = NO;
    while (!append_ok) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
            CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
            NSParameterAssert(bufferPool != NULL);
            [NSThread sleepForTimeInterval:0.05];
        } else {
            [NSThread sleepForTimeInterval:0.1];
        }
    }
    CVBufferRelease(buffer);

    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        CMSampleBufferRef nextBuffer;
        while (writerInput.readyForMoreMediaData) {
            nextBuffer = [readerOutput copyNextSampleBuffer];
            if (nextBuffer) {
                NSLog(@"Wrote: %zu bytes", CMSampleBufferGetTotalSampleSize(nextBuffer));
                [writerInput appendSampleBuffer:nextBuffer];
            } else {
                [writerInput markAsFinished];
                [videoWriter finishWritingWithCompletionHandler:^{
                    //int res = videoWriter.status;
                }];
                break;
            }
        }
    }];
I've experimented with this a bit, to no avail. I've also seen some crashes that seem related to the file format. I'm using mp4 files (not sure how to determine how they're compressed or whether that's supported), but I couldn't get it to work even with an uncompressed .mov file (made with Photo Booth on a Mac).

Any ideas what I'm doing wrong?
Best answer
Just had the same problem.

After your function returns, ARC releases your assetReader. But the block reading buffers from readerOutput keeps trying to read content.

When the assetReader goes away, readerOutput is disconnected from it, hence the error saying you need to reattach it to an AVAssetReader.

The fix is to make sure the assetReader is not released, e.g. by keeping it in a property.
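A minimal sketch of that fix (the class name `VideoExporter` and the method `startReadingAsset:` are hypothetical; the point is only that the reader and its output are held in strong properties instead of locals):

```objc
@interface VideoExporter : NSObject
// Strong properties keep the reader alive while the asynchronous
// requestMediaDataWhenReadyOnQueue: block is still pulling samples.
@property (nonatomic, strong) AVAssetReader *assetReader;
@property (nonatomic, strong) AVAssetReaderOutput *readerOutput;
@end

@implementation VideoExporter

- (void)startReadingAsset:(AVAsset *)asset
{
    self.assetReader = [AVAssetReader assetReaderWithAsset:asset error:nil];
    self.readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[asset.tracks firstObject]
                                                                   outputSettings:nil];
    [self.assetReader addOutput:self.readerOutput];
    [self.assetReader startReading];
    // ...set up the writer as in the question, and have the block call
    // [self.readerOutput copyNextSampleBuffer] so the reader outlives
    // the enclosing method call.
}

@end
```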
Regarding ios - AVFoundation adding a first frame to a video, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27608510/