Is there a way to improve the time between shots with AVCaptureStillImageOutput?
Problem description
I currently use the following code to shoot a series of images:
- (void)shootSeries:(int)photos {
    if (photos == 0) {
        [self mergeImages];
    } else {
        [output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
            ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                NSLog(@"Shot picture %d.", 7 - photos);
                [self shootSeries:(photos - 1)];

                CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                int dataSize = CVPixelBufferGetDataSize(pixelBuffer);
                CFDataRef data = CFDataCreate(NULL, (const UInt8 *)CVPixelBufferGetBaseAddress(pixelBuffer), dataSize);
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

                CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(data);
                CFRelease(data);
                CGImageRef image = CGImageCreate(CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8, 32,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorspace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                                 dataProvider, NULL, true, kCGRenderingIntentDefault);
                CFRelease(dataProvider);
                CFArrayAppendValue(shotPictures, image);
                CFRelease(image);
            }];
    }
}
While this works rather well, it is very slow. How do apps like ClearCam shoot a series of pictures much faster than this, and how can I do it too?
Recommended answer
After capturing each image, store the sample buffer in an array; once you're done taking all your photos, THEN convert them to images (or, in your case, CGImageRefs). That keeps the expensive pixel-copy and CGImageCreate work out of the capture loop, so the next shot can start sooner.
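The answer's idea can be sketched against the question's code. This is a minimal illustration, not the answerer's exact implementation: it assumes the same `output` and `connection` ivars as the question, plus a hypothetical `NSMutableArray` ivar `storedBuffers` created elsewhere; the method names `shootSeriesDeferred:` and `convertStoredBuffers` are likewise made up for the sketch.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: capture fast, convert later. Assumes ivars `output`, `connection`,
// and `storedBuffers` (an NSMutableArray) exist; helper names are hypothetical.
- (void)shootSeriesDeferred:(int)photos {
    if (photos == 0) {
        // All shots taken: only now do the expensive CGImage conversion.
        [self convertStoredBuffers];
        return;
    }
    [output captureStillImageAsynchronouslyFromConnection:connection
                                        completionHandler:
        ^(CMSampleBufferRef sampleBuffer, NSError *error) {
            if (sampleBuffer != NULL) {
                // Adding to the array retains the buffer so it outlives the
                // handler. No pixel work happens here.
                [storedBuffers addObject:(__bridge id)sampleBuffer];
            }
            // Kick off the next capture immediately.
            [self shootSeriesDeferred:(photos - 1)];
        }];
}
```

One caveat: capture sample buffers are typically backed by a fixed-size pool, so holding a large number of them may eventually stall delivery; if you hit that, convert in batches or copy the pixel data out instead of keeping the raw buffers.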