Getting a still image from the video output on the iPhone


Problem Description

I am writing an application to show stats on the light conditions as seen by the iPhone camera. I take an image every second and then perform calculations on it.

To capture an image, I am using the following method:

-(void) captureNow
{
    // Find the video connection on the still image output
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    // Capture a frame and keep the most recent image around for analysis
    [captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         latestImage = [[UIImage alloc] initWithData:imageData];
     }];
}

However, the captureStillImageAsynchronouslyFromConnection:completionHandler: method causes the 'shutter' sound to be played by the phone, which is no good for my application, as it will be capturing images constantly.

I have read that it is not possible to disable this sound effect. Instead, I want to capture frames from the video input of the phone:

AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

and hopefully turn these into UIImage objects.

How would I achieve this? I don't know much about how AVFoundation works - I downloaded some example code and modified it for my purposes.

Recommended Answer

Don't use the still image output for this. Instead, grab frames from the device's video camera and process the data contained in the pixel buffer you receive as an AVCaptureVideoDataOutputSampleBufferDelegate.

You can set up a video connection using code like the following:

// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput])
{
    [captureSession addInput:videoInput];
}

// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}

You'll then need to process these frames in a delegate method that looks like the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    size_t bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    size_t bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    // Process pixel buffer bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}

The raw pixel bytes for your BGRA image will be contained in the buffer starting at CVPixelBufferGetBaseAddress(cameraFrame). You can iterate over those bytes to obtain the values you need.
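
As a rough illustration (not part of the original answer), a brute-force pass over that buffer might look like the following, standing in for the "Process pixel buffer bytes here" comment above. It walks each row using CVPixelBufferGetBytesPerRow() so any row padding is skipped, and averages the B, G, and R channel values of the frame:

// Sketch only: average the B, G, and R channel values of a 32BGRA frame.
// Uses the cameraFrame, bufferWidth, and bufferHeight variables from the
// delegate method above, while the buffer's base address is still locked.
unsigned char *pixelBytes = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);

unsigned long long channelSum = 0;
for (size_t row = 0; row < bufferHeight; row++)
{
    unsigned char *rowStart = pixelBytes + row * bytesPerRow;
    for (size_t column = 0; column < bufferWidth; column++)
    {
        unsigned char *pixel = rowStart + column * 4;
        channelSum += pixel[0] + pixel[1] + pixel[2];   // B + G + R
    }
}
float averageChannelValue = (float)channelSum / (float)(bufferWidth * bufferHeight * 3);
NSLog(@"Average channel value: %f", averageChannelValue);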

However, you'll find that any operation performed over the entire image on the CPU will be a little slow. You can use Accelerate to help with an average color operation like the one you want here. I've used vDSP_meanv() in the past to average luminance values once you have them in an array. For something like that, you might be best served grabbing YUV planar data from the camera instead of the BGRA values I pull down here.
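
As a hedged sketch of that YUV approach (again, not code from the answer itself): if the video output's settings request kCVPixelFormatType_420YpCbCr8BiPlanarFullRange instead of 32BGRA, plane 0 of each pixel buffer holds one 8-bit luma value per pixel, which can be converted to floats and averaged with a couple of vDSP calls:

// Sketch for the body of captureOutput:didOutputSampleBuffer:fromConnection:,
// assuming the output's videoSettings request kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
// Requires linking the Accelerate framework and #import <Accelerate/Accelerate.h>.
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);

// Plane 0 of a bi-planar 4:2:0 buffer is the luma (Y) plane
size_t width = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t height = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
unsigned char *lumaBytes = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

// Convert the 8-bit luma plane to floats row by row (rows may be padded)
size_t pixelCount = width * height;
float *lumaFloats = (float *)malloc(pixelCount * sizeof(float));
for (size_t row = 0; row < height; row++)
{
    vDSP_vfltu8(lumaBytes + row * bytesPerRow, 1, lumaFloats + row * width, 1, width);
}

// Average the whole frame's luminance in a single Accelerate call
float meanLuminance = 0.0f;
vDSP_meanv(lumaFloats, 1, &meanLuminance, pixelCount);
NSLog(@"Mean luminance: %f", meanLuminance);

free(lumaFloats);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

Switching formats would only require changing the NSNumber passed for kCVPixelBufferPixelFormatTypeKey in the setVideoSettings: call above.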

I've also written an open source framework for processing video using OpenGL ES, although I don't yet have whole-image reduction operations in there like you'd need for this kind of image analysis. My histogram generator is probably the closest thing I have to what you're trying to do.
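
For a sense of what such a histogram pass involves (a generic sketch, not code from that framework), the same BGRA frame from the delegate method above could be binned into 256 luminance buckets on the CPU; at full resolution this is exactly the kind of per-pixel loop that benefits from being moved to the GPU:

// Illustrative sketch only: a 256-bin luminance histogram computed on the CPU
// from the 32BGRA frame, using the variables from the delegate method above.
NSUInteger histogram[256] = {0};
unsigned char *pixelBytes = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);

for (size_t row = 0; row < bufferHeight; row++)
{
    unsigned char *rowStart = pixelBytes + row * bytesPerRow;
    for (size_t column = 0; column < bufferWidth; column++)
    {
        unsigned char *pixel = rowStart + column * 4;
        // Integer approximation of Rec. 601 luma from the B, G, and R bytes
        unsigned int luma = (29 * pixel[0] + 150 * pixel[1] + 77 * pixel[2]) >> 8;
        histogram[luma]++;
    }
}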
