So I've been working on a video capture project that lets users capture images and video and apply filters. I'm using the AVFoundation framework; I've successfully captured still images and captured video frames as UIImage objects... the only thing left is recording video.

Here is my code:

- (void)initCapture {
    // Capture session at a medium preset.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Default video device (the camera) wrapped in a capture input.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
        return;
    }
    [session addInput:input];

    // Still image output for photo capture (JPEG).
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];

    // Video data output that delivers raw frames to the delegate below.
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Deliver sample buffers on a dedicated serial queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask for BGRA frames so they can be fed straight into a bitmap context.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    [session addOutput:captureOutput];

    [session startRunning];
}




- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Called on the camera queue, so it needs its own autorelease pool.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer and wrap its BGRA bytes in a bitmap context.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // The camera delivers frames rotated, hence the Right orientation.
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);

    // Apply the app's custom filter object.
    UIImage *ima = [filter applyFilter:image];

    /*if(isRecording == YES)
    {
        [imageArray addObject:ima];
    }
     NSLog(@"Count= %d",imageArray.count);*/

    // Push the filtered frame to the preview image view on the main thread.
    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:ima waitUntilDone:YES];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}


I tried storing the UIImages in a mutable array, but that was a silly idea.
Any thoughts?
Any help would be greatly appreciated.

Best Answer

Are you using CIFilter? If not, you may want to look into it for fast, GPU-based transformations.
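
For example, here is a minimal sketch (not code from the question or from RosyWriter) of running a Core Image filter directly on the captured pixel buffer. The CISepiaTone filter is just a placeholder for whatever filter chain you actually use:

#import <CoreImage/CoreImage.h>

- (CIImage *)filteredImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Wrap the pixel buffer without copying its bytes.
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Placeholder filter; substitute your own filter chain here.
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:inputImage forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];

    return [sepia outputImage];
}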

You may want to record the frames directly to an AVAssetWriter right after you generate them. Look at Apple's RosyWriter sample code for guidance on how to do this. In summary, it uses an AVAssetWriter to capture the frames to a temporary file, and then when done stores that file to the camera roll.
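
For reference, a minimal sketch of what the writer setup could look like; this is my own assumption rather than RosyWriter's actual code. assetWriter, writerInput and pixelBufferAdaptor are hypothetical ivars, outputURL should point at a temporary file, and the 480x360 size is only a placeholder that roughly matches AVCaptureSessionPresetMedium:

#import <AVFoundation/AVFoundation.h>

// Called once when recording starts.
- (void)startRecordingToURL:(NSURL *)outputURL
{
    NSError *error = nil;
    assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    // H.264 compression settings for the recorded movie.
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:480], AVVideoWidthKey,
                                   [NSNumber numberWithInt:360], AVVideoHeightKey,
                                   nil];
    writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                      outputSettings:videoSettings] retain];
    writerInput.expectsMediaDataInRealTime = YES;

    // Adaptor that lets us append CVPixelBuffers to the writer input.
    NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                                      (NSString *)kCVPixelBufferPixelFormatTypeKey,
                                      [NSNumber numberWithInt:480], (NSString *)kCVPixelBufferWidthKey,
                                      [NSNumber numberWithInt:360], (NSString *)kCVPixelBufferHeightKey,
                                      nil];
    pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor
                           assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                      sourcePixelBufferAttributes:bufferAttributes] retain];

    [assetWriter addInput:writerInput];
    [assetWriter startWriting];
    // In practice you would usually start the session at the presentation time
    // of the first appended frame rather than at kCMTimeZero.
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
}

When recording stops, you would call [writerInput markAsFinished] and [assetWriter finishWriting], then copy the finished temporary file to the camera roll (for example with ALAssetsLibrary's writeVideoAtPathToSavedPhotosAlbum:completionBlock:).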

One caveat, though: RosyWriter got about 4 fps on my 4th-generation iPod touch, because it brute-forces the pixel changes on the CPU. Core Image does GPU-based filtering, and I was able to get 12 fps, which I still don't think is as high as it should be.
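
Tying the two sketches together inside captureOutput:didOutputSampleBuffer:fromConnection:, each filtered frame would be rendered back into a pixel buffer from the adaptor's pool and appended with its original presentation timestamp. Again, this is an assumption rather than RosyWriter's code; it reuses the hypothetical writerInput and pixelBufferAdaptor ivars from above, and ciContext is assumed to be a CIContext ivar created once (e.g. with [CIContext contextWithOptions:nil]):

// Inside captureOutput:didOutputSampleBuffer:fromConnection:, while recording.
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CIImage *filteredImage = [self filteredImageFromSampleBuffer:sampleBuffer];

if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferRef renderedBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       pixelBufferAdaptor.pixelBufferPool,
                                       &renderedBuffer);
    if (renderedBuffer != NULL) {
        // Render the Core Image output into the writer's buffer, then hand it
        // to the adaptor together with the frame's original timestamp.
        [ciContext render:filteredImage toCVPixelBuffer:renderedBuffer];
        [pixelBufferAdaptor appendPixelBuffer:renderedBuffer
                         withPresentationTime:presentationTime];
        CVPixelBufferRelease(renderedBuffer);
    }
}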

Good luck!
