AVCaptureSessionPresetPhoto

I can't seem to get pixel alignment working with AVFoundation at the AVCaptureSessionPresetPhoto resolution. Pixel alignment works fine at lower resolutions such as AVCaptureSessionPreset1280x720 (AVCaptureSessionPreset1280x720_Picture).

Specifically, when I uncomment the following lines:

if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
} else {
    NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
}

I get a misaligned image, as shown in the second image below. Any comments/suggestions would be greatly appreciated.

Here is my code for 1) setting up the capture session, 2) the delegate callback, and 3) saving one stream image to verify pixel alignment.

1. Capture session setup

- (void)InitCaptureSession {
    captureSession = [[AVCaptureSession alloc] init];
    if ([captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        [captureSession setSessionPreset:AVCaptureSessionPreset1280x720];
    } else {
        NSLog(@"Unable to set resolution to AVCaptureSessionPreset1280x720");
    }

//    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
//        [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
//    } else {
//        NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
//    }

    captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    [captureSession addInput:videoInput];
    [captureSession addOutput:captureOutput];
    [captureOutput release];

    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    CALayer *rootLayer = [previewView layer]; // self.view.layer;
    [rootLayer setMasksToBounds:YES];
    [previewLayer setFrame:[rootLayer bounds]];
    [rootLayer addSublayer:previewLayer];
    [captureSession startRunning];
}


2. The delegate callback

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    static int processedImage = 0;
    processedImage++;
    if (processedImage == 100) {
        [self SaveImage:sampleBuffer];
    }
    [pool drain];
}

3. Save a stream image to verify pixel alignment

// Create a UIImage from a CMSampleBufferRef and save it to verify pixel alignment
- (void)SaveImage:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    CvSize imageSize;
    imageSize.width = CVPixelBufferGetWidth(imageBuffer);
    imageSize.height = CVPixelBufferGetHeight(imageBuffer);
    IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);
    void *y_channel = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    char *tempPointer = image->imageData;
    memcpy(tempPointer, y_channel, image->imageSize);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        8, 8, image->width,
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *saveImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

Best Answer

In SaveImage, the 5th parameter of CGImageCreate is bytesPerRow, and you should not pass image->width there, because the number of bytes per row can be larger than the width due to memory alignment. This is the case with AVCaptureSessionPresetPhoto, where the width is 852 (with the iPhone 4 camera) while the bytes per row of the first plane (Y) is 864, since rows are 16-byte aligned.
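As a quick sanity check, here is a minimal diagnostic sketch (assuming the same sampleBuffer that is passed to SaveImage) that logs the Y-plane width against its bytes per row so the padding is visible directly:

// Hypothetical diagnostic: compare the Y-plane width with its bytes per row.
// At AVCaptureSessionPresetPhoto you would expect something like width = 852, bytesPerRow = 864.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t bpr   = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
NSLog(@"Y plane: width = %zu, bytesPerRow = %zu, padding = %zu", width, bpr, bpr - width);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);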

1/ You should get the bytes per row like this:

size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

2/ Then take the bytes per row into account when copying the pixels to the IplImage:
char *y_channel = (char *) CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

// row by row copy
for (int i = 0; i < image->height; i++)
  memcpy(tempPointer + i*image->widthStep, y_channel + i*bpr, image->width);

You can keep [NSData dataWithBytes:image->imageData length:image->imageSize]; unchanged, because image->imageSize accounts for the alignment (imageSize = height * widthStep).

3/ Finally, pass the IplImage widthStep as the 5th parameter of CGImageCreate:
CGImageCreate(image->width, image->height, 8, 8, image->widthStep, ...);
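
Putting the three changes together, a corrected SaveImage could look like the sketch below (untested; it keeps the OpenCV/IplImage setup and variable names from the question, and the cvReleaseImage call is an extra cleanup assumption that is not in the original):

- (void)SaveImage:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    CvSize imageSize;
    imageSize.width  = CVPixelBufferGetWidth(imageBuffer);
    imageSize.height = CVPixelBufferGetHeight(imageBuffer);
    IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);

    // 1/ bytes per row of the Y plane (may be larger than the width because of alignment)
    size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    char *y_channel = (char *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    char *tempPointer = image->imageData;

    // 2/ copy row by row, honoring both the source and destination strides
    for (int i = 0; i < image->height; i++)
        memcpy(tempPointer + i * image->widthStep, y_channel + i * bpr, image->width);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);

    // 3/ pass image->widthStep as bytesPerRow
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        8, 8, image->widthStep,
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *saveImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil);

    cvReleaseImage(&image); // added cleanup, not in the original code
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}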
