On the first iPhone, the only way to integrate the camera into an app was UIImagePickerController. iOS 4 then introduced the far more flexible AVFoundation framework.
UIImagePickerController provides a simple way to take photos and supports all the basic features.
AVFoundation, in contrast, gives you full control over the camera, e.g. changing hardware parameters programmatically or manipulating the live preview.
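For comparison, presenting the system camera with UIImagePickerController takes only a few lines. A minimal sketch, assuming the presenting view controller adopts UINavigationControllerDelegate and UIImagePickerControllerDelegate:
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera; // requires a device with a camera
picker.delegate = self; // receives imagePickerController:didFinishPickingMediaWithInfo:
[self presentViewController:picker animated:YES completion:nil];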
The AVFoundation classes involved:
AVCaptureDevice is the interface to the camera hardware. It is used to control hardware features such as the lens position, exposure, and flash.
AVCaptureDeviceInput provides the data coming from the device.
AVCaptureOutput is an abstract class describing the result of a capture session. Three concrete subclasses matter for still image capture: AVCaptureStillImageOutput, AVCaptureMetadataOutput, and AVCaptureVideoDataOutput.
AVCaptureSession manages the data flow between inputs and outputs, and generates runtime errors when problems occur.
AVCaptureVideoPreviewLayer is a subclass of CALayer that can automatically display the live image produced by the camera. It also has several utility methods for converting layer coordinates to device coordinates. It looks like an output, but it isn't; instead, it owns a session (while the session owns its outputs). You can use it to show a capture preview.
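A minimal preview sketch, assuming an already-configured session and a hosting view controller:
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds; // fill the hosting view
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // crop instead of letterboxing
[self.view.layer addSublayer:previewLayer];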
How do we capture an image?
1 Create a session: AVCaptureSession *tmpSession = [[AVCaptureSession alloc] init]; You can choose the capture quality through a session preset: AVCaptureSessionPresetHigh, AVCaptureSessionPresetMedium, AVCaptureSessionPresetLow, AVCaptureSessionPreset640x480, AVCaptureSessionPreset1280x720, or AVCaptureSessionPresetPhoto.
[tmpSession startRunning];
To reconfigure a running session:
[session beginConfiguration];
// remove a capture device, add a new capture device, or reset the preset here
[session commitConfiguration];
2 Add an input
To create an input, you first need a camera device (or a microphone).
// We can't instantiate AVCaptureDevice directly; instances are obtained via devicesWithMediaType: or defaultDeviceWithMediaType:
NSArray *availableCameraDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *backCameraDevice;
AVCaptureDevice *frontCameraDevice;
for (AVCaptureDevice *device in availableCameraDevices) {
    if (device.position == AVCaptureDevicePositionBack) {
        backCameraDevice = device;
    } else if (device.position == AVCaptureDevicePositionFront) {
        frontCameraDevice = device;
    }
}
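If you only need the system default camera (normally the back camera), defaultDeviceWithMediaType: is shorter:
AVCaptureDevice *defaultCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];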
// At this point you can configure the device's focus mode, flash mode, exposure compensation, white balance, and other capture-related properties. Note that before setting any of them you must first call lockForConfiguration: to lock the device, and call unlockForConfiguration when you are done.
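For example, a minimal sketch of switching a camera device to continuous autofocus, checking support first:
NSError *configError = nil;
if ([backCameraDevice lockForConfiguration:&configError]) {
    if ([backCameraDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        backCameraDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus; // configure while locked
    }
    [backCameraDevice unlockForConfiguration]; // always release the lock
} else {
    NSLog(@"could not lock device for configuration: %@", configError);
}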
Then wrap the device in an input and add it to the session:
NSError *error = nil;
AVCaptureDeviceInput *frontFacingCameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:frontCameraDevice error:&error];
if (!error) {
    if ([_session canAddInput:frontFacingCameraDeviceInput]) {
        [_session addInput:frontFacingCameraDeviceInput];
        self.inputDevice = frontFacingCameraDeviceInput;
    } else {
        NSLog(@"couldn't add front facing video input");
    }
}
3 Add an output
The available outputs: AVCaptureMovieFileOutput (writes video to a file), AVCaptureVideoDataOutput (processes frames from the captured video), AVCaptureAudioDataOutput (processes audio), and AVCaptureStillImageOutput (captures still images).
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init]; // to get data out of the session
if ([_captureSession canAddOutput:output]) {
    [_captureSession addOutput:output];
} else {
    // handle the failure
}
3.2 Use AVCaptureMovieFileOutput to save video to a file: 1) create the output; you can set the maximum recording duration, as well as the minimum free disk space required to keep recording 2) configure it to write to a specific file 3) use the delegate to determine whether the file was written successfully. A sketch of these steps follows.
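A minimal sketch of those three steps, assuming _captureSession is already running and self adopts AVCaptureFileOutputRecordingDelegate (the output path is illustrative):
// 1) create the output and bound the recording
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
movieFileOutput.maxRecordedDuration = CMTimeMakeWithSeconds(60, 600); // stop after 60 seconds
movieFileOutput.minFreeDiskSpaceLimit = 10 * 1024 * 1024; // stop before the disk fills up
if ([_captureSession canAddOutput:movieFileOutput]) {
    [_captureSession addOutput:movieFileOutput];
}
// 2) configure it to write to a specific file
NSURL *outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"]];
[movieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];
// 3) the delegate (implemented in the class) reports whether the file was written successfully
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    NSLog(@"recording finished, error: %@", error);
}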
3.3 Grabbing frames from the video
3.3.1 Set the pixel format of the captured frames via videoSettings
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// The queue must be serial so that video frames are delivered in order.
dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:videoDataOutputQueue];
// Set the pixel format.
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
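Frames then arrive on that queue via the sample buffer delegate; a minimal sketch of the callback, just reading each frame's dimensions:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    NSLog(@"received a %zux%zu frame", width, height);
}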
3.3.2 Capturing still images
You can specify the format you want to capture; the following requests JPEG:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];
If you use the JPEG format, you should not specify any additional compression: the output compresses automatically, and that compression is hardware accelerated. When you need the image data, call jpegStillImageNSDataRepresentation: to obtain the corresponding NSData; it will not compress the data a second time. (This method merges the image data and Exif metadata sample buffer attachments without re-compressing the image.)
How to capture the picture:
When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture, so you need to look for the connection whose input port is collecting video:
- (AVCaptureConnection *)findVideoConnection
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections) {
        for (AVCaptureInputPort *port in connection.inputPorts) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    return videoConnection;
}
- (void)takePicture:(DidCapturePhotoBlock)block
{
    AVCaptureConnection *videoConnection = [self findVideoConnection];
    [videoConnection setVideoScaleAndCropFactor:_scaleNum];
    // The CMSampleBuffer contains the image data and an error. It also carries metadata,
    // e.g. the Exif dictionary, as attachments. You can modify the attachments if you want,
    // but note the optimization for JPEG images discussed in "Pixel and Encoding Formats."
    NSLog(@"about to request a capture from: %@", _stillImageOutput);
    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            NSLog(@"attachments: %@", exifAttachments);
            // do something with the attachments
        } else {
            NSLog(@"no attachments");
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        NSLog(@"original image size: %@", NSStringFromCGSize(image.size));
        CGFloat squareLength = [[UIScreen mainScreen] applicationFrame].size.width;
        CGFloat headHeight = _previewLayer.bounds.size.height - squareLength;
        CGSize size = CGSizeMake(squareLength * 2, squareLength * 2);
        ...
    }];
}
Jianshu: http://www.jianshu.com/p/9d267825687e