I want to try some image filter features on the iPhone, similar to what Instagram does.
I use UIImagePickerController to get photos from the camera roll. I understand that the image the picker returns is scaled down to save memory, and that loading the full original image into a UIImage is unwise. But how do I process the image and then save it back out at its original pixel dimensions?
I am using an iPhone 4S as the development device.

The original photo in the camera roll is 3264 * 2448.

The image returned by UIImagePickerControllerOriginalImage is 1920 * 1440.

The image returned by UIImagePickerControllerEditedImage is 640 * 640.

imageViewOld (the image returned by UIImagePickerControllerOriginalImage, cropped with the UIImagePickerControllerCropRect [80, 216, 1280, 1280]) is 1280 * 1224.

imageViewNew (the image returned by UIImagePickerControllerOriginalImage, cropped with the crop rect doubled to [80, 216, 2560, 2560]) is 1840 * 1224.

I checked the same photo in Instagram, and it shows up as 1280 * 1280.

My questions are:

  • Why doesn't UIImagePickerControllerOriginalImage return the "original" photo? Why is it reduced to 1920 * 1440?
  • Why doesn't UIImagePickerControllerEditedImage return the edited image at 1280 * 1280, since UIImagePickerControllerCropRect shows it was cropped to a 1280 * 1280 square?
  • How can I crop the original photo into a 2448 * 2448 image?

Thanks in advance. Here is my code:
      - (void)imagePickerController:(UIImagePickerController *)picker  didFinishPickingMediaWithInfo:(NSDictionary *)info
      {
    
       NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
       if ([mediaType isEqualToString:@"public.image"])
       {
    
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
    
        CGRect cropRect;
        cropRect = [[info valueForKey:@"UIImagePickerControllerCropRect"] CGRectValue];
    
        NSLog(@"Original width = %f height= %f ",imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height= 1920.000000
    
        NSLog(@"imageEdited width = %f height = %f",imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
    
        NSLog(@"corpRect %f %f %f %f", cropRect.origin.x, cropRect.origin.y , cropRect.size.width, cropRect.size.height);
        //corpRect 80.000000 216.000000 1280.000000 1280.000000
    
        CGRect rectNew = CGRectMake(cropRect.origin.x, cropRect.origin.y , cropRect.size.width*2, cropRect.size.height*2);
    
        CGRect rectOld = CGRectMake(cropRect.origin.x, cropRect.origin.y , cropRect.size.width, cropRect.size.height);
    
        CGImageRef imageRefNew = CGImageCreateWithImageInRect([imagePicked CGImage], rectNew);
        CGImageRef imageRefOld = CGImageCreateWithImageInRect([imagePicked CGImage], rectOld);
    
        UIImageView *imageViewNew = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefNew]];
        CGImageRelease(imageRefNew);
    
        UIImageView *imageViewOld = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefOld]];
        CGImageRelease(imageRefOld);
    
    
        NSLog(@"imageViewNew width = %f height = %f",imageViewNew.image.size.width, imageViewNew.image.size.height);
        //imageViewNew width = 1840.000000 height = 1224.000000
    
        NSLog(@"imageViewOld width = %f height = %f",imageViewOld.image.size.width, imageViewOld.image.size.height);
        //imageViewOld width = 1280.000000 height = 1224.000000
    
         UIImageWriteToSavedPhotosAlbum(imageEdited, nil, nil, NULL);
    
         // imageRotatedByDegrees: is a custom UIImage category method, not part of UIKit
         UIImageWriteToSavedPhotosAlbum([imageViewNew.image imageRotatedByDegrees:90.0], nil, nil, NULL);
         UIImageWriteToSavedPhotosAlbum([imageViewOld.image imageRotatedByDegrees:90.0], nil, nil, NULL);
    
    
        //assign the image to an UIImage Control
        self.imageV.contentMode = UIViewContentModeScaleAspectFit;
        self.imageV.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.width);
        self.imageV.image = imageEdited;
    
    
      }
    
      [self dismissModalViewControllerAnimated:YES];
    
     }
    

    Best Answer

    As you have seen, UIImagePickerController returns the edited image scaled down, sometimes to 640x640 and sometimes to 320x320, depending on the device.

    To your questions:

    For that, you first need to create a new image from the original image (obtained from the info dictionary's UIImagePickerControllerOriginalImage key) using UIImagePickerControllerCropRect. With the Quartz function CGImageCreateWithImageInRect you can create a new image containing only the pixels bounded by the rect you pass in, i.e. the crop. You will need to take the orientation into account to get this working correctly. Then you just need to scale the image to the size you want. It is important to note that the crop rect is relative to the original image as correctly oriented, not as it comes off the camera or out of the photo library; that is why we need to transform the crop rect to match the orientation before we start creating new images with the Quartz functions.
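
    A worked example of what that transform does, using the transformCGRectForUIImageOrientation function shown further down and the sizes logged in the question (this assumes the usual portrait camera case, UIImageOrientationRight, where the UIImage reports 1440x1920 but its CGImage is 1920x1440):

    // The crop rect from the info dictionary lives in the 1440x1920 "up" space;
    // the transform maps it into the 1920x1440 CGImage space.
    CGRect cropRect = CGRectMake(80, 216, 1280, 1280);
    CGRect transformed = transformCGRectForUIImageOrientation(cropRect,
                                                              UIImageOrientationRight,
                                                              CGSizeMake(1440, 1920));
    NSLog(@"%@", NSStringFromCGRect(transformed)); // approximately {{216, 80}, {1280, 1280}}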

    I took your code above and set it up to create a 1280x1280 image from the original image based on the crop rect. A few edge cases are still unaddressed here, namely that the crop rect can sometimes have negative values (this code assumes a square crop rect).

  • First transform the crop rect to account for the orientation and size of the incoming image. The transformCGRectForUIImageOrientation function comes from NiftyBean.
  • Create an image cropped to the transformed crop rect.
  • Scale (and rotate) the image to the desired size, i.e. 1280x1280.
  • Create a UIImage from the CGImage with the correct scale and orientation.

    Here is the code with your changes. (Update: new code that should handle the missing cases has been added further down.)
    - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"])
    {
    
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
    
        CGRect cropRect;
        cropRect = [[info valueForKey:@"UIImagePickerControllerCropRect"] CGRectValue];
    
        NSLog(@"Original width = %f height= %f ",imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height= 1920.000000
    
        NSLog(@"imageEdited width = %f height = %f",imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
    
        NSLog(@"corpRect %@", NSStringFromCGRect(cropRect));
        //corpRect 80.000000 216.000000 1280.000000 1280.000000
    
        CGSize finalSize = CGSizeMake(1280,1280);
        CGImageRef imagePickedRef = imagePicked.CGImage;
    
        CGRect transformedRect = transformCGRectForUIImageOrientation(cropRect, imagePicked.imageOrientation, imagePicked.size);
        CGImageRef cropRectImage = CGImageCreateWithImageInRect(imagePickedRef, transformedRect);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(imagePickedRef);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     finalSize.width,
                                                     finalSize.height,
                                                     CGImageGetBitsPerComponent(imagePickedRef),
                                                     CGImageGetBytesPerRow(imagePickedRef),
                                                     colorspace,
                                                     CGImageGetAlphaInfo(imagePickedRef));
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh); //Give the context a hint that we want high quality during the scale
        CGContextDrawImage(context, CGRectMake(0, 0, finalSize.width, finalSize.height), cropRectImage);
        CGImageRelease(cropRectImage);
    
        CGImageRef instaImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
    
        //assign the image to an UIImage Control
        UIImage *image = [UIImage imageWithCGImage:instaImage scale:imagePicked.scale orientation:imagePicked.imageOrientation];
        self.imageView.contentMode = UIViewContentModeScaleAspectFit;
        self.imageView.image = image;
        CGImageRelease(instaImage);
    
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }
    
    [self dismissModalViewControllerAnimated:YES];
    

    }
    CGRect transformCGRectForUIImageOrientation(CGRect source, UIImageOrientation orientation, CGSize imageSize) {
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,M_PI);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,M_PI + M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            return source;
    }
    }
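
    One additional detail worth noting (a side note, not part of the code above): CGImageCreateWithImageInRect clips the requested rect to the image bounds, so a rect that runs past the edge comes back smaller than requested. That is exactly why the un-transformed crops in the question came out as 1280x1224 and 1840x1224 rather than as squares. A minimal check along these lines, where imageRef and requestedRect are placeholder names for whatever image and rect you are about to crop:

    // imageRef / requestedRect: placeholders for the CGImage and rect being cropped.
    CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
    CGRect clipped = CGRectIntersection(requestedRect, imageBounds);
    if (!CGRectEqualToRect(clipped, requestedRect)) {
        // For the question's 1920x1440 CGImage:
        //   {80, 216, 1280, 1280} clips to {80, 216, 1280, 1224}  -> the 1280x1224 result
        //   {80, 216, 2560, 2560} clips to {80, 216, 1840, 1224}  -> the 1840x1224 result
        NSLog(@"Crop rect extends outside the image; the crop will be %.0f x %.0f",
              clipped.size.width, clipped.size.height);
    }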
    

    Update: I have written a method that handles the remaining cases.
    The steps are basically the same, with a couple of modifications:
  • The first is to correctly transform and scale the context to handle the orientation of the incoming image.
  • The second is to support the non-square crop rects you can get from UIImagePickerController; in those cases the square output image is padded with a fill color of your choice.

    New code:
    // CropRect is assumed to be in UIImageOrientationUp, as it is delivered this way from the UIImagePickerController when using AllowsImageEditing is on.
    // The sourceImage can be in any orientation, the crop will be transformed to match
    // The output image bounds define the final size of the image, the image will be scaled to fit,(AspectFit) the bounds, the fill color will be
    // used for areas that are not covered by the scaled image.
    -(UIImage *)cropImage:(UIImage *)sourceImage cropRect:(CGRect)cropRect aspectFitBounds:(CGSize)finalImageSize fillColor:(UIColor *)fillColor {
    
    CGImageRef sourceImageRef = sourceImage.CGImage;
    
    //Since the crop rect is in UIImageOrientationUp we need to transform it to match the source image.
    CGAffineTransform rectTransform = [self transformSize:sourceImage.size orientation:sourceImage.imageOrientation];
    CGRect transformedRect = CGRectApplyAffineTransform(cropRect, rectTransform);
    
    //Now we get just the region of the source image that we are interested in.
    CGImageRef cropRectImage = CGImageCreateWithImageInRect(sourceImageRef, transformedRect);
    
    //Figure out which dimension fits within our final size and calculate the aspect correct rect that will fit in our new bounds
    CGFloat horizontalRatio = finalImageSize.width / CGImageGetWidth(cropRectImage);
    CGFloat verticalRatio = finalImageSize.height / CGImageGetHeight(cropRectImage);
    CGFloat ratio = MIN(horizontalRatio, verticalRatio); //Aspect Fit
    CGSize aspectFitSize = CGSizeMake(CGImageGetWidth(cropRectImage) * ratio, CGImageGetHeight(cropRectImage) * ratio);
    
    
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 finalImageSize.width,
                                                 finalImageSize.height,
                                                 CGImageGetBitsPerComponent(cropRectImage),
                                                 0,
                                                 CGImageGetColorSpace(cropRectImage),
                                                 CGImageGetBitmapInfo(cropRectImage));
    
    if (context == NULL) {
        NSLog(@"NULL CONTEXT!");
    }
    
    //Fill with our background color
    CGContextSetFillColorWithColor(context, fillColor.CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, finalImageSize.width, finalImageSize.height));
    
    //We need to rotate and transform the context based on the orientation of the source image.
    CGAffineTransform contextTransform = [self transformSize:finalImageSize orientation:sourceImage.imageOrientation];
    CGContextConcatCTM(context, contextTransform);
    
    //Give the context a hint that we want high quality during the scale
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    
    //Draw our image centered vertically and horizontally in our context.
    CGContextDrawImage(context, CGRectMake((finalImageSize.width-aspectFitSize.width)/2, (finalImageSize.height-aspectFitSize.height)/2, aspectFitSize.width, aspectFitSize.height), cropRectImage);
    
    //Start cleaning up..
    CGImageRelease(cropRectImage);
    
    CGImageRef finalImageRef = CGBitmapContextCreateImage(context);
    UIImage *finalImage = [UIImage imageWithCGImage:finalImageRef];
    
    CGContextRelease(context);
    CGImageRelease(finalImageRef);
    return finalImage;
    }
    
    //Creates a transform that will correctly rotate and translate for the passed orientation.
    //Based on code from niftyBean.com
    - (CGAffineTransform) transformSize:(CGSize)imageSize orientation:(UIImageOrientation)orientation {
    
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,M_PI);
            transform = txCompound;
            break;
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate,-M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            break;
    }
    return transform;
    
    }
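
    As a usage sketch (not from the original answer), the new method could be called from the picker delegate roughly like this; self.imageView, the black fill color, and the 1280x1280 target size are assumptions carried over from the code above:

    - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
        CGRect cropRect = [[info objectForKey:UIImagePickerControllerCropRect] CGRectValue];

        // Crop to the user's selection, pad to a square if needed, and scale to 1280x1280.
        UIImage *squareImage = [self cropImage:imagePicked
                                      cropRect:cropRect
                               aspectFitBounds:CGSizeMake(1280, 1280)
                                     fillColor:[UIColor blackColor]];

        self.imageView.image = squareImage;
        UIImageWriteToSavedPhotosAlbum(squareImage, nil, nil, NULL);

        [self dismissModalViewControllerAnimated:YES];
    }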
    
