
Question


I have an app that uses UIImage objects. Up to this point, I've been using image objects initialized using something like this:

UIImage *image = [UIImage imageNamed:imageName];

using an image in my app bundle. I've been adding functionality to allow users to use imagery from the camera or their library using UIImagePickerController. These images, obviously, can't be in my app bundle, so I initialize the UIImage object a different way:

UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];

This is done after first resizing the image to a size similar to the other files in my app bundle, in both pixel dimensions and total bytes, both using Jpeg format (interestingly, PNG was much slower, even for the same file size). In other words, the file pointed to by pathToFile is a file of similar size as an image in the bundle (pixel dimensions match, and compression was chosen so byte count was similar).

The app goes through a loop making small pieces from the original image, among other things that are not relevant to this post. My issue is that going through the loop using an image created the second way takes much longer than using an image created the first way.

I realize the first method caches the image, but I don't think that's relevant, unless I'm not understanding how the caching works. If it is the relevant factor, how can I add caching to the second method?
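If caching does turn out to matter, one way to add it to the second method is to keep an NSCache keyed by file path in front of imageWithContentsOfFile. This is a sketch under assumptions, not code from the question; the ImageStore class name and its method are hypothetical:

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper: caches UIImage objects loaded from arbitrary file
// paths, loosely mimicking the cache that +imageNamed: keeps for bundle
// images. NSCache evicts entries automatically under memory pressure.
@interface ImageStore : NSObject
+ (UIImage *)imageAtPath:(NSString *)pathToFile;
@end

@implementation ImageStore

+ (UIImage *)imageAtPath:(NSString *)pathToFile
{
    static NSCache *cache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
    });

    UIImage *image = [cache objectForKey:pathToFile];
    if (image == nil) {
        image = [UIImage imageWithContentsOfFile:pathToFile];
        if (image != nil) {
            [cache setObject:image forKey:pathToFile];
        }
    }
    return image;
}

@end
```

Note that this only avoids repeated disk reads; the cached object still holds compressed data, so it does not by itself remove any decoding cost.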

The relevant portion of code that is causing the bottleneck is this:

[image drawInRect:self.imageSquare];

Here, self is a subclass of UIImageView. Its property imageSquare is simply a CGRect defining what gets drawn. This portion is the same for both methods. So why is the second method so much slower with similar sized UIImage object?

Is there something I could be doing differently to optimize this process?

EDIT: I changed access to the image in the bundle to imageWithContentsOfFile, and the time to perform the loop went from about 4 seconds to just over a minute. So it's looking like I need to find some way to do caching like imageNamed does, but with non-bundled files.
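A comparison like the 4-second vs. one-minute figure above can be reproduced with a simple wall-clock measurement around the drawing loop. A minimal sketch, assumed to run inside the UIImageView subclass's drawing code; the iteration count is a stand-in for the app's real tiling loop:

```objc
// Rough wall-clock timing around the drawing loop. `image` is either the
// imageNamed: or the imageWithContentsOfFile: version under test.
CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
for (NSInteger i = 0; i < 100; i++) {
    [image drawInRect:self.imageSquare];
}
NSLog(@"drawing loop took %.2f s", CFAbsoluteTimeGetCurrent() - start);
```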

Answer

UIImage imageNamed doesn't simply cache the image. It caches an uncompressed image. The extra time spent was not caused by reading from local storage to RAM but by decompressing the image.

The solution was to create a new uncompressed UIImage object and use it for the time-sensitive portion of the code. The uncompressed object is discarded when that section of code is complete. For completeness, here is a copy of the class method to return an uncompressed UIImage object from a compressed one, thanks to another thread. Note that this assumes the image data is backed by a CGImage, which is not always true for UIImage objects.

+(UIImage *)decompressedImage:(UIImage *)compressedImage
{
   CGImageRef originalImage = compressedImage.CGImage;
   // Copying the data provider's bytes forces the image to be decoded
   // now, rather than lazily on first draw.
   CFDataRef imageData = CGDataProviderCopyData(
                         CGImageGetDataProvider(originalImage));
   CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
   CFRelease(imageData);
   // Rebuild a CGImage around the decoded bytes, preserving the original
   // image's geometry and pixel format.
   CGImageRef image = CGImageCreate(
                                CGImageGetWidth(originalImage),
                                CGImageGetHeight(originalImage),
                                CGImageGetBitsPerComponent(originalImage),
                                CGImageGetBitsPerPixel(originalImage),
                                CGImageGetBytesPerRow(originalImage),
                                CGImageGetColorSpace(originalImage),
                                CGImageGetBitmapInfo(originalImage),
                                imageDataProvider,
                                CGImageGetDecode(originalImage),
                                CGImageGetShouldInterpolate(originalImage),
                                CGImageGetRenderingIntent(originalImage));
   CGDataProviderRelease(imageDataProvider);
   UIImage *decompressedImage = [UIImage imageWithCGImage:image];
   CGImageRelease(image);
   return decompressedImage;
}
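A possible call pattern for the method above: decompress once before the loop, draw from the uncompressed copy, and release it afterwards. The ImageUtils class name and tileCount variable are assumptions for illustration:

```objc
// Decompress once, up front, outside the time-sensitive loop.
UIImage *raw = [UIImage imageWithContentsOfFile:pathToFile];
UIImage *fast = [ImageUtils decompressedImage:raw];

// Draw from the already-decoded image; no repeated JPEG decode per pass.
for (NSInteger i = 0; i < tileCount; i++) {
    [fast drawInRect:self.imageSquare];
}

// Discard the uncompressed copy once the loop is done to free memory.
fast = nil;
```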
