I'm merging two UIImages into one context. It works, but it's slow to execute and I need a faster solution. With my current approach, a mergeImage:withImage: call takes about 400 ms on an iPad 1G.

Here's what I'm doing:

-(CGContextRef)mergeImage:(UIImage*)img1 withImage:(UIImage*)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGContextRef context = [ImageToolbox createARGBBitmapContextFromImageSize:CGSizeMake(size.width, size.height)];

    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);

    // Draw both images into the same rect; img2 is composited on top of img1.
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img1.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img2.CGImage);

    return context;
}

Here are the static methods from the ImageToolbox class:

static CGRect screenRect;

+ (CGContextRef)createARGBBitmapContextFromImageSize:(CGSize)imageSize
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    size_t pixelsWide = imageSize.width;
    size_t pixelsHigh = imageSize.height;

    bitmapBytesPerRow   = (pixelsWide * 4);
    bitmapByteCount     = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Caller-supplied backing store; CG will not free this when the
    // context is released.
    bitmapData = malloc( bitmapByteCount );
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        CGColorSpaceRelease( colorSpace );
        return NULL;
    }

    context = CGBitmapContextCreate (bitmapData,
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free (bitmapData);
        fprintf (stderr, "Context not created!");
    }

    CGColorSpaceRelease( colorSpace );

    return context;
}

+(CGSize)getScreenSize
{
    if (screenRect.size.width == 0 && screenRect.size.height == 0)
    {
        screenRect = [[UIScreen mainScreen] bounds];

    }
    return CGSizeMake(screenRect.size.height, screenRect.size.width-20);
}

Any suggestions for improving performance?

Best answer

I would definitely suggest using Instruments to profile which messages are taking the most time, so you can really break it down. I've also written a couple of methods that I think should accomplish the same thing with less code, although you may have had to write everything out the long way to keep things customizable. Anyway, here they are:

-(CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
  CGSize size = [ImageToolbox getScreenSize];
  CGRect rect = CGRectMake(0, 0, size.width, size.height);

  UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);

  CGContextRef context = UIGraphicsGetCurrentContext();

  CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);

  [img1 drawInRect:rect];
  [img2 drawInRect:rect];

  // Retain the context so it survives UIGraphicsEndImageContext(); the
  // caller is responsible for calling CGContextRelease() on the result.
  CGContextRetain(context);

  UIGraphicsEndImageContext();

  return context;
}
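
A quick caller-side sketch of this first variant (hypothetical names; it assumes the merge method lives on the same class and that the caller releases the retained context as noted above):

// Hypothetical usage: turn the returned bitmap context into a UIImage.
CGContextRef merged = [self mergeImage:img1 withImage:img2];
CGImageRef cgImage = CGBitmapContextCreateImage(merged);
UIImage *result = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(merged);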

Or, if you want the composited image right away:

- (UIImage *)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
  CGSize size = [ImageToolbox getScreenSize];
  CGRect rect = CGRectMake(0, 0, size.width, size.height);

  UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);

  CGContextRef context = UIGraphicsGetCurrentContext();

  CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);

  [img1 drawInRect:rect];
  [img2 drawInRect:rect];

  // Grab the composited result before tearing down the image context.
  UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

  UIGraphicsEndImageContext();

  return image;
}
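
For completeness, a minimal usage sketch of this second variant (hypothetical names; it assumes the method is implemented on self and that an imageView already exists):

// Hypothetical usage: merge the two images and show the result.
UIImage *merged = [self mergeImage:foregroundImage withImage:backgroundImage];
imageView.image = merged;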

I have no idea whether these will be faster, but I really don't see how to speed up what you already have very easily without the profile breakdown from Instruments.
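
If you just want a quick wall-clock number alongside the Instruments profile, a rough timing sketch like this (using CACurrentMediaTime from QuartzCore; the image names are illustrative) will tell you whether either version actually beats the 400 ms baseline:

#import <QuartzCore/QuartzCore.h>

// Rough wall-clock timing; img1 and img2 are assumed to exist elsewhere.
CFTimeInterval start = CACurrentMediaTime();
UIImage *merged = [self mergeImage:img1 withImage:img2];
CFTimeInterval elapsed = CACurrentMediaTime() - start;
NSLog(@"merge took %.1f ms (result: %@)", elapsed * 1000.0, merged);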

Anyway, I hope this helps.

On "iphone - Merging two UIImages faster than CGContextDrawImage", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/10802189/
