Problem Description
I'm trying to create an image mask from a composite of two existing images.
First I create the composite, which consists of a small image that is the masking image and a larger image that is the same size as the background:
UIImage * BaseTextureImage = [UIImage imageNamed:@"background.png"];
UIImage * MaskImage = [UIImage imageNamed:@"my_mask.jpg"];
UIImage * ShapesBase = [UIImage imageNamed:@"largerimage.jpg"];
UIImage * MaskImageFull;
CGSize finalSize = CGSizeMake(480.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[ShapesBase drawInRect:CGRectMake(0, 0, 480, 320)];
[MaskImage drawInRect:CGRectMake(150, 50, 250, 250)];
MaskImageFull = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I can output this UIImage (MaskImageFull) and it looks right: it is the full background size, a white background with my mask object in black, in the right place on the screen.
I then pass the MaskImageFull UIImage through this:
// maskImage here is the MaskImageFull composite built above;
// image is the photo being masked
CGImageRef maskRef = [maskImage CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage *retImage = [UIImage imageWithCGImage:masked];
The problem is that retImage is all black. If I pass a pre-made UIImage in as the mask it works fine; it only breaks when I try to build the mask from multiple images.
I thought it was a colorspace thing but couldn't seem to fix it. Any help is much appreciated!
Recommended Answer
I tried the same thing with CGImageCreateWithMask and got the same result. The solution I found was to use CGContextClipToMask instead:
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL)
    return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// the UIImage retains the CGImage, so we can
// release our reference
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
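The snippet above references self.CGImage along with targetSize, thumbnailPoint, scaledWidth, and scaledHeight, so it presumably lives inside a UIImage category method. A hypothetical self-contained wrapper (the method name and parameters are assumptions for illustration, not from the original answer) might look like:

```objc
// Hypothetical UIImage category wrapping the answer's approach;
// names are illustrative assumptions.
@interface UIImage (Masking)
- (UIImage *)imageClippedToMask:(CGImageRef)maskImage size:(CGSize)targetSize;
@end

@implementation UIImage (Masking)
- (UIImage *)imageClippedToMask:(CGImageRef)maskImage size:(CGSize)targetSize
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 (size_t)targetSize.width,
                                                 (size_t)targetSize.height,
                                                 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
        return nil;

    // Clip to the mask, then draw the receiver inside the clipped region.
    CGRect bounds = CGRectMake(0, 0, targetSize.width, targetSize.height);
    CGContextClipToMask(context, bounds, maskImage);
    CGContextDrawImage(context, bounds, self.CGImage);

    CGImageRef result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *theImage = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
    return theImage;
}
@end
```

A caller could then write something like `[photo imageClippedToMask:[[UIImage imageNamed:@"mask.png"] CGImage] size:CGSizeMake(480, 320)]`.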