Question
My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:
- (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to
{
    UIImage *image = bkgdImageView.image;
    CGSize s = image.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(g, from.x, from.y);
    CGContextAddLineToPoint(g, to.x, to.y);
    CGContextClosePath(g);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);
    [image drawAtPoint:CGPointZero];
    bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens to the original image.
Am I doing the clip path incorrectly? Or something else?
Not really sure, so any help you may have would be greatly appreciated.
Thanks in advance,
Joel
Answer
I spent a long time trying to do the same thing using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. So, since I knew how to work with OpenCV (the Open Computer Vision library), and since it is written in C, I knew I could use it on the iPhone.

Doing what you want to do with OpenCV is extremely easy.

First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and the other way around.
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    //This is the function you use to convert a UIImage -> IplImage
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return iplimage;
}
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    //Convert an IplImage -> UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
    //NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    [data release];
    return ret;
}
Now that you have both of the basic functions you need, you can do whatever you want with your IplImage. This is what you want:
+ (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    //r is the radius of the erasing
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;
    minX = (a - r > 0) ? a - r : 0;
    minY = (b - r > 0) ? b - r : 0;
    maxX = ((a + r) < (image->width)) ? a + r : (image->width);
    maxY = ((b + r) < (image->height)) ? b + r : (image->height);
    for (int i = minX; i < maxX; i++)
    {
        for (int j = minY; j < maxY; j++)
        {
            position = ((j - b) * (j - b)) + ((i - a) * (i - a));
            if (position <= r * r)
            {
                uchar *ptr = (uchar *)(image->imageData) + (j * image->widthStep + i * image->nChannels);
                //Zero all four channels; valid indices are 0..3 for a 4-channel pixel
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }
    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's article.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.