I have a PNG image on the iPad, 1214x1214 (twice Retina size), displayed in a UIImageView positioned at screen coordinates (0, -20). So that it fits the screen across device rotation/orientation changes, its content mode is set to Aspect Fit.
What I want to do is touch the screen and read the RGB value of the pixel under the touch. I have implemented a UIGestureRecognizer, attached it to the UIImageView, and can successfully get the touch coordinates.
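For reference, the recognizer is set up roughly like this (a minimal sketch; the handler name imageTapped: is just illustrative):

imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(imageTapped:)];
[imageView addGestureRecognizer:tap];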
What's giving me trouble is that the several approaches I've tried for retrieving the RGB value (e.g. [How to get the RGB value of a pixel on the iPhone]) all behave as if the image were skewed and mapped to some other position on the UIView.
My question is: how do I account for the UIImageView being set to Aspect Fit, and for the device possibly being in landscape or portrait orientation (either way up)?
Best answer
OK, so I worked this out, and it may help anyone trying to do something similar.
I calculated the scaled size of the image using this function from another answer:
-(CGRect)frameForImage:(UIImage *)image inImageViewAspectFit:(UIImageView *)imageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = imageView.frame.size.width / imageView.frame.size.height;
    if (imageRatio < viewRatio)
    {
        // Image is relatively taller than the view: height fills the view,
        // width is scaled down and centered horizontally.
        float scale = imageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (imageView.frame.size.width - width) * 0.5;
        return CGRectMake(topLeftX, 0, width, imageView.frame.size.height);
    }
    else
    {
        // Image is relatively wider than the view: width fills the view,
        // height is scaled down and centered vertically.
        float scale = imageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (imageView.frame.size.height - height) * 0.5;
        return CGRectMake(0, topLeftY, imageView.frame.size.width, height);
    }
}
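For clarity in the snippets below, rectScaleSize is simply the return value of that function, with imageMap as my UIImageView (these are the variable names used later in this answer):

CGRect rectScaleSize = [self frameForImage:imageMap.image inImageViewAspectFit:imageMap];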
Got the touch point from the function registered as the gesture listener:

CGPoint tapPoint = [sender locationInView:imageMap];
Adjusted the tap point according to where the image had moved due to the iPad's rotation:
if ([UIApplication sharedApplication].statusBarOrientation == UIInterfaceOrientationPortrait ||
    [UIApplication sharedApplication].statusBarOrientation == UIInterfaceOrientationPortraitUpsideDown)
{
    // portrait (y has increased, x has stayed the same)
    tapPoint.y -= rectScaleSize.origin.y;
}
else
{
    // landscape (x has increased, y has stayed the same)
    tapPoint.x -= rectScaleSize.origin.x;
}
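Incidentally, since frameForImage: returns 0 for the origin component along the axis that fills the view, I believe the orientation check could be dropped and both offsets subtracted unconditionally; this simplification is my own observation, not part of the original answer:

// Equivalent, orientation-independent form: the unused origin
// component is always 0, so subtracting both is harmless.
tapPoint.x -= rectScaleSize.origin.x;
tapPoint.y -= rectScaleSize.origin.y;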
Then rescaled the point using the image's original size versus its aspect-fitted size:
tapPoint.x = (tapPoint.x * imageMap.image.size.width) / rectScaleSize.size.width;
tapPoint.y = (tapPoint.y * imageMap.image.size.height) / rectScaleSize.size.height;
where imageMap.image is my original image and rectScaleSize is the return value of the frameForImage function.
Finally, got the RGB values:
CGImageRef image = [imageMap.image CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
// NSLog(@"RGB Image is %lu x %lu", (unsigned long)width, (unsigned long)height);

// Redraw the image into a known RGBA8888 layout so the byte order is predictable.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);

// Offset of the tapped pixel within the RGBA buffer.
int byteIndex = (bytesPerRow * (int)tapPoint.y) + (int)tapPoint.x * bytesPerPixel;
int red = rawData[byteIndex];
int green = rawData[byteIndex + 1];
int blue = rawData[byteIndex + 2];
//int alpha = rawData[byteIndex + 3];
free(rawData); // release the pixel buffer once the components are read
NSLog(@"RGB is %d,%d,%d", red, green, blue);
Seems to work fine; hope it's useful.
Comments welcome if I've got anything wrong!
Original question on Stack Overflow: ios - Find RGB value at a location inside a UIImageView after scaling: https://stackoverflow.com/questions/11014010/