Problem description
Assume a JPG with colored shapes on a black background. Some of the shapes touch. I want to tap on a shape, and then select all pixels of similar color, and outline and fill them on a new layer in the same position. It could be a UIView, could be a UIImage.
Direction on similar color selection and outline fill would be appreciated.
Recommended answer
Not exactly the same problem, but similar: my application needed to create a Bezier path around the edge of icons with transparent backgrounds. Really, the only big difference between these two problems is the criteria you choose to determine which pixels are part of the "shape" and which aren't.
The algorithm uses a simple technique called Moore-Neighbor tracing. In essence, imagine a blind person walking around the edge of a building, keeping one hand on the wall, until they come back to their starting location. The resulting path is an outline of the building.
This code is a tad more complicated because I introduce an "inset" value that effectively shrinks the outline by N pixels; I do this so that at higher resolutions the path draws over the edge of the icon, rather than outside it. It should be easy to bypass this code if you don't need that.
My solution is based on a structure called SystemIconBuffer that holds multiple resolutions of an icon. You can basically ignore all of that. The important step is to rasterize your image into an array of RGBA bytes (I use CGBitmapContextCreate) and get a pointer to that array in srcBuffer.
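For reference, here is a minimal sketch of that rasterization step, assuming a square image of side dim held in a UIImage named image (these variable names are mine, not from the original code):

// Hypothetical rasterization: draw the image into a dim x dim RGBA byte buffer.
size_t bytesPerRow = dim * 4;
UInt8 *srcBuffer = calloc(dim * bytesPerRow, sizeof(UInt8));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(srcBuffer, dim, dim, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, dim, dim), image.CGImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// ... trace the outline using srcBuffer, then free(srcBuffer) when finished ...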
Finally, my images are always square, so my code only has one dimension value (dim), which you'd need to convert into dimX and dimY for a rectangular image.
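If you do make it rectangular, the indexing in the helper functions at the bottom of the listing generalizes in the obvious way; here is a sketch of what a rectangular GetValue might look like (this variant is mine, not part of the original code):

// Hypothetical rectangular variant of GetValue: rows are dimX pixels wide and there are dimY rows.
static NSUInteger GetValueRect( NSInteger x, NSInteger y, NSUInteger offset,
                                const UInt8* buffer, NSInteger dimX, NSInteger dimY )
{
    if (x >= 0 && x < dimX && y >= 0 && y < dimY)
        return buffer[((dimY - y - 1) * dimX + x) * 4 + offset];
    return 0; // anything outside the buffer reads as 0 (transparent / not "black")
}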
And again, my code is only interested in finding the transparent edge of the image, so it only looks at the alpha component of each pixel. Adjust the criteria as needed.
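For the original question, where the criterion is "similar color to the tapped pixel" rather than "not transparent", one option is to compare each pixel's RGB bytes against the tapped pixel's color when deciding which pixels count as "black". A minimal sketch of such a test (the function name and tolerance are my own assumptions):

// Hypothetical shape-membership test for the color-selection case: instead of
// checking alpha against kIsBodyAlphaThreshold, compare RGB distance to the tapped color.
#define kColorTolerance 48 // maximum per-channel difference; tune to taste

static BOOL IsSimilarColor( NSInteger x, NSInteger y, const UInt8* buffer, NSInteger dim,
                            UInt8 tappedR, UInt8 tappedG, UInt8 tappedB )
{
    if (x < 0 || x >= dim || y < 0 || y >= dim)
        return NO; // anything outside the image is not part of the shape
    const UInt8* p = &buffer[((dim - y - 1) * dim + x) * 4];
    return ABS((NSInteger)p[0] - tappedR) <= kColorTolerance &&
           ABS((NSInteger)p[1] - tappedG) <= kColorTolerance &&
           ABS((NSInteger)p[2] - tappedB) <= kColorTolerance;
}

If shapes of the same color touch or appear elsewhere in the image, you would additionally want to limit the selection to the connected region around the tap (for example with a flood fill) before tracing its outline.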
And without further ado...
// Based on Moore-Neighbor tracing
// <http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/moore.html>
// <https://en.wikipedia.org/wiki/Moore_neighborhood>
// RGBA buffers store each pixel value as four sequential 8-bit integers
// The outline trace is interested only in the alpha component, and uses the other three bytes for intermediate values
#define kAlphaOffset 3
#define kBlackOffset 0 // reuse R to record if this pixel is considered to be "black"
#define kVisitedOffset 2 // reuse B to record if this pixel has been visited
#define kIsBodyAlphaThreshold (255/20) // a pixel with > 5% opacity is considered to be "black" (part of the image)
// Notes: all of these functions assume a square image buffer exactly dim x dim x RGBA
// all functions clip x & y to valid pixel addresses in the buffer and ignore anything outside
static NSUInteger GetValue( NSInteger x, NSInteger y, NSUInteger offset, const UInt8* buffer, NSInteger dim );
static void FillSquareOfValues( NSInteger x, NSInteger y, NSInteger size, NSUInteger value, NSUInteger offset, UInt8* buffer, NSInteger dim );
#define kNeighbors 8 // each pixel has eight Moore neighbors
// The x & y offsets of neighboring pixels, starting with the pixel immediately above and moving clockwise around it
enum {
    kUpNeighbor = 0,
    kUpperRightNeighbor,
    kRightNeighbor,
    kLowerRightNeighbor,
    kDownNeighbor,
    kLowerLeftNeighbor,
    kLeftNeighbor,
    kUpperLeftNeighbor
};
static NSInteger NeighborX[kNeighbors] = { 0, 1, 1, 1, 0, -1, -1, -1 }; // coordinate offsets of neighbor directions
static NSInteger NeighborY[kNeighbors] = { 1, 1, 0, -1, -1, -1, 0, 1 };
static NSUInteger EntryDirection[kNeighbors] = { // translates the neighboring pixel hit into an entry direction
    kRightNeighbor, // above
    kRightNeighbor, // upper right
    kDownNeighbor,  // right
    kDownNeighbor,  // lower right
    kLeftNeighbor,  // below
    kLeftNeighbor,  // lower left
    kUpNeighbor,    // left
    kUpNeighbor };  // upper left
#define OppositeDirection(DIRECTION) ((DIRECTION+4)&0x07) // macro to calculate the opposite neighbor position or direction
- (NSInteger)edgeIndentForSize:(IconSizeSelector)selector
{
    // the inset of black pixels from any transparent pixel for a given icon size
    // right now it's arbitrarily the pixel size selector, so mini icons get no
    // indent, large gets 1 pixel, huge gets 2 pixels, ...
    return (NSInteger)selector;
}
- (NSBezierPath*)outlineFromBuffer:(SystemIconBuffer*)buffer forSize:(IconSizeSelector)selector
{
    // Get the array of RGBA pixels for the source image
    // Do this before locking drawBufferLock because -dataForSize might need to convert an NSImage into
    // a pixel buffer, and it will need IconFrameBuffer() to do that
    const UInt8* srcBuffer = [buffer dataForSize:selector].bytes;
    // Acquire a temporary frame buffer to use for the calculations
    OSSpinLockLock(&drawBufferLock);
    UInt8* pixels = IconFrameBuffer(selector);
    bzero(pixels,IconRGBABufferSize(selector)); // fill buffer with zeros
    NSInteger dim = IconPixelIntegerSize[selector];
    // Determine the "black" pixels of the image. This is done by setting all of the pixels to "black" (true) and then erasing
    // a range of them near any transparent pixels in the source image. The size of the range is determined by edgeIndent.
    // When edgeIndent is non-zero, the black pixels are required to be at least that many pixels away from any transparent
    // pixel, effectively insetting the image from its edges.
    NSInteger edgeIndent = [self edgeIndentForSize:selector];
    // Paint the whole pixel map "black"
    FillSquareOfValues(0,0,dim,1,kBlackOffset,pixels,dim);
    // Scan all of the pixels (including one row & column of phantom pixels beyond the edge of the image) looking for
    // transparent pixels. If found, erase the black pixel at that coordinate plus all pixels within edgeIndent of it.
    for ( NSInteger x=-1; x<=dim; x++ )
        for ( NSInteger y=-1; y<=dim; y++ )
            if (GetValue(x,y,kAlphaOffset,srcBuffer,dim)<=kIsBodyAlphaThreshold)
                // fill with not-black values at x,y plus edgeIndent pixels adjacent to it
                FillSquareOfValues(x-edgeIndent,y-edgeIndent,edgeIndent*2+1,0,kBlackOffset,pixels,dim);
    // Find the starting pixel
    NSInteger startX = -1;
    NSInteger startY = -1;
    // Scanning left to right, bottom to top, find the first lower-left(ish) pixel
    for ( NSInteger x=0; x<dim; x++ )
        for ( NSInteger y=0; y<dim; y++ )
            if (GetValue(x, y, kBlackOffset, pixels, dim)!=0)
            {
                startX = x;
                startY = y;
                x = y = dim; // break both loops
            }
    // Create the path and set the start point
    NSBezierPath* path = [NSBezierPath bezierPath];
    if (startX<0 && startY<0)
        goto bail; // there are no opaque pixels: return an empty path
    [path moveToPoint:NSMakePoint(startX+0.5,startY+0.5)];
    // Determine initial entry direction for first pixel
    // Conceptually, we'd want to walk around this pixel counter-clockwise to find the next-to-the-last pixel
    // in the outline, which will tell us the direction the last edge pixel will (re)enter this one.
    // However ...
    // Because we "snuck" up on the first pixel by scanning columns left to right, we know that there are no
    // black pixels immediately below it or to its left. The only variable is whether there are pixels to its right and
    // whether they are above or below it. Ultimately, we only need to test one configuration. If there
    // are no pixels to the immediate right or lower-right, then the final entry direction will be from
    // the right (6). If there are pixels in either of these locations, the final entry direction will be
    // from the bottom (0).
    NSUInteger initialEntryDirection = 0;
    if (GetValue(startX+1,startY-1,kBlackOffset,pixels,dim)==0 && GetValue(startX+1,startY,kBlackOffset,pixels,dim)==0)
        initialEntryDirection = 6;
    // Start the outline trace
    // At each pixel, starting with the pixel in the opposite direction of the entry direction, test the pixels,
    // clockwise, until we find the next black pixel adjacent to this one. Add it to the outline and repeat
    // until we encounter the first pixel again, entered from the same direction (Jacob's stopping criterion).
    NSInteger x = startX;
    NSInteger lastX = x;
    NSInteger y = startY;
    NSInteger lastY = y;
    NSUInteger direction = initialEntryDirection;
    do {
        // Search, clockwise, for the next neighboring black pixel
        NSInteger nextX, nextY;
        NSUInteger nextDir = OppositeDirection(direction);
        do {
            // Progress to the next neighbor direction (note that when first entering the loop we always
            // know that the pixel we entered from must be white, or we couldn't have entered from that direction,
            // so the entry pixel never needs to be tested)
            nextDir += 1;
            if (nextDir>kNeighbors*2)
                // Safety check: an image with a single, isolated, pixel will cause this loop to run forever
                goto bail;
            nextX = x+NeighborX[nextDir&0x07];
            nextY = y+NeighborY[nextDir&0x07];
        } while (GetValue(nextX,nextY,kBlackOffset,pixels,dim)==0);
        // Loop exits with [nextX,nextY] of the next clockwise black pixel and the direction of that pixel relative to this one
        x = nextX; // move to this point
        y = nextY;
        direction = EntryDirection[nextDir&0x07]; // translate pixel-relative direction into entry direction
        if (x!=lastX || y!=lastY)
        {
            // This is a new point in the list: add it to the path
            [path lineToPoint:NSMakePoint(x+0.5,y+0.5)];
            lastX = x;
            lastY = y;
        }
    } while (x!=startX || y!=startY || direction!=initialEntryDirection);
    // Loop will exit when the contour has been traced back to its original point
bail:
    [path closePath];
    OSSpinLockUnlock(&drawBufferLock);
    return path;
}
static NSUInteger GetValue( NSInteger x, NSInteger y, NSUInteger offset, const UInt8* buffer, NSInteger dim )
{
    if (x>=0 && x<dim && y>=0 && y<dim )
        return buffer[((dim-y-1)*dim+x)*4+offset];
    return 0;
}
static void FillSquareOfValues( NSInteger x, NSInteger y, NSInteger size, NSUInteger value, NSUInteger offset, UInt8* buffer, NSInteger dim )
{
    NSInteger endX = MIN(x+size,dim);
    NSInteger endY = MIN(y+size,dim);
    if (x<0) x = 0;
    if (y<0) y = 0;
    for ( NSInteger i=x; i<endX; i++ )
        for ( NSInteger j=y; j<endY; j++ )
            buffer[((dim-j-1)*dim+i)*4+offset] = value;
}
One last note: the resulting Bezier path is not very efficient, because it contains hundreds, if not thousands, of tiny line segments. In my code, I use the resulting path to draw the outline (once) into an off-screen buffer, which I then turn into a cached image that I reuse.
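As a rough illustration of that caching idea, a sketch using AppKit to match the answer's code; the buffer, selector, dim, colors, and line width here are assumptions of mine:

// Hypothetical: trace once, render the path into an NSImage, and reuse the image.
NSBezierPath* outline = [self outlineFromBuffer:buffer forSize:selector];
NSImage* cachedOutline = [[NSImage alloc] initWithSize:NSMakeSize(dim, dim)];
[cachedOutline lockFocus];
[[NSColor whiteColor] setFill];
[outline fill]; // fill the traced shape...
[outline setLineWidth:2.0];
[[NSColor blackColor] setStroke];
[outline stroke]; // ...and stroke its edge
[cachedOutline unlockFocus];
// Later, draw cachedOutline instead of re-tracing and re-drawing the path.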
Creating something from this and drawing it is, as they say, left as an exercise for the student...