
Problem description


How can I perform the following image processing tasks using OpenGL ES 2.0 shaders?

  • Colorspace transform (RGB/YUV/HSL/Lab)
  • Swirling of the image
  • Converting to a sketch
  • Converting to an oil painting

Solution

I just added filters to my open source GPUImage framework that perform three of the four processing tasks you describe (swirling, sketch filtering, and converting to an oil painting). While I don't yet have colorspace transforms as filters, I do have the ability to apply a matrix to transform colors.

As examples of these filters in action, I have sample images of a sepia tone color conversion, a swirl distortion, a sketch filter, and, finally, an oil painting conversion.

Note that all of these filters were done on live video frames, and all but the last filter can be run in real time on video from iOS device cameras. The last filter is pretty computationally intensive, so even as a shader it takes ~1 second or so to render on an iPad 2.

The sepia tone filter is based on the following color matrix fragment shader:

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform lowp mat4 colorMatrix;
 uniform lowp float intensity;

 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     lowp vec4 outputColor = textureColor * colorMatrix;

     gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
 }

with a matrix of

self.colorMatrix = (GPUMatrix4x4){
        {0.3588, 0.7044, 0.1368, 0.0},
        {0.2990, 0.5870, 0.1140, 0.0},
        {0.2392, 0.4696, 0.0912, 0.0},
        {0.0, 0.0, 0.0, 1.0},
    };
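If it helps to sanity-check the blend off the GPU, here is a rough Python sketch of the same per-pixel arithmetic. The matrix values mirror the sepia matrix above; the function name and sample pixel are mine, for illustration only:

```python
# CPU-side check of what the color-matrix shader computes for one pixel.
SEPIA = [
    [0.3588, 0.7044, 0.1368, 0.0],
    [0.2990, 0.5870, 0.1140, 0.0],
    [0.2392, 0.4696, 0.0912, 0.0],
    [0.0, 0.0, 0.0, 1.0],  # pass alpha through unchanged
]

def apply_color_matrix(rgba, matrix, intensity):
    """Multiply an RGBA pixel by the 4x4 matrix, then blend with the original."""
    transformed = [sum(matrix[i][j] * rgba[j] for j in range(4)) for i in range(4)]
    # Mirrors: gl_FragColor = intensity * outputColor + (1.0 - intensity) * textureColor
    return [intensity * transformed[i] + (1.0 - intensity) * rgba[i] for i in range(4)]

# Mid-grey goes warm: roughly (0.6, 0.5, 0.4, 1.0) at full intensity
print(apply_color_matrix([0.5, 0.5, 0.5, 1.0], SEPIA, 1.0))
```

Note that exact row-versus-column ordering depends on how GLSL evaluates `textureColor * colorMatrix`, so treat this as a sketch of the blend rather than a byte-exact reimplementation of the shader.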

The swirl fragment shader is based on this Geeks 3D example and has the following code:

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp vec2 center;
 uniform highp float radius;
 uniform highp float angle;

 void main()
 {
     highp vec2 textureCoordinateToUse = textureCoordinate;
     highp float dist = distance(center, textureCoordinate);
     textureCoordinateToUse -= center;
     if (dist < radius)
     {
         highp float percent = (radius - dist) / radius;
         highp float theta = percent * percent * angle * 8.0;
         highp float s = sin(theta);
         highp float c = cos(theta);
         textureCoordinateToUse = vec2(dot(textureCoordinateToUse, vec2(c, -s)), dot(textureCoordinateToUse, vec2(s, c)));
     }
     textureCoordinateToUse += center;

     gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
 }
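The coordinate mapping is easier to reason about off the GPU. This Python sketch (names are mine) follows the same math as the shader, returning the coordinate a fragment would sample from:

```python
import math

def swirl_coord(coord, center, radius, angle):
    """Return the texture coordinate the swirl shader would sample for `coord`."""
    x, y = coord[0] - center[0], coord[1] - center[1]
    dist = math.hypot(x, y)
    if dist < radius:
        percent = (radius - dist) / radius
        theta = percent * percent * angle * 8.0
        s, c = math.sin(theta), math.cos(theta)
        # Same rotation as the two dot() calls in the shader
        x, y = x * c - y * s, x * s + y * c
    return (x + center[0], y + center[1])
```

Points at or beyond `radius` pass through untouched, and the rotation angle falls off quadratically toward the rim of the swirl, which is what gives the distortion its smooth edge.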

The sketch filter is generated using Sobel edge detection, with edges shown in varying grey shades. The shader for this is as follows:

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform mediump float intensity;
 uniform mediump float imageWidthFactor;
 uniform mediump float imageHeightFactor;

 const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721);

 void main()
 {
    mediump vec3 textureColor = texture2D(inputImageTexture, textureCoordinate).rgb;

    mediump vec2 stp0 = vec2(1.0 / imageWidthFactor, 0.0);
    mediump vec2 st0p = vec2(0.0, 1.0 / imageHeightFactor);
    mediump vec2 stpp = vec2(1.0 / imageWidthFactor, 1.0 / imageHeightFactor);
    mediump vec2 stpm = vec2(1.0 / imageWidthFactor, -1.0 / imageHeightFactor);

    mediump float i00   = dot( textureColor, W);
    mediump float im1m1 = dot( texture2D(inputImageTexture, textureCoordinate - stpp).rgb, W);
    mediump float ip1p1 = dot( texture2D(inputImageTexture, textureCoordinate + stpp).rgb, W);
    mediump float im1p1 = dot( texture2D(inputImageTexture, textureCoordinate - stpm).rgb, W);
    mediump float ip1m1 = dot( texture2D(inputImageTexture, textureCoordinate + stpm).rgb, W);
    mediump float im10 = dot( texture2D(inputImageTexture, textureCoordinate - stp0).rgb, W);
    mediump float ip10 = dot( texture2D(inputImageTexture, textureCoordinate + stp0).rgb, W);
    mediump float i0m1 = dot( texture2D(inputImageTexture, textureCoordinate - st0p).rgb, W);
    mediump float i0p1 = dot( texture2D(inputImageTexture, textureCoordinate + st0p).rgb, W);
    mediump float h = -im1p1 - 2.0 * i0p1 - ip1p1 + im1m1 + 2.0 * i0m1 + ip1m1;
    mediump float v = -im1m1 - 2.0 * im10 - im1p1 + ip1m1 + 2.0 * ip10 + ip1p1;

    mediump float mag = 1.0 - length(vec2(h, v));
    mediump vec3 target = vec3(mag);

    gl_FragColor = vec4(mix(textureColor, target, intensity), 1.0);
 }
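The Sobel step can be sketched on the CPU for a single pixel: convert a 3x3 RGB neighbourhood to luminance with the same weights, take the gradient magnitude, and invert it so edges come out dark on a white background. Helper names here are mine:

```python
import math

W = (0.2125, 0.7154, 0.0721)  # luminance weights, as in the shader

def sketch_value(patch):
    """patch: 3x3 nested list of RGB triples; returns the inverted edge magnitude."""
    i = [[sum(c * w for c, w in zip(px, W)) for px in row] for row in patch]
    # Horizontal and vertical Sobel sums, matching h and v in the shader
    h = -i[2][0] - 2.0 * i[2][1] - i[2][2] + i[0][0] + 2.0 * i[0][1] + i[0][2]
    v = -i[0][0] - 2.0 * i[1][0] - i[2][0] + i[0][2] + 2.0 * i[1][2] + i[2][2]
    mag = 1.0 - math.hypot(h, v)
    return max(0.0, min(1.0, mag))  # framebuffer writes are effectively clamped
```

A flat patch yields 1.0 (white), while a hard edge across the neighbourhood drives the value toward 0.0 (black), which is why the result reads as a pencil sketch once mixed back with the source color by `intensity`.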

Finally, the oil painting look is generated using a Kuwahara filter. This particular filter is from the outstanding work of Jan Eric Kyprianidis and his fellow researchers, as described in the article "Anisotropic Kuwahara Filtering on the GPU" within the GPU Pro book. The shader code from that is as follows:

 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform int radius;

 precision highp float;

 const vec2 src_size = vec2 (768.0, 1024.0);

 void main (void)
 {
    vec2 uv = textureCoordinate;
    float n = float((radius + 1) * (radius + 1));

    vec3 m[4];
    vec3 s[4];
    for (int k = 0; k < 4; ++k) {
        m[k] = vec3(0.0);
        s[k] = vec3(0.0);
    }

    for (int j = -radius; j <= 0; ++j)  {
        for (int i = -radius; i <= 0; ++i)  {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
            m[0] += c;
            s[0] += c * c;
        }
    }

    for (int j = -radius; j <= 0; ++j)  {
        for (int i = 0; i <= radius; ++i)  {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
            m[1] += c;
            s[1] += c * c;
        }
    }

    for (int j = 0; j <= radius; ++j)  {
        for (int i = 0; i <= radius; ++i)  {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
            m[2] += c;
            s[2] += c * c;
        }
    }

    for (int j = 0; j <= radius; ++j)  {
        for (int i = -radius; i <= 0; ++i)  {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
            m[3] += c;
            s[3] += c * c;
        }
    }


    float min_sigma2 = 1e+2;
    for (int k = 0; k < 4; ++k) {
        m[k] /= n;
        s[k] = abs(s[k] / n - m[k] * m[k]);

        float sigma2 = s[k].r + s[k].g + s[k].b;
        if (sigma2 < min_sigma2) {
            min_sigma2 = sigma2;
            gl_FragColor = vec4(m[k], 1.0);
        }
    }
 }
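The kernel above can be summarized off the GPU as: average each of the four overlapping quadrants around the pixel, and keep the mean of the quadrant with the lowest variance. This Python sketch works on a single-channel image for brevity; the names and the grey-only simplification are mine:

```python
def kuwahara_pixel(img, x, y, radius):
    """img: 2D list of grey floats; returns the Kuwahara-filtered value at (x, y)."""
    # (x0, x1, y0, y1) offset ranges for the four quadrants, as in the shader loops
    quadrants = [(-radius, 0, -radius, 0), (0, radius, -radius, 0),
                 (0, radius, 0, radius), (-radius, 0, 0, radius)]
    best_mean, min_var = 0.0, float("inf")
    for x0, x1, y0, y1 in quadrants:
        vals = [img[y + j][x + i]
                for j in range(y0, y1 + 1) for i in range(x0, x1 + 1)]
        mean = sum(vals) / len(vals)
        # E[v^2] - E[v]^2, matching s[k]/n - m[k]*m[k] in the shader
        var = sum(v * v for v in vals) / len(vals) - mean * mean
        if var < min_var:
            min_var, best_mean = var, mean
    return best_mean
```

Because the least-varying quadrant wins, the filter flattens texture inside uniform regions while refusing to average across edges, which is what produces the oil-paint look.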

Again, these are all built-in filters within GPUImage, so you can just drop that framework into your application and start using them on images, video, and movies without having to touch any OpenGL ES. All the code for the framework is available under a BSD license, if you'd like to see how it works or tweak it.
