This article covers how to correctly linearize the depth values from the OpenGL ES depth buffer on iOS. The discussion below should be a useful reference for anyone facing the same problem.

Problem Description

I'm trying to render a forest scene for an iOS app with OpenGL. To make it look a little nicer, I'd like to add a depth effect to the scene. However, I need a linearized depth value from the OpenGL depth buffer to do so. Currently I am using a computation in the fragment shader (which I found here).

Therefore my terrain fragment shader looks like this:

#version 300 es

precision mediump float;

// near/far plane distances, set as uniforms by the application
uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}

However, this results in the following output:

As you can see, the further away you get, the more "stripy" the resulting depth value becomes (especially behind the ship). If the terrain tile is close to the camera, the output is somewhat okay.

I even tried another computation:

float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}

which resulted in values that were far too high, so I scaled them down by dividing:

float linearized = (linearizeDepth(depth) - 2.0) / 40.0;

Nevertheless, it gave a similar result.

So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?

Solution

The problem is that you are storing non-linear depth values, which get truncated, so when you read the depth back later you get a choppy result: you lose accuracy the farther you are from the znear plane. No matter how you evaluate the stored values, you will not obtain better results unless you:

  1. Reduce the accuracy loss

    You can change the znear and zfar values so they are closer together. Enlarge znear as much as you can, so that the more accurate area covers more of your scene.

    Another option is to use more bits per depth value (16 bits is too low); I am not sure whether you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards (see the renderbuffer sketch after this list).

  2. Use a linear depth buffer

    That is, store linear values in the depth buffer. There are two ways. One is to compute the depth in the shader so that, after all the underlying operations, you end up with a linear value.

    Another option is to use a separate texture/FBO and store the linear depths directly into it (see the FBO sketch after this list). The problem is that you cannot use its contents in the same rendering pass.
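For option 1, here is a minimal sketch of requesting a 24-bit depth renderbuffer in an OpenGL ES 3.0 context; the fbo, width and height names are placeholders, not from the original answer (on iOS with GLKit you could instead set the view's drawableDepthFormat to GLKViewDrawableDepthFormat24):

// Sketch: attach a 24-bit depth renderbuffer to an existing FBO.
// GL_DEPTH_COMPONENT24 is core in ES 3.0; plain ES 2.0 only guarantees
// GL_DEPTH_COMPONENT16 unless the OES_depth24 extension is present.
GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);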
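And for the separate-texture route in option 2, a sketch of the FBO setup under the same assumptions (note that rendering to GL_R32F on ES 3.0 additionally requires the EXT_color_buffer_float extension):

// Sketch: a single-channel float texture the fragment shader can write
// linear depth into; it is sampled in a LATER pass, never the same one.
GLuint depthTex, depthFbo;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32F, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &depthFbo);
glBindFramebuffer(GL_FRAMEBUFFER, depthFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, depthTex, 0);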

[Edit1] Linear Depth buffer

To linearize the depth buffer itself (not just the values read back from it), try this:

Vertex:

varying float depth;        // z passed on to the fragment shader
void main()
    {
    vec4 p=ftransform();    // legacy fixed-function transform
    depth=p.z;              // clip-space z before the perspective division (still linear in camera-space z)
    gl_Position=p;
    gl_FrontColor = gl_Color;
    }

Fragment:

uniform float znear,zfar;
varying float depth; // linear z from the vertex shader instead of gl_FragCoord.z, which is already truncated
void main(void)
    {
    float z=(depth-znear)/(zfar-znear); // remap to <0,1>
    gl_FragDepth=z;                     // store the linear value in the depth buffer
    gl_FragColor=gl_Color;
    }
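The two shaders above are legacy desktop GLSL (ftransform, varying, gl_FragColor). A rough, untested port to the GLSL ES 3.00 profile used in the question might look like this; the mvp uniform and the attribute locations are my assumptions, not part of the original answer:

Vertex:

#version 300 es
uniform mat4 mvp;                       // assumed model-view-projection matrix
layout(location = 0) in vec4 position;  // assumed attribute locations
layout(location = 1) in vec4 color;
out float depth;
out vec4 v_color;
void main() {
    vec4 p = mvp * position;  // replaces the legacy ftransform()
    depth = p.z;              // clip-space z, still linear in camera-space z
    gl_Position = p;
    v_color = color;
}

Fragment:

#version 300 es
precision highp float;
uniform float znear, zfar;
in float depth;
in vec4 v_color;
layout(location = 0) out vec4 out_color;
void main(void) {
    gl_FragDepth = (depth - znear) / (zfar - znear);  // write linear depth
    out_color = v_color;
}

Note that writing gl_FragDepth disables early depth-test optimizations on most GPUs, so this approach has some performance cost.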

Non-linear depth buffer linearized on the CPU side (as you do now):

Linear depth buffer on the GPU side (as you should):

The scene parameters are:

// 24 bits per depth value
const double zang =   60.0;    // perspective FOV angle [deg]
const double znear=    0.01;   // near plane
const double zfar =20000.0;    // far plane

and a simple rotated plate covering the whole depth field of view. Both images were taken with glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed); and transformed into a 2D RGB texture on the CPU side, then rendered as a single QUAD covering the whole screen using identity matrices ...
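A minimal sketch of that capture step, assuming a desktop GL context (xs/ys stand in for scr.xs/scr.ys; OpenGL ES does not allow reading GL_DEPTH_COMPONENT with glReadPixels, so there you would have to sample a depth texture instead):

#include <stdlib.h>
// plus your platform's OpenGL header

void capture_depth_as_rgb(int xs, int ys)
{
    float *zed = (float*)malloc(sizeof(float) * xs * ys);
    unsigned char *rgb = (unsigned char*)malloc(3 * (size_t)(xs * ys));
    glReadPixels(0, 0, xs, ys, GL_DEPTH_COMPONENT, GL_FLOAT, zed);
    for (int i = 0; i < xs * ys; i++)
    {
        unsigned char c = (unsigned char)(255.0f * zed[i]); // depth in <0,1>
        rgb[3*i+0] = c; rgb[3*i+1] = c; rgb[3*i+2] = c;     // grayscale
    }
    // upload rgb with glTexImage2D and draw it as a fullscreen QUAD ...
    free(zed);
    free(rgb);
}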

Now, to obtain the original camera-space depth value from a linear depth buffer, you just do this:

z = znear + (zfar-znear)*depth_value;
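With the scene parameters above, for example, a stored value of 0.5 maps back to z = 0.01 + (20000.0 - 0.01) * 0.5 ≈ 10000 units. As a pair of hypothetical C helpers covering both directions:

// Hypothetical helpers matching the mapping used in the shaders above.
double linearize_z(double z, double znear, double zfar)
{
    return (z - znear) / (zfar - znear); // camera-space z -> <0,1>
}
double delinearize_z(double d, double znear, double zfar)
{
    return znear + (zfar - znear) * d;   // <0,1> -> camera-space z
}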

I used the ancient stuff just to keep this simple, so port it to your profile (a rough GLSL ES 3.00 port is sketched above) ...

Beware: I do not code in OpenGL ES or iOS, so I hope I did not miss something related to that (I am used to Win and PC).

To show the difference, I added another rotated plate to the same scene (so they intersect) and used colored output (no depth read-back this time):

As you can see, the linear depth buffer is much, much better (for scenes covering a large part of the depth FOV).

This concludes the article on how to correctly linearize depth in OpenGL ES on iOS. We hope the answer above is helpful, and thank you for your support!
