Question
I want actual world-space distance, and I get the feeling from experimentation that
(gl_FragCoord.z / gl_FragCoord.w)
is the depth in world space? But I'm not too sure.
EDIT: I've just found where I had originally located this snippet of code. Apparently it is the actual depth from the camera?
Answer
This was asked (by the same person) and answered elsewhere. I'm paraphrasing and embellishing the answer here:
As stated in section 15.2.2 of the OpenGL 4.3 core profile specification (PDF), gl_FragCoord.w is 1 / clip.w, where clip.w is the W component of the clip-space position (i.e., what you wrote to gl_Position).

gl_FragCoord.z is generated by the following process, assuming the usual transforms:
- Camera-space to clip-space transform, via projection matrix multiplication in the vertex shader: clip.z = (projectionMatrix * cameraPosition).z
- Transform to normalized device coordinates: ndc.z = clip.z / clip.w
- Transform to window coordinates, using the glDepthRange near/far values: win.z = ((dfar - dnear)/2) * ndc.z + (dfar + dnear)/2
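The three steps above can be sketched numerically. The following is a Python sketch rather than GLSL so it can run standalone; the near/far planes, the perspective-matrix entries, and the sample point are all illustrative assumptions:

```python
# Sketch of the per-fragment depth pipeline described above, assuming a
# standard OpenGL perspective projection and the default glDepthRange(0, 1).
# near/far and the sample camera-space depth are arbitrary choices.

near, far = 1.0, 100.0

# Third-row entries of the standard perspective matrix, so that
# clip.z = A * cam.z + B and clip.w = -cam.z (for cam.w == 1).
A = -(far + near) / (far - near)
B = -2.0 * far * near / (far - near)

cam_z = -5.0  # a point 5 units in front of the camera

# 1. Camera space -> clip space (projection matrix multiply)
clip_z = A * cam_z + B
clip_w = -cam_z

# 2. Clip space -> normalized device coordinates
ndc_z = clip_z / clip_w

# 3. NDC -> window coordinates via the glDepthRange near/far values
dnear, dfar = 0.0, 1.0
win_z = ((dfar - dnear) / 2.0) * ndc_z + (dfar + dnear) / 2.0  # gl_FragCoord.z

print(win_z)
```

Note that win_z lands in [0, 1] but is nonlinear in cam_z, which is exactly why it cannot be read as a distance directly.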
Now, using the default depth range of near=0, far=1, we can define win.z in terms of clip space: (clip.z/clip.w)/2 + 0.5. If we then divide this by gl_FragCoord.w, that is the equivalent of multiplying by clip.w, thus giving us:
(gl_FragCoord.z / gl_FragCoord.w) = clip.z/2 + clip.w/2 = (clip.z + clip.w) / 2
Using the standard projection matrix, clip.z represents a scale and offset from the camera-space Z component. The scale and offset are defined by the camera's near/far depth values. clip.w is, again in the standard projection matrix, just the negation of the camera-space Z. Therefore, we can redefine our equation in those terms:
(gl_FragCoord.z / gl_FragCoord.w) = (A * cam.z + B - cam.z)/2 = (C * cam.z + D)
Where A and B represent the offset and scale based on near/far, and C = (A - 1)/2 and D = B / 2.
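This derivation can be checked numerically. A Python sketch, where the near/far values and the sample depth are arbitrary assumptions and A and B are the third-row entries of a standard perspective matrix:

```python
# Check that gl_FragCoord.z / gl_FragCoord.w equals both
# (clip.z + clip.w) / 2 and C * cam.z + D, as derived above.

near, far = 1.0, 100.0

# Standard perspective projection: clip.z = A * cam.z + B, clip.w = -cam.z
A = -(far + near) / (far - near)
B = -2.0 * far * near / (far - near)

cam_z = -5.0
clip_z = A * cam_z + B
clip_w = -cam_z

# What the fragment shader sees, with the default glDepthRange(0, 1):
frag_z = (clip_z / clip_w) / 2.0 + 0.5  # gl_FragCoord.z
frag_w = 1.0 / clip_w                   # gl_FragCoord.w

value = frag_z / frag_w

C = (A - 1.0) / 2.0
D = B / 2.0

print(value, (clip_z + clip_w) / 2.0, C * cam_z + D)  # all three agree
```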
Therefore, gl_FragCoord.z / gl_FragCoord.w is not the camera-space (or world-space) distance to the camera. Nor is it the camera-space planar distance to the camera. But it is a linear transform of the camera-space depth. You could use it as a way to compare two depth values together, if they came from the same projection matrix and so forth.
To actually compute the camera-space Z, you need to either pass the camera's near/far values from your matrix (OpenGL already gives you the depth-range near/far) and compute those A and B values from them, or you need to use the inverse of the projection matrix. Alternatively, you could just use the projection matrix directly yourself, since fragment shaders can use the same uniforms available to vertex shaders. You can pick the A and B terms directly from that matrix: A = projectionMatrix[2][2], and B = projectionMatrix[3][2].
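Inverting that linear relation gives camera-space Z back. Here is a sketch in Python (in a real shader you would do the same arithmetic in GLSL, reading A and B from the projection-matrix uniform; the near/far values here are arbitrary assumptions):

```python
near, far = 1.0, 100.0

# A and B as you would read them from the projection matrix uniform:
# A = projectionMatrix[2][2], B = projectionMatrix[3][2] (GLSL column-major).
A = -(far + near) / (far - near)
B = -2.0 * far * near / (far - near)

def camera_z(frag_z, frag_w):
    """Recover camera-space Z from gl_FragCoord.z and gl_FragCoord.w."""
    value = frag_z / frag_w            # = (A*cam.z + B - cam.z) / 2
    return (2.0 * value - B) / (A - 1.0)

# Round-trip check with an arbitrary camera-space depth:
cam_z = -5.0
clip_w = -cam_z
frag_z = ((A * cam_z + B) / clip_w) / 2.0 + 0.5
frag_w = 1.0 / clip_w

print(camera_z(frag_z, frag_w))  # recovers cam_z (up to floating-point error)
```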