Problem description
NDC coordinates for OpenGL form a cube whose -Z side presses against the screen while its +Z side is farthest away.
When I use...
// ortho arguments are: left, right, bottom, top, near, far
pos = pos * glm::ortho<float>(-1, 1, -1, 1, -1, 1);
...the z component of pos is reflected; -1 becomes 1, 10 becomes -10, etc.
glm::persp does a similar thing, and it's kind of weird. If a position's z equals near, I would expect it to rest on the screen-facing plane of the NDC cube, but instead its sign is flipped arbitrarily; it doesn't even land on the farthest-facing side.
Why is this?
Recommended answer
I had a look into Song Ho Ahn's tutorial about OpenGL transformations to be sure not to tell something silly:
Note that the eye coordinates are defined in the right-handed coordinate system, but NDC uses the left-handed coordinate system. That is, the camera at the origin is looking along -Z axis in eye space, but it is looking along +Z axis in NDC.
(Emphasis mine.)
He provides a nice illustration of this (image not reproduced here).
So, I conclude that

glm::ortho<float>(-1, 1, -1, 1, -1, 1);

shouldn't produce an identity matrix but instead one where the z axis is mirrored, e.g. something like
| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 -1 0 |
| 0 0 0 1 |
As I have no glm at hand, I took the relevant code lines from the source code on GitHub (glm). Digging a while in the source code, I finally found the implementation of glm::ortho() in orthoLH_ZO():
template<typename T>
GLM_FUNC_QUALIFIER mat<4, 4, T, defaultp> orthoLH_ZO(T left, T right, T bottom, T top, T zNear, T zFar)
{
    mat<4, 4, T, defaultp> Result(1);
    Result[0][0] = static_cast<T>(2) / (right - left);
    Result[1][1] = static_cast<T>(2) / (top - bottom);
    Result[2][2] = static_cast<T>(1) / (zFar - zNear);
    Result[3][0] = - (right + left) / (right - left);
    Result[3][1] = - (top + bottom) / (top - bottom);
    Result[3][2] = - zNear / (zFar - zNear);
    return Result;
}
I transformed this code a bit to make the following sample:
#include <iomanip>
#include <iostream>

struct Mat4x4 {
    double values[4][4];

    Mat4x4() { }

    Mat4x4(double val)
    {
        values[0][0] = val; values[0][1] = 0.0; values[0][2] = 0.0; values[0][3] = 0.0;
        values[1][0] = 0.0; values[1][1] = val; values[1][2] = 0.0; values[1][3] = 0.0;
        values[2][0] = 0.0; values[2][1] = 0.0; values[2][2] = val; values[2][3] = 0.0;
        values[3][0] = 0.0; values[3][1] = 0.0; values[3][2] = 0.0; values[3][3] = val;
    }

    double* operator[](unsigned i) { return values[i]; }
    const double* operator[](unsigned i) const { return values[i]; }
};

Mat4x4 ortho(
    double left, double right, double bottom, double top, double zNear, double zFar)
{
    Mat4x4 result(1.0);
    result[0][0] = 2.0 / (right - left);
    result[1][1] = 2.0 / (top - bottom);
    result[2][2] = -1.0; // hardcoded for zNear = -1, zFar = 1; glm's default
                         // (right-handed, [-1, 1]-depth) variant uses -2.0 / (zFar - zNear)
    result[3][0] = - (right + left) / (right - left);
    result[3][1] = - (top + bottom) / (top - bottom);
    return result;
}

std::ostream& operator<<(std::ostream &out, const Mat4x4 &mat)
{
    for (unsigned i = 0; i < 4; ++i) {
        for (unsigned j = 0; j < 4; ++j) {
            out << std::fixed << std::setprecision(3) << std::setw(8) << mat[i][j];
        }
        out << '\n';
    }
    return out;
}

int main()
{
    Mat4x4 matO = ortho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    std::cout << matO;
    return 0;
}
Compiled and run, it produces the following output:
1.000 0.000 0.000 0.000
0.000 1.000 0.000 0.000
0.000 0.000 -1.000 0.000
-0.000 -0.000 0.000 1.000
Huh! z is scaled by -1, i.e. z values are mirrored across the x-y plane (as expected).
Hence, OP's observation is fully correct and reasonable.
The hardest part:
My personal guess: one of the SGI gurus who invented all this GL stuff did this in his/her wisdom.
Another guess: in eye space, the x axis points right and the y axis points up. Translating this into screen coordinates, the y axis should point down (as pixels are usually/technically addressed starting from the upper left corner). So this introduces another mirrored axis, which changes the handedness of the coordinate system (again).
It's a bit unsatisfying, and hence I googled and found this (duplicate?):
This concludes this article on "Ortho and Persp are flipping Z depth sign?". We hope the answer above helps.