Problem Description
I am trying to render a globe (a sphere with maps on it) with OpenGL ES 1.1 on iOS.
I am able to draw the sphere and the map borders, but with one problem: lines that are not front-facing in my view are also being drawn on the screen. Like this:
In the picture, you can see that America renders just fine, but Australia is also rendered on the back. It is not supposed to be shown, because it is on the far side of the globe, and the BLACK and PURPLE stripes on the globe are not transparent.
Any ideas on what parameters I should be tweaking in order to get a proper globe?
If it helps, I can post the relevant parts of the code. Just ask which part and I will update the question.
Many thanks in advance.
Update: This is what I am using for sphere rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glPolygonOffset(-1.0f, -1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
int x, y;
GLfloat curR, curG, curB;
curR = curG = curB = 0.15f;
for (y = 0; y < EARTH_LAT_RES; y++) {
    if (y % 10 == 0) {
        glColor4f(curR, curG, curB, 1.0f);
        curR = curR == 0.15f ? 0.6f : 0.15f;
        curB = curB == 0.15f ? 0.6f : 0.15f;
    }
    for (x = 0; x < EARTH_LON_RES; x++) {
        Vertex3D vs[4];
        vs[1] = vertices[x][y];
        vs[0] = vertices[x][y+1];
        vs[3] = vertices[x+1][y];
        vs[2] = vertices[x+1][y+1];
        glVertexPointer(3, GL_FLOAT, 0, vs);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);
glDisableClientState(GL_VERTEX_ARRAY);
This is what I am using to render the border lines:
// vxp is a data structure with vertex arrays that represent
// border lines
int i;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_VERTEX_ARRAY);
for (i = 0; i < vxp->nFeatures; i++)
{
    glVertexPointer(3, GL_FLOAT, 0, vxp->pFeatures[i].pVerts);
    glDrawArrays(GL_LINE_STRIP, 0, vxp->pFeatures[i].nVerts);
}
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);
These are the settings I am using before rendering any of the objects:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glEnable(GL_DEPTH_TEST); /* enable depth testing; required for z-buffer */
glEnable(GL_CULL_FACE); /* enable polygon face culling */
glCullFace(GL_BACK);
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glFrustumf (-1.0, 1.0, -1.5, 1.5, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Accepted Answer
The obvious way, if it doesn't obstruct the rest of your code, is to draw the sphere as a solid object in an invisible way to prime the depth buffer, then let the depth test figure out which of the lines are visible. You can use glPolygonOffset to add an implementation-specific 'small amount' to the values used for depth calculations, so you can avoid depth-buffer fighting. So it'd be something like:
// add a small bit of offset, so that lines that should be visible aren't
// clipped due to depth rounding errors; note that ES allows GL_POLYGON_OFFSET_FILL
// but not GL_POLYGON_OFFSET_LINE, so we're going to push the polygons back a bit
// in terms of values written to the depth buffer, rather than the more normal
// approach of pulling the lines forward a bit
glPolygonOffset(-1.0, -1.0);
glEnable(GL_POLYGON_OFFSET_FILL);
// disable writes to the colour buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawSolidPolygonalSphere();
// enable writing again
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// disable the offset
glDisable(GL_POLYGON_OFFSET_FILL);
drawVectorMap();
So that'll leave values in your depth buffer as though the globe were solid. If that's not acceptable, then the only alternative I can think of is to do the visibility calculation on the CPU. You can use glGet to fetch the current view matrix, determine the normal at each vertex directly from the way you map them onto the sphere (it's just the vertex position relative to the centre), and then draw only those lines for which at least one vertex gives a negative dot product between the vector from the camera to the point and the normal.