Problem description
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How do I do that?
Recommended answer
All this begins with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it may contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping. Technically you can also read the front buffer, but this is often discouraged because implementations are theoretically allowed to apply optimizations that may leave the front buffer containing rubbish.
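Since the question asks for rendering directly into a file, here is a minimal sketch (not part of the original answer) that dumps the data vector filled above into a binary PPM file. It assumes the data, width and height variables from the snippet above; the write_ppm name and the file path are arbitrary. glReadPixels returns the image bottom-up and we requested GL_BGRA, so the rows are flipped and the channels swapped before writing:
//Sketch: write the BGRA pixels read above to a binary PPM file.
#include <cstdint>
#include <fstream>
#include <vector>
void write_ppm(const char* path, const std::vector<std::uint8_t>& data, int width, int height)
{
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    //glReadPixels gives the bottom row first, so walk the rows in reverse.
    for (int y = height - 1; y >= 0; --y)
    {
        for (int x = 0; x < width; ++x)
        {
            const std::uint8_t* px = &data[(y * width + x) * 4]; //BGRA pixel
            char rgb[3] = { char(px[2]), char(px[1]), char(px[0]) }; //swap to RGB
            out.write(rgb, 3);
        }
    }
}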
There are a few drawbacks to this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping in the back buffer, but it doesn't feel right. Next to that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (like the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The first is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
//after drawing
glBindFramebuffer(GL_READ_FRAMEBUFFER,fbo); //read back from the FBO we just drew into
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering (restores both the draw and read framebuffer):
glBindFramebuffer(GL_FRAMEBUFFER,0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
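For completeness, a hedged sketch of that exercise: an FBO with a texture as the color attachment plus a renderbuffer for depth. The names color_tex and depth_buf are just placeholders and error checking is omitted.
//Sketch: FBO with a texture color attachment and a depth renderbuffer.
GLuint fbo, color_tex, depth_buf;
glGenFramebuffers(1,&fbo);
glGenTextures(1,&color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenRenderbuffers(1,&depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);
//After drawing, color_tex can be sampled like any other texture,
//or read back with glReadPixels just like the renderbuffer case above.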
Finally, you can use pixel buffer objects to make the pixel read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//...use pixel_data...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
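One common way to actually get that overlap (not part of the original answer, just a sketch) is to alternate between two PBOs: each frame you issue glReadPixels into one PBO while mapping the one that was filled the previous frame, so the transfer has had a whole frame to finish.
//Sketch: ping-pong between two PBOs so the mapped buffer is a frame old.
GLuint pbos[2];
glGenBuffers(2, pbos);
for (int i = 0; i < 2; ++i)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}
int frame = 0;
//Each frame, after drawing:
int write_idx = frame % 2;       //PBO receiving this frame's pixels
int read_idx  = (frame + 1) % 2; //PBO filled last frame, ready to map
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[write_idx]);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[read_idx]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixel_data)
{
    //...use the previous frame's pixels (e.g. write them to a file)...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
++frame;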
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use this as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the nvidia article I read about this a few months back.
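If you would rather check than guess, the sketch below queries which format/type combination the implementation itself prefers for read-back. These queries exist in OpenGL ES and in desktop OpenGL 4.1+ (or via ARB_ES2_compatibility); on older desktop contexts they may not be available.
//Sketch: ask the implementation for its preferred glReadPixels format and type.
GLint preferred_format = 0, preferred_type = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &preferred_format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &preferred_type);
//e.g. GL_BGRA / GL_UNSIGNED_BYTE on many desktop drivers.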
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; you should just use GL_FRAMEBUFFER in that case.
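Whichever target you use, it is worth verifying that the FBO is complete before drawing into it; a minimal check (using GL_FRAMEBUFFER, which works on both desktop GL and ES) might look like this:
//Sketch: verify framebuffer completeness before rendering to the FBO.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
    //Handle the error: mismatched attachment sizes/formats, missing attachment, etc.
}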