Question
My iOS app renders a texture full screen with OpenGL and updates parts of it at regular intervals. So far, I've been pushing the initial texture with glTexImage2D and then updating the dirty regions with glTexSubImage2D, using single buffering. This works well.
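As a side note, the dirty-region bookkeeping behind this kind of partial update can be sketched in plain C. The helper below is illustrative and not from the original app: it diffs two 32-bit BGRA frames and returns the smallest rectangle covering every changed pixel, which is exactly the rectangle you would hand to glTexSubImage2D.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { int x, y, w, h; } DirtyRect;

/* Compare two width*height BGRA frames (one uint32_t per pixel) and return
 * the minimal bounding rectangle of all changed pixels.
 * Returns false if the frames are identical. */
static bool find_dirty_rect(const uint32_t *prev, const uint32_t *cur,
                            int width, int height, DirtyRect *out) {
    int minX = width, minY = height, maxX = -1, maxY = -1;
    for (int y = 0; y < height; y++) {
        const uint32_t *p = prev + (size_t)y * width;
        const uint32_t *c = cur  + (size_t)y * width;
        for (int x = 0; x < width; x++) {
            if (p[x] != c[x]) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) return false;  /* nothing changed */
    out->x = minX;
    out->y = minY;
    out->w = maxX - minX + 1;
    out->h = maxY - minY + 1;
    return true;
}
```

The result maps directly onto the xoffset/yoffset/width/height arguments of glTexSubImage2D; note that OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH, so a sub-rectangle either gets uploaded row by row or copied into a tightly packed staging buffer first.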
I've seen that there might be another way to achieve the same thing using CVOpenGLESTextureCache. The textures held in the texture cache reference a CVPixelBuffer. I'd like to know if I can mutate these cached textures. I tried to recreate a CVOpenGLESTexture for each update but this decreases my frame rate dramatically (not surprising after all since I'm not specifying the dirty region anywhere). Maybe I totally misunderstood the use case for this texture cache.
Can someone provide some guidance?
UPDATE: Here is the code I'm using. The first update works fine. The subsequent updates don't (nothing happens). Between each update I modify the raw bitmap.
if (firstUpdate) {
    // One-time setup: create the texture cache for the current EAGL context,
    // wrap the raw bitmap in a CVPixelBuffer, and derive a cache-backed
    // GL texture from it.
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, ctx, NULL, &texCache);
    if (err != kCVReturnSuccess) {
        NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
    }
    CVPixelBufferRef pixelBuffer;
    // Wraps `bitmap` directly (no copy); row stride is width_ * 4 bytes of BGRA.
    CVPixelBufferCreateWithBytes(NULL, width_, height_, kCVPixelFormatType_32BGRA,
                                 bitmap, width_ * 4, NULL, 0, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, texCache, pixelBuffer,
                                                 NULL, GL_TEXTURE_2D, GL_RGBA,
                                                 width_, height_, GL_BGRA, GL_UNSIGNED_BYTE,
                                                 0, &texture);
    texture_[0] = CVOpenGLESTextureGetName(texture);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Note: pixelBuffer and texture go out of scope here without being
    // released; only the GL texture name is kept.
}
// Flushed on every pass so pixel-buffer writes can propagate to the texture.
CVOpenGLESTextureCacheFlush(texCache, 0);
if (firstUpdate) {
    glBindTexture(GL_TEXTURE_2D, texture_[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
if (firstUpdate) {
    // Full-screen quad: positions in clip space, texture coordinates in [0, 1].
    static const float textureVertices[] = {
        -1.0, -1.0,
         1.0, -1.0,
        -1.0,  1.0,
         1.0,  1.0
    };
    static const float textureCoords[] = {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };
    glVertexPointer(2, GL_FLOAT, 0, &textureVertices[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
firstUpdate = false;
Answer
I have been doing quite a bit of hacking with these texture APIs, and I was finally able to produce a working example of writing to a texture through memory using the texture cache API. These APIs work on an iOS device but not in the Simulator, so a special workaround was needed (basically just calling glTexSubImage2D() explicitly in the Simulator). The code needed to double-buffer the texture loading, done in another thread, to avoid updating a texture while rendering was in progress. The full source code and timing results are at opengl_write_texture_cache. The linked Xcode project decodes from PNGs, so performance on older iPhone hardware is a little poor as a result. But you are free to do whatever you want with the code, so it should not be hard to adapt it to another pixel source. To write only a dirty region, write to just that portion of the memory buffer in the background thread.
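That last point, writing only the dirty portion of the memory buffer, can be sketched in plain C. The names and layout below are illustrative, not taken from the linked project: on iOS the destination would be the plane returned by CVPixelBufferGetBaseAddress between CVPixelBufferLockBaseAddress/CVPixelBufferUnlockBaseAddress, with CVPixelBufferGetBytesPerRow as the stride, since pixel-buffer rows are often padded beyond width * 4.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Copy one dirty rectangle of 32-bit BGRA pixels from a tightly packed
 * source frame into a destination plane that may have row padding
 * (dstBytesPerRow >= frameWidth * 4). Touching only these rows is what
 * keeps the background-thread update cheap. */
static void write_dirty_region(uint8_t *dst, size_t dstBytesPerRow,
                               const uint8_t *src, int frameWidth,
                               int x, int y, int w, int h) {
    const size_t bpp = 4;  /* BGRA = 4 bytes per pixel */
    const size_t srcBytesPerRow = (size_t)frameWidth * bpp;
    for (int row = 0; row < h; row++) {
        const uint8_t *s = src + (size_t)(y + row) * srcBytesPerRow + (size_t)x * bpp;
        uint8_t       *d = dst + (size_t)(y + row) * dstBytesPerRow + (size_t)x * bpp;
        memcpy(d, s, (size_t)w * bpp);
    }
}
```

The same row loop doubles as the Simulator fallback: instead of memcpy into the pixel buffer, each row (or the whole rectangle) goes through glTexSubImage2D.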