This is a question for Rhythmic Fistman, or anyone else who knows the answer:
Regarding the thread: How to use OpenGL ES on a separate thread on iphone?
Rhythmic Fistman found that: "iOS5's CVOpenGLESTextureCaches essentially make texture uploads free, so I no longer need shareGroups and my code is simpler and faster."
I am currently developing an app that draws 3D graphics and saves them into a movie file. As I understand it, a UIView's OpenGL ES framebuffer must be backed by a colorRenderbuffer rather than by a CVOpenGLESTextureCache, while the movie-saving path captures the 3D graphics as an OpenGL texture.
I don't want OpenGL ES to render the same 3D graphics twice; instead, I would like to share the rendering result between the two.
Could you share your knowledge and/or source code on how to use a CVOpenGLESTextureCache to share the result between the worker thread that saves the OpenGL texture and the main thread that displays the framebuffer in the UIView?
Thanks in advance.
Regards, Howard
=========== Update ===========
Thanks! Following Brad's answer and the RosyWriter sample code, I wrote some simple code that renders the final buffer both from avCaptureOutput's dispatch-queue thread and to the main UIView. (It will be polished later.)
There are two OpenGL ES 2.0 contexts: a mainContext created for the UIView, and a workingContext created for avCaptureOutput's dispatch queue. They share the same sharegroup.
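For reference, the context/sharegroup setup described above might look roughly like this. This is only a sketch; the variable names (mainContext, workingContext, cvTextureCache) match my code below but the setup itself is assumed, and under ARC the context argument to CVOpenGLESTextureCacheCreate may need a __bridge cast depending on the SDK:

```objectivec
// Main context, used by the UIView on the main thread
mainContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Second context in the same sharegroup, used on avCaptureOutput's dispatch queue
workingContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                                       sharegroup:mainContext.sharegroup];

// The texture cache is created against the context that renders into the pixel buffers
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                            workingContext, NULL,
                                            &cvTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
}
```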
So far so good. I will check whether any screen tearing occurs.
Thanks a lot!
Below is my code:
//Draw texture
- (void)Draw:(CVPixelBufferRef)updatedImageBuffer
{
    CVOpenGLESTextureRef cvTexture;

    /////////////////////////////////////////////
    // First draw the graphics into the CVPixelBufferRef
    /////////////////////////////////////////////

    // Creates a live binding between the image buffer and the underlying texture object.
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        cvTextureCache,
        updatedImageBuffer,
        NULL,
        GL_TEXTURE_2D,
        GL_RGBA,            // OpenGL internal format
        esContext.bufWidth,
        esContext.bufHeight,
        GL_BGRA,            // native iOS pixel format
        GL_UNSIGNED_BYTE,
        0,
        &cvTexture);
    if (err == kCVReturnSuccess) {
        assert(CVOpenGLESTextureGetTarget(cvTexture) == GL_TEXTURE_2D);
        // CVOpenGLESTextureGetName() returns a GLuint
        GLuint texId = CVOpenGLESTextureGetName(cvTexture);

        if (!workingContext || [EAGLContext setCurrentContext:workingContext] == NO) {
            NSLog(@"Draw: [EAGLContext setCurrentContext:workingContext] failed");
            CFRelease(cvTexture);   // don't leak the texture on early return
            return;
        }
        glBindTexture(GL_TEXTURE_2D, texId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glBindFramebuffer(GL_FRAMEBUFFER, workerFrameBuffer);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status == GL_FRAMEBUFFER_COMPLETE) {
            drawGraphics(&esContext);
            glBindTexture(GL_TEXTURE_2D, 0);
            // Flush so the finished texture is visible to the other context in the sharegroup
            glFlush();

            /////////////////////////////////////////////
            // Then draw the texture into the main UIView
            /////////////////////////////////////////////
            if (!mainContext || [EAGLContext setCurrentContext:mainContext] == NO) {
                NSLog(@"Draw: [EAGLContext setCurrentContext:mainContext] failed");
                CFRelease(cvTexture);   // don't leak the texture on early return
                return;
            }
            glBindTexture(GL_TEXTURE_2D, texId);
            // Set texture parameters
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBuffer);
            status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if (status == GL_FRAMEBUFFER_COMPLETE) {
                // Draw the texture on the screen with OpenGL ES 2
                drawUIView(&esContext, textureVertices);
                // Present the UIView
                glBindRenderbuffer(GL_RENDERBUFFER, mainColorBuffer);
                [mainContext presentRenderbuffer:GL_RENDERBUFFER];
                glBindTexture(GL_TEXTURE_2D, 0);
            }
        }
        // Flush the texture cache and release the texture we created
        CVOpenGLESTextureCacheFlush(cvTextureCache, 0);
        CFRelease(cvTexture);
    }
}
void drawUIView(ESContext *esContext, const GLfloat *textureVertices)
{
    UserData *userData = esContext->userData;
    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };

    // Set the viewport to the entire view
    glViewport(0, 0, esContext->viewWidth, esContext->viewHeight);
    // Clear the color buffer
    glClear(GL_COLOR_BUFFER_BIT);
    // Use shader program.
    glUseProgram(userData->passThroughProgram);

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

    // Update uniform values if there are any
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
///
// Draw a triangle using the shader pair created in Init()
//
void drawGraphics(ESContext *esContext)
{
    UserData *userData = esContext->userData;
    static const GLfloat vVertices[] = {  0.0f,  0.5f, 0.0f,
                                         -0.5f, -0.5f, 0.0f,
                                          0.5f, -0.5f, 0.0f };

    // Set the viewport
    glViewport(0, 0, esContext->bufWidth, esContext->bufHeight);
    // Clear the color buffer
    glClear(GL_COLOR_BUFFER_BIT);
    // Use the program object
    glUseProgram(userData->graphicsProgram);

    // Load the vertex data
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
Best answer
A pixel buffer in a texture cache has to be associated with a texture, so you can't use one directly as a CAEAGLLayer render target.
However, you can avoid rendering the same scene twice: render the scene into the texture associated with the CVOpenGLESTextureCache, then draw that texture to the screen in your CAEAGLLayer using a simple quad and a pass-through shader. The small cost this adds to rendering is more than offset by not needing glReadPixels() to extract the scene for movie recording.
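On the movie-recording side, the pixel buffers handed to the texture cache are typically drawn from an AVAssetWriterInputPixelBufferAdaptor's pool, so the buffer you just rendered into can be appended to the movie directly, with no glReadPixels(). The sketch below assumes an already-configured `adaptor` (with kCVPixelBufferPixelFormatTypeKey set to kCVPixelFormatType_32BGRA) and a `frameTime`; those names are placeholders, not part of the code above:

```objectivec
// Grab a buffer from the asset writer's pool, render into it, append it.
CVPixelBufferRef pixelBuffer = NULL;
CVReturn err = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                  adaptor.pixelBufferPool,
                                                  &pixelBuffer);
if (err == kCVReturnSuccess) {
    [self Draw:pixelBuffer];   // render into the buffer via the texture cache
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    CVPixelBufferRelease(pixelBuffer);
}
```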