How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?

I need it to run on iOS 4.x, so I can't use CVOpenGLESTextureCacheCreateTextureFromImage. What should I do?

As a follow-up, I'm using the code below to supply YUV data instead of RGB, but the picture is wrong and the screen is green. It looks as if the UV plane isn't working.

CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);

glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE,
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Upload the interleaved CbCr plane as a two-channel (luminance + alpha)
// texture at half the luma resolution
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA,
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

[self drawFrame];

glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);


How can I fix this?

Best answer

The fast texture upload capability in iOS 5.0 makes for very fast uploading of camera frames and extraction of texture data, which is why Apple uses it in their latest sample code. For camera data, I've seen upload times for a 640x480 frame go from 9 ms to 1.8 ms using these iOS 5.0 texture caches on an iPhone 4S, and for movie capture I've seen more than a fourfold improvement when switching to them.

That said, you may still want to provide a fallback for users who haven't yet updated to iOS 5.x. I do this in my open source image processing framework by checking at runtime for the texture upload capability:

+ (BOOL)supportsFastTextureUpload;
{
    return (CVOpenGLESTextureCacheCreate != NULL);
}


If this returns NO, I use the standard upload process that has been available since iOS 4.0:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);

CVPixelBufferLockBaseAddress(cameraFrame, 0);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

// Do your OpenGL ES rendering here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
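For this BGRA path to work, the capture output also has to be asked for BGRA pixel buffers rather than the biplanar YUV format GLCameraRipple requests. A sketch of that configuration (the `videoOutput` variable name is an assumption, not from the sample):

```objectivec
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Request 32-bit BGRA frames so the single glTexImage2D upload above applies
[videoOutput setVideoSettings:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
```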


One quirk of GLCameraRipple's upload process is that it uses YUV planar frames (split into Y and UV planes) rather than a single BGRA image. I get good performance from BGRA uploads, so I haven't seen the need to handle YUV data myself. You could either modify GLCameraRipple to use BGRA frames along with the code above, or rework what I have above into a YUV planar data upload.
