Problem reading data with glReadPixels() when using a depth buffer and anti-aliasing

This post covers a problem where glReadPixels() returns no data when a depth buffer and anti-aliasing are in use, and how to fix it.

Problem description

I want to capture the screen of my game using glReadPixels(). It works fine in the simulator and on a 2G iPhone running iOS 3.1.1, but not on an iPad running iOS 4.2.1. I have tracked down the cause: on iOS 4.0 and above, on that particular device (the iPad), we attach a depth buffer and use anti-aliasing (multisampling). When glReadPixels() is then used to capture data from the framebuffer, it returns all zeros in the destination buffer.

If we do not attach the depth buffer to the framebuffer and do not use anti-aliasing, it works fine.

The code I am using is:

CGRect screenBounds = [[UIScreen mainScreen] bounds];

int backingWidth = screenBounds.size.width;
int backingHeight = screenBounds.size.height;

NSLog(@"width : %f Height : %f", screenBounds.size.width, screenBounds.size.height);

// 4 bytes (RGBA) per pixel, i.e. one GLuint per pixel
CGSize esize = CGSizeMake(screenBounds.size.width, screenBounds.size.height);
NSInteger myDataLength = esize.width * esize.height * 4;
GLuint *buffer = (GLuint *) malloc(myDataLength);

// Read the colour data of the currently bound framebuffer
glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// Flip the image vertically: OpenGL's origin is bottom-left, Core Graphics' is top-left
for (int y = 0; y < backingHeight / 2; y++) {
    for (int xt = 0; xt < backingWidth; xt++) {
        GLuint top = buffer[y * backingWidth + xt];
        GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + xt];
        buffer[(backingHeight - 1 - y) * backingWidth + xt] = top;
        buffer[y * backingWidth + xt] = bottom;
    }
}

// Wrap the raw pixels in a CGImage; releaseScreenshotData frees the malloc'd buffer
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);
const int bitsPerComponent = 8;
const int bitsPerPixel = 4 * bitsPerComponent;
const int bytesPerRow = 4 * backingWidth;

CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
/*
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[snap setImage:myImage];
[self addSubview:snap];
*/
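
For context, the anti-aliased setup that triggers the problem normally follows Apple's multisampling template: rendering goes into a multisample ("sample") framebuffer with multisampled colour and depth storage, while a separate single-sample "resolve" framebuffer holds the colour renderbuffer that is presented on screen. The sketch below is an assumed reconstruction of that setup, not code from the question (the creation calls and the names context, eaglLayer, sampleColorRenderbuffer and sampleDepthRenderbuffer are assumptions; sampleFrameBuffer, resolveFramebuffer and colorRenderbuffer match the answer code further down):

// Resolve framebuffer: its colour renderbuffer is backed by the CAEAGLLayer
glGenFramebuffers(1, &resolveFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

// Multisample framebuffer: 4x colour and depth storage, rendered into every frame
glGenFramebuffers(1, &sampleFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, sampleFrameBuffer);
glGenRenderbuffers(1, &sampleColorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sampleColorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sampleColorRenderbuffer);
glGenRenderbuffers(1, &sampleDepthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sampleDepthRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, sampleDepthRenderbuffer);

glReadPixels() reads from the framebuffer currently bound to GL_FRAMEBUFFER; with the sample framebuffer bound, its multisampled contents cannot be read back directly, which is consistent with the all-zero destination buffer seen here.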

Any idea how to keep the depth buffer and anti-aliasing and still capture the screen with glReadPixels(), or any other similar function, in OpenGL ES?

Accepted answer

Figured it out! You have to bind the resolve framebuffer back to GL_FRAMEBUFFER before calling glReadPixels():

// Switch to the single-sample resolve framebuffer and its colour renderbuffer for reading
glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glReadPixels(xpos, ypos, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixelByteArray);
// Restore the multisample framebuffer for subsequent rendering
glBindFramebuffer(GL_FRAMEBUFFER, sampleFrameBuffer);
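
Note that this assumes the multisampled contents of the frame have already been resolved (the Apple template resolves right before presenting). If the capture happens before that point, a resolve pass is needed first; a minimal sketch, assuming the GL_APPLE_framebuffer_multisample extension and the buffer names above:

// Resolve the multisampled colour data into the single-sample resolve framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sampleFrameBuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();

// Then read from the resolve framebuffer as shown above
glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);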

Make sure you bind the sample framebuffer as GL_FRAMEBUFFER again before rendering the next frame, but the default Apple template already does this.
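
A short sketch of that end-of-frame sequence under the same assumptions (present the resolved colour renderbuffer, then return to the sample framebuffer for the next frame):

// Present the resolved colour renderbuffer ...
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];

// ... and render the next frame into the multisample framebuffer again
glBindFramebuffer(GL_FRAMEBUFFER, sampleFrameBuffer);
glViewport(0, 0, backingWidth, backingHeight);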
