This article describes how to use a dynamically sized texture array with glTexImage2D. It should be a useful reference for anyone facing the same problem.

Problem description



Currently, I'm able to load a statically sized texture which I have created. In this case it's 512 x 512.

This code is from the header:

#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512

GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];

Here's the usage of glTexImage2D:

glTexImage2D(
	GL_TEXTURE_2D, 0, GL_RGBA,
	TEXTURE_WIDTH, TEXTURE_HEIGHT,
	0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);

And here's how I'm populating the array (rough example, not exact copy from my code):

for (int i = 0; i < getTexturePixelCount(); i++)
{
    int row = i / TEXTURE_WIDTH;     // the array's first index is the row
    int column = i % TEXTURE_WIDTH;  // and its second is the column
    textureArray[row][column][0] = (GLubyte)pixelValue1;
    textureArray[row][column][1] = (GLubyte)pixelValue2;
    textureArray[row][column][2] = (GLubyte)pixelValue3;
    textureArray[row][column][3] = (GLubyte)pixelValue4;
}

How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer style array and dynamically allocate the memory...

Edit:

I think I see the problem: in C++ it can't really be done with a plain multidimensional array whose dimensions are only known at run time. The workaround, as pointed out by Budric, is to use a single-dimensional array whose length is the three dimensions multiplied together, and to compute the index by hand:

GLbyte *array = new GLbyte[xMax * yMax * zMax];

And to access, for example, x/y/z of 1/2/3, you'd compute a row-major index from the coordinates (not multiply them together):

GLbyte byte = array[(1 * yMax + 2) * zMax + 3];  // element at x=1, y=2, z=3

However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?

Edit 2:

Attention OpenGL developers: this can be overcome by using a single-dimensional array of pixels...

... no need to use a 3-dimensional array. In this case I've had to use this workaround, as dynamically sized 3-dimensional arrays are apparently not really possible in C++.
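A minimal sketch of that single-dimensional workaround might look like the following; width, height, and the pixelValueN names are hypothetical stand-ins for the real texture size and pixel source, not part of the original code:

int width = 512;
int height = 512;
GLubyte *pixels = new GLubyte[width * height * 4];  // 4 bytes per RGBA pixel

for (int row = 0; row < height; row++)
{
    for (int column = 0; column < width; column++)
    {
        int offset = (row * width + column) * 4;  // flat row-major index
        pixels[offset + 0] = (GLubyte)pixelValue1;
        pixels[offset + 1] = (GLubyte)pixelValue2;
        pixels[offset + 2] = (GLubyte)pixelValue3;
        pixels[offset + 3] = (GLubyte)pixelValue4;
    }
}

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

delete [] pixels;  // OpenGL copied the data, so the local buffer can go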

Solution

You can use

int width = 1024;
int height = 1024;
GLubyte *texture = new GLubyte[4 * width * height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture;   // remove the no-longer-needed local copy of the texture

However, you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that copy is managed by OpenGL. You can delete, resize, or change your original texture array all you want, and it won't make a difference to the texture you specified to OpenGL.
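For completeness, glTexImage2D uploads into whichever texture object is currently bound, so in a full program the call above is normally preceded by creating and binding one. A minimal sketch, assuming the width, height, and texture variables from the snippet above:

GLuint textureId = 0;
glGenTextures(1, &textureId);              // create a texture object
glBindTexture(GL_TEXTURE_2D, textureId);   // bind it as the current 2D texture

// Without mipmaps, the min filter must be changed for the texture to be usable.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);

delete [] texture;   // OpenGL now holds its own copy of the pixel data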

Edit: C/C++ deals with only one-dimensional arrays natively. The fact that you can write texture[a][b] is hidden and converted by the compiler at compile time; the compiler must know the number of columns, and turns it into texture[a*cols + b].

Use a class to hide the allocation and the access to the texture.
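To illustrate that suggestion, here is a minimal sketch of such a wrapper built on std::vector; the class name Texture2D and its members are hypothetical, and the OpenGL headers are assumed to already be included for GLubyte:

#include <vector>

// Hypothetical wrapper: owns a flat RGBA pixel buffer and hides the
// index arithmetic behind an at(x, y, channel) accessor.
class Texture2D
{
public:
    Texture2D(int width, int height)
        : m_width(width), m_height(height), m_pixels(4 * width * height) {}

    // Reference to one colour component of the pixel at (x, y).
    GLubyte &at(int x, int y, int channel)
    {
        return m_pixels[(y * m_width + x) * 4 + channel];
    }

    const GLubyte *data() const { return m_pixels.data(); }
    int width() const  { return m_width; }
    int height() const { return m_height; }

private:
    int m_width;
    int m_height;
    std::vector<GLubyte> m_pixels;  // vector frees us from manual delete[]
};

With that in place, the upload becomes glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex.width(), tex.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.data());.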

For academic purposes, if you really want dynamic multi dimensional arrays the following should work:

#include <cassert>

int rows = 16, cols = 16;
char *storage = new char[rows * cols];    // one contiguous block of memory
char **accessor2D = new char *[rows];     // a table of row pointers
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i * cols;   // point each row into the block
}
accessor2D[5][5] = 2;
assert(storage[5 * cols + 5] == accessor2D[5][5]);   // same byte, two views
delete [] accessor2D;
delete [] storage;

Notice that in all the cases I'm using 1D arrays; they are just arrays of pointers, and arrays of pointers to pointers, so there's memory overhead to this. Also, this is done for a 2D array without colour components; for 3D dereferencing it gets really messy. Don't use this in your code.
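If 3D-style access (x, y, colour channel) is still wanted without the pointer tables, a small helper over the flat array avoids the mess entirely; the function name index3 below is hypothetical:

// Hypothetical helper: maps (x, y, channel) to an offset in a flat
// row-major RGBA buffer, the same layout glTexImage2D reads for
// GL_RGBA / GL_UNSIGNED_BYTE data.
inline int index3(int x, int y, int channel, int width)
{
    return (y * width + x) * 4 + channel;
}

// Usage: set the red channel of pixel (10, 20) to full intensity.
// texture[index3(10, 20, 0, width)] = 255;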

This concludes the article on how to use a dynamically sized texture array with glTexImage2D. We hope the answer recommended above is helpful, and thank you for your support!
