I'm working on a small project with Oculus Rift support, and I render my particles as point sprites. I calculate each point sprite's size (in pixels) in the vertex shader, based on its distance from the "camera". When drawing on the default screen (not on the Rift) the sizes work perfectly, but when I switch to the Rift I notice the following:
Screenshots:
Rift disabled: http://i.imgur.com/EoguiF0.jpg
Rift enabled: http://i.imgur.com/4IcBCf0.jpg
This is the vertex shader:
#version 120
attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;
uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;
varying vec4 color;
void main()
{
    vec4 local_pos = vec4(attr_pos, 1.0);
    vec4 eye_pos = st_view_matrix * local_pos;

    /* Project a vector of length attr_size, aligned with the X axis and
     * placed at the particle's eye-space depth, to figure out how many
     * pixels the particle should cover on screen. */
    vec4 proj_vector = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
    float proj_size = st_screen_size.x * proj_vector.x / proj_vector.w;
    gl_PointSize = proj_size;

    gl_Position = st_proj_matrix * eye_pos;
    color = attr_col;
}
The st_screen_size uniform is the size of the viewport. Since I'm rendering to a single framebuffer on the Rift (half of it for each eye), the value of st_screen_size should be (framebuffer_width / 2.0, framebuffer_height).
These are my draw calls:
/*Drawing starts with a call to ovrHmd_BeginFrame.*/
ovrHmd_BeginFrame(game::engine::ovr_data.hmd, 0);
/*Start drawing onto our texture render target.*/
game::engine::ovr_rtarg.bind();
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Update the particles.
game::engine::nuc_manager->update(dt, get_msec());
/*for each eye... */
for(unsigned int i = 0 ; i < 2 ; i++){
    ovrEyeType eye = game::engine::ovr_data.hmd->EyeRenderOrder[i];

    /* -- Viewport Transformation --
     * Set up the viewport to draw in the left half of the framebuffer when we're
     * rendering the left eye's view (0, 0, width / 2.0, height), and in the right half
     * of the framebuffer for the right eye's view (width / 2.0, 0, width / 2.0, height).
     */
    int fb_width = game::engine::ovr_rtarg.get_fb_width();
    int fb_height = game::engine::ovr_rtarg.get_fb_height();
    glViewport(eye == ovrEye_Left ? 0 : fb_width / 2, 0, fb_width / 2, fb_height);

    // Send the viewport size to the shader.
    set_unistate("st_screen_size", Vector2(fb_width / 2.0, fb_height));

    /* -- Projection Transformation --
     * We'll just have to use the projection matrix supplied by the Oculus SDK for this eye.
     * Note that libovr matrices are the transpose of what OpenGL expects, so we have to
     * send the transposed ovr projection matrix to the shader. */
    proj = ovrMatrix4f_Projection(game::engine::ovr_data.hmd->DefaultEyeFov[eye], 0.01, 40000.0, true);
    Matrix4x4 proj_mat;
    memcpy(proj_mat[0], proj.M, 16 * sizeof(float));

    // Send the projection matrix to the shader.
    set_projection_matrix(proj_mat);

    /* -- View/Camera Transformation --
     * We need to construct a view matrix by combining all the information provided by
     * the Oculus SDK about the position and orientation of the user's head in the world.
     */
    pose[eye] = ovrHmd_GetHmdPosePerEye(game::engine::ovr_data.hmd, eye);
    camera->reset_identity();
    camera->translate(Vector3(game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.x,
                              game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.y,
                              game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.z));

    /* Construct a quaternion from the data of the Oculus SDK and rotate the view matrix. */
    Quaternion q = Quaternion(pose[eye].Orientation.w, pose[eye].Orientation.x,
                              pose[eye].Orientation.y, pose[eye].Orientation.z);
    camera->rotate(q.inverse().normalized());

    /* Translate the view matrix with the positional tracking. */
    camera->translate(Vector3(-pose[eye].Position.x, -pose[eye].Position.y, -pose[eye].Position.z));
    camera->rotate(Vector3(0, 1, 0), DEG_TO_RAD(theta));

    // Send the view matrix to the shader.
    set_view_matrix(*camera);

    game::engine::active_stage->render(STAGE_RENDER_SKY | STAGE_RENDER_SCENES | STAGE_RENDER_GUNS |
                                       STAGE_RENDER_ENEMIES | STAGE_RENDER_PROJECTILES, get_msec());
    game::engine::nuc_manager->render(RENDER_PSYS, get_msec());
    game::engine::active_stage->render(STAGE_RENDER_COCKPIT, get_msec());
}
/* After drawing both eyes into the texture render target, revert to drawing directly
 * to the display, and call ovrHmd_EndFrame to let the Oculus SDK draw both images
 * onto the HMD screen, properly compensated for lens distortion and chromatic
 * aberration.
 */
game::engine::ovr_rtarg.unbind();
ovrHmd_EndFrame(game::engine::ovr_data.hmd, pose, &game::engine::ovr_data.fb_ovr_tex[0].Texture);
This problem has been bugging me for many days now... I feel like I've reached a dead end. I could just use billboarded quads... but I don't want to give up that easily :) Plus, point sprites are faster.
Does the math behind distance-based point size attenuation change when rendering on the Rift?
Am I failing to take something into account?
Math is not (by far) my strongest point. :) Any insight would be greatly appreciated!
PS: If any additional info about the code I've posted is needed, I'll gladly provide it.

Best Answer
Basically, you first transform your point to eye space to find its Z coordinate there (the distance from the viewer), then you construct a vector aligned with the X axis with the desired particle size, and project it to figure out how many pixels it covers after projection and (sort of) the viewport transformation.
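In formula form (my paraphrase of the shader above, assuming a standard OpenGL-style perspective matrix whose first row is $(P_{00}, 0, P_{02}, 0)$ and whose last row is $(0, 0, -1, 0)$), the shader computes:

$$\text{proj\_size} = W \cdot \frac{P_{00}\,s + P_{02}\,z}{-z}$$

where $W$ is st_screen_size.x, $s$ is attr_size, and $z$ is the eye-space depth.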
This is perfectly reasonable, assuming the projection matrix is symmetric. When dealing with the Rift, that assumption is wrong. I've drawn a diagram to better illustrate the problem:
http://i.imgur.com/vm33JUN.jpg
As you can see, when the frustum is not symmetric, which is certainly the case with the Rift, using the distance of the projected point from the screen center will give you wildly different values for each eye, and nothing like the "correct" projected size you're looking for.
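The algebra (under the same matrix convention as above) shows why: for a symmetric frustum $l = -r$, so the off-center term $P_{02} = \frac{r+l}{r-l}$ is zero and the formula measures only the particle's size. Each of the Rift's per-eye frusta is off-center, so $P_{02} \neq 0$ (with opposite signs for the two eyes), and the depth-dependent term $P_{02}\,z$ adds an offset to proj_vector.x that has nothing to do with the particle's size.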
What you must do instead is project two points, e.g. (0, 0, z, 1) AND (attr_size, 0, z, 1), in the same way, and calculate their difference in screen space (after projection, perspective divide, and viewport transform).
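Here is a minimal sketch of what that could look like, based on the vertex shader from the question (untested; it assumes the same attributes and uniforms, and a projection matrix of the form discussed above):

#version 120
attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;
uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;
varying vec4 color;
void main()
{
    vec4 eye_pos = st_view_matrix * vec4(attr_pos, 1.0);

    /* Project the particle's center and a point offset by attr_size along
     * the X axis, both at the particle's eye-space depth. */
    vec4 p0 = st_proj_matrix * vec4(0.0, 0.0, eye_pos.z, eye_pos.w);
    vec4 p1 = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);

    /* Their screen-space difference is the size in pixels. The off-center
     * term P02 * z appears in both p0.x and p1.x, so it cancels in the
     * subtraction. The 0.5 comes from the viewport transform: the NDC
     * range [-1, 1] spans the full viewport width. */
    float ndc_dx = p1.x / p1.w - p0.x / p0.w;
    gl_PointSize = 0.5 * st_screen_size.x * ndc_dx;

    gl_Position = st_proj_matrix * eye_pos;
    color = attr_col;
}

Note that the original shader did not have the 0.5 factor, so if your particle sizes were tuned against the old formula, they may come out half as large with this version.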