NOTE: I’m going to try something new here: rather than write about something I have already solved, this series will be my notes as I work through this problem. As such, this is *not* a guide. The code is a pile of hacks and I will make silly mistakes.

### The Problem

I am working on a city-building game. I want to be able to use detailed models with large numbers of polygons and textures, but without the runtime rendering cost that entails. To do so, I want to pre-render detailed models as textures for lower level-of-detail (LOD) models. This is not a particularly original solution (it is the system SimCity 4 uses). There are a few restrictions implied by this choice:

- Camera projection must be orthographic.
  - A perspective projection appears to skew the model the further it gets from the center of the view; an orthographic projection does not.
- Camera angle must be fixed (or drawn from a finite set of fixed angles).
  - While camera *position* is irrelevant (an orthographic projection renders a model identically no matter how far away it is), camera rotation is not: each camera angle needs its own LOD image. Therefore, we need a small number of them.
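As a concrete sketch of that "finite set of fixed angles", consider four yaws 90 degrees apart, SimCity 4-style. (The names here are illustrative, not from the game's code.) Rotating the camera's forward vector about the world up axis gives each view direction:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical sketch: rotate a direction about the world up axis (+y).
// Four calls with yaw = k * pi/2, k = 0..3, give four fixed view
// directions, each of which would get its own pre-rendered LOD image.
Vec3 RotateAboutY(Vec3 v, float radians) {
    float c = std::cos(radians);
    float s = std::sin(radians);
    // Standard rotation matrix about the y axis, applied directly.
    return Vec3{ v.x * c + v.z * s, v.y, -v.x * s + v.z * c };
}
```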

The process is simple enough: render the detail model for each view. Projecting the vertices of the LOD model with the same view produces the proper texture coordinates for the LOD. Then, pack all of the rendered textures into an atlas.
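To make "projecting the vertices of the LOD model produces the texture coordinates" concrete, here is a minimal sketch (the names are mine, not the project's). With an orthographic projection there is no perspective divide, so each projected LOD vertex's clip-space xy is already in normalized device coordinates and remaps directly from [-1, 1] to [0, 1] texture space:

```cpp
#include <cassert>

struct Vec2 { float u, v; };

// Sketch: an orthographic projection leaves clip w = 1, so the projected
// vertex's xy needs only a remap from NDC [-1, 1] to texture [0, 1].
Vec2 ClipToUV(float clip_x, float clip_y) {
    return Vec2{ clip_x * 0.5f + 0.5f, clip_y * 0.5f + 0.5f };
}
```

A vertex at the bottom-left of the clip volume lands at UV (0, 0); one at the top-right lands at (1, 1).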

### Orthographic Rendering

Rendering an orthographic projection is, conceptually, very straightforward. After transforming the world to camera space (moving the camera to the origin and aligning the z-axis with the camera’s forward view), we select a box of space and map it onto the canonical view volume: the cube with a minimum point of (-1, -1, -1) and a maximum point of (1, 1, 1). The z-component of each point is then discarded.

I will not get into the math here. Wikipedia has a nicely concise explanation.
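For reference, here is what the box-to-cube mapping works out to in the classic glOrtho convention. This is a generic sketch, not the exact OrthoProjectionMatrix used later; in particular, the z handling (camera looking down -z, near/far given as positive distances) may differ from the project's own math library:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Classic glOrtho-style mapping: camera-space x in [l, r] and y in [b, t]
// map linearly to [-1, 1]; z in [-n, -f] (the camera looks down -z) maps
// to [-1, 1] as well, with z = -n landing on the near plane at -1.
Vec3 OrthoProject(Vec3 p, float l, float r, float b, float t, float n, float f) {
    return Vec3{
        2.0f * (p.x - l) / (r - l) - 1.0f,
        2.0f * (p.y - b) / (t - b) - 1.0f,
        -2.0f * p.z / (f - n) - (f + n) / (f - n),
    };
}
```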

What this *means* is that we can fit this box tightly to the model we want to render. With clever application of glViewport, we could even render all of our views in one go. (At the moment, I have not yet implemented this.) This makes building the texture atlas much simpler.

### Calculating Camera-space LOD

To get this tight fit, we need to know the bounds of the rendered model in camera space. Since we are projecting onto the LOD model, *its* bounds are what we’re concerned with. (This implies, by the way, that the LOD model must completely enclose the detail model.)

```cpp
struct AABB {
    vec3 min;
    vec3 max;
};

AABB GetCameraSpaceBounds(Model * model, mat4x4 camera_from_model) {
    AABB result;
    vec4 * model_verts = model->vertices;
    vec4 camera_vert = camera_from_model * model_verts[0];
    // We don't initialize to 0 vectors, because we don't
    // guarantee the model is centered on the origin.
    result.min = vec3(camera_vert);
    result.max = vec3(camera_vert);
    for (u32 i = 1; i < model->vertex_count; ++i) {
        camera_vert = camera_from_model * model_verts[i];
        result.min.x = min(result.min.x, camera_vert.x);
        result.min.y = min(result.min.y, camera_vert.y);
        result.min.z = min(result.min.z, camera_vert.z);
        result.max.x = max(result.max.x, camera_vert.x);
        result.max.y = max(result.max.y, camera_vert.y);
        result.max.z = max(result.max.z, camera_vert.z);
    }
    return result;
}
```

This bounding box gives us the clip volume to render.

```cpp
AABB lod_bounds = GetCameraSpaceBounds(lod_model, camera_from_model);
mat4x4 eye_from_camera = OrthoProjectionMatrix(
    lod_bounds.max.x, lod_bounds.min.x,  // right, left
    lod_bounds.max.y, lod_bounds.min.y,  // top, bottom
    lod_bounds.min.z, lod_bounds.max.z); // near, far
mat4x4 eye_from_model = eye_from_camera * camera_from_model;
```

### Rendering LOD Texture

Rendering to a texture requires a framebuffer. (In theory, we could also simply render to the screen and extract the result with glReadPixels. However, that limits us to a single color output and makes it more difficult to do any GPU postprocessing.)

After binding a new framebuffer, we need to create the texture to render onto:

```cpp
glGenTextures(1, &result->texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, result->texture);

// render_width and render_height are the width and height of the
// camera space lod bounding box.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, render_width, render_height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, result->texture, 0);
glViewport(0, 0, render_width, render_height);
```

This gives us a texture large enough to fit our rendered image, attaches it to the framebuffer, and sets the viewport to the entire framebuffer. To render multiple views to one texture, we’d allocate a larger texture and use the viewport call to select the target region.
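A sketch of that "larger texture plus viewport" idea (the helper and its fixed grid layout are assumptions of mine, not the project's code): with equally sized cells, each view's target region is a grid cell, and its rectangle is what we would hand to glViewport before rendering that view.

```cpp
#include <cassert>

struct Viewport { int x, y, w, h; };

// Hypothetical helper: lay out view i in a left-to-right, bottom-up grid
// of fixed-size cells inside one atlas texture. The returned rectangle
// would be passed to glViewport before rendering that view.
Viewport AtlasCell(int i, int cols, int cell_w, int cell_h) {
    return Viewport{ (i % cols) * cell_w, (i / cols) * cell_h, cell_w, cell_h };
}
```

In practice each view is fit to its own bounding box and so has its own size, which is why a real atlas build would want a rectangle packer rather than a fixed grid.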

Now we simply render the detail model with the eye_from_model transform calculated earlier. The result:

(The detail shader used simply maps position to color.)

That’s all for now. Next time: how do we light this model?