Posts Tagged ‘computer graphics’

WARP

I recently read about the Windows Advanced Rasterization Platform (WARP), which is a software rasterizer that will ship as part of Windows 7. WARP is targeted at:

Casual Games: Games have simple rendering requirements but also want the ability to use impressive visual effects that can be hardware accelerated. The majority of the best-selling game titles for Windows are either simulations or casual games, neither of which requires high-performance graphics, but both styles of games greatly benefit from modern shader-based graphics and the ability to scale on hardware if present.

Existing Non-Gaming Applications: There is a large gamut of graphical applications that want to minimize the number of code paths in their rendering layer. WARP10 enables these applications to implement a single Direct3D 10, 10.1, or 11 code-path that can target a very large number of machine configurations.

Advanced Rendering Games: Game developers that want to isolate graphics card or driver specific rendering errors. We believe that all games, even extremely graphically demanding games, would benefit from being able to render their content using WARP to validate that any visual artifacts they might experience are due to rendering errors or problems with hardware or drivers.

Using WARP as a tool for isolating rendering errors is understandable, but as a fallback for DirectX 10 casual games or non-gaming applications attempting to run on a PC without a DX10 GPU, a few things come to mind.

  • As a fallback mechanism, it goes back too far. We’re talking about going from DX10 -> software rasterization. There’s still lots of graphics hardware out there that targeted previous versions of DirectX, at the very least DX7, DX8, and DX9. Why not allow for seamless fallback to these earlier classes of graphics hardware, instead of making a gigantic leap backwards to software rasterization? From a developer’s perspective, there would be a real benefit here in writing a DX10 codepath and having it run on older hardware.
  • DX10 adoption is slow to non-existent due to the slow adoption rate of Windows Vista. Unless Microsoft is able to generate massive demand for Windows 7, WARP will have little impact, simply because DX10 itself has had so little impact.
  • A project like WARP seems to be based on the mentality that a GPU is something special for a PC instead of a requirement. GPU rasterization is orders of magnitude faster than software rasterization, and the price of a decent card is under $50. Why is setting a GPU requirement such an endeavor, for Microsoft of all companies?!
  • On performance, WARP beats Intel integrated graphics. That really isn’t a surprise or any sort of accomplishment; Intel is just selling overpriced garbage here.
  • Perhaps Microsoft is working on a project like WARP, instead of setting stricter graphics hardware requirements for Windows 7, because of another shady deal with Intel. Remember the one with Vista.

Unexpected results

Every once in a while I’ll test some piece of code and encounter a bug or some unexpected behavior that produces something weird, peculiar, or just pretty damn cool. Here’s a perfect example:

[Image: weird output]

This is from some vectorization code I’m working on. Just for the hell of it, I decided to run the output image (the one with the green pixels, which represent vertices of a polygon) through the vectorization algorithm again. The subsequent images show what happened as I kept running the vectorization algorithm on the output, in effect creating a feedback loop. (The colors present in the subsequent images come from an earlier stage of the vectorization process, whose output is no longer adequately processed; that’s what creates the visible pattern.)

Automatic mipmap generation on Radeon 9500

I stumbled upon an annoying little graphics bug recently where I was getting corrupted textures on a Radeon 9500 graphics card. I eventually came across this thread, which hinted at the problem. Apparently, automatic mipmap generation is broken on the Radeon 9500 and corrupts your textures (I got rainbow colors, weird blocks, etc.). I’m pretty sure the hardware supports it, so perhaps it’s a driver issue, but in any case it didn’t work.

What’s even more annoying about this is that everything worked perfectly on a much older Radeon 7500 card.

Zerospace lighting model

Work on zerospace has finally picked up in the past few weeks, and a few days ago I posted the first screenshot on the zerospace blog. Not much to see yet, just a background, a starfield, and an untextured model. However, I did put in some major work on the lighting system, which is visible in the screenshot (to a certain extent; the specular highlights are dull and, although they’re there, they’re only apparent when the model rotates).

Anyway, what I wanted to discuss in this post is the lighting model, since I think it’s fairly unique and it gives some amazing results.

Ambient
The ambient component is done per-vertex (i.e. all computations are done in the vertex shader and the color is interpolated over the face of the triangle) and consists of 7 directional lights hitting the model from various directions. It’s sort of an ultra-simplified version of ambient occlusion mapping.
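To make that concrete, here’s a minimal HLSL sketch of what such a per-vertex ambient pass might look like. The light directions and colors here are placeholders invented for illustration, not the actual zerospace values:

    // Sketch: 7 directional lights accumulated per-vertex.
    // Directions and colors are illustrative placeholders set by the app.
    static const int NUM_AMBIENT_LIGHTS = 7;
    float3 ambientDirs[NUM_AMBIENT_LIGHTS];   // direction each light points in
    float3 ambientColors[NUM_AMBIENT_LIGHTS];

    float3 ComputeAmbient(float3 worldNormal)
    {
        float3 ambient = 0;
        for (int i = 0; i < NUM_AMBIENT_LIGHTS; i++)
        {
            // Lambertian term per light; summing lights arriving from all
            // around the model gives the crude ambient-occlusion-like effect.
            ambient += saturate(dot(worldNormal, -ambientDirs[i])) * ambientColors[i];
        }
        return ambient;
    }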

Diffuse
The diffuse component is sampled from a 2d texture. This texture is a scaled-down and heavily blurred version of the background texture (done offline; a heavy blur, such as the one required, would be too expensive in the pixel shader). Scaling it down is simply a matter of performance, as the background texture is large (2048×2048). Blurring (I do a Gaussian blur) is a trick to get the diffuse lighting from a scene (this is a bit difficult to explain; I’ll try to do so in another post).

So how is the texture mapped onto the 3d model? Spherical texture mapping! (see this article for an explanation). Note that the normal vector used to compute the texture coordinates is the normal vector transformed by the world matrix, since I don’t want to texture map the diffuse lighting onto the 3d model in model space; that would make the lighting “static”, i.e. when the model rotates, the lighting values wouldn’t change to reflect the change in orientation.
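Here’s a rough HLSL sketch of that mapping. I’m assuming a simple sphere-map projection (the unit normal’s x and y remapped into [0,1]); the exact formulation in the linked article may differ:

    // Sketch: spherical texture coordinates from the world-space normal.
    sampler2D diffuseMap; // the blurred, scaled-down background texture

    float2 SphereMapUV(float3 modelNormal, float3x3 worldRotation)
    {
        // Transform into world space so the lighting tracks the
        // model's orientation instead of being "static"
        float3 n = normalize(mul(modelNormal, worldRotation));
        // Project the unit normal's x/y into [0,1] texture space
        return n.xy * 0.5 + 0.5;
    }

    // In the pixel shader: float3 diffuse = tex2D(diffuseMap, uv).rgb;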

Specular
The specular component is done per-pixel, because per-vertex specular highlights usually look terrible (there are some other issues in general, but this was the main one for zerospace). For the specular lighting there is really only 1 light, a single directional light pointing down the z-axis. However, there are 8 view vectors, and the specular computation is done between the single light and each of the 8 view vectors. I came up with this hack because I found experimenting with multiple view vectors easier than experimenting with multiple lights (different light vectors would produce specular highlights that were either too extreme or too weak). Anyway, I do the specular computation with an exponent of 7, and I then multiply the results by 0.075 to dull the highlights.
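A minimal HLSL sketch of this hack might look like the following. The 8 view vectors are placeholders, and I’m assuming a Phong-style reflect-and-dot formulation, which may not match zerospace’s exact computation:

    // Sketch: one directional light down the z-axis, tested against
    // 8 view vectors instead of 8 lights against one view.
    static const float3 lightDir = float3(0, 0, 1);
    float3 viewDirs[8]; // placeholder view vectors set by the app

    float ComputeSpecular(float3 worldNormal)
    {
        float3 n = normalize(worldNormal);
        float3 r = reflect(lightDir, n); // light reflected about the normal
        float spec = 0;
        for (int i = 0; i < 8; i++)
        {
            // One specular term per view vector, with the exponent of 7
            spec += pow(saturate(dot(r, normalize(viewDirs[i]))), 7);
        }
        return spec * 0.075; // dull the highlights
    }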

One final note: the screenshot is no longer 100% representative of the lighting system. I just found a major (and stupid!) mistake where I was multiplying the ambient and diffuse components together instead of adding them. However, what I described above should stay the same; I just have to tweak some values to prevent the lighting from being too bright or too dark.
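In shader terms the fix is a one-liner; a sketch, assuming the specular term is simply added on top:

    // Ambient and diffuse are added, not multiplied
    float3 color = ambient + diffuse + specular; // was: (ambient * diffuse) + specular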

Also, on a more final note, this is not the complete lighting model for zerospace. Lighting from weapons, particles, etc. will also be taken into account.