Archive for the ‘Graphics and Rendering’ Category

Circular stipple patterns on an HTML5 canvas

In my previous post on stipple patterns, I presented code to draw a few simple stipple patterns based on drawing single pixels at fixed locations. In this post, I’ll present something just a bit more complex: drawing circles to create a circular stipple pattern, again writing a shader that makes use of the GraphicsCore and FXController classes.

Shader.circleStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 1.0; // blend factor toward white; 1.0 leaves the source red channel unchanged
    var r1 = r / 255.0;
    var rF = Math.floor((alpha * r1 + (1.0 - alpha)) * 255.0);

    var circleMaxDiam = 12;                  // circle center at every 12th pixel; also defines the max diameter of a circle
    var circleMaxRadius = circleMaxDiam / 2; // maximum radius of a circle

    // figure out the x, y indices of the circle we're within
    // x, y need to be shifted by the circle radius b/c circleMaxDiam defines the offset of the circle center
    // ... e.g. going along the x-axis, we are within the next circle not at x/circleMaxDiam, but at (x+6)/circleMaxDiam
    var iX = Math.floor((x + circleMaxRadius) / circleMaxDiam);
    var iY = Math.floor((y + circleMaxRadius) / circleMaxDiam);

    // multiply the circle indices by the diameter to get the actual coordinates of the circle's center
    var targetX = iX * circleMaxDiam;
    var targetY = iY * circleMaxDiam;

    // calculate squared distance to the center of the circle we are within
    var dist = (targetX - x) * (targetX - x) + (targetY - y) * (targetY - y);

    if (dist < 25) { // within radius 5 (5^2 = 25) of the center
        GraphicsCore.setPixel(bufWrite, index, rF, 0, 0, 255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.circleStippleShader.numPassesRequired = 1;

Circle stipple

Conceptually, we define the center of a circle at every 12th pixel (along both the x and y axes). At every pixel (x,y) we figure out which circle we are within and calculate the squared distance to the center. If the squared distance is less than our threshold (25, i.e. within a radius of 5 pixels), we change the color of the pixel (using the red channel only).
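To make the indexing concrete, here's the arithmetic for one sample pixel (a worked example; it's not from the original post):

// worked example for pixel (x, y) = (50, 40):
// iX = Math.floor((50 + 6) / 12) = 4, iY = Math.floor((40 + 6) / 12) = 3
// circle center: (targetX, targetY) = (4 * 12, 3 * 12) = (48, 36)
// squared distance: (48 - 50)^2 + (36 - 40)^2 = 4 + 16 = 20
// 20 < 25, so the pixel falls inside the circle and is drawn using only its red channel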

Stipple patterns on an HTML5 canvas

I wanted to play around a bit with stipple patterns after seeing stippling done with photos on the LinkedIn news feed. However, what I’m going to present is not what LinkedIn does. LinkedIn applies the stipple pattern as a background-image on a DOM element above an <img> element with a (fairly low-resolution) JPEG – the stippling may help to alleviate the negative visual impact of the low-resolution image. What I’m going to show is how to do stippling on an HTML5 canvas, which allows for a much greater degree of freedom in terms of what’s possible, but is also slower and requires a modern browser.

I’m going to make use of the GraphicsCore and FXController classes from a previous post, Gaussian blur on an HTML5 canvas. In that post I presented the concept of writing shaders as plug-ins to the FXController class to apply different per-pixel effects. What I’m going to present here are shaders for a few simple stipple patterns. Applying a shader is simply a matter of passing it into the constructor for the FXController class, e.g.

var theShader = Shader.crossStippleShader;
var fxCtrlr = new FXController(ctxSource, ctxDest, theShader, width, height, 100, 1);
fxCtrlr.init();

Checkerboard Stipple

This shader has the effect of creating a checkerboard pattern.
The source pixel is preserved if (x+y) % 2 == 0, otherwise the pixel’s alpha is reduced to 66.

Shader.checkerboardStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    if ((x + y) % 2 == 0) {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 66);
    }
}
Shader.checkerboardStippleShader.numPassesRequired = 1;

Checkerboard Stipple

Dot Stipple

This shader blends a white pixel into the source image where x%2 == 0 && y%2 == 0, in effect creating a dotted grid pattern.

The alpha blending code is a straightforward implementation of alpha compositing, but since we’re blending with white (where r = 1.0, g = 1.0, and b = 1.0) the equation simplifies to out = alpha*src + (1.0 - alpha), with no second color value; we’re just biasing the source color by the alpha value. Also note that this is different from simply changing the alpha of the source pixel (as was done in the Checkerboard Stipple shader): here we’re always blending with white, whereas in the previous shader we’re blending with whatever is the background of the DOM element.

Shader.dotStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.8;

    var r1 = r / 255.0;
    var rF = Math.floor((alpha * r1 + (1.0 - alpha)) * 255.0);

    var g1 = g / 255.0;
    var gF = Math.floor((alpha * g1 + (1.0 - alpha)) * 255.0);

    var b1 = b / 255.0;
    var bF = Math.floor((alpha * b1 + (1.0 - alpha)) * 255.0);

    if (x % 2 == 0 && y % 2 == 0) {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.dotStippleShader.numPassesRequired = 1;

Dot Stipple
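As an aside, the blend-toward-white computation above reappears in the shaders that follow; it could be pulled into a small helper along these lines (a hypothetical refactor, not part of the original code):

// blend an 8-bit channel value c toward white by the given alpha
GraphicsCore.blendWithWhite = function (c, alpha)
{
    return Math.floor((alpha * (c / 255.0) + (1.0 - alpha)) * 255.0);
}

// usage, e.g.: var rF = GraphicsCore.blendWithWhite(r, 0.8);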

Quincunx Stipple

With this shader we blend a white pixel into the source image at every 4th pixel (x%4 == 0 && y%4 == 0, the target pixel) and also at the 4 orthogonally adjacent pixels around the target, creating a quincunx pattern.

Shader.quincunxStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.78;

    var r1 = r / 255.0;
    var rF = Math.floor((alpha * r1 + (1.0 - alpha)) * 255.0);

    var g1 = g / 255.0;
    var gF = Math.floor((alpha * g1 + (1.0 - alpha)) * 255.0);

    var b1 = b / 255.0;
    var bF = Math.floor((alpha * b1 + (1.0 - alpha)) * 255.0);

    if ((x % 4 == 0 && y % 4 == 0) ||
        ((x + 1) % 4 == 0 && y % 4 == 0) ||
        ((x - 1) % 4 == 0 && y % 4 == 0) ||
        (x % 4 == 0 && (y + 1) % 4 == 0) ||
        (x % 4 == 0 && (y - 1) % 4 == 0))
    {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else
    {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.quincunxStippleShader.numPassesRequired = 1;

Quincunx Stipple

Cross Stipple

Similar to the quincunx stipple, but we blend in a white pixel at every 6th pixel (x%6 == 0 && y%6 == 0, the target pixel), at the 4 orthogonally adjacent pixels around the target, and at 4 additional pixels extending beyond the orthogonals, creating a cross (“+”) pattern.

Shader.crossStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.78;

    var r1 = r / 255.0;
    var rF = Math.floor((alpha * r1 + (1.0 - alpha)) * 255.0);

    var g1 = g / 255.0;
    var gF = Math.floor((alpha * g1 + (1.0 - alpha)) * 255.0);

    var b1 = b / 255.0;
    var bF = Math.floor((alpha * b1 + (1.0 - alpha)) * 255.0);

    if ((x % 6 == 0 && y % 6 == 0) ||
        ((x + 1) % 6 == 0 && y % 6 == 0) ||
        ((x - 1) % 6 == 0 && y % 6 == 0) ||
        ((x + 2) % 6 == 0 && y % 6 == 0) ||
        ((x - 2) % 6 == 0 && y % 6 == 0) ||
        (x % 6 == 0 && (y + 1) % 6 == 0) ||
        (x % 6 == 0 && (y - 1) % 6 == 0) ||
        (x % 6 == 0 && (y + 2) % 6 == 0) ||
        (x % 6 == 0 && (y - 2) % 6 == 0))
    {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else
    {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.crossStippleShader.numPassesRequired = 1;

Cross Stipple

That’s all for now. There are tons of variations possible with only minor code changes to alter the blending, color, and shape of the stipple pattern.
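For example, here’s a diagonal-stripe variant (my own sketch, following the same shader signature as the ones above; it wasn’t part of the original set):

// diagonal stripe stipple: blend toward white along diagonal lines spaced 6px apart
Shader.stripeStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.8;

    var rF = Math.floor((alpha * (r / 255.0) + (1.0 - alpha)) * 255.0);
    var gF = Math.floor((alpha * (g / 255.0) + (1.0 - alpha)) * 255.0);
    var bF = Math.floor((alpha * (b / 255.0) + (1.0 - alpha)) * 255.0);

    if ((x + y) % 6 == 0) {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.stripeStippleShader.numPassesRequired = 1;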

TransparencyExtract

A long while ago, I wrote a post on a method to extract transparency from images with solid-colored backgrounds, and somewhat effectively reduce the halo effect around objects in the image. I never got around to releasing the code or application, which I’m finally doing now.

All code + app is here.
It’s in C# + WinForms, and I put the entire VS solution in the bitbucket repo.

TransparencyExtract app

The guts of it all is the PerformExtraction() function in TransparencyExtract.cs

public void PerformExtraction()
{
    ColorF bgColorF = new ColorF(backgroundColor);
    ColorF fgColorF = new ColorF(foregroundColor);
    float backL = bgColorF.L();
    float foreL = fgColorF.L();

    float minL = Math.Min(backL, foreL);
    float deltaL = Math.Max(backL, foreL) - minL;

    newBitmap = new Bitmap(srcBmp.Width, srcBmp.Height, PixelFormat.Format32bppArgb);

    BitmapData bdDest = newBitmap.LockBits(new Rectangle(0, 0, newBitmap.Width, newBitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, newBitmap.PixelFormat);
    IntPtr ptrDest = bdDest.Scan0;

    BitmapData bd = srcBmp.LockBits(new Rectangle(0, 0, srcBmp.Width, srcBmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, srcBmp.PixelFormat);
    IntPtr ptr = bd.Scan0;

    int bpp = 4;
    int bytes = srcBmp.Height * bd.Stride;
    byte[] rgbValues = new byte[bytes];
    byte[] rgbValuesDest = new byte[bytes];
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);
    System.Runtime.InteropServices.Marshal.Copy(ptrDest, rgbValuesDest, 0, bytes);

    Array.Copy(rgbValues, rgbValuesDest, rgbValues.Length);

    for (int y = 0; y < srcBmp.Height; y++)
    {
        for (int x = 0; x < srcBmp.Width; x++)
        {
            int idx = (x * bpp + y * bd.Stride);

            ColorF cf = new ColorF(Color.FromArgb(rgbValues[idx + 3], rgbValues[idx + 2], rgbValues[idx + 1], rgbValues[idx]));

            // scale lightness from [minL, 1] -> [0, 1]
            float scaleUpCoeff = 1.0f / deltaL;
            float ld = (cf.L() - minL) * scaleUpCoeff;

            if (ld > 1.0f)
                ld = 1.0f;

            if (ld < 0.0f)
                ld = 0.0f;

            float alpha = 1.0f - ld;

            rgbValuesDest[idx + 3] = (byte)(alpha * 255.0f); // this is the alpha
        }
    }

    System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);
    System.Runtime.InteropServices.Marshal.Copy(rgbValuesDest, 0, ptrDest, bytes);

    srcBmp.UnlockBits(bd);
    newBitmap.UnlockBits(bdDest);
}

Note, this is not a generic algorithm by any means; read my original post to understand what’s going on and the limitations of this method.

poly2path

I’m working on a little SVG project using Raphaël. Unfortunately, Illustrator exports polygon elements in its SVG output, which Raphaël doesn’t support (only paths are supported). So I wrote an app to convert an SVG polygon points string to an SVG path string.

Download Here

the conversion…

poly2path conversion

Note that you only input the points data from the polygon (from the points attribute), not the entire polygon element. The result is the path string for the d attribute of the path element.

The conversion is very simple and based upon the fact that a polygon is a path starting with an absolute moveto, linetos to each of the points, and a closepath (this bug report [yes, a bug report!] was pretty helpful).
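The core of the conversion fits in a few lines; here’s a sketch of the idea (illustrative only, not the code from the downloadable app, and it assumes whitespace-separated x,y pairs in the points attribute):

// convert an SVG polygon points string to an SVG path d string
// e.g. "10,10 50,10 50,50" -> "M10,10L50,10L50,50Z"
function polyPointsToPathD(points)
{
    var pts = points.replace(/^\s+|\s+$/g, '').split(/\s+/);
    var d = 'M' + pts[0];
    for (var i = 1; i < pts.length; i++) {
        d += 'L' + pts[i];
    }
    return d + 'Z';
}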

I actually wanted to render the output, but I was disappointed to discover that Adobe Air doesn’t currently support SVG.

Apparently, the main reason for not including it was runtime size concerns (adding it would have increased the runtime size by 15 to 20 percent). Initially, the main pain points regarding AIR were the size of the runtime, integration with the operating system and native APIs, support for the <canvas> tag and new CSS properties, and JavaScript performance. These priorities, coupled with a trend toward reduced interest in SVG graphics, led to SVG support not being included in the current version of Adobe AIR.

Gaussian blur on an HTML5 canvas

I’m actually going to present more than just the gaussian blur implementation, also showing how to set up a simple animation controller and lightweight pixel shader system, allowing the final color of a pixel to be defined on a per-pixel basis and allowing other effects to easily be plugged into the system. Be warned, this stuff is slow. This is all CPU processing (atop JavaScript, no less); there’s, sadly, no GPU hardware acceleration here. If you’re thinking about doing this on high-resolution images or writing effects that require a ton of passes over the image, you’re going to bring the browser to a crawl, even on a fairly high-end system.

gaussian blur on HTML5 canvas element

I can’t embed JavaScript within this post, so you’ll have to go here to view the result (obviously, you’ll need an HTML 5 capable browser). For those who are curious, the very cool test image used is of a bird of paradise flower by the Agricultural Research Service.

So, first things first, the HTML, which is very simple. There are 2 canvas elements: one will hold the source image and the other will be the destination for the post-processed image. The width and height attributes on the canvas elements are set to the width and height of the image.

<!DOCTYPE html>
<html>
<head>
<title>HTML5 Blur FX</title>

<script type="text/javascript">
// JS code will go here!
</script>

</head>
<body>
<canvas id="cvs-source" width="72" height="50">
</canvas>

<canvas id="cvs-dest" width="72" height="50">
</canvas>
</body>
</html>

Next, load the test image onto the source canvas (cvs-source) and set up the destination canvas (cvs-dest) as a blank image, which will occur when the page is loaded. Ignore the reference to the FXController object for now.

window.onload = function ()
{
    var img = new Image();
    img.onload = function ()
    {
        // setup source
        var ctxSource = document.getElementById('cvs-source').getContext('2d');
        ctxSource.drawImage(img, 0, 0);

        // setup destination
        var cvsElement = document.getElementById('cvs-dest');
        var ctxDest = cvsElement.getContext('2d');

        var width = parseInt(cvsElement.getAttribute("width"));
        var height = parseInt(cvsElement.getAttribute("height"));

        ctxDest.createImageData(width, height);

        var theShader = Shader.gaussBlur;
        var fxCtrlr = new FXController(ctxSource, ctxDest, theShader, width, height, 10, 10);
        fxCtrlr.init();
    }

    img.src = 'test3.png';
}

Stepping away from the actual code for a minute, it’s important to note how to actually modify the pixels on a canvas element:

  • Get the 2d context of the element by calling getContext('2d') on the DOM element.
  • Call CanvasRenderingContext2D.getImageData(…) to get a buffer with the pixels in RGBA format.
  • To commit changes to the pixels onto a canvas, call CanvasRenderingContext2D.putImageData(…) with the buffer of modified pixels.

// get the 2d context and read back the pixels (RGBA)
var ctxSource = document.getElementById('cvs-source').getContext('2d');
var imageData = ctxSource.getImageData(0, 0, width, height);

// ... modify pixels into bufWrite ...

// commit the modified pixels to the destination canvas
ctxDest.putImageData(bufWrite, 0, 0);

Back to the actual code. One of the very simple and primitive operations needed is to set a pixel to a color:

// GraphicsCore object
var GraphicsCore = {};
GraphicsCore.setPixel = function (imageData, index, r, g, b, a)
{
    imageData.data[index + 0] = r;
    imageData.data[index + 1] = g;
    imageData.data[index + 2] = b;
    imageData.data[index + 3] = a;
}

I didn’t implement a corresponding getPixel() function because, as you’ll see, it’s very clean and easy to get a pixel directly from the buffer, and it wasn’t worth invoking a function call.
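For reference, reading a pixel straight out of the buffer looks like this (using the same index computation that Shader.run(…) uses below):

// 4 bytes per pixel (RGBA), so the index is (x + y * width) * 4, done here with a bitshift
var index = (x + y * imageData.width) << 2;
var r = imageData.data[index + 0];
var g = imageData.data[index + 1];
var b = imageData.data[index + 2];
var a = imageData.data[index + 3];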

The FXController object is (for the most part) the animation controller.

// FXController object
function FXController(_ctxSource, _ctxDest, _theShader, _width, _height, _fps, _maxFrames)
{
    this.ctxSource = _ctxSource;
    this.ctxDest = _ctxDest;
    this.theShader = _theShader;
    this.width = _width;
    this.height = _height;
    this.fps = _fps;
    this.curFrame = 1; // [1, ...]
    this.maxFrames = _maxFrames;
    this.numPassesPerFrame = _theShader.numPassesRequired;
    this.intervalPtr = null;

    this.shaderFunc = function (fxCtrlr, passNum, frameNum, maxFrames)
    {
        Shader.run(fxCtrlr.ctxSource, fxCtrlr.ctxDest, fxCtrlr.width, fxCtrlr.height, fxCtrlr.theShader, passNum, frameNum, maxFrames);
    }

    this.init = function ()
    {
        var fxCtrlr = this;
        var runFunc = function () { fxCtrlr.run(fxCtrlr); }

        this.intervalPtr = setInterval(runFunc, 1000.0 / this.fps);
    }

    this.unInit = function ()
    {
        clearInterval(this.intervalPtr);
        this.intervalPtr = null;
    }

    this.run = function (sender /*FXController*/)
    {
        for (var pn = 1; pn <= sender.numPassesPerFrame; pn++) {
            sender.shaderFunc(sender, pn, sender.curFrame, sender.maxFrames);
        }

        sender.curFrame++;
        if (sender.curFrame > sender.maxFrames) {
            sender.unInit();
        }
    }
}

Most of what’s going on here is simply holding values which are passed to Shader.run(…). However, a few important things are being set up:

  • FXController.run(…) will be called at a certain number of frames per second (this.fps), until this.maxFrames is hit.
  • For each frame, Shader.run(…) will be called for each pass necessary (this.numPassesPerFrame). Certain effects will require more passes than others; for example, the gaussian blur implementation requires 2 passes.

The Shader object, the core of which is within Shader.run(…),

// Shader object
// Note: Shader.<shader_name>.numPassesRequired must be defined
var Shader = {};
Shader.run = function (ctxSource, ctxDest, width, height, shaderFunc, passNum, frameNum, maxFrames)
{
    //
    // netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead"); // REMOVE ME BEFORE DEPLOYMENT
    //

    var bufWrite = ctxDest.getImageData(0, 0, width, height);

    var imageData = null;
    if (passNum == 1 && frameNum == 1) {
        imageData = ctxSource.getImageData(0, 0, width, height);
    }
    else {
        imageData = ctxDest.getImageData(0, 0, width, height);
    }

    for (var y = 0; y < height; y++) {
        for (var x = 0; x < width; x++) {
            var index = (x + y * imageData.width) << 2;
            shaderFunc(imageData, bufWrite, index, x, y, imageData.data[index + 0], imageData.data[index + 1], imageData.data[index + 2], imageData.data[index + 3], passNum, frameNum, maxFrames);
        }
    }

    ctxDest.putImageData(bufWrite, 0, 0);
}

(The UniversalBrowserRead privilege is necessary to run this locally in Firefox)

Shader.run(…) will set up the source and destination buffers, iterate over every pixel, compute the pixel index, get the pixel color, call shaderFunc(…) with all the necessary params, and finally commit any changes to the destination buffer (bufWrite). Note, the source canvas is only used for the first frame, first pass; in all other cases, whatever is rendered on the destination canvas is used. This allows for certain effects (such as the gaussian blur) in which the effect can be progressively applied again and again, in a feedback loop, producing updated iterations of the effect (in the case of a gaussian blur, the image is blurred more and more).

The code presented so far lays the groundwork for writing a shader, plugging it into the system, and watching the result. Before getting to the more complex gaussian blur filter, here’s a much simpler one: a single-pass, per-pixel image fade-in.

// fade in shader
Shader.fadeInShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var dt = frameNum / maxFrames; // [0, 1]
    GraphicsCore.setPixel(bufWrite, index, r, g, b, dt * 255);
}
Shader.fadeInShader.numPassesRequired = 1;

(Shader.<shader_name>.numPassesRequired is required for every shader, as FXController will query the value to determine how many times to call Shader.run(…) per frame.)

The shader function allows us to define what the final color of a pixel will be, given a set of input parameters, on a pixel-by-pixel basis. This is the core of what a pixel shader system is, and it allows for an amazing degree of flexibility.

Finally, the gaussian blur shader. I won’t go into too many details here. If you’re interested in how a gaussian blur is actually done, this article on gamedev.net is probably the best out there (esp. for transitioning from theory to practice), and the code here is almost a direct translation of what’s up there. Also note that bitshifts are used to do the power-of-2 divisions and multiplications.
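(A quick illustration of the bitshift trick: for non-negative integers, a right shift by n is a floor division by 2^n.)

// dividing by the kernel sum (64 = 2^6) with a bitshift:
var sum = 5120;
var avg = sum >> 6; // 80, same as Math.floor(5120 / 64)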

// gaussian blur filter
Shader.gaussFact = [1, 6, 15, 20, 15, 6, 1];
Shader.gaussSum = 64; // not used; a >> 6 bitshift is used in Shader.gaussBlur()
Shader.gaussWidth = 7;

Shader.gaussBlur = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    if (passNum == 1 && (x <= 0 || x >= imageData.width - 1)) {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, a);
        return;
    }

    if (passNum == 2 && (y <= 0 || y >= imageData.height - 1)) {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, a);
        return;
    }

    var readBuf = imageData;
    var writeBuf = bufWrite;

    var sumR = 0;
    var sumG = 0;
    var sumB = 0;
    var sumA = 0;

    for (var k = 0; k < Shader.gaussWidth; k++) {

        var nx = x;
        var ny = y;

        if (passNum == 1) { nx = (x - ((Shader.gaussWidth - 1) >> 1) + k); }
        else if (passNum == 2) { ny = (y - ((Shader.gaussWidth - 1) >> 1) + k); }

        // wrap around if we're trying to read pixels beyond the edge
        if (nx < 0) { nx = readBuf.width + nx; }
        if (ny < 0) { ny = readBuf.height + ny; }
        if (nx >= readBuf.width) { nx = nx - readBuf.width; }
        if (ny >= readBuf.height) { ny = ny - readBuf.height; }

        var pxi = (nx + ny * readBuf.width) << 2;
        var pxR = readBuf.data[pxi];
        var pxG = readBuf.data[pxi + 1];
        var pxB = readBuf.data[pxi + 2];
        var pxA = readBuf.data[pxi + 3];

        // little hack to make alpha=0 pixels look a bit better
        // Note: the proper way to handle the alpha channel is to premultiply, blur, then "unpremultiply"
        if (pxA == 0) {
            pxR = 255;
            pxG = 255;
            pxB = 255;
            pxA = 255;
        }

        sumR += pxR * Shader.gaussFact[k];
        sumG += pxG * Shader.gaussFact[k];
        sumB += pxB * Shader.gaussFact[k];
        sumA += pxA * Shader.gaussFact[k];
    }

    GraphicsCore.setPixel(writeBuf, index, sumR >> 6, sumG >> 6, sumB >> 6, sumA >> 6);
}
Shader.gaussBlur.numPassesRequired = 2;

The blur is done with a separable convolution filter over 2 passes, using a 7-tap binomial kernel (1, 6, 15, 20, 15, 6, 1), whose weights sum to 64 = 2^6 (hence the >> 6). In the first pass, neighboring pixels are sampled and blurred along the x-axis. In the second pass, the same is done along the y-axis.

A few simple conditionals allow for wrapping around and sampling from the other side of the bitmap, if there’s an attempt to sample beyond the edges.

Note the little hack for transparent/translucent pixels; this is not the proper way to do this (and simply makes the error more grey-ish instead of black-ish), but I didn’t want to deal with premultiplying the alpha, so I’ve left it out.

The demo + all code is up @ http://aautar.digital-radiation.com/HTML5-BlurFX/

Extracting transparency

From time to time, I’ve run across the problem of trying to get rid of the white background of an image with a design that’s mostly the same shade throughout. The hard part is not getting rid of the white background, but getting all the “transition pixels” (i.e. those that allow the design’s edges to gradually fade into the background) to have a somewhat accurate alpha (i.e. transparency) value, so that the design can then be taken and blended nicely atop an arbitrary background without the very common halo effect. This is shown below, trying to remove a white background with Photoshop’s magic wand.

halo problem with magic wand

There are ways to mitigate the issue shown above in Photoshop (see post), but none that are truly simple to the point where they can be done with a click of the mouse. This isn’t really a problem with Photoshop; after all, PS is made to be a general solution and what I’m presenting here is a very specific case.

Anyways, I finally stumbled across the idea of using a user-defined background color and foreground color, and using some bit of magic to interpolate between the values to generate a valid alpha channel. My first instinct was to compute the saturation value of a pixel and use it to find an alpha value for the pixel. However, after a bit of investigation, I realized this wasn’t the correct approach. From Wikipedia’s article on the HSL and HSV color spaces,

There are two factors that determine the “strength” of the saturation. The first is the distances between the different RGB values. The closer they are together, the more the RGB values neutralize each other, and the less the hue is emphasized, thus lowering the saturation. (R = G = B means no saturation at all.) The second factor is the distance the RGB values are from the midpoint. This is because the closer they are to 0, the darker they are until they are totally black (and thus no saturation); and, the closer they get to MAX value, the brighter they are until they are totally white (and once again no saturation).

Note that grayscale pixels (R = G = B) are considered totally unsaturated, and this could easily lead to problematic cases – definitely in cases of a black design on a white background.

Moving on, I realized that lightness was a better indicator, and it was surprisingly easy to calculate: l = 0.5(max + min). I decided to use a fixed background color of white, just to make things easier, so based upon the lightness of the background (1.0) and foreground color (supplied by the user), I computed the minimum (minL = lightness of the darker, foreground color) and computed the lightness value of each pixel in the image. In general, lightness values should increase as you go from the foreground color to the background color, and they should be in the range [minL, 1]. I then did a simple linear scale to the [0,1] range, and did a few checks for pixels that were outside the [0,1] range (caused by rogue pixels that were darker than the foreground color). I then computed the alpha value from the scaled lightness, alpha = 1.0 – ld, and that was it. You can see the result below.
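In code, the per-pixel computation amounts to something like this (a JavaScript sketch for illustration; the actual C# implementation is in the TransparencyExtract post above):

// lightness of an RGB color with channels in [0, 1]
function lightness(r, g, b)
{
    return 0.5 * (Math.max(r, g, b) + Math.min(r, g, b));
}

// alpha from a pixel's lightness l, given the foreground lightness minL
// (the background is fixed at white, lightness 1.0)
function alphaFromLightness(l, minL)
{
    var ld = (l - minL) / (1.0 - minL);    // linear scale [minL, 1] -> [0, 1]
    ld = Math.min(1.0, Math.max(0.0, ld)); // clamp rogue pixels outside the range
    return 1.0 - ld;                       // darker (closer to foreground) => more opaque
}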

transparency extract

transparency extract 2

It’s not perfect. If you zoom in, you will see a white-ish halo, but it’s certainly good enough for many cases. The algorithm could also be refined by replacing the simple linear scaling from [minL, 1] to [0,1] with a more “aggressive” function, skewing the lightness values toward 1.0, which could minimize or possibly eliminate the halo effect.
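For instance (a hypothetical tweak, untested), a power function with an exponent below 1 skews the scaled lightness toward 1.0:

// ld is the linearly scaled lightness from above, in [0, 1];
// gamma > 1 pushes intermediate values toward 1.0, so near-background
// pixels go fully transparent sooner, shrinking the halo
var gamma = 2.0;
var alpha = 1.0 - Math.pow(ld, 1.0 / gamma); // e.g. ld = 0.25 -> alpha = 0.5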

Will post code and application soon. Hopefully, I can also spare some time to work on improving this.

WARP

I recently read about the Windows Advanced Rasterization Platform (WARP), which is a software rasterizer that will ship as part of Windows 7. WARP is targeted at:

Casual Games: Games have simple rendering requirements but also want the ability to use impressive visual effects that can be hardware accelerated. The majority of the best selling game titles for Windows are either simulations or casual games, neither of which requires high performance graphics, but both styles of games greatly benefit from modern shader based graphics and the ability to scale on hardware if present.

Existing Non-Gaming Applications: There is a large gamut of graphical applications that want to minimize the number of code paths in their rendering layer. WARP10 enables these applications to implement a single Direct3D 10, 10.1, or 11 code-path that can target a very large number of machine configurations.

Advanced Rendering Games: Game developers that want to isolate graphics card or driver specific rendering errors. We believe that all games, even extremely graphically demanding games would benefit from being able to render their content using WARP to validate that any visual artifacts they might experience are due to rendering errors or problems with hardware or drivers.

Using WARP as a tool for isolating rendering errors is understandable, but as a fallback for DirectX 10 casual games or non-gaming applications attempting to run on a PC w/o a DX10 GPU, a few things pop into my mind.

  • As a fallback mechanism, it goes back too far. We’re talking about going from DX10 -> software rasterization. There’s still lots of graphics hardware out there that targeted previous versions of DirectX, at the very least DX7, DX8, and DX9. Why not allow for seamless fallback to these earlier classes of graphics hardware, instead of making a gigantic leap backwards to software rasterization? From a developer’s perspective, there would be a real benefit here in writing a DX10 codepath and having it run on older hardware.
  • DX10 adoption is slow to non-existent due to the slow adoption rate of Windows Vista. Unless Microsoft is able to generate massive demand for Windows 7, WARP will have little impact due to the little impact of DX10.
  • A project like WARP seems to be based around the mentality that a GPU is something special for a PC instead of a requirement. Versus software rasterization, GPU rasterization is orders of magnitude faster and the price of a decent card is under $50. Why is setting a GPU requirement such an endeavor, for Microsoft of all companies?!
  • On performance, WARP beats Intel integrated graphics. This really isn’t a surprise or any sort of accomplishment. Intel is really just selling overpriced garbage here.
  • Perhaps Microsoft working on a project like WARP instead of setting stricter graphics hardware requirements for Windows 7 is due to another shady deal with Intel. Remember the one with Vista.

Unexpected results

Every once in a while I’ll test some piece of code and encounter a bug or some unexpected behavior that produces something weird, peculiar, or just something pretty damn cool. Here’s a perfect example,

weird output

This is from some vectorization code I’m working on. Just for the hell of it, I decided to run the output image (the one with the green pixels, which represents vertices of a polygon) through the vectorization algorithm again. The subsequent images show what happened as I kept running the vectorization algorithm on the output, in effect creating a feedback loop. (The colors that are present in the subsequent images are a result of an earlier stage in the vectorization process, the output of which is no longer adequately processed, resulting in the pattern that’s visible).

Automatic mipmap generation on Radeon 9500

I stumbled upon an annoying little graphics bug recently where I was getting corrupted textures on a Radeon 9500 graphics card. I eventually came across this thread which hinted at the problem. Apparently, automatic mipmap generation is messed up on the Radeon 9500 and screws up your textures (I got rainbow colors, weird blocks, etc.). I’m pretty sure the hardware supports it, so perhaps it’s a driver issue, but in any case it didn’t work.

What’s even more annoying about this is that everything worked perfectly on a much older Radeon 7500 card.

Zerospace lighting model

Work on zerospace has finally picked up in the past few weeks, and a few days ago I posted the first screenshot on the zerospace blog. Not much to see yet, just a background, starfield, and untextured model. However, I did put in some major work on the lighting system, which is visible in the screenshot (to a certain extent; the specular highlights are dull and, although they’re there, they’re only apparent when the model rotates).

Anyway, what I wanted to discuss in this post is the lighting model, since I think it’s fairly unique and it gives some amazing results.

Ambient
The ambient component is done per-vertex (i.e. all computations are done in the vertex shader and the color is interpolated over the face of the triangle) and consists of 7 directional lights hitting the model from various directions. It’s sort of an ultra-simplified version of ambient occlusion mapping.

Diffuse
The diffuse component is sampled from a 2d texture. This texture is a scaled-down and heavily blurred version of the background texture (done offline; a heavy blur, such as the one required, would be too expensive in the pixel shader). Scaling it down is simply a matter of performance, as the background texture is large (2048×2048). Blurring (I do a gaussian blur) is a trick to get the diffuse lighting from a scene (this is a bit difficult to explain; I’ll try to do so in another post).

So how is the texture mapped onto the 3d model? Spherical texture mapping! (see this article for an explanation). Note that the normal vector used to compute the texture coordinates is the normal vector transformed by the world matrix (since I don’t want to texture map the diffuse lighting onto the 3d model in model space; this would cause the lighting to be “static” – i.e. when the model rotates, the lighting values wouldn’t change to reflect the change in orientation).
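For reference, one common spherical-mapping formulation computes the texture coordinates from the world-space normal like so (this is an assumption on my part, shown as JavaScript for illustration; the post doesn’t give the exact math):

// sphere-map UVs from a unit normal (nx, ny, nz);
// the (nz + 1) term folds the front-facing hemisphere into the [0, 1] UV range
function sphereMapUV(nx, ny, nz)
{
    var m = 2.0 * Math.sqrt(nx * nx + ny * ny + (nz + 1.0) * (nz + 1.0));
    return { u: nx / m + 0.5, v: ny / m + 0.5 };
}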

Specular
The specular component is done per-pixel because per-vertex specular highlights usually look terrible (there are some other issues in general, but this was the main one for zerospace). For the specular lighting there is really only 1 light, a single directional light pointing down the z-axis. However, there are 8 view vectors, and the specular computation is done between the single light and each of the 8 view vectors. I came up with this hack as I found that experimenting with multiple lights was more difficult than experimenting with multiple view vectors (different light vectors would cause either a too extreme or too weak specular highlight). Anyway, I do the specular computation with an exponent of 7, and I then multiply the results by 0.075 to dull the highlights.

One final note: the screenshot is no longer 100% representative of the lighting system. I just found a major (and stupid!) mistake where I was multiplying the ambient and diffuse components together instead of adding them. However, what I described above should stay the same; I just have to tweak some values to prevent the lighting from being too bright or too dark.

Also, on a more final note, this is not the complete lighting model for zerospace. Lighting from weapons, particles, etc. will also be taken into account.