Posts Tagged ‘webgl’

A look at 2D vs WebGL canvas performance

I did some quick benchmarking with canvas-image-transformer, looking at the performance of directly manipulating pixels on a 2D canvas versus using a fragment shader on a WebGL canvas. For testing, I used a grayscale transformation, as it can be done with a simple weighted sum (R*0.2126 + G*0.7152 + B*0.0722) and there’s a high degree of parity between the fragment shader code and the code for pixel operations on a 2D canvas.

Converting to grayscale

Pixel operations on the 2D canvas are as follows:

for(var i=0; i<pixels.data.length; i+=4) {
    var grayPixel = parseInt(((0.2126*(pixels.data[i]/255.0)) + (0.7152*(pixels.data[i+1]/255.0)) + (0.0722*(pixels.data[i+2]/255.0))) * 255.0);
    pixels.data[i] = grayPixel;
    pixels.data[i + 1] = grayPixel;
    pixels.data[i + 2] = grayPixel;
}
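
For context, pixels above is an ImageData object obtained from the 2D canvas’s context, roughly along these lines (a sketch, not the exact canvas-image-transformer code; srcCanvas here stands for the 2D canvas holding the input image):

var ctx = srcCanvas.getContext('2d');
var pixels = ctx.getImageData(0, 0, srcCanvas.width, srcCanvas.height);

// ... run the grayscale loop above over pixels.data ...

ctx.putImageData(pixels, 0, 0);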

The corresponding fragment shader for the WebGL canvas is as follows:

precision mediump float;
uniform sampler2D uSampler;
varying vec2 vTextureCoord;

void main(void) {
    vec4 src = texture2D( uSampler, ( vTextureCoord ) );
    float grayPx = src.r*0.2126 + src.g*0.7152 + src.b*0.0722;
    gl_FragColor = vec4(grayPx, grayPx, grayPx, 1);
}

Performance comparisons in Chrome

Here’s the setup for comparing the performance of the 2 methods (a rough sketch of the measurement loop follows the list):

  • Input was a 3864×3864 image of the Crab Nebula, rendered onto a 2D canvas (note that time to render onto the 2D canvas is not considered in the data points below)
  • Output is the 2D canvas that the input image was rendered onto
  • CPU was an AMD Ryzen 7 5700X
  • GPU was an RTX 2060
  • OS is Windows 10 Build 19044
  • Browser is Chrome 108.0.5359.125
  • Hard refresh on page load to bypass any browser-level caching
  • Transformation via WebGL approach for 25 iterations
  • Transformation via 2D canvas approach for 25 iterations
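
A rough sketch of the per-iteration measurement loop (transform() and timings are hypothetical placeholders for whichever approach is being timed, not canvas-image-transformer’s actual API):

var timings = [];
for (var iter = 0; iter < 25; iter++) {
    var start = performance.now();
    transform(srcCanvas);   // hypothetical stand-in for the 2D canvas or WebGL transformation
    timings.push(performance.now() - start);
}
console.table(timings);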

Visually, this is what’s being done:

canvas-image-transformer grayscale conversion

I tried to eliminate as much background noise as possible from the results; that is, eliminating anything that may have an impact on CPU or GPU usage: closing other applications with significant usage, not having any other tabs open in the browser, and not having DevTools open while image processing was being done. That said, I was not rigorous about this and the numbers presented are meant to show overall/high-level behavior and performance; they’re not necessarily representative of what peak performance would be on the machine or browser.

It’s also worth noting that canvas-image-transformer doesn’t attempt to do any sort of caching between iterations (i.e. textures are re-created, shaders are re-compiled, etc. on each iteration), so we shouldn’t expect large variances in performance from one iteration to the next.

Graphing the data points for each approach, for each iteration, I got the following (note that what’s presented is just the data for 1 test run; I did test multiple times and consistently saw the same behavior but, for simplicity, I just graphed the values from 1 test run):

canvas-image-transformer performance data

So, the data points for the first iteration are interesting.

  • On the 2d canvas, the transformation initially takes 371.8ms
  • On the WebGL canvas (webgl2 context), the transformation initially takes 506.5ms

That’s a massive gap in performance between the 2 methods, with the 2d canvas method being significantly faster. I would have expected the WebGL approach to be faster here as, generally, graphics-related things would be faster with a lower-level GPU interface, but that’s clearly not the case here.

For subsequent iterations, we can see that performance improves and normalizes for both approaches, with significantly better performance using the WebGL approach; however, why don’t we see this sort of performance during the first iteration? Profiling the code, I noticed I was consistently seeing the majority of execution time spent on texImage2D() during the first iteration:

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, gl.RGBA, gl.UNSIGNED_BYTE, srcCanvas);

Looking at the execution time of texImage2D() across iterations, we get the following:

canvas-image-transformer texImage2D execution time

We see 486.9ms spent in texImage2D() during the first iteration but then execution time drops to only ~0.1ms in subsequent iterations. It’s clear that loading data into a texture is the most costly operation on the first iteration, however it looks like there’s some sort of caching mechanism, likely in Chrome’s GPU component, that essentially eliminates this cost on subsequent iterations.

I briefly looked into ways to optimize the texImage2D() call in the first iteration but didn’t find much. There’s no mipmap creation or any sort of format conversion being done here, so we’re just bound by how quickly we can get the pixels into VRAM.

Normal refresh (after previous page load)

There’s a bit more nuance here that’s worth touching on. Looking at just the first iteration in Chrome, after normal/soft refreshes, we see some interesting behavior:

canvas-image-transformer performance data, first iteration only
  • For the 2d canvas, the first iteration transformation times look the same as when doing a hard refresh
  • For the WebGL canvas, the first iteration after a soft refresh shows the transformation times we previously saw only after the first iteration!

It looks like Chrome’s texture caching mechanism is in play and preserves cache entries across soft page refreshes.

What about Firefox and other browsers?

I would expect most Chromium-based browsers to have behavior similar to what’s in Chrome, and some quick testing in Edge confirms this.

Firefox is a different beast. Testing in Firefox 108.0.2, we see the following transformation times:

canvas-image-transformer performance data

Performance, overall, is much more consistent than in Chrome, but not always better.

  • For the 2d canvas method, performance is simply worse; on the first iteration we see transformations take 150+ milliseconds more than in Chrome, and on subsequent iterations the performance gap is even wider.
  • For the WebGL method, our first iteration performance is significantly better than Chrome, reduced by more than 175 milliseconds. However, on subsequent iterations we don’t see the drastic performance improvement we see in Chrome.

For the 2d canvas method, it’s hard to say why it performs so differently from Chrome. However, for the WebGL method, a bit of profiling led to some interesting insights. In Firefox, the execution time of texImage2D() is consistent across iterations, hovering around 40ms; this means it performs significantly better than Chrome’s worst case (the first iteration) and significantly worse than Chrome’s best case (non-first iterations, where execution time is below 0.1ms), as shown below.

canvas-image-transformer performance data

The other significant aspect to Firefox’s performance is in the performance of the Canvas drawImage() call, in drawing from a WebGL canvas to a 2D canvas. At the tail end of the transformation process, canvas-image-transformer does the following:

const srcCtx = srcCanvas.getContext('2d');
srcCtx.drawImage(glCanvas, 0, 0, srcCanvas.width, srcCanvas.height);

Basically, it’s taking what’s on the WebGL canvas and writing it out to the input/source canvas, which is a 2D canvas. In Chrome this is a very fast operation, typically less than 2ms; in Firefox I see it typically going above 200ms.

canvas-image-transformer performance data

Firefox consistency

Finally, looking at transformation times across soft refreshes, we see Firefox performance is very consistent for both the 2D canvas and WebGL method:

canvas-image-transformer performance data

However, I did encounter a case where WebGL performance was more erratic. This was testing when I had a lot of tabs open and I suspect there was some contention for GPU resources.

Takeaways

There are perhaps a number of small insights here depending on use-case and audience, but there are 2 significant high-level takeaways for me:

  • GPUs are very fast at parallel processing but loading data to be processed and retrieving the processed data can be expensive operations
  • It’s worthwhile to measure things; I was fairly surprised by the different performance profiles between Firefox and Chrome

Post-process shaders in glfx

Pushed an update to glfx to allow for post-process shading. When a post-process shader is defined, the scene is rendered to a screen-space quad (the size of the viewport), and that quad is then rendered to the viewport with the post-process shader applied.
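
For reference, the general render-to-texture flow looks roughly like the following in raw WebGL (a simplified sketch, not glfx’s actual internals; gl, viewportWidth, viewportHeight, and postProcessShaderProgram are assumed to already exist):

// Create a texture and depth renderbuffer sized to the viewport, attached to a framebuffer
var fbTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fbTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, viewportWidth, viewportHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

var depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, viewportWidth, viewportHeight);

var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fbTexture, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

// Each frame: render the scene into the framebuffer...
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// ... draw the scene as usual ...

// ...then render a viewport-sized quad to the default framebuffer, sampling the texture
// the scene was just rendered into, with the post-process shader applied
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.useProgram(postProcessShaderProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, fbTexture);
// ... set uniforms, bind the quad's vertex/texcoord buffers, and issue the draw call ...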

The shader is loaded (asynchronously) like any other:

glfx.shaders.load('screenspace.fs', "frag-shader-screenspace", glfx.gl.FRAGMENT_SHADER);

Once loaded, we create the shader program, and get locations for whatever variables are used. The vertex shader isn’t anything special, it just transforms a vertex by the model-view and projection matrices, and passes along the texture coordinates.

glfx.whenAssetsLoaded(function() {

    var postProcessShaderProgram = glfx.shaders.createProgram(
        [glfx.shaders.buffer['vert-shader-basic'], glfx.shaders.buffer['frag-shader-screenspace']],
        function(_shprog) {

            // Setup variables for shader program
            _shprog.vertexPositionAttribute = glfx.gl.getAttribLocation(_shprog, "aVertexPosition");
            _shprog.pMatrixUniform = glfx.gl.getUniformLocation(_shprog, "uPMatrix");
            _shprog.mvMatrixUniform = glfx.gl.getUniformLocation(_shprog, "uMVMatrix");
            _shprog.textureCoordAttribute = glfx.gl.getAttribLocation(_shprog, "aTextureCoord");

            _shprog.uPeriod = glfx.gl.getUniformLocation(_shprog, "uPeriod");
            _shprog.uSceneWidth = glfx.gl.getUniformLocation(_shprog, "uSceneWidth");
            _shprog.uSceneHeight = glfx.gl.getUniformLocation(_shprog, "uSceneHeight");

            glfx.gl.enableVertexAttribArray(_shprog.vertexPositionAttribute);
            glfx.gl.enableVertexAttribArray(_shprog.textureCoordAttribute);

        });

    ...

We then tell glfx to apply our post-process shader program:

glfx.scene.setPostProcessShaderProgram(postProcessShaderProgram);

This call will result in a different rendering path, which renders the scene to a texture, applies that texture to a screen-space quad, and renders the quad with the post-process shader.

Here is the shader for screenspace.fs, used in the demo shown above:

precision mediump float;

uniform float uPeriod;
uniform float uSceneWidth;
uniform float uSceneHeight;
uniform sampler2D uSampler;        
varying vec2 vTextureCoord;

void main(void) {

vec4 sum = vec4( 0. );
float blurSampleOffsetScale = 2.8;
float px = (1.0 / uSceneWidth) * blurSampleOffsetScale;
float py = (1.0 / uSceneHeight) * blurSampleOffsetScale;

vec4 src = texture2D( uSampler, ( vTextureCoord ) );

sum += texture2D( uSampler, ( vTextureCoord + vec2(-px, 0) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(-px, -py) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(0, -py) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(px, -py) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(px, 0) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(px, py) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(0, py) ) );
sum += texture2D( uSampler, ( vTextureCoord + vec2(-px, py) ) );
sum += src;

sum = sum / 9.0;

gl_FragColor = src + (sum * 2.5 * uPeriod);

}

Note that the shader requires a few uniforms to be supplied; we use the glfx.scene.onPostProcessPreDraw() callback to set up these variables (before the post-processed scene is drawn):

var timeAcc = 0;
glfx.scene.onPostProcessPreDraw = function(tdelta) {

    timeAcc += tdelta;
    var timeScaled = timeAcc * 0.00107;

    if(timeScaled > 2.0*Math.PI) {
        timeScaled = 0;
        timeAcc = 0;
    }

    var period = Math.cos(timeScaled);
    glfx.gl.uniform1f(postProcessShaderProgram.uPeriod, period + 1.0);

    glfx.gl.uniform1f(postProcessShaderProgram.uSceneWidth, glfx.gl.viewportWidth);
    glfx.gl.uniform1f(postProcessShaderProgram.uSceneHeight, glfx.gl.viewportHeight);
};

What we’re doing is using the scene rendering time deltas to generate a periodic/sinusoidal wave. This results in the pulsing brightness/fading effect of the scene. The brightness effect itself is done by adding the source pixel to a blurred + brightened version of itself. The blurring allows for the soft fade in and fade out.

GLSL variable qualifiers

I’ve been playing around with WebGL shader code recently and found this bit on variable prefixes helpful, particularly in the explanation of the variable qualifiers:

  • Attribute: per-vertex data provided by buffers
  • Uniform: inputs to the shaders that stay constant for all vertices/fragments in a draw call
  • Varying: values passed from a vertex shader to a fragment shader and interpolated (or varied) between the vertices for each pixel drawn
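
As a minimal illustration, here’s a hypothetical vertex shader (GLSL ES 1.00) using all three qualifiers:

attribute vec3 aVertexPosition;  // attribute: per-vertex data pulled from a buffer
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;          // uniform: set from JS, constant across the draw call
uniform mat4 uPMatrix;
varying vec2 vTextureCoord;      // varying: written per-vertex, interpolated per-fragment

void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;
}

... and the corresponding fragment shader, which reads the interpolated varying:

precision mediump float;
uniform sampler2D uSampler;
varying vec2 vTextureCoord;

void main(void) {
    gl_FragColor = texture2D(uSampler, vTextureCoord);
}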

Something important to keep in mind is that this relates to the OpenGL ES Shading Language, Version 1.00, which is (unfortunately) what’s currently supported by WebGL.

A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00 [GLES20GLSL], and which do not exceed the minimum functionality mandated in Sections 4 and 5 of Appendix A.

Attribute and varying were part of early, OpenGL-supported versions of GLSL, but are deprecated as of OpenGL 3.0 / GLSL 1.30.10 and replaced with more generic constructs:

  • in is for input from the previous pipeline stage, i.e. per-vertex (or per-fragment) values at most, per-primitive if using glVertexAttribDivisor and hardware instancing
  • out is for output to the next stage
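
For comparison, the vertex shader above written with the newer, more generic qualifiers (GLSL 1.30+ / GLSL ES 3.00 style, version directive omitted) would look like:

in vec3 aVertexPosition;   // was: attribute
in vec2 aTextureCoord;
uniform mat4 uMVMatrix;    // uniform is unchanged
uniform mat4 uPMatrix;
out vec2 vTextureCoord;    // was: varying; declared as "in vec2 vTextureCoord" on the fragment side

void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;
}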

WebGL on a high-DPI display

Dealing with WebGL on a high-DPI display isn’t too difficult, but it does require an understanding of device pixels vs CSS pixels. Elements on a page automatically upscale on a high-DPI display, as dimensions are typically defined with CSS, and therefore defined in units of CSS pixels. The <canvas> element is no exception. However, upscaling a DOM element doesn’t mean that the content within the element will be upscaled or rendered nicely – this is why non-vector content can appear blurry on higher resolutions. With WebGL content, not only will it appear blurry, but the viewport will likely be clipped as well, due to the viewport being incorrectly calculated using CSS pixel dimensions.

With WebGL everything is assumed to be in units of device pixels and there is no automatic conversion from CSS pixels to device pixels. To specify the device pixel dimensions of the <canvas>, we need to set the width and height attributes of the element:

CSS Pixels vs Device Pixels on canvas

I like to compute and set the attributes automatically using window.devicePixelRatio.
In glfx I do the following (passing in _canvasWidthCSSPx, _canvasHeightCSSPx):

// Get devicePixelRatio
glfx.devicePixelRatio = window.devicePixelRatio || 1;

// Set the width,height attributes of the canvas element (in device pixels)
var _canvasWidthDevicePx = _canvasWidthCSSPx * glfx.devicePixelRatio;
var _canvasHeightDevicePx = _canvasHeightCSSPx * glfx.devicePixelRatio;
_canvas.setAttribute("width", _canvasWidthDevicePx);
_canvas.setAttribute("height", _canvasHeightDevicePx);

// Set viewport width,height based on dimensions of canvas element
glfx.gl.viewportWidth = _canvasWidthDevicePx;
glfx.gl.viewportHeight = _canvasHeightDevicePx;
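
One related detail worth noting: the element’s CSS size should stay in CSS pixels, so only the drawing buffer (the width/height attributes) is enlarged; something along these lines:

// CSS size stays in CSS pixels; only the attributes (the drawing buffer) use device pixels
_canvas.style.width = _canvasWidthCSSPx + "px";
_canvas.style.height = _canvasHeightCSSPx + "px";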

Reference: HandlingHighDPI

Where we live

Using a number of technologies I’ve been playing around with recently, I began working on a 3D visualization of the Earth, plotting every city, creating a pointillism-styled representation of the planet. Below is the result along with an overview of how I produced the rendering.

Getting the data

I extracted all cities with a population of at least 100,000 people from the MySQL GeoNames database using the following query:

SELECT `id`,`name`,`latitude`,`longitude`,`population`,`timezone`
FROM geonames.cities
WHERE population >= 100000 AND feature_class = 'P';

… and put the results into a JS array.

Creating a 3D model to represent each city

I created this hexagonal model in Blender, exported it to a Wavefront OBJ file, and ran the OBJ file through the Wavefront OBJ to JSON converter I wrote. Note that the model is facing the z-axis to match WebGL’s (and OpenGL’s) default camera orientation: facing down the negative z-axis.

Convert longitude and latitude to a 3D position

Converting a geodetic longitude, latitude pair to a 3D position involves doing a LLA (Longitude Latitude Altitude) to ECEF (Earth-Centered, Earth-Fixed) transformation. The code below implements this transform, converting the longitude and latitude of every city pulled from the GeoNames database into a 3D coordinate where we can render the hexagonal representation of the city.

function llarToWorld(lat, lon, alt, rad)
{
    lat = lat * (Math.PI/180.0);
    lon = lon * (Math.PI/180.0);

    var f = 0; // flattening
    var ls = Math.atan( Math.pow((1.0 - f), 2) * Math.tan(lat) ); // lambda

    var x = rad * Math.cos(ls) * Math.cos(lon) + alt * Math.cos(lat) * Math.cos(lon);
    var y = rad * Math.cos(ls) * Math.sin(lon) + alt * Math.cos(lat) * Math.sin(lon);
    var z = rad * Math.sin(ls) + alt * Math.sin(lat);

    return [x, z, -y];
}

There are 2 items worth noting:

  • The transformation (and function above) involves a 4th parameter, rad, which is the radius of the ellipsoid (or sphere, in this case, as flattening = 0) into which the transformation is done. I have it set as a fixed constant, as I’m primarily concerned with an approximate visual representation, but the MathWorks page describes the actual computation.
  • The ECEF (Earth-Centered, Earth-Fixed) coordinate system has the z-axis pointing north, not the y-axis, so the z and y values need to be swapped to produce a coordinate corresponding to WebGL’s default camera orientation. In addition, as WebGL has a right-handed coordinate system (so the default camera orientation is one where it’s pointing down the negative z-axis), the z coordinate is negated so the point doesn’t wind up behind the camera.
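
Putting the pieces together, plotting the cities boils down to something like the following (cities being the JS array of query results mentioned earlier; EARTH_RADIUS and the zero altitude are illustrative placeholders, not the exact values/code from the repository):

var EARTH_RADIUS = 10.0; // arbitrary sphere radius for the visualization
cities.forEach(function (city) {
    var pos = llarToWorld(city.latitude, city.longitude, 0, EARTH_RADIUS);
    // ... place a hexagon model at pos and orient it toward the origin (see below) ...
});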

Orient all cities to face the origin

Getting each of the hexagonal models to face the origin involved a bit of math:

  • Calculating the axis about which the rotation should occur by, first, computing a vector from the origin to the 3D position of the model (lookAt), and taking the cross product between lookAt and the z-axis (as we’re rotating toward the z-axis).
  • Calculating the angle of rotation (the angle between the z-axis and lookAt) by computing the dot product between lookAt and the z-axis, then taking the acos of the dot product.

There’s some additional code to handle cases where points lie on the z-axis (where the cross product gives the zero vector) and also to return a matrix representation of the rotation.

function lookAtOrigin(v)
{
// compute vector from origin
var lookAt = vec3.create([v[0], v[1], -v[2]]);
vec3.normalize(lookAt);

// reference axis
var refAxis = vec3.create([0,0,-1]);

// compute axis of rotation
var rotAxis = vec3.create(lookAt);
vec3.cross(rotAxis, refAxis);

// compute angle of rotation
var rotAngRad = Math.acos(vec3.dot(lookAt, refAxis));

// special cases...
if(rotAxis[0] == 0 && rotAxis[1] == 0 && rotAxis[2] == 0) {
if(lookAt[2] > 0) {
rotAxis = vec3.create([1,0,0]);
rotAngRad = Math.PI;
}
else {
rotAxis = vec3.create([1,0,0]);
rotAngRad = 0;
}
}

// compute and return a matrix with the rotation
var ret = mat4.identity();
mat4.rotate(ret, rotAngRad, rotAxis);

return ret;
}

Render the scene

Using glfx, I pulled everything together, also adding a bit of code to rotate the camera and do some pseudo-lighting in the pixel shader by alpha blending colors based on depth. All the code can be found in the webgl-globe repository on bitbucket.

glfx – WebGL basis

The base code for my WebGL experiments has been pretty sloppy thus far. I recently took some time to clean up the code in order to have a more solid basis to work from, and I’m presenting it here as a primer for anyone looking for a simple bootstrap or a code-heavy intro to WebGL.

A walk-through of the base code (glfx) and sample code to generate the demo shown below follows. The code is also available via the glfx bitbucket repository.

Dependencies

For matrix and vector operations, the glMatrix library.

Also window.requestAnimationFrame needs to be defined. For older browsers the following shim can be used:

window.requestAnimationFrame = (function(time){
    return window.requestAnimationFrame ||
           window.webkitRequestAnimationFrame ||
           window.mozRequestAnimationFrame ||
           window.oRequestAnimationFrame ||
           window.msRequestAnimationFrame ||
           function( callback ){
               window.setTimeout(callback, 1000 / 60);
           };
})();

glfx

glfx is the crux of the rendering interface and encapsulates the WebGL context, functionality to load assets (shaders, textures, models), and functionality to setup and render the scene.

// glfx object wraps everything necessary for the rendering interface
var glfx = { };

// echo function to output debug statements to console
glfx.echo = function(txt) {
if(typeof console.log !== 'undefined') {
console.log(txt);
}
}

// WebGL context
glfx.gl = null;

// reference count for assets needed before rendering()
glfx.assetRef = 0;
// function to call when all assets are loaded, set by user via glfx.whenAssetsLoaded, reset internally
glfx.onAssetsLoaded = function() { };
// function to schedule callback when all assets are loaded, set by user
glfx.whenAssetsLoaded = function(_callback) {
if(typeof _callback !== 'undefined') {
if(glfx.assetRef === 0) {
_callback();
}
else {
glfx.onAssetsLoaded = _callback;
}
}
}
// function to increment asset ref count
glfx.incAssetRef = function() {
    glfx.assetRef++;
    if(glfx.assetRef === 0) {
        glfx.onAssetsLoaded();
        glfx.onAssetsLoaded = function() { }; // reset
    }
}
// function to decrement asset ref count
glfx.decAssetRef = function() {
    glfx.assetRef--;
}

// Shaders class
glfx.shaders = { };
// buffer to store loaded shaders
glfx.shaders.buffer = new Array();

// Function to load a shader from an external file
// _url = path to shader source
// _name = name under which to store the shader in glfx.shaders.buffer
// _type = gl.VERTEX_SHADER / gl.FRAGMENT_SHADER
// _callback = function to call after the shader is created; the shader object is passed if it compiled successfully, null otherwise
glfx.shaders.load = function(_url, _name, _type, _callback) {
    glfx.decAssetRef();

    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {

            var shaderSrc = xmlhttp.responseText;
            var shader = glfx.gl.createShader(_type);

            glfx.gl.shaderSource(shader, shaderSrc);
            glfx.gl.compileShader(shader);

            if (!glfx.gl.getShaderParameter(shader, glfx.gl.COMPILE_STATUS)) {
                shader = null;
            }

            if(typeof _callback !== 'undefined') {
                _callback(shader);
            }

            glfx.shaders.buffer[_name] = shader;

            glfx.incAssetRef();
        }
    }

    xmlhttp.open("GET", _url, true);
    xmlhttp.send();
}


// Textures class
glfx.textures = { };
// Textures array
glfx.textures.buffer = new Array();
// Method to load texture from file
glfx.textures.load = function(_path, _name) {

    glfx.decAssetRef();

    glfx.textures.buffer[_name] = glfx.gl.createTexture();

    var tex = glfx.textures.buffer[_name];
    tex.image = new Image();
    tex.image.onload = function() {

        var tex = glfx.textures.buffer[_name];
        glfx.gl.bindTexture(glfx.gl.TEXTURE_2D, tex);
        glfx.gl.pixelStorei(glfx.gl.UNPACK_FLIP_Y_WEBGL, true);
        glfx.gl.texImage2D(glfx.gl.TEXTURE_2D, 0, glfx.gl.RGBA, glfx.gl.RGBA, glfx.gl.UNSIGNED_BYTE, tex.image);

        glfx.gl.texParameteri(glfx.gl.TEXTURE_2D, glfx.gl.TEXTURE_MAG_FILTER, glfx.gl.LINEAR);
        glfx.gl.texParameteri(glfx.gl.TEXTURE_2D, glfx.gl.TEXTURE_MIN_FILTER, glfx.gl.LINEAR);

        // required for non-power-of-2 textures
        glfx.gl.texParameteri(glfx.gl.TEXTURE_2D, glfx.gl.TEXTURE_WRAP_S, glfx.gl.CLAMP_TO_EDGE);
        glfx.gl.texParameteri(glfx.gl.TEXTURE_2D, glfx.gl.TEXTURE_WRAP_T, glfx.gl.CLAMP_TO_EDGE);

        glfx.gl.bindTexture(glfx.gl.TEXTURE_2D, null);

        glfx.incAssetRef();

    }

    tex.image.src = _path;
}


// Model class
glfx.model = function() {

this.vertexBuffer = null;
this.indexBuffer = null;
this.texcoordBuffer = null;
this.normalBuffer = null;

}

// Models class
glfx.models = { };
// Models array
glfx.models.buffer = new Array();
// Method to load models from JSON file
glfx.models.load = function(_url, _name, _callback) {

    glfx.decAssetRef();

    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {

            var data = JSON.parse(xmlhttp.responseText);

            var mdl = new glfx.model();

            mdl.vertexBuffer = glfx.gl.createBuffer();
            glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, mdl.vertexBuffer);
            glfx.gl.bufferData(glfx.gl.ARRAY_BUFFER, new Float32Array(data.verts), glfx.gl.STATIC_DRAW);
            mdl.vertexBuffer.itemSize = 3;
            mdl.vertexBuffer.numItems = data.verts.length / 3;

            mdl.indexBuffer = glfx.gl.createBuffer();
            glfx.gl.bindBuffer(glfx.gl.ELEMENT_ARRAY_BUFFER, mdl.indexBuffer);
            glfx.gl.bufferData(glfx.gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(data.indices), glfx.gl.STATIC_DRAW);
            mdl.indexBuffer.itemSize = 1;
            mdl.indexBuffer.numItems = data.indices.length;

            if(data.texcoords.length > 0) {
                mdl.texcoordBuffer = glfx.gl.createBuffer();
                glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, mdl.texcoordBuffer);
                glfx.gl.bufferData(glfx.gl.ARRAY_BUFFER, new Float32Array(data.texcoords), glfx.gl.STATIC_DRAW);
                mdl.texcoordBuffer.itemSize = 2;
                mdl.texcoordBuffer.numItems = data.texcoords.length / 2;
            }

            if(data.normals.length > 0) {
                mdl.normalBuffer = glfx.gl.createBuffer();
                glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, mdl.normalBuffer);
                glfx.gl.bufferData(glfx.gl.ARRAY_BUFFER, new Float32Array(data.normals), glfx.gl.STATIC_DRAW);
                mdl.normalBuffer.itemSize = 3;
                mdl.normalBuffer.numItems = data.normals.length / 3;
            }

            glfx.models.buffer[_name] = mdl;

            glfx.incAssetRef();
        }
    }

    xmlhttp.open("GET", _url, true);
    xmlhttp.send();
}


// Scene class
glfx.scene = { };
// Scene last render time
glfx.scene.ptime = 0;
// Model-View matrix
glfx.scene.matModelView = null;
// Perspective matrix
glfx.scene.matPerspective = null;
// Scene graph
glfx.scene.graph = new Array();

// Class for scene (world) objects
// _base = object with vertex buffer, index buffer, texture coordinate buffer, etc.
glfx.scene.worldObject = function(_base, _shaderProgram) {
this.base = _base;            
this.shprog = _shaderProgram;
this.position = vec3.create();
this.rotation = vec3.create();
this.scale = vec3.create([1.0, 1.0, 1.0]);
this.update = function() { };
}

// method to add object to scene graph
glfx.scene.addWorldObject = function(_wo) {
glfx.scene.graph.push(_wo);
}

// set field of view
glfx.setFOV = function(_fov) {
mat4.perspective(_fov, glfx.gl.viewportWidth / glfx.gl.viewportHeight, 0.1, 100.0, glfx.scene.matPerspective);
}

// set clear color
glfx.setClearColor = function(_color) {
glfx.gl.clearColor(_color[0], _color[1], _color[2], _color[3]);
}

// Initialization function
// _canvas = DOM canvas element
// _onInitComplete (optional) = callback after init is complete
glfx.init = function(_canvas, _onInitComplete) {

    glfx.gl = _canvas.getContext("experimental-webgl", {antialias:true});
    if (!glfx.gl) {
        glfx.echo("No webGL support.");
        return false;
    }

    // Set viewport width,height based on dimensions of canvas element
    glfx.gl.viewportWidth = _canvas.width;
    glfx.gl.viewportHeight = _canvas.height;

    // Set clear color
    glfx.setClearColor([1,1,1,1]);

    // Enable depth buffer
    glfx.gl.enable(glfx.gl.DEPTH_TEST);

    // Setup scene matrices
    glfx.scene.matPerspective = mat4.create();
    glfx.scene.matModelView = mat4.create();
    glfx.setFOV(90);

    // Reset render target
    glfx.gl.bindTexture(glfx.gl.TEXTURE_2D, null);
    glfx.gl.bindRenderbuffer(glfx.gl.RENDERBUFFER, null);
    glfx.gl.bindFramebuffer(glfx.gl.FRAMEBUFFER, null);

    // Execute callback if one was passed
    if(typeof _onInitComplete !== 'undefined') {
        _onInitComplete();
    }

    // Begin rendering
    glfx.render(0);

    return true;
}

// Render loop function
glfx.render = function(time) {

requestAnimationFrame(glfx.render);

if(glfx.assetRef < 0) {
return;
}

// Reset framebuffer
glfx.gl.bindFramebuffer(glfx.gl.FRAMEBUFFER, null);        

// Clear viewport
glfx.gl.viewport(0, 0, glfx.gl.viewportWidth, glfx.gl.viewportHeight);
glfx.gl.clear(glfx.gl.COLOR_BUFFER_BIT | glfx.gl.DEPTH_BUFFER_BIT);                    

// Calculate frame time delta
var tdelta = 0;
if(glfx.scene.ptime > 0) {
tdelta = time - glfx.scene.ptime;
}    
glfx.scene.ptime = time;

// Render all models in scene
for(var i=0; i<glfx.scene.graph.length; i++) {

mat4.identity(glfx.scene.matModelView);                

glfx.scene.graph[i].update(tdelta, glfx.scene.graph[i]);
var objpos = glfx.scene.graph[i].position;
var objrot = glfx.scene.graph[i].rotation;
var objscale = glfx.scene.graph[i].scale;

mat4.scale(glfx.scene.matModelView, objscale);
mat4.translate(glfx.scene.matModelView, objpos);
mat4.rotate(glfx.scene.matModelView, objrot[0], [1, 0, 0]);                
mat4.rotate(glfx.scene.matModelView, objrot[1], [0, 1, 0]);        
mat4.rotate(glfx.scene.matModelView, objrot[2], [0, 0, 1]);                        

glfx.scene.graph[i].render(tdelta, glfx.scene.graph[i], glfx.scene.matModelView, glfx.scene.matPerspective);
}

}

Initializing glfx

Initializing glfx simply involves calling the glfx.init() function with the canvas element that’s going to be used to render on.

var canvasElem = document.getElementById('wgl-canvas');
glfx.init(canvasElem);

This will set up the rendering interface, which will begin rendering frames, but as there is nothing in the scene, only a clear is done when a frame is rendered. The clear color is set to white (1,1,1,1) and the field of view to 90deg by default; these can be changed with the glfx.setClearColor() and glfx.setFOV() methods, respectively.
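
For example, to switch to a black background and a narrower field of view:

glfx.setClearColor([0, 0, 0, 1]);  // RGBA clear color
glfx.setFOV(45);                   // field of view, in degrees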

Loading assets

Assets (shaders, textures, and models) are loaded asynchronously via AJAX requests. As there may be dependencies on multiple assets for rendering and scene creation, a simple semaphore is used, glfx.assetRef.

  • glfx.assetRef is decremented when a new request for an asset is issued and incremented once the AJAX call succeeds and the asset has been created.
  • When glfx.assetRef < 0, it indicates a pending asset for the scene and no rendering is done.
  • A callback can be scheduled for when glfx.assetRef = 0 (i.e. all pending assets loaded) via the glfx.whenAssetsLoaded() method.

// Load basic shaders for rendering
glfx.shaders.load('basic.vs', "vert-shader-basic", glfx.gl.VERTEX_SHADER);
glfx.shaders.load('basictex.fs', "frag-shader-tex", glfx.gl.FRAGMENT_SHADER);

// Load necessary textures
glfx.textures.load('img/test.png', 'test-tex');

// Load models used in scene
glfx.models.load('cube.json', 'cubemdl', glfx.models.jsonParser);

Note that all the asset load methods take a URL as the first argument, and a name as the second argument. The name is an identifier by which to lookup the asset from the buffer it’s stored in. Also, glfx.models.jsonParser is the only model parser available and loads models corresponding to the JSON data produced by my Wavefront OBJ to JSON converter.

Building a scene

After assets are loaded, we can create shader programs and world objects, then add them to the scene.

glfx.whenAssetsLoaded(function() {

    // Create shader program from loaded shaders
    var shprog = glfx.gl.createProgram();
    glfx.gl.attachShader(shprog, glfx.shaders.buffer['vert-shader-basic']);
    glfx.gl.attachShader(shprog, glfx.shaders.buffer['frag-shader-tex']);
    glfx.gl.linkProgram(shprog);

    if (!glfx.gl.getProgramParameter(shprog, glfx.gl.LINK_STATUS)) {
        alert("Could not create shader program");
        return false;
    }

    // Setup variables for shader program
    shprog.vertexPositionAttribute = glfx.gl.getAttribLocation(shprog, "aVertexPosition");
    glfx.gl.enableVertexAttribArray(shprog.vertexPositionAttribute);

    shprog.pMatrixUniform = glfx.gl.getUniformLocation(shprog, "uPMatrix");
    shprog.mvMatrixUniform = glfx.gl.getUniformLocation(shprog, "uMVMatrix");

    shprog.textureCoordAttribute = glfx.gl.getAttribLocation(shprog, "aTextureCoord");
    glfx.gl.enableVertexAttribArray(shprog.textureCoordAttribute);


    // add some cubes to the scene graph
    var cubeA = new glfx.scene.worldObject(glfx.models.buffer['cubemdl'], shprog);
    cubeA.position = vec3.create([-1.6, 0.0, -25.0]);
    cubeA.rotation = vec3.create([0.0, 0.0, 0.0]);
    cubeA.scale = vec3.create([0.70, 1.0, 1.0]);
    cubeA.render = function(tdelta, wobj, matModelView, matPerspective) {
        // Setup shader program to use
        var shprog = wobj.shprog;
        glfx.gl.useProgram(shprog);

        var tex = glfx.textures.buffer['test-tex'];
        glfx.gl.activeTexture(glfx.gl.TEXTURE0);
        glfx.gl.bindTexture(glfx.gl.TEXTURE_2D, tex);
        glfx.gl.uniform1i(shprog.samplerUniform, 0);

        glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, wobj.base.vertexBuffer);
        glfx.gl.vertexAttribPointer(shprog.vertexPositionAttribute, wobj.base.vertexBuffer.itemSize, glfx.gl.FLOAT, false, 0, 0);

        glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, wobj.base.texcoordBuffer);
        glfx.gl.vertexAttribPointer(shprog.textureCoordAttribute, wobj.base.texcoordBuffer.itemSize, glfx.gl.FLOAT, false, 0, 0);

        glfx.gl.uniformMatrix4fv(shprog.pMatrixUniform, false, matPerspective);
        glfx.gl.uniformMatrix4fv(shprog.mvMatrixUniform, false, matModelView);

        glfx.gl.bindBuffer(glfx.gl.ELEMENT_ARRAY_BUFFER, wobj.base.indexBuffer);
        glfx.gl.drawElements(glfx.gl.TRIANGLES, wobj.base.indexBuffer.numItems, glfx.gl.UNSIGNED_SHORT, 0);
    }

    cubeA.update = function(tdelta, wobj) {

        // some code to position and spin cubeA

        if(wobj.position[2] < -5.0) {
            wobj.position[2] += 0.022 * tdelta;
        }
        else {
            wobj.position[2] = -5.0;
        }

        wobj.rotation[0] = 0.35;
        wobj.rotation[1] += -(75 * tdelta) / 50000.0;
        if( Math.abs(wobj.rotation[1]) >= 2.0*Math.PI ) {
            wobj.rotation[1] = 0.0;
        }
    }
    glfx.scene.addWorldObject( cubeA );


    // Add another cube to the scene
    var cubeB = new glfx.scene.worldObject(glfx.models.buffer['cubemdl'], shprog);
    cubeB.position = vec3.create([1.6, 0.0, -25.0]);
    cubeB.rotation = vec3.create([0.0, 0.0, 0.0]);
    cubeB.scale = vec3.create([0.70, 1.0, 1.0]);
    cubeB.update = function(tdelta, wobj) {
        // some code to position and spin cubeB
        if(cubeA.position[2] > -15.0) {
            if(wobj.position[2] < -5.0) {
                wobj.position[2] += 0.022 * tdelta;
            }
            else {
                wobj.position[2] = -5.0;
            }
        }

        wobj.rotation[0] = 0.35;
        wobj.rotation[1] += -(75 * tdelta) / 50000.0;
        if( Math.abs(wobj.rotation[1]) >= 2.0*Math.PI ) {
            wobj.rotation[1] = 0.0;
        }
    }

    cubeB.render = function(tdelta, wobj, matModelView, matPerspective) {
        // Setup shader program to use
        var shprog = wobj.shprog;
        glfx.gl.useProgram(shprog);

        var tex = glfx.textures.buffer['test-tex'];
        glfx.gl.activeTexture(glfx.gl.TEXTURE0);
        glfx.gl.bindTexture(glfx.gl.TEXTURE_2D, tex);
        glfx.gl.uniform1i(shprog.samplerUniform, 0);

        glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, wobj.base.vertexBuffer);
        glfx.gl.vertexAttribPointer(shprog.vertexPositionAttribute, wobj.base.vertexBuffer.itemSize, glfx.gl.FLOAT, false, 0, 0);

        glfx.gl.bindBuffer(glfx.gl.ARRAY_BUFFER, wobj.base.texcoordBuffer);
        glfx.gl.vertexAttribPointer(shprog.textureCoordAttribute, wobj.base.texcoordBuffer.itemSize, glfx.gl.FLOAT, false, 0, 0);

        glfx.gl.uniformMatrix4fv(shprog.pMatrixUniform, false, matPerspective);
        glfx.gl.uniformMatrix4fv(shprog.mvMatrixUniform, false, matModelView);

        glfx.gl.bindBuffer(glfx.gl.ELEMENT_ARRAY_BUFFER, wobj.base.indexBuffer);
        glfx.gl.drawElements(glfx.gl.TRIANGLES, wobj.base.indexBuffer.numItems, glfx.gl.UNSIGNED_SHORT, 0);
    }

    glfx.scene.addWorldObject( cubeB );

});

Shaders for programs are pulled from the glfx.shaders.buffer[] associative array, referenced by the name specified when they were loaded.

Once we have a shader program and a model, we can create items for the scene by constructing glfx.scene.worldObject objects:

  • Construct the glfx.scene.worldObject object by specifying a model from the glfx.models.buffer[] associative array and the shader program as arguments to the constructor.
  • The worldObject.position, worldObject.rotation, and worldObject.scale vectors can be set as desired.
  • The worldObject.update() method can be overridden to describe how to manipulate the object in each frame.
  • The worldObject.render() method can be overridden to render the objects making use of the underlying buffers in worldObject.base: worldObject.base.indexBuffer, worldObject.base.vertexBuffer, worldObject.base.normalBuffer, worldObject.base.texcoordBuffer, as well as textures from the glfx.textures.buffer[] associative array.
  • Note that transformation on the model-view matrix (matModelView) is done within glfx.render() and should not be done within worldObject.render().

This callback is not ideal. I’m exposing a lot of rendering code that would best be abstracted away to glfx. However, without a strict definition of how a model should be textured or what variables are to be passed over to the vertex and fragment shaders, abstracting further is premature.

Wavefront OBJ to JSON converter

I began experimenting with WebGL a while back and hit a wall when it came to importing geometry data into a scene, as you can only get so far with programmatically generated cubes and spheres. I looked into a Wavefront OBJ to JSON conversion tool (to just spit out the vertices, normals, and texture coordinates of the OBJ model), but couldn’t find much. There’s a Blender plugin for Three.js, but I didn’t want a dependency on Three.js nor the Three.js JSON file format; I have nothing against either, but I didn’t want to add a thick layer of abstraction like Three.js and, honestly, I wanted to delve into the OBJ format and deal with working on geometry data at the vertex level.

The result is a simple OBJ to JSON converter written in C++. The code for the entire program is below and also available via the bitbucket repository.

One of my test models was the Clocktower model by thinice and shown below is a WebGL rendering of the converted geometry data.

The program takes 2 arguments:

  • The path of the input OBJ file
  • The name of the output JSON file

OBJ to JSON converter

Note that the converter does have some limitations:

  • It will not deal with materials (i.e. it does not parse any corresponding MTL files)
  • It will only parse triangle faces (no quadrilaterals). If the model has quadrilaterals, you can convert them to triangles in Blender by going into Edit mode, selecting all vertices, and hitting CTRL + T.
  • Named objects and polygon groups are ignored; the converter essentially treats everything in the file as a single polygon group.

#include <iostream>
#include <vector>
#include <string>
#include <cstdio>
#include <cstdlib>
#include <cstring>


struct vec2
{
    public:
        vec2(float _u, float _v) : u(_u), v(_v) { }

        float u;
        float v;
};

struct idx3
{
    public:
        idx3(int _a, int _b, int _c) : a(_a), b(_b), c(_c) { }

        bool operator==(const idx3& other) const {
            if( this->a == other.a && this->b == other.b && this->c == other.c) {
                return true;
            }

            return false;
        }

        int a;
        int b;
        int c;
};

struct vec4
{
    public:
        vec4(float _x, float _y, float _z, float _w) : x(_x), y(_y), z(_z), w(_w) { }

        float x;
        float y;
        float z;
        float w;
};

struct tri
{
    public:
        tri(int _v1, int _v2, int _v3) : v1(_v1), v2(_v2), v3(_v3), vn1(0), vn2(0), vn3(0), vt1(0), vt2(0), vt3(0) { }

        int v1;
        int vn1;
        int vt1;

        int v2;
        int vn2;
        int vt2;

        int v3;
        int vn3;
        int vt3;
};

struct polygroup
{
    public:
        std::vector<vec4>    verts;
        std::vector<vec4>    normals;
        std::vector<vec2>    texcoords;
        std::vector<tri>     tris;
};

struct polygroup_denormalized
{
    public:
        std::vector<vec4>    verts;
        std::vector<vec4>    normals;
        std::vector<vec2>    texcoords;
        std::vector<int>     indexbuf;
};

void echo(const char* line)
{
    std::cout << line << std::endl;
}

vec4 parseVertex(
const char* line)
{
    
char prefix[4];
    
float x, y, z;

    sscanf(line,
"%s %f %f %f", prefix, &x, &y, &z);

    
return vec4(x,y,z,1);
}

vec2 parseTexCoord(
const char* line)
{
    
char prefix[4];
    
float u, v;

    sscanf(line,
"%s %f %f", prefix, &u, &v);

    
return vec2(u,v);
}


std::vector<
int> readFace(const char* fstr)
{
    std::vector<
int> ret;

    
char buf[64];
    
int bufidx = 0;
    
for(int i=0; i<strlen(fstr); i++) {

        
if(fstr[i] != '/') {
            buf[bufidx++] = fstr[i];
        }
else {
            ret.push_back( atoi(buf) );
            bufidx = 0;
            memset(buf, 0, 64);
// clear buffer
        
}
    }

    
if(strlen(buf) > 0) {
        ret.push_back( atoi(buf) );
    }

    
return ret;
}

tri parseTriFace(
const char* line)
{
    
char prefix[4];
    
char p1[64];
    
char p2[64];
    
char p3[64];

    
int v1=0, v2=0, v3=0;
    
int vn1=0, vn2=0, vn3=0;
    
int vt1=0, vt2=0, vt3=0;

    sscanf(line,
"%s %s %s %s", prefix, p1, p2, p3);

    std::vector<
int> f1 = readFace(p1);
    
if(f1.size() >= 1) { v1 = f1[0] - 1; }
    
if(f1.size() >= 2) { vt1 = f1[1] - 1; }
    
if(f1.size() >= 3) { vn1 = f1[2] - 1; }

    std::vector<
int> f2 = readFace(p2);
    
if(f2.size() >= 1) { v2 = f2[0] - 1; }
    
if(f2.size() >= 2) { vt2 = f2[1] - 1; }
    
if(f2.size() >= 3) { vn2 = f2[2] - 1; }

    std::vector<
int> f3 = readFace(p3);
    
if(f3.size() >= 1) { v3 = f3[0] - 1; }
    
if(f3.size() >= 2) { vt3 = f3[1] - 1; }
    
if(f3.size() >= 3) { vn3 = f3[2] - 1; }


    tri ret(v1, v2, v3);
    ret.vt1 = vt1;
    ret.vt2 = vt2;
    ret.vt3 = vt3;
    ret.vn1 = vn1;
    ret.vn2 = vn2;
    ret.vn3 = vn3;

    
return ret;
}

std::vector<polygroup*> polygroups_from_obj(
const char* filename)
{
    
bool inPolyGroup = false;
    polygroup* curPolyGroup = NULL;
    std::vector<polygroup*>    polygroups;

    FILE* fp = fopen(filename,
"r");
    
if(fp == NULL) {
        echo(
"ERROR: Input file not found");
        
return polygroups;
    }

    
// make poly group
    
if(curPolyGroup == NULL) {
        curPolyGroup =
new polygroup();
        polygroups.push_back(curPolyGroup);
    }

    
// parse
    
echo("reading OBJ geometry data...");
    
while(true) {

        
char buf[2056];
        
if(fgets(buf, 2056, fp) != NULL) {

            
if(strlen(buf) >= 1) {

                
// texture coordinate line
                
if(strlen(buf) >= 2 && buf[0] == 'v' && buf[1] == 't') {
                    vec2 tc = parseTexCoord(buf);
                    curPolyGroup->texcoords.push_back(tc);
                }
                
// vertex normal line
                
else if(strlen(buf) >= 2 && buf[0] == 'v' && buf[1] == 'n') {
                    vec4 vn = parseVertex(buf);
                    curPolyGroup->normals.push_back(vn);
                }
                
// vertex line
                
else if(buf[0] == 'v') {
                    vec4 vtx = parseVertex(buf);
                    curPolyGroup->verts.push_back(vtx);
                }
                
// face line (ONLY TRIANGLES SUPPORTED)
                
else if(buf[0] == 'f') {
                    tri face = parseTriFace(buf);
                    curPolyGroup->tris.push_back(face);
                }
                
else
                
{ }

            }

        }
else {
            
break;
        }
    }

    fclose(fp);

    
return polygroups;
}


std::string int_array_to_json_array(
const std::vector<int>& arr)
{
    std::string json =
"[";
    
for(int i=0; i<arr.size(); i++) {

        
char buf[256];
        sprintf(buf,
"%i", arr[i]);
        
        
if(i > 0) {
            json.append(
",");
        }

        json.append(buf);
    }

    json.append(
"]");

    
return json;
}

std::string vec4_array_to_json_array(
const std::vector<vec4>& arr)
{
    std::string json =
"[";
    
for(int i=0; i<arr.size(); i++) {

        
char buf[64];
        sprintf(buf,
"%f", arr[i].x);
        
        
if(i > 0) {
            json.append(
",");
        }

        json.append(buf);

        sprintf(buf,
"%f", arr[i].y);
        json.append(
",");
        json.append(buf);

        sprintf(buf,
"%f", arr[i].z);
        json.append(
",");
        json.append(buf);
    }

    json.append(
"]");

    
return json;
}

std::string vec2_array_to_json_array(
const std::vector<vec2>& arr)
{
    std::string json =
"[";
    
for(int i=0; i<arr.size(); i++) {

        
char buf[64];
        sprintf(buf,
"%f", arr[i].u);
        
        
if(i > 0) {
            json.append(
",");
        }

        json.append(buf);

        sprintf(buf,
"%f", arr[i].v);
        json.append(
",");
        json.append(buf);
    }

    json.append(
"]");

    
return json;
}


polygroup_denormalized* denormalize_polygroup(polygroup& pg)
{
    polygroup_denormalized* ret = new polygroup_denormalized();

    std::vector<idx3> processedVerts;

    for(int i=0; i<pg.tris.size(); i++) {

        for(int v=0; v<3; v++) {

            idx3 vidx(0,0,0);
            if(v == 0) {
                vidx = idx3(pg.tris[i].v1, pg.tris[i].vn1, pg.tris[i].vt1);
            }
            else if(v == 1) {
                vidx = idx3(pg.tris[i].v2, pg.tris[i].vn2, pg.tris[i].vt2);
            }
            else if (v == 2) {
                vidx = idx3(pg.tris[i].v3, pg.tris[i].vn3, pg.tris[i].vt3);
            }
            else { }

            // check if we already processed the vert
            int indexBufferIndex = -1;
            for(int pv=0; pv<processedVerts.size(); pv++) {
                if(vidx == processedVerts[pv]) {
                    indexBufferIndex = pv;
                    break;
                }
            }

            // add to buffers
            if(indexBufferIndex == -1) {

                processedVerts.push_back(vidx);

                ret->verts.push_back(pg.verts[vidx.a]);

                if(pg.normals.size() > 0) {
                    ret->normals.push_back(pg.normals[vidx.b]);
                }

                if(pg.texcoords.size() > 0) {
                    ret->texcoords.push_back(pg.texcoords[vidx.c]);
                }

                int idx = (int)ret->verts.size() - 1;
                ret->indexbuf.push_back(idx);

            }
            else {
                ret->indexbuf.push_back(indexBufferIndex);
            }

        }

    }

    return ret;
}

void polygroup_to_json(polygroup& pg, const char* jsonFilename)
{
    echo(
"denormalizing polygroup...");
    polygroup_denormalized* dpg = denormalize_polygroup(pg);

    echo(
"making verts array...");
    std::string vertsStr =
"";
    vertsStr.append(
"\"verts\":");
    vertsStr.append(vec4_array_to_json_array(dpg->verts));
    vertsStr.append(
",");

    echo(
"making indices array...");
    std::string indicesStr =
"";
    indicesStr.append(
"\"indices\":");
    indicesStr.append(int_array_to_json_array(dpg->indexbuf));
    indicesStr.append(
",");

    echo(
"making texcoords array...");
    std::string texcoordsStr =
"";
    texcoordsStr.append(
"\"texcoords\":");
    
if(dpg->texcoords.size() > 0) {
        texcoordsStr.append(vec2_array_to_json_array(dpg->texcoords));
    }
else {
        texcoordsStr.append(
"[]");
    }
    texcoordsStr.append(
",");

    echo(
"making normals array...");
    std::string normalsStr =
"";
    normalsStr.append(
"\"normals\":");
    
if(dpg->normals.size() > 0) {
        normalsStr.append(vec4_array_to_json_array(dpg->normals));
    }
else {
        normalsStr.append(
"[]");
    }


    echo(
"writing output file...");
    FILE *fp = fopen(jsonFilename,
"w");
    fputs(
"{", fp);
    fputs(vertsStr.c_str(), fp);
    fputs(
"\n", fp);    
    fputs(indicesStr.c_str(), fp);
    fputs(
"\n", fp);
    fputs(texcoordsStr.c_str(), fp);
    fputs(
"\n", fp);
    fputs(normalsStr.c_str(), fp);
    fputs(
"}", fp);
    fclose(fp);

    
delete dpg;
    dpg = NULL;
}


int main(int argc, char *argv[])
{
    echo("OBJ to JSON converter");

    if(argc < 3) {
        echo("ERROR: Invalid arguments");
        echo("ARGS: wavefrontOBJtoJSON.exe <inputFile> <outputFile>");
        return 0;
    }

    char* inputFilename = argv[1];
    char* outputFilename = argv[2];

    echo("reading OBJ data into polygroup...");
    std::vector<polygroup*> pg = polygroups_from_obj(inputFilename);

    if(pg.size() > 0) {
        echo("converting polygroup to JSON arrays...");
        polygroup_to_json(*pg[0], outputFilename);
    }

    // cleanup
    for(int i=0; i<pg.size(); i++) {
        delete pg[i];
        pg[i] = NULL;
    }
    pg.clear();

    echo("done.");

    return 0;
}

One notable aspect of the conversion is denormalizing the geometry [done in denormalize_polygroup()]. An OBJ file stores unique lists of vertices, normals, texture coordinates, etc., and for each face there is an index into the list of vertices, an index into the list of normals, and so on. This is great when it comes to storing data (it eliminates duplicate geometry data and decreases the file size), but when rendering you can’t have the data organized like this, as you can only have a single index buffer, where each index corresponds to the same location within the list of vertices, normals, texture coordinates, etc. Therefore, data must be duplicated such that every combination of vertex coordinate, texture coordinate, normal, etc. is uniquely identified by an entry in the index buffer (e.g. if 2 vertices have the same position but different texture coordinates, they have to be identified by different indices in the index buffer, and the position must be duplicated so that each position + texture coordinate combination can be referenced by its own index).
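
As a small worked example (hypothetical indices), consider two triangle faces that share vertex 1 but reference different texture coordinates for it:

f 1/1/1 2/2/1 3/3/1
f 1/4/1 3/3/1 4/5/1

Vertex 1 appears once with texture coordinate 1 and once with texture coordinate 4, so it gets duplicated; the 3/3/1 combination is identical in both faces, so its entry is reused. The denormalized buffers come out as:

verts:     v1  v2  v3  v1  v4
texcoords: t1  t2  t3  t4  t5
normals:   n1  n1  n1  n1  n1
indices:   0 1 2   3 2 4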

EDIT (11/1/2013): The initial code committed and presented output JavaScript variables set equal to arrays; the code has been updated to output valid JSON data instead. Furthermore, the namespace argument for the program is no longer required and no type of namespacing is done on the output data.