Archive for January, 2020

Writing to the Bitfenix ICON display with Rust (part 1, writing images)

Bitfenix ICON

I love the Bitfenix Pandora case and used it for my desktop PC build a few years ago. One interesting aspect of this case is the LCD attached to the front, the Bitfenix ICON display. I’d never done anything with the display (I just left the default logo on it), but I recently discovered that the source code was available and that it was possible to programmatically write to the display, and I became interested in how this is done and what I could do with it.

Instead of using the code provided by Bitfenix, I used the modified source produced by Rizvi R. I fired up Visual C++ and dealt with downloading the necessary libraries and pointing the compiler + linker at them. I hit a stumbling block with libjpeg, but after stripping out the JPEG support I was able to get something up and running. The code loads an image, resizes it to fit the display, reduces the color depth to match the display, and then writes to the display using HIDAPI (which is really cool and something I didn’t realize existed). The code was OK, but as a learning exercise I decided to see if I could modularize it further and rewrite it in Rust (avoiding some of the less modern aspects of the C++ world, like package management).

The Display

The ICON display is far from any sort of high-quality panel, but it’s adequate for something mounted on the front of a case. I haven’t come across official specs, but here’s what I’ve been able to gather:

  • Resolution of 240×320
  • 16-bit color, 5-6-5 format (5 bits for red, 6 bits for green, 5 bits for blue)
  • Super slow refresh rate, like over 1 second (so not suitable for any sort of animation)

Rust Dependencies

We’ll make use of two Rust packages, png and hidapi, via cargo:

[dependencies]
png = "0.15.3"
hidapi = "1.1.0"

Loading PNG images & converting to 16bpp

We can load a PNG image and get a pixel buffer as follows:

use std::fs::File;

fn load_png_image(filepath: &str) -> Vec<u8> {
    let decoder = png::Decoder::new(File::open(filepath).unwrap());
    let (info, mut reader) = decoder.read_info().unwrap();

    // Allocate the output buffer and read the frame into it
    let mut buf = vec![0; info.buffer_size()];
    reader.next_frame(&mut buf).unwrap();

    buf
}

This will work well for 24bpp images and return a buffer with R, G, and B components, 8 bits each. As the ICON is a 5-6-5 16bpp display, we’ll need to reduce the color depth by chopping off bits with some bit shift operations (for more details on what’s happening here, this is a good resource):

fn reduce_image_to_16bit_color(image_buf: &[u8]) -> Vec<u8> {
    let mut result: Vec<u8> = Vec::with_capacity(240 * 320 * 2);

    // Each source pixel is 3 bytes (R, G, B); each output pixel is 2 bytes (5-6-5)
    for i in (0..image_buf.len()).step_by(3) {
        let b: u16 = ((image_buf[i + 2] as u16) >> 3) & 0x001F;
        let g: u16 = (((image_buf[i + 1] as u16) >> 2) << 5) & 0x07E0;
        let r: u16 = (((image_buf[i] as u16) >> 3) << 11) & 0xF800;
        let rgb: u16 = r | g | b;

        result.push(((rgb >> 8) & 0x00FF) as u8);
        result.push((rgb & 0x00FF) as u8);
    }

    result
}

We can now load an image and reduce the color depth as follows:

// Image needs to be 240x320 (24bpp, no alpha channel)
let src_image = load_png_image("assets/1.png");
let reduced_color_img = reduce_image_to_16bit_color(&src_image);

Open the ICON device

Using HIDAPI, we can open the device with its vendor ID (0x1fc9) and product ID (0x100b):

let hid = HidApi::new().unwrap();
let bitfenix_icon_device = hid.open(0x1fc9, 0x100b).unwrap();

I thought it might be useful to query and surface product and manufacturer info of the device as follows:

let device_manuf_string = bitfenix_icon_device.get_manufacturer_string().unwrap().unwrap();
let device_prod_string = bitfenix_icon_device.get_product_string().unwrap().unwrap();
println!("{}: {}", device_manuf_string.as_str(), device_prod_string.as_str());

However, this isn’t terribly insightful. We get a manufacturer string of “NXP Semiconductors” and a product string of “LPC11Uxx HID”, which identifies the microcontroller used on the device rather than the display itself.

Writing to the display

With the pixel buffer and an open device, we can now write a new image to the display in a three-step process:

  • Clear the display
  • Write the buffer to the display
  • Refresh the display

I’m somewhat confused as to why the display needs to be cleared; if I skip this step, I end up with weird overdraw of pixels on top of the existing image. In any case, clearing the display is simple enough and involves writing a 6-byte code to the device:

fn clear_display(bitfenix_icon_device: &HidDevice) {
    let erase_flash_code: [u8; 6] = [0x0, 0x1, 0xde, 0xad, 0xbe, 0xef];
    bitfenix_icon_device.write(&erase_flash_code).unwrap();
}

Writing the buffer to the display involves copying up to 64 bytes at a time to the device: a maximum of 61 bytes from the pixel buffer, plus a 3-byte header indicating the operation and the number of bytes being written. At 2 bytes per pixel, a full 240×320 frame is 153,600 bytes, so a complete image takes 2,519 of these writes:

fn write_image_to_display(bitfenix_icon_device: &HidDevice, image_buf: &[u8]) {
    // +3 bytes for the header; note that the device only accepts writes in 64-byte chunks
    let num_image_bytes_per_write = 61;
    let num_writes = ((image_buf.len() as f64 / num_image_bytes_per_write as f64).ceil()) as usize;

    for i in 0..num_writes {
        let start = i * num_image_bytes_per_write;
        let mut length = num_image_bytes_per_write;

        // The final write will likely be smaller than a full 61-byte chunk
        if i == (num_writes - 1) {
            length = image_buf.len() - ((num_writes - 1) * num_image_bytes_per_write);
        }

        let mut image_data_with_header: Vec<u8> = Vec::with_capacity(length + 3);
        image_data_with_header.push(0x0);
        image_data_with_header.push(0x2);
        image_data_with_header.push(length as u8);

        for image_byte_idx in start..start + length {
            image_data_with_header.push(image_buf[image_byte_idx]);
        }

        bitfenix_icon_device.write(&image_data_with_header).unwrap();
    }
}

Refreshing the display requires writing a 2-byte code to the display:

fn refresh_display(bitfenix_icon_device: &HidDevice) {
    let refresh_code: [u8; 2] = [0x0, 0x3];
    bitfenix_icon_device.write(&refresh_code).unwrap();
}

Pulling it all together we have a few straightforward function calls:

println!("Writing new image..."); clear_display(&bitfenix_icon_device); // needs to be done or you end up with weird overwriting on top of exiting image write_image_to_display(&bitfenix_icon_device, &reduced_color_img); println!("Refreshing display..."); refresh_display(&bitfenix_icon_device);
Bitfenix ICON image

Code and future work

I have the above code up on GitHub. This was fun and a great learning experience. Next, I think it would be interesting to see if the display could be utilized to reflect information about the computer, maybe showing CPU usage or internet connection status. There could be some utility in providing such metrics to the user and it would give the display a purpose beyond that of a digital case badge.

setInterval with 0ms delays within Web Workers

The 4ms minimum

Due to browser restrictions, you typically can’t have a setInterval() call that actually fires with a delay of 0ms. From MDN:

In modern browsers, setTimeout()/setInterval() calls are throttled to a minimum of once every 4 ms when successive calls are triggered due to callback nesting (where the nesting level is at least a certain depth), or after certain number of successive intervals.

I confirmed this by testing with the following code in Chrome and Firefox windows:

setInterval(() => {
    console.log(`now is ${Date.now()}`);
}, 0);

In Firefox 72:

Firefox window setInterval with 0ms delay

In Chrome 79:

Chrome window setInterval with 0ms delay

The delay isn’t exact, but you can see that it typically comes out to around 4ms, as expected. However, things are a little different with web workers.

The delay with web workers

To see the timing behavior within a web worker, I used the following code for the worker:

const printNow = function() {
    console.log(`now is ${Date.now()}`);
};

setInterval(printNow, 0);

onmessage = function(_req) { };

… and created it via const worker = new Worker('worker.js');.

In Chrome, there are no surprises; the behavior in the worker was similar to what it was in the window:

Chrome worker setInterval with 0ms delay

Things get interesting in Firefox:

Firefox worker setInterval with 0ms delay

Firefox starts grouping the log messages (the blue bubbles), as we get multiple calls to the function within the same millisecond. Firefox’s UI becomes unresponsive and there’s a very noticeable spike in CPU usage, which is weird and something I didn’t expect, as this is on an i7-4790K with 8 logical processors and there’s little interaction between the worker and the parent window.

Takeaway

setInterval() needs a delay, and you shouldn’t depend on the browser to set something reasonable for you. It would be nice if setInterval(.., 0) told the browser to execute as fast as reasonably possible, adjusting for UI responsiveness, power consumption, etc., but that’s clearly not happening here, and as such it’s dangerous to make a call like this, which may render the user’s browser unresponsive.
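Until that’s the case, the safer pattern is simply to make the minimum delay explicit. A minimal sketch (doWork() here is a hypothetical stand-in for whatever the interval actually does):

setInterval(() => {
    doWork(); // hypothetical stand-in for the interval's actual work
}, 4);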

Prioritizing Web Worker Requests

Web workers handle incoming request messages via a function declared on the onmessage property of the worker. A perhaps-not-so-obvious behavior here is that incoming requests are queued. If you’re doing something intensive within the worker (or the CPU is taxed because of other processes), the queuing behavior becomes more obvious, as you need to wait longer for a response from the worker due to the fact that previous requests need to be picked up and handled first. Here’s a simple worker that does some heavy lifting (at least for Chrome 79 on an i7-4790K):

const highLoadWork = function() {
    let x = 1000;
    for (let i = 0; i < 99999999; i++) {
        if (i % 2 === 0) {
            x += 1000;
        } else {
            x = Math.sqrt(x);
        }
    }

    return `hello, x=${x}`;
};

onmessage = function(_req) {
    const requestNum = _req.data.requestNum;
    const workResult = highLoadWork();
    postMessage({ "response": `responding to request ${requestNum}` });
};

… and here’s what happens after making 12 requests to it in a loop:
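For reference, here’s a minimal sketch of that kind of test harness (the worker filename and message shape are assumptions based on the worker code above):

const worker = new Worker('worker.js');

worker.onmessage = function(_resp) {
    console.log(_resp.data.response);
};

// Fire off 12 requests back-to-back
for (let requestNum = 1; requestNum <= 12; requestNum++) {
    worker.postMessage({ "requestNum": requestNum });
}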

I can’t say that this is a bad thing; it’s generally sensible and what you’d expect to happen. That said, there are workloads where you may want to prioritize things differently. GraphPaper was one such case for me: the workers handle things based on user interactions, so the last request represents the current state of the world and is typically the only request that matters (any others from before can be thrown away). Unfortunately, there is no mechanism to interact with or re-prioritize messages in this underlying queue. However, you can offload the requests to a queue that the worker manages internally by itself: the onmessage() function simply puts the request data in a queue, and we can use setInterval() to continuously call a function that pulls and processes requests from this queue. Here’s what the modified worker code looks like, where the latest request is prioritized (and previous ones are thrown away):

const highLoadWork = function() {
    let x = 1000;
    for (let i = 0; i < 99999999; i++) {
        if (i % 2 === 0) {
            x += 1000;
        } else {
            x = Math.sqrt(x);
        }
    }

    return `hello, x=${x}`;
};

const requestQueue = [];

const processRequestQueue = function() {
    if (requestQueue.length === 0) {
        return;
    }

    // Take the most recent request and throw away everything before it
    const lastRequest = requestQueue.pop();
    requestQueue.length = 0;

    const requestNum = lastRequest.requestNum;
    const workResult = highLoadWork();
    postMessage({ "response": `responding to request ${requestNum}` });
};

setInterval(processRequestQueue, 4);

onmessage = function(_req) {
    requestQueue.push(_req.data);
};

… and here’s what happens after making 12 requests to it in a loop:

(sometimes, there are also cases where only request 12 was processed)

So this works pretty well, but there are a few things to be aware of. The time it takes to post a message to the worker is somewhere between 0ms and 1ms, plus the cost of copying any data that needs to be transferred to the worker. The setInterval() minimum is not really 0ms; the browser sets a reasonable minimum, which you can probably expect to be between 4ms and 10ms, and this is in addition to the cost of posting the message to the worker. (The code above explicitly specifies a 4ms delay; setInterval with a 0ms delay isn’t a good idea.) What this means in practice is that there is additional latency before we begin processing a request, but compared to a scenario where we have to wait on all prior requests to finish processing (which is the point of doing this to begin with), I expect this method to win out in performance.
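To get a feel for the messaging overhead on a given machine, a rough sketch like the following can measure the round trip (it assumes a hypothetical echo-worker.js that calls postMessage() immediately from its onmessage handler, doing no work in between):

const echoWorker = new Worker('echo-worker.js'); // hypothetical echo worker
const t0 = performance.now();

echoWorker.onmessage = function() {
    console.log(`round trip took ${(performance.now() - t0).toFixed(2)}ms`);
};

echoWorker.postMessage({});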

Finally, here’s a look at a GraphPaper stress test and how prioritizing the last request to the connector routing worker (which is responsible for generating the path between the 2 nodes) allows for a faster/less-laggy update:

No prioritization

Prioritize last request, eliminate prior

Encapsulating Web Workers

Constructing Web Workers

There are generally two ways to construct a Web Worker…

Passing a URL to the Javascript file:

const myWorker = new Worker('worker.js');

Or, creating a URL with the Javascript code (as a string). This is done by creating a Blob from the string and passing the Blob to URL.createObjectURL:

const myWorker = new Worker( URL.createObjectURL(new Blob([...], {type: 'application/javascript'})) );
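For illustration, here’s what the second approach looks like end-to-end, with a trivial (hypothetical) worker body inlined as a string:

// Hypothetical inline worker that echoes back whatever it receives
const workerSource = `
    onmessage = function(e) {
        postMessage({ "echo": e.data });
    };
`;

const workerUrl = URL.createObjectURL(new Blob([workerSource], {type: 'application/javascript'}));
const myWorker = new Worker(workerUrl);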

In Practice

With GraphPaper, I’ve used the former approach for the longest while, depending on the caller to construct and inject the worker into GraphPaper.Canvas:

const canvas = new GraphPaper.Canvas(
    document.getElementById('paper'),                      // div to use
    window,                                                // parent window
    new Worker('../dist/connector-routing-worker.min.js')  // required worker for connector routing
);

This technically works but, in practice, there are two issues here:

  • There are usually a few hoops to go through for the caller to actually get the worker Javascript file into a location that is accessible by the web server. This could mean manually moving the file, additional configuration, additional tooling, etc.
  • GraphPaper.Canvas is responsible for dealing with whether a worker is used or not, which worker is used, how many workers are used, etc. These aren’t concerns that should bubble up to the caller. You could make a case that the caller should have the flexibility to swap in a worker of their choice (a strategy pattern); that’s a fair point, but I’d argue that the strategy here is what the worker executes, not the worker itself, and I haven’t figured out a good interface for what that would look like.

So I worked to figure out how to construct the worker within GraphPaper.Canvas using URL.createObjectURL(), and this is where things got trickier. The GraphPaper codebase is ES6 and uses ES6 modules. I use rollup with babel to produce the distribution files, the primary ones being minified IIFE bundles (IIFE because browser support for ES6 modules is still very much lacking). One of these bundles is the code for the worker (dist/connector-routing-worker.js), which I’d need to:

  • Encapsulate it into a string that can be referenced within the source
  • Create a Blob from the string
  • Create a URL from the Blob using URL.createObjectURL()
  • Pass the URL to the Worker constructor, new Worker(url)

The latter steps are straightforward function calls, but the first is not as clear-cut.

Repackaging with Rollup

After producing the “distribution” code for the worker, what I needed was to encapsulate it into a string like this (the “worker-string-wrap”):

const workerStringWrap =
`const ConnectorRoutingWorkerJsString = \`${workerCode}\`;
export { ConnectorRoutingWorkerJsString }`;

Writing that out to a file, I could then easily import it as just another ES6 module (and use the string to create a URL for the worker), then build and produce the distribution file for GraphPaper.

I first tried doing this with a nodejs script, but creating a rollup plugin proved a more elegant solution. Rollup plugins aren’t too difficult to create, but I did find the documentation a bit convoluted. Simply put, rollup will execute certain functions (hooks) at appropriate points during the build process. The hook needed in this scenario is writeBundle, which can be used to get the code of the produced bundle and do something with it (in this case, write it out to a file).

// rollup-plugin-stringify-worker.js
const fs = require('fs');

const stringifyWorkerPlugin = function (options) {
    return {
        name: 'stringifyWorkerPlugin',
        writeBundle(bundle) {
            console.log(`Creating stringified worker...`);

            // Note: options.srcBundleName and options.dest are expected args from the rollup config
            const workerCode = bundle[options.srcBundleName].code;
            const workerStringWrap = `const ConnectorRoutingWorkerJsString = \`${workerCode}\`; export { ConnectorRoutingWorkerJsString }`;

            fs.writeFile(options.dest, workerStringWrap, function(err) {
                // ...
            });
        }
    };
};

export default stringifyWorkerPlugin;

The plugin is setup within a rollup config file:

import stringifyWorker from './build/rollup-plugin-stringify-worker';

// ...

{
    input: 'src/Workers/ConnectorRoutingWorker.js',
    output: {
        format: 'iife',
        file: 'dist/workers/connector-routing-worker.min.js',
        name: 'ConnectorRoutingWorker',
        sourcemap: false,
    },
    plugins: [
        babel(babelConfig),
        stringifyWorker(
            {
                "srcBundleName": "connector-routing-worker.min.js",
                "dest": "src/Workers/ConnectorRoutingWorker.string.js"
            }
        )
    ],
},

// ...

Note that additional config blocks for components that use ConnectorRoutingWorker.string.js (e.g. the GraphPaper distribution files) need to be placed after the block shown above.
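To make that ordering concrete, here’s a sketch of how the exported config array might be arranged (the config block names are made up for illustration):

// rollup.config.js (sketch; config block names are hypothetical)
export default [
    workerBundleConfig,     // builds connector-routing-worker.min.js and writes ConnectorRoutingWorker.string.js
    graphPaperBundleConfig, // imports ConnectorRoutingWorker.string.js, so it must come after the worker block
];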

The overall process looks like this:

Creating the Worker

The worker can now be created within the codebase as follows:

import {ConnectorRoutingWorkerJsString} from './Workers/ConnectorRoutingWorker.string';

// ...

const workerUrl = URL.createObjectURL(new Blob([ ConnectorRoutingWorkerJsString ]));
const connectorRoutingWorker = new Worker(workerUrl);

// ...

The Future

Looking ahead, I don’t really see a good solution here. Better support for ES6 modules in the browser would be a step in the right direction, but what is really needed is a way to declare a web worker as a module and the ability to import and construct a Worker with that module.
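For what it’s worth, the HTML spec does define exactly this in the form of module workers, though browser support at the time of writing is essentially nonexistent:

// A module worker, per the HTML spec (not yet usable in practice
// given current browser support)
const worker = new Worker('worker.js', { type: 'module' });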