Archive for the ‘Random’ Category

Writing to the Bitfenix ICON display with Rust

Bitfenix ICON

I love the Bitfenix Pandora case and used it for my desktop PC build a few years ago. One interesting aspect of this case is that it has an attached LCD on the front, the Bitfenix ICON display. I’d never done anything with the display (I just left the default logo on it), but recently I discovered that the source code was available and that it was possible to programmatically write to the display, and I became interested in how this is done and what I could do with the display.

Instead of using the code provided by Bitfenix, I used the modified source produced by Rizvi R. I fired up Visual C++ and dealt with downloading and pointing the compiler and linker to the necessary libraries; I hit a stumbling block with libjpeg, but stripped out the JPEG support and was able to get something up and running. The code loads an image, resizes it to fit the display, reduces the color depth to match the display, and then writes to the display using HIDAPI (which is really cool and something I didn’t realize existed). The code was ok, but, as a learning exercise, I decided to see if I could modularize it further and rewrite it in Rust (avoiding some of the less modern aspects of the C++ world, like the lack of package management).

The Display

The ICON display is far from a high-quality panel, but it’s adequate for something mounted on a case. I haven’t come across official specs, but here’s what I’ve been able to gather:

  • Resolution of 240×320
  • 16-bit color, 5-6-5 format (5 bits for red, 6 bits for green, 5 bits for blue)
  • Super slow refresh rate, like over 1 second (so not suitable for any sort of animation)

Rust Dependencies

We’ll make use of two Rust packages, png and hidapi, via cargo:

[dependencies]
png = "0.15.3"
hidapi = "1.1.0"

Loading PNG images & converting to 16bpp

We can load a PNG image and get a pixel buffer as follows:

use std::fs::File;

fn load_png_image(filepath: &str) -> Vec<u8> {
    let decoder = png::Decoder::new(File::open(filepath).unwrap());
    let (info, mut reader) = decoder.read_info().unwrap();

    // Allocate the output buffer and read the (single) frame into it
    let mut buf = vec![0; info.buffer_size()];
    reader.next_frame(&mut buf).unwrap();

    buf
}

This will work well for 24bpp images and return a buffer with R, G, and B components, 8 bits each. As the ICON is a 5-6-5, 16bpp display, we’ll need to reduce the color depth by chopping off the low-order bits of each component with some bit shift operations; e.g. an 8-bit red value keeps only its top 5 bits (for more details on what’s happening here, this is a good resource):

fn reduce_image_to_16bit_color(image_buf: &[u8]) -> Vec<u8> {
    let mut result: Vec<u8> = Vec::with_capacity(240 * 320 * 2);

    for i in (0..image_buf.len()).step_by(3) {
        // Keep the top 5 bits of blue, 6 bits of green, and 5 bits of red,
        // and pack them into a single 16-bit value
        let b: u16 = ((image_buf[i + 2] as u16) >> 3) & 0x001F;
        let g: u16 = (((image_buf[i + 1] as u16) >> 2) << 5) & 0x07E0;
        let r: u16 = (((image_buf[i] as u16) >> 3) << 11) & 0xF800;
        let rgb: u16 = r | g | b;

        // Write the pixel out as 2 bytes, most significant byte first
        result.push(((rgb >> 8) & 0x00FF) as u8);
        result.push((rgb & 0x00FF) as u8);
    }

    result
}

We can now load an image and reduce the color depth as follows:

// Image needs to be 240x320 (24bpp, no alpha channel)
let src_image = load_png_image("assets/1.png");
let reduced_color_img = reduce_image_to_16bit_color(&src_image);

Open the ICON device

Using HIDAPI, we can open the device with its vendor ID (0x1fc9) and product ID (0x100b):

let hid = HidApi::new().unwrap();
let bitfenix_icon_device = hid.open(0x1fc9, 0x100b).unwrap();

I thought it might be useful to query and surface product and manufacturer info of the device as follows:

let device_manuf_string = bitfenix_icon_device.get_manufacturer_string().unwrap().unwrap();
let device_prod_string = bitfenix_icon_device.get_product_string().unwrap().unwrap();
println!("{}: {}", device_manuf_string.as_str(), device_prod_string.as_str());

However, this isn’t terribly insightful. We get a manufacturer string of “NXP Semiconductors” and a product string of “LPC11Uxx HID”; the LPC11Uxx is the microcontroller used on the device.

Writing to the display

With the pixel buffer and an open device, we can now write a new image to the display in a 3-step process:

  • Clear the display
  • Write the buffer to the display
  • Refresh the display

I’m somewhat confused as to why the display needs to be cleared, but if I skip this step I end up with weird overdraw of pixels on top of the existing image. In any case, clearing the display is simple enough and involves writing a 6-byte code to the device:

fn clear_display(bitfenix_icon_device: &HidDevice) {
    let erase_flash_code: [u8; 6] = [0x0, 0x1, 0xde, 0xad, 0xbe, 0xef];
    bitfenix_icon_device.write(&erase_flash_code).unwrap();
}

Writing the buffer to the display involves copying up to 64 bytes at a time to the device: a maximum of 61 bytes from the pixel buffer, plus a 3-byte header indicating the operation and the number of bytes being written:

fn write_image_to_display(bitfenix_icon_device: &HidDevice, image_buf: &[u8]) {
    // 61 bytes of pixel data + 3 bytes for the header; note that the device
    // only accepts writes in 64-byte chunks
    let num_image_bytes_per_write = 61;
    let num_writes = ((image_buf.len() as f64 / num_image_bytes_per_write as f64).ceil()) as usize;

    for i in 0..num_writes {
        let start = i * num_image_bytes_per_write;
        let mut length = num_image_bytes_per_write;
        if i == (num_writes - 1) {
            length = image_buf.len() - ((num_writes - 1) * num_image_bytes_per_write);
        }

        let mut image_data_with_header: Vec<u8> = Vec::with_capacity(length + 3);
        image_data_with_header.push(0x0);
        image_data_with_header.push(0x2);
        image_data_with_header.push(length as u8);
        for image_byte_idx in start..start + length {
            image_data_with_header.push(image_buf[image_byte_idx]);
        }

        bitfenix_icon_device.write(&image_data_with_header).unwrap();
    }
}

Refreshing the display requires writing a 2-byte code to the display:

fn refresh_display(bitfenix_icon_device: &HidDevice) {
    let refresh_code: [u8; 2] = [0x0, 0x3];
    bitfenix_icon_device.write(&refresh_code).unwrap();
}

Pulling it all together we have a few straightforward function calls:

println!("Writing new image..."); clear_display(&bitfenix_icon_device); // needs to be done or you end up with weird overwriting on top of exiting image write_image_to_display(&bitfenix_icon_device, &reduced_color_img); println!("Refreshing display..."); refresh_display(&bitfenix_icon_device);
Bitfenix ICON image

Code and future work

I have the above code up on GitHub. This was fun and a great learning experience. Next, I think it would be interesting to see if the display could be utilized to reflect information about the computer, maybe showing CPU usage or internet connection status. There could be some utility in providing such metrics to the user and it would give the display a purpose beyond that of a digital case badge.
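As a loose sketch of where that could go, the loop below just reuses the functions from this post; read_cpu_usage() and render_stats_image() are hypothetical helpers that would still need to be written (e.g. sampling /proc/stat on Linux and rasterizing the numbers into a pixel buffer):

use std::{thread, time::Duration};

// Hypothetical helpers, not part of the existing code: read_cpu_usage() would
// sample a CPU usage metric, and render_stats_image() would rasterize it into
// a 240x320, 24bpp pixel buffer
fn read_cpu_usage() -> f32 { unimplemented!() }
fn render_stats_image(_cpu_usage: f32) -> Vec<u8> { unimplemented!() }

fn run_metrics_loop(bitfenix_icon_device: &HidDevice) {
    loop {
        let stats_image = render_stats_image(read_cpu_usage());
        let reduced_color_img = reduce_image_to_16bit_color(&stats_image);

        clear_display(bitfenix_icon_device);
        write_image_to_display(bitfenix_icon_device, &reduced_color_img);
        refresh_display(bitfenix_icon_device);

        // The panel takes over a second to refresh, so poll slowly
        thread::sleep(Duration::from_secs(10));
    }
}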

Reel

I wrote a little desktop application to capture short videos and turn them into GIFs. I call it Reel. It’s still rough around the edges but you can grab an early version of it below.

Reel 0.1 (Windows Install)

I’ll have a Linux/Ubuntu version soon. Maybe an OS X version… I’d have to jump through a few extra hoops there, as Apple still refuses to allow OS X to be virtualized.

Reel - Drinking Bird

Aside from its utility, this was also an experiment in piecing together some technologies I’ve written about here before: XUL + XPCOM + SocketBridge, video capture using web tech, and, in general, using web technologies for desktop applications.

Logging to stdout and a file

A simple way to log to both stdout and a file (using a pipe and tee):

./myapp 2>&1 | tee -a myapp.log

The 2>&1 redirects stderr into stdout, so both streams get piped to tee, which echoes them to the terminal and appends (-a) them to myapp.log.

A more relational dictionary

As I started looking to add more functionality to Lexiio, I realized the Wiktionary definitions database dump I was using wasn’t going to cut it; specifically, I needed a normalized schema or I’d have data duplication all over the place. I started normalizing in MySQL but, whether it was MySQL or MySQL Workbench, I kept running into character encoding issues. Using a simple INSERT-SELECT in MySQL 5.7 to transfer words from the existing table to a new table resulted in lost characters:

MySQL losing characters

I dumped the data into PostgreSQL, didn’t encounter the issue, and just kept working from there.

The normalized schema can be downloaded here: LexiioDB normalized
(released under the Creative Commons Attribution-ShareAlike License)

LexiioDB schema

The unknown_words and unknown_to_similar_words tables are specific to Lexiio and serve as a place to store unknown words entered by the user and close/similar matches to known words (via the Levenshtein distance, sketched below).
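For reference, the Levenshtein distance between two words is the minimum number of single-character insertions, deletions, and substitutions needed to turn one into the other. Here’s a minimal sketch of the standard dynamic-programming computation (written in Rust purely for illustration; this isn’t code from Lexiio):

fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();

    // prev[j] holds the distance between the first i-1 chars of a and the first j chars of b
    let mut prev: Vec<usize> = (0..=b.len()).collect();

    for i in 1..=a.len() {
        let mut curr = vec![i; b.len() + 1];
        for j in 1..=b.len() {
            let cost = if a[i - 1] == b[j - 1] { 0 } else { 1 };
            curr[j] = (prev[j] + 1)       // deletion
                .min(curr[j - 1] + 1)     // insertion
                .min(prev[j - 1] + cost); // substitution
        }
        prev = curr;
    }

    prev[b.len()]
}

e.g. levenshtein("kitten", "sitting") evaluates to 3.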

Lexiio

Another little experiment of mine: Lexiio, a web-based CLI dictionary.

Lexiio

A few takeaways:

  • Part of the reason for building this was that I wanted to actually make use of the Wiktionary data set snapshot in a real project. The data set is pretty comprehensive, and easy to parse and work with.
  • This was also a learning exercise for Golang. There’s nothing complex here but, so far, working with Go has been enjoyable. I like that I’m building a native application, types are enforced, and the HTTP server included as part of the standard library is incredibly easy to set up and work with.
  • I wanted to experiment a bit with what a web-based CLI would look and feel like. For something like a dictionary, where user interaction revolves around textual input/output, a command-line interface seems to work really well.

Function argument tricks

I came across this “trick” on coderwall for avoiding an if statement when you have a function argument that is allowed to be either an array or a scalar value (something that seems oddly common in loosely typed languages).

function example(ids) {
    [].concat(ids).forEach(function (id) {
        // ...
    });
}

The trick in question is just concatenating the argument with an empty array so that, within the function, you’re always dealing with an array.

Taking a step back, the bigger question is: why do this?
In my experience, it’s almost always better to keep the code within the function straightforward and force the caller to adapt and give the function what it needs. In this case, that means simply forcing the caller to always pass an array, e.g. example([id]) instead of example(id); a sketch of this idea follows.
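For contrast, here’s a quick sketch of the “force the caller to adapt” approach in a statically typed language (Rust, purely for illustration): the signature itself demands a collection, so the function body never has to normalize its input, and a caller with a single value wraps it at the call site.

// The signature demands a slice, so no in-function normalization is needed
fn example(ids: &[u32]) {
    for id in ids {
        // ... work with each id
        println!("{}", id);
    }
}

fn main() {
    example(&[42]);      // single value, wrapped by the caller
    example(&[1, 2, 3]); // multiple values
}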

The Future of Programming

This is a great talk given by Bret Victor that I came across a while ago:

Bret Victor – The Future of Programming from Bret Victor on Vimeo.

All four ideas presented resonate with me and my work, particularly the direct manipulation of data, as I’m continually disappointed when I see markup languages and frameworks treated as the go-to solution in places where proper tools would yield better and faster results.

Rtf2Html 1.3

I recently made a small update to Rtf2Html (the converter I wrote for converting RTF text to HTML markup):

  • Support for conversion to SVG markup
  • Updated preview form to use GeckoFX 29 and XULRunner 29.0.1 (the major version numbers have to match)

Download it here.
Note that this version requires the .NET Framework v4.0 or later.

Rtf2Html SVG support

While you can put the generated SVG text on a page, that wasn’t really my motivation here; what I wanted was a way to import syntax-highlighted text (in my case, typically code) into a vector graphics application (Inkscape, Illustrator, etc.) to be placed as part of a diagram.

GeoNames geographical database

I came across the GeoNames database recently and was impressed with the breadth of locations available. I downloaded allCountries.zip from http://download.geonames.org/export/dump/, which gives data (name, location, population, etc.) on places across all countries in one tab-delimited (TSV) text file. To work with the data more easily, I wrote a PHP script to put the entries into a MySQL database table (it’s actually just a simple modification of the script I used for the Wiktionary definitions import). The TSV file, MySQL database, and PHP script are all presented below.

GeoNames allCountries.zip

GeoNames MySQL database export

<?php

require "Database.php";

$tsvInputFilePath = "allCountries.txt";

echo "Importing {$tsvInputFilePath} ...\n";

// Open file
$fp = fopen($tsvInputFilePath, "r");
if($fp === FALSE) {
    echo "Could not find file path: " . $tsvInputFilePath;
    exit;
}

// Establish DB connection
$db = new Database();

while (!feof($fp)) {

    // Get line and parse tab-delimited fields
    $ln = fgets($fp);
    $parts = explode("\t", $ln);
    if(count($parts) < 19) {
        continue;
    }

    // Insert into database
    $db->query("INSERT INTO cities (`id`, `name`, `asciiname`, `alternatenames`,
                `latitude`, `longitude`, `feature_class`, `feature_code`,
                `country_code`, `cc2`, `admin1_code`, `admin2_code`,
                `admin3_code`, `admin4_code`, `population`, `elevation`,
                `dem`, `timezone`, `last_modified_at`)
                VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
        $parts[0],  $parts[1],  $parts[2],  $parts[3],  $parts[4],
        $parts[5],  $parts[6],  $parts[7],  $parts[8],  $parts[9],
        $parts[10], $parts[11], $parts[12], $parts[13], $parts[14],
        $parts[15], $parts[16], $parts[17], $parts[18]);
}

echo "done.\n";
exit;
The Database class is a wrapper for mysqli; you can find it, along with the script above, in the geonames-allcountries-import bitbucket repo.

Note that this script will take a while to run (likely a few days) as there are 9,195,153 records that need to be inserted and we’re just doing simple INSERTs with no optimizations.

An overview of each of the fields in the database can be found in the GeoNames export readme.txt. Particularly important are the feature_class and feature_code fields; the range of values for them can be found on the GeoNames Feature Codes page. Also, as indicated in the readme, the data is licensed under the Creative Commons Attribution 3.0 License.

Wiktionary definitions database

Having a dictionary can be incredibly useful in software development, and forms the basis for a wide range of natural language processing applications. However, finding an open-source dictionary, one that can be easily parsed and used within applications, is incredibly difficult, as there simply aren’t many options available.

WordNet is one option I came across, but requires significant work parsing the WordNet ASCII database files or Prolog database files.

Wiktionary was the other viable option, and the one I went with. The Wiktionary XML dumps are available, but being a wiki, these files are likely even more difficult to parse than the WordNet database files as you’d have to deal with wiki markup. However, a while ago I was able to get a TSV file with words, parts of speech, and definitions from the Wikimedia Toolserver at http://toolserver.org/~enwikt/definitions. The Toolserver has since been discontinued and I haven’t found updated TSVs hosted anywhere else, but the file I downloaded, dated November 27, 2012, is still fairly up-to-date for a dictionary and useful in many applications.

I wrote a PHP script to parse the TSV and make INSERTs into a MySQL database. The TSV file, MySQL database, and PHP script are presented below.

Wiktionary TSV file

Wiktionary MySQL database export

PHP Script:

<?php

require "Database.php";

$tsvInputFilePath = "TEMP-E20121127.tsv";

echo "Importing {$tsvInputFilePath} ...\n";

// Open file
$fp = fopen($tsvInputFilePath, "r");
if($fp === FALSE) {
    echo "Could not find file path: " . $tsvInputFilePath;
    exit;
}

// Establish DB connection
$db = new Database();

while (!feof($fp)) {

    // Get line and parse tab-delimited fields
    $ln = fgets($fp);
    $parts = explode("\t", $ln);
    if(count($parts) < 4) {
        continue;
    }

    $lang = $parts[0];
    $word = $parts[1];
    $partOfSpeech = $parts[2];
    $definitionRaw = $parts[3];

    // Insert into database
    $db->query("INSERT INTO words (language, word, part_of_speech, definition_raw)
                VALUES (?, ?, ?, ?)",
        $lang, $word, $partOfSpeech, $definitionRaw);
}

echo "done.\n";
exit;

The Database class is a wrapper for mysqli; you can find it, along with the script above, in the wiktionary-tsv-import bitbucket repo.

Note that definitions need to be parsed further, as they contain wiki markup. The parsing doesn’t seem difficult and is something I hope to get done in the near future.
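As a loose sketch of what a first pass at that might look like (written in Rust with the regex crate purely for illustration; real Wiktionary markup has many more constructs, like nested templates, than what’s handled here):

use regex::Regex;

// Rough first pass at stripping common wiki markup from a definition line
fn strip_wiki_markup(def: &str) -> String {
    let piped_link = Regex::new(r"\[\[[^\]|]*\|([^\]]*)\]\]").unwrap(); // [[target|text]] -> text
    let plain_link = Regex::new(r"\[\[([^\]]*)\]\]").unwrap();          // [[word]] -> word
    let template = Regex::new(r"\{\{[^{}]*\}\}").unwrap();              // {{label|...}} -> removed
    let quotes = Regex::new(r"'{2,}").unwrap();                         // '''bold''', ''italic'' -> plain

    let s = piped_link.replace_all(def, "$1");
    let s = plain_link.replace_all(&s, "$1");
    let s = template.replace_all(&s, "");
    quotes.replace_all(&s, "").trim().to_string()
}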

Related resources:

There’s valuable stuff in each of the projects above but, like WordNet, each requires significantly more time to evaluate and implement in an application, compared to the simple TSV to MySQL translation.

EDIT (12/13/2015): I’ve updated the MySQL database export. There were some holes in the data because I was using the utf8 column encoding for definitions; however, MySQL’s utf8 is a weird “UTF-8” implementation that only handles codepoints up to 3 bytes in size. The utf8mb4 encoding needs to be used for a proper UTF-8 encoding supporting codepoints up to 4 bytes.