Deployments with git tags + npm publish

Git tags and npm

Deployment workflows vary a lot, but what I’ve tended to find ideal is to tag releases on GitHub (or whatever platform; most have some mechanism for handling releases), with the tag itself being the version number of whatever is being deployed, and to have a deployment pipeline orchestrate and perform whatever steps are necessary to deploy the application, service, library, etc. This flow is well supported by git, well supported on platforms like GitHub, and dead simple for developers to pick up and work with (on GitHub, it means filling out a form and hitting “Publish release”).

npm doesn’t play nicely with this workflow. With npm, version numbers aren’t tied to git tags, or to any external mechanism, but to the value defined in the project’s package.json file. So, trying to publish a package via tagging requires some additional steps. The typical solutions seem to be:

  • Update the version in package.json first, then create the tag
  • Use some workflow that includes npm version patch to have npm handle updating package.json and creating the git tag
  • Use an additional tool (e.g. standard-version) that tries to abstract away management of version numbers from both package.json and git tags

None of these options are great: versioning responsibility and authority are pulled away from git and, in the process, additional workflow complexity (and, in the last case, additional dependencies) is introduced.

Version 0.0.0

In order to publish with npm, keep versioning authority with git, and maintain a simple workflow that doesn’t include additional steps or dependencies, the following has been working well in my projects:

  • In package.json, set the version number to “0.0.0”; this value is never changed within any git branch and, conceptually, it can be viewed as representing the “dev version” of the library. package.json only has a “non-dev version” for code published to our package repository.
  • In the deployment pipeline (triggered by tagging a release), update package.json with the version from the git tag.

    Most CI systems have some way of getting the tag being processed and working with it. For example, in CircleCI, working with tags formatted like vMAJOR.MINOR.PATCH, we can reference the tag, remove the “v” prefix, and set the version in package.json using npm version as follows:

    npm --no-git-tag-version version ${CIRCLE_TAG:1}

    Note that this update to package.json is only done within the checked-out copy of the code used in the pipeline. The change is never committed to the repo nor pushed upstream.
  • Finally, within the deployment pipeline, publish as usual via npm publish

Limitations

I haven’t run across any major limitations with this workflow. There is some loss of information captured in the git repository, as the version in package.json is fixed at 0.0.0, but I’ve yet to come across that being an issue. I could potentially see issues if you want to allow developers to do deployments locally via npm publish but, in general, I view local deployments as an anti-pattern when done for anything beyond toy projects.


Writing to the Bitfenix ICON display with Rust (part 2, writing text)

Bitmap Fonts

Picking up from being able to successfully write images to the Bitfenix ICON display, I started looking into how to render textual content. Despite their lack of versatility, bitmap fonts were a natural option: they’re easy to work with, they’re a good fit for fixed-resolution displays, and rendering is fast.

I pulled up the following bitmap font from an old project (I don’t remember the source used to create this):

A few points about this font:

  • Each character glyph is 16×16, with a total of 256 characters in the 256×256 bitmap
  • Characters are arranged/indexed left-to-right, top-to-bottom, and the index of a 16×16 block corresponds to an ASCII/UTF-8 codepoint value (e.g. the character at index 33 = “!”, which also maps to codepoint 33)
  • Despite space for 256 characters, there’s a very limited set of characters here, but there’s enough for simple US English strings
  • While each character occupies a 16×16 pixel block, this is not a monospaced font; there is accompanying width data for each character
  • The glyphs are simply black and white (i.e. there’s no antialiasing on the character glyphs, so we can simply ignore black pixels and not worry about blending into the background)

Rendering Strings

We need to render characters from the bitmap font onto something. We could create a new image but, building upon what was done in part 1, I decided to render atop this background image. The composite of the background image + characters will be the image written to the ICON display.

To start, we’ll load the background image just as we did in part 1, but we need to make it mutable, as we’ll be writing character pixels directly to it:

// Background image needs to be 240x320 (24bpp, no alpha channel)
let mut background_image = reduce_image_to_16bit_color(&load_png_image("assets/1.png"));

Next, we’ll load the font PNG in the same manner (it doesn’t need to be mutable, as we’re not modifying the character pixels):

let font_image = reduce_image_to_16bit_color(&load_png_image("assets/fonts/font1.png"));

To handle text rendering, a TextRenderer struct is declared with the rendering logic encapsulated in TextRenderer.render_string(), which loops through each character in the input string, looks up the location of the character in the bitmap font, and renders the 16×16 block of pixels for the character onto the background image. The x-location at which a character is rendered is incremented by the width of the previous character (found via a lookup into the TextRenderer::font_widths vector).

pub struct TextRenderer {
    font_widths: Vec<u8>,
}

impl TextRenderer {
    pub fn new() -> TextRenderer {
        TextRenderer { font_widths: build_font_width_vec() }
    }

    pub fn render_string(&self, txt: &str, x: u64, y: u64, fontimg: &[u8], outbuf: &mut [u8]) {
        // x position of where character should be rendered
        let mut cur_x = x;

        for ch in txt.chars() {
            // From codepoint, lookup x, y of character in font and width of character from self.font_widths
            let ch_idx = ch as u64;
            let ch_width = self.font_widths[(ch as u32 % 256) as usize];
            let ch_x = (ch_idx % 16) * 16;
            let ch_y = ((ch_idx as f32 / 16.0) as u64) * 16;

            // For each character, copy the 16x16 block of pixels into outbuf
            for fy in ch_y..ch_y+16 {
                for fx in ch_x..ch_x+16 {
                    let fidx: usize = ((fx + fy*256) * 2) as usize;
                    let fdx = fx - ch_x;
                    let fdy = fy - ch_y;
                    let outbuf_idx: usize = (((cur_x + fdx) + (y + fdy)*240) * 2) as usize;

                    // If the pixel from the font bitmap is not black, write it out to outbuf
                    if fontimg[fidx] != 0x00 && fontimg[fidx+1] != 0x00 {
                        outbuf[outbuf_idx] = fontimg[fidx];
                        outbuf[outbuf_idx + 1] = fontimg[fidx + 1];
                    }
                }
            }

            cur_x = cur_x + (ch_width as u64);
        }
    }
}

The TextRenderer::font_widths vector is built by the build_font_width_vec() function, which simply builds and returns a vector of hardcoded values for the character widths:

fn build_font_width_vec() -> Vec<u8> { let result = vec![ 7, 7, 7, 7, 7, 7, 7, 7, 7, 30, 0, 7, 7, 0, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 5, 3, 5, 9, 7, 18, 8, 3, 5, 5, 7, 9, 4, 8, 3, 5, 8, 4, 7, 7, 8, 7, 7, 7, 7, 7, 3, 4, 6, 9, 6, 7, 9, 8, 8, 8, 8, 8, 8, 8, 8, 5, 7, 8, 7, 9, 9, 9, 8, 9, 8, 8, 9, 8, 9, 10, 9, 9, 8, 4, 6, 4, 8, 9, 5, 7, 7, 6, 7, 7, 6, 7, 7, 3, 5, 7, 3, 9, 7, 7, 7, 7, 6, 6, 6, 7, 7, 10, 7, 7, 6, 5, 3, 5, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 3, 3, 5, 5, 3, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 3, 6, 7, 7, 13, 3, 11, 10, 13, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, ]; result }

Writing to the ICON display

The final piece is simply creating a TextRenderer instance and calling render_string() with the necessary parameters. Here we’ll write out “Hello world!” to position 10, 20 on the display:

let tr = TextRenderer::new();
tr.render_string("Hello world!", 10, 20, &font_image, &mut background_image);
Bitfenix ICON image display with text

All of the code presented is up on the bitfenix-icon-sysstatus repo.

Next, I’m looking to play around with the systemstat library to print out something useful to the display.


Avoiding loops and routing failures with A*

The problem

With GraphPaper, generating paths with the A* algorithm, I ran into some interesting cases where the algorithm would fail to generate a path between the start and end points:

Failed path

Note that the path should go around, not directly across the big rectangle in the center.

There are a few things being masked here, as GraphPaper will always try to produce a result, even if it’s non-optimal. So we need to disable:

  • The mechanism that makes a path directly from start to end if there’s a failure in generating a path
  • Path optimization

With these disabled, and some annotations showing the order in which the path segments were generated, we can get a better idea of what’s happening:

Failed path, showing details

Here we see that the algorithm generates a path that loops around its adjacent object (1-12), the path moves away from the object (13-14), then moves right back towards the object (15-18), then we’re out of accessible routing points, so we have a failure and we end the path by going directly to the endpoint. It’s important to note that routing points are not grid points here, they’re points designated around objects (the rectangles with the dots in the center) and anchors (the rectangles where paths start and end).

Why is this happening?

This might seem like a bug in the A* implementation but, as far as I can tell, the algorithm is performing as it should and the issue here comes down to the path cost computation done in the main loop. A* doesn’t actually define how to compute cost, but GraphPaper uses run-of-the-mill Euclidean distance as a metric.

In the A* main loop, the cost of going from our current point to a given point, n, is given by:

f(n) = g(n) + h(n)

g(n) is the total length of the path from the starting point to n (sum of the length of all pieces of the path). In practice, we keep track of the current path length, so we only need to compute and add the straight-line length from current to n:

g(n) = currentPathLength + computeLength(current, n)

h(n) is the straight-line length from n to the goal:

h(n) = computeLength(n, endpt)

The issues we’re seeing surface as a result of f(n) being based solely on distance computations. Note that:

  • We may have a good point, n, that is close to our goal but requires a large jump from our current location; the large increase in g(n) leads to another, closer point being prioritized
  • We may have a bad point, n, that is further from our goal but only requires a small jump from our current location; h(n) increases a bit, but the relatively small increase in g(n) leads to this point being prioritized over a better one

The implication here is also that shorter jumps become preferable, even if we end up going in the opposite direction of the goal, as such jumps are safer (i.e. less likely to lead to a blocked/invalid path).

Smarter routing

To improve the routing, I wanted to adjust the cost computation such that moving to points that are in the direction of (and thus closer to) the goal is prioritized, so I introduced a new term, t(n), into the cost computation:

f(n) = g(n) + h(n) - t(n)

t(n) = (vecCurrentTo(endpt) · vecCurrentTo(n)) × computeLength(current, n)

To compute t(n), we first compute the dot product of 2 normalized vectors:

  • vecCurrentTo(endpt): from current to the endpoint, representing the ideal direction we should head in
  • vecCurrentTo(n): from current to n, representing the direction to n

The dot product falls within [-1, 1], tending to -1 when n is in the opposite direction of the ideal, and tending to 1 when n is in the same direction as the ideal.

We then scale the dot product result, as we can’t just bias our overall cost, f(n), by a range this small, so we scale by the straight-line length from current to n. Using this length as a scaling factor also serves to bias towards longer paths in good/positive directions.

In code, the g(n), h(n), and t(n) components are computed like this:

// g(n) = length/cost of _startPoint to _vp + _currentRouteLength
const currentToVisibleLength = (new Line(currentPoint, visiblePt)).getLength();
let gn = currentToVisibleLength + _currentRouteLength;

// h(n) = length/cost of _vp to _endPoint
let hn = (new Line(visiblePt, _endPoint)).getLength();

// t(n) =
// a. get the relationship between the 2 vectors (dot product)
// b. scale to give influence (scale by currentToVisibleLength)
// .. using currentToVisibleLength as scaling factor is influence towards longer paths in good directions vs shorter paths in bad directions
const vecToTargetIdeal = (new Vec2(_endPoint.getX() - currentPoint.getX(), _endPoint.getY() - currentPoint.getY())).normalize();
const vecVisibleToEndpt = (new Vec2(visiblePt.getX() - currentPoint.getX(), visiblePt.getY() - currentPoint.getY())).normalize();
const tn = vecToTargetIdeal.dot(vecVisibleToEndpt) * currentToVisibleLength;

With the updated path cost computation, we successfully get a path to the endpoint and, even with path optimization off, we get a much better path overall, with fewer unnecessary points in the path and no looping around objects:

Fixed path with smarter routing

It’s also worth noting how this performs as the position of the object changes (again, path optimization is disabled and the mechanism that makes a path directly from start to end on routing failure is also disabled):

Without t(n)

With t(n)

In pretty much every case, applying the t(n) factor to the path computation produces a better path with no failures in routing to the endpoint.


A* path optimization

The A* algorithm, using a straight-line distance heuristic function, is great in terms of performance, but yields a number of cases where the paths produced are not optimal.

For example, here is a path created in ScratchGraph using GraphPaper (the underlying library used for creating the connectors):

A* without path optimization

I suspect the non-optimal result is even more pronounced here given that the path is determined based on specific routing points that exist around the objects (you can spot these as the inflection points in the path), instead of a uniform grid. That said, I don’t want to use a uniform grid; while it’s likely a non-issue with A*, there is also the cost of computing which routing points are accessible vs blocked, and that cost grows quickly as the possible number of routing points grows.

An approach that works well, and doesn’t drastically increase the cost of path computation, is simplifying the generated path by checking whether pairs of points in the path are visible to each other. If so, we can remove the intermediate points and simply connect those 2 points together. Here’s a snippet from the GraphPaper codebase:

/**
 *
 * @param {Point[]} _pointsInRoute
 * @param {Function} _arePointsVisibleToEachOther
 */
optimize: function(_pointsInRoute, _arePointsVisibleToEachOther) {
    let start = 0;
    let end = _pointsInRoute.length - 1;

    while(true) {
        if((end-start) <= 1) {
            start++;
            end = _pointsInRoute.length - 1;

            if(start >= _pointsInRoute.length-2) {
                break;
            }
        }

        if(_arePointsVisibleToEachOther(_pointsInRoute[start], _pointsInRoute[end])) {
            _pointsInRoute.splice(start + 1, (end-start) - 1);
            end = _pointsInRoute.length - 1;
        } else {
            end--;
        }
    }
}

The function works as follows:

  • Begin with the points start (first point in the path) and end (last point in the path).
  • Relative to start, check if the other points in the path are visible to it (in the code above, we iterate backwards from the endpoint). If we find that start is visible to another point, we eliminate the intermediate points from the path.
  • Once we get to end being the point after start, update start to the next point in the path and reset end to whatever the last point in the path is.
  • Repeat the latter 2 steps until we’ve checked all corresponding points in the path (start is the point directly preceding end).
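As a usage sketch (with hypothetical names: pathOptimizer stands in for whatever object exposes optimize(), doesLineCrossAnyObject() stands in for the real visibility test, and new Point(x, y) is assumed as the constructor signature), a call looks something like this:

// Hypothetical usage sketch; pathOptimizer and doesLineCrossAnyObject() are stand-ins,
// not actual GraphPaper APIs
const pointsInRoute = [new Point(0, 0), new Point(40, 0), new Point(40, 40), new Point(80, 40)];

const arePointsVisibleToEachOther = function(_ptA, _ptB) {
    // visible if the straight segment between the two points doesn't cross any object boundary
    return !doesLineCrossAnyObject(new Line(_ptA, _ptB));
};

pathOptimizer.optimize(pointsInRoute, arePointsVisibleToEachOther);
// pointsInRoute has now been simplified in-place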

Optimizing the path shown above with this method yields an optimal path:

A* with path optimization

In terms of performance, the cost is based on the number of points in the path and the cost of whatever computation the _arePointsVisibleToEachOther() function does. For my use-cases, paths have relatively few points and _arePointsVisibleToEachOther() consists of a number of fairly fast line-line intersection checks, so the method is fairly cheap and there’s no significant decrease in performance when generating paths.


Pushing computation to the front: client-side compression

Client → Server Compression

Content from a web server being automatically gzipped (via apache, nginx, etc.) and transferred to the browser isn’t anything new, but there’s really nothing in the way of compression when going in the other direction (i.e. transferring content from the client to the server). This is not too surprising, as most client payloads are small bits of textual content and/or binary content that is already well compressed (e.g. JPEG images), where there’s little gain from compression and you’re likely to just waste CPU cycles doing it. That said, when your frontend client is a space for content creation, you’re potentially going to run into cases where you’re sending a lot of uncompressed data to the server.

Use-case: ScratchGraph Export

ScratchGraph has an export feature that essentially renders the page (minus UI components) as a string of HTML. This string is packaged along with some metadata and sent to the server, which sends it to a service running puppeteer that renders the HTML string to either an image or a PDF. The overall process looks something like this:

ScratchGraph Export Flow

The HTML string being sent to the server is relatively large, a couple of MBs, due to:

  • The CSS styles (particularly due to external resources being pulled in and inlined as base64 URLs)
  • The user simply having lots of content

To be fair, it’s usually the former rather than the latter, and optimizing to avoid the inlining of resources (the intent of which was to try and do exports entirely in the browser) would have a greater impact in reducing the amount of data being transferred to the server. However, for the purposes of this blog post (and also because it leads to a more complex discussion on how the application architecture can/should evolve and what this feature looks like in the future), we’re going to sidestep that discussion and focus on what benefits data compression may offer.

Compression with pako

I was more than ready to implement a compression algorithm, but was happy to discover pako, which does zlib compression. Compressing (i.e. deflating) with pako is very simple: below, I encode the HTML string to UTF-8 via TextEncoder.encode() (because I want UTF-8; it isn’t a requirement of pako), which returns a Uint8Array, then use that as the input for pako.deflate(), which also returns a Uint8Array.

const staticHtmlUtf8Arr = (new TextEncoder()).encode(html);
const compressedStaticHtmlUtf8Arr = pako.deflate(staticHtmlUtf8Arr);

Here’s what that looks like in practice, exporting the diagram shown above:

ScratchGraph Export, with pako compression, results

That’s fairly significant, as the data size has been reduced by 1,237,266 bytes (42.77%)!

The final bit for the frontend is sending this to the server. I use a FormData object for the XHR call and, for the compressed data, I append it as a Blob:

formData.append(
    "compressedStaticHtml",
    new Blob([compressedStaticHtmlUtf8Arr], {type: 'application/zlib'}),
    "compressedStaticHtml"
);
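For completeness, here’s a sketch of what the request itself could look like (shown with fetch() rather than the XHR call ScratchGraph actually uses, and with a hypothetical /export endpoint):

// Hypothetical endpoint; the real ScratchGraph XHR call differs
fetch('/export', {
    method: 'POST',
    body: formData   // the browser sets the multipart/form-data headers for us
}).then(function(response) {
    // handle the rendered image/PDF response (check response.ok, etc.)
});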

Handling the compressed data server-side with PHP

PHP supports zlib compression/decompression via the zlib module. The only additional logic needed server-side is calling gzuncompress() to decompress the compressed data.

$staticHtml = gzuncompress(file_get_contents($compressedStaticHtmlFile->getFilePath()));

Note that $compressedStaticHtmlFile is an object representing a file pulled from the request (FormData will append a Blob in the same manner as a file, so server-side you’re dealing with the data as a file). The File.getFilePath() method here simply returns the path for the uploaded file.

Limitations

Compressing and decompressing data will cost CPU cycles and, for zlib and most algorithms, this will scale with the size of the data. So considerations around what the client-side system looks like and the size of the data need to be taken into account. In addition, compression within a browser’s main thread can lead to UI events, reflow, and repaint being blocked (i.e. the page becomes unresponsive). If the compression time is significant, performing it within a web worker instead would be a better path.
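As a rough sketch of that web worker approach (assuming pako is available to the worker via importScripts(); the file names here are placeholders):

// compress-worker.js
importScripts('pako.min.js');

self.onmessage = function(e) {
    const compressed = pako.deflate(e.data);
    // Transfer the underlying buffer back to the main thread instead of copying it
    self.postMessage(compressed, [compressed.buffer]);
};

// main thread
const compressWorker = new Worker('compress-worker.js');
compressWorker.onmessage = function(e) {
    const compressedStaticHtmlUtf8Arr = e.data;
    // ...append to the FormData object and send, as before
};
compressWorker.postMessage(staticHtmlUtf8Arr, [staticHtmlUtf8Arr.buffer]);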


Are spreadsheets databases?

A few weeks ago, news that using Excel resulted in the loss of ~16,000 coronavirus cases in England, due to a 65K row limit in XLS files, sparked a number of tweets around how Excel isn’t a database. There’s a lot to cringe at here, but saying Excel or, more precisely, spreadsheets aren’t databases and that using a “proper database” would have prevented a failure is incredibly reductive.

excel is not a database

Definition

To start, it’s worth looking at what the definition of “database” actually is:

definition of database

At least from that definition, I think it’s fair to say spreadsheets are databases. They’re primitive, there’s little-to-nothing in the way of concurrency, security, constraints, etc., but they are databases.

In software engineering, the term “database” is typically shorthand for a relational database, but that feels more and more problematic, as we now see many more databases in use that aren’t relational (Cassandra, Mongo, DynamoDB, etc.). Even S3 now serves as a foundational basis for many databases.

System Considerations

Beyond definitions, what’s also interesting here is that tooling and database choice is never a simple equation for a non-trivial data system; there’s a host of considerations that come into play. Here’s a few that pop into my mind:

  • Interfacing: Who’s accessing and/or manipulating data in the system? Sophisticated data systems and bespoke interfaces are powerful, but they require training and expertise, and there’s typically a higher maintenance burden. Leveraging common and ubiquitous tooling can be beneficial when it comes to interfacing needs for a larger audience. This comment on ArsTechnica points out that the “proper tools” are inordinately complex and not something a typical end-user can pick up and understand easily, and I think that’s a fair assessment of the product landscape.
  • System limits: Every data system has limits; some are explicit and obvious, some are not. It’s also not surprising to bump into limits due to how a database is set up or how a schema is designed. Hitting a 65K row limit is frustrating and problematic, but so is discovering a value is truncated because the field length was set too small or an incorrect type was used.
  • Cost: What is the cost of the technical infrastructure? What about the cost of the people needed to maintain the system? Unsurprisingly, more sophisticated and complex systems will cost more.
  • Failure modes: What are some common ways this system fails? What does it take to recover and get back to normal operations? With simple systems you tend to just hit hard limits, but more complex systems fail in a multitude of ways.
  • Time: When does this need to be shipped and what compromises need to be made? When you don’t have weeks or months to design and prototype, leveraging pre-existing and proven methods is typically the path of least resistance.

It’s perhaps easy to point to some of the issues that come into play with Excel and spreadsheets, but any data system will have its fair share of limits and risks, along with any potential benefits.


Copy & pasting non-textual objects in the browser

Interacting with the clipboard

A proper way to deal with clipboard access has been a dream for web developers for years. There’s out-of-the-box browser support, via keyboard shortcuts and context menu commands, for <input>, <textarea>, or elements with the contenteditable attribute. However, for more complex interactions or for dealing with application-defined, non-textual objects (e.g. something composed of multiple DOM elements that is manipulable by the user, such as the ScratchGraph notes and connectors shown below), we need to start looking at the available Javascript APIs.

ScratchGraph entities and connectors

Clipboard API vs document.execCommand() + paste event

The bad news is that document.execCommand() (using the “cut” and “copy” commands) is still the de-facto way of writing to the clipboard, despite this method being deemed obsolete. The good news is that there does seem to be good progress, in terms of stability and browser implementation, on the Clipboard API.

For reading from the clipboard, the paste event seems to be the best way to go; however, there is the inherent limitation that this event will only be triggered by “paste actions” from the browser’s interface (e.g. the user hitting Ctrl + V). Again, there is good news in that the Clipboard API would allow more flexibility here, and there is good progress towards browser support.

Despite the good news around the Clipboard API, given the state of where things are now, in September 2020, using document.execCommand() and the paste event seems to be the way to go; the Clipboard API, as well as the corresponding permissions via the Permissions API, are still in the process of being implemented in browsers. However, most of what’s in this post will (hopefully) still be valid with the Clipboard API, with the clipboard interaction code being more robust and the more hacky bits being thrown away.

Copying to the clipboard with document.execCommand(“copy”)

Selecting plain text and copying it to the clipboard is straightforward with an <input> or <textarea>:

  • Call the select() method on the element
  • Call document.execCommand("copy");

We can make a generic method to copy arbitrary text to the clipboard by programmatically creating a <textarea>, setting its value to the text we want, performing the above operations to copy its contents to the clipboard, and finally removing the created <textarea>.

const Clipboard = {
    copyText: function(_text) {
        const textAreaElem = document.createElement('textarea');
        textAreaElem.textContent = _text;
        document.body.appendChild(textAreaElem);
        textAreaElem.select();
        document.execCommand("copy");
        document.body.removeChild(textAreaElem);
    }
};

Now, using this method, if we can serialize an object to some sort of textual format, we can put it on the clipboard. JSON is a good option, as it’s easy to work with in Javascript.

Object → JSON → Plain text → Clipboard

In ScratchGraph, entities like notes, connectors, etc. have a toJSON() method, which returns a JSON serialized representation of the entity, e.g.:

this.toJSON = function() {
    const serializedObj = {
        "id": self.getId(),
        "owner_id": self.ownerId,
        "sheet_id": self.sheetId,
        "position_x": self.getX(),
        "position_y": self.getY(),
        ...
    };

    return serializedObj;
}

When a user initiates a request to copy something, say by hitting Ctrl + C, we iterate over all the selected entities, get the serialized JSON for each, and build an array of JSON objects. Next we construct a JSON object containing that array, along with some metadata around context (application name, version, etc.). Finally, we JSON.stringify this object to get a plain text representation and use the Clipboard.copyText() method to write to the clipboard.

document.addEventListener('keydown', function(e) {
    // Copy selected entities on Ctrl + C
    if(e.ctrlKey && e.key === 'c') {
        const entitiesSelected = currentGroupTransformationContainer.getEntities();
        const entitiesJsonArr = [];
        entitiesSelected.forEach(function(e) {
            entitiesJsonArr.push(e.toJSON());
        });

        ...

        const strForClipboard = JSON.stringify({
            "application": "scratchgraph",
            "version": "1.0",
            "entities": entitiesJsonArr,
            ...
        });

        Clipboard.copyText(strForClipboard);
    }
});

Pasting from the clipboard

To read what’s on the clipboard, we can listen for and implement a handler on the paste event.

While you can listen for the paste event on any DOM element, I’ve found it tricky to isolate to specific elements because the element that has focus isn’t always obvious and I’ve run into situations where an offscreen <input> gets focus and the clipboard content simply gets pasted into that input. It’s more reliable to listen on the document and determine if and where to paste something based on the metadata embedded when we copied data to the clipboard.

Sketching out what we need to deal with, we get the following:

  • Listen for paste events on the document
  • Check if there’s plain-text content being pasted
  • Try to parse the plain-text content as JSON
  • Check whatever metadata is in the object to see if it’s something copied from our application and it’s something we can read/interpret.
  • If it’s something we can handle, suppress any default behavior and do what is needed to clone and create new objects.

This is simplified for clarity, but the code in ScratchGraph looks something like this:

document.addEventListener('paste', function(e) {
    if(typeof e.clipboardData === 'undefined' || typeof e.clipboardData.items === 'undefined') {
        return;
    }

    const items = e.clipboardData.items;
    for (let i=0; i<items.length; ++i) {
        if (items[i].kind === 'string' && items[i].type === "text/plain") {
            try {
                const clipboardJson = JSON.parse(e.clipboardData.getData('text/plain'));
                if(clipboardJson.application === "scratchgraph") {
                    e.preventDefault();
                    createFromClipboardJson(clipboardJson);
                }
            } catch(err) { }
        }
    }
});

The createFromClipboardJson() method handles the application-specific logic of reading the data and creating copies. In ScratchGraph, I’m dealing with entities, so I don’t actually deserialize; I just read the bits of data needed to make a clone and create something with a new ID (i.e. a new entity). However, YMMV, based on the type of objects you’re dealing with, how your application handles data, and/or how you deal with state.
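To make that a bit more concrete, here’s a hypothetical sketch of what such a method could look like; sheet.createNote(), generateNewId(), and the exact fields read are illustrative stand-ins, not the actual ScratchGraph API:

// Hypothetical sketch; only the bits needed to clone are read,
// and a new ID is generated so the paste creates a new entity
function createFromClipboardJson(clipboardJson) {
    clipboardJson.entities.forEach(function(entityData) {
        sheet.createNote({
            "id": generateNewId(),
            "position_x": entityData.position_x + 20,  // offset so the copy doesn't sit on the original
            "position_y": entityData.position_y + 20
            // ...plus whatever other fields are needed to build the clone
        });
    });
}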

Limitations and future work

As I mentioned, there are limitations here when it comes to reading or writing from/to the clipboard. The paste event will only be triggered by interactions supported by the browser, so creating something like a button to paste content isn’t possible. document.execCommand("copy") is now considered obsolete and the method presented to allow copying arbitrary bits of text is pretty hacky, though it is versatile in that you can bind the method to application-specific interactions (e.g. a button to copy content). A further limitation here is that data can only be copied as plain-text and we’re not actually encoding any type information; this manifests in some non-ideal behavior, where what’s put on the clipboard can be pasted into any application that accepts plain-text.

The Clipboard API looks to be a promising solution to these limitations. I’m hoping to revisit this in the near future to update the clipboard interaction logic to use the API and have an all-round cleaner and more robust solution.
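For reference, a rough sketch of what that could look like with the async Clipboard API (subject to browser support and the corresponding clipboard permissions):

// Copy: write the serialized entities to the clipboard
async function copyEntitiesToClipboard(strForClipboard) {
    await navigator.clipboard.writeText(strForClipboard);
}

// Paste: read from the clipboard on any interaction we like (e.g. a paste button)
async function pasteEntitiesFromClipboard() {
    const text = await navigator.clipboard.readText();
    try {
        const clipboardJson = JSON.parse(text);
        if(clipboardJson.application === "scratchgraph") {
            createFromClipboardJson(clipboardJson);
        }
    } catch(err) {
        // not JSON / not something we can handle
    }
}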


Pushing computation to the front: video snapshots

Video and the Canvas API

The Canvas API is surprisingly versatile. The image parameter of the CanvasRenderingContext2D.drawImage() method will accept images from a number of different sources, including an HTMLVideoElement. I touched on this a bit in a previous post about processing the data from video streams; however, HTMLVideoElement can also handle loading and rendering video files, with all modern browsers capable of tackling the non-trivial tasks of decoding and rendering H.264 MP4 or VP8/VP9 WebM content (and, of course, you get all the benefits of the client’s GPU hardware that the browser takes advantage of). This opens up the possibility of capturing frames from video files, which can be used for preview images, poster images, or substituting in an image when video playback isn’t possible (e.g. for a print layout, which is the issue I’ve run into with ScratchGraph).

Setting up the HTMLVideoElement

This is fairly standard, here we’ll load an H.264 MP4 with the filename “test.mp4”:

const video = document.createElement('video');
const videoSource = document.createElement('source');
videoSource.setAttribute('type', 'video/mp4');
videoSource.setAttribute('src', 'test.mp4');
video.appendChild(videoSource);

For reference, here’s the test video:

Next, we want to seek to a point in the video where we want to capture the frame and also bind to an event that’ll tell us when we’re able to read the frame data from the HTMLVideoElement. The seeked event works well. The other potentially viable option is the loadeddata event, but I ran into some issues here, which I’ll describe later.

video.addEventListener('seeked', function(e) {
    // capture the video frame at the point seeked to...
});

// seek to 2s
video.currentTime = 2;

Render the frame onto a canvas

The Canvas API makes this really easy and the process mirrors what’s described in the post on thumbnail generation:

/**
 *
 * @param {HTMLVideoElement} video
 * @param {Number} newWidth
 * @param {Number} newHeight
 * @param {Boolean} proportionalScale
 * @returns {Canvas}
 */
videoFrameToCanvas: function(video, newWidth, newHeight, proportionalScale) {
    if(proportionalScale) {
        if(video.videoWidth > video.videoHeight) {
            newHeight = newHeight * (video.videoHeight / video.videoWidth);
        } else if(video.videoHeight > video.videoWidth) {
            newWidth = newWidth * (video.videoWidth / video.videoHeight);
        } else {}
    }

    const canvas = document.createElement('canvas');
    canvas.width = newWidth;
    canvas.height = newHeight;

    const canvasCtx = canvas.getContext('2d');
    canvasCtx.drawImage(video, 0, 0, newWidth, newHeight);

    return canvas;
}

I added this method to the canvas-image-transformer library; referencing the method we can now flesh out the seeked event handler. For this test, we’ll also render out what’s on the canvas to an <img> element in the document to see what’s been captured.

video.addEventListener('seeked', function(e) {
    // capture the video frame at the point seeked to
    const frameOnCanvas = CanvasImageTransformer.videoFrameToCanvas(video, 500, 500, true);
    document.getElementById('testImage').src = frameOnCanvas.toDataURL();
});

frameOnCanvas is a canvas with the captured frame, and here’s what it looks like transformed & rendered into an <img> element:

canvas-image-transformer-test-video-frame-capture

Issues

  • Something not immediately obvious is that the seeked event is not fired if video.currentTime = 0 (i.e. when you want to seek to the first frame of a video). However, you can use a very small time value (e.g. video.currentTime = 0.000000001), which will typically seek to the first frame in most cases. That said, it is a hacky/non-elegant solution (a small sketch of this workaround follows this list).
  • There are cross-browser issues with the loadeddata event. In Firefox, you will only get a frame capture if you don’t seek. If you do attempt to seek, you’ll get an empty frame and the canvas will have a transparent image. Conversely, in Chrome (and other Chromium-based browsers), you will only get a frame if you do seek. The standard states that the event should be fired when “the user agent can render the media data at the current playback position for the first time”, which seems to indicate an implementation flaw in both browsers.
  • The test video was taken on my phone and the frames themselves are upside-down; this is typical with smartphone videos, as it’s expected that playback will take into account metadata indicating orientation. In Firefox, this isn’t taken into account when using CanvasRenderingContext2D.drawImage() with an HTMLVideoElement, so you get an upside-down image on the canvas.
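A minimal sketch of the first workaround, wrapping the seek so that a request for time 0 still triggers the seeked event:

// Seek helper: avoid currentTime = 0, where the 'seeked' event may not fire
function seekForCapture(video, time) {
    const NEAR_ZERO = 0.000000001;
    video.currentTime = (time === 0) ? NEAR_ZERO : time;
}

seekForCapture(video, 0);  // effectively seeks to the first frame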

Alternatives & limitations

I couldn’t think of a ton of options for decoding H.264 or VP8/VP9. If you’re looking to create something yourself, a server-side service invoking FFmpeg seems like the best option. I played around with Puppeteer, but Puppeteer comes with Chromium, which lacks the audio and video support you get out-of-the box with Chrome. Although, installing and using Chrome server-side with Puppeteer has potential.

There are also third-party services which can handle video decoding and transcoding, and those are solid server-side options.

As with thumbnail generation, here again we’re looking at workloads that have the potential to be moved to the frontend, where you have hardware better suited for graphics work and the possibility of reducing backend complexity. On the other hand, the same limitations come into play, as you have less control over the execution environment and no clear path for backfill or migration needs.


Pushing computation to the front: thumbnail generation

Frontend possibilities

As the APIs brought forward by HTML5 about a decade ago have matured and the devices running web browsers have continued to improve in computational power, looking at what’s possible on the frontend and the ability to bring backend computations to the frontend has been increasingly interesting to me. Such architectures would see each user’s browser as a worker for certain tasks and could simplify backend systems, as those tasks are pushed forward to the client. Using Canvas for image processing tasks is one area that’s interesting and that I’ve had success with.

For Mural, I did the following Medium-esque image preload effect, the basis of which is generating a tiny (16×16) thumbnail which is loaded with the page. That thumbnail is blurred via CSS filter, and transitions to the full-resolution image once it’s loaded. The thumbnail itself is generated entirely on the frontend when a card is created and saved alongside the card data.

In this post, I’ll run through generating and handling that 16×16 thumbnail. This is a fairly straightforward use of the Canvas API, but it does highlight how frontend clients can be utilized for operations typically relegated to server-side systems.

The image processing code presented is encapsulated in the canvas-image-transformer library.

<img> → <canvas>

A precursor for any sort of image processing is getting the image data into a <canvas>. The <img> element and corresponding HTMLImageElement interface don’t provide any sort of pixel-level read/write functionality, whereas the <canvas> element and corresponding HTMLCanvasElement interface does. This transformation is pretty straightforward:

The code is as follows (an interesting thing to note here is that this can all be done without injecting anything into the DOM or rendering anything onto the screen):

const img = new Image();
img.onload = function() {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;

    const canvasCtx = canvas.getContext('2d');
    canvasCtx.drawImage(img, 0, 0, img.width, img.height);

    // the image has now been rendered onto canvas
};
img.src = "https://some-image-url";

Resizing an image

Resizing is trivial, as it can be handled directly via arguments to CanvasRenderingContext2D.drawImage(). Adding in a bit of math to do proportional scaling (i.e. preserve aspect ratio), we can wrap the transformation logic into the following method:

/**
 *
 * @param {HTMLImageElement} img
 * @param {Number} newWidth
 * @param {Number} newHeight
 * @param {Boolean} proportionalScale
 * @returns {Canvas}
 */
imageToCanvas: function(img, newWidth, newHeight, proportionalScale) {
    if(proportionalScale) {
        if(img.width > img.height) {
            newHeight = newHeight * (img.height / img.width);
        } else if(img.height > img.width) {
            newWidth = newWidth * (img.width / img.height);
        } else {}
    }

    var canvas = document.createElement('canvas');
    canvas.width = newWidth;
    canvas.height = newHeight;

    var canvasCtx = canvas.getContext('2d');
    canvasCtx.drawImage(img, 0, 0, newWidth, newHeight);

    return canvas;
}
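Putting it together for the Mural use-case, generating the 16×16 thumbnail looks something like the sketch below (assuming imageToCanvas() is exposed on a CanvasImageTransformer object, as with the other methods in the canvas-image-transformer library):

const img = new Image();
img.onload = function() {
    // 16x16 thumbnail, preserving aspect ratio
    const thumbnailCanvas = CanvasImageTransformer.imageToCanvas(img, 16, 16, true);
    // ...pull the image data off thumbnailCanvas (see the next section) and save it
};
img.src = "https://some-image-url";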

Getting the transformed image from the canvas

My go-to method for getting the data off a canvas and into a more interoperable form is the HTMLCanvasElement.toDataURL() method, which allows easily getting the image as a PNG or JPEG. I do have mixed feelings about data-URIs; they’re great for the web, because so much of the web is textually based, but they’re also horribly bloated and inefficient. In any case, I think interoperability and ease-of-use usually win out (esp. here, where we’re dealing with a 16×16 thumbnail and the data-uri is relatively lightweight) and getting a data-uri is generally the best solution.

Using CanvasRenderingContext2D.getImageData() to get the raw pixels from a canvas is also an option but, for a lot of use-cases, you’d likely need to compress and/or package the data in some way to make use of it.

Save the transformed image

With a data-uri, saving the image is pretty straightforward. Send it to the server via some HTTP method (POST, PUT, etc.) and save it. For a 16×16 PNG the data-uri textual representation is small enough that we can put it directly in a relational database and not worry about a conversion to binary.
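Continuing the sketch from above, getting the data-uri and shipping it off could look like this (the endpoint and payload shape are hypothetical, not Mural’s actual API):

// inside the onload handler from the earlier sketch
const thumbnailDataUri = thumbnailCanvas.toDataURL('image/png');

// Hypothetical endpoint/payload; adjust to whatever the backend expects
fetch('/cards/123/thumbnail', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ "thumbnail": thumbnailDataUri })
});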

Alternatives & limitations

The status quo alternative is having this sort of image manipulation logic encapsulated within some backend component (method, microservice, etc.) and, to be fair, such systems work well. There are also some very concrete benefits:

  • You are aware of and have control over the environment in which the image processing is done, so you’re isolated from browser quirks or issues stemming from a user’s computing environment.
  • You have an easier path for any sort of backfill (e.g. how do you generate thumbnails for images previously uploaded?) or migration needs (e.g. how can you move to a different sized thumbnail?); with a backend component, you can simply run through rows in a database and make a call to get what you need.

However, something worth looking at is that backend systems and server-side environments are typically not optimized for any sort of graphics workload, as processing is centered around CPU cores. In contrast, the majority of frontend environments have access to a GPU; even fairly cheap phones have some sort of GPU better suited for “embarrassingly parallel” graphics operations, and you get those performance benefits for free with the Canvas API in all modern browsers.

In Chrome, see the output of chrome://gpu:

chrome settings, canvas hardware acceleration

Scale, complexity and cost also come into play. Thinking of frontend clients as computational nodes can change the architecture of systems. The need for server-side resources (hardware, VMs, containers, etc.) is eliminated. Scaling concerns are also, to a large extent, eliminated or radically changed as operations are pushed forward to the client.

Future work

What’s presented here is just scratching the surface of what’s possible with Canvas. WebGL also presents a ton of possibilities, and abstraction layers like gpu.js are really interesting. Overall, it’s exciting to see the web frontend evolve beyond a mechanism for user input and into a layer in which substantive computation can be done.


Embedding content with oEmbed

History

Embedding external content has been a feature of the web since the introduction of iframes ages ago. However, embedding as a business strategy didn’t seem to be a thing until sometime around the late 2000s or the early 2010s, as social networking became big business, blogging became really popular, and there was concern over walled gardens. In this environment, embedding became a component for growth, and it was no doubt successful for now-behemoths like Youtube and Twitter (another component was adding social networking to sites, à la Google+ or Dunder Mifflin Infinity). Almost any site dealing in content had an embed feature or an embedded “widget.” Even Grovo, where I worked, was on this train as well with the Grovo Widget, though I was not involved in its development. In some cases this made sense and provided utility; in many others it was just copying what seemed to be working for others in the ecosystem, without regard for product or overall business strategy.

It seems like around this time oEmbed was drafted and Embedly was founded.

oEmbed

My view on how embeds were done was based on what you see in most UIs: you get an embed code (which is a snippet of HTML, likely encapsulated within an iframe), you paste that into a page, and the browser does the rest.

I learned about oEmbed working on Mural. oEmbed is an interesting protocol which allows consumers to request data representing what a resource should look like in an embedded context, given the URL of that resource. The overall flow looks something like this:

Figuring out what the oEmbed endpoint is involves looking at a <link> element from the resource’s HTML page. Unfortunately this does mean you have the overhead and complexity of downloading and parsing an HTML document to get the endpoint. An alternative is pulling from the list of providers in the oEmbed repo and looking at the oEmbed endpoints and allowed URL schemes for resources.
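As a sketch of what discovery could look like in the browser (assuming the resource’s HTML is fetchable cross-origin, which in practice often pushes this work to a server-side component):

// Discover a resource's oEmbed endpoint via its <link> element, then request the embed data
async function getOembedData(resourceUrl) {
    const html = await (await fetch(resourceUrl)).text();
    const doc = (new DOMParser()).parseFromString(html, 'text/html');

    // JSON variant of the discovery link; providers may also expose "text/xml+oembed"
    const linkElem = doc.querySelector('link[type="application/json+oembed"]');
    if(linkElem === null) {
        return null;
    }

    // The discovered href typically already includes the url parameter for the resource
    const endpointUrl = new URL(linkElem.getAttribute('href'), resourceUrl);
    return await (await fetch(endpointUrl)).json();
}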

In either case, the result for the end-user is that by simply providing the URL for a resource, the appropriate content for an embed can be provided. Here’s what this looks like in Squarespace:

Embedly

Unfortunately, the video above is a lie. While everything described around oEmbed would allow for a flow like that, Squarespace (and a surprising number of other popular sites, like Medium) outsource handling of oEmbed to Embedly.

Embedly seems to do a few other things, but primarily it seems to proxy oEmbed content. What’s the value-add? According to Embedly:

We take care of every step of the process: retrieving information about a URL, checking it against malware registries, extracting content, making additional API calls to providers that support them, parsing RSS feeds, and performing validation. We save you time so that you can focus on making your app great.

The only aspect there I find compelling for the price tag is “checking it against malware registries,” but there’s little info on what level of protection they’re actually providing there.

I dealt with Embedly directly when working on Mural and it was a frustrating experience. First, note that if you’re not registered as an Embedly provider, sites that proxy through Embedly will return incorrect oEmbed data. On Squarespace, I noticed the Embedly response would have a type of “link” and not provide any of the data to do an HTML embed (so what’s shown in the video above will not happen, and it will appear as if the resource is not embeddable). Registering as a provider involves filling out a form with some endpoint information and example URLs, easy enough, but I had to wait weeks with no responses or status updates. For a request put in on Jan. 31, 2018, the Embedly integration was not done until Apr. 9th, 2018. A terrible experience overall and surprising for a company that is (a) owned by Medium and (b) seems to be a critical dependency for so many sites.

The short of it is: if, as a provider, you find that users can’t embed your content on a site, see if you need to register as an Embedly provider. If you do, good luck.

The Future

The excitement and prominence around blogging seems to have died down and this seems to have corresponded with the excitement around embedding content diminishing as well. That said, I think being able to embed content is still, and will continue to be, a powerful mechanism on the web. Even with its flaws oEmbed works nicely in this landscape. I can’t say the same for Embedly.