Prioritizing Web Worker Requests
Jan 21 2020 · Web Technologies
Web workers handle incoming request messages via a function declared on the onmessage
property of the worker. A perhaps not-so-obvious behavior here is that incoming requests are queued. If you're doing something intensive within the worker (or the CPU is taxed because of other processes), the queuing behavior becomes more obvious: you wait longer for a response from the worker because previous requests need to be picked up and handled first. Here's a simple worker that does some heavy lifting (at least for Chrome 79 on an i7-4790K):
const highLoadWork = function() {
    let x = 1000;
    for(let i=0; i<99999999; i++) {
        if(i % 2 === 0) {
            x += 1000;
        } else {
            x = Math.sqrt(x);
        }
    }
    return `hello, x=${x}`;
};

onmessage = function(_req) {
    const requestNum = _req.data.requestNum;
    const workResult = highLoadWork(); // the result isn't sent back; the work itself is what matters here
    postMessage(
        {
            "response": `responding to request ${requestNum}`
        }
    );
};
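For completeness, the calling side might look something like this (a sketch; the "worker.js" filename is an assumption, and the Worker API is browser-only, so the sketch guards for that):

```javascript
// Sketch of the main-thread side: build the 12 request messages first
// so the message shape is easy to see.
const messages = [];
for (let requestNum = 1; requestNum <= 12; requestNum++) {
    messages.push({ requestNum });
}

// Worker is a browser API; guard so this sketch is inert elsewhere.
if (typeof Worker !== 'undefined') {
    const worker = new Worker('worker.js');
    worker.onmessage = (e) => console.log(e.data.response);
    messages.forEach((m) => worker.postMessage(m));
}
```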
… and here’s what happens after making 12 requests to it in a loop:
I can't say that this is a bad thing; it's generally sensible and what you'd expect to happen. That said, there are workloads where you may want to prioritize things differently. GraphPaper
was one such case for me. The workers handle things based on user interactions, so the last request represents the current state of the world and is typically the only request that matters (any others from before can be thrown away). Unfortunately, there is no mechanism to interact with or re-prioritize messages in this underlying queue. However, you can offload the requests to a queue that the worker manages internally by itself. The onmessage()
function simply puts the request data in a queue, and we can use setInterval()
to continuously call a function that pulls and processes requests from this queue. Here's what the modified worker code looks like, where the latest request is prioritized (and previous ones are thrown away):
const highLoadWork = function() {
    let x = 1000;
    for(let i=0; i<99999999; i++) {
        if(i % 2 === 0) {
            x += 1000;
        } else {
            x = Math.sqrt(x);
        }
    }
    return `hello, x=${x}`;
};

const requestQueue = [];

const processRequestQueue = function() {
    if(requestQueue.length === 0) {
        return;
    }

    // take the most recent request and discard everything older
    const lastRequest = requestQueue.pop();
    requestQueue.length = 0;

    const requestNum = lastRequest.requestNum;
    const workResult = highLoadWork();

    postMessage(
        {
            "response": `responding to request ${requestNum}`
        }
    );
};

setInterval(processRequestQueue, 4);

onmessage = function(_req) {
    requestQueue.push(_req.data);
};
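The core of this pattern, pulling only the most recent request and clearing the backlog, can be isolated and sketched on its own (takeLatest() is my own name for it, not part of the worker above):

```javascript
// Sketch: the "latest wins" part of the pattern in isolation.
const takeLatest = function(queue) {
    if(queue.length === 0) {
        return null;
    }
    const latest = queue.pop();  // the newest request sits at the end
    queue.length = 0;            // discard everything older
    return latest;
};

const queue = [{requestNum: 1}, {requestNum: 2}, {requestNum: 3}];
const latest = takeLatest(queue);
// latest is the request with requestNum 3; queue is now empty
```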
… and here’s what happens after making 12 requests to it in a loop:
(sometimes, there are also cases where only request 12 was processed)
So this works pretty well, but there are a few things to be aware of. The time it takes to post a message to the worker is somewhere between 0ms – 1ms, plus the cost of copying any data that needs to be transferred to the worker. The setInterval()
delay also isn't really 0ms even if you ask for that; the browser enforces a reasonable minimum, which you can probably expect to be between 4ms – 10ms, and this is in addition to the cost of posting the message to the worker (the code above explicitly specifies a 4ms delay, as calling setInterval()
with a 0ms delay isn't a good idea). What this means in practice is that there is additional latency before we begin processing a request, but compared to a scenario where we have to wait on all prior requests to finish processing (which is the point of doing this to begin with), I expect this method to win out in performance.
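To make that trade-off concrete, here's a back-of-envelope comparison; all the numbers are hypothetical, chosen only for illustration:

```javascript
// Back-of-envelope comparison of wait times (all numbers are hypothetical).
const workMs = 250;         // assumed duration of one highLoadWork() call
const intervalDelayMs = 4;  // the setInterval() delay used above
const queuedAhead = 11;     // requests queued ahead of the latest one

// Default worker: the latest request waits for every earlier one to finish.
const fifoWaitMs = queuedAhead * workMs;

// Modified worker: earlier requests are discarded, so the latest request
// waits at most one interval tick before being picked up.
const prioritizedWaitMs = intervalDelayMs;

console.log(`FIFO wait: ${fifoWaitMs}ms, prioritized wait: ${prioritizedWaitMs}ms`);
```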
Finally, here's a look at a GraphPaper
stress test and how prioritizing the last request to the connector routing worker (which is responsible for generating the path between the two nodes) allows for a faster, less laggy update:
No prioritization
Prioritize last request, eliminate prior