A while ago I began playing around with grabbing a video stream from a webcam and seeing what I could do with the captured data. Capturing the video stream using the navigator.mediaDevices.getUserMedia() method was straightforward, but directly reading and writing the image data of the video stream isn't possible. That said, the stream data can be put onto a canvas using CanvasRenderingContext2D.drawImage(), giving you the ability to manipulate the pixel data.
const videoElem = document.querySelector('video');

// Request video stream
navigator.mediaDevices.getUserMedia({video: true, audio: false})
    .then(function(_videoStream) {
        // Render video stream on <video> element
        videoElem.srcObject = _videoStream;
    })
    .catch(function(err) {
        console.log(`getUserMedia error: ${err}`);
    });
const videoElem = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Put snapshot from video stream onto canvas
ctx.drawImage(videoElem, 0, 0);
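One gotcha worth noting: drawImage() with only x and y coordinates draws the frame at the video's intrinsic resolution, so the canvas's pixel dimensions should match the video's or the snapshot will be clipped or padded. A minimal sketch, where sizeCanvasToVideo() is a hypothetical helper (videoWidth and videoHeight are only populated once the video's metadata has loaded):

```javascript
// Hypothetical helper: size a canvas's pixel buffer to match a video's
// intrinsic dimensions, so drawImage(video, 0, 0) captures the full frame.
function sizeCanvasToVideo(canvas, video) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    return canvas;
}
```

In a browser you'd call it once metadata is available, e.g. `videoElem.addEventListener('loadedmetadata', () => sizeCanvasToVideo(canvas, videoElem));`.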
You can read and write to the <canvas> element, so hiding the <video> element with the source data and just showing the <canvas> element seems logical, but the CanvasRenderingContext2D.drawImage() call is expensive; looking at the copied stream on the <canvas> element, there is very noticeable visual lag. Another reason to avoid this option is that the frequency at which you render (e.g. 30 FPS) isn't necessarily the frequency at which you'd want to grab and process image data (e.g. 10 FPS). Decoupling the two allows you to keep the video playback smooth, for a better user experience, while more effectively utilizing CPU cycles for image processing. At least in my experience so far, a small delay in visual feedback from image processing is acceptable and looks perfectly fine intermixed with the higher-frequency video stream.
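One way to decouple the two rates: let the <video> element render at its native frame rate, and gate the expensive canvas work behind a timestamp check inside a requestAnimationFrame loop. A minimal sketch, where shouldProcess() and startProcessingLoop() are hypothetical helpers, not part of the demo code:

```javascript
const PROCESS_INTERVAL_MS = 100; // ~10 FPS for image analysis

// Pure helper: has enough time elapsed since the last processed frame?
function shouldProcess(nowMs, lastMs) {
    return nowMs - lastMs >= PROCESS_INTERVAL_MS;
}

// Browser-side wiring: run processFrame() at most every PROCESS_INTERVAL_MS,
// while the <video> element keeps rendering at full rate on its own.
function startProcessingLoop(processFrame) {
    let last = 0;
    function tick(now) {
        if (shouldProcess(now, last)) {
            last = now;
            processFrame(); // expensive drawImage() + getImageData() work
        }
        requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
}
```

The demo below uses setInterval() instead, which achieves the same throttling with less ceremony; the requestAnimationFrame variant just pauses automatically when the tab is in the background.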
So the best options all seem to involve showing the <video> element with the webcam stream and placing visual feedback on top of the video in some way. A few ideas:
- Write pixel data to another canvas and render it on top of the <video> element
- Render SVG elements on top of the <video> element
- Render DOM elements (absolutely positioned) on top of the <video> element
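The first option would look something like the following sketch: a transparent canvas stacked above the <video> element via CSS (position:absolute; pointer-events:none), cleared and redrawn with marker rectangles on each processing pass. drawMarkers() is a hypothetical helper, not code from the demo:

```javascript
// Hypothetical overlay-canvas approach: draw translucent marker squares
// onto a transparent canvas positioned on top of the <video> element.
function drawMarkers(ctx, hotzones, size) {
    // Wipe the previous frame's markers
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    ctx.fillStyle = 'rgba(0, 255, 0, 0.25)';
    for (const [x, y] of hotzones) {
        ctx.fillRect(x, y, size, size);
    }
}
```

This avoids DOM churn entirely, but for quick prototyping the DOM-element approach below is less code.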
The third option is an ugly solution, but it's fast to code and thus allows for quick prototyping. The code below is a quick demo I slapped together using <div> elements as markers for hotspots, in this case bright spots, within the video.
Here’s the code for the above demo:
<!DOCTYPE html>
<html>
<head>
    <title>Webcam Cap</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <style type="text/css">
        * { margin:0; padding:0; border:none; overflow:hidden; }
    </style>
</head>
<body>
    <div>
        <video style="width:640px; height:480px;" width="640" height="480" autoplay></video>
        <canvas style="display:none; width:640px; height:480px;" width="640" height="480"></canvas>
    </div>
    <div class="ia-markers"></div>

    <script type="text/javascript">
        const videoElem = document.querySelector('video');
        const canvas = document.querySelector('canvas');
        const ctx = canvas.getContext('2d');

        let snapshotIntv = null;

        const width = 640;
        const height = 480;

        // Request video stream
        navigator.mediaDevices.getUserMedia({video: true, audio: false})
            .then(function(_videoStream) {
                // Render video stream on <video> element
                videoElem.srcObject = _videoStream;

                // Take a snapshot of the video stream every 100ms
                snapshotIntv = setInterval(function() {
                    processSnapshot();
                }, 100);
            })
            .catch(function(err) {
                console.log(`getUserMedia error: ${err}`);
            });

        // Take a snapshot from the video stream
        function processSnapshot() {
            // Put snapshot from video stream onto canvas
            ctx.drawImage(videoElem, 0, 0);

            // Clear old snapshot markers
            const markerSetParent = document.querySelector('.ia-markers');
            markerSetParent.innerHTML = '';

            // Array to store hotzone points
            const hotzones = [];

            // Process pixels
            const imageData = ctx.getImageData(0, 0, width, height);
            for (let y = 0; y < height; y += 16) {
                for (let x = 0; x < width; x += 16) {
                    const index = (x + y * imageData.width) << 2;
                    const r = imageData.data[index + 0];
                    const g = imageData.data[index + 1];
                    const b = imageData.data[index + 2];

                    if (r > 200 && g > 200 && b > 200) {
                        hotzones.push([x, y]);
                    }
                }
            }

            // Add new hotzone elements to DOM
            for (let i = 0; i < hotzones.length; i++) {
                const x = hotzones[i][0];
                const y = hotzones[i][1];

                const markerDivElem = document.createElement("div");
                markerDivElem.setAttribute('style', 'position:absolute; width:16px; height:16px; border-radius:8px; background:#0f0; opacity:0.25; left:' + x + 'px; top:' + y + 'px');
                markerDivElem.className = 'ia-hotzone-marker';
                markerSetParent.appendChild(markerDivElem);
            }
        }
    </script>
</body>
</html>
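The index computation in the pixel loop relies on ImageData.data being a flat, row-major array of RGBA bytes, so the byte offset of pixel (x, y) is (x + y * width) * 4, which the demo writes as a left shift by 2. Pulled out as a standalone helper (pixelIndex() is my name for it, not part of the demo):

```javascript
// ImageData.data stores 4 bytes (R, G, B, A) per pixel, row by row, so
// pixel (x, y) starts at byte offset (x + y * width) * 4. The << 2 shift
// is just a multiplication by 4.
function pixelIndex(x, y, width) {
    return (x + y * width) << 2;
}
```

With this, `data[pixelIndex(x, y, w)]`, `+ 1`, `+ 2`, and `+ 3` give the red, green, blue, and alpha bytes of the pixel, respectively.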
Edit (8/1/2020): The code has been updated to reflect changes in the MediaDevices API. This includes:
- navigator.getUserMedia → navigator.mediaDevices.getUserMedia. The code structure is slightly different given that the latter returns a promise.
- Assigning the media stream to a video element directly via the srcObject attribute. This is now required in most modern browsers, as the old way of using createObjectURL on the stream and assigning the returned URL to the video element's src attribute is no longer supported.
In addition, there’s also just some general code cleanup to modernize the code and make it a little easier to read. Some of the language in the post has also been tweaked to make things clearer.