The full conference is now available for viewing on YouTube, but here are a few talks/topics that stood out to me as especially interesting:
There are many introductions to the WebRTC APIs (Google has a nice one and MDN is our bible) but this talk is something else: an engineering overview describing the fundamental building blocks of WebRTC itself and how they fit together.
This talk fills some huge knowledge gaps. WebRTC is built on strata of existing protocols and technologies, and without a talk like this, understanding the technology properly means reading 20+ years of RFCs and connecting the dots. For the same reason, it’s a dense talk; I plan to rewatch it a few times myself.
You can watch the talk on YouTube and Sean compiled a lot of this information into a community book called WebRTC for the Curious.
Roderick’s company produces transcripts for use in post-production, e.g. for editors to use as a guide for navigating a video, rather than relying solely on an A/V scrubber. Recently, he’s been working on a web-based tool for editorial staff to create rough edits from multiple video clips, simply by highlighting transcript text and then dragging and dropping to reorder clips into a narrative – think of the way interviews are stitched together in a documentary or news bulletin. These rough cuts can then be passed on to video editors (e.g. as FCPXML) for creative work.
Much of Roderick’s talk focused on solving the technical issue of representing clip selections in a web-based editor. This was interesting, but the product itself caught my imagination. It’s an innovative way to close the gap between two phases of editorial work.
Watch Roderick’s talk on YouTube.
Inspired by the Flash animations of yesteryear (Homestar Runner, anyone?), Sam’s been working on an SVG video codec: storing animation as vector graphics in MP4, then rendering on the client using WebGL. Sam’s primarily interested in delivering complete animations in high quality at low bitrates, but he also mentioned how the idea could apply to video overlays or be used to deliver more readable, accessible titles at heavier compression rates.
This talk’s a window into a lot of deep and interesting topics (vectorization, custom codecs, a custom library to parse and render SVG via WebGL). Conversation afterward also pointed to an in-development SVG Streaming spec which could make adoption of these ideas easier and more widespread in future.
Audio description is an industry term for narrating video to describe visual content, whether for accessibility purposes or to make it available in new contexts, e.g. watching a TV show while doing the dishes.
Jun has been compiling a list of standards and other resources to help organizations get started on the path to producing more Audio Description content.
Watch and hear Jun’s talk on YouTube.
This talk’s too much fun, and fascinating besides! I hadn’t realized how close LaserDisc was to analog media. Its video is stored entirely in analog: an FM signal stamped into the disc as pits and lands, read back at a precise rate – like an optical record player with a laser for its needle.
Watch Vanessa’s talk on YouTube.
Finally, here are a few of the links I scribbled down without context, mostly from lightning talks and side conversations:
If you’re interested in video or just want a new perspective on engineering problems, I can’t recommend this conference enough.
convert 16x16.png 32x32.png 48x48.png favicon.ico
Output size: 15 KB 🙀 – almost 8× my sources. Icon Slate yields similar: 18 KB. Both tools are bundling bitmaps into the ICO rather than the source PNGs.
I thrashed around looking for an open source library or tool that reliably packages an ICO of PNG images, but my luck was bad enough that I began to wonder if I’d made up the whole thing about ICO serving as a PNG container.
I hadn’t. I reflexively turned to Wikipedia, where the ICO format is so well documented (and the format itself so straightforward) that I decided to write a small Node library dedicated to the task of packaging PNG images into an ICO. Here’s how I use it:
const fs = require('fs');
const pack = require('ico-packer');
fs.writeFileSync('favicon.ico', pack([
  fs.readFileSync('16x16.png'),
  fs.readFileSync('32x32.png'),
  fs.readFileSync('48x48.png'),
]));
Output size: 2 KB 😽
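If you’re curious what the library actually does, the format is simple enough to sketch. This isn’t ico-packer’s real source – just a minimal illustration of the layout it produces: a 6-byte ICONDIR header, one 16-byte ICONDIRENTRY per image, then each PNG appended verbatim (dimensions read straight from the PNG’s IHDR chunk):

```javascript
// Minimal ICO packer sketch: header + directory entries + PNG data.
function packIco(pngs) {
  const header = Buffer.alloc(6);
  header.writeUInt16LE(0, 0); // reserved, always 0
  header.writeUInt16LE(1, 2); // image type: 1 = icon
  header.writeUInt16LE(pngs.length, 4); // image count

  // Image data begins after the header and all directory entries
  let offset = 6 + 16 * pngs.length;
  const entries = pngs.map((png) => {
    const entry = Buffer.alloc(16);
    // PNG stores its dimensions big-endian in the IHDR chunk
    const width = png.readUInt32BE(16);
    const height = png.readUInt32BE(20);
    entry.writeUInt8(width >= 256 ? 0 : width, 0);  // 0 means 256
    entry.writeUInt8(height >= 256 ? 0 : height, 1);
    entry.writeUInt16LE(1, 4);  // color planes
    entry.writeUInt16LE(32, 6); // bits per pixel
    entry.writeUInt32LE(png.length, 8); // size of image data
    entry.writeUInt32LE(offset, 12);    // offset of image data
    offset += png.length;
    return entry;
  });

  return Buffer.concat([header, ...entries, ...pngs]);
}
```

The PNGs themselves aren’t touched, which is why the output is barely bigger than the inputs.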
This isn’t a new trick – it’s baked into the format and several “favicon generator” websites have been doing it for years – but if anyone else has the same trouble finding a working tool then hopefully this strikes the right keywords to help you out. ❤️
While TypeScript has quickly taken over front-end library development, many Node libraries are still maintained as pure JS. That’s partly due to differences in lifecycles and codebase maturity, but there’s also a technical reason for the difference: in the front end, TypeScript is largely replacing an existing transpile step, so it’s a natural evolution of longstanding development toolchains. Since Node codebases don’t contend with browser support, they have no natural need for such a transpile step. That makes transpilation – and its associated inconveniences to testing and debugging – a new friction when adopting TypeScript in Node. (This is one of Deno’s draws: TypeScript support by default means less tooling to argue with.)
Regardless, type declaration files are a common feature request for Node libraries. The compiler’s JSDoc support seems a good solution: it allows library contributors to work in pure JS, toolchains untouched, with only a publish script in package.json to generate .d.ts files for distribution:
"publish": "tsc --allowJs --declaration --emitDeclarationOnly --outDir .",
That satisfies consumers’ expectations for type support without disrupting development, adding only a short automatic step to distribution and a single shallow dependency for typescript itself.
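For a sense of what that looks like in practice, here’s a hypothetical lib.js annotated with JSDoc types – the compiler reads annotations like these when emitting declarations (the function and its types are mine, purely for illustration):

```javascript
/**
 * Build a greeting for a name.
 * `tsc --allowJs --declaration` reads these JSDoc annotations
 * and emits an equivalent typed declaration in lib.d.ts.
 * @param {string} name
 * @param {{ loud?: boolean }} [options]
 * @returns {string}
 */
function greet(name, options = {}) {
  const text = `Hello, ${name}`;
  return options.loud ? text.toUpperCase() : text;
}

module.exports = { greet };
```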
One of my wackier ideas was to build a virtual file server sharing the same tree structure over a protocol that enjoys direct macOS support: SMB – or, less seriously, FTP or WebDAV. I’ve been tinkering in my free time to probe the viability of that idea.
Perhaps I’ll share the more serious research into CIFS & SMB2 some other time. Before I knuckled down to that, I spent some time distracted by WebDAV. It’s an infamously quirky protocol, but it’s also a piece of web tech I’d never paid much mind to. I had a surprising amount of fun playing with it.
WebDAV uses HTTP to deliver file contents and XML payloads representing file & folder properties. It also extends HTTP with a handful of special verbs like PROPFIND, which obtains the properties of a resource (whereas GET obtains the resource itself).
When a PROPFIND request is made to a directory (a collection in WebDAV parlance), the server doesn’t just respond with the directory’s properties but with the properties of its members too. This is represented as a multi-status response, as though PROPFIND requests had been made to each of those resources individually and their responses were being stitched together into one XML document.
To demonstrate that, here’s a toy implementation written in Express – a read-only WebDAV server that lists a single text file member of a root collection:
const express = require("express");
const xml = require("xmlbuilder");

const app = express();

// Describes a mock file for our toy server
const file = {
  path: "/hello-world.txt",
  content: "Hello World",
  ctime: new Date(2020, 1, 10),
};

// Serve the file itself (standard GET request)
app.get(file.path, (req, res) => res.send(file.content));

// Obtain properties for the root resource,
// which is a collection containing our file
app.propfind("/", (req, res) => {
  res.set("Content-Type", "application/xml");

  // Begin a multistatus XML document to represent
  // PROPFIND responses from multiple resources
  const doc = xml.create("D:multistatus").att("xmlns:D", "DAV:");

  // Helper method for adding a new response to the doc
  const add = (path, prop) =>
    doc.ele({
      "D:response": {
        "D:href": { "#text": path },
        "D:propstat": {
          "D:prop": prop,
          "D:status": "HTTP/1.1 200 OK",
        },
      },
    });

  // Add a response for the root collection's properties
  // You can look up the properties themselves in the WebDAV spec:
  // http://www.webdav.org/specs/rfc4918.html#dav.properties
  add("/", {
    "D:creationdate": file.ctime.toUTCString(),
    "D:getlastmodified": file.ctime.toUTCString(),
    "D:resourcetype": { "D:collection": "" },
  });

  // Add a response for the file's properties
  add(file.path, {
    "D:getcontentlength": file.content.length,
    "D:creationdate": file.ctime.toUTCString(),
    "D:getlastmodified": file.ctime.toUTCString(),
    "D:resourcetype": "", // empty for non-collection resources
  });

  res.status(207).send(doc.end({ pretty: true }));
});

// Respond to an OPTIONS request with the permitted verbs
// for the root collection, and a DAV compliance class.
// This tells clients that it's a WebDAV resource.
app.options("/", (req, res) => {
  res.set({
    Allow: "OPTIONS,PROPFIND",
    DAV: "1",
  });
  res.send();
});

app.listen(1900);
Since it’s a toy I’ve left some things unimplemented, including authentication, but it’s a complete working example that I’ve tested in several WebDAV clients. (Authentication into a WebDAV server is assumed by macOS, so Finder will prompt you for a username and password if you connect to it. You can fill in any value.)
One of my favorite quirks of WebDAV (unimplemented in the toy above) is the Depth header – a request header sent by clients to indicate how deeply they’d like to inspect a collection if there are others nested within it. The Depth header takes exactly three values:

Depth: 0 – give me details about this collection only
Depth: 1 – give me details about this collection and its immediate members
Depth: infinity – give me details about the entire tree of resources that I can reach through this collection, down to the furthest leaf

Evidently WebDAV wasn’t designed with large trees in mind. 😄
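Supporting the header in a server mostly comes down to filtering which members make it into the multistatus response. A hypothetical helper (the name and shape are mine, not from any library) might look like:

```javascript
// Given a Depth header value and a resource's distance (in path
// segments) from the requested collection, decide whether to
// include it in the multistatus response.
function includeAtDepth(depthHeader, distance) {
  // RFC 4918: a PROPFIND without a Depth header means "infinity"
  const depth = depthHeader || "infinity";
  if (depth === "infinity") return true;
  return distance <= Number(depth); // "0" or "1"
}
```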
Our FUSE daemon also adds extended file attributes to associate files with custom metadata stored in a separate system and managed via an Electron app. I was curious how we’d accomplish that over WebDAV, since its “properties” are limited to a few supported keys.
I found that macOS uses AppleDouble Format files to send extended attributes over WebDAV (AKA sidecar files, ._ files, or winky frogs). For each resource it encounters in a WebDAV collection, the macOS client sends a prospective GET request to an equivalently named ._ location and transparently ties that metadata to the original resource when extended attributes are read.
AppleDouble Format is relatively simple to produce and is covered by existing libraries in many languages, e.g. xattr-file in Node.
FUSE allows us to translate chunked reads into range requests when reading over HTTP, which is important for buffering large files. With WebDAV we’re at the mercy of the client to make that decision: the spec has no guarantees.
In my experiments, the macOS client made some range requests (e.g. for header information and probing for moov atoms at the back of MP4 files), but large sequential reads were translated to uncapped ranges, i.e. specifying a range-start but no range-end – “give me the rest of the file.” This puts buffering under more strain than fetching content in controlled chunks, and video applications stall for several seconds when opening large files, even within a local network.
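Serving those requests means parsing the Range header, uncapped form included. A sketch of the single-range case (the helper name is mine; multi-range requests are ignored for simplicity):

```javascript
// Parse a single-range Range header like "bytes=0-499" or the
// uncapped "bytes=500-", clamping the end to the file size.
// Returns null for anything unparseable or unsatisfiable.
function parseRange(header, size) {
  const match = /^bytes=(\d+)-(\d*)$/.exec(header || "");
  if (!match) return null;
  const start = Number(match[1]);
  // An empty range-end means "the rest of the file"
  const end =
    match[2] === "" ? size - 1 : Math.min(Number(match[2]), size - 1);
  return start > end ? null : { start, end };
}
```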
I experimented with chunked transfer encoding (too slow, I suppose due to overhead from stitching files back together in an intermediary buffer) and forced Content-Range responses (which goes against the spec, so I didn’t really expect it to work), but couldn’t find a workaround.
I took this as my cue to stop monkeying about with WebDAV, but I was surprised how much fun I’d been having. It’s a unique application of HTTP and one I hadn’t previously considered.
Charles Proxy is particularly helpful here. We use Charles a lot in front-end development and debugging, so I’ll assume you’re already familiar with features like Breakpoints and Map Local to intercept and rewrite requests & responses. However, there are a couple of snags unique to the command line environment which can be a stumbling block when you’re trying to apply the same approach to server-side development.
Don’t be put off! It’s easy when you know how. 😄
macOS proxy settings (set through Charles or via Network Preferences) are automatically applied to GUI applications – including most browsers – but they don’t extend to the command line.
By convention that’s remedied via the http_proxy/https_proxy environment variables, which can point to the Charles HTTP Proxy in familiar scheme://[userinfo@]host[:port] URI syntax, i.e. for most of us:
export https_proxy="http://127.0.0.1:8888"
Applications that honor these variables then use the specified proxy when making HTTP/HTTPS requests. If you find it tedious to configure each time, Derek Morgan had the idea of using scutil output to set the variable automatically based on macOS proxy settings. I don’t do this, in part because of a problem Node poses:
Some popular tools & libraries like Node & Got don’t honor the conventional http_proxy/https_proxy environment variables.
For Node, the global-agent package provides the same functionality, hooking into Node’s globalAgent configurations to add proxy support. It requires a little extra setup and has its own environment variables, all covered by its readme.
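As a sketch, wiring it up can be as simple as preloading the bootstrap module at launch (the variable name is global-agent’s own, per its readme; the port assumes Charles’ default of 8888):

```shell
# global-agent uses its own variable rather than http_proxy/https_proxy
export GLOBAL_AGENT_HTTP_PROXY="http://127.0.0.1:8888"

# Preload the bootstrap so the app's outgoing requests are proxied
# through Charles without any code changes
node -r 'global-agent/bootstrap' app.js
```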
For older Node versions (< 12), global-tunnel serves a similar purpose.
Command line applications also don’t know about the certificates installed & trusted in the macOS System Keychain. They need to be configured into awareness of the Charles root certificate, otherwise proxied HTTPS requests will fail.
Ruby uses OpenSSL, which has an environment variable SSL_CERT_FILE for this purpose. That variable can point to the exported Charles root certificate (Help > SSL Proxying > Save Charles Root Certificate…) when starting Rails:
env SSL_CERT_FILE="/path/to/cert.pem" rails s
Be aware: this configuration replaces the OpenSSL default, which will be a problem if the application makes HTTPS requests to hosts outside of Charles’ configured SSL Proxying locations. This hasn’t been an issue for me, but it’s possible to configure a directory of multiple certificates to mix the default cert in.
Node makes this a little easier with an environment variable that’s specifically designed to take additional certificates instead of an override:
env NODE_EXTRA_CA_CERTS="/path/to/cert.pem" npm start
With these configurations in place, we can treat microservice APIs in Node & Rails just like we would in client applications. It’s a handy tool for debugging and rapid development.