This post describes how hls.js estimates bandwidth for adaptive bitrate streaming. My goal is to shine a little light on ABR configs.
An HLS stream provides variants of the same video at higher and lower qualities. The variants are sliced into short video segments, each just a few seconds long:
As the player downloads new video segments, it uses their download times to estimate bandwidth. That information helps it decide whether to switch to a higher or lower quality variant for the next segments it downloads. Put that process on loop and you have ABR: adaptive bitrate streaming.
The problem
Calculating throughput for a single download is simple: number of bits / number of seconds = bits per second. But how well does an overall average describe the connection at any given point in time during a video stream?
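The single-download calculation looks like this (the numbers are illustrative, not from a real session):

```javascript
// Throughput for one segment download: bits transferred / seconds taken.
const bytesLoaded = 1_250_000;   // a ~1.25 MB segment
const downloadSeconds = 2.5;     // time from request start to last byte

const bitsPerSecond = (bytesLoaded * 8) / downloadSeconds;
// 4,000,000 bps = 4 Mbps
```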
Think of a car traveling at different speeds on a trip: its average speed over the trip may be 60mph, but some of that’s spent on highways, some on residential roads, some waiting at intersections. Internet download speeds fluctuate like this too, so streaming video players need to adapt quickly to give viewers the best playback quality their connection can support without stalling.
A single average hides all of that variation, which is exactly the problem for a player that has to pick a variant right now.
Each segment in an HLS stream is a single download, so hls.js uses each one to sample the viewer’s throughput. Below is a scatter plot showing some of these samples – one for each segment that was downloaded during a short hls.js playback session. I’ve also plotted a simple rolling average through them:
I used a browser-simulated mobile connection, so it’s fairly stable, but you can see a few dramatic shifts. Some are temporary extremes; if the player responded to them immediately, it would switch variants too often.
But notice how flat the rolling average remains through all the fluctuations. As a viewing session goes on, that smoothing effect only gets stronger. If the player relied on an overall average to decide when to switch variants, it would take too long to react to changes in network conditions.
We need another way to average this data, one that gives more weight to recent samples so the player can be more responsive to changes without going overboard.
Exponentially Weighted Moving Average
Hls.js addresses this problem with an Exponentially Weighted Moving Average (EWMA). The gist of this solution is that samples are given a half life as we collect them, so each sample’s impact on the overall average lessens as more data is collected. This makes bandwidth estimates more responsive to changes.
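The core idea fits in a few lines. Here’s a minimal sketch (not hls.js’s actual code): each new sample is blended with the running estimate, so older samples decay exponentially. An `alpha` close to 1 favors history; close to 0 favors the newest sample.

```javascript
// Minimal EWMA: blend each new sample with the running estimate.
function ewma(samples, alpha) {
  let estimate = samples[0];
  for (const sample of samples.slice(1)) {
    estimate = alpha * estimate + (1 - alpha) * sample;
  }
  return estimate;
}

// A drop from 4 Mbps to 1 Mbps pulls the estimate down to 2.5 Mbps:
ewma([4e6, 4e6, 1e6], 0.5); // → 2,500,000
```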
Here’s the same sample data with an EWMA added:
The exact value of the half life – the rate of decay for our samples – is an important variable in the EWMA formula, and it’s one we can tune to affect the player’s responsiveness to changing download speeds.
Another important variable is the passage of time. In the chart above, samples are assumed to arrive at regular intervals. But what if playback is interrupted? What about varying segment lengths? Ideally, samples would decay according to their age.
Adjusted EWMA
To address this, hls.js uses an adjusted weight parameter, alpha, for each sample. First, it derives a base alpha from the configured half life. Then, for each sample it takes, it raises that base alpha to the power of the sample’s duration, so the decay scales with how much time the sample represents.
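Here’s a sketch modeled on the approach in hls.js’s internal EWMA class (the names and structure are simplified, so treat this as illustrative rather than a copy of the library):

```javascript
// Duration-adjusted EWMA sketch. The base alpha is derived from the
// configured half life; each sample's alpha is that base raised to the
// power of the sample's duration, so decay tracks elapsed time.
class Ewma {
  constructor(halfLifeSeconds) {
    // After halfLifeSeconds worth of samples, a sample's weight halves.
    this.alpha = Math.exp(Math.log(0.5) / halfLifeSeconds);
    this.estimate = 0;
    this.totalWeight = 0;
  }
  sample(durationSeconds, value) {
    const adjAlpha = Math.pow(this.alpha, durationSeconds);
    this.estimate = value * (1 - adjAlpha) + adjAlpha * this.estimate;
    this.totalWeight += durationSeconds;
  }
  getEstimate() {
    // Correct for startup bias while only a few samples exist.
    const zeroFactor = 1 - Math.pow(this.alpha, this.totalWeight);
    return this.estimate / zeroFactor;
  }
}
```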
Here’s an interactive chart demonstrating how the half life affects both the base alpha and bandwidth estimates with the time adjustment in place:
Notice that a higher alpha slows down the EWMA’s responsiveness: it makes the peaks and valleys in the line less extreme.
Fast and Slow EWMA
It pays to be somewhat pessimistic about network conditions: we should be cautious when new samples suggest that bandwidth is suddenly better, but much more responsive when it seems to be getting worse. In other words, our estimates would ideally combine the qualities of both a higher and lower alpha.
For that reason, hls.js tracks two exponential weighted moving averages:
- Fast – default half life of 3.0
- Slow – default half life of 9.0
Notice how their two lines overlap: the fast line is more responsive, so it has higher peaks and lower valleys; the slow line is less responsive, so it remains flatter overall.
As it takes new samples, hls.js uses the lower of these two outputs as its bandwidth estimate to guide ABR decisions. This gives a final bandwidth estimate that drops quickly and climbs slowly:
This is exactly what we wanted: an estimate that responds quickly to negative network conditions but is more cautious about interpreting positive extremes.
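A toy version of that combination (my own sketch, with made-up alphas standing in for the decay derived from the fast and slow half lives):

```javascript
// Track a fast and a slow EWMA over the same samples and use the
// lower of the two outputs as the bandwidth estimate.
function runningEwma(alpha) {
  let estimate = null;
  return (sample) => {
    estimate = estimate === null ? sample : alpha * estimate + (1 - alpha) * sample;
    return estimate;
  };
}

const fast = runningEwma(0.5); // shorter half life: reacts quickly
const slow = runningEwma(0.9); // longer half life: stays flatter

let bandwidthEstimate;
for (const bps of [4e6, 4e6, 1e6, 4e6]) {
  // Pessimistic: take the lower of the two averages.
  bandwidthEstimate = Math.min(fast(bps), slow(bps));
}
// After the dip, the fast line (still recovering) wins: ~3.25 Mbps,
// well below the slow line's ~3.73 Mbps.
```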
Sensible defaults
The final value that’s important in hls.js EWMA configurations is the default bandwidth estimate. At the start of playback, hls.js doesn’t have any segment samples to make intelligent EWMA estimates, so it uses the default estimate to guide ABR until enough samples have been gathered.
Hopefully you can now recognize the configuration options for hls.js bandwidth estimates:
- `abrEwmaFastVoD` – sets the “fast” half life for VOD playback
- `abrEwmaFastLive` – sets the “fast” half life for live event streams
- `abrEwmaSlowVoD` – sets the “slow” half life for VOD
- `abrEwmaSlowLive` – sets the “slow” half life for live
- `abrEwmaDefaultEstimate` – the bandwidth value that hls.js uses until it’s gathered enough samples to make estimates
It’s rare that you’ll want to tinker with the half life configurations, but they can be helpful when tuning ABR for specific network profiles.
The default estimate is a more common configuration. If you have a lot of repeat viewers, a useful technique is to capture the viewer’s last bandwidth estimate in web storage and apply it as the default estimate for their next stream, so playback can start at a level that better matches their network capabilities.
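That technique might look something like this. `abrEwmaDefaultEstimate` and the `hls.bandwidthEstimate` getter are real hls.js APIs; the storage key and fallback value are my own choices, so adapt them to your app:

```javascript
// Persist the last bandwidth estimate and reuse it on the next visit.
const STORAGE_KEY = 'lastBandwidthEstimate'; // arbitrary key

const saved = Number(localStorage.getItem(STORAGE_KEY));
const hls = new Hls({
  // Fall back to 500 kbps when nothing has been saved yet.
  abrEwmaDefaultEstimate: saved > 0 ? saved : 500_000,
});

// Capture the latest estimate when the viewer leaves the page.
window.addEventListener('pagehide', () => {
  localStorage.setItem(STORAGE_KEY, String(hls.bandwidthEstimate));
});
```

This is a config-wiring sketch for the browser rather than runnable standalone code.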