Adaptive Bitrate (ABR) video streaming is the invisible engine that powers the way we watch video today, from live sports and news to movies on our favorite OTT platforms. Yet, despite being everywhere, the inner workings of ABR protocols like HLS, DASH, and CMAF are often misunderstood or hidden behind the scenes. To help bridge that knowledge gap, we’ve created an interactive protocol inspector tool that allows you to see exactly what’s happening under the hood as your device streams video.
This article provides a guided introduction to the core concepts of ABR technology and complements the tool with step-by-step explanations. Whether you’re new to streaming or looking to deepen your technical know-how, this combined walkthrough will help you understand how players choose quality, how manifests are structured, and how video segments are delivered to your screen in real time.
This online protocol inspector helps you understand how Adaptive Bitrate (ABR) video streaming protocols work.
If you are not yet familiar with ABR streaming protocol concepts, you will find a short, comprehensive tutorial on ABR video streaming technology at the bottom of this article.
Reading the whole article while using the tool will give you a clear picture of how ABR video streaming works.
The tool is designed to make ABR streaming protocols easier to understand through visual examples, letting you inspect video packets and see how the protocols differ and how they work.
On the home page, you can enter a streaming URL (non-encrypted) or select one of the three example streams.
When you click Play, the player starts downloading the manifest files and the video and audio chunk files (packets) needed to play the content. Click the Play button a second time to start playback in the player.
In the HLS example, you can see how the player downloads a first video chunk to estimate the available bandwidth of your Internet connection, and then selects which profile of the content to download.
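The selection logic can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the algorithm any particular player actually uses; the URL and the bitrate values are made up:

```python
import time
import urllib.request

def measure_throughput(url: str) -> float:
    """Download one chunk and return the measured throughput in bits/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.monotonic() - start
    return (len(data) * 8) / elapsed  # bits downloaded / seconds taken

def pick_profile(throughput_bps: float, profiles_bps: list[int]) -> int:
    """Pick the highest profile whose bitrate fits in the measured bandwidth."""
    affordable = [p for p in profiles_bps if p <= throughput_bps]
    return max(affordable) if affordable else min(profiles_bps)

# Illustrative bitrates, as advertised by BANDWIDTH attributes in a master playlist
profiles = [800_000, 1_400_000, 2_800_000, 5_000_000]
throughput = measure_throughput("https://example.com/video/seg1.ts")  # placeholder URL
print("Selected profile:", pick_profile(throughput, profiles))
```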
The main playlist (manifest) shows all the video and audio track playlists for the different profiles.
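For reference, here is what a (made-up) HLS master playlist looks like. Each #EXT-X-STREAM-INF line advertises one profile with its BANDWIDTH and resolution, and the line after it points to that profile's media playlist:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.4d401e,mp4a.40.2"
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
high/playlist.m3u8
```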
If you click on one of the media playlists, you will see the same manifest details along with a tree view below the main “Manifest Detail” window.
You can inspect all the segments, their duration, and the URL of each one.
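A media playlist for one profile typically has this shape (illustrative values, not the tool's exact example): each #EXTINF tag gives a segment's duration, and the following line gives its URL.

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:4.000,
segment0.ts
#EXTINF:4.000,
segment1.ts
#EXTINF:4.000,
segment2.ts
#EXT-X-ENDLIST
```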
If you click on a media (video + audio) segment chunk, you will see the details of the payload and which media container format it uses. In this case, it is an MPEG-2 TS (Transport Stream).
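If you are curious, you can recognize the container yourself from the first bytes of a downloaded chunk. Here is a minimal Python sketch using simplified heuristics:

```python
def sniff_container(payload: bytes) -> str:
    """Guess the media container from the first bytes of a downloaded chunk."""
    # MPEG-2 TS: fixed 188-byte packets, each starting with sync byte 0x47
    if len(payload) > 188 and payload[0] == 0x47 and payload[188] == 0x47:
        return "MPEG-2 Transport Stream"
    # ISO BMFF (MP4/CMAF): box header = 4-byte size + 4-byte type code
    if payload[4:8] in (b"ftyp", b"styp", b"moof"):
        return "ISO BMFF (MP4/CMAF)"
    return "unknown"
```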
In a Transport Stream, different kinds of data (video, audio, subtitles, and metadata) are multiplexed into a single stream.
In MPEG-2 Transport Stream (TS) segments, commonly used in HLS implementations, multiplexing is the process of interleaving these different types of data into a single stream so they can be transmitted or stored together.
Here is how video, audio, and subtitles are combined:
For example, a stream might look like this sequence of packets: [Video Pkt] – [Video Pkt] – [Audio Pkt] – [Video Pkt] – [Subtitle Pkt] – [Audio Pkt]…
This interleaving ensures that the player receives audio, video, and subtitle data concurrently as it downloads the file, rather than waiting for the whole video track to finish before receiving audio.
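To make this concrete, here is a small Python sketch that walks a downloaded .ts segment and counts packets per PID, the 13-bit identifier that separates video, audio, and signaling tables. The file name and the sample output are placeholders:

```python
from collections import Counter

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def count_pids(ts_bytes: bytes) -> Counter:
    """Count how many 188-byte TS packets carry each PID (stream identifier)."""
    pids = Counter()
    for i in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = ts_bytes[i:i + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue  # lost sync; a real demuxer would resynchronize here
        # The 13-bit PID spans the low 5 bits of byte 1 and all of byte 2
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        pids[pid] += 1
    return pids

with open("segment0.ts", "rb") as f:  # hypothetical local segment file
    print(count_pids(f.read()))  # e.g. Counter({256: 950, 257: 48, 0: 5, 4096: 5})
```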
Clicking the Reset or Home button takes you back to the home page, where you can select the DASH example and click Play to play and inspect the MPEG-DASH stream. You will see the different segments and the differences between DASH and HLS.
If you inspect a DASH .mpd manifest, you will see its structure and format in the Payload Details.
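For orientation, a minimal illustrative .mpd looks like the following. The manifests in the tool's examples are richer, but the Period, AdaptationSet, and Representation nesting is the same:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT60S" minBufferTime="PT2S"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4" codecs="avc1.4d401f">
      <Representation id="720p" bandwidth="2800000" width="1280" height="720">
        <BaseURL>video_720p.mp4</BaseURL>
      </Representation>
      <Representation id="1080p" bandwidth="5000000" width="1920" height="1080">
        <BaseURL>video_1080p.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```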
Think of an MP4 file (MPEG-4 Part 14) not as a video format, but as a digital shipping container.
On the outside, it looks like a single box (video.mp4), but inside, it is a highly organized cabinet with distinct drawers. Its job is to hold different types of media in synchronization, so they play back together perfectly.
Here is a simple breakdown of the MP4 structure. Unlike the stream of packets in the Transport Stream (TS) described earlier, an MP4 is organized into a hierarchy of objects called "Atoms" or "Boxes."
You can visualize an MP4 file as having three main parts:
- ftyp (File Type): identifies the file as an MP4 and lists the "brands" it is compatible with.
- moov (Movie): the index of the file: track definitions, timescales, and the tables that say where every audio and video sample lives.
- mdat (Media Data): the drawer holding the actual compressed audio and video frames.
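Since every box starts with a 4-byte big-endian size followed by a 4-byte type code, you can walk the top level of any MP4 with a few lines of Python (video.mp4 is a placeholder file name):

```python
import struct

def list_top_level_boxes(path: str) -> None:
    """Walk the top-level boxes (atoms) of an MP4 file and print their size/type."""
    with open(path, "rb") as f:
        while (header := f.read(8)) and len(header) == 8:
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:  # 64-bit extended size stored right after the type
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            print(f"{box_type.decode('ascii', 'replace')}  {size} bytes")
            if size == 0:  # box extends to the end of the file
                break
            f.seek(size - header_len, 1)  # skip the body to the next box header

list_top_level_boxes("video.mp4")  # typically prints ftyp, moov, mdat (order varies)
```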
If you inspect the CMAF stream example, you will see the differences in the manifest and how, in this case, the media playlist offers segments as byte ranges.
In the tree preview, you can inspect the segments and their byte ranges. In this case, the player downloads segments by requesting precise byte ranges of a single file, which lets it adjust how much data it fetches at a time.
In the context of CMAF (Common Media Application Format), there are two ways to store the video: as many small individual segment files, or as one continuous file (Single File Mode) from which the player extracts segments.
When using Single File Mode, the player uses HTTP Byte Ranges to download just the specific segment it needs without downloading the whole movie.
Here is how this works.
Imagine the CMAF file is a 1,000-page book.
Step 1: The Manifest Lookup
The player downloads the Manifest (HLS or DASH). In Single File mode, the manifest doesn’t just list filenames; it lists the Byte Offset (start position) and Byte Length (size) for every segment.
#EXT-X-MAP:URI="movie.cmfv",BYTERANGE="800@0"
#EXTINF:4.000,
#EXT-X-BYTERANGE:120000@800
movie.cmfv
The DASH equivalent in the .mpd uses a SegmentBase element:
<SegmentBase indexRange="0-800">
  <Initialization range="0-799" />
</SegmentBase>
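Turning the manifest's length@offset notation into an HTTP Range value is simple arithmetic, with one subtlety: HTTP ranges are inclusive on both ends, so the segment declared as 120000@800 ends at byte 120799, not 120800. A minimal sketch (hypothetical helper name):

```python
def byterange_to_http(length: int, offset: int) -> str:
    """Convert HLS 'length@offset' notation to an HTTP Range header value.

    HTTP ranges are inclusive on both ends, so the last byte is
    offset + length - 1, not offset + length.
    """
    return f"bytes={offset}-{offset + length - 1}"

# The segment declared as #EXT-X-BYTERANGE:120000@800 maps to:
print(byterange_to_http(120000, 800))  # bytes=800-120799
```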
Step 2: The Player Request
The player calculates the range and sends a GET request to the CDN. It adds a special header: Range.
Request:

GET /video/movie.cmfv HTTP/1.1
Host: cdn.example.com
Range: bytes=800-120799
Step 3: The Server Response (206 Partial Content)
The CDN recognizes the Range header. Instead of sending the whole file (which would be a standard 200 OK response), it sends only that specific slice of data.
Response:

HTTP/1.1 206 Partial Content
Content-Range: bytes 800-120799/50000000
Content-Length: 120000
[…Binary Data for that specific segment…]
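If you want to reproduce this exchange outside a player, a few lines of Python with the requests library will do (the URL is the placeholder from the example above):

```python
import requests  # third-party: pip install requests

# Hypothetical CDN URL mirroring the request above
url = "https://cdn.example.com/video/movie.cmfv"
resp = requests.get(url, headers={"Range": "bytes=800-120799"})

assert resp.status_code == 206, "server must support byte ranges for Single File CMAF"
print(resp.headers.get("Content-Range"))  # e.g. bytes 800-120799/50000000
segment = resp.content                    # exactly the 120000 bytes of this segment
```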