By Zencoder at Brightcove

The Zencoder guide to closed captioning for web, mobile, and connected TV

Tech Talk

Captioning is coming to internet video. US legislation that mandates closed captioning on certain categories of online content goes into effect during 2012 and 2013 - see our earlier post for details on the legislation. But even apart from this legislation, closed captioning is a good thing for accessibility and usability, and is yet another milestone as internet video marches towards maturity. Unfortunately, closed captioning is not a single technology or "feature" of video that can simply be turned on. There are a number of formats, standards, and approaches, ranging from good to bad to ugly. Closed captioning is kind of a mess, just like the rest of digital video, and it is especially challenging for multiscreen publishers. So if you want to publish video today for web, mobile, and connected TV delivery, what do you need to know about closed captioning? This post outlines the basics: how closed captions work, the formats you may need to know about, and how to enable closed captions for every screen.

How closed captions work.

The first thing to understand is how closed captions are delivered, stored, and read. There are two main approaches today.

1. Embedded within a video: CEA-608, CEA-708, DVB-T, DVB-S, WST. These caption formats are written directly into a video file, either as a data track or embedded into the video stream itself. Broadcast television uses this approach, as does iOS.

2. Stored as a separate file: DFXP, SAMI, SMPTE-TT, TTML, EBU-TT (XML), WebVTT, SRT (text), SCC, EBU-STL (binary). These formats pass caption information to a player alongside a video, rather than being embedded in the video itself. This approach is usually used by browser-based video playback (Flash, HTML5).

What about subtitles? Are they the same thing as closed captions? It turns out that there are three main differences.

1. Goals. Closed captions are an accessibility feature, making video available to the hard of hearing, and may include cues about who is speaking or about what sounds are happening: e.g. "There is a knock at the door". Subtitles are an internationalization feature, making video available to people who don't understand the spoken language. In other words, you would use captions to watch a video on mute, and you would use subtitles to watch a video in a language you don't understand. (Note that this terminological distinction holds in North America; much of the world does not distinguish between closed captions and subtitles.)

2. Storage. Historically, captions have been embedded within video, and subtitles have been stored externally. (See CEA-608, below.) This makes sense conceptually: captions should always be provided along with a video, since 100% accessibility for the hard of hearing is mandated by legislation, whereas subtitles are only sometimes needed. A German-language video broadcast in Germany doesn't need German subtitles, but the same video broadcast in France would.

3. Playback. Since captions are passed along with the video and interpreted/displayed by a TV or other consumer device, viewers can turn them on and off at any time using the TV itself, but rarely have options for selecting a language. In broadcast situations where subtitles are added for translation purposes, they are generally hard subtitles (see below) and so cannot be disabled. When viewing DVD/Blu-Ray/VOD video, however, the playback device controls whether subtitles are displayed, and in which language.

Formats and standards.

There are dozens of formats and standards for closed captioning and subtitles. Here is a rundown of the most important ones for internet video.

CEA-608 (also called Line 21) captions are the NTSC standard, used by analog television in the United States and Canada. Line 21 captions are encoded directly into a hidden area of the video stream by broadcast playout devices. If you've ever seen white bars and dots at the top of a program, that's Line 21 captioning.

An SCC file contains captions in the Scenarist Closed Caption format: SMPTE timecodes paired with the corresponding caption data, encoded as a representation of CEA-608 data.

CEA-708 is the standard for closed captioning in ATSC digital television (DTV) streams in the United States and Canada. There is currently no standard file format for storing CEA-708 captions apart from a video stream.

TTML stands for Timed Text Markup Language. TTML describes the synchronization of text and other media such as audio or video. See the W3C TTML Recommendation for more. TTML example:

<tt xml:lang="" xmlns="http://www.w3.org/ns/ttml">
  <head>
    <styling xmlns:tts="http://www.w3.org/ns/ttml#styling">
      <style xml:id="s1" tts:color="white" />
    </styling>
  </head>
  <body>
    <div>
      <p xml:id="subtitle1" begin="0.76s" end="3.45s">
        Trololololo
      </p>
      <p xml:id="subtitle2" begin="5.0s" end="10.0s">
        lalala
      </p>
      <p xml:id="subtitle3" begin="10.0s" end="16.0s">
        Oh-hahaha-ho
      </p>
    </div>
  </body>
</tt>
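
To make the external-file model concrete, here is a minimal sketch of how a browser-based player could turn a TTML document like the one above into timed cues. DOMParser is a standard browser API; showCaption and hideCaption are hypothetical stand-ins for a real player's rendering layer, and the sketch only handles the seconds-based begin/end offsets used above.

// Parse TTML <p> elements into an array of {begin, end, text} cues.
function parseTtml(ttmlString) {
  var doc = new DOMParser().parseFromString(ttmlString, 'application/xml');
  var paragraphs = doc.getElementsByTagName('p');
  var cues = [];
  for (var i = 0; i < paragraphs.length; i++) {
    var p = paragraphs[i];
    cues.push({
      begin: parseFloat(p.getAttribute('begin')), // "0.76s" -> 0.76
      end: parseFloat(p.getAttribute('end')),
      text: p.textContent
    });
  }
  return cues;
}

// Show the active cue (if any) as the video plays.
function attachCaptions(video, cues) {
  video.addEventListener('timeupdate', function () {
    var t = video.currentTime;
    for (var i = 0; i < cues.length; i++) {
      if (t >= cues[i].begin && t <= cues[i].end) {
        showCaption(cues[i].text); // hypothetical display call
        return;
      }
    }
    hideCaption(); // hypothetical: no cue is active
  });
}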

DFXP is a profile of TTML defined by the W3C. DFXP files contain TTML that defines when and how to display caption data. DFXP stands for Distribution Format Exchange Profile; DFXP and TTML are often used synonymously.

SMPTE-TT (Society of Motion Picture and Television Engineers - Timed Text) is an extension of the DFXP profile that adds support for three extensions found in other captioning formats but not in DFXP: #data, #image, and #information. See the SMPTE-TT standard for more. SMPTE-TT is also the FCC Safe Harbor format: if a video content producer provides captions in this format to a distributor, they have satisfied their obligation to provide captions in an accessible format. However, producers and distributors are free to agree on a different format.

SAMI (Synchronized Accessible Media Interchange) is based on HTML and was developed by Microsoft for products such as Microsoft Encarta Encyclopedia and Windows Media Player. SAMI is supported by a number of desktop video players.

EBU-STL is a binary format used by the EBU standard, stored in separate .STL files.

EBU-TT is a newer format supported by the EBU, based on TTML. EBU-TT is a strict subset of TTML: every EBU-TT document is a valid TTML document, but some TTML documents are not valid EBU-TT documents because they include features EBU-TT does not support.

SRT is a format created by SubRip, a Windows-based open source tool for extracting captions or subtitles from a video. SRT is widely supported by desktop video players; a sample appears at the end of this rundown.

WebVTT is a text format similar to SRT. The Web Hypertext Application Technology Working Group (WHATWG) has proposed WebVTT as the standard for HTML5 video closed captioning. WebVTT example:

WEBVTT

00:00.760 --> 00:03.450
<v Eduard Khil>Trololololo

00:05.000 --> 00:10.000
lalala

00:10.000 --> 00:16.000
Oh-hahaha-ho

Hard subtitles (hardsubs) are, by definition, not closed captioning. Hard subtitles are overlaid text that is encoded into the video picture itself, so they cannot be turned on or off, unlike closed captions or soft subtitles. Soft subtitles or closed captions are generally preferred whenever possible, but hard subtitles can be useful when targeting a device or player that does not support closed captioning.
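
For comparison, here is roughly the same set of cues as a minimal SRT file. The structure is fixed: a numeric cue index, a timing line with comma-separated milliseconds, one or more lines of caption text, and a blank line between cues.

1
00:00:00,760 --> 00:00:03,450
Trololololo

2
00:00:05,000 --> 00:00:10,000
lalala

3
00:00:10,000 --> 00:00:16,000
Oh-hahaha-ho

And when hard subtitles are the only option, they are typically burned in at transcode time. As one illustration (file names are placeholders), an FFmpeg build that includes the libass-backed subtitles filter can render an SRT file into the picture:

ffmpeg -i input.mp4 -vf subtitles=captions.srt -c:a copy output.mp4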

Captioning for every device.

What formats get used by which devices and players? Flash video players can be written to parse external caption files; for example, JW Player supports captions in SRT and DFXP formats. HTML5 captions are not yet widely supported by browsers, but that will change over time. There are two competing standards: TTML, proposed by the W3C, and WebVTT, proposed by the WHATWG. At the moment, Chrome has limited support for WebVTT; Safari, Firefox, and Opera are all working on WebVTT support; and Internet Explorer 10 supports both WebVTT and TTML. Example:

<video width="1280" height="720" controls>
  <source src="video.mp4" type="video/mp4" />
  <source src="video.webm" type="video/webm" />
  <track src="captions.vtt" kind="captions" srclang="en" label="English" />
</video>
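
Where a browser does support the track element natively, toggling captions from player-side controls is a matter of flipping the track's mode through the HTML5 TextTrack API. A minimal sketch, assuming a browser that implements textTracks with the string-valued modes from the current spec:

// Toggle the first caption track on the <video> element above.
function toggleCaptions(video) {
  var track = video.textTracks[0]; // corresponds to the <track> in the markup
  track.mode = (track.mode === 'showing') ? 'hidden' : 'showing';
}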

Until browsers support a format natively, an HTML5 player framework like Video.js can support captions through JavaScript, by parsing an external file. (Video.js currently supports WebVTT captions.)

iOS takes a different approach: it uses CEA-608 captions embedded via a modified version of the CEA-708/ATSC legacy encoding. This means that, unlike with Flash and HTML5, captions must be added at the time of transcoding. Zencoder can add captions to HTTP Live Streaming videos for iOS.

Android video player support is still fragmented and problematic. Caption support will obviously depend on the OS version and the player used. Flash playback on Android should support TTML, though very little information is available. (If you have delivered captions to native Android video apps, please let us know!) Some other mobile devices have no support for closed captions at all, and hard subtitles may be the only option.

Roku supports captions through external SRT files. Some other connected TV platforms do not support closed captioning yet, but they will soon enough. Every TV, console, cable box, and Blu-Ray player on the market today wants to stream internet content, and over the next year and a half, closed captioning will become a requirement. So Sony, Samsung, Vizio, Google TV, et al. will eventually make caption support part of their application development frameworks. Unfortunately, it isn't yet clear which formats will be used. Most likely, different platforms will continue to support a variety of incompatible formats for many years to come.

Closed captioning for internet video: 2012 edition

The landscape for closed captioning will change and mature over time, but as of 2012, here are the most common requirements for supporting closed captioning on common devices.

  • A web player (Flash, HTML5, or both) with player-side controls for enabling and disabling closed captioning.
  • An external file with caption data, probably using a format like WebVTT, TTML, or SRT. More than one file may be required - e.g. SRT for Roku and WebVTT for HTML5.
  • A transcoder that supports embedded closed captions for HTTP Live Streaming for iPad/iPhone delivery, like Zencoder. Zencoder can accept caption information in a variety of formats, including TTML, so publishers could use a single TTML file for both web playback and as input to Zencoder for iOS video.

Beyond that, things get more difficult. Other input formats may be required for other devices, and hard subtitles are probably necessary for 100% compatibility across legacy devices.

Zencoder and captions

Zencoder supports closed captioning for two output formats: CEA-608-style captions for iOS devices using HLS, and MP4 files with CEA-608 caption tracks. On the input side, we support SCC, SAMI, DFXP/TTML/SMPTE-TT, and CEA-608 caption tracks in MP4 files. See our closed captioning API docs for more details. To date, we've chosen to focus on the first of the two approaches to closed captions - embedded captions - because these formats have to be added to video files at the point of transcoding. If we didn't support captioning for iPad and iPhone, our customers publishing to these devices wouldn't be able to use closed captions at all. In the future, we'll expand the range of caption formats we accept, and we may provide services like format conversion for external caption files (e.g. TTML to WebVTT). But in the meantime, with a single caption file and the right Flash/HTML5 player, Zencoder customers have everything they need to create captioned videos for web, mobile, and (some) connected TV devices.
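
As a rough illustration of what this looks like in practice, here is a sketch of a Zencoder job request that produces a segmented HLS output with captions attached from a sidecar file. The endpoint and overall job shape follow the Zencoder API, but treat the specific fields as assumptions to verify against the closed captioning API docs above; the API key and bucket paths are placeholders.

POST https://app.zencoder.com/api/v2/jobs

{
  "api_key": "YOUR_API_KEY",
  "input": "s3://example-bucket/video.mp4",
  "outputs": [
    {
      "type": "segmented",
      "caption_url": "s3://example-bucket/captions.ttml"
    }
  ]
}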

