MPEG-DASH: CREATING A STANDARD FOR INTEROPERABILITY, END-TO-END DELIVERY

If you work in media, you have undoubtedly heard the term MPEG-DASH bandied about quite a bit. MPEG-DASH is not a codec, a protocol, a system, or a format. Instead, it is a standard for interoperability—essentially end-to-end delivery—of video over HTTP.

One of the main goals of MPEG-DASH, and ostensibly the core benefit to publishers, is the ability to reduce the cost and effort of delivering live and pre-recorded premium video experiences using open standards on existing infrastructure. Today’s premium video experiences typically include requirements for advertising, security (e.g., DRM), adaptive bitrate playback, captions, and support for multiple languages. Applying these requirements for live and pre-recorded content in a fragmented device landscape results in complexity (read: cost) for publishers’ encoding, packaging, storage, and delivery workflows.

With MPEG-DASH, industry players are endeavoring to take three de facto protocols for video delivery (Apple’s HTTP Live Streaming, Adobe’s HTTP Dynamic Streaming, and Microsoft’s Smooth Streaming) and logically “evolve” them into a composite standard. This makes sense. These three protocols are all very similar in terms of what they’re trying to accomplish: efficient and secure delivery of content for adaptive bitrate playback over an HTTP network. However, they are not compatible with each other.

Today, most publishers are striving for content ubiquity, supporting a range of devices: desktop, mobile, Connected TVs, game consoles, etc. Consequently, if publishers want to support adaptive bitrate streaming, they either have to support multiple formats, protocols, and content protection options for broader support across devices and platforms or standardize and limit their device and platform footprint.

Neither is appealing. Everybody is operating inefficiently, from content creation (encoding for multiple formats and languages, packaging for multiple content protection schemes) to duplicative storage, multiple content delivery protocols, multiple players with differing capabilities, and inconsistent ad formats.

MPEG-DASH’s goal is to streamline the video workflow so that publishers can efficiently manage their operations and deliver to any platform and device.

Is MPEG-DASH a Cure-All?

MPEG-DASH doesn’t define implementation details; instead, it leaves the following tasks and decisions to the industry at large:

  • End-to-end DRM
  • Codecs
  • File formats and backward compatibility
  • Royalty considerations and issues surrounding current and future IP

There’s still the possibility that, if publishers rush into the migration, their technology and workflow decisions will be dictated by limited or inconsistent support from individual vendors and by the lack of interoperability between vendors within the ecosystem. Publishers would then need to piece together all the parts of their stack—content delivery, advertising, analytics, encoding, DRM packaging and license management, and playback—to truly solve the end-to-end workflow.

In fact, the fragmentation we have seen with the HTML5 “standard” could be indicative of what we will encounter with MPEG-DASH.

What’s in It for Apple?

It’s also not clear why Apple would promote MPEG-DASH, given that it has put forth tremendous effort around HLS and elevated it to a de facto standard. Many systems and companies are built around this protocol, and in my view, it will likely be an uphill battle to convince Apple to sacrifice the advantage it has with HLS and instead push for standardization of an alternative to its own offering.

History Repeats Itself… or Does It?

When assessing the viability of a new standard or process, it’s helpful to consider it through a historical, comparative lens.

Consider how companies used to transport goods. Prior to the 1950s, there was no easy and efficient way to do so. But in the mid-1950s, the concept of intermodal freight transport and containers was introduced. By allowing goods to be transported by ship, rail, or truck in a standard format, the modern supply chain was born. Agreeing to standardize the process was the critical first step. MPEG-DASH is trying to accomplish a similar “sea change,” but because it avoids implementation details, there’s a significant risk that fragmentation will ultimately enter the equation.
Here are the issues we may encounter.

  • If MPEG-DASH implementations are not backward compatible, then there will be a need to support both MPEG-DASH and HLS. If HLS (or even HDS and Smooth Streaming) continues on a path that makes backward compatibility inefficient, publishers will be forced to support MPEG-DASH alongside HLS, Smooth Streaming, and HDS.
  • If client-side players (desktop, mobile, Connected TVs, game consoles) cannot broadly support MPEG-DASH, publishers will still be faced with player fragmentation. Player fragmentation flows upstream; this means the entire content workflow—from playback to delivery to packaging to encoding—will have to be duplicated alongside the MPEG-DASH workflow. For many publishers, the cost of adoption may not be worth the incremental gains.

Brightcove’s Take

Our publishers already face the operational complexity of supporting multiple formats and associated delivery protocols. We will continue to improve our capabilities to reduce the friction and effort needed for all steps in the workflow: content ingestion of multiple formats, transcoding and packaging for multiple renditions and formats needed for cross-platform playback and cross-platform DRM, and adaptive bitrate streaming for desktop, mobile web, mobile apps, and Connected TVs.

While we support the concept of standardization, we’re not yet at a point where we can eschew all other support in favor of an end-to-end MPEG-DASH scenario. Since MPEG-DASH does not account for the full breadth and depth of the video ecosystem, early adoption could lead to vendor dependence or lock-in, which would be detrimental to our customers.

Ultimately, we hope that MPEG-DASH and the vendors within the ecosystem quickly enhance their capabilities to provide publishers with more flexibility rather than force a proprietary implementation that results in vendor dependency or an incomplete implementation of the standard. In the meantime, we’re rolling up our sleeves and putting fingers to keyboards to work on our role within the MPEG-DASH ecosystem.

VIDEO.JS 4.0 IMPROVES PERFORMANCE AND STABILITY

Video.js 4.0 was released in 2013 and is available for download on GitHub and hosted for free on our CDN. As background, Video.js is an open source HTML5 video player created by the team at Zencoder, which Brightcove acquired in 2012.

Video.js disrupted the market for open source video player technology and saw tremendous adoption and market share in just a few years. The free Video.js player has been used by tens of thousands of organizations, including Montblanc, Dolce & Gabbana, Diesel, Illy, Applebee’s, Mattel, Kellogg’s, Les Echos, US Navy, Aetna, Transamerica, Washington State University, and many others.

Version 4.0 received more community collaboration than any previous version, which speaks to the growing strength of the JavaScript community, the growing popularity of HTML5 video, and an increase in Video.js usage. From 2012 to 2013, the number of sites using Video.js more than doubled, and each month there are more than 200 million hits to the CDN-hosted version alone.

There are many new features in Video.js 4.0.

  • Improved performance through an 18% size reduction using Google Closure Compiler in advanced mode
  • Greater stability through an automated cross-browser/device test suite using TravisCI, Bunyip, and Browserstack
  • New plugin interface and plugin listing for extending Video.js
  • New default skin design that uses font icons for greater customization
  • Responsive design and retina display support
  • Improved accessibility through better ARIA support
  • Moved to Apache 2.0 license
  • 100% JavaScript development tool set including Grunt

2013 will be an exciting year for Video.js, with more improvements to performance, multi-platform stability and customizability through plugins and skins. Members of the community have already started work on plugins for some of the more requested features, like playlists, analytics, and advertising.

7 VIDEO SEO BEST PRACTICES THAT DRIVE TRAFFIC

Are you looking to improve your video SEO and drive more traffic to your site? Here are 7 quick tips that can help.

1. Write Your Video Title and Description for People

Google puts importance on “writing for people” and frowns on stuffing keywords in your video title and description. Make sure you are writing titles and descriptions that are engaging and relevant to your audience, not stuffing keywords.

2. Add Tags If You Have a Video Site Map

Adding tags to your videos is a great way to organize content within the Brightcove online video platform—and it can help with SEO—but only if you have a video site map that exposes those tags to the search engines. If you have the development resources to create a simple video site map, it’s well worth the investment.

3. Use Schema Markup

Schema is a collection of HTML tags that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines rely on these schema tags to improve the display of their search results, making it easier for people to find the right web pages. If you use the itemscope attribute to identify your content as video to the search engines, it can improve your SEO results. Schema.org has an explanation of how to use the itemscope attribute and general instructions on using HTML tags to identify video.
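
To make this concrete, here is a minimal microdata sketch for a page with an embedded video; the property values and URLs are placeholders, not real content.

<div itemscope itemtype="http://schema.org/VideoObject">
    <meta itemprop="name" content="Video Tour of Aloft Brooklyn" />
    <meta itemprop="description" content="A two-minute walkthrough of the hotel." />
    <meta itemprop="thumbnailUrl" content="http://example.com/thumbnail.jpg" />
    <meta itemprop="duration" content="PT2M10S" />
    <!-- your video player embed goes here -->
</div>

The duration uses the ISO 8601 format (PT2M10S is 2 minutes, 10 seconds) that search engines expect for this property.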

4. Pick Quality Thumbnail Images

The editorial quality of your thumbnail trumps image quality. Put another way, choosing compelling thumbnails for your content can increase user clicks, and higher user engagement will improve your SEO results. It’s also worth noting that an image thumbnail will always get better traffic results than a generic video icon or a “click here for video” text link.

5. Incorporate Top Search Terms That Return Video Results

While you shouldn’t “keyword stuff” your video title and descriptions, certain words will increase your likelihood of being found by the search engines. Words like “video”, “show”, “how to”, “review” and “about” are all top terms that increase the likelihood your content will return a video thumbnail result. Don’t do anything unnatural, but if you can incorporate these words in your title or description (e.g. “Video tour of Aloft Brooklyn”) it can make a difference.

6. Optimize Your Website

The most flawless video SEO is worthless if your site is not optimized. It’s always best practice to ensure your website’s SEO is in order before tweaking your video settings.

7. Place Videos at the Top of the Page

If you have video, put it at the top of the page. This helps search engines understand that the page contains video content. It’s amazing what kind of results you’ll see from this small change.

DYNAMIC MANIFESTS AND THE FLEXIBILITY OF HLS PLAYLISTS

For years, there were two basic models of internet streaming: server-based proprietary technology, such as RTMP, and progressive download. Server-based streaming allows the delivery of multi-bitrate streams that can be switched on demand, but it requires licensing expensive software. Progressive download can be done over Apache, but switching bitrates requires playback to stop.

The advent of HTTP-based streaming protocols such as HLS and Smooth Streaming meant that streaming delivery was possible over standard HTTP connections using commodity server technology such as Apache; seamless bitrate switching became commonplace, and delivery over CDNs was simple, as it was fundamentally the same as delivering any file over HTTP. HTTP streaming has resulted in nothing short of a revolution in the delivery of streaming media, vastly reducing the cost and complexity of high-quality streaming.

When designing a video platform, there are countless things to consider. However, one of the most important and oft-overlooked decisions is how to treat HTTP-based manifest files.

Static Manifest Files

In the physical world, when you purchase a video, you look at the packaging, grab the box, head to the checkout stand, pay the cashier, go home, and insert it into your player.

Most video platforms are structured pretty similarly. Fundamentally, a group of metadata (the box) is associated with a playable media item (the video). Most video platforms start with the concept of a single URL that connects the metadata to a single mp4 video. As a video platform becomes more complex, there may be multiple URLs connected to the metadata representing multiple bitrates, resolutions, or perhaps other media associated with the main item such as previews or special features.

Things become more complicated when trying to extend the physical model to an online streaming world that includes HTTP-based streaming protocols such as HLS. HLS is based on many fragments of a video file linked together by a text file called a manifest. When implementing HLS, the most straightforward method is to simply add a URL that links to the manifest, or m3u8 file. This has the benefit of being extremely easy, basically fitting into the existing model.

The drawback is that HLS is not really a static media item. For example, an MP4 is very much like a video track on a DVD: it’s a single video at a single resolution and bitrate. An HLS manifest most likely consists of multiple bitrates, resolutions, and thousands of fragmented pieces of video. HLS has the capacity to do so much more than an MP4, so why treat it the same?

HLS Playlists

An HLS playlist includes some metadata that describes basic elements of the stream and an ordered set of links to fragments of the video. By downloading each fragment, or segment, of the video and playing them back in sequence, the user is able to watch what appears to be a single continuous video.

#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:10
#EXTINF:10,
file-0001.ts
#EXTINF:10,
file-0002.ts
#EXTINF:10,
file-0003.ts
#EXTINF:10,
file-0004.ts
#EXT-X-ENDLIST

Above is a basic m3u8 playlist. It links to four video segments. To generate this data programmatically, all that is needed is the filename of the first item, the target duration of the segments (in this case, 10), and the total number of segments.
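
As a sketch of how simple that generation step can be, here is one way to write it in JavaScript. The function name and parameters are illustrative, and it assumes every segment runs the full target duration and files are numbered file-0001.ts, file-0002.ts, and so on.

// Generate a basic VOD playlist from a base filename, a target duration,
// and a segment count. generatePlaylist('file', 10, 4) reproduces the
// playlist above.
function generatePlaylist(baseFilename, targetDuration, segmentCount) {
    var lines = ['#EXTM3U', '#EXT-X-PLAYLIST-TYPE:VOD', '#EXT-X-TARGETDURATION:' + targetDuration];
    for (var i = 1; i <= segmentCount; i++) {
        lines.push('#EXTINF:' + targetDuration + ',');
        lines.push(baseFilename + '-' + ('000' + i).slice(-4) + '.ts'); // zero-pad to four digits
    }
    lines.push('#EXT-X-ENDLIST');
    return lines.join('\n');
}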

HLS Manifests

An HLS manifest is an unordered series of links to playlists. There are two reasons for having multiple playlists: to provide various bitrates and to provide backup playlists. Here is a typical manifest, where each of the .m3u8’s is a relative link to another HLS playlist.

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2040000
file-2040k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1540000
file-1540k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1040000
file-1040k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=640000
file-640k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=440000
file-440k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=240000
file-240k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
file-64k.m3u8

The playlists are of varying bitrates and resolutions in order to provide smooth playback regardless of the network conditions. All that is needed to generate a manifest are the bitrates of each playlist and their relative paths.
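
The generation step is just as small as it was for playlists. A sketch, again with illustrative names:

// Generate a manifest from an ordered array of renditions, each described
// by its bandwidth and the relative path to its playlist. Passing the seven
// renditions above, highest bitrate first, reproduces the manifest shown.
function generateManifest(renditions) {
    var lines = ['#EXTM3U'];
    renditions.forEach(function(r) {
        lines.push('#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=' + r.bandwidth);
        lines.push(r.path);
    });
    return lines.join('\n');
}

// generateManifest([
//     {bandwidth: 2040000, path: 'file-2040k.m3u8'},
//     {bandwidth: 1540000, path: 'file-1540k.m3u8'}
// ]);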

Filling in the Blanks

There are many other important pieces of information that an online video platform should be capturing for each encoded video asset: video codec, audio codec, container, and total bitrate are just a few. The data stored for a single video item should be meaningful to the viewer (description, rating, cast), meaningful to the platform (duration, views, engagement), and meaningful for applications (format, resolution, bitrate). With this data, you enable a viewer to decide what to watch, the system to decide how to program, and the application to decide how to play it back.

By capturing the data necessary to programmatically generate a playlist, a manifest and the codec information for each of the playlists, it becomes possible to have a system where manifests and playlists are generated per request.

Example – The First Playlist

The HLS specification dictates that whichever playlist comes first in the manifest will be the first chosen for playback. In the previous section’s example, the first item in the list was also the highest quality track. That is fine for users with a fast, stable internet connection, but for people with slower connections it will take some time for playback to start.

It would be better to determine whether the device appears to have a good internet connection and then customize the manifest accordingly. Luckily, with dynamic manifest generation, that is exactly what the system is set up to accomplish.

For the purposes of this exercise, assume a request for a manifest is made with an ordered array of bitrates. For example, the request [2040,1540,1040,640,440,240,64] would return a manifest identical to the one in the previous section. On iOS, it’s possible to determine whether the user is on WiFi or a cellular connection. Since data has been captured about each playlist, including bitrate, resolution, and other such parameters, an app can intelligently decide how to order the manifest.

For example, it may be determined that it’s best to start between 800-1200kbps if the user is on WiFi and between 200-600kbps if the user is on a cellular connection. If the user were on WiFi, the app would request an array that looks something like [1040,2040,1540,640,440,240,64]. If the app detected only a cellular connection, it would request [440,2040,1540,1040,640,240,64].
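
Here is a sketch of the reordering the app might do before making that request; the function and the thresholds are illustrative, not part of any shipping API.

// Promote the first rendition that falls inside the preferred starting range
// to the front of a descending-bitrate list; the rest keep their order.
function orderBitrates(bitrates, minStart, maxStart) {
    var ordered = bitrates.slice().sort(function(a, b) { return b - a; });
    for (var i = 0; i < ordered.length; i++) {
        if (ordered[i] >= minStart && ordered[i] <= maxStart) {
            ordered.unshift(ordered.splice(i, 1)[0]);
            break;
        }
    }
    return ordered;
}

orderBitrates([2040, 1540, 1040, 640, 440, 240, 64], 800, 1200); // WiFi: [1040, 2040, 1540, 640, 440, 240, 64]
orderBitrates([2040, 1540, 1040, 640, 440, 240, 64], 200, 600);  // cellular: [440, 2040, 1540, 1040, 640, 240, 64]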

Example – The Legacy Device

On Android, video support is a bit of a black box. For years, the official Android documentation claimed support only for 640×480 baseline h.264 mp4 video, even though certain models were able to handle 1080p. In the case of HLS, support is even more fragmented and difficult to understand.

Luckily, Android is dominated by a handful of marquee devices. With dynamic manifests, the app can not only target the best playlist to start with, but also exclude playlists that are determined to be incompatible.

Since our media items also capture data such as resolution and codec information, support can be targeted at specific devices. An app could decide to send all of the renditions: [2040,1540,1040,640,440,240,64]. An older device that only supports up to 720p could drop the highest rendition: [1540,1040,640,440,240,64]. Beyond the world of mobile devices, a Connected TV app could remove the lowest quality renditions: [2040,1540,1040,640].
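
A sketch of that filtering step, assuming each rendition record carries its bitrate (in kbps) and vertical resolution; the field names are illustrative.

// Drop renditions above the device's maximum resolution or below a minimum
// bitrate, returning the surviving bitrates for the manifest request.
function filterRenditions(renditions, maxHeight, minBitrate) {
    return renditions
        .filter(function(r) { return r.height <= maxHeight && r.bitrate >= minBitrate; })
        .map(function(r) { return r.bitrate; });
}

// A 720p-limited handset: filterRenditions(renditions, 720, 0)
// A Connected TV skipping the low end: filterRenditions(renditions, Infinity, 640)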

Dynamic or Static

Choosing a static manifest model is perfectly fine. Some flexibility is lost, but there is nothing wrong with simplicity. Many use cases, especially in the user-generated content world, do not require the amount of complexity dynamic generation involves; however, dynamic manifest generation opens a lot of doors for those willing to take the plunge.

How manifests are treated will have a significant impact on the long-term flexibility of a video platform, and it is something that should be discussed with your resident compressionist.

UPLOADING AND ENCODING VIDEO USING FILEPICKER

How to go about uploading videos is one of the most common questions we get from new customers. For developers, implementing file uploads is something they’ve probably done a few times, but it’s always a pain.

Enter Filepicker

Filepicker (now Filestack) makes file upload easy. Like, really easy. You’re not limited to just local files, either; they support a wide range of sources, from Dropbox and Google to even recording a video directly from your webcam. The best part is, you can do all of this without ever leaving the front end.

Before we do anything else, you’ll need to sign up for a Filepicker account. Once you’ve done so, create a new App in your dashboard if one doesn’t exist. Take note of the API key you see; we’ll be using that later. Filepicker is nice enough to provide an S3 bucket for getting started, but take a second to set up a destination S3 bucket for your uploads.

Let’s start with a basic HTML5 template so we’re all on the same page. We’re going to use jQuery to make things simple, so we’ll include that with our boilerplate along with the Filepicker library.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US">
<head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <title>Zencoder Dropzone</title>

    <!-- jQuery Include -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>

    <!-- Filepicker.io Include -->
    <script type="text/javascript" src="//api.filepicker.io/v1/filepicker.js"></script>
</head>

<body>
    <div id="content">
        <h1>Upload all the things!</h1>
    </div>
</body>
</html>

Now we can use the Filepicker JavaScript API to allow our users to select a file and save it to our S3 bucket. We’ll also need to associate this with a link in the body so the user has something to click. First, let’s add the upload link. Since we already have a big, prominent h1 tag, let’s just add the link to that.

<h1><a href="#" id="upload-link">Upload all the things!</a></h1>

Now we want to use that link to trigger filepicker.pickAndStore when clicked. This is where we’ll use the API key you made note of earlier. Place this snippet below the Filepicker JavaScript include in the head of the page.

<script type="text/javascript">
    $(function() {
      filepicker.setKey('The_Key_From_Your_Dashboard');

      $('#upload-link').click(function(e) {
        e.preventDefault(); // This keeps the normal link behavior from happening

        filepicker.pickAndStore({},{location: 's3'},function(fpfiles){
            console.log(JSON.stringify(fpfiles));
        });
      });
    });
</script>

You’ll need to use some sort of web server to serve up the HTML, or else you won’t be able to load the external JavaScript files. You can use something like http-server, but there’s a basic Node application that will serve static files in the GitHub repository.
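
If you’d rather not install anything, a static file server takes only a few lines of Node using core modules. This is a bare-bones sketch (no MIME types, no path sanitization), and the application in the GitHub repository may differ.

// Serve files from the current directory on http://localhost:8000
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function(req, res) {
    var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function(err, data) {
        if (err) { res.writeHead(404); res.end('Not found'); return; }
        res.writeHead(200);
        res.end(data);
    });
}).listen(8000);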

Choose any file (you might want to pick something relatively small) and upload it. Right now, a successful upload just logs the fpfiles object to the console, so after you upload a file take a look at the console. If everything went according to plan, you should have an object with some information about your newly uploaded file.

You just uploaded a file from your computer with only 27 total lines of code, including the simple HTML markup. Just uploading files and leaving them there isn’t useful though, so let’s make it so users can upload videos and encode them.

Adding Zencoder

First, let’s alter our uploader to only accept video files. Filepicker allows us to restrict by mimetype, so if you’re like me, you may be tempted to try {mimetype: 'video/*'}. This will work just fine in Chrome, but your Safari users will see a much smaller subset of files that they can upload. For video, it’s much more reliable to restrict by extension, so let’s take that route.

$('#upload-link').click(function(e) {
  e.preventDefault(); // This keeps the normal link behavior from happening
  var acceptedExtensions = ['3g2','3gp','3gp2','3gpp','3gpp2','aac','ac3','eac3','ec3','f4a','f4b','f4v','flv','highwinds','m4a','m4b','m4r','m4v','mkv','mov','mp3','mp4','oga','ogg','ogv','ogx','ts','webm','wma','wmv'];
  filepicker.pickAndStore({extensions: acceptedExtensions},{location: 's3'},function(fpfiles){
      console.log(JSON.stringify(fpfiles));
  });
});

You can restrict this set of accepted files or add more, but I took the easy way out and just used the list of valid output formats from the Zencoder format documentation. This includes some audio files, but since Zencoder supports audio-only encoding we can leave them in there. Now if you click the link and browse your local files, you should notice that you can only select files with an extension on the list. If you try to drag and drop an unacceptable file, you’ll get an error.

Now that we know we’ll only be uploading files Zencoder can support, let’s make it so a successful upload sends that file to Zencoder for encoding. Before we can do that, we’ll need to set our Zencoder API key. You can just include this right below your Filepicker key.

filepicker.setKey('The_Key_From_Your_Dashboard');
var zenKey = 'Zencoder_API_Key';

Now we’ll use jQuery’s $.ajax to send our API request to Zencoder upon successful upload.

filepicker.pickAndStore({extensions: acceptedExtensions},{location: 's3'},function(fpfiles){
  // This is the simplest Zencoder API call you can make. This will output an h.264 mp4 with AAC audio and
  // save it to Zencoder's temporary storage on S3.
  var request = {
    "input": fpfiles[0].url
  }
  // Let's use $.ajax instead of $.post so we can specify custom headers.
  $.ajax({
      url: 'https://app.zencoder.com/api/v2/jobs',
      type: 'POST',
      data: JSON.stringify(request),
      headers: { "Zencoder-Api-Key": zenKey },
      dataType: 'json',
      success: function(data) {
        $('body').append('Job created! <a href="https://app.zencoder.com/jobs/'+ data.id +'">View Job</a>')
      },
      error: function(data) {
        console.log(data);
      }
  });
});

Now refresh your page and upload a video. If everything has gone according to plan you should see a success message with a link to your newly created job.

Only 47 lines of code later you have a web page that will allow you to upload a video and send it off for encoding.

Notes and Warnings

It’s a bad idea to put your Zencoder API key in plain text inside of JavaScript. Just to repeat that one more time: Do not use this in code that other people could possibly access. Nothing would stop people from taking your API key and encoding all the video they wanted.

A much better idea would be to use Filepicker as described but actually make the Zencoder API call in your back end where your API key is safe from prying eyes.

Taking It a Step Further

Drag and drop is really cool, so we wanted to make a whole page that uses Filepicker’s makeDropPane. Users have to put their API key in before doing anything and it’s not stored in the code, so the demo is safe to put online.

This version validates the API key, includes a history of your recent Zencoder jobs, and allows you to modify the request template. All of these settings are saved in your browser’s localStorage so you don’t lose everything on a refresh.

CONCATENATING VIDEO WITH HLS MANIFESTS

This article is focused on HTTP Live Streaming (HLS), but the basic concepts are valid for other HTTP-based streaming protocols as well. A deep dive into the HLS protocol is beyond the scope of this article, but a wealth of information is available online, including the published specification, HTTP Live Streaming.

Concatenation and The Old Way

Content equals value, so in the video world one way to create more value is to take a single video and mix it with other videos to create a new piece of content. Many times this is done through concatenation, the ability to stitch multiple videos together, which represents a basic form of editing. Add to that the creation of clips through edit lists, and you have two of the most basic functions of a non-linear editor.

As promising as concatenation appears, it can also introduce a burden on both infrastructure and operations. Imagine a social video portal. Depending on the devices it targets, there could be anywhere from a handful to many dozens of output formats per video. Should it decide to concatenate multiple videos to extend the value of its library, it will also see a massive increase in storage cost and in the complexity of managing assets. Each time a new combination of videos is created, a series of fixed assets is generated and needs to be stored.

HTTP Live Streaming and The Manifest File

The introduction of manifest driven HTTP-based streaming protocols has created an entirely new paradigm for creating dynamic viewing experiences. Traditionally, the only option for delivering multiple combinations of clips from a single piece of content was through editing, which means the creation of fixed assets. With technology such as HLS—since the playable item is no longer a video file, but a simple text file—making edits to a video is the same as making edits to a document in a word processor.

For a video platform, there are two ways to treat the HLS m3u8 manifest file. Most simply, the m3u8 file can be treated as a discrete, playable asset. In this model, the m3u8 is stored on the origin server alongside the segmented TS files and delivered to devices. The result is simple and quick to implement, but the m3u8 file can only be changed through a manual process.

Instead, by treating the manifest as something that is dynamically generated, it becomes possible to deliver a virtually limitless combination of clips to viewers. In this model, the m3u8 is generated on the fly; it doesn’t sit on the server, but is created and delivered every time it is requested.

Dynamic Manifest Generation

What is a manifest file? Most basically, it is a combination of some metadata and links to segments of video.

Exemplary Video A

#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXTINF:10,
Exemplary_A_segment-01.ts
#EXTINF:10,
Exemplary_A_segment-02.ts

The above m3u8 has two video segments of 10 seconds each, so the total video length is 20 seconds. Exemplary Video A, which, by the way, is a truly great video, is 20 seconds long. Now let’s imagine we also have:

Exemplary Video B

#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXTINF:10,
Exemplary_B_segment-01.ts
#EXTINF:10,
Exemplary_B_segment-02.ts

And let’s also say that we know that a particular viewer would be thrilled to watch a combination of both videos, with Video B running first and Video A running second:

Superb Video

#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXTINF:10,
Exemplary_B_segment-01.ts
#EXTINF:10,
Exemplary_B_segment-02.ts
#EXT-X-DISCONTINUITY
#EXTINF:10,
Exemplary_A_segment-01.ts
#EXTINF:10,
Exemplary_A_segment-02.ts

Now, instantly, without creating any permanent assets that need to be stored on origin, and without having involved an editor to create a new asset, we have generated a new video for the user that begins with Video B followed by Video A. As if that wasn’t cool enough, the video will play seamlessly as though it was a single video.

You may have noticed a small addition to the m3u8:

#EXT-X-DISCONTINUITY

Placing this tag in the m3u8 tells the player to expect the next video segment to be a different resolution or to have a different audio profile than the last. If the videos are all encoded with the same resolution, codecs, and profiles, then this tag can be left out.

Extending the New Model

The heavy lifting for making a video platform capable of delivering on-the-fly, custom playback experiences is to treat the m3u8 manifest not as a fixed asset, but as something that needs to be generated per request. That means that the backend must be aware of the location of every segment of video, the total number of segments per item, and the length of each segment.

There are ways to make this simpler. For example, by naming the files consistently, only the base filename needs to be known for all of the segments, and the segment iteration can be handled programmatically. It can be assumed that all segments except the final segment will be of the same target duration, so only the duration of the final segment needs to be stored. So, for a single video file with many segments, all that needs to be stored is the base path, base filename, number of segments, average segment length, and length of the last segment.
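
With those five fields stored per video, concatenation reduces to a small generation routine. Here is a sketch, with an illustrative record shape; called as concatenate(recB, recA), it would reproduce the “Superb Video” manifest from earlier.

// Emit the #EXTINF/segment pairs for one stored video record, e.g.
// {basePath: '', baseFilename: 'Exemplary_A_segment', segmentCount: 2,
//  targetDuration: 10, lastSegmentDuration: 10}
function segmentsFor(rec) {
    var lines = [];
    for (var i = 1; i <= rec.segmentCount; i++) {
        var duration = (i === rec.segmentCount) ? rec.lastSegmentDuration : rec.targetDuration;
        lines.push('#EXTINF:' + duration + ',');
        lines.push(rec.basePath + rec.baseFilename + '-0' + i + '.ts'); // assumes fewer than 10 segments
    }
    return lines;
}

// Stitch two records into a single playlist, marking the join point.
function concatenate(first, second) {
    return ['#EXTM3U', '#EXT-X-MEDIA-SEQUENCE:0',
            '#EXT-X-TARGETDURATION:' + Math.max(first.targetDuration, second.targetDuration)]
        .concat(segmentsFor(first))
        .concat('#EXT-X-DISCONTINUITY')
        .concat(segmentsFor(second))
        .join('\n');
}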

By considering even long-form titles to be a combination of scenes, or, even further, by considering scenes to be a combination of shots, there is an incredible amount of power that can be unlocked through dynamic manifest generation. If planned for and built early, the architecture of the delivery platform can achieve a great deal of flexibility without a subsequent increase in operational or infrastructure costs.

HOW AVID TECHNOLOGY CREATED CUSTOMIZED EDUCATION FEATURES

It’s always interesting to explore how Brightcove customers are utilizing our technology and realizing measurable benefits from their implementations. It was particularly fun to learn more about how Avid Technology is using Brightcove since our world is intrinsically linked to theirs. While we help customers deliver and manage online video content, Avid is often the creative tool used in the video development process.

Avid, another Massachusetts-based company, specializes in video and audio production technology—specifically, digital non-linear editing systems, management and distribution services. Creative professionals from Hollywood to Madison Avenue rely upon Avid’s suite of products to fulfill their visual storytelling needs. Since Avid’s 1987 launch, its technology innovations have earned it hundreds of awards, including two Oscars, a Grammy and 14 Emmys. The company certainly wields “video cred.”

So what led Avid to Brightcove? Though Avid is an expert on the video development front, it sought outside expertise for video distribution best practices. Our customer case study discusses the Avid/Brightcove relationship in further detail, but we wanted to use this post to offer a brief synopsis.

Essentially, Avid’s path to online video dates to the spring of 2010, when the company began to investigate live webcasting options, including video. Ultimately, Avid assembled a DIY, Flash-based webcasting solution that incorporated both chat and video for an interactive experience. With this knowledge in hand, the company began to research online video platforms that would provide additional, on-demand viewing capabilities—and also help the company grow into additional educational video functionality moving forward.

In March 2012, Avid selected Brightcove as its online video platform of record. Since then, the company has integrated video into its website help offerings—directing users to tutorial video content when they are working within Avid software and a question arises. Currently, the Avid team is working to migrate its video content marketing assets to Video Cloud so that they can be easily organized and managed as well as optimized for mobile devices. In the future, Avid plans to take advantage of Brightcove to improve video-driven SEO and add user-generated content to its website.

PUMA DRIVES CUSTOMER ENGAGEMENT WITH ONLINE VIDEO

We’ve written at length about the role that online video plays in the content marketing ecosystem, helping brands build lasting relationships with their customers. PUMA, one of the most well-known footwear and apparel brands in the world, is a great example of a marketer that understands the power of video and how it helps to increase engagement with customers.

PUMA produces and publishes a wide range of video content around the world to support its products but also to bring customers on a journey. While PUMA is known for its cutting-edge products, its brand really comes alive through the context that the company puts the products in and the lifestyle that the brand portrays. PUMA looks to video as an opportunity for engagement and a way to direct customers to a cadence-specific, multi-screen experience.

This strategy was put to great use at the 2012 London Olympics, where PUMA created an entire brand environment for its customers to interact with both in person and remotely through live video content, with events and content timed around PUMA-sponsored Jamaican sprinter, Usain Bolt, and his epic performances in the 100 and 200 meters.

We recently sat down with Jay Basnight, head of digital strategy at PUMA, to learn more about the company’s video strategy and the impact of video in driving engagement. Jay talks in detail about the importance of video and how PUMA measures success, as well as how the company uses the Brightcove video platform to support its video efforts around the world.

USING SALESFORCE BULK API AND APEX CODES WITH BRIGHTCOVE

At Brightcove, we use Salesforce to manage our customer information. Our sales, account management, support and finance teams also use it for various activities such as contacting sales leads, tracking support cases, and generating usage reports. It’s important for our business to keep pushing customer data into Salesforce in a timely and reliable way.

The data model for our products supports a many-to-many relationship between users and accounts. An account object represents an organization or a department within a large organization, and a user object represents an individual who works for one or multiple organizations. In Salesforce, we customize the built-in Contact object to represent each user of Brightcove services and we define a custom object called BCAccount to represent an account (see figure 1).

Figure 1. Data Model in Brightcove Service and Salesforce

Several years ago, we built the data synchronization feature using the Salesforce SOAP API and Quartz, and we have seen some problems with this implementation. There are two major difficulties:

  • It is too chatty, which makes it slow. Only 700 objects can be synchronized to Salesforce per hour.
  • It requires a lot of effort to make any changes to the data model. Adding a new field to an object forces us to export a new WSDL file from Salesforce and regenerate Java classes from it.

In light of these difficulties, we decided to build a new synchronization system using the Salesforce bulk API and Apex code. The new implementation consists of a data pushing engine called RedLine and a set of Salesforce Apex classes to process bulk data pushed from RedLine.

Figure 2. New Data Synchronization

RedLine is built using Sinatra, a lightweight Ruby web framework, as a standalone service independent of the other Brightcove services. RedLine uses the rufus-scheduler gem to periodically query object creates, updates, and deletes from Brightcove via RESTful APIs. RedLine then transforms the JSON responses to CSV and sends the data to Salesforce as a bulk request. Salesforce has a limit of 10,000 objects per bulk request, which is enough for our usage. Since bulk requests are processed asynchronously in Salesforce, neither the Brightcove services nor RedLine needs to wait after sending data to Salesforce.
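
RedLine itself is written in Ruby, but the JSON-to-CSV transformation is easy to sketch; here is the idea in JavaScript, with illustrative field names. Because the columns are derived from whatever keys the JSON contains, a new field flows straight through to a new CSV column with no code change.

// Flatten an array of uniform JSON objects into CSV, deriving the column
// set from the first object's keys.
function jsonToCsv(objects) {
    var columns = Object.keys(objects[0]);
    var rows = objects.map(function(obj) {
        return columns.map(function(col) {
            return '"' + String(obj[col]).replace(/"/g, '""') + '"'; // escape embedded quotes
        }).join(',');
    });
    return [columns.join(',')].concat(rows).join('\n');
}

// jsonToCsv([{id: 123, name: 'Acme Corp', plan: 'pro'}])
// => 'id,name,plan\n"123","Acme Corp","pro"'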

We wrote a few Apex classes to process bulk requests, including adapting the user and account objects to the Salesforce objects, then deployed the Apex classes to Salesforce and scheduled Apex batch jobs to run them once data arrives as a bulk request. In this way, no code for the Salesforce data model exists in Brightcove services; only the Apex code needs to deal with the Salesforce data model. Salesforce provides a set of monitoring tools for both bulk requests and Apex batch jobs.

If there are any errors during the processing of a bulk request, we can easily see them in the Salesforce Web UI. We also deployed an Apex class that runs periodically to check whether bulk requests are arriving at the expected frequency and alerts us if a request has not arrived for a while.

In the new synchronization system, releasing a change that adds new fields to the user or account object just requires adding the new fields to the Salesforce custom object and then exposing them in the JSON response of the Brightcove service API. We don’t need to change or restart RedLine for object format changes, since RedLine is smart enough to convert new fields in the JSON into new columns in the CSV bulk requests.

There have been four changes to account objects and one change to user objects, and we didn’t have to change a line of RedLine code for any of them. With the old SOAP API-based synchronization system, it used to take us one to two weeks to synchronize a new field for user or account objects.

After running this new synchronization application in production for eight months, we have seen it handle a couple of bursts of data changes gracefully. Recently, a batch change of 900 accounts was made during a deployment, and all of them were synchronized to Salesforce in less than a minute (most of that time was spent by the Apex classes running in Salesforce). It used to take longer than an hour to synchronize the same number of objects in the old synchronization system.