100 simultaneous live streams at Inter High

MID-ROLL ADS IN LIVE BROADCASTS

Sports Communications operates the internet sports media “SPORTS BULL” (abbreviated to “Spobu”). It has partnerships with over 60 media outlets and covers over 40 sports. The service is not limited to sports with a wide fan base, such as baseball and soccer; it also provides detailed coverage of sports that may be minor now but want to expand their fan base, and its user numbers are growing rapidly. Viewer and visitor numbers rise dramatically during events, and daily active users (DAU) for the 2018 summer high school baseball tournament exceeded 3 million.

Alongside high school baseball, the summer’s other major content is the Inter-High School Athletic Meet. Yusuke Kumagai, a director at the company, says, “Amateur sports have a very wide target audience. They are deeply rooted in the local community and can be enjoyed by people of all ages, from the current generation to their parents and grandparents. In fact, only about half of the people at our company have the habit of watching professional sports. Even so, everyone is drawn to amateur sports. There is a mysterious charm to them; perhaps it is the formative experience so many Japanese people share from club activities in their school days.” For a company working to distribute information on all kinds of sports, and for the fans of Spobu, this tournament to decide the best high school team in Japan is a major event.

The desire to broadcast the Inter-High live led the company to adopt a video platform. Since its founding, the company had distributed a variety of video content, and the response from viewers was good. However, the burden on the content managers was heavy, and it was difficult for a startup with a small staff to fully commit to video. Live broadcasting raises that burden even further and requires dedicated, full-time staff. As a system that could solve these issues, Brightcove Video Cloud was the best fit.

Mr. Taiki Kumagai, Manager of the Development Department, recalls, “Brightcove Video Cloud was the only solution that let us operate live streaming with pre-roll and mid-roll advertisements in a stable way. The fact that no special IT knowledge was required to operate it was also attractive, and we were confident that we could achieve live streaming for Inter-High.tv and BIG6.tv (Tokyo Big 6 Baseball League) with our current system.”

700 LIVE BROADCASTS WERE ACHIEVED DURING THE INTER-HIGH PERIOD

Brightcove was adopted in 2018, when the company had about 10 employees. The team ran tests in preparation for the summer Inter-High and held numerous meetings with the local partner companies that would actually film the matches. They then established a process for filming videos, importing them into Brightcove Video Cloud, and distributing them. The company issued dedicated accounts so that local partners could deliver videos by accessing the video platform directly, eliminating the need to upload, download, and re-upload files. In this way, matches were delivered live to viewers with a minimum of administrative work.

We will soon enter the 5G era. The need for video will only continue to grow. I feel that preparing the platform and streamlining the delivery process in advance was a great benefit for our business.

Yusuke Kumagai

Director, Undo Tsushinsha Co.

The distribution was on a large scale: over 100 videos were delivered to viewers per day, and live broadcasts ran simultaneously across multiple sports, with 700 matches broadcast live during the period. In the end, around 12,000 videos of various lengths were edited and published.

“Now that we can complete all our work on the video platform, we feel that the total man-hours required have been reduced by about 30%. I can say with certainty that we would not have been able to achieve this scale of distribution without Brightcove Video Cloud” (Taiki Kumagai).

CONTENT THAT CAN BE ENJOYED BY ‘PASSIONATE USERS’

Amateur sports are said to attract many “heavy users”: viewers who stay for long periods, watch videos closely, and return to the site repeatedly. The number of impressions may not be as high as for major sports, but there are definitely users with a passion for each sport. The company is trying to deliver content that will make these users enjoy the site even more.

Yusuke Kumagai says, “Our aim is to become a presence that ‘watches’, ‘plays’, and ‘supports’ sports. At the moment, we are focusing on ‘watching’ using videos, but we are also trying to support the creation of experiences through ‘playing’.”

In the future, the company wants to become a presence that supports. Its goal is to be more directly involved in supporting the transmission of better information and in consulting on monetization methods, while cultivating and developing the audience that is interested in its content. In doing so, it may be able to convey the appeal of excellent video content, and the advertising revenue it generates, to all kinds of sports organizations.

“We are about to enter the 5G era. As communication speeds increase and it becomes possible to exchange large amounts of data at high speeds, the demand for video will only continue to grow. I feel that preparing the platform and streamlining the distribution process in advance has been a great benefit to our business,” he says.

SETTING UP AN END-TO-END ENCRYPTED TRANSCODING PIPELINE

For many Zencoder customers, ensuring that their content is secure during the transcoding process is a top priority. Now that Zencoder supports encrypted inputs, customers can ensure that their data is never stored unencrypted as it flows through Zencoder. In short, Zencoder can accept encrypted input, decrypt it for transcoding, then re-encrypt output videos before writing them to a storage location. The importance of this workflow is that both inputs and outputs are protected: if an unauthorized user were able to access these encrypted files, they would be unable to view them without the key and IV pair used to encrypt them. Let’s walk through how this process works. Before we get started, we’ll need an encrypted input. For this example, we’ll encrypt a file locally using OpenSSL, then upload it to S3 before creating the transcoding job.

$ openssl aes-256-cbc -k zencoderisawesome -in trailer_test.mp4 -out trailer_test.mp4.enc -p

The -k flag sets the passphrase we want to use, which in this case is “zencoderisawesome”. The -p flag tells OpenSSL to print the salt, key, and IV it derives, which we’ll need for decryption later. For us, the output looked like this:

salt=9E7E90A964768A2F
key=DAFF64EAE3B3AB9C7905871E407293D4987E16DE76578372E161B1261F39CD66
iv =375FDBBB213C062D544FCB5A6ACBA44E

Now the file is encrypted, so you shouldn’t be able to play it as you could before. Next we need to upload the file to S3 or an FTP server somewhere so Zencoder can access it; we’ll just use the S3 upload interface. Time to build the request. We’ll use the Node.js library to send the request in these examples, but the same requests could also be sent using another tool such as the Request Builder. We’ll need to specify the encryption key and IV we used above for the input.

var zencoder = require('zencoder')();
zencoder.Job.create({
  input: "s3://zencoder-demo/trailer_test.mp4.enc",
  decryption_method: "aes-256",
  decryption_key: "DAFF64EAE3B3AB9C7905871E407293D4987E16DE76578372E161B1261F39CD66",
  decryption_password: "zencoderisawesome"
}, function(err, data) {
  if (err) {
    console.log("Job wasn't created");
    return console.log(err);
  }
  console.log("Woo!");
  console.log(data);
});

This would be enough to create a standard h.264 output, but it wouldn’t be encrypted in any way. Sometimes this is useful, because you may want to take an encrypted mezzanine file (a very high quality file used to create other, lower quality outputs) and use it to produce watermarked or lower quality outputs for distribution. Let’s pretend we want to take one mezzanine file and upload it to three different services. We want one output to be an unencrypted, low quality version with a watermark, and the other two to be encrypted using two different keys, one with an identifying watermark and the other without. Before we can create this request, though, we’ll need to generate the two keys we’re going to use. We’ll use OpenSSL again to create these new keys:

$ openssl enc -aes-256-cbc -k supersecret -P
salt=12B83BBF81DFA5B7
key=48A9E3FA8A629AEBA5B4F1FAC962920F0D7084E306E0D01A0ED01C920BBCBD08
iv =2B3CABAB503198DB32394245F54E2A34

$ openssl enc -aes-256-cbc -k anothersecret -P
salt=DE2DE044EA5FEB2A
key=3AAE9D6E5212224BB9F76E328D2BD826F17B4FC292845B6E3B72634D2C28052D
iv =169C3DE53C56E74130CDA57BA85F8255

Now we can use these keys when we encrypt the outputs during the transcoding process.

zencoder.Job.create({
  input: "s3://zencoder-demo/trailer_test.mp4.enc",
  decryption_method: "aes-256",
  decryption_key: "DAFF64EAE3B3AB9C7905871E407293D4987E16DE76578372E161B1261F39CD66",
  decryption_password: "zencoderisawesome",
  outputs: [
    {
      url: 's3://some-bucket/decrypted.mp4',
      quality: 3,
      width: 320,
      watermarks: [{
        url: 's3://zencoder-live/test-job-watermark.png'
      }]
    },
    {
      url: 's3://some-other-bucket/encrypted-watermarked.mp4',
      width: 720,
      watermarks: [{
        url: 's3://zencoder-live/test-job-watermark.png'
      }],
      encryption_method: "aes-256",
      encryption_key: '48A9E3FA8A629AEBA5B4F1FAC962920F0D7084E306E0D01A0ED01C920BBCBD08',
      encryption_iv: '2B3CABAB503198DB32394245F54E2A34'
    },
    {
      url: 's3://some-bucket/encrypted-out.mp4',
      width: 720,
      encryption_method: "aes-256",
      encryption_key: '3AAE9D6E5212224BB9F76E328D2BD826F17B4FC292845B6E3B72634D2C28052D',
      encryption_iv: '169C3DE53C56E74130CDA57BA85F8255'
    }
  ]
}, function(err, data) {
  if (err) {
    console.log("Job wasn't created…");
    return console.log(err);
  }
  console.log("Woo!");
  console.log(data);
});

Omitting encryption from one output and encrypting two others separately might seem like a wacky thing to do, but consider the use case. The low quality output could be used as a sample (you could even create a shorter clip for this purpose). One of the high quality versions has a watermark identifying the person the video is being delivered to, so you could provide them the key to decrypt and watch, and if the video is ever found outside of their control you know whose copy it was. The third, unwatermarked copy would be uploaded back to a bucket we control, so we can use it for distribution later.

Once you have one of these encrypted files locally, you can decrypt it using a similar process to the one we used to encrypt it originally. To decrypt the watermarked file:

$ openssl enc -aes-256-cbc -d -K 48A9E3FA8A629AEBA5B4F1FAC962920F0D7084E306E0D01A0ED01C920BBCBD08 -iv 2B3CABAB503198DB32394245F54E2A34 -in encrypted-watermarked.mp4 -out decrypted-watermarked.mp4

To decrypt the file without the watermark:

$ openssl enc -aes-256-cbc -d -K 3AAE9D6E5212224BB9F76E328D2BD826F17B4FC292845B6E3B72634D2C28052D -iv 169C3DE53C56E74130CDA57BA85F8255 -in encrypted-out.mp4 -out decrypted-out.mp4

There you go! You’ve now got an end-to-end encrypted encoding pipeline. The encrypted file used in these examples is available at that location and really was encrypted using these credentials, so feel free to use it as a test file.

Just a note: this is not to be confused with digital rights management, or DRM. A proper DRM solution handles things like access rights to content, which can be much more granular, down to specific devices and users. An encrypted file can be viewed by anyone holding the encryption key and associated password; that’s the only criterion.

PLAYBOOK: HOW TO CREATE A SPORTS VIDEO STRATEGY

The value of sports goes beyond a game, league, team, or player. Fundamentally, sports is composed of moments. And people don’t just remember a moment; they remember where they were, who they were with and even what they were eating. Sports grow and thrive on emotion. Whether it’s joy, despair, or envy, sports evoke a range of emotions in one second, one hundredth of a second, or after one decade.

When publishers think about sports in context of video, they should realize the opportunities to engage viewers and create an experience that melds the spontaneity of news, the dramatic arc of narrative film, and—with the passing of each game and each season—a trove of data.

4 S’s of Sports

For publishers, the opportunities to drive greater audience engagement with video can be grouped into four categories.

  • Statistics and scores
  • Social and sharing
  • Spontaneity
  • Stories

Statistics and Scores

Statistics and scores are how we record, measure, analyze, and track sports. Inevitably, someone or some team receives the “W” and at least one person or team receives the “L,” with the occasional tie/draw for good measure.

Video can provide context for any type of real-time statistics.

During a sporting event, while it’s common to showcase the “big” plays, non-scoring moments are just as effective for understanding the ebb and flow of a game, team, or player.

  • Penalties (or controversial or missed calls)
  • Seemingly minor moments (e.g., a hesitation during a relay exchange, a player substitution)
  • Strategy (e.g., set plays in football, volleyball)
  • Performance (e.g., player splits or changes in pitching velocity)

For some, the sport itself is only a vehicle for an even greater passion that spans not just games, but seasons, jobs, cities, and friendships.

While fantasy sports games are most popular with baseball and football, it’s not uncommon to see fantasy games for soccer, basketball, auto racing, golf and more.

During the fantasy sports season and during an individual game, video can be used to augment the real-time data collected from recent games or games in progress, highlighting any type of fantasy sports “scoring”: touchdowns, goals, strikeouts, big yardage gains, and more.

But fantasy sports participants also spend their time passively staring into a stream of real-time data. Publishers can extend this data-driven experience into a leanback video experience by aggregating, consolidating, and serializing video highlights into a video story about their fantasy team and players—a virtual “sizzle reel” of statistics and scores.

Even more compelling than synchronizing video with real-time or recent data is the potential for utilizing video to create additional context when researching team and player statistics. Publishers can create compelling experiences by enabling consumers to not just view statistics but to research via video.

Social and Sharing

Sports is commonly a social activity for participation, attendance, and viewing.

Video can play an important role beyond watching the game itself.

Whether delivered via a personal network (email or text) or a social network (Facebook or Twitter), sports lives beyond the moment, enhancing its replay value.

With social media, publishers can use video to:

  • Start conversations about a specific moment (a score, an “almost” score, an exuberant fan or a frustrated coach)
  • Enable viewers to start a conversation about a specific moment, “remix” their own series of moments, or create their own SportsCenter-style recap
  • Create new opportunities for monetization with sponsored themes of content

Sports clubs and leagues can use video to:

  • Announce changes to a stadium to entice season ticket holders or attendees (e.g., dining options, views from bleachers and suites, and previews of special game day events or giveaways)
  • Leverage user-generated content to strengthen the fanbase and mobilize that audience to build long-lasting brand value, (e.g., the “best” examples of signs, cheers, “at home” fans, tailgate parties, and food)

Spontaneity

Publishers should ensure all video experiences, from desktop to mobile to Connected TVs, adapt to the viewer’s desire to discover content. Sports content has the unique characteristics of being:

  • Consumed live, time-shifted, and pre-recorded
  • Viewed from the perspective of leagues, teams, players, and fans
  • Inextricably linked to data

With all these facets, publishers have the opportunity to optimize discovery and promotion to increase engagement and monetization.

In the leanback mode of video consumption, publishers should let the content feel spontaneous. A viewer may start by watching a replay of Michael Phelps’ gold medal victory by the narrowest of margins: one hundredth of a second. Keying on the video’s notion of a “close finish,” the video experience could automatically program a dynamic playlist of similar content: Christian Laettner’s turnaround jumper against Kentucky, Jimmie Johnson’s 0.0002 second victory at Talladega, or the Blackhawks’ last minute rally to win the Cup.

Stories

Video can tell a story in six seconds or six hours. For publishers that focus on sports content, every video tells a story: a story about a league, a team, a player, a coach, or a fan. But sports stories, as they can encompass any number of factors, can extend beyond the more traditional set of organized leagues.

Armed with a GoPro, we can watch an angler wrestle with a 950-pound marlin, climbers scale El Capitan, players engage in geocaching, or urban explorers scurry beneath the cobblestones of Paris.

Publishers have the unique opportunity to appeal to the emotions of the consumer. While news content is transient and derives value from both its immediacy and as a historical archive, a sports moment can be watched again and again and again with the same level of drama.

Sports enables people to engage in their passion, as a participant or as a fan, with video playing a vital role that can heighten the experience with every win, loss, or draw.

HOW LUXEMBURGER WORT STREAMS THE TOUR DE FRANCE

If you compare Luxembourg’s population with its successful Tour de France participants, you immediately understand why cycling has become the small country’s national sport. Since the first “Grand Boucle,” which started 100 years ago, 15 Luxembourgish cycling greats have celebrated 70 stage victories in the Tour. Five Luxembourgers have already won the most famous bike race in the world and were able to take the yellow jersey home at the end of the three-week ordeal. It’s no wonder, then, that cycling is also one of the hottest topics these days for one of the country’s most important daily newspapers, the Luxemburger Wort.

With its own team of journalists and photographers on site, the news portal of the Luxemburger Wort, wort.lu, offers its readers daily updated video coverage. Using Brightcove, a small team manages the site’s entire video content directly from the wort.lu newsroom and provides online videos, shorter video reports of one to five minutes, and other formats such as longer video interviews. During the entire Tour, cycling fans can keep up to date with the Tour live ticker on wort.lu and can access the official video summaries from the organizers, the Amaury Sport Organization (ASO), on the website every evening. And of course, the newspaper, which is published by Verlag Saint-Paul Luxembourg S.A. with a daily print run of 85,000 copies, hopes that cyclists from Luxembourg will once again be at the forefront of the race in this Tour anniversary year.

One of the key factors in the publisher’s decision to implement Brightcove in 2012 was that Luxembourg has one of the highest mobile usage rates in Europe. In a country where residents increasingly access news while on the move, reliable, high-quality video delivery across a wide range of platforms is essential. With our Video Cloud SDK, the wort.lu development team was able to build mobile applications for the site shortly after the two-week implementation, which went smoothly thanks to our extensive documentation and collaboration with our support team.

Localization also drew the attention of the publisher’s decision-makers to Brightcove at DMEXCO 2012, because wort.lu offers both its Tour de France coverage and its entire editorial content in German, French, English, and Portuguese. This means the editorial team can use Brightcove to automate labels and other standard text for their video content, such as subtitles, in multiple languages.

For Marc Thill, editor-in-chief of Luxemburger Wort, the strategic direction of wort.lu in the video sector is clear: “Our online and print editorial teams are slowly growing together, and for that we needed a solution like Brightcove that can grow with our future video needs, which will surely increase. In particular, as far as traffic generation for our internet news portal wort.lu is concerned, the annual Tour de France coverage is a fixed highlight of the year for our readers, who are traditionally enthusiastic about cycling.”

The publisher is aware that its digital offering also has to prove itself daily in the growing competition for internet dominance beyond such major sporting events in order to maintain and expand the leading position of wort.lu in the Luxembourg news market. With Brightcove, says Marc Thill, the publishing house has met the necessary technical conditions. The wort.lu newsroom team can now deliver its online video content dynamically and reliably to its various mobile and online user groups on a daily basis without much technical effort. The publishing house can now also respond to the steadily increasing demand from wort.lu advertising partners for close cooperation in the online sector by offering a new range of video advertising options thanks to the use of Brightcove. The analytics function, which is important for successful monetization in the video market, is included in the Brightcove package.

MPEG-DASH: CREATING A STANDARD FOR INTEROPERABILITY, END-TO-END DELIVERY

If you work in media, you have undoubtedly heard the term MPEG-DASH bandied about quite a bit. MPEG-DASH is not a codec, a protocol, a system, or a format. Instead, it is a standard for interoperability—essentially end-to-end delivery—of video over HTTP.

One of the main goals of MPEG-DASH, and ostensibly the core benefit to publishers, is the ability to reduce the cost and effort of delivering live and pre-recorded premium video experiences using open standards on existing infrastructure. Today’s premium video experiences typically include requirements for advertising, security (e.g., DRM), adaptive bitrate playback, captions, and support for multiple languages. Applying these requirements for live and pre-recorded content in a fragmented device landscape results in complexity (read: cost) for publishers’ encoding, packaging, storage, and delivery workflows.

With MPEG-DASH, industry players are endeavoring to take three de facto protocols for video delivery (Apple’s HTTP Live Streaming, Adobe’s HTTP Dynamic Streaming, and Microsoft’s Smooth Streaming) and logically “evolve” them into a composite standard. This makes sense. These three protocols are all very similar in terms of what they’re trying to accomplish: efficient and secure delivery of content for adaptive bitrate playback over an HTTP network. However, they are not compatible with each other.

Today, most publishers are striving for content ubiquity, supporting a range of devices: desktop, mobile, Connected TVs, game consoles, etc. Consequently, if publishers want to support adaptive bitrate streaming, they either have to support multiple formats, protocols, and content protection options for broader support across devices and platforms or standardize and limit their device and platform footprint.

Neither is appealing. Everybody ends up operating inefficiently: content creation (encoding for multiple formats and languages, packaging for multiple content protection schemes), duplicative storage, multiple content delivery protocols, multiple players with differing capabilities, and inconsistent ad formats.

MPEG-DASH’s goal is to streamline the video workflow so that publishers can manage it efficiently and deliver to any platform and device.

Is MPEG-DASH a Cure All?

MPEG-DASH doesn’t define the implementation details; instead, it leaves the following tasks and decisions to the industry at large.

  • End-to-end DRM
  • Codecs
  • File formats and backward compatibility
  • Royalty considerations and issues surrounding current and future IP

There’s still the possibility that if publishers rush into the migration, their technology and workflow decisions would be dictated by the limited or inconsistent support by individual vendors in the ecosystem and the lack of interoperability between the vendors within the ecosystem. Publishers would then need to piece together all the parts of their stack—content delivery, advertising, analytics, encoding, DRM packaging and license management, and playback—to truly solve the end-to-end workflow.

In fact, the fragmentation we have seen with the HTML5 “standard” could be indicative of what we will encounter with MPEG-DASH.

What’s in It for Apple?

It’s also not clear why Apple would promote MPEG-DASH, given that they have put forth tremendous effort around HLS and elevated it to a de facto standard. Many systems and companies are built around this protocol, and in my view, it will likely be an uphill battle to convince Apple to sacrifice the advantage they have with HLS and instead push for standardization of an alternative to their offering.

History Repeats Itself… or Does It?

When assessing the viability of a new standard or process, it’s helpful to consider it through a historical, comparative lens.

Consider how companies used to transport goods. Prior to the 1950s, there was no easy and efficient way to do so. But in the mid 50s, the concept of intermodal freight transport and containers was introduced. By allowing goods to be transported by ship, rail, or truck in a standard format, the modern supply chain was born. Agreeing to the standardization of the process was the critical first step. MPEG-DASH is trying to accomplish a similar “sea change,” but because it avoids implementation details, there’s a significant risk that fragmentation will ultimately enter the equation.
Here are the issues we may encounter.

  • If MPEG-DASH implementations are not backward compatible, then there will be a need to support both MPEG-DASH and HLS. If HLS (or even HDS and Smooth) continues on a path that makes backward compatibility inefficient, publishers are forced to account for MPEG-DASH and HLS, and Smooth Streaming, and HDS.
  • If client-side players (desktop, mobile, Connected TVs, game consoles) cannot broadly support MPEG-DASH, publishers will still be faced with player fragmentation. Player fragmentation flows upstream; this means the entire content workflow—from playback to delivery to packaging to encoding—will need to be duplicated alongside the MPEG-DASH workflow. For many publishers, the cost of adoption may not be worth the incremental gains.

Brightcove’s Take

Our publishers already face the operational complexity of supporting multiple formats and associated delivery protocols. We will continue to improve our capabilities to reduce the friction and effort needed for all steps in the workflow: content ingestion of multiple formats, transcoding and packaging for multiple renditions and formats needed for cross-platform playback and cross-platform DRM, and adaptive bitrate streaming for desktop, mobile web, mobile apps, and Connected TVs.

While we support the concept of standardization, we’re not yet at a point where we can eschew all other support in favor of an end-to-end MPEG-DASH scenario. Since MPEG-DASH does not account for the full breadth and depth of the video ecosystem, early adoption could lead to vendor dependence or lock-in, which would be detrimental to our customers.

Ultimately, we hope that MPEG-DASH and the vendors within the ecosystem quickly enhance their capabilities to provide publishers with more flexibility rather than force a proprietary implementation that results in vendor dependency or in an incomplete implementation of the standard. In the meantime, we’re rolling up our sleeves and putting fingers to keyboards to work on our role within the MPEG-DASH ecosystem.

VIDEO.JS 4.0 IMPROVES PERFORMANCE AND STABILITY

Video.js 4.0 was released in 2013 and is available for download on GitHub and hosted for free on our CDN. As background, Video.js is an open source HTML5 video player created by the team at Zencoder, which Brightcove acquired in 2012.

Video.js disrupted the market for open source video player technology and saw tremendous adoption and market share in just a few years. The free Video.js player has been used by tens of thousands of organizations, including Montblanc, Dolce & Gabbana, Diesel, Illy, Applebee’s, Mattel, Kellogg’s, Les Echos, US Navy, Aetna, Transamerica, Washington State University, and many others.

Version 4.0 received more community collaboration than any previous version, which speaks to the growing strength of the JavaScript community, the growing popularity of HTML5 video, and an increase in Video.js usage. From 2012 to 2013, the number of sites using Video.js more than doubled, and each month there are more than 200 million hits to the CDN-hosted version alone.

There are many new features in Video.js 4.0.

  • Improved performance through an 18% size reduction using Google Closure Compiler in advanced mode
  • Greater stability through an automated cross-browser/device test suite using TravisCI, Bunyip, and Browserstack.
  • New plugin interface and plugin listing for extending Video.js
  • New default skin design that uses font icons for greater customization
  • Responsive design and retina display support
  • Improved accessibility through better ARIA support
  • Moved to Apache 2.0 license
  • 100% JavaScript development tool set including Grunt

2013 will be an exciting year for Video.js, with more improvements to performance, multi-platform stability and customizability through plugins and skins. Members of the community have already started work on plugins for some of the more requested features, like playlists, analytics, and advertising.
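For anyone curious about writing one of these plugins, here is a rough sketch of what the 4.0 plugin interface looks like. The plugin name and behavior below are placeholders for illustration, not an official example:

// Register a plugin with Video.js. Inside the plugin function, `this` is the player.
videojs.plugin('examplePlugin', function(options) {
  var player = this;
  player.on('play', function() {
    console.log('playback started', options);
  });
});

// Create a player for a <video id="my-video"> element and activate the plugin.
var player = videojs('my-video');
player.examplePlugin({ message: 'hello' });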

7 VIDEO SEO BEST PRACTICES THAT DRIVE TRAFFIC

Are you looking to improve your video SEO and drive more traffic to your site? Here are 7 quick tips that can help.

1. Write Your Video Title and Description for People

Google puts importance on “writing for people” and frowns on stuffing keywords into your video title and description. Make sure you are writing titles and descriptions that are engaging and relevant to your audience rather than packed with keywords.

2. Add Tags If You Have a Video Site Map

Adding tags to your videos is a great way to organize content within the Brightcove online video platform—and it can help with SEO—but only if you have a video site map that exposes those tags to the search engines. If you have the development resources to create a simple video site map, it’s well worth the investment.

3. Use Schema Markup

Schema is a collection of HTML tags that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines rely on these schema tags to improve the display of their search results, making it easier for people to find the right web pages. If you use the itemscope attribute to identify your content as video to the search engines, it can improve your search results. Schema.org has an explanation of how to use the itemscope attribute and general instructions on using HTML tags to identify video.

4. Pick Quality Thumbnail Images

The editorial quality of your thumbnail trumps image quality. Put another way, choosing compelling thumbnails for your content can increase user clicks. Higher user engagement will affect your SEO results. It’s also worth noting that an image thumbnail will always get better traffic results than a generic video icon or a “click here for video” text link.

5. Incorporate Top Search Terms That Return Video Results

While you shouldn’t “keyword stuff” your video title and descriptions, certain words will increase your likelihood of being found by the search engines. Words like “video”, “show”, “how to”, “review” and “about” are all top terms that increase the likelihood your content will return a video thumbnail result. Don’t do anything unnatural, but if you can incorporate these words in your title or description (e.g. “Video tour of Aloft Brooklyn”) it can make a difference.

6. Optimize Your Website

The most flawless video SEO is worthless if your site itself is not optimized. It’s always good practice to ensure your website’s SEO is in order before tweaking your video settings.

7. Place Videos at the Top of the Page

If you have video, put it at the top of the page. This helps search engines understand that the page contains video content. It’s amazing what kind of results you’ll see from this small change.

DYNAMIC MANIFESTS AND THE FLEXIBILITY OF HLS PLAYLISTS

For years, there were two basic models of internet streaming: server-based proprietary technology such as RTMP, and progressive download. Server-based streaming allows the delivery of multi-bitrate streams that can be switched on demand, but it requires licensing expensive software. Progressive download can be done over Apache, but switching bitrates requires playback to stop.

The advent of HTTP-based streaming protocols such as HLS and Smooth Streaming meant that streaming delivery was possible over standard HTTP connections using commodity server technology such as Apache; seamless bitrate switching became commonplace and delivery over CDNs was simple as it was fundamentally the same as delivering any file over HTTP. HTTP streaming has resulted in nothing short of a revolution in the delivery of streaming media, vastly reducing the cost and complexity of high-quality streaming.

When designing a video platform there are countless things to consider. However, one of the most important and oft-overlooked decisions is how to treat HTTP-based manifest files.

Static Manifest Files

In the physical world, when you purchase a video, you look at the packaging, grab the box, head to the checkout stand, pay the cashier, go home, and insert it into your player.

Most video platforms are structured pretty similarly. Fundamentally, a group of metadata (the box) is associated with a playable media item (the video). Most video platforms start with the concept of a single URL that connects the metadata to a single mp4 video. As a video platform becomes more complex, there may be multiple URLs connected to the metadata representing multiple bitrates, resolutions, or perhaps other media associated with the main item such as previews or special features.

Things become more complicated when trying to extend the physical model to an online streaming world that includes HTTP-based streaming protocols such as HLS. HLS is based on many fragments of a video file linked together by a text file called a manifest. When implementing HLS, the most straightforward method is to simply add a URL that links to the manifest, or m3u8 file. This has the benefit of being extremely easy, basically fitting into the existing model.

The drawback is that HLS is not really a static media item. For example, an MP4 is very much like a video track on a DVD; it’s a single video at a single resolution and bitrate. An HLS manifest, by contrast, most likely consists of multiple bitrates, resolutions, and thousands of fragmented pieces of video. HLS has the capacity to do so much more than an MP4, so why treat it the same?

HLS Playlists

An HLS playlist includes some metadata that describes basic elements of the stream and an ordered set of links to fragments of the video. By downloading each fragment, or segment, of the video and playing them back in sequence, the user watches what appears to be a single continuous video.

#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:10
#EXTINF:10,
file-0001.ts
#EXTINF:10,
file-0002.ts
#EXTINF:10,
file-0003.ts
#EXTINF:10,
file-0004.ts
#EXT-X-ENDLIST

Above is a basic m3u8 playlist. It links to four video segments. To generate this data programmatically, all that is needed is the filename of the first item, the target duration of the segments (in this case, 10), and the total number of segments.
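As a rough illustration of how little is needed, a minimal sketch in JavaScript (assuming segments are named file-0001.ts, file-0002.ts, and so on, and that every segment is exactly the target duration long) could build the playlist above from just those inputs:

// Build a VOD playlist from a filename prefix, target duration, and segment count.
// Real segments rarely have identical durations, so a production system would store
// per-segment durations; this sketch keeps things simple.
function buildPlaylist(prefix, targetDuration, segmentCount) {
  var lines = [
    '#EXTM3U',
    '#EXT-X-PLAYLIST-TYPE:VOD',
    '#EXT-X-TARGETDURATION:' + targetDuration
  ];
  for (var i = 1; i <= segmentCount; i++) {
    lines.push('#EXTINF:' + targetDuration + ',');
    // Zero-pad the segment index to four digits, e.g. file-0001.ts
    lines.push(prefix + '-' + ('000' + i).slice(-4) + '.ts');
  }
  lines.push('#EXT-X-ENDLIST');
  return lines.join('\n');
}

console.log(buildPlaylist('file', 10, 4));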

HLS Manifests

An HLS manifest is an unordered series of links to playlists. There are two reasons for having multiple playlists: to provide various bitrates and to provide backup playlists. Here is a typical manifest, where each of the .m3u8s is a relative link to another HLS playlist.

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2040000
file-2040k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1540000
file-1540k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1040000
file-1040k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=640000
file-640k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=440000
file-440k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=240000
file-240k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
file-64k.m3u8

The playlists are of varying bitrates and resolutions in order to provide smooth playback regardless of the network conditions. All that is needed to generate a manifest are the bitrates of each playlist and their relative paths.
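A similarly small sketch (again in JavaScript, with bitrates assumed to be stored in kbps alongside each playlist’s relative path) shows how the manifest above could be assembled per request:

// Build a master manifest from an ordered list of { bitrate, path } records.
// BANDWIDTH is expressed in bits per second in the manifest, hence the * 1000.
function buildManifest(playlists) {
  var lines = ['#EXTM3U'];
  playlists.forEach(function(p) {
    lines.push('#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=' + (p.bitrate * 1000));
    lines.push(p.path);
  });
  return lines.join('\n');
}

console.log(buildManifest([
  { bitrate: 2040, path: 'file-2040k.m3u8' },
  { bitrate: 1540, path: 'file-1540k.m3u8' },
  { bitrate: 64, path: 'file-64k.m3u8' }
]));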

Filling in the Blanks

There are many other important pieces of information that an online video platform should be capturing for each encoded video asset: video codec, audio codec, container, and total bitrate are just a few. The data stored for a single video item should be meaningful to the viewer (description, rating, cast), meaningful to the platform (duration, views, engagement), and meaningful for applications (format, resolution, bitrate). With this data, you enable the viewer to decide what to watch, the system to decide how to program, and the application to decide how to play it back.

By capturing the data necessary to programmatically generate a playlist, a manifest and the codec information for each of the playlists, it becomes possible to have a system where manifests and playlists are generated per request.

Example – The First Playlist

The HLS specification states that whichever playlist comes first in the manifest will be the first chosen for playback. In the previous section’s example, the first item in the list was also the highest quality track. That is fine for users with a fast, stable internet connection, but for people with slower connections it will take some time for playback to start.

It would be better to determine whether the device appears to have a good internet connection and then customize the playlist accordingly. Luckily, with dynamic manifest generation, that is exactly what the system is set up to accomplish.

For the purposes of this exercise, assume a request for a manifest is made with an ordered array of bitrates. For example, the request [2040,1540,1040,640,440,240,64] would return a playlist identical to the one in the previous section. On iOS, it’s possible to determine if the user is on WiFi or a cellular connection. Since data has been captured about each playlist including bitrate, resolution, and other such parameters, an app can intelligently decide how to order the manifest.

For example, it may be determined that it’s best to start between 800-1200kbps if the user is on WiFi and between 200-600kbps if the user is on a cellular connection. If the user is on WiFi, the app would request an array that looks something like [1040,2040,1540,640,440,240,64]. If the app detected only a cellular connection, it would request [440,2040,1540,1040,640,240,64].
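As a sketch of how the app side might build those arrays, assuming it knows the full list of available bitrates (highest first) and a preferred starting range for the current connection type:

// Reorder the available bitrates so the first (and therefore initially selected)
// playlist falls within the preferred starting range for the current connection.
// availableBitrates is assumed to be sorted highest-first, e.g. [2040, 1540, ...].
function orderBitrates(availableBitrates, minStart, maxStart) {
  var start = availableBitrates.filter(function(b) {
    return b >= minStart && b <= maxStart;
  })[0];
  if (start === undefined) return availableBitrates.slice();
  // Move the chosen starting bitrate to the front; keep the rest in their original order.
  return [start].concat(availableBitrates.filter(function(b) { return b !== start; }));
}

orderBitrates([2040, 1540, 1040, 640, 440, 240, 64], 800, 1200);
// => [1040, 2040, 1540, 640, 440, 240, 64]  (WiFi)
orderBitrates([2040, 1540, 1040, 640, 440, 240, 64], 200, 600);
// => [440, 2040, 1540, 1040, 640, 240, 64]  (cellular)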

Example – The Legacy Device

On Android, video support is a bit of a black box. For years, the official Android documentation only supported the use of 640×480 baseline h.264 mp4 video, even though certain models were able to handle 1080p. In the case of HLS, support is even more fragmented and difficult to understand.

Luckily, Android is dominated by a handful of marquee devices. With dynamic manifests, the app can not only target the best playlist to start with, but can also exclude playlists that are determined to be incompatible.

Since our media items are also capturing data such as resolution and codec information, support can be targeted at specific devices. An app could decide to send all of the renditions: [2040,1540,1040,640,440,240,64]. Or, an older device that only supports up to 720p could remove the highest rendition: [1540,1040,640,440,240,64]. Furthermore, beyond the world of mobile devices, if the app is a Connected TV, it could remove the lowest quality renditions: [2040,1540,1040,640].
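A hedged sketch of that filtering step, assuming the platform stores each rendition’s resolution alongside its bitrate (the heights below are illustrative, not taken from the earlier example):

// Drop renditions a device can't play, then hand the remaining bitrates to the same
// ordering logic used above. renditions is an array of { bitrate, height } records.
function filterRenditions(renditions, maxHeight, minBitrate) {
  return renditions
    .filter(function(r) { return r.height <= (maxHeight || Infinity); })
    .filter(function(r) { return r.bitrate >= (minBitrate || 0); })
    .map(function(r) { return r.bitrate; });
}

var renditions = [
  { bitrate: 2040, height: 1080 },
  { bitrate: 1540, height: 720 },
  { bitrate: 1040, height: 720 },
  { bitrate: 640, height: 480 },
  { bitrate: 440, height: 360 },
  { bitrate: 240, height: 240 },
  { bitrate: 64, height: 144 }
];

filterRenditions(renditions, 720);       // older device: drops the 1080p rendition
filterRenditions(renditions, null, 640); // Connected TV: drops the lowest bitrates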

Dynamic or Static

Choosing a static manifest model is perfectly fine. Some flexibility is lost, but there is nothing wrong with simplicity. Many use cases, especially in the user-generated content world, do not require the amount of complexity dynamic generation involves; however, dynamic manifest generation opens a lot of doors for those willing to take the plunge.

How manifests are treated will have a significant impact on the long-term flexibility of a video platform, and it is something that should be discussed with your resident compressionist.

UPLOADING AND ENCODING VIDEO USING FILEPICKER

How to go about uploading videos is one of the most common questions we get from new customers. For developers, implementing file uploads is something they’ve probably done a few times, but it’s always a pain.

Enter Filepicker

Filepicker (now Filestack) makes file upload easy. Like, really easy. You’re not limited to just local files either; they support a wide range of sources, from Dropbox and Google to even recording a video directly from your webcam. The best part is, you can do all of this without ever leaving the front-end.

Before we do anything else, you’ll need to sign up for a Filepicker account. Once you’ve done so, create a new App in your dashboard if one doesn’t exist. Take note of the API key you see; we’ll be using it later. Filepicker is nice enough to provide an S3 bucket for getting started, but take a second to set up a destination S3 bucket for your uploads.

Let’s start on the same page with a basic HTML5 template. We’re going to use jQuery to make things simple, so we’ll include that with our boilerplate along with the Filepicker library.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US">
<head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <title>Zencoder Dropzone</title>

    <!-- jQuery Include -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>

    <!-- Filepicker.io Include -->
    <script type="text/javascript" src="//api.filepicker.io/v1/filepicker.js"></script>
</head>

<body>
    <div id="content">
        <h1>Upload all the things!</h1>
    </div>
</body>
</html>

Now we can use the Filepicker JavaScript API to let our users select a file and save it to our S3 bucket. We’ll also need to associate this with a link in the body so the user has something to click. First, let’s add the upload link. Since we already have a big, prominent h1 tag, let’s just add the link to that.

<h1><a href="#" id="upload-link">Upload all the things!</a></h1>

Now we want to use that link to trigger filepicker.pickAndStore when clicked. This is where we’ll use the API key you made note of earlier. Place this snippet below the Filepicker JavaScript include in the head of the page.

<script type="text/javascript">
    $(function() {
      filepicker.setKey('The_Key_From_Your_Dashboard');

      $('a').click(function(e) {
        e.preventDefault(); // This keeps the normal link behavior from happening

        filepicker.pickAndStore({},{location: 's3'},function(fpfiles){
            console.log(JSON.stringify(fpfiles));
        });
      });
    });
</script>

You’ll need to use some sort of web server to serve up the HTML or else you won’t be able to load the external JavaScript files. You can use something like http-server, but there’s a basic Node application that will serve static files in the GitHub repository.
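If you don’t want to install anything, a minimal static server in Node is only a few lines. This is a hypothetical sketch, not the application from the repository; it serves files from the current directory on port 8080:

// static-server.js: serve the current directory so the page can load its scripts.
var http = require('http');
var fs = require('fs');
var path = require('path');

var types = { '.html': 'text/html', '.js': 'application/javascript', '.css': 'text/css' };

http.createServer(function(req, res) {
  // Default to index.html for the root path.
  var filePath = '.' + (req.url === '/' ? '/index.html' : req.url);
  fs.readFile(filePath, function(err, data) {
    if (err) {
      res.writeHead(404);
      return res.end('Not found');
    }
    res.writeHead(200, { 'Content-Type': types[path.extname(filePath)] || 'application/octet-stream' });
    res.end(data);
  });
}).listen(8080);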

Choose any file (you might want to pick something relatively small) and upload it. Right now, a successful upload just logs the fpfiles object to the console, so after you upload a file take a look at the console. If everything went according to plan, you should have an object with some information about your newly uploaded file.

You just uploaded a file from your computer in 27 total lines of code, including the simple HTML markup. Uploading files and leaving them there isn’t very useful on its own though, so let’s make it so users can upload videos and encode them.

Adding Zencoder

First, let’s alter our uploader to only accept video files. Filepicker allows us to restrict by mimetype, so if you’re like me you may be tempted to try {mimetype: 'video/*'}. This will work just fine in Chrome, but your Safari users will see a much smaller subset of files that they can upload. For video, it’s much more reliable to restrict by extension, so let’s take that route.

$('a').click(function(e) {
  e.preventDefault(); // This keeps the normal link behavior from happening
  var acceptedExtensions = ['3g2','3gp','3gp2','3gpp','3gpp2','aac','ac3','eac3','ec3','f4a','f4b','f4v','flv','m4a','m4b','m4r','m4v','mkv','mov','mp3','mp4','oga','ogg','ogv','ogx','ts','webm','wma','wmv'];
  filepicker.pickAndStore({extensions: acceptedExtensions},{location: 's3'},function(fpfiles){
      console.log(JSON.stringify(fpfiles));
  });
});

You can restrict this set of accepted files or add more, but I took the easy way out and just used the list of valid output formats from the Zencoder format documentation. This includes some audio files, but since Zencoder supports audio-only encoding we can leave them in there. Now if you click the link and browse your local files, you should notice that you can only select files with an extension on the list. If you try to drag and drop an unacceptable file, you’ll get an error.

Now that we know we’ll only be uploading files Zencoder can support, let’s make it so a successful upload sends that file to Zencoder for encoding. Before we can do that, we’ll need to set our Zencoder API key. You can just include this right below your Filepicker key.

filepicker.setKey('The_Key_From_Your_Dashboard');
var zenKey = 'Zencoder_API_Key';

Now we’ll use jQuery’s $.ajax to send our API request to Zencoder upon successful upload.

filepicker.pickAndStore({extensions: acceptedExtensions},{location: 's3'},function(fpfiles){
  // This is the simplest Zencoder API call you can make. This will output an h.264 mp4 with AAC audio and
  // save it to Zencoder's temporary storage on S3.
  var request = {
    "input": fpfiles[0].url
  }
  // Let's use $.ajax instead of $.post so we can specify custom headers.
  $.ajax({
      url: 'https://app.zencoder.com/api/v2/jobs',
      type: 'POST',
      data: JSON.stringify(request),
      headers: { "Zencoder-Api-Key": zenKey },
      dataType: 'json',
      success: function(data) {
        $('body').append('Job created! <a href="https://app.zencoder.com/jobs/'+ data.id +'">View Job</a>')
      },
      error: function(data) {
        console.log(data);
      }
  });
});

Now refresh your page and upload a video. If everything has gone according to plan you should see a success message with a link to your newly created job.

Only 47 lines of code later you have a web page that will allow you to upload a video and send it off for encoding.

Notes and Warnings

It’s a bad idea to put your Zencoder API key in plain text inside of JavaScript. Just to repeat that one more time: Do not use this in code that other people could possibly access. Nothing would stop people from taking your API key and encoding all the video they wanted.

A much better idea would be to use Filepicker as described but actually make the Zencoder API call in your back end where your API key is safe from prying eyes.
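As a rough sketch of what that could look like, assuming a Node.js/Express backend (the /encode route, port, and environment variable name here are placeholders, not part of the original tutorial):

// server.js: proxy job creation so the Zencoder API key never reaches the browser.
var express = require('express');
var https = require('https');

var app = express();
app.use(express.json());

var zenKey = process.env.ZENCODER_API_KEY; // keep the key out of source control too

app.post('/encode', function(req, res) {
  var body = JSON.stringify({ input: req.body.url }); // the uploaded file's URL sent by the front-end
  var zreq = https.request({
    hostname: 'app.zencoder.com',
    path: '/api/v2/jobs',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Zencoder-Api-Key': zenKey
    }
  }, function(zres) {
    var data = '';
    zres.on('data', function(chunk) { data += chunk; });
    zres.on('end', function() { res.status(zres.statusCode).send(data); });
  });
  zreq.on('error', function(err) { res.status(500).send(err.message); });
  zreq.write(body);
  zreq.end();
});

app.listen(3000);

The front-end would then POST the uploaded file's URL to that route instead of calling the Zencoder API directly.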

Taking It a Step Further

Drag and drop is really cool, so we wanted to make a whole page that uses Filepicker’s makeDropPane. Users have to put their API key in before doing anything and it’s not stored in the code, so the demo is safe to put online.

This version validates the API key, includes a history of your recent Zencoder jobs, and allows you to modify the request template. All of these settings are saved in your browser’s localStorage so you don’t lose everything on a refresh.