Building a Dual Screen Video Application with App Cloud

With App Cloud’s dual screen video APIs, developers can take advantage of Apple TV to create an application that streams HD video to the television while the user’s iPad or iPhone serves as a secondary screen for navigating, controlling and enhancing that video experience. The dual screen video sample shows how such an application can be built, and this post breaks down the important pieces of its code to make it easy for you to modify and expand upon.

Overview

The application itself provides a simple interface for navigating through up to four Video Cloud managed playlists (though the data could be pulled from any source) and for playing back videos selected from those playlists, with the UI updating to display data about the currently selected video. When authoring a dual screen application, it is important to account from the start for how the application will render and behave when an Apple TV is not present or connected, or when the application runs on a device that does not support streaming to Apple TV.

The video is played back in a standard HTML video element with custom controls added. However, when a secondary screen is detected, the application sends the video stream to the television using the APIs provided by App Cloud and adjusts its interface on the tablet.

The project is divided into a number of directories. The html directory holds the single HTML file, which simply includes all of the JavaScript and CSS. The images directory includes several images used in the interface. The stylesheets directory includes the single CSS file defining the styles for the application in both single and dual screen modes. Finally, the javascripts directory includes the App Cloud JavaScript SDK, index.js, which contains nearly all of the application logic, and config.js, which contains the Brightcove Media API token and the Video Cloud playlist IDs used in the application and should be updated to reference the desired account.
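To make the configuration concrete, a config.js along these lines would supply the token and playlist IDs; the property names and values here are illustrative assumptions, so check the sample's actual config.js for the exact ones.

```javascript
// config.js -- update these values to point at your own Video Cloud account.
// NOTE: property names here are illustrative; the sample's config.js may differ.
var config = {
  // read-only Media API token for the account
  mediaAPIToken: "YOUR_MEDIA_API_READ_TOKEN",
  // up to four Video Cloud playlist IDs to display as tabs
  playlistIds: [111111111111, 222222222222, 333333333333, 444444444444]
};
```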

Dual Screen Handlers

Moving to index.js, you will find all of the logic for content fetching, rendering and video playback. Everything is contained within an anonymous function invoked by the init event; this function in turn defines a number of local functions that will be invoked throughout the life of the application. At the bottom of this function, several important event handlers are established that enable the dual screen video playback functionality.

// bind to all the external screen events
$(bc).bind("externalscreenconnected", onExternalScreenConnected);
$(bc).bind("externalscreendisconnected", onExternalScreenDisconnected);
$(bc).bind("externalscreenvideoplaying", onExternalVideoPlaying);
$(bc).bind("externalscreenvideopaused", onExternalVideoPaused);
$(bc).bind("externalscreenvideoend", onVideoEnded);
$(bc).bind("externalscreenvideoprogress", onExternalVideoProgress);

The first two event handlers fire whenever a second screen is connected or disconnected, respectively. While video is playing back on a second screen, the latter four handlers fire depending on the video state. We’ll explore controlling the video later in this post, but first note that the final line of this anonymous function calls renderPage(), which writes the initial HTML to the page and then calls fetchPlaylists(), which makes the Media API call to retrieve the Video Cloud playlists using the token and IDs found in config.js.
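As a rough sketch of what fetchPlaylists() asks of the backend, the legacy Media API read service takes a command, a token and a comma-separated list of playlist IDs; a request URL could be assembled like this (a hypothetical helper, not the sample's actual implementation, which may fetch and parse differently).

```javascript
// Builds a Brightcove Media API read request for a set of playlists.
// Illustrative only; the sample's fetchPlaylists() may construct its call differently.
function buildPlaylistRequestUrl(token, playlistIds) {
  var params = [
    "command=find_playlists_by_ids",        // Media API read command
    "playlist_ids=" + playlistIds.join(","), // comma-separated playlist IDs
    "token=" + encodeURIComponent(token)     // read token from config.js
  ];
  return "http://api.brightcove.com/services/library?" + params.join("&");
}
```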

Retrieving and Rendering the Data

Here we will quickly step through the application flow for rendering the data and interface, but won’t look in depth at the functions involved, as they use no new concepts or APIs. In-depth analysis and a walkthrough of the code are saved for the video manipulation functions discussed later in this post.

Once the playlists have been retrieved by fetchPlaylists(), additional rendering functions are called: renderVideoControls(), renderVideoInfo() and renderTabs(). The first, renderVideoControls(), writes the HTML for the video UI controls to the page, including the play/pause button and the playhead and its track, and sets up event handlers for these controls to handle playing, pausing and scrubbing of the video. renderVideoInfo() is a much simpler function with no event handling, as it merely writes out the HTML for the currently playing video’s metadata, such as the thumbnail, title and description; this content is static and involves no user interaction. Finally, renderTabs() writes the HTML for the tabs displaying the playlists’ names, which allow the user to switch between playlists. The application is built to support up to four playlists, but this function could be modified to write out a horizontally scrolling tabbed list, or some other UI, to accommodate more playlists or playlists with longer names.
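One hypothetical way to relax the four-playlist limit is to generate the tab markup for any number of playlists and let CSS (for example, overflow-x on the container) provide the horizontal scrolling. The class names below are illustrative, not the ones the sample uses.

```javascript
// Sketch of a renderTabs() variation that handles an arbitrary number of
// playlists. Returns the tab markup as a string; class names are hypothetical.
function buildTabsHtml(playlists) {
  var html = '<ul class="tabStrip">';
  for (var i = 0; i < playlists.length; i++) {
    // data-index lets a delegated click handler map the tab back to its playlist
    html += '<li class="tab" data-index="' + i + '">' + playlists[i].name + '</li>';
  }
  return html + '</ul>';
}
```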

In addition to writing out the tabs and establishing handlers that update the UI when a tab is tapped, renderTabs() selects the first tab by calling selectTab(), specifying through its third argument that the first video of the selected playlist should be selected. Calling selectTab() results in the invocation of renderPlaylist().

The function renderPlaylist() first clears any previously written playlist HTML, since it is called whenever a new tab is selected. Once the list div is emptied, the new playlist data is written into it and manageList() is called to set up all of the interaction handlers that take care of scrolling the list and selecting items, which results in the invocation of selectItem() in renderPlaylist() and eventually loadVideo().

Video Proxies

Up to this point we have retrieved metadata from the back end and rendered that data within the UI using standard jQuery. When loadVideo() is called, though, we need to start playing video, and so we get into new territory.

First, to enable newly selected videos to play back automatically, loadVideo() sets the autoplay attribute of the HTML video element to true only if selectedVideo has previously been defined. This means the first video will appear queued, while all subsequent videos will start automatically when an item is selected.

// videos after initial video should autoplay
if (selectedVideo && video) {
  video.autoplay = true;
}
selectedVideo = videoData;

At the end of the function we set up a video proxy which will communicate with whatever is playing back the video -- either the HTML video element in single screen mode, or the App Cloud JavaScript API which will speak to the native device while in dual screen mode. This is done in these lines:

// these proxy calls to either the in-page video element or the App Cloud AirPlay API
if (hasExternalScreen) {
  videoProxy = new ExternalVideoProxy();
  videoProxy.playVideo();
} else {
  videoProxy = new VideoElementProxy();
}

The variable hasExternalScreen is a flag that is set within the handlers discussed at the beginning of this post, for the events externalscreenconnected and externalscreendisconnected. These handlers are shown here:

// handles when the app goes into mirroring mode
onExternalScreenConnected = function() {
  hasExternalScreen = true;
  $(".singleScreen").addClass("dualScreen");
  videoProxy = new ExternalVideoProxy();
  handleExternalScreenSwitch();
},

// handles when the app exits mirroring mode
onExternalScreenDisconnected = function() {
  hasExternalScreen = false;
  $(".dualScreen").removeClass("dualScreen");
  videoProxy = new VideoElementProxy();
  handleExternalScreenSwitch();
},

As you can see, the hasExternalScreen flag is set accordingly within each handler. In addition, UI element styles are updated by adding or removing a CSS class, and handleExternalScreenSwitch() is called in both handlers; this function updates the playhead and its track based on the UI changes. Most importantly, note that in both handlers a new videoProxy is created based on the presence or absence of the secondary screen.
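The core arithmetic that handleExternalScreenSwitch() must redo when the layout changes can be illustrated with a small helper like the following; this function is hypothetical and not part of the sample's source, but shows the kind of mapping from playback time to track position involved.

```javascript
// Hypothetical helper: map the current playback time onto a (possibly resized)
// playhead track, returning the pixel offset of the playhead. Not in the sample.
function playheadPosition(currentTime, duration, trackWidth) {
  if (!duration || duration <= 0) return 0;           // no metadata yet: park at the start
  var fraction = Math.min(currentTime / duration, 1); // clamp values past the end
  return Math.round(fraction * trackWidth);           // pixel offset along the track
}
```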

So what exactly are these video proxies? Controlling a video differs depending on whether the application is in single screen mode, using the HTML video element, or in dual screen mode, using the App Cloud JavaScript API to communicate with the native device. The video proxies provide an abstraction layer that allows the UI to manipulate the video without worrying about the underlying technology. For example, when the play/pause button is clicked, the following function is called:

// toggles playback of video from button click
playOrPauseVideo = function() {
  if (!selectedVideo) return;
  $playButton.removeClass("downState");
  if (video) video.autoplay = true;
  playing ? videoProxy.pauseVideo() : videoProxy.playVideo();
},

The important line to note in relation to the video proxies is the final line of the function, where either pauseVideo() or playVideo() is called depending on the current playback state. It does not matter to this function how the video is paused or played, or whether the application is in single screen or dual screen mode; the implementation of these functions by the proxy classes is what differs. Here is how the VideoElementProxy, which communicates with the HTML video element, implements pauseVideo():

// pauses video
this.pauseVideo = function() {
  video.pause();
};

In contrast, here is how the ExternalVideoProxy implements the same function:

// pauses video
this.pauseVideo = function() {
  bc.device.externalscreen.pauseVideo();
};

In this way the rest of the controls in the application can work against a single interface for both types of experiences and technologies. The full implementation of the VideoElementProxy is shown here. Note the large commented out section in the playVideo() function, which will be addressed after the listing.

// proxy to video element in HTML
VideoElementProxy = function() {

  var $video = $("video"),
      $videoBackground = $("#videoBackground");

  // seeks to position in video
  this.seekVideo = function(position) {
    video.currentTime = position;
  };

  // starts video, possibly at a position other than 0
  this.playVideo = function(position) {
    var seekToPosition;
    video.play();
    // Unreliable and inconsistent due to how iOS pauses the video when the user 
    // opens the AirPlay controls. The issue is that when exiting dual screen, the iPad
    // will pause when the dual screen access control (on the bottom bar of iPad)
    // is closed, so although it might seek to the correct position initially,
    // once this bar is closed the video will pause and sometimes will result
    // in buggy subsequent behavior like returning to start, whereas if you don't attempt
    // to seek the video will continue playing normally. You can uncomment
    // the lines if seeking is more important despite bugginess. Also, if checking for seekable
    // and seekable.length and seekable.end() are removed, the video will
    // sometimes seek successfully and sometimes not, but seems to always play.
    /*
    if (!isNaN(position) && position > 0) {
      seekedTo = position;
      seekToPosition = function() {
        try {
          if (video.seekable && video.seekable.length && video.seekable.end() >= position) {
            video.currentTime = position;
            $video.unbind("timeupdate", seekToPosition);
          }
        } catch (e) {}
      };
      $video.bind("timeupdate", seekToPosition);
    }
    */
  };

  // pauses video
  this.pauseVideo = function() {
    video.pause();
  };

  if ($video.length == 0) {
    // AirPlay doesn't like turning back control to existing element
    // so make a new one
    $videoBackground.empty();
    $video = $("<video />");
    $videoBackground.append($video);
    $video.bind("play", onVideoElementPlay);
    $video.bind("pause", onVideoElementPause);
    $video.bind("ended", onVideoEnded);
    $video.bind("timeupdate", onVideoTimeUpdate);
    video = $video[0];

  }
  video.src = selectedVideo.FLVURL;
  video.poster = selectedVideo.videoStillURL;
},

When this proxy is instantiated, a new video element is created if one doesn’t already exist, with handlers set up for the common video events, and the source and poster image are set. The proxy’s functions simply call methods or set attributes on the video element: play(), pause() and currentTime.

The block to note is the commented-out section in playVideo(). It implements seeking the video to a given position, which should occur when dual screen mode is exited while a video is playing. Unfortunately, this functionality is fairly unreliable due to quirks in the way entering and exiting AirPlay is currently implemented, as elaborated on in the comments.

The implementation of the ExternalVideoProxy is just as simple:

// proxy to external AirPlay video through AppCloud API
ExternalVideoProxy = function() {

  // get rid of in-page video element
  $("#videoBackground").empty();
  video = null;

  // seeks to position in video
  this.seekVideo = function(position) {
    bc.device.externalscreen.seekVideo(position);
  };

  // starts video, possibly at a position other than 0
  this.playVideo = function(position) {
    var options = !isNaN(position) ? {timecode:Math.round(position)} : null;
    bc.device.externalscreen.playVideo(
      selectedVideo.FLVURL,
      function(e) {},
      function(e) {},
      options
    );
  };

  // pauses video
  this.pauseVideo = function() {
    bc.device.externalscreen.pauseVideo();
  };

};

The first thing the ExternalVideoProxy does is remove the HTML video element, if present, from the DOM. If it is left in place, odd side effects can occur, like communication with the element being severed, so the cleanest thing to do is get rid of it completely. The methods invoke the App Cloud JavaScript API, the only special case being that when moving into dual screen mode while a video is already playing, bc.device.externalscreen.playVideo() is passed a fourth argument defining the desired starting position of the video in seconds. This call fails if the position is not passed as an integer, hence the Math.round().

Using these two proxies, whenever the video is seeked to a new position, as when the playhead is dragged or the track is clicked, the current proxy can be called without worry of what mode the application is currently in:

// seeks to specified position (milliseconds) in video
seekVideo = function(position) {
  seekedTo = position/1000; // convert to seconds
  videoProxy.seekVideo(seekedTo);
},

Integrating with Video Cloud

This example uses an HTML video element in order to keep things simple and avoid requiring the creation of a Video Cloud player. However, because a proxy to the video is used throughout the code, it is a simple task to rework it to reference a Video Cloud player instead of a video element. As an example, this code could replace the current VideoElementProxy:

player = null,

// proxy to Video Cloud player
VideoElementProxy = function() {

  var html,
      $videoBackground = $("#videoBackground"),

   // handles timeupdate event from video element, updating playhead and time
  onVideoCloudProgress = function(event) {
    if (!hasExternalScreen) {
      handleVideoProgress(event.position, event.duration || selectedVideo.length/1000);
    }
  },

  // handles when video element starts playing
  onVideoCloudPlay = function() {
    if (!hasExternalScreen) handleVideoPlay();
  },
  
  // handles when video element pauses
  onVideoCloudStop = function(event) {
    if (!hasExternalScreen) {
      handleVideoPause();
      if (event.position == event.duration) {
        onVideoEnded();
      }
    }
  };

 // seeks to position in video
  this.seekVideo = function(position) {
    video.seek(position);
  };

  // starts video
  this.playVideo = function() {
    video.play();
  };

  // pauses video
  this.pauseVideo = function() {
    video.pause(true);
  };

  if (player == null) {
    $videoBackground.empty();

    html = '<object id="myExperience" class="BrightcoveExperience">';
    html +=   '<param name="bgcolor" value="#000000" />';
    html +=   '<param name="width" value="600" />';
    html +=   '<param name="height" value="450" />';
    html +=   '<param name="playerKey" value="AQ~~,AAAAwnfEsvk~,KAoXD_LRPPBdLlNBC_eyUOLc6mbs-HO2" />';
    html +=   '<param name="isSlim" value="true" />';
    html +=   '<param name="includeAPI" value="true" />';
    html +=   '<param name="templateLoadHandler" value="onTemplateLoaded" />';
    html +=   '<param name="@videoPlayer" value="' + selectedVideo.id + '" />';
    html += '</object>';

    window.onTemplateLoaded = function(id) {
      var MediaEvent = brightcove.api.events.MediaEvent;

      player = brightcove.api.getExperience(id);
      video = player.getModule(brightcove.api.modules.APIModules.VIDEO_PLAYER);
      video.addEventListener(MediaEvent.PLAY, onVideoCloudPlay);
      video.addEventListener(MediaEvent.STOP, onVideoCloudStop);
      video.addEventListener(MediaEvent.PROGRESS, onVideoCloudProgress);
      window.onTemplateLoaded = null;
    };

    $videoBackground.append(html);
    brightcove.createExperiences();

  } else {
    video.loadVideoByID(selectedVideo.id);
  }

},

The player referenced here is a template that contains only a VideoDisplay BEML element, which does not include controls. This is not a standard out-of-the-box template and requires creation through the player templates interface in the Brightcove Studio, which is available to Professional and Enterprise customers. Express customers wishing to include a Video Cloud player could use a Chromeless Video Player and then hide the custom controls in this application when in single screen mode.

The cleanup performed when entering dual screen mode would also have to change. In the ExternalVideoProxy, the following lines would remove the Video Cloud player:

if (player) {
  video.pause(true);
  brightcove.removeExperience(player.id);
  video = null;
  player = null;
}

At this point the application would work with a Video Cloud player, though there are further optimizations you could make, like removing all of the references to $video and its handlers, and renaming VideoElementProxy to something more accurate, like VideoCloudProxy. You could also explore dropping the Media API altogether and instead programming the Video Cloud player directly, extracting the playlist metadata from the player and using it to drive the tabs and list in the application. Not only would that remove the need to communicate with the backend outside of the player, it would also let you change the content that appears in your distributed application more easily.

Conclusion

As we move into a world of users consuming media on multiple screens simultaneously, synchronizing and enhancing content across devices is an important capability to consider when developing your applications. The dual screen video APIs within App Cloud Core simplify the implementation of such applications, and this open source example demonstrates how.