Problem Space

Early versions of iOS had very poor support for video playback. A developer could display an H.264 video full screen, but just about anything else was impossible. Video support has improved significantly in more recent versions of iOS. Unfortunately, a developer who needs to implement custom video logic in an iOS application will quickly run into the limits of the built-in iOS libraries. AVAnimator fills in the gaps by providing video and audio functionality that makes it easy to implement custom logic. AVAnimator is basically like Quicktime for iOS devices.

The AVAnimator library makes a number of hard things easy. Video content can be directed to a UIView subclass or to a Core Graphics CALayer. The implementation uses highly optimized ARM assembly code for maximum performance. Memory usage at runtime is minimal and there are no memory leaks. Input video content can be stored in an H.264 container, in an APNG file, as an animated GIF, or in the Maxvid format (optimized for iOS hardware).

AVAnimator solves these problems:

Import Quicktime Video

How can you use Quicktime video in an iOS app? You can't.

It is possible to import H.264 audio/video and play those movies directly in an iOS window, but the Quicktime format is not supported on iOS devices. Existing Quicktime videos would need to be converted to H.264 before they could be displayed on iOS devices.

The more serious production problem is how a dev team interfaces with artists who use video production tools like After Effects. Artists may be experienced with exporting assets in Quicktime format, but they may not be aware of the specific details of encoding H.264 for iOS platforms. Also, in some cases, H.264 may not be the best solution for your project.

With AVAnimator, the artist can continue to deliver Quicktime rendered output and then your developer can convert the Quicktime formatted files to MVID or H.264 on the desktop. This makes it possible to encode to lossy H.264 or lossless MVID files and preview the results before the files are attached to the app resources. This saves considerable developer time since the app does not need to be built and run before intermediate results can be seen.

Artist -> Developer Desktop -> iPhone App

Use Alpha Channel Video

How can a developer import video content that makes use of an alpha channel? The bad news is that iOS contains no native support for video with an alpha channel. The H.264 format does not support an alpha channel by default. In theory, the H.264 FRExt (Fidelity Range Extensions) profiles could carry an alpha channel, but no known encoder supports these extensions. Even if an encoder existed, the decoding hardware already in iOS devices would not support decoding non-standard streams. Feel free to ignore posts in online forums (by poorly informed individuals) making claims to the contrary.

One cheap hack would be to encode a video as a series of PNG images with alpha channels. The downside to this approach is that it results in a bloated app download. In addition, a lot of CPU time is needed to decode each frame of PNG data.

AVAnimator supports two professional quality solutions to the problem of alpha channel video:

  • Lossless Animation : MVID
  • Lossy Video : H.264

The lossless MVID format supports 32BPP with a full alpha channel and frame-by-frame deltas. This format is basically the same as Quicktime with the Animation codec (Millions+), except that it is optimized for the iOS pixel layout. The compressed file attached to the app resources uses 7zip compression to squeeze the download size down to an absolute minimum.
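The frame-delta idea can be illustrated in a few lines of Python: each frame after the first is stored as the set of pixels that differ from the previous frame, and playback patches those pixels into a copy of the previous framebuffer. This is a sketch of the concept only, not the actual MVID on-disk encoding.

```python
def frame_delta(prev, cur):
    """Return (index, pixel) pairs for pixels that changed between frames.

    Frames are modeled as flat lists of 32-bit pixel values. Sketch of
    the frame-delta concept only; the real MVID encoder works on packed
    framebuffers with an optimized binary layout.
    """
    return [(i, p) for i, (q, p) in enumerate(zip(prev, cur)) if q != p]

def apply_delta(prev, delta):
    """Reconstruct the next frame by patching changed pixels into a copy."""
    cur = list(prev)
    for i, p in delta:
        cur[i] = p
    return cur
```

A static region of the frame contributes nothing to the delta, which is why animation-style content with limited motion compresses so well in this scheme.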

The lossy H.264 encoding approach uses a pair of videos to encode the RGB data and the traveling matte separately. The principal benefit of this approach is a professional quality final result. AVAnimator code on the client handles all the details of decoding the two H.264 videos and combining them back together into a single 32BPP video. The production process is simplified since the artist does any Chroma Key processing with existing tools and then delivers a Quicktime Animation encoded 32BPP video to the dev team. The developer runs an AVAnimator supplied tool to split the alpha channel video into two H.264 files. Videos can be previewed and verified on the developer desktop before the video data is incorporated into the iOS app. Complex videos with lots of detail and motion will compress more effectively with the lossy H.264 approach.
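The recombine step can be sketched as follows: the color stream supplies RGB, the matte stream's luminance supplies alpha, and the two are merged pixel by pixel. This sketch assumes premultiplied alpha output (what Core Graphics generally expects); it is an illustration of the idea, not AVAnimator's actual decoder.

```python
def combine_rgb_alpha(rgb_frame, alpha_frame):
    """Merge a color frame and its grayscale matte into premultiplied RGBA.

    rgb_frame:   list of (r, g, b) tuples decoded from the color stream
    alpha_frame: list of 0-255 luminance values decoded from the matte
                 stream, where the luminance encodes the alpha channel
    Each color component is scaled by alpha (premultiplied), an
    assumption made for this sketch.
    """
    out = []
    for (r, g, b), a in zip(rgb_frame, alpha_frame):
        out.append((r * a // 255, g * a // 255, b * a // 255, a))
    return out
```

A fully opaque matte pixel (255) passes the color through unchanged, while a fully transparent one (0) yields a zeroed pixel.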

This Kitty Boom blog post goes into more detail and provides a working Xcode project file demonstrating the RGB+A video encoding using H.264 files.

Seamless Looping

Seamless Looping refers specifically to a loop that repeats without a noticeable pause or glitch in either the audio or the video content. There is no "sort of seamless" looping: either a loop is seamless or there is a noticeable glitch when the loop repeats. Proper definition of terms is important here, since much of the advice found online is simply about looping a video clip, not seamless looping.

An iOS developer attempting to implement seamless looping with AVPlayer and an H.264 stream can play a stream and then restart it when the video ends. But, because of the way the iOS H.264 hardware and its associated APIs function, a developer cannot implement reliable seamless looping of H.264 audio/video under iOS. A better solution is to use AVAnimator to convert the video data to MVID so that video blit operations can be done with a tight time tolerance around the loop point. AVAnimator can not only loop the video, it can also seamlessly loop the audio content. Note that both the audio and the video content used in a seamless loop must be specifically crafted by an artist to support seamless looping.

Audio / Video Sync

Audio / Video Sync refers to logic that maintains a tight synchronization between the audio clock and the video frame that should be displayed at a specific time. While synchronization is straightforward to understand, the actual details of implementing audio/video sync are quite complex. The AVAnimator code that monitors the audio clock to predict the next frame display time can dynamically recover from a number of clock jitter and resource starvation issues. AVAnimator can even run multiple clocks with different frame rates simultaneously.
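The core idea of clock-driven sync can be sketched briefly: the frame to display is always derived from the audio clock, so if decoding falls behind, frames are skipped rather than letting the video drift out of sync, and small backwards clock jitter is absorbed by never stepping back. This is a sketch of the concept under those assumptions, not AVAnimator's actual scheduler.

```python
def next_frame(audio_clock_time, fps, last_frame):
    """Pick the frame to display from the audio clock.

    Returns (frame_index, dropped_count). If the clock has advanced past
    more than one frame, intermediate frames are dropped so playback
    stays in sync with the audio. If the clock jitters slightly
    backwards, the current frame is held rather than stepping back.
    Illustrative sketch only.
    """
    target = int(audio_clock_time * fps)
    if target <= last_frame:
        return last_frame, 0          # absorb clock jitter, never rewind
    dropped = target - last_frame - 1
    return target, dropped
```

Keying every display decision off the audio clock is what keeps lip sync tight: audio hardware consumes samples at a fixed rate, so it is a far more stable time reference than a display or CPU timer.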