
Advancing Technologies for The Lion King

This talk was originally presented at SIGGRAPH 2019.

R&D
September 24, 2019
Authors

Kai Wolter (MPC R&D)

Abstract

From believable characters to realistic environments and documentary-style cinematography, MPC Film had to solve a number of technological challenges to bring the classic Disney tale The Lion King back to the big screen in a new, highly photo-realistic look.
In addition to our animators reviving the beloved original cast through hand animation, we extended the tool set of our Groom and Technical Animation departments to create even more realistically moving fur than seen in the 2016 The Jungle Book.
We also rewrote our fur shader to meticulously capture the appearance of the characters’ real-world counterparts.
We recreated 150 square kilometers of the original landscapes in 3D to give the director the freedom to virtually explore the sets, choose from a wide range of lenses and capture the action and environment up to the horizon at a very high level of detail.
New workflows had to be devised to create and handle this complexity across multiple departments. Our rendering and compositing pipeline was revised to deliver the look well known from animal documentaries shot in Africa. These challenges gave us the opportunity to rethink existing tools and workflows and to focus further on optimizations in order to deliver close to 1,500 shots.

Fur

Based on the artists' feedback from our work on The Jungle Book, we provided our Technical Animation department with a full integration of our latest in-house fur software, Furtility, in SideFX Houdini, extending our Maya and proprietary grooming and simulation tool set. This gave us the unique opportunity to combine the strengths of our procedural grooming software with the node-based workflow and simulation capabilities of Houdini. Using Houdini's new Vellum solver we could produce more detailed simulations more efficiently. Recent advances to Furtility's procedural workflow [Andrus 2018] gave artists a higher level of control over the dynamic hairs and the data of each groom based on the needs of the shot. Thanks to this seamless integration, all groom attributes, including e.g. wetness, were exposed to the artists as native Houdini attributes, allowing them to drive simulation setups and override properties of the groom when needed. The changes were layered on top of the groom and eventually also exposed to our lighters in Katana, enabling them to make further modifications if needed.
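
As a rough illustration of how an exposed groom attribute might be layered in Houdini, the sketch below overrides a per-point wetness value inside a Python SOP without touching the underlying groom data. The attribute name, the waterline parameter and the layering rule are hypothetical and stand in for the much richer Furtility setup; the snippet assumes it runs inside a Houdini Python SOP.

```python
# Hypothetical Python SOP sketch: layer a shot-specific "wetness" override
# on top of the groom's base attribute without modifying the groom itself.
node = hou.pwd()                      # the Python SOP we are running inside
geo = node.geometry()                 # editable copy of the incoming groom curves

waterline_y = node.evalParm("waterline")   # assumed float parameter on the node

# Ensure the point attribute exists (it would normally come from Furtility).
if geo.findPointAttrib("wetness") is None:
    geo.addAttrib(hou.attribType.Point, "wetness", 0.0)

for point in geo.points():
    base = point.attribValue("wetness")        # value published by the groom
    below = point.position()[1] < waterline_y  # crude "dipped in water" test
    override = 1.0 if below else 0.0
    # Layer the override on top of the base value; the max keeps any wetness
    # already painted by the groom artist.
    point.setAttribValue("wetness", max(base, override))
```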

In addition to the grooming and simulation of hair and fur, we reinvented our fur shader to match the look of real-world animals more closely. We introduced longitudinal and azimuthal distributions to make strands resemble fibers instead of ribbons (an artifact often observed in previous approaches). We modelled lobes to simulate the medulla inside the cortex of the hair shaft, replicating the optical properties distinctive to fur, as well as practical lobes to control the amount of dirt, dust and wetness. A focus on physical plausibility and energy conservation helped to ensure consistency between shots (allowing artists to focus on the creative aspects of lighting) and at the same time reduced render times by up to 30% compared to off-the-shelf shaders.
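
The shader itself is proprietary, but the energy-conservation idea can be sketched independently: the gains of the individual lobes (the classic reflection and transmission paths plus an extra medulla-scattering lobe) are normalised so that their sum never exceeds one, which keeps brightness consistent when lobes are rebalanced per shot. The lobe names and values below are illustrative only, not the production defaults.

```python
# Minimal sketch of keeping a multi-lobe hair/fur BSDF energy conserving:
# if the lobe gains sum to more than 1, scale them down uniformly so the
# shader can never reflect more energy than it receives.

def normalise_lobe_gains(gains):
    """gains: dict of lobe name -> artist gain, e.g. R, TT, TRT, medulla."""
    total = sum(gains.values())
    if total <= 1.0:
        return dict(gains)                # already energy conserving
    scale = 1.0 / total
    return {name: g * scale for name, g in gains.items()}

# Illustrative values only.
lobes = {"R": 0.4, "TT": 0.5, "TRT": 0.3, "medulla": 0.2}
print(normalise_lobe_gains(lobes))        # gains sum to 1.0 after normalisation
```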

Environments

The Environments team built 150 square kilometers of sets including Pride Rock, the vast Pride Lands, lush forests and deserts covered with a wide variety of flora. To bring the environments to life and integrate them seamlessly with the realistic characters, an exceptional attention to detail was necessary. Our Houdini pipeline was extended to allow the Environment team to build tools to procedurally generate believable natural scenery. This included the growth of vegetation, erosion effects, the scattering of debris and the modelling of intricate inter-dependencies between the elements of the environments. Compared to our previous scattering pipeline [Cieri et al. 2016], a recipe-based projection setup allowed us to automatically recreate the setup any time we needed to update the environment due to changes to hand-modelled assets such as the ground, landmarks and rocks.
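
One way to picture the recipe-based setup is as a declarative description of every scatter layer that can be replayed whenever the hand-modelled ground changes. The structure, field names and scatter logic below are a hypothetical sketch for illustration, not MPC's actual recipe format.

```python
# Hypothetical sketch of a "recipe" describing scatter layers declaratively,
# so the scatter can be rebuilt automatically when the terrain asset updates.
import json
import random

recipe = json.loads("""
{
  "layers": [
    {"name": "pebbles", "asset": "rock_pebble_set", "density": 4.0, "seed": 101},
    {"name": "grass",   "asset": "grass_clumps",    "density": 9.0, "seed": 202}
  ]
}
""")

def rebuild_scatter(recipe, terrain_points):
    """Re-run every layer deterministically against the latest terrain."""
    instances = []
    for layer in recipe["layers"]:
        rng = random.Random(layer["seed"])          # same seed -> same layout
        count = int(len(terrain_points) * layer["density"])
        for _ in range(count):
            p = rng.choice(terrain_points)          # pick a ground sample
            instances.append({"asset": layer["asset"], "position": p,
                              "rotation": rng.uniform(0.0, 360.0)})
    return instances

# A new terrain version simply means calling rebuild_scatter() again.
print(len(rebuild_scatter(recipe, [(0, 0, 0), (1, 0, 2), (3, 0, 1)])))
```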

To be able to render and simulate these vast environments with millions of pebbles, rocks, moving grass blades and plants, we needed to find a way to reduce the complexity as early as possible and promote selected areas for hero simulations. Inspired by [Repasky et al. 2013], we introduced a template-based, procedural processing step to optimize each environment per shot: based on the camera and configurable heuristics, the level of detail for each instance was calculated. Any instances not within the camera's frustum, occluded by the environment or too small to be seen were automatically culled. In addition, TDs could add sequence- or shot-specific edits to these operations to further reduce the complexity. This way we were able to produce a highly optimized per-shot representation of the scatter, with the geometrical complexity often reduced by 95 percent. This in turn helped create localised simulation set-ups and higher-quality QC dailies (providing more information earlier in the pipeline), and further reduced the cost of our final renders.
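
A stripped-down version of such a heuristic is shown below: each instance is tested against the camera frustum and its projected size on screen, then either culled or assigned a level of detail. The thresholds and the simple pinhole model are assumptions for illustration; the production step also handled occlusion by the environment and per-shot TD edits.

```python
# Simplified per-shot culling / LOD sketch: keep an instance only if its
# bounding sphere is inside the camera frustum and large enough on screen.
import math

def classify_instance(center, radius, cam_pos, cam_dir, fov_deg,
                      image_height_px, min_pixels=2.0):
    # Vector from camera to instance and its distance along the view direction.
    to_inst = [c - p for c, p in zip(center, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_inst))
    if dist < 1e-6:
        return "hero"
    forward = sum(v * d for v, d in zip(to_inst, cam_dir))  # cam_dir is unit length

    # Crude frustum test: behind the camera, or farther off-axis than a cone
    # of half the field of view (padded by the bounding radius) -> cull.
    half_fov = math.radians(fov_deg) * 0.5
    off_axis = math.acos(max(-1.0, min(1.0, forward / dist)))
    if forward < -radius or off_axis > half_fov + radius / dist:
        return "culled"

    # Projected size in pixels under a simple pinhole model.
    pixels = (radius / (forward + 1e-6)) * (image_height_px / (2.0 * math.tan(half_fov)))
    if pixels < min_pixels:
        return "culled"            # too small to be seen
    if pixels < 20.0:
        return "lod_low"
    if pixels < 200.0:
        return "lod_mid"
    return "hero"                  # candidate for hero simulation

print(classify_instance((0, 0, 50), 0.5, (0, 0, 0), (0, 0, 1), 40.0, 1080))
```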

As the scatter comprised static geometry like pebbles as well as moving grass blades and plants, we extended our pipeline to efficiently store and render these objects using instanced geometry, supporting both static and animated geometries with per-instance offsets and allowing artists to either choose from a wide range of predefined motions or simulate shot-specific phenomena and character interactions.
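
The per-instance offsets can be thought of as a small record stored with every copy: which prototype or animation clip it uses and how far its local clock is shifted, so that thousands of grass blades can share a handful of baked motions without moving in lockstep. The record layout below is a hypothetical sketch, not the actual file format.

```python
# Hypothetical sketch of time-offset instancing: many instances share a few
# baked animation clips, each shifted by a per-instance frame offset.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instance:
    prototype: str          # e.g. a grass clump or pebble asset
    position: tuple         # placement from the scatter (position only, for brevity)
    clip: Optional[str]     # None for static geometry such as pebbles
    frame_offset: int = 0   # shifts the clip so instances do not move in lockstep

def resolve_sample(instance, shot_frame, clip_lengths):
    """Return which frame of the source clip to read for this instance."""
    if instance.clip is None:
        return None                                   # static: no animation lookup
    length = clip_lengths[instance.clip]
    return (shot_frame + instance.frame_offset) % length

clips = {"grass_sway_gentle": 120}
blade = Instance("grass_clump_a", (12.0, 0.0, 3.5), "grass_sway_gentle", 37)
print(resolve_sample(blade, 1001, clips))             # frame within the shared clip
```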

Rendering

As the director's intent was to capture the look and feel of the lenses used in traditional animal documentaries, we had to account for rendering setups covering the characteristics of very wide to tight telephoto lenses and aperture settings. Depth of field (DOF) played a particularly important role. Instead of producing the effect at render time, we decided to rely entirely on deep compositing for this project and calculate the out-of-focus effect in Compositing.
This resulted in a believable blur, blending furry characters and environments seamlessly with volumetric effects. Decoupling the effect from the renderer not only enabled our compositors to tweak the look per shot without having to re-render the 3D scene, but also sped up review workflows where the effect was not yet required.
Additional artistic control was provided by an extended set of deep and recoloring nodes inside Nuke [Pieké et al. 2018b]. Initial tests quickly confirmed that we also had to change our existing deep compositing pipeline to make it scale to our demands: amongst other changes, we significantly improved the performance of the deep operations in Nuke and introduced several compression schemes [Pieké et al. 2018a] to substantially reduce the IO requirements.
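
For intuition, the amount of defocus to apply to a deep sample can be derived from the thin-lens circle of confusion, using the sample's depth together with the shot's focal length, f-stop and focus distance. The helper below is the generic textbook formula, not MPC's proprietary deep defocus nodes, and the camera values are illustrative.

```python
# Thin-lens circle-of-confusion sketch: how much blur a deep sample at a given
# depth would receive, from the camera settings stored with the shot.

def circle_of_confusion_mm(depth_m, focus_m, focal_length_mm, f_stop):
    """Diameter of the blur circle on the sensor, in millimetres."""
    f = focal_length_mm / 1000.0           # work in metres
    aperture = f / f_stop                  # aperture diameter
    coc = aperture * f * abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return coc * 1000.0                    # back to millimetres

def coc_to_pixels(coc_mm, sensor_height_mm=24.0, image_height_px=2160):
    return coc_mm / sensor_height_mm * image_height_px

# Example: a 100 mm telephoto at f/2.8 focused at 8 m; the sample sits at 20 m.
coc = circle_of_confusion_mm(depth_m=20.0, focus_m=8.0,
                             focal_length_mm=100.0, f_stop=2.8)
print(round(coc_to_pixels(coc), 1), "px blur diameter")
```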

We extended RenderFlow, our in-house system to create high quality previews for review [Auty et al. 2016], to provide consistent look-dev and lighting across departments and renderers. We provided our lighters with a sequence render tool, SQR, enabling them to automatically render and composite preview- or final-quality shots in batch from a single interface. We also collaborated with the Pixar RenderMan team to take advantage of the latest advances in curve rendering, check-pointing and cloud rendering to further increase quality and reduce render times.
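
To illustrate the single-interface idea, the sketch below builds one render and one comp task per shot at the requested quality from a single entry point. The command names, flags and shot names are placeholders, since SQR and RenderFlow are internal tools; a real driver would dispatch the tasks to the render farm instead of printing them.

```python
# Hypothetical batch driver sketch in the spirit of SQR: one render task and
# one comp task per shot, at either preview or final quality.

def build_tasks(shots, quality="preview"):
    tasks = []
    for shot in shots:
        # Placeholder command lines; the real tools and flags are MPC-internal.
        tasks.append(["render_shot", shot, "--quality", quality])
        tasks.append(["comp_shot", shot, "--quality", quality])
    return tasks

if __name__ == "__main__":
    for task in build_tasks(["sq100_sh010", "sq100_sh020"], quality="final"):
        print(" ".join(task))   # a farm submission would dispatch these instead
```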

Future

We intend to evaluate alternative deep data representations for more efficient and accurate compositing and re-lighting operations. We also look forward to moving our environment and scatter pipeline to USD and to investigating more interactive DOF approaches.

Acknowledgements

Special thanks go to Adam Valdez, Elliot Newman, Simon Jones, Oliver Winwood, James Austin, Julien Bolbach and Kirstin Hall for providing endless practical insight, feedback and support, as well as to Igor Skliar for his work on the fur shader, and to all developers and TDs who helped make these technologies possible.

References

Curtis Andrus. 2018. Layering Changes in a Procedural Grooming Pipeline. In Proceedings of the 8th Annual Digital Production Symposium (DigiPro ’18). ACM, New York, NY, USA, Article 4, 3 pages. https://doi.org/10.1145/3233085.3233094

Jared Auty, Marlène Chazot, Ruben D. Hernandez, and Marco Romeo. 2016. Rapid, High Quality Dailies with RenderFlow for The Jungle Book. In ACM SIGGRAPH 2016 Talks (SIGGRAPH ’16). ACM, New York, NY, USA, Article 70, 2 pages. https://doi.org/10.1145/2897839.2927415

Stefano Cieri, Adriano Muraca, Alexander Schwank, Filippo Preti, and Tony Micilotta. 2016. The Jungle Book: Art-directing Procedural Scatters in Rich Environments. In Proceedings of the 2016 Symposium on Digital Production (DigiPro ’16). ACM, New York, NY, USA, 57–59. https://doi.org/10.1145/2947688.2947692

Rob Pieké, Yanli Zhao, and Fabià Serra Arrizabalaga. 2018a. Deep Thoughts on Deep Image Compression. In ACM SIGGRAPH 2018 Talks (SIGGRAPH ’18). ACM, New York, NY, USA, Article 74, 2 pages. https://doi.org/10.1145/3214745.3214753

Rob Pieké, Yanli Zhao, and Fabià Serra Arrizabalaga. 2018b. Recolouring Deep Images. In Proceedings of the 8th Annual Digital Production Symposium (DigiPro ’18). ACM, New York, NY, USA, Article 10, 3 pages. https://doi.org/10.1145/3233085.3233095

Zachary Repasky, Patrick Schork, Kevin McNamara, and Susan Fong. 2013. Large Scale Geometric Visibility Culling on Brave. Pixar Technical Memo #13-05. 1 page. https://graphics.pixar.com/library/VisibilityCulling
