Diving into Depth Maps
Paul Forest has the scoop on how we create gorgeous 3D scenes
Tinkering away in the depths of the enchanted forest, we try all kinds of things to make our cards the best they can be. One question we keep coming back to: how can we create enhanced 3D effects from our paintings? One recent experiment involved depth maps.
Depth maps have been around as long as the depth (or Z) buffer itself, invented by Pixar co-founder Ed Catmull in 1974. They exploded in popularity around 2018, when modern smartphones could easily capture and store depth information and social networks could readily display it.
Recently, a lot of research has gone into using neural networks to predict a depth map from a single image. Companies like Adobe, Meta, and Snapchat all have a vested interest in letting users on all kinds of hardware easily generate and share images like this. Until recently, capturing depth information required special camera technology: dual lenses, lens masks, and even LIDAR.
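To give a feel for how approachable this has become, here's a minimal sketch using the open-source MiDaS model from PyTorch Hub. To be clear, this is just an illustration of single-image depth estimation, not necessarily the model or pipeline we use, and the file names are placeholders:

```python
import cv2
import torch

# Load the small MiDaS monocular depth estimation model from PyTorch Hub,
# along with its matching input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "painting.png" is a placeholder for any RGB illustration.
img = cv2.imread("painting.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = transform(img)        # resize + normalize into a 1x3xHxW tensor
    prediction = midas(batch)     # inverse relative depth (higher = closer)
    # Resize the prediction back up to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Normalize to 0-255 so it can be saved and inspected as a grayscale image.
depth = prediction.cpu().numpy()
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype("uint8")
cv2.imwrite("depth_map.png", depth)
```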
Other methods of generating 3D scenes from 2D images involve modeling the scene directly, projecting the image onto proxy geometry, or using a “2.5D” process: separating portions of the image into layers and setting each layer at a different depth. All of these methods are manual and time consuming.
Neural networks trained on the massive volume of photo-based depth maps that already exist have proven incredibly effective. But since these networks are mostly used for casual entertainment, and generally on photographs, it was a long shot that they would make sense of our stylized, painterly images.
Fortunately, the same networks that allow old, single-shot photos to be converted to 3D also work well on our illustrations.
So an image like Skip here can be analyzed to generate a depth map like this one.
When the depth map is combined with the right shader, you get results like the image below.
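To give a rough sense of what such a shader does, here's a small CPU-side sketch in Python. It shifts each pixel sideways in proportion to its depth value, which is the basic parallax trick a real fragment shader would perform per-pixel on the GPU. The file names and offset strength are illustrative assumptions, not our actual shader:

```python
import cv2
import numpy as np

def parallax_shift(image, depth, offset_px=8.0):
    """Shift each pixel horizontally in proportion to its depth.

    A real implementation would do this per-fragment in a GPU shader;
    this NumPy version just illustrates the idea.
    """
    h, w = depth.shape
    d = depth.astype(np.float32) / 255.0  # normalize depth to [0, 1]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Nearer pixels (higher depth values) move farther as the "camera" pans.
    src_x = np.clip(xs + d * offset_px, 0, w - 1).astype(np.int32)
    return image[ys, src_x]

# "painting.png" and "depth_map.png" are placeholders.
img = cv2.imread("painting.png")
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)
frame = parallax_shift(img, depth, offset_px=12.0)
cv2.imwrite("parallax_frame.png", frame)
```

Animating that offset over time, say with a gentle sine wave or by tying it to the cursor, is what makes the scene feel like it has real depth.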
Thanks for reading! We’re always looking for ways to make your game experience delightful and fresh.
-Paul