Scanning the world to bring it alive: the impact of the new iPad Pro on AR
Last Wednesday, Apple announced a new iPad Pro. It contains the usual incremental improvements to processor speed, display quality and cameras, as well as a new keyboard stand with an integrated trackpad. One entirely new hardware feature stands out: the addition of a LiDAR scanner to the camera bump on the back of the device.
LiDAR is a type of 3D laser scanner that works on the same principle as radar, but with light: it sends out laser pulses, picks up their reflections with a sensor, and measures how long the light took to return from each point to construct a 3D model of the environment. It is what self-driving cars and robots use to sense and model the real world, and now it’s on an iOS device for one specific reason: to take Augmented Reality to the next level. How? Let us explain.
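To make that time-of-flight idea concrete, here’s a tiny sketch of our own (an illustration, not Apple’s API) of how a single pulse’s round-trip time turns into a distance:

```swift
import Foundation

/// Speed of light in metres per second.
let speedOfLight = 299_792_458.0

/// Converts the measured round-trip time of a single laser pulse (in seconds)
/// into the distance to the surface that reflected it. The pulse travels out
/// and back, so we halve the total path.
func distance(forRoundTripTime t: TimeInterval) -> Double {
    speedOfLight * t / 2.0
}

// A pulse that comes back after about 33 nanoseconds hit something roughly 5 m away.
print(String(format: "%.2f m", distance(forRoundTripTime: 33e-9)))  // ≈ 4.95 m
```

Repeat that measurement for a grid of points many times per second and you get a live depth map of the room.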
First, it will make mapping and modeling the real world much faster. Without LiDAR, augmented reality relies on computer vision: taking the camera image and analyzing how it changes as you move around the room. As a result, using AR today requires carefully moving your device around the space so that it can recognize surfaces like the floor, a tabletop or a wall — and this can sometimes take a while. LiDAR, on the other hand, doesn’t require movement to understand the world; it’s effectively instantaneous.
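For contrast, here’s roughly what that computer-vision flow looks like in ARKit today, simplified and with error handling omitted: we ask for plane detection, then wait for anchors to show up while the user sweeps the device around.

```swift
import ARKit

// Today’s camera-based flow: ask ARKit to detect planes, then wait for
// anchors to appear while the user sweeps the device around the room.
class PlaneFinder: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)
    }

    // Called once ARKit has seen enough parallax to be confident a surface
    // exists, which can take several seconds of deliberate movement.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected a plane with extent \(plane.extent)")
        }
    }
}
```

A depth sensor skips that waiting game: the geometry is measured directly instead of inferred from how the image moves.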
Second, it will make the mapping and modeling of the real world much more accurate. Computer vision-based AR makes probabilistic estimates from noisy image data, sensor fusion and sometimes machine learning, often with great results but always with some margin of error. LiDAR gives us ground truth: direct measurements, at high resolution and high frequency.
Third, it will solve many of the limitations of augmented reality, like its inability to recognize very bright, dark or textureless surfaces. Ever tried to place an AR object on a shiny white wall, or a matte black couch? These limitations of current AR technology are hard to solve and often difficult to explain to users — they just expect the magic to work, and we don’t blame them. Because LiDAR measures distances with its own laser pulses rather than relying on visible texture or ambient light, it can map these surfaces regardless of how they look to a camera.
In short, active depth sensing will dramatically shift the quality of mobile AR from "cool... when it works" to "it just works!". Not only that, but features that are very hard for developers like us to implement today, such as making a high-quality 3D scan of a real-world object or occluding virtual objects behind real ones, will become much easier.
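Based on the scene-reconstruction and scene-understanding options Apple documents for ARKit and RealityKit, enabling LiDAR-backed occlusion could be as short as this (a sketch, assuming an ARView set up elsewhere):

```swift
import ARKit
import RealityKit

// Turn on LiDAR-backed scene reconstruction and let RealityKit occlude
// virtual content behind the reconstructed real-world mesh.
// Assumes an ARView created elsewhere (e.g. in a view controller).
func enableSceneUnderstanding(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // Mesh reconstruction is only supported on devices with the LiDAR scanner.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // Hide virtual objects whenever real-world geometry sits in front of them.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)

    arView.session.run(configuration)
}
```

The reconstructed mesh gives the renderer real geometry to test against, which is exactly the kind of thing that is hard to approximate from camera images alone.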
So why is this only on iPad Pro? Likely for good reasons: component cost, physical size and battery usage. Good commercially available LiDAR scanners have traditionally been the size of a fist and cost hundreds of dollars; the fact that Apple can now squeeze one into the camera bump of a mobile device is impressive. It makes a lot of sense for this to start appearing on iPhones as well — though we’re curious to see if it’ll be an iPhone Pro feature first, or across the line.
Above all, we’re tremendously excited to see Apple invest in the progress of high-end mobile AR with custom hardware, and we can’t wait to get started with it.