Augmented Reality · June 19, 2018

How Apple and Google are taking their first steps towards the AR Cloud

Kenny Deriemaeker

AR/VR Competence Lead

At its yearly Worldwide Developers Conference in San Jose last week, Apple announced ARKit 2, a major update to its Augmented Reality framework for iPhone and iPad. When ARKit was first introduced at WWDC 2017, it made iOS the world's largest consumer platform for Augmented Reality, and it put the technology itself in the spotlight after decades of being confined to academic research, military uses and simple marker-based gimmicks for consumers.

ARKit shows Apple's strategic commitment to AR, a technology they believe will transform the way we interact with the digital world in the coming years. There are persistent rumors that Apple is working on a wearable AR headset for consumers, to be released sometime in 2021.

Regardless of where Apple's roadmap is ultimately going, ARKit 2 takes several big steps in the right direction by advancing what can be done today on our smartphones and tablets. Let's take a look.


Performance, Quality and System-Level Integration

ARKit 2 makes subtle but significant improvements to the quality of AR on iOS. For starters, it shortens the time needed to start an AR session (the part where you need to wave your phone around for a while until the framework understands your environment), so getting in and out of AR puts less of a burden on the user.

It also introduces APIs to generate realistic lighting and reflections based on what the camera sees in the real environment, with Machine Learning filling in the gaps. So if you are putting down a virtual car in your driveway, for example, you'll actually see the shape and color of your house reflected on it. This gives the visual quality of all AR experiences on iOS a boost without requiring much effort at all from developers, which is exactly the kind of feature we like at In The Pocket.
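Enabling this takes a single configuration option. Here's a minimal sketch, assuming an ARSCNView named sceneView (the view name is our assumption, not part of Apple's sample code):

```swift
import ARKit

// A minimal sketch of enabling ARKit 2's environment texturing.
// With .automatic, ARKit places environment probes for you and uses
// machine learning to fill in parts of the reflection map the camera
// hasn't actually seen yet.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
configuration.environmentTexturing = .automatic  // new in iOS 12 / ARKit 2

sceneView.session.run(configuration)
```

Virtual objects rendered with physically based SceneKit materials then pick up the generated reflections automatically.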

iOS 12 also integrates ARKit at a system level using the new USDZ file format, enabling AR content to be processed and displayed by apps just as easily as regular images or video files.
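In practice that means any app can present a USDZ model in AR Quick Look with a few lines of code. A minimal sketch, where apples.usdz is a hypothetical model bundled with the app:

```swift
import UIKit
import QuickLook

// A sketch of previewing a USDZ model with AR Quick Look.
// "apples.usdz" is a hypothetical asset name for this example.
class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {

    func showARPreview() {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        present(previewController, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so a file URL is all we need.
        let url = Bundle.main.url(forResource: "apples", withExtension: "usdz")!
        return url as NSURL
    }
}
```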

A real banana reflected in a virtual bowl of apples


Shared and Persistent Experiences

The support for shared and persistent AR experiences is the most impactful addition by far. Until now, virtual content placed in the environment wouldn't survive across sessions and couldn't be seen by other users on a different device. In mobile AR 1.0, every experience was by necessity local and ephemeral.

To overcome these limitations, ARKit 2 introduces World Maps: data structures that capture a 3D map of the environment for the application to store or share with other users. This lets the connection between the physical world and the virtual elements in an AR app or game be shared across devices and across time, which unlocks lots of new use cases.
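Capturing a World Map is straightforward: you ask the running session for its current map and serialize it. A minimal sketch, where sceneView and mapURL are our own hypothetical names:

```swift
import ARKit

// A sketch of capturing the current ARWorldMap and archiving it to disk
// so a later session (or another user) can restore it.
func saveWorldMap(to mapURL: URL, from sceneView: ARSCNView) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("No world map yet: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            // ARWorldMap supports NSSecureCoding, so it can be serialized directly.
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try data.write(to: mapURL, options: .atomic)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}
```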

Multiplayer gaming is an obvious new use case: Apple showed off a collaborative Lego building experience on stage at WWDC, and has open-sourced a game where two players can take shots at each other using virtual catapults. It's great fun, although a little weird for spectators who can't see the virtual content.
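Sharing boils down to getting that archived map onto the other player's device, for example over MultipeerConnectivity. A sketch, assuming mcSession is an already established MCSession and data is the archived World Map from the previous example:

```swift
import MultipeerConnectivity

// A sketch of sending an archived ARWorldMap to nearby connected peers.
func share(worldMapData data: Data, via mcSession: MCSession) {
    guard !mcSession.connectedPeers.isEmpty else { return }
    do {
        // .reliable guarantees in-order delivery; world maps can be several MB.
        try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    } catch {
        print("Failed to send world map: \(error)")
    }
}
```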

The real excitement for us at In The Pocket lies in the non-gaming use cases. ARKit's World Maps make contextual augmented reality possible: AR experiences that are closely tied to a particular real-world location or object. If you can make a 3D model of a place and localize the user's device within that model using a World Map, you can display all kinds of useful information on top of reality in a context that the user intuitively understands. Apps could guide you to your terminal at the airport, show you where in the store the contents of your shopping list can be found, or visualize where cables and pipes are underneath the floor with great precision. AR suddenly becomes useful in the real world.
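Restoring such an experience is the mirror image of capturing it: unarchive the World Map and hand it to a new session, which will then try to relocalize. A sketch, reusing the hypothetical mapURL and sceneView names from above:

```swift
import ARKit

// A sketch of restoring a saved ARWorldMap so a new session relocalizes
// against it and previously placed anchors reappear where they were.
func restoreWorldMap(from mapURL: URL, into sceneView: ARSCNView) throws {
    let data = try Data(contentsOf: mapURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: data) else {
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap  // ARKit relocalizes against this map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    // Once tracking returns to .normal, the anchors from the map (and any
    // virtual content attached to them) show up in their original places.
}
```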

Find out what AR can do for you

To identify how your customers could interact with an AR environment, we designed the Roadmap to AR: a fast-paced, hands-on track tailored to discovering opportunities and use cases in the exciting world of Augmented Reality.

Learn more


Painting the World with Data

To be clear, World Maps aren't the ultimate solution to the problem of contextual AR. Capturing a map has to be done in person with an iOS device, which is time-consuming, and localization within the map isn't guaranteed to work reliably in every environment. Also, it is completely up to the application developer to manage and store the mapping data — and to worry about the privacy aspects of handling 3D maps of people's personal spaces. ARGDPR, anyone?

World Maps, however, are definitely a big and useful step towards the AR Cloud: the ambitious idea of mapping the entire world (indoors and outdoors) in 3D so it can be used as a canvas for digital experiences. Think of the internet, but based on the physical environment; or, as AR commentator Charlie Fink poetically calls it, "painting the entire world with data".

This may sound like a ludicrous undertaking, but many efforts are underway to make it a reality, from tech giants and Silicon Valley start-ups to open-source, government-supported initiatives.

On the left: a table that is used as a reference point for an AR experience; on the right, the 3D point cloud of that table as ARKit sees it.

A few weeks ago we attended Augmented World Expo in Santa Clara, the world's foremost conference on AR and VR. We met a lot of deeply technical startup founders with an interesting mix of wild-eyed vision and pragmatism, many of them hoping to build some critical part of this almost-mythical AR Cloud. The unifying message among these competing startups is that the infrastructure underlying our mixed-reality future should be technically sound, open-source to some degree, and definitely not under the control of Google, Apple or Amazon.

Apple isn't offering an AR cloud solution yet, nor are they opening up their world mapping algorithms as an open standard that can interoperate with other platforms. We believe they will eventually have to, if they want to realize the full potential of contextual AR on their devices. So we're keeping our fingers crossed for an "ARCloudKit" at a future WWDC.


What about Google?

Apple isn't the only tech giant betting on Augmented Reality as a critical part of our digital future. Google has been serious about AR for years, beginning with the launch of Project Tango in 2014. Tango's tech was several years ahead of the competition, but it made the unfortunate choice of requiring special depth-sensing hardware, a bridge too far for most smartphone manufacturers. After the announcement of ARKit last year, Google scrambled to take the parts of Tango that could work on any smartphone and repackage them as ARCore, a framework very similar to Apple's offering.

Two Android phones playing a multiplayer game using Cloud Anchors.

ARCore was recently updated as well: at their yearly I/O conference, Google announced version 1.2 of the framework, with several major new features mostly in line with ARKit's. What's interesting is how Google is approaching the move towards the AR Cloud, and how their approach differs from Apple's.

In typical fashion, Google relies on the cloud instead of the processing power of the device: their version of World Maps is called Cloud Anchors and works exclusively through Google's cloud services. The data captured by the user's camera is sent to Google and stored on their servers for up to seven days, after which it is deleted. In that sense Google's solution is still ephemeral to a degree; they are focusing more on sharing and less on persistence. There are also legitimate concerns about personal camera data being sent to Google's servers and stored there — especially now that Apple has demonstrated a similar solution that doesn't require any access to your data at all.

Unlike ARKit's World Maps, Google's Cloud Anchors also work on iOS: as a developer you can make multiplayer AR games that are shared in real time between an Android phone and an iPhone. This is a winning move in our book — Apple, please take notice.
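For the curious, here's roughly what hosting and resolving a Cloud Anchor looks like from the iOS side, using the GARSession class from Google's ARCore SDK for iOS. Treat the Swift signatures below as assumptions sketched from the SDK's Objective-C interface rather than verified sample code:

```swift
import ARKit
// import ARCore  // Google's ARCore SDK for iOS (GARSession and friends)

// A sketch of hosting and resolving Cloud Anchors. Exact Swift method
// names are assumptions based on the Objective-C API; check the SDK docs.
class CloudAnchorManager: NSObject, GARSessionDelegate {
    var garSession: GARSession?

    func startSession() throws {
        // "YOUR_API_KEY" is a placeholder for your own ARCore API key.
        garSession = try GARSession(apiKey: "YOUR_API_KEY", bundleIdentifier: nil)
        garSession?.delegate = self
        // Note: every ARFrame from the ARKit session must also be forwarded
        // to the GARSession via its update method (omitted here).
    }

    // Host: upload the visual data around an ARKit anchor to Google's servers.
    func host(_ anchor: ARAnchor) throws {
        _ = try garSession?.hostCloudAnchor(anchor)
    }

    // Resolve: look up an anchor another user hosted, by its cloud identifier.
    func resolve(identifier: String) throws {
        _ = try garSession?.resolveCloudAnchor(withIdentifier: identifier)
    }

    // The delegate reports success; the cloudIdentifier is the token you
    // share with the other device so it can resolve the same anchor.
    func session(_ session: GARSession, didHostAnchor anchor: GARAnchor) {
        print("Hosted anchor with id: \(anchor.cloudIdentifier ?? "?")")
    }

    func session(_ session: GARSession, didResolveAnchor anchor: GARAnchor) {
        print("Resolved anchor; place content at \(anchor.transform)")
    }
}
```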


What's next?

Google is undoubtedly working on a bigger cloud-based solution for shared and persistent AR experiences — they already teased a Google Maps-like service called the Visual Positioning System (VPS) at a previous I/O. It clearly isn't ready yet, but next year's Google I/O may be when they finally unveil it.

Apple is focusing its AR efforts in the same general direction as Google but in subtly different ways, leveraging their vertical integration and the power of their iOS hardware to avoid privacy issues and provide quality tools and APIs to developers. We just hope they are willing to open up their platform so it can interoperate with other platforms — the only way the full potential of mobile Augmented Reality can be unlocked.

Exciting times!

Want to learn more about AR?

Get the ball rolling and see if our Roadmap to AR is just what you need. Let’s talk possibilities!

Learn more