Boosting the sales of VR entertainment by letting visitors share their experience
Since 2018, visitors have been able to experience a unique VR adventure in Belgium: chasing zombies with friends, defusing bombs, stopping a bank robbery... In its first year, more than 50,000 visitors played at The Park. There are now five Belgian locations, and investors Telenet and 9.5 Magnitude Ventures plan to conquer the United States with the concept.
Discovering new technology with friends in a playful way is an experience you want to share with as many people as possible. The Park depends heavily on word-of-mouth advertising, but social media remained a blind spot for a long time: if you wanted to share photos of your game at The Park, all you could show was people wearing VR headsets in an empty room.
To further boost the social selling of The Park, we developed a pipeline that produces a video showing the players inside the game, based on a real-life capture.
Achieving that result posed a technical challenge: blending physical people into a virtual context. What made it especially difficult is that working with a green screen wasn't an option. We solved the problem by combining two of our favourite technologies: virtual reality and artificial intelligence.
To tackle a complex problem like this one, the first thing we did was wireframe the complete pipeline on our good old-fashioned whiteboard. Nothing is more important than thinking things through before you dive into the code. After several iterations and conversations with many of the great minds at our company, we designed a modular system that takes in VR footage, game data and real-life footage and merges them into one composite video that eventually gets sent to all the players.
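To make that modularity concrete, here is a minimal Python sketch of the idea (all names here are hypothetical; the actual pipeline isn't published): each capture bundles the real-life frame with the VR render, and the segmentation step sits behind a small interface so it can be swapped out without touching the rest.

```python
from dataclasses import dataclass
from typing import Protocol
import numpy as np


@dataclass
class Capture:
    """One captured moment: real-life frame, VR render and its depth map."""
    real_frame: np.ndarray  # HxWx3, camera footage of the players
    vr_frame: np.ndarray    # HxWx3, rendered game view
    vr_depth: np.ndarray    # HxW, depth of the virtual scene


class Segmenter(Protocol):
    def person_masks(self, frame: np.ndarray) -> np.ndarray:
        """Return an HxW boolean mask of the players in the frame."""
        ...


class Compositor:
    """Merges the segmented players into the rendered game view."""

    def composite(self, capture: Capture, mask: np.ndarray) -> np.ndarray:
        # Naive paste of player pixels; the depth-aware version that
        # handles occlusion is sketched further down.
        out = capture.vr_frame.copy()
        out[mask] = capture.real_frame[mask]
        return out


def run_pipeline(captures: list[Capture], segmenter: Segmenter) -> list[np.ndarray]:
    """The segmenter is injected, so swapping one segmentation backend
    for another only means passing in a different implementation."""
    compositor = Compositor()
    return [compositor.composite(c, segmenter.person_masks(c.real_frame))
            for c in captures]
```

Injecting the segmenter as a dependency is exactly the kind of modularity that later made it cheap to replace the segmentation backend.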
Our first idea was to use the people segmentation Apple had released in ARKit 3.0. This looked worth a shot, and early tests showed we might get pretty far with this feature. While people are playing a game at The Park, the game captures the playing field at interesting moments, both in VR and in real life. From the real-life footage, the players' bodies are extracted by the people segmentation. The depth map from the game is then merged with the one generated from the real-life footage, so that players walking behind a virtual object also get clipped correctly in the final composited video.
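That depth-merging step can be illustrated in a few lines of numpy. This is a simplified sketch under the assumption that both depth maps have been calibrated to a common metric scale (ARKit's people segmentation with depth provides an estimated depth map for the segmented people): a player pixel only makes it into the composite if the segmentation marks it as a person and it is closer to the camera than the virtual geometry at that pixel.

```python
import numpy as np


def composite_with_occlusion(real_frame: np.ndarray,
                             vr_frame: np.ndarray,
                             person_mask: np.ndarray,
                             real_depth: np.ndarray,
                             vr_depth: np.ndarray) -> np.ndarray:
    """Paint player pixels over the VR render, but only where the player
    is in front of the virtual geometry.

    real_frame, vr_frame: HxWx3 uint8 images
    person_mask:          HxW bool, True where segmentation found a player
    real_depth, vr_depth: HxW float32 depth maps, assumed to share one
                          calibrated metric scale (an assumption of this sketch)
    """
    # A player pixel is visible only if it is segmented as a person AND
    # closer to the camera than the virtual scene at that pixel.
    visible = person_mask & (real_depth < vr_depth)
    out = vr_frame.copy()
    out[visible] = real_frame[visible]
    return out
```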
Unfortunately for us, after testing the people segmentation on-site in real-life conditions (so far from optimal ones), we noticed that ARKit struggles with segmentation in poorly lit areas. It also quickly became clear that multiple people overlapping each other in the view resulted in bad segmentation, and thus a bad final composition.
At In The Pocket, we don't just accept it when we hit the limits of a technology; we try to overcome them with solutions of our own. That's why our AR and AI teams joined forces to see if we could implement a different people segmentation algorithm and introduce it into our pipeline. Thanks to our good preparation, the modularity of the pipeline paid off: we removed the basic ARKit segmentation and plugged in the largest and most advanced image segmentation algorithm available from the world-renowned AI research lab of Facebook. Because we wanted (and needed) more processing power, we decided to run this segmentation in Google Cloud.
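The post doesn't name the exact model, but Mask R-CNN served through Detectron2, Facebook AI Research's segmentation library, is a plausible candidate for that backend. As a hedged sketch, here is how such a model could be wrapped as a drop-in replacement for the hypothetical Segmenter interface above, ready to run on a GPU instance in Google Cloud:

```python
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor


class MaskRCNNSegmenter:
    """Hypothetical drop-in replacement for the ARKit segmentation stage,
    intended to run on a GPU instance in the cloud."""

    PERSON_CLASS = 0  # 'person' in the COCO label set

    def __init__(self, score_threshold: float = 0.7):
        cfg = get_cfg()
        cfg.merge_from_file(model_zoo.get_config_file(
            "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
        cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
            "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
        cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = score_threshold
        self.predictor = DefaultPredictor(cfg)

    def person_masks(self, frame: np.ndarray) -> np.ndarray:
        """frame: HxWx3 BGR (OpenCV-style) image.
        Returns one combined HxW boolean mask of all detected people."""
        instances = self.predictor(frame)["instances"]
        people = instances[instances.pred_classes == self.PERSON_CLASS]
        if len(people) == 0:
            return np.zeros(frame.shape[:2], dtype=bool)
        # Union the per-instance masks into a single foreground mask,
        # which also handles multiple overlapping players in one shot.
        return people.pred_masks.any(dim=0).cpu().numpy()
```

With the interface in place, swapping backends is a one-line change: call run_pipeline(captures, MaskRCNNSegmenter()) instead of passing the ARKit-based implementation.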
With the segmentation fixed, we had a working end-to-end pipeline again. To support all of the games you can play at The Park, we created a full Unity plugin that can easily be integrated into new games in the future. Thanks to this fully automated pipeline, you can relive your experience at The Park and send it to all of your friends.