Exploring the Apple Vision Pro: First impressions from our developers
After immersing ourselves in the world of the Apple Vision Pro for a few weeks, we're excited to share our insights and learnings from a developer’s perspective. In this first review, we'll delve into the highs and lows of our experience with this much-anticipated new Apple device.
Limited WebXR Support
When it comes to web browsing on the Apple Vision Pro, our feelings are mixed. While Safari does support WebXR, it's hidden behind a feature flag. Even then, support is limited to immersive VR sessions, with no passthrough AR functionality. We eagerly await the potential addition of WebAR, which could significantly enrich the immersive web experience on this platform.
For those who want to know: there is a workaround for showing 3D content in passthrough on the web via Quick Look (the <model> tag). However, its functionality remains limited, and it falls short of a truly immersive XR experience.
Privacy: Balancing Security and Capability
Privacy is a paramount concern for Apple, and that shows in the Vision Pro's design. While their dedication to privacy is commendable, it does come with trade-offs. Access to eye-tracking data and the raw camera feed is restricted, which is good for privacy but limits the device's potential to fully leverage computer vision and generative AI.
Intuitive Development Environment
Developing for visionOS feels intuitive, thanks to Apple's user-friendly approach. The platform, leaning more towards macOS than iOS, offers a straightforward setup process, particularly for those familiar with SwiftUI.
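To give a feel for how little setup is involved, here's a minimal sketch of a visionOS app entry point in SwiftUI. The view names and the volume size are our own illustrative choices, not part of any particular project:

```swift
import SwiftUI

@main
struct VisionDemoApp: App {
    var body: some Scene {
        // A regular 2D window, much like on macOS or iOS.
        WindowGroup {
            ContentView() // placeholder view
        }

        // A volumetric window for displaying 3D content.
        WindowGroup(id: "volume") {
            VolumeView() // placeholder view
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}
```

If you already know SwiftUI's `App` and `WindowGroup` from other Apple platforms, the only genuinely new concepts here are the volumetric window style and its physical default size.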
And thanks to the Vision Pro simulator, you don't need ten devices in the office to have a whole team working on it. The simulator is a great way to test your product without constantly putting on the headset.
On top of that, you’re able to preview windows and volumes without having to build and run the app! It’s worth noting here that ARKit features cannot be tested in the simulator, which means you need real-device testing to assess the full functionality of your app.
The proof is in the pudding
By now, we've successfully created two proofs of concept (PoCs) to test the Vision Pro's capabilities in a short timeframe, with promising results.
Our first PoC lets you select an item in a 2D window and seamlessly place its 3D counterpart in the real world. We know we're just scratching the surface here, but it makes us hungry for more!
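The 3D-placement side of a PoC like this can be sketched with RealityKit's `RealityView`. The view and property names below are hypothetical, and the model name stands in for whatever .usdz asset ships with the app:

```swift
import SwiftUI
import RealityKit

// Hypothetical volume view: shows the 3D counterpart of an
// item the user picked in a 2D window.
struct ItemVolumeView: View {
    let modelName: String // name of a .usdz asset in the bundle

    var body: some View {
        RealityView { content in
            // Entity(named:) loads the model asynchronously.
            if let entity = try? await Entity(named: modelName) {
                content.add(entity)
            }
        }
    }
}
```

The selection in the 2D window would then simply open this volume with the chosen model name.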
In a second PoC, we tried blending in generative AI and succeeded! This experiment allows users to generate their own images from a prompt, a first step in exploring the device's generative AI potential. For this, our developer used SDXL with ControlNet to embed the logo in the image.
Challenges with 3D Models
However, while developing a PoC for one of our clients, some challenges came up when dealing with 3D models, since Apple only supports the .usdz format. Converting models with external tools like Blender resulted in lost textures. Thankfully, Apple's Reality Converter offers a viable solution, effectively converting models to the required .usdz format.
Moving forward, we're eager to explore alternative frameworks such as Flutter, React Native, and Unity, broadening our scope beyond Native Vision Pro apps.
Our conclusion
Our initial experiences with the Apple Vision Pro have been largely positive. While it may not immediately dazzle with its capabilities, its robust OS and accessible development platform suggest it could become a big player within Apple's ecosystem. Despite challenges like limited WebXR support and privacy trade-offs, we remain committed to exploring the possibilities of the Vision Pro and XR technology in general.