Traditional interfaces have been around for a long time and can be optimised endlessly to provide the perfect experience for a user. One big issue with this approach: not every user is the same. To make experiences well crafted and personal, you can proactively provide the right interface or even complete the goal the user is aiming for.
Users often face too many choices and suffer from decision fatigue. A great example of this is Netflix: once you open the site, you see an endless list of TV shows and movies. You have so much choice that at some point you just stop browsing and decide to do something else. Netflix is aware of this and makes up for it by optimising its recommendations and auto-playing trailers. The faster you start watching a video, the more likely you are to stay inside the app.
So if Netflix can reduce its users' decisions, you can as well. Guide your users when they need to make a decision; the best way to do this is to make decisions invisible: add recommendations, reduce navigation, and so on.
By reducing the user's choices, you give them a more enjoyable experience: fewer steps to take to achieve their goal.
Uber is a great example of user flow simplification: it has streamlined the entire experience of taking a taxi. Let's go through the traditional way of ordering one:
- Call the taxi service
- Tell your destination
- Tell your own location
- Tell the time you want to be picked up
- Wait for the taxi to arrive (without any real-time updates)
- Take the taxi
- Arrive at your location
- Pay the driver
Now the Uber way of ordering a taxi:
- Open the Uber app
- Choose your destination
- Wait for your driver to pick you up (with real-time updates)
- Take the taxi
- Arrive at your location
A prediction is a statement about an uncertain event. It is often based on experience or knowledge.
First off, get to know your user well. You can do this by capturing their preferences, input and behaviour. (Please don’t make it creepy, inform and explain to your user what you will do with their data.)
As an example, user X always goes straight to their favourite podcast and plays the most recent episode after opening your podcast app. If you detect that this behaviour keeps repeating, it is worth saving as a behavioural pattern.
Even without tracking behaviour, you can often easily predict what users are about to do. Calendar appointments, weather conditions, interest in news topics, and favourite music can all be used to guess what a user wants to achieve or be informed about.
So once you've gotten to know your user better, you can predict their next move. Let's help user X: she wants to listen to the latest episode of her favourite podcast. When she opens the app, you can present her with a card at the bottom of the screen that lets her play the episode directly. You have just saved her some time that she can now spend listening to the actual podcast instead of looking for it.
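One minimal way to capture a repeating behaviour like user X's is to count which action each user performs right after launching the app, and promote the dominant one to a shortcut card once it crosses a threshold. This is only a sketch; the class name, action labels, and thresholds below are illustrative assumptions, not a real API.

```python
from collections import Counter

class BehaviourTracker:
    """Counts the first action a user performs after app launch and
    suggests the dominant one once it repeats often enough."""

    def __init__(self, min_occurrences=3, min_share=0.6):
        self.first_actions = Counter()          # action -> times seen
        self.min_occurrences = min_occurrences  # illustrative threshold
        self.min_share = min_share              # action must dominate launches

    def record_launch_action(self, action):
        self.first_actions[action] += 1

    def suggestion(self):
        """Return an action worth surfacing as a shortcut card, or None."""
        total = sum(self.first_actions.values())
        if total == 0:
            return None
        action, count = self.first_actions.most_common(1)[0]
        if count >= self.min_occurrences and count / total >= self.min_share:
            return action
        return None

tracker = BehaviourTracker()
for _ in range(4):
    tracker.record_launch_action("play_latest:favourite_podcast")
tracker.record_launch_action("browse")

print(tracker.suggestion())  # the repeated action qualifies as a pattern
```

The threshold on the action's share of all launches is what keeps the card from appearing after one coincidental repeat, which matters because a wrong suggestion erodes trust faster than no suggestion.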
This might be an obvious example, but you could improve and expand it even further with the right data about your users, or use contextual behaviour to predict what a user wants to achieve.
Both Apple and Google use predictive interfaces to help a user to achieve their goals faster or help them discover new information they were not aware of yet.
Take the Apple Watch: with watchOS 4, Apple introduced a Siri watch face for the device. It is a predictive interface that displays the information most useful to a user at the moment they glance at their watch: upcoming calendar events, weather changes, reminders, timers, updated playlists, sunsets, fitness updates, and what's coming up the next day.
Google Search takes a similar approach and presents users with news articles they might be interested in. What's great is that you can tell it which topics or news sources you are not interested in, which improves the algorithm.
Keep in mind that you should not force the predictive interface on your user. Make it easily accessible, but keep it possible to perform a different action. Your user should always stay in control.
The capability of a machine to imitate intelligent human behaviour
The most significant way to improve predictive interfaces is with artificial intelligence. All the previous examples were based mainly on basic knowledge of the user's behaviour and device state; to make this scalable, your application should be able to take over and handle the predictions itself. It can become more advanced over time and keep improving the user's experience.
When talking about AI, the goal is not always to display a different interface; it can also be to reduce the flows and decisions a user has to go through to achieve their objective.
Spotify's Discover Weekly is a perfect example of how you can reduce a user's decisions. It automatically creates a playlist based on the user's listening behaviour, compares it to other users' playlists, and crafts a personal playlist for you.
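The idea of comparing one user's listening behaviour to other users' can be sketched with basic collaborative filtering: represent each user as a vector of play counts, find the most similar listener, and recommend tracks they play that you haven't heard. Discover Weekly's real system is far more sophisticated; the data and function names here are toy assumptions.

```python
from math import sqrt

# Toy play counts per user: track -> number of plays (illustrative data)
plays = {
    "x":    {"song_a": 10, "song_b": 5},
    "anna": {"song_a": 8,  "song_b": 4, "song_c": 7},
    "bob":  {"song_d": 9},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, k=1):
    """Suggest up to k unheard tracks from the most similar listener."""
    others = [(cosine(plays[user], plays[o]), o) for o in plays if o != user]
    _, best = max(others)
    heard = set(plays[user])
    return [t for t in plays[best] if t not in heard][:k]

print(recommend("x"))  # tracks user x hasn't heard, from the closest taste
```

Cosine similarity is used here because it compares the *shape* of two listening profiles rather than raw volume, so a light listener and a heavy listener with the same taste still match.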
One of the biggest companies that relies heavily on AI is Google. With Google Photos, you can easily see how powerful AI can be. After you upload the pictures you have taken, every image is analysed and tagged based on the people in the photos; even pets can be detected. So instead of manually labelling every single picture, you can just let Photos handle it. This reduces the time users spend organising and lets you do more with your photographs, like searching for friends' names.
An excellent example of artificial intelligence is Facebook's. While most AI improves the experience for the average user, Facebook also uses it to enhance the experience for blind users. It automatically captions all images uploaded to the News Feed, and those captions can then be read aloud to the user, giving them a more immersive experience: they suddenly know what is displayed in a photo, which was previously not possible.
While all of the above examples make a strong case for relying on AI to improve a user's experience, it isn't always perfect. Predictions can be wrong, and an incorrect predictive interface might cause a disconnect between the user and the application.
Computers aren’t perfect
Make sure your user can manually tweak the AI if they are just not interested in the provided content. Also, offering a hard reset might not be a bad idea in case the algorithm is completely wrong.
You could also think about a way for users to turn off personalised interfaces. Say your application has a feed generated with AI; let your user choose whether they want their feed curated by AI.
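Keeping the user in control can be as simple as routing the feed through one function that honours both a personalisation toggle and per-topic opt-outs. The item fields and ranking function below are made up for illustration.

```python
def build_feed(items, personalised, muted_topics, rank_fn):
    """Return a feed: AI-ranked when the user opts in,
    reverse-chronological otherwise. Muted topics are always dropped."""
    visible = [i for i in items if i["topic"] not in muted_topics]
    if personalised:
        return sorted(visible, key=rank_fn, reverse=True)
    return sorted(visible, key=lambda i: i["published"], reverse=True)

items = [
    {"id": 1, "topic": "sports", "published": 3, "score": 0.2},
    {"id": 2, "topic": "tech",   "published": 1, "score": 0.9},
    {"id": 3, "topic": "tech",   "published": 2, "score": 0.5},
]

# This user muted sports and switched personalisation off:
feed = build_feed(items, personalised=False, muted_topics={"sports"},
                  rank_fn=lambda i: i["score"])
print([i["id"] for i in feed])  # newest first, sports hidden -> [3, 2]
```

The important design choice is that the fallback is a predictable, neutral ordering, so opting out of AI never degrades into an empty or broken feed.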
But maybe most importantly, keep tracking your user's behaviour. That's the only way you can improve your AI and know for sure that providing a predictive interface is actually helping your users.
Alternatives to AI
AI is a great way to improve a product, but it might not always be feasible to implement it.
You can use integrations like calendar appointments, purchase history, travel history, fitness activity, social media likes or past email recipients.
However, you can also utilise the capabilities of the devices your users own. The smart devices you put in your pocket, wear around your wrist, or one day might have on your head are packed with different types of sensors. Many of those sensors are underused, available only to the OS, or simply forgotten by users and developers.
You can use device information like location services, weather conditions, beacons, headphone status, physical activity, time, and battery life.
Did you know that the latest iPhone has a proximity sensor, ambient light sensor, accelerometer, motion coprocessor, magnetometer, gyroscope, barometer, infrared camera, flood illuminator, dot projector, NFC chip and a pressure-sensitive display?
A lot of fancy words, but the only thing you should remember is that these devices are capable of a lot more than they are used for.
The sad thing is that on iOS you can only use most of these sensors while your app is in use, making it impossible to access them while the device is locked.
Android devices, however, can detect even more when your application is closed or running in the background, making it possible to create context-aware experiences.
Context-awareness is the ability of devices to sense their physical environment and adapt their behaviour accordingly.
With Google's Awareness API you can detect advanced changes in the user's behaviour, for instance when a user plugs in their headphones, starts running, moves to a different location, or starts driving.
Your app can react to these changes and respond based on the user's context. Say you have an exercise application that tracks runs and walks. Your user picks up their device, plugs in their headphones, and starts running. Your application can start activity tracking and play the user's favourite playlist. You can even go further and map out the perfect running track based on the user's location, the weather conditions, and previous running distance.
By detecting these behavioural changes, you can proactively predict the user’s intentions and present the right interface or automatically perform the expected interaction.
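The running-app reaction described above can be sketched as a simple rule over a context snapshot. A real Android implementation would subscribe to Awareness API fences; the snapshot keys and callback names below are illustrative assumptions, not a real platform API.

```python
def on_context_change(context, start_tracking, play_playlist):
    """React to a context snapshot with a simple rule: if the user has
    plugged in headphones and started running, kick off activity
    tracking and music. Returns the actions that were triggered."""
    actions = []
    if context.get("headphones") == "plugged" and context.get("activity") == "running":
        actions.append(start_tracking())
        actions.append(play_playlist("favourites"))
    return actions

# Simulated snapshot of the kind a context-awareness service could provide:
result = on_context_change(
    {"headphones": "plugged", "activity": "running", "location": "park"},
    start_tracking=lambda: "tracking_started",
    play_playlist=lambda name: f"playing:{name}",
)
print(result)
```

Because both conditions must hold, the app stays quiet when the user merely plugs in headphones on the couch, which keeps the proactive behaviour from feeling intrusive.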
Optimising your interface is the first way to improve your user's experience, but going a step further and proactively changing the interface will make the experience even better. Make use of data provided by the user and capture the user's behaviour. Use AI together with the data you've collected, and let users improve and correct it. If AI isn't an option for you, use other services and device information to predict what a user is about to do.
- Reduce the provided options in the UI
- Collect the right data about your user
- Capture the user’s behavioural patterns
- Use other services and device information
- Let users improve or correct the AI