Android 9 (Pie) is built with artificial intelligence at its core, with the aim of making your phone smarter and simpler, adapting to your needs by learning from you. The new features introduced in Android P are Adaptive battery, Adaptive brightness, App Actions, Slices, a simpler and more interactive UI, and Digital Wellbeing. These new features follow from the three pillars of Android 9: Intelligence, Simplicity and Digital Wellbeing. A new concept of on-device learning, or federated learning, has been implemented in Android 9. Some great new APIs and capabilities have been introduced for developers as well.
In this article, we will discuss each new feature and explain how the device will adapt to your needs. The key points in this article are:
- What is adaptive battery?
- How does Adaptive battery work?
- What is Adaptive brightness?
- Difference between Adaptive brightness and the Auto brightness feature
- App Actions
- How do App Actions work?
- Slices in Google Search
- Simplicity in Android P
- What does Android P offer for Digital Wellbeing?
Google partnered with DeepMind to develop a smart battery management system called Adaptive battery, which is designed to give a more consistent battery experience. Adaptive battery uses on-device machine learning to figure out which apps you will use in the next few hours and which apps you will not use until later in the day. With this understanding, the operating system adapts to your usage patterns so that it spends battery only on the apps and services you care about.
According to Google’s Android Team, the results are promising: they have seen a 30% reduction in CPU wake-ups for apps in general. Combined with other performance improvements, such as running background processes on the small CPU cores, this results in longer battery life for many users.
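To make the idea concrete, here is a toy Python sketch of how predicted near-term usage could map onto the App Standby Buckets that Android P uses to restrict background work. This is not Android's actual implementation, which uses an on-device ML model; the probabilities, app names and thresholds below are invented for illustration.

```python
def assign_bucket(p_use_soon: float) -> str:
    """Map a predicted probability of near-term use to a standby bucket.

    The bucket names match Android P's App Standby Buckets; the
    threshold values are made up for this sketch.
    """
    if p_use_soon >= 0.6:
        return "active"       # no restrictions on jobs, alarms, network
    elif p_use_soon >= 0.3:
        return "working_set"  # mild restrictions on background work
    elif p_use_soon >= 0.1:
        return "frequent"     # stronger restrictions
    return "rare"             # background work heavily deferred

# Hypothetical per-app predictions from the usage model:
predictions = {"messages": 0.9, "maps": 0.35, "camera": 0.15, "old_game": 0.02}
buckets = {app: assign_bucket(p) for app, p in predictions.items()}
print(buckets)  # → messages active, maps working_set, camera frequent, old_game rare
```

The point of the bucketing is that battery is spent in proportion to how likely you are to need an app soon, rather than equally across all installed apps.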
Adaptive battery uses a deep convolutional neural network to predict the next app you are going to use. In a convolutional neural network, the layers of neurons are arranged in three dimensions: width, height and depth. The neurons inside a layer are connected to only a small region of the layer before it, called a receptive field. Following the concept of receptive fields, a convolutional neural network exploits spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learnt filters produce the strongest response to a spatially local input pattern.
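To illustrate the receptive-field idea, the short Python sketch below computes how far back into the input each neuron "sees" as convolutional layers are stacked. The layer configuration is hypothetical; the point is that each neuron is connected only to a local window of the input, and that window grows with depth.

```python
def receptive_field(layers):
    """Compute the receptive field size (in input pixels) after each layer.

    layers: list of (kernel_size, stride) pairs, one per conv layer.
    """
    r, jump = 1, 1   # initially each "neuron" sees exactly one input pixel
    sizes = []
    for k, s in layers:
        r = r + (k - 1) * jump  # window widens by (k-1) input-space steps
        jump *= s               # spacing between adjacent outputs in input space
        sizes.append(r)
    return sizes

# Three 3-wide conv layers with strides 1, 2, 2 (an invented example):
print(receptive_field([(3, 1), (3, 2), (3, 2)]))  # → [3, 5, 9]
```

Even after three layers, each neuron responds only to a 9-pixel window of the input, which is what makes the learnt filters respond most strongly to spatially local patterns.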
DeepMind has also partnered with Google’s Android Team to incorporate machine learning into a feature called Adaptive brightness. Adaptive brightness learns how you like to set the brightness given the ambient lighting, and then does it for you in a power-efficient way. You will literally see the brightness slider move as the phone adapts to your preferences.
According to the Android Team, Adaptive brightness is extremely effective: half of their test users now make fewer manual brightness adjustments than on any previous version of Android.
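As a rough illustration of "learning your preference given the ambient lighting", here is a toy Python sketch that keeps a running average of the user's manual brightness settings per ambient-light bucket. Android's real Adaptive brightness uses a trained on-device model; the bucketing scheme and numbers here are invented.

```python
import math

class BrightnessLearner:
    """Toy learner: average the user's manual settings per light level."""

    def __init__(self):
        self.prefs = {}  # lux bucket -> (sum of settings, count)

    def _bucket(self, lux: float) -> int:
        # Log-scale buckets, since perceived brightness is roughly logarithmic.
        return int(math.log10(max(lux, 1)))

    def observe(self, lux: float, brightness: float):
        """Record a manual adjustment (brightness in [0, 1]) at this light level."""
        b = self._bucket(lux)
        s, c = self.prefs.get(b, (0.0, 0))
        self.prefs[b] = (s + brightness, c + 1)

    def suggest(self, lux: float, default: float = 0.5) -> float:
        """Suggest a brightness for this light level, or a default if unseen."""
        s, c = self.prefs.get(self._bucket(lux), (0.0, 0))
        return s / c if c else default

learner = BrightnessLearner()
learner.observe(lux=5000, brightness=0.9)  # bright day: user likes it high
learner.observe(lux=5000, brightness=0.7)
learner.observe(lux=10, brightness=0.2)    # dim room: user likes it low
print(learner.suggest(5000))  # → 0.8
```

Unlike Auto brightness, there is no fixed curve: the mapping from ambient light to screen brightness is whatever the user has taught the device.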
The new AI-based Adaptive brightness feature is different from the Auto brightness feature that already exists on your phone. The following are the differences between Adaptive brightness and Auto brightness:
- With Auto brightness, the brightness rules are the same for all devices. With Adaptive brightness, the brightness adapts to your personal preferences, so there are no pre-defined rules.
- Adaptive brightness is AI-based and uses machine learning to learn your preferences, whereas Auto brightness is not AI-based: it is a hard-coded program that adjusts brightness automatically.
- Adaptive brightness is more effective than Auto brightness and requires less manual adjustment.
Last year, Android introduced the concept of Predicted Apps, a feature that places the app the OS anticipates you will open next along the path you would normally follow to launch it. It has been very effective, with an almost 60% prediction rate.
With Android P, Android goes beyond predicting the next app you would like to launch to predicting the next action you would want to take. This feature is called App Actions.
With App Actions, actions are predicted based on your usage patterns. The phone adapts to you and tries to help you get to your next task more quickly. At the top of the launcher, you will see the two actions you are most likely to take at the moment, whether that is making a phone call to your sister or going for a workout.
These actions are dynamic, just like your needs in the moment. For example, if you connect your headphones, Android will suggest resuming the album or song you were last listening to.
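One simple way to picture context-driven action prediction is counting which action you actually took in each context and suggesting the most frequent ones. The Python sketch below does exactly that; the contexts and action names are invented, and Android's real predictor is considerably more sophisticated.

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Toy sketch: suggest actions based on what was done in each context."""

    def __init__(self):
        self.history = defaultdict(Counter)  # context -> action counts

    def record(self, context: str, action: str):
        """Note that the user took `action` while in `context`."""
        self.history[context][action] += 1

    def suggest(self, context: str, n: int = 2):
        """Return the n actions most often taken in this context."""
        return [action for action, _ in self.history[context].most_common(n)]

p = ActionPredictor()
for _ in range(5):
    p.record("headphones_connected", "resume_album")
p.record("headphones_connected", "open_podcasts")
p.record("weekday_morning", "start_workout")

print(p.suggest("headphones_connected"))  # → ['resume_album', 'open_podcasts']
```

The suggestions change as the context changes, which is why the two actions at the top of the launcher shift throughout the day.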
App Actions work with Google Search as well. For example, if you search for, say, Infinity War, then in addition to the regular suggestions, Android will suggest booking tickets for the movie and watching its trailer. App Actions will suggest booking the ticket with the ticket-booking app you use the most: if you are a big Fandango user, ticket booking will happen through Fandango.
Actions surface not just in the launcher, but also in Smart Text Selection, the Play Store, Google Search and the Google Assistant.
Actions are a simple but powerful idea for providing deep links into an app given your context. But even more powerful is bringing part of the app’s UI to the user right there in the action. This feature is called Slices. Slices are new APIs that let developers define interactive snippets of their app UIs, which can be surfaced in different places across the OS. In Android P, the Android Team has laid the groundwork by surfacing Slices first in Search.
Suppose you need to get a ride to work. If you type "Lyft" in the Google Search app, you will see a slice from the Lyft app installed on your phone. Lyft uses the Slice API to render a slice of its app in the context of search: it can show you the price of your trip to work, and because the slice is interactive, you can order the ride directly from it.
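The core mechanic, stripped of Android specifics, is that an app answers requests for a URI with a small structured piece of UI whose actions a host surface can trigger. The plain-Python sketch below models that shape; the real API is androidx's SliceProvider in Java/Kotlin, and every name, URI and price here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    """A toy stand-in for a slice: a structured, interactive UI snippet."""
    title: str
    subtitle: str
    actions: dict = field(default_factory=dict)  # label -> callback

class RideAppSliceProvider:
    """Stands in for an app that answers slice requests for its URIs."""

    def on_bind_slice(self, uri: str) -> Slice:
        if uri == "content://rideapp/home_to_work":
            return Slice(
                title="Ride to work",
                subtitle="$18.56 estimated",  # invented price, live in a real app
                actions={"Request ride": lambda: "ride_requested"},
            )
        raise KeyError(uri)

# A host surface (e.g. search) asks the provider for the slice and can
# trigger its action directly, without launching the full app:
provider = RideAppSliceProvider()
slice_ = provider.on_bind_slice("content://rideapp/home_to_work")
result = slice_.actions["Request ride"]()
```

The key design point is that the app, not the host, owns the slice's content and behavior; the host only decides where and when to render it.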
One of the key goals of the Android Team over the last few years has been to evolve the Android UI to be simpler and more approachable, both for the current set of users and for the next billion Android users. With Android P, the emphasis is on simplicity, addressing many parts of the system UI that could be made simpler and more interactive. You will find these improvements on any device that adopts Google's version of the Android UI, such as Google Pixel and Android One devices.
Let's now check out some improvements in the system UI:
The first thing you will notice is a single clean home button.
This design places an emphasis on gestures over multiple buttons at the edge of the screen. This is especially helpful as phones grow taller and it's more difficult to get things done on your phone with one hand.
When you swipe up, you will see full-screen previews of your recently used apps, which you can tap to resume. You also get five predicted apps at the bottom of the screen to save you time. When you swipe up a second time, you get All Apps. Architecturally, the All Apps and Overview spaces have been combined into one.
The nice thing about the larger, horizontal Overview is that the app content is now glanceable, so you can easily refer back to information in a previous app. Even better, Smart Text Selection (which recognizes the meaning of the text you’re selecting and suggests relevant actions) works in the Overview too.
Changing how navigation works is a pretty big deal, but sometimes small changes can make a big difference too. Take volume control, for example. We have all been there: you try to turn down the volume before a video starts, but instead you turn down the ringer volume, and then the video blasts everyone around you. So how does Android P fix this volume issue?
The new simplified volume controls are vertical and located next to the hardware buttons. The key difference is that the slider now adjusts the media volume by default, because that is the volume you want to change most often. For the ringer volume, all you really care about is on or off.
Another thing that is greatly simplified in Android P is rotation. You must have sometimes hated your device for rotating at the wrong time. This issue is addressed in Android P. Now when you rotate your device, a new rotation icon appears on the nav bar and then you can choose whether or not to rotate your view.
While the smartphone today is a source of knowledge and fun, it has become increasingly important for us to find the right balance in our lives. We need to be aware of how much time we are spending on our devices and how much we are spending on other things. Many people wish they could disconnect from their phones more easily and find time for other things. According to Google, over 70 percent of people working at Google want help with this. So they have been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for.
Android P has come up with some interesting features for Digital Wellbeing:
- a new Dashboard that helps you understand how you’re spending time on your device;
- App Timer, which lets you set time limits on apps and grays out the icon on your home screen when the time is up;
- a new Do Not Disturb, which silences all the visual interruptions that pop up on your screen;
- Wind Down, which switches on Night Light and Do Not Disturb and fades the screen to grayscale before bedtime.
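The Dashboard and App Timer boil down to totalling per-app usage and comparing it against user-set limits. Here is a minimal Python sketch of that bookkeeping, with all app names, session lengths and limits invented for illustration; Android exposes the real data through its usage-stats facilities.

```python
# Hypothetical session lengths (minutes) recorded for each app today:
app_sessions_minutes = {
    "video": [25, 40, 15],
    "messages": [5, 3, 7, 2],
    "maps": [12],
}

# User-set daily limits (minutes); apps without an entry are unlimited:
timers = {"video": 60}

# Dashboard view: total time per app for the day.
dashboard = {app: sum(s) for app, s in app_sessions_minutes.items()}

# App Timer: flag apps whose total exceeded their limit (icon gets grayed out).
over_limit = [app for app, total in dashboard.items()
              if total > timers.get(app, float("inf"))]

print(dashboard)   # → {'video': 80, 'messages': 17, 'maps': 12}
print(over_limit)  # → ['video']
```

Seeing the totals is the Dashboard half of the feature; acting on a breached limit, by graying out the icon, is the App Timer half.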
These are some of the new features of Android 9 that could make our experience more personal and help us get things done faster. Android 9 is built to learn from you and work better for you as you use it more. From predicting your next task so that you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone. We wish you a great experience with Android 9.