The long-awaited Google I/O finally went down recently near the company’s headquarters, the aptly named Googleplex, in Mountain View, California. The annual event is first and foremost a developer conference, but it also had a little bit of something for everyone else. The usual suspects were there: Android Wear, Google Assistant, Android TV and Google Home. There was plenty of SDK talk as well, but all eyes and ears were on the new, upcoming Android OS. The somewhat coyly named Android O (the O will most likely stand for Oreo) is currently in beta and is set for release later this year.
With Android O, Google wants to provide the user with more fluid experiences: the ability to jump from one mobile app or interface to another seamlessly. It promises to take multitasking to a whole new level through a more intuitive setup. Picture-in-picture (PiP) mode will now be readily available on all compatible devices, allowing users to view two different screens or mobile apps simultaneously and jump from one to the other instantly. The new OS also steps up Android’s notification capabilities: users can now long-press a notification to view a more detailed description of the alert. All of this is great news for mobile app developers!
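For developers, opting into PiP takes very little code. Below is a minimal Kotlin sketch, assuming a video-playing Activity that declares `android:supportsPictureInPicture="true"` in its manifest entry (the class name and trigger point are illustrative, not prescribed):

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.os.Build
import android.util.Rational

class PlayerActivity : Activity() {

    // Called when the user navigates away, e.g. by pressing Home
    // while a video is playing: a natural moment to shrink into PiP.
    override fun onUserLeaveHint() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val params = PictureInPictureParams.Builder()
                .setAspectRatio(Rational(16, 9)) // keep the video's shape
                .build()
            enterPictureInPictureMode(params)
        }
    }
}
```

The version check matters because `PictureInPictureParams` only exists on API 26 (Android O) and later.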
The new OS, much like its predecessors, seeks to build upon what came before and make things more efficient. Take autofill, for example. The feature has been around for over a decade, but Android O makes it more comprehensive than ever and usable across more mobile apps and pages. The highly responsive suggestions are powered by the practically omnipresent on-device machine learning capabilities. Phrases, names, addresses, phone numbers and other pieces of often-used information will be recommended automatically. For the user, the key will be to use the device as much as possible to grow the “library” of vocabulary and behaviors recognized by the OS. Receiving a recommendation from the OS is simple: a double tap brings up a list of suggested entries.
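On the app side, participating in Android O’s new autofill framework can be as simple as telling the system what each input field expects. A minimal Kotlin sketch; the layout resource and view IDs here are hypothetical placeholders:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.View
import android.widget.EditText

class LoginActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.login) // hypothetical layout resource

        // Autofill hints tell the framework what each field expects;
        // the active autofill service then supplies matching values.
        findViewById<EditText>(R.id.username)
            .setAutofillHints(View.AUTOFILL_HINT_USERNAME)
        findViewById<EditText>(R.id.password)
            .setAutofillHints(View.AUTOFILL_HINT_PASSWORD)
    }
}
```

The same hints can be declared in XML via `android:autofillHints`, which keeps the Kotlin code untouched.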
On the mobile app development front, Android O will be the first release to arrive with full, first-class support for the Kotlin language. Mobile app developers around the world have been asking for the robust language to be included in the Android environment, and Google has answered their call. Now, mobile app development companies frustrated by the limitations of Java (which has in a way become the de facto Android language) have a viable alternative.
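For developers coming from Java, here is a short, self-contained taste of what makes Kotlin attractive: null safety, data classes and concise lambdas, each of which replaces a chunk of Java boilerplate.

```kotlin
// A data class generates equals(), hashCode() and toString() for free.
// The nullable type String? forces callers to handle the missing case.
data class User(val name: String, val email: String?)

fun main() {
    val users = listOf(
        User("Ada", "ada@example.com"),
        User("Grace", null)
    )

    // The Elvis operator (?:) replaces Java's explicit null checks.
    users.forEach { user ->
        val email = user.email ?: "no email on file"
        println("${user.name}: $email")
    }
}
```

Running this prints each user’s name with either the stored email or the fallback text, with no `NullPointerException` possible at the call site.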
Then there is Android Go, Google’s solution for users who believe that less is more. Android Go is a platform that optimizes the latest Android OS for low-spec devices. These low-cost devices have significantly less speed, power and storage, but thanks to the optimizations enacted by Android Go, the average user should notice only a minor difference, since the OS will require fewer resources to run.
As previously noted, there was Android Wear news, but not too much of it, mainly because the Android Wear 2.0 OS was already released in February. No major updates were shared or are expected. However, it was noted that users should expect new apps to land soon, and that new watches will also arrive later this year. This is only to be expected as the new OS sees wider use and as more mobile app developers are able to play around with and explore its various features.
Android, machine learning and Artificial Intelligence
TensorFlow was also a big point of discussion. A special version of the open source library for machine intelligence was announced: TensorFlow Lite, created to further boost AI-enabled capabilities by giving mobile app developers even more resources for working with neural nets. The goal of TensorFlow Lite is to let machine learning models run quickly and efficiently on smartphones.
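Usage follows a familiar interpreter pattern: load a converted model file, then feed it inputs and read back outputs. A hedged Kotlin sketch of TensorFlow Lite’s `Interpreter` API; the model file and the 10-class output shape are assumptions for illustration only:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Run a converted .tflite model on one input and return its scores.
// Assumes a model taking a single float input tensor and producing
// a single 10-class score vector; real shapes depend on your model.
fun classify(modelFile: File, input: Array<FloatArray>): FloatArray {
    val output = Array(1) { FloatArray(10) }
    val interpreter = Interpreter(modelFile)
    try {
        interpreter.run(input, output) // one input, one output tensor
    } finally {
        interpreter.close() // free the native resources promptly
    }
    return output[0]
}
```

Because inference happens entirely on-device, there is no network round trip, which is where the promised speed gains come from.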
The hype behind Google Assistant has been building steadily since it was announced last year, and things aren’t slowing down for what is quickly becoming the dominant intelligent personal assistant on the market. Google’s top brass bragged that over 100 million devices can now interact with Google Assistant. That revelation is only topped by the next: Android has surpassed 2 billion monthly active devices. Thanks to all the machine learning capabilities being pumped into Android, Assistant is becoming a whole lot smarter, a whole lot quicker. The inclusion of Google Lens into the mix speeds up the process, but more on Lens later. Even interactions with Google Assistant will be a new experience. Assistant will have the capacity to hold a conversation with the user about what is currently showing on the screen. In turn, users will also be able to issue commands or ask questions simply by typing them in.
Mobile app developers will be keeping an eye out for the soon-to-be-released Google Assistant SDK. Using it, manufacturers, vendors, programmers, coders and mobile app development companies will be able to bake Google Assistant into mobile apps and devices, allowing users to interact with them via their smartphones. From cars to toys and thermostats to appliances, the possible applications are limited only by the mobile app developer’s imagination. To further increase reach and adoption, new languages are being introduced to Google Assistant, allowing French and German users to interact with Assistant as efficiently as English, Japanese, Spanish and Portuguese users. Seven languages are set to be released this year alone, and it is safe to assume that more are on their way.
But perhaps the biggest news surrounding Google Assistant does not pertain to Android devices. For the first time ever, Google Assistant will be making its way to the dark side: the iPhone. This puts it in direct competition with Siri, on the same device, at the same time. From the outset, it looks like Siri has the upper hand: it is the native assistant, hard-coded into the deepest recesses of iOS. However, remember that Google Assistant, while just as powerful out of the gate, is ever present and ever learning. Some expect it to surpass the capabilities of Siri on the iPhone.
Google Lens started out as a nifty quasi-translator, favored by travelers and tourists who find it very useful for deciphering foreign languages while overseas. In very Google-like fashion, the company is taking the computer-vision photo technology to new frontiers by integrating it with machine learning and automation. Google Lens can seek out the text in a photo and use the information it contains to take actions right on your phone, automatically. Take, for example, a picture of a Wi-Fi network name and password. After the picture is taken, Google Lens will recognize the network, enter the login credentials and connect you to it.
Google Home and Android TV
Google Home was announced a while back and competes directly with Amazon’s Echo, Apple Home and other such services. Several updates were mentioned at Google I/O, including its arrival in five more countries this year (Canada, Germany, France, Japan and Australia). Furthermore, it will now offer proactive assistance, which is to say it will recommend actions, or act, without the user having to prompt it. This can include anything from ordering certain products after a certain amount of time, to decreasing the volume of a music player at certain periods of the day, to setting an alarm for an upcoming event or appointment.
Hands-free calling will now be possible and will include free calls to anywhere in the USA and Canada. Google Home will also gain multi-user support, taking specific actions depending on who is making the request. For example, when a request to “call mom” is issued, Google Home will recognize the speaker’s voice and dial the correct number for that user. Compared to previous iterations of Google Home, it looks like the technology is growing by leaps and bounds, and technically it is. In reality, however, Google is merely catching up to the high standards already set by Amazon’s Echo. It is only fair to mention that Echo has been in the mainstream longer than Google Home and is therefore “smarter”, having learned more from users through greater exposure to different behaviors and patterns. Finally, Bluetooth integration turns Google Home into a Bluetooth speaker, not only for Android devices but for anything with Bluetooth capability.
Android TV received a much-needed face-lift and now offers a more intuitive interface. However, there is still much to be desired, and a lot more work ahead if Google is to convince the masses to run out and get a set-top box or TV that supports it. While the improvements are welcome, they should be seen as just another step towards making a top-notch product. Take the redesigned interface, for example. Yes, it looks better than the last one. Yes, it is easy to see and choose something to watch from the first few rows on the home screen, but anything beyond that requires a lot of tedious scrolling and scanning. The bottom line is that Android TV is on a good trajectory; it just isn’t there yet.
So, for all mobile app developers on AppFutura, keep all this information in mind, because the next generation of apps will have new features that users will surely appreciate. Android and Google are trying to make it simpler and easier for users to have a smartphone that just works, so they can access all their apps in a second and count on every one of them working properly. Mobile app development companies will have to keep up with Android’s rhythm and integrate all of this into their apps.