Apple’s annual Worldwide Developers Conference (WWDC) recently took place, and the Silicon Valley tech giant showed yet again that it is pushing the boundaries of how its users interact with the world through Apple devices. Operating systems were a big topic, with the release of iOS 11 and the new macOS, “High Sierra”; even the Apple Watch’s OS received a nod with watchOS 4. There was also a great deal of focus on the progression of augmented reality and machine learning, and Apple is working to keep the future mobile app developer ecosystem healthy by expanding and fully backing its educational coding initiatives featuring Swift Playgrounds. Last but not least, there was a host of hardware updates and releases, including the Amazon Echo rival, the HomePod.
WWDC 2017 felt more like a show than a conference for iOS mobile app developers, with big unveils, and this year the operating systems took over the main stage. For obvious reasons, most iOS app developers’ eyes were centered on iOS 11 and watchOS 4. The new features include a file management app and a new dock, along with support for various accessories, devices and apps; the overarching theme is multitasking and making it easier to do. The Apple Pencil now has enhanced support and increased functionality, which should give developers greater freedom to integrate it into the apps they build. Productivity app developers should also take note of the document-scanning abilities that have been integrated into Notes.
One of the more notable innovations in iOS 11 is drag-and-drop. The popular and much sought-after function has finally been given the nod in the new mobile OS. iPad users can now run split screens on their devices, which once again reaffirms Apple’s push towards greater multitasking. iOS app developers should view this as an opportunity to increase use of and engagement with their apps. For example, the new function lets users drag and drop photos from their gallery into an email, but it can also be used to drag a photo or video from an email into a messaging or calendar app. Apple has made the feature dead easy to integrate through a dedicated drag-and-drop API.
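To give a feel for the API, here is a minimal sketch of iOS 11 drag-and-drop in UIKit, assuming a view controller with an `imageView` outlet that acts as both drag source and drop target (the class and outlet names are illustrative, and the code requires the iOS 11 SDK):

```swift
import UIKit

class PhotoViewController: UIViewController, UIDragInteractionDelegate, UIDropInteractionDelegate {
    @IBOutlet var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register the same view as both a drag source and a drop target.
        imageView.isUserInteractionEnabled = true
        imageView.addInteraction(UIDragInteraction(delegate: self))
        imageView.addInteraction(UIDropInteraction(delegate: self))
    }

    // Drag source: wrap the image in an NSItemProvider so other apps can receive it.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = imageView.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }

    // Drop target: propose copying whatever is dragged in.
    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // Load dropped images asynchronously and display the first one.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: UIImage.self) { images in
            self.imageView.image = images.first as? UIImage
        }
    }
}
```

Because the data travels as an `NSItemProvider`, the same pattern works whether the photo is dropped into an email, a message or a calendar entry.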
The watchOS 4 (Apple Watch’s operating system) release was noticeably heavy on consumer features, but developers still got their fix with a new platform, GymKit. The rise of smart devices is no longer relegated to the home or car; these days even gym equipment can pair with smart mobile devices. GymKit bridges the gap between the Apple Watch and gym equipment, letting users track workouts and record exercise output more accurately. iOS app developers can use this technology to develop smarter and more efficient coaching apps that run on the Apple Watch. In theory, this should help users exercise more and be more effective at reaching their goals.
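A coaching app of the kind described above starts from a HealthKit workout session on the watch; GymKit then layers equipment pairing on top of that data. The following is a hedged sketch of kicking off an indoor run on watchOS (the `WorkoutManager` class and method names are illustrative, and the code requires the watchOS SDK):

```swift
import HealthKit

class WorkoutManager: NSObject, HKWorkoutSessionDelegate {
    let healthStore = HKHealthStore()
    var session: HKWorkoutSession?

    func startIndoorRun() {
        // Describe the workout so the watch samples the right sensors.
        let configuration = HKWorkoutConfiguration()
        configuration.activityType = .running
        configuration.locationType = .indoor
        do {
            let session = try HKWorkoutSession(configuration: configuration)
            session.delegate = self
            healthStore.start(session)
            self.session = session
        } catch {
            print("Unable to start workout: \(error)")
        }
    }

    // React to running / paused / ended transitions here.
    func workoutSession(_ workoutSession: HKWorkoutSession,
                        didChangeTo toState: HKWorkoutSessionState,
                        from fromState: HKWorkoutSessionState,
                        date: Date) {
    }

    func workoutSession(_ workoutSession: HKWorkoutSession,
                        didFailWithError error: Error) {
        print("Workout failed: \(error)")
    }
}
```

While a session is active the app can query heart rate and energy samples from the health store, which is the raw material for the smarter coaching experiences the article envisions.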
Self-proclaimed Apple fanboy, entrepreneur and Digg co-founder Kevin Rose once stated that augmented reality (AR), and not virtual reality (VR), will be the first to be adopted by the masses. Apple seems to think so as well, which is why its sudden push into AR in the new iOS 11 caused so much commotion this WWDC. Apple showed it was serious about the AR game by releasing ARKit. The platform is purpose-built to give iOS app developers the tools they need to integrate augmented reality into their apps, and it is packed with APIs to help developers deliver AR experiences to their audiences. This would have been very difficult to pull off without the all-seeing Visual Inertial Odometry (VIO) system that underpins the platform: VIO actively tracks the user’s surroundings using a combination of data harvested from the camera sensor and CoreMotion. All of this translates to drastic increases in accuracy and in the ability to sense the world around the user without constant manual calibration.
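In practice, developers never touch VIO directly; they run a world-tracking session and receive anchors as surfaces are detected. A minimal ARKit sketch, assuming an `ARSCNView` named `sceneView` wired up in a storyboard (the view controller name is illustrative; requires the iOS 11 SDK):

```swift
import ARKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        // World tracking fuses camera frames with CoreMotion data (VIO)
        // to keep virtual content locked to the real world.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when tracking detects a new horizontal surface.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Found a plane of extent \(plane.extent)")
        // Attach SceneKit geometry to `node` to place virtual content here.
    }
}
```

The detected `ARPlaneAnchor` is where an app would place furniture, game boards or any other virtual object, with no calibration step asked of the user.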
Machine learning has been a hot topic at all of this year’s developer conferences, and it was no different at WWDC. Apple has been developing its own machine learning systems but has taken a different approach from the likes of Google, Amazon and Microsoft. Unlike its competitors, which prefer to run machine learning in the cloud, Apple has (as Apple is characteristically known to do) preferred to perform all the necessary computation locally, on the device. Taking this route not only lets Apple retain control over the experience, but also eases privacy fears for the company and its users.
Of particular note, Apple unveiled its Core ML framework, which promises to greatly increase the speed of AI-driven actions on any device running iOS 11. This doesn’t just mean faster voice search results, but also actions the user doesn’t dictate, such as face tracking, text detection, image registration and object tracking, to name but a few. So far, Google’s Pixel has been the gold standard for image recognition, especially since it is backed by the powerful and ever-evolving Google Lens. However, Apple now claims that thanks to Core ML, the iPhone not only competes with the Pixel but surpasses it, to the tune of six times the speed.
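To see how little code on-device inference takes, here is a hedged sketch of image classification with Core ML and the new Vision framework; `MobileNet` stands in for whatever compiled `.mlmodel` a developer adds to the project, and the function requires the iOS 11 SDK:

```swift
import CoreML
import Vision
import CoreGraphics

func classify(_ image: CGImage) {
    // Wrap the compiled Core ML model for use with Vision.
    // `MobileNet` is a placeholder for any model class Xcode generates.
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // The request runs entirely on the device; no network round trip.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(best.identifier) (confidence \(best.confidence))")
    }

    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])
}
```

Because the whole pipeline runs locally, the photo never leaves the phone, which is exactly the privacy argument Apple makes for its on-device approach.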
For iOS app developers, this is a virtual godsend and opens up possibilities that previously could only be dreamt of. If Apple’s claims about AI execution speeds hold true, developers can take this technology, integrate it into their apps and push the boundaries of what those apps can do. Machine learning is also a large part of Apple’s HomePod, which seeks to challenge the current AI-for-the-home incumbents, Google Home and Amazon Echo. This is Apple’s big push in terms of product, but little has been said about how iOS app developers can take advantage of the new device. The logical, and one would say inevitable, expectation is that somewhere down the line APIs will be made available so that mobile app development companies can integrate the HomePod into their apps’ functions.
Swift was released a few years ago and has slowly made headway into the mainstream. It is a programming language that Apple developed as an alternative, and likely eventually the de facto, language for macOS, iOS, watchOS and tvOS. Learning a new language can be a difficult task, but Apple believes that learning is essential and, in most cases, best started at an early age. Apple has doubled down on educating people about its products, and about coding for them. In fact, it has launched several educational initiatives for people of all ages, from seasoned professionals to youngsters looking to code for the first time in their lives. Take Swift Playgrounds, for example. Apple has gamified the coding process and made it child-friendly. Through a series of fun games, puzzles and stories, Apple is teaching children Swift at nearly the same rate that they learn their first language and the world around them.
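The flavour of those lessons is easy to recreate in plain Swift. The snippet below is an illustrative re-creation, not Apple’s actual lesson code: a “character” walks a row of tiles, and the learner composes tiny commands into a loop that collects every gem.

```swift
// A toy world: true means the tile at that index holds a gem.
let tiles = [false, true, false, true, true]
var position = 0
var gems = 0

// The kind of one-word commands Swift Playgrounds starts with.
func moveForward() { position += 1 }
func collectGem() {
    if position < tiles.count && tiles[position] { gems += 1 }
}

// The loop a first lesson builds up to: walk the row, grabbing gems.
while position < tiles.count - 1 {
    moveForward()
    collectGem()
}
print(gems)  // prints 3
```

The point of the exercise is that loops, conditionals and functions are learned as moves in a game before they are ever named as programming concepts.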
With educational products, services and initiatives like Swift Playgrounds (Playgrounds 2 is already in beta), Apple is ensuring a future not only for its product line and the apps that run on it, but also for the entire industry. And it is not alone in this: Google and other key players have also created educational initiatives that help people take their first steps into the developer and coding world. But Swift is different. It is being hailed as the programming language for all iOS app developers, one that will rekindle the programming world. If that is even remotely true, then Apple is doing exactly what it needs to fuel that fire of change.