Artificial intelligence has moved from the imagination of writers like Philip K. Dick and Arthur C. Clarke into every corner of technology. The future of smartphones revolves around terms like machine learning, artificial intelligence, and augmented reality. We're starting to see this happen already, as most smartphone manufacturers now stress that their devices have AI baked in.
But is the hype justified, or are we hearing about AI now because the hardware seems to have reached a plateau? What's clear is that the next revolution lies in software, in bringing actual intelligence to "smart" phones, and that's why AI has to be implemented at all stages of the smartphone experience. But what does this mean, exactly, and how does it affect us?
The scope of artificial intelligence has expanded and evolved over time, making it very hard to frame a concrete definition. In the broadest terms, however, AI is any instance where a machine is able to reason and make decisions that have not been explicitly programmed.
Artificial intelligence is being presented as a life-changing feature, and in many ways it is. A human brain can never perform calculations as fast as a modern processor, but it has the ability to decode the world around it and distinguish between objects, animals, shapes, and sizes. This is what AI is set to bring to the table - the ability for smartphones (and other devices) to understand their surroundings and make their own decisions.
There are two main facets to AI - machine learning and deep learning. In a nutshell, machine learning gives a phone or a computer the ability to learn and improve from data, without being explicitly programmed to do so. Deep learning is a more advanced and nuanced subset of machine learning that aims to emulate the workings of the human brain. Very simply put, it uses neural networks with many layers and hierarchies to learn optimal parameters from data.
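To make "learning from data without being explicitly programmed" concrete, here is a deliberately tiny sketch: rather than hard-coding the rule y = 2x, a single parameter is learned from example pairs by gradient descent. Everything here (the data, the function names) is illustrative and has nothing to do with any real phone's AI stack.

```python
# A toy illustration of "learning from data": instead of hard-coding the
# rule y = 2x, we let a single parameter w be learned from examples.

def train(examples, steps=1000, lr=0.01):
    """Fit y ~ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y          # how far off the current guess is
            w -= lr * 2 * error * x    # nudge w to reduce the error
    return w

# The "training data": pairs following the (unstated) rule y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # the learned parameter approaches 2.0
```

Deep learning works on the same principle, but with millions of such parameters arranged in stacked layers instead of a single one.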
AI and machine learning are being used in many aspects of our smartphone experience - from mapping services like Google Maps and Apple Maps, to virtual assistants like Siri, Google Assistant, and Cortana. It has now even made its way inside the underpinnings of smartphones, with processors like Huawei's Kirin 970 and Apple's A11 Bionic containing a dedicated chip to handle AI computations locally. Using both machine learning and deep learning, smartphones can now do things like distinguish between a cat and a dog whilst taking a photograph, optimise software settings automatically, increase security and battery life, and speed up day to day tasks.
On the software side of things, virtual assistants powered by smart, conversational AI have been around for a while. Assistants like Google Assistant, Amazon's Alexa, and Apple's Siri use AI to understand and interpret our voice commands. These assistants can be used to search for something on the Web, dial a number, or book a cab. Furthermore, they can work with connected devices to adjust the lighting in a room with a voice command. Most virtual assistants are also adept at understanding context and questions posed in natural language. Using machine learning, these assistants also understand our usage patterns and become smarter over time.
Machine learning is also being seen in services like Gmail, Google Search, Maps applications from both Google and Apple, and even in the targeted ads we see on websites. The suggestions you see in the search box of Google Maps and Google Search, which are based on past searches, location and popular trends, are an example of machine learning at work. Both Google Photos and the stock Photos app on iOS use artificial intelligence to sort photos and help users find pictures of the same person, photos with dogs, vacation pictures, and so on. Priority inbox in Gmail, which automatically sifts through your mail and recognises the important ones, is another example of machine learning at work.
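A feature like priority inbox can be sketched in miniature: count how often words appear in mail the user marked important versus routine, then score new messages against those learned counts. This is a hypothetical simplification for illustration only, not Google's actual algorithm; the sample messages and word lists are made up.

```python
# Toy importance scorer: learn word counts from labelled examples,
# then score new mail. Purely illustrative, not Gmail's real method.
from collections import Counter

important = ["meeting tomorrow project deadline",
             "invoice payment due project"]
routine   = ["sale discount offer newsletter",
             "newsletter weekly digest offer"]

imp_counts = Counter(w for m in important for w in m.split())
rtn_counts = Counter(w for m in routine for w in m.split())

def importance_score(message):
    """Positive score -> looks important, negative -> looks routine."""
    score = 0
    for word in message.split():
        score += imp_counts[word] - rtn_counts[word]
    return score

print(importance_score("project deadline reminder"))  # positive
print(importance_score("weekly newsletter offer"))    # negative
```

The real systems use far richer signals (sender, reply history, open rates) and neural models, but the underlying idea - scoring new inputs against patterns learned from past behaviour - is the same.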
Google also uses artificial intelligence on its Pixel 2 and Pixel 2 XL smartphones to enhance the photography experience. The bokeh mode on Pixel smartphones is powered by software algorithms that recognise your face and decide what should be in focus and what should fade into the background. This semantic image segmentation model, called DeepLab-v3+, helps the Pixel phones produce the depth-of-field effect with just a single rear camera. In a similar vein, manufacturers like Xiaomi, Oppo, and Vivo are using software-based algorithms in their smartphones to enable bokeh mode with a single front camera.
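The role the segmentation mask plays can be shown with a toy sketch: given a per-pixel mask (1 = subject, 0 = background), subject pixels are kept sharp while background pixels are averaged with their neighbours. Models like DeepLab-v3+ produce such masks from real photos; the tiny greyscale "image" and mask below are hand-made stand-ins.

```python
# Minimal bokeh sketch: blur only the pixels the mask marks as background.
# The 3x3 image and mask are toy data, not output from a real model.

def fake_bokeh(image, mask):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:  # background pixel: average its neighbours
                neigh = [image[ny][nx]
                         for ny in range(max(0, y - 1), min(h, y + 2))
                         for nx in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = sum(neigh) // len(neigh)
    return out

image = [[10, 10, 90],
         [10, 90, 90],
         [10, 10, 90]]
mask  = [[0, 0, 1],
         [0, 1, 1],
         [0, 0, 1]]
result = fake_bokeh(image, mask)
# Subject pixels (mask == 1) are unchanged; background pixels are smoothed.
```

A dual-camera phone gets the subject/background split from depth; a single-camera phone has to infer the mask from the image content, which is where the neural network comes in.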
At Google I/O last week, Google introduced a host of new features centred around artificial intelligence - a smart compose feature for Gmail, which uses machine learning to offer phrase and word suggestions as you compose a new email, more effective closed captioning on YouTube videos and TV shows, and natural human speech capabilities for Google Assistant. Google also showcased a new technology called Duplex that allows Google Assistant to respond naturally to phone conversations by using complex sentences, fast speech, and long remarks.
What's more interesting today - and is the latest buzzword in artificial intelligence - is on-device AI. Huawei's latest high-end chipset, the Kirin 970, has a Neural Processing Unit (NPU) that enables artificial intelligence computations to occur locally instead of in the cloud. In a similar vein, Apple's iPhone 8, iPhone 8 Plus, and iPhone X are powered by the A11 Bionic chip, which has a Neural Engine that's used for Face ID and other machine learning-enabled tasks. Qualcomm and ARM are in the process of releasing AI-optimised hardware for the rest of the industry.
What exactly does on-device AI bring to the table? Primarily, better privacy and performance, as all the computations happen on the phone itself and none of your data is uploaded to the cloud. Gartner predicts that almost 80 per cent of smartphones shipped in 2020 will have on-device AI capabilities. The firm believes that on-device AI will bring better power management and data protection compared to cloud-based solutions.
According to Gartner, on-device AI will make face recognition implementations more secure, help virtual assistants like Siri process data faster and become more proficient at understanding natural language, increase battery life and device performance, and enhance the spread of augmented reality. The firm also believes devices will be able to use personal data for individualised assistance and content censorship.
The iPhone X uses its Neural Engine, which is a part of the A11 Bionic chip, for a host of AI features, such as Animoji, which mimics a person's facial expressions, and face detection. The TrueDepth facial recognition system on the iPhone X creates a 3D map of a person's face, which is processed by the Neural Engine and stored securely on the device. Apple claims the Neural Engine can handle 600 billion operations per second.
Meanwhile, Huawei claims that its NPU is capable of performing image recognition on more than 2,000 pictures per second. As a result, smartphones like the Honor View 10 and the recently launched Huawei P20 Pro, which include an NPU, are able to intelligently detect the object or scene being photographed and optimise image settings accordingly. They can detect 13 different types of scenes and objects - including dogs, cats, plants, people, and printed text. These phones come with a host of other AI-backed features such as an AI-accelerated translator, facial recognition, and intelligent battery management. EMUI 8.0, which is based on Android 8.0 Oreo, also uses machine learning to intelligently analyse user behaviour and allocate resources accordingly.
Commenting on its foray into AI and the Kirin 970 chipset in particular, Huawei told Gadgets 360, "A standalone AI unit allows the speed of all AI-related processing become much faster than processing over the traditional CPU and GPU with even less energy consumption to complete more tasks. It significantly improves the AI-related operational efficiency of a chipset."
Huawei claims its third generation of artificial intelligence smartphones, which will be spearheaded by the Honor 10, will have even more advanced AI features. "Leveraging the success of its predecessors, the AI application of Honor 10 is much more diversified. [The] Honor 10 will zero in the AI technology on photography and include functions like AI scene recognition, semantic image segmentation, high optical zooms, ultra-fast shutter speed, portrait mode and portrait Bokeh effect. As the photography capability further improves, Honor 10 users should enjoy a more AI and professional photography experience on device," the company said.
Smartphone manufacturers have started throwing around the term "artificial intelligence" haphazardly. There is no hard and fast definition of artificial intelligence, and companies are taking advantage of that fact to mislead customers. It is very easy to reduce AI to nothing but a marketing buzzword.
Companies are taking features that have been found in smartphones for years and adding the prefix "AI-powered" to make them sound new and shiny. Many manufacturers are also pushing AI-based selfie cameras with 'intelligent' beautification features. While some of these do use some form of machine learning to distinguish facial characteristics and skin tones, most are just throwing around buzzwords.
Many manufacturers are cashing in on the artificial intelligence craze by pushing the label a bit too liberally. That said, AI is definitely more than just a buzzword - the fact that it has the potential to revolutionise smartphones cannot be denied. The future of smartphones revolves around on-device AI and machine learning. It is up to smartphone manufacturers to unlock the potential on offer and push boundaries.