The new version of Google Assistant that Google announced at its Made By Google event yesterday will be exclusive to the Pixel 4 and Pixel 4 XL, at least for now. It has a completely new look, and is touted to be up to 10 times faster than before, thanks to a new offline language recognition and processing model that can run on the device itself, without sending requests to Google's data centres and waiting for responses. Google has not yet disclosed when the new Assistant will come to previous-gen Pixel phones and third-party devices.
It will also be exclusive to the US, and will only work in English as of now, although XDA Developers reports that Canada, the UK, and Singapore could be next, based on an examination of the Google app's code. This would imply that the new features will roll out for English-speaking countries before others, though that is not yet confirmed.
Using the newly revamped Assistant, users can quickly open apps, search for information stored on their phones, and multitask between apps. Contextual awareness means that users can carry on a conversation, stringing multiple queries together one after another, and even building on the Assistant's responses.
First announced at Google I/O 2019 in May this year, the new Assistant can process and respond to many day-to-day requests without needing an Internet connection, and will even work when the host device is in Airplane mode. Users will be able to dictate and send messages, search for and share photos, manage calendar appointments, and much more. Requests for external information, such as flight timings, will still require an Internet connection.
According to Google, the company has managed to compress hundreds of gigabytes' worth of voice processing information into a model that requires less than half a gigabyte. Local processing reduces latency and helps keep information private.
Speaking of privacy, Google promises that users can simply delete their Assistant history by telling it to do so. Users can even specify a time period to wipe out, such as the past day or week. Data stored on-device is protected by Google's Titan M security and encryption chip.
The Google Assistant UI on the Pixel 4 is also new: an animated multi-coloured band appears across the bottom of the screen when the Assistant is called up, either by voice or by squeezing the edges of the device. The results of some voice requests can be seen on a floating panel that slides up from the bottom, if required.
Another application of the on-device voice processing capability is the transcription feature of the voice recorder app. The Pixel 4 and Pixel 4 XL can transcribe recordings into text in real time, and the resulting text can later be searched through or copied to any other app. If you search for words that appear within voice recordings, you can see exactly where on the waveform they occur, allowing you to jump directly to the parts of a conversation you need. This can be used for meetings, lectures, interviews, and reminders.
Google also showed off the computational capabilities of the Pixel 4 and Pixel 4 XL's new cameras. The HDR+ and Super Res Zoom features both combine the devices' hardware and software capabilities to improve image quality. By combining multiple exposures, more detail can be retained in the highlight as well as shadow areas of a frame. The Pixel 4 devices will now show two separate sliders for brightness (exposure) and shadows (tone mapping), giving users a much greater degree of control over the aesthetic qualities of a shot. The new phones will also show an AI-generated approximation of the final HDR+ image in the viewfinder itself, so that users know whether exposure adjustments are needed.
The Super Res Zoom feature can now be used through the main camera as well as the telephoto camera, allowing for a hybrid zoom that is superior to just cropping an image after it has been shot. White balance also now leverages machine learning in all camera modes, not just in Night Sight, resulting in truer colours in many situations.
Night Sight can now handle astrophotography, combining up to 15 exposures of 16 seconds each. This avoids the motion blur and star trails that would result from a single long exposure, while also allowing much better contrast and detail against the night sky.
Finally, portrait mode uses multiple physical cameras as well as dual pixels to better detect large objects, and enable portrait effects when shooting objects at a distance. Fur and hair are separated from backgrounds better, and there's more natural-looking bokeh.