
Google Makes DeepMind's AI-Powered Cloud Text-to-Speech Service Available to Developers

Google on Wednesday launched a voice synthesiser called "Cloud Text-to-Speech", powered by its Britain-based artificial intelligence (AI) subsidiary DeepMind.

The service is now available for developers to add to their own applications.

A text-to-speech service is a form of speech synthesis that converts text into spoken voice output. Google's text-to-speech powers the voices in services like Google Assistant, Search and Maps.

"'Cloud Text-to-Speech' lets developers choose from 32 different voices from 12 languages and variants," Dan Aharon, Product Manager, Cloud AI, said in a blog post.

"Cloud Text-to-Speech" correctly pronounces complex text such as names, dates, times and addresses for authentic-sounding speech, the company claimed.

It also allows developers to customise pitch, speaking rate and volume gain, and supports a variety of audio formats, including MP3 and WAV.
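
In code, those options map onto a single synthesis request. The snippet below is a minimal sketch assuming the google-cloud-texttospeech Python client library (names follow its 2.x releases and may differ in other versions); the voice name "en-US-Wavenet-A" and the parameter values are purely illustrative.

# Minimal sketch: synthesise speech with a WaveNet voice using the
# google-cloud-texttospeech Python client. Assumes credentials are already
# configured (e.g. via the GOOGLE_APPLICATION_CREDENTIALS environment variable).
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# The text to convert into spoken audio.
synthesis_input = texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech")

# Pick a language variant and a specific voice; "en-US-Wavenet-A" is illustrative.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-A",
)

# Customise pitch, speaking rate, volume gain and the output format (MP3 here).
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3,
    speaking_rate=1.1,    # 1.0 is the default speaking rate
    pitch=-2.0,           # semitones relative to the default pitch
    volume_gain_db=3.0,   # gain in dB relative to normal volume
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# The response carries the raw audio bytes in the requested format.
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)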

According to Google, "Cloud Text-to-Speech" can be used in a variety of ways: to power call-centre voice response systems (IVRs) with real-time natural-language conversations, to enable Internet of Things (IoT) devices to talk back, and to convert text-based media into spoken format.

Google said that "Cloud Text-to-Speech" includes a selection of high-fidelity voices built using WaveNet - a neural network trained with a large volume of speech samples that is able to create raw audio waveforms from scratch.

DeepMind introduced the first version of WaveNet in late 2016.

WaveNet synthesises more natural-sounding speech and, on average, produces speech audio that people prefer over other text-to-speech technologies.

During training, the network learns the structure of the speech, including its tones and the shape a realistic speech waveform should have.

When given text input, the trained WaveNet model generates the corresponding speech waveforms, one sample at a time, achieving higher accuracy than alternative approaches.

The improved WaveNet model generates raw waveforms 1,000 times faster than the original and can produce one second of speech in just 50 milliseconds.

The model also offers higher fidelity and is capable of creating waveforms with 24,000 samples a second.

"We have also increased the resolution of each sample from 8 bits to 16 bits, producing higher quality audio for a more human sound," Aharon added.

With these adjustments, the latest WaveNet model produces more natural-sounding speech, and listeners have given the new US English WaveNet voices an average mean opinion score (MOS) of 4.1 on a scale of one to five.
