Google Announces Cloud TPUs That Will Let You Build and Train Machine Learning Apps

Highlights
  • The new TPUs deliver up to 180 teraflops of floating-point performance
  • The TPUs can be used for both training and inference
  • They are designed for machine learning applications

On the opening day of the Google I/O developers conference in Mountain View on Wednesday, Google announced its second-generation Tensor Processing Units (TPUs), successors to the TPUs the search giant unveiled at the same conference last year. Optimised for AI computations, the new TPUs deliver up to 180 teraflops of floating-point performance, Google says, and will be available via Google Compute Engine.

"We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs," Jeff Dean, Google Senior Fellow, and Urs Hölzle, Senior Vice President, Google Cloud Infrastructure, said in a blog post.

Google says developers will be able to program the Cloud TPUs using TensorFlow, the open-source machine learning framework it announced back in 2015, as well as new high-level APIs, which will "make it easier to train machine learning models on CPUs, GPUs, or Cloud TPUs with only minimal code changes".

Apart from the additional computing power, Google says the big difference is that the new TPUs can be used for both training and inference; the first-generation TPUs handled only inference, so models had to be trained separately on other hardware.

"Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy," Google said in the blog post, adding that the new TPUs will make the process faster.

"One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod," the post added.

Photo Credit: Google Cloud TPUs

Each TPU includes a custom high-speed network that allows up to 64 of them to be linked into a "TPU pod", delivering up to 11.5 petaflops of computational power.
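The pod figure follows directly from the per-chip number. A quick back-of-the-envelope check (using only the peak figures quoted in the article) confirms the arithmetic:

```python
# Sanity-check Google's quoted peak figures.
TFLOPS_PER_TPU = 180   # teraflops per second-generation Cloud TPU
TPUS_PER_POD = 64      # TPUs linked by the custom high-speed network

pod_teraflops = TFLOPS_PER_TPU * TPUS_PER_POD
pod_petaflops = pod_teraflops / 1000   # 1 petaflop = 1,000 teraflops
print(f"Pod throughput: {pod_petaflops:.2f} petaflops")  # 11.52, i.e. the ~11.5 quoted

# The translation-model example mentions "one eighth of a TPU pod":
eighth_of_pod = TPUS_PER_POD // 8
print(f"One eighth of a pod: {eighth_of_pod} TPUs")  # 8 TPUs vs. the 32 GPUs cited
```

So the earlier translation-model comparison pits 8 second-generation TPUs against 32 of the best commercially available GPUs.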

Google says the new TPUs will allow developers to integrate cutting-edge machine learning accelerators into their applications with ease.

Google says it will also make 1,000 Cloud TPUs available at "no cost" to ML researchers via the TensorFlow Research Cloud.


Gadgets 360 Staff

© Copyright Red Pixels Ventures Limited 2019. All rights reserved.