
Google Makes Its Scalable Supercomputers for Machine Learning Publicly Available

Enterprise & IT | May 9, 2019

Google has made its Cloud TPU v2 Pods and Cloud TPU v3 Pods publicly available to help machine learning (ML) researchers, engineers, and data scientists iterate faster and train more capable machine learning models.

To accelerate the largest-scale machine learning applications, Google created custom silicon chips called Tensor Processing Units (TPUs). When assembled into multi-rack ML supercomputers called Cloud TPU Pods, these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems.

Today, for the first time, Google Cloud TPU v2 Pods and Cloud TPU v3 Pods are available in beta to ML researchers.

Google Cloud is providing a full spectrum of ML accelerators, including both Cloud GPUs and Cloud TPUs. Cloud TPUs offer competitive performance and cost, often training cutting-edge deep learning models faster while delivering significant savings.

While some custom silicon chips can only perform a single function, TPUs are fully programmable, which means that Cloud TPU Pods can accelerate a wide range of ML workloads, including many of the most popular deep learning models. For example, a Cloud TPU v3 Pod can train ResNet-50 (image classification) from scratch on the ImageNet dataset in just two minutes or BERT (NLP) in just 76 minutes.

A single Cloud TPU Pod can include more than 1,000 individual TPU chips which are connected by an ultra-fast, two-dimensional toroidal mesh network. The TPU software stack uses this mesh network to enable many racks of machines to be programmed as a single, giant ML supercomputer via a variety of flexible, high-level APIs.
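The key property of the toroidal mesh mentioned above is that every chip, even one at the "edge" of the grid, has the same number of neighbors because the edges wrap around. As an illustrative sketch only (the real interconnect is managed by Google's TPU software stack, and the grid dimensions here are hypothetical), the wrap-around neighbor relation of a 2-D torus can be modeled like this:

```python
# Illustrative sketch: neighbors of a chip in a 2-D toroidal mesh.
# Grid size and coordinates are hypothetical examples; a real Cloud TPU
# Pod's topology is managed by Google's TPU software stack.

def torus_neighbors(x, y, width, height):
    """Return the four mesh neighbors of chip (x, y); edges wrap around."""
    return [
        ((x - 1) % width, y),   # left (wraps to the rightmost column)
        ((x + 1) % width, y),   # right
        (x, (y - 1) % height),  # up (wraps to the bottom row)
        (x, (y + 1) % height),  # down
    ]

# A corner chip of a 32x32 grid still has four neighbors on a torus:
print(torus_neighbors(0, 0, 32, 32))  # [(31, 0), (1, 0), (0, 31), (0, 1)]
```

This uniform connectivity is what lets the software stack treat many racks as one homogeneous machine rather than special-casing boundary chips.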

The latest-generation Cloud TPU v3 Pods are liquid-cooled and each one delivers more than 100 petaFLOPs of computing power. In terms of raw mathematical operations per second, a Cloud TPU v3 Pod is comparable with a top 5 supercomputer worldwide (though it operates at lower numerical precision).
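To put the 100-petaFLOP figure in perspective, a quick back-of-the-envelope calculation (illustrative arithmetic only, based on the article's stated figure):

```python
# Back-of-the-envelope scale of a 100-petaFLOP machine.
PETA = 10 ** 15
pod_flops = 100 * PETA          # >100 petaFLOPs per Cloud TPU v3 Pod (article figure)
seconds_per_hour = 3600
ops_per_hour = pod_flops * seconds_per_hour
print(f"{ops_per_hour:.1e}")    # 3.6e+20 floating-point operations per hour
```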

It’s also possible to use smaller sections of Cloud TPU Pods called “slices.” ML teams often develop their initial models on individual Cloud TPU devices (which are generally available) and then expand to progressively larger Cloud TPU Pod slices via both data parallelism and model parallelism to achieve greater training speed and model scale.
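Data parallelism, as mentioned above, replicates the model on every device, splits each batch into shards, and averages the per-device gradients before updating the shared weights. A minimal sketch of that averaging step, using a toy one-parameter model (illustrative only; real TPU training goes through a framework such as TensorFlow or JAX, and the loss and data here are placeholders):

```python
# Illustrative sketch of data-parallel training: each "device" computes a
# gradient on its shard of the batch, the gradients are averaged (the
# all-reduce step), and the shared weight is updated once.
# Toy model: y = w * x, trained with mean squared error.

def shard_gradient(weight, shard):
    """Gradient of 0.5 * (weight*x - y)^2 averaged over one data shard."""
    return sum((weight * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(weight, shards, lr=0.1):
    """One training step: average per-shard gradients, then update."""
    grads = [shard_gradient(weight, s) for s in shards]  # in parallel on real hardware
    avg_grad = sum(grads) / len(grads)                   # all-reduce across devices
    return weight - lr * avg_grad

# Two "devices", each holding half of a batch drawn from y = 2x:
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward the true slope, 2.0
```

Scaling from a single device to a larger pod slice mostly means more shards per step; model parallelism, by contrast, splits the model itself across devices and is not shown here.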

Tags: Machine learning, Tensor processing unit
