Google's in-house network technology ranges from Firehose, the company's first in-house datacenter network deployed roughly ten years ago, to the latest-generation Jupiter network, which has increased the capacity of a single datacenter network more than 100x. Google says Jupiter fabrics can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, such capacity would be enough for 100,000 servers to exchange information at 10Gb/s each, or enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
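The 1 Pb/s figure and the 100,000-server illustration are consistent with each other; a quick back-of-envelope check (using only the numbers quoted above) confirms it:

```python
# Sanity-check the article's numbers: 100,000 servers at 10 Gb/s each
# should sum to 1 Petabit/sec (1 Pb/s = 10^6 Gb/s).
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps
total_pbps = total_gbps / 1_000_000

print(f"{total_pbps} Pb/s")  # → 1.0 Pb/s
```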
Google has arranged its network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
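To make the "smaller switches acting as one big switch" idea concrete, here is a minimal sketch of a non-blocking two-tier folded-Clos (leaf-spine) fabric built from identical fixed-radix switches. The model, the function name, and the 64-port/10 Gb/s example are illustrative assumptions, not Google's actual Jupiter parameters:

```python
def leaf_spine(radix: int, link_gbps: float) -> dict:
    """Size a non-blocking two-tier Clos fabric built from identical
    radix-`radix` switches (illustrative model, not Jupiter's design)."""
    down = radix // 2        # leaf ports facing hosts
    up = radix - down        # leaf ports facing spine switches
    spines = up              # each leaf runs one uplink to every spine
    leaves = radix           # each spine dedicates one port per leaf
    hosts = leaves * down
    return {
        "leaves": leaves,
        "spines": spines,
        "switches": leaves + spines,
        "hosts": hosts,
        "aggregate_gbps": hosts * link_gbps,
    }

# Example: 64-port switches with 10 Gb/s links.
print(leaf_spine(64, 10))
# 64 leaves + 32 spines = 96 small switches behaving like a single
# logical switch serving 2048 hosts at full line rate.
```

The design choice the topology buys you: capacity scales by adding more cheap commodity switches and stages, rather than by building one ever-larger (and ever more expensive) monolithic switch.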
The company also uses a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
In addition, Google built its own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.
According to Amin Vahdat, Google Fellow and Technical Lead for networking at Google, Google's datacenter networks "are built for modularity, constantly upgraded to meet the insatiable bandwidth demands of the latest generation of our servers." They are managed for availability and run as shared infrastructure, meaning that the same networks that power all of Google's internal infrastructure and services also power Google Cloud Platform.