
Appeared on: Tuesday, April 5, 2016
Nvidia 2016 GPU Technology Conference Is About Artificial Intelligence

As artificial intelligence sweeps across the technology landscape, NVIDIA unveiled today at its annual GPU Technology Conference a series of new products and technologies focused on deep learning, virtual reality and self-driving cars.

NVIDIA CEO and Co-founder Jen-Hsun Huang unveiled a deep-learning supercomputer in a box — a single integrated system with the computing throughput of 250 servers. The NVIDIA DGX-1, with 170 teraflops of half-precision performance, can speed up training times by over 12x.
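
The 170 teraflops refer to half-precision (FP16) arithmetic, which Pascal supports natively and which deep-learning training can usually tolerate. The following is a minimal sketch of what FP16 compute looks like in CUDA; it is illustrative code, not NVIDIA's benchmark, and assumes a GPU with native FP16 support (such as the P100) and the CUDA toolkit's cuda_fp16.h header:

    // Minimal FP16 sketch (illustrative only, not NVIDIA's benchmark code).
    // Inputs stay in FP32 in memory; the kernel converts to half, performs the
    // fused multiply-add at half precision, and converts back.
    // Build with something like: nvcc -arch=sm_60 fp16_fma.cu
    #include <cuda_fp16.h>
    #include <cstdio>

    __global__ void fma_half(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            __half ha = __float2half(a[i]);
            __half hb = __float2half(b[i]);
            __half hc = __float2half(c[i]);
            c[i] = __half2float(__hfma(ha, hb, hc));   // c = a*b + c, computed in FP16
        }
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.5f; b[i] = 2.0f; c[i] = 0.5f; }
        fma_half<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();
        printf("c[0] = %.2f\n", c[0]);   // expect 3.50
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

This sketch only does the arithmetic in FP16; mixed-precision training frameworks go further and store weights and activations in FP16 as well, which is where the memory and throughput savings come from.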

Huang described the current state of AI, pointing to a wide range of ways it’s being deployed. He noted more than 20 cloud-services giants — from Alibaba to Yelp, and Amazon to Twitter — that generate vast amounts of data in their hyperscale data centers and use NVIDIA GPUs for tasks such as photo processing, speech recognition and photo classification.

Underpinning the DGX-1 is a new processor, the NVIDIA Tesla P100 GPU, Nvidia's most advanced accelerator and the first to be based on the company's 11th-generation Pascal architecture.

Based on five technologies, which Jen-Hsun smilingly called "miracles," the Tesla P100 enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.

These five breakthroughs are the following:

- The NVIDIA Pascal architecture, designed for an exponential leap in compute performance.
- NVLink, NVIDIA's high-speed GPU interconnect, for maximum application scalability across multiple GPUs.
- The 16nm FinFET manufacturing process, for greater energy efficiency.
- CoWoS (Chip on Wafer on Substrate) packaging with HBM2 memory, for a big jump in memory bandwidth.
- New artificial intelligence algorithms, including native half-precision (FP16) instructions, for peak deep-learning performance.

The Tesla P100 GPU accelerator delivers a new level of performance for a range of HPC and deep learning applications, including the AMBER molecular dynamics code, which runs faster on a single server node with Tesla P100 GPUs than on 48 dual-socket CPU server nodes.

Tesla P100 Specifications

- 5.3 teraflops of double-precision, 10.6 teraflops of single-precision and 21.2 teraflops of half-precision performance
- 15.3 billion transistors on a 16nm FinFET process
- 16GB of CoWoS HBM2 stacked memory delivering 720 GB/s of bandwidth
- 160 GB/s of bidirectional NVLink interconnect bandwidth
- 300W TDP
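
As a rough cross-check of the throughput figures above: the single-precision number follows from the GP100's core count and clock, and Pascal runs half precision at twice and double precision at half that rate. The sketch below is a back-of-the-envelope calculation, assuming the publicly quoted 3,584 FP32 CUDA cores and an approximate 1.48 GHz boost clock (figures not taken from this article):

    // Back-of-the-envelope check of Tesla P100 peak throughput.
    // Assumptions (not from the article): 3584 FP32 CUDA cores, ~1.48 GHz boost
    // clock, 2 ops per core per cycle (fused multiply-add), 2x FP16 rate on Pascal.
    #include <cstdio>

    int main() {
        const double cores     = 3584;   // FP32 CUDA cores on GP100
        const double boost_ghz = 1.48;   // approximate boost clock
        const double fma_ops   = 2.0;    // one FMA counts as two floating-point ops

        double fp32_tflops = cores * boost_ghz * fma_ops / 1e3;  // gigaflops -> teraflops
        double fp16_tflops = fp32_tflops * 2.0;                  // two FP16 ops per FP32 lane per cycle
        double fp64_tflops = fp32_tflops / 2.0;                  // 1:2 FP64 ratio on GP100

        printf("FP32 peak: ~%.1f TFLOPS\n", fp32_tflops);  // ~10.6
        printf("FP16 peak: ~%.1f TFLOPS\n", fp16_tflops);  // ~21.2
        printf("FP64 peak: ~%.1f TFLOPS\n", fp64_tflops);  // ~5.3
        return 0;
    }

Eight of these GPUs in a DGX-1 give roughly 8 x 21.2 ≈ 170 teraflops of FP16 throughput, which is where the headline figure quoted earlier comes from.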

The Pascal-based NVIDIA Tesla P100 GPU accelerator will become generally available in June in the new NVIDIA DGX-1 deep learning system, and is expected to be available from server manufacturers beginning in early 2017.

A key early customer for the DGX-1 is Massachusetts General Hospital. It has set up a clinical data center, with NVIDIA as a founding partner, that will use AI to help diagnose disease, starting in the fields of radiology and pathology.

While NVIDIA hardware has long made headlines, software is key to advancing GPU-accelerated computing. Nvidia announced the NVIDIA SDK, and Jen-Hsun described a series of major updates that it’s getting.
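
The article does not list the individual updates, but the SDK is essentially NVIDIA's umbrella for its GPU-accelerated libraries and tools (CUDA, cuDNN, cuBLAS and others). As a hedged illustration of the kind of drop-in acceleration those libraries provide, here is a minimal cuBLAS sketch, written as an assumption for illustration rather than anything shown in the keynote, that multiplies two matrices on the GPU:

    // Minimal cuBLAS sketch: C = alpha*A*B + beta*C on the GPU.
    // Illustrative only; error handling omitted. cuBLAS stores matrices column-major.
    // Build with something like: nvcc sgemm_demo.cu -lcublas
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const int n = 512;                       // square matrices, n x n
        const float alpha = 1.0f, beta = 0.0f;
        float *A, *B, *C;
        cudaMallocManaged(&A, n * n * sizeof(float));
        cudaMallocManaged(&B, n * n * sizeof(float));
        cudaMallocManaged(&C, n * n * sizeof(float));
        for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

        cublasHandle_t handle;
        cublasCreate(&handle);
        // C = A * B; with all-ones A and all-twos B, every entry of C becomes 2*n.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, A, n, B, n, &beta, C, n);
        cudaDeviceSynchronize();
        printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * n);
        cublasDestroy(handle);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

The same call pattern — create a handle, hand device pointers to a library routine, synchronize — carries over to cuDNN and the rest of the SDK's GPU-accelerated libraries.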

The keynote's visual highlight was a VR experience, built on NASA's research, that sends visitors to Mars. The Mars 2030 VR experience, developed with FUSION Media and with advice from NASA, was demoed by personal computing pioneer Steve Wozniak.

Jen-Hsun showed how Nvidia's Iray technology can create interactive, virtual 3D worlds with great fidelity. These Iray VR capabilities let users strap on a headset and prowl around photorealistic virtual environments, such as a building not yet constructed.

"With Iray VR, you’ll be able to look around the inside of a virtual car, a modern loft, or the interior of our still unfinished Silicon Valley campus with uncanny accuracy," Nvidia said.

Expect more details on the availability of Iray VR later this spring.

To show what VR can do, Google sent along 5,000 Google Cardboard VR viewers. Nvidia passed them out after the keynote so GTC attendees could experience NVIDIA Iray VR technology on their phones.

Continuing its efforts to help build autonomous vehicles with super-human levels of perception, Nvidia also introduced an end-to-end mapping platform for self-driving cars.

It’s designed to help automakers, map companies and startups create HD maps and keep them updated, using the compute power of NVIDIA DRIVE PX 2 in the car and NVIDIA Tesla GPUs in the data center.

Maps are a key component for self-driving cars. Automakers will need to equip vehicles with powerful on-board supercomputers capable of processing inputs from multiple sensors to precisely understand their environments.
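
To make the in-car / data-center split concrete, here is a deliberately simplified, hypothetical sketch of the data flow: the vehicle's perception stack (the role DRIVE PX 2 plays) reduces raw sensor frames to compact map observations, and the data center (the role of the Tesla GPUs) folds uploads from many cars into the HD map. Every type and function name below is an assumption for illustration; none of it is an NVIDIA API:

    // Hypothetical sketch of the in-car / data-center mapping split.
    // All names here are illustrative assumptions, not NVIDIA APIs.
    #include <cstdio>
    #include <map>
    #include <utility>
    #include <vector>

    // Compact observation produced by the in-vehicle perception stack
    // (the part that would run on DRIVE PX 2).
    struct MapObservation {
        double latitude;
        double longitude;
        int    feature_type;   // e.g. 0 = lane edge, 1 = sign, 2 = traffic light
    };

    // Key for a coarse map tile, so updates from many cars can be grouped.
    using TileKey = std::pair<long, long>;

    static TileKey tile_of(const MapObservation &o) {
        // ~0.001-degree tiles; a real system would use a proper geodetic grid.
        return { static_cast<long>(o.latitude * 1000),
                 static_cast<long>(o.longitude * 1000) };
    }

    // Data-center side: fold a batch of observations uploaded by one car into
    // per-tile counts (a stand-in for the real map-fusion step on Tesla GPUs).
    void update_hd_map(std::map<TileKey, int> &tile_counts,
                       const std::vector<MapObservation> &batch) {
        for (const auto &o : batch) ++tile_counts[tile_of(o)];
    }

    int main() {
        // Car side: pretend the perception stack summarized one short drive.
        std::vector<MapObservation> batch = {
            {37.3706, -121.9668, 0},
            {37.3707, -121.9669, 0},
            {37.3710, -121.9675, 1},
        };
        std::map<TileKey, int> tile_counts;
        update_hd_map(tile_counts, batch);
        printf("tiles touched by this upload: %zu\n", tile_counts.size());
        return 0;
    }

The design idea the platform description suggests is bandwidth: raw camera and lidar streams stay in the car, and only small, structured observations travel to the data center for map updates.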

Nvidia is also putting its DRIVE PX 2 AI supercomputer into the cars that will compete in the Roborace Championship, the first global autonomous motor sports competition.

Part of the new Formula E ePrix electric racing series, Roborace combines the intrigue of robot competition with earth-friendly alternative energy racing.

Every Roborace will pit 10 teams, each with two driverless cars equipped with NVIDIA DRIVE PX 2, against each other in one-hour races. The teams will have identical cars. Their sole competitive advantage: software.

DRIVE PX 2 provides supercomputer-class performance, up to 24 trillion operations a second for AI applications, in a case the size of a lunchbox. And such a small box is exactly what these racecars need.


