This story was printed from CdrInfo.com, located at http://www.cdrinfo.com.

Appeared on: Monday, June 25, 2018
NVIDIA Adds New High-Performance Computing Containers to GPU Cloud

NVIDIA has added nine additional containers to NVIDIA GPU Cloud (NGC), as part of the company's efforts to speed the deployment of GPU accelerated high-performance computing and AI.

This means that NGC, which launched last year, now includes 35 deep learning, high-performance computing, and visualization containers.

Containers allow researchers and data scientists to run AI workloads and deploy applications on a shared cluster, speeding their work. The containers make deploying deep learning frameworks - the building blocks for designing, training, and validating deep neural networks - faster and easier. Containers also simplify framework installation: users can access the latest application versions with simple pull and run commands.
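As a sketch of the pull-and-run workflow described above - the registry path, image name, and tag here are illustrative assumptions, not details from the article:

```shell
# Log in to NVIDIA's NGC container registry (requires a free NGC API key).
docker login nvcr.io

# Pull a GPU-accelerated framework container from NGC.
# Image name and tag are assumptions for illustration.
docker pull nvcr.io/nvidia/tensorflow:18.06-py3

# Run it with GPU access via the NVIDIA container runtime.
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.06-py3
```

The same two-step pull-and-run pattern applies to the HPC and visualization containers the article lists.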

The nine new HPC and visualization containers include CHROMA, CANDLE, PGI, and VMD. They join the eight containers - including NAMD, GROMACS, and ParaView - that NVIDIA launched at the previous year's Supercomputing Conference.

The container for PGI compilers available on NGC will help developers build HPC applications targeting multicore CPUs and NVIDIA Tesla GPUs. PGI compilers and tools enable development of performance-portable HPC applications using OpenACC, OpenMP and CUDA Fortran parallel programming.

The need for containers isn't limited to deep learning. Supercomputing also needs simpler application deployment across all of its segments, because almost all supercomputing centers rely on environment modules to build, deploy, and launch applications.

That approach is time consuming - an installation can take days - and unproductive for both system administrators and end users.

The complexity of such installs limits users' access to the latest features and to optimized performance, in turn delaying discoveries.

Containers offer an alternative. Installations are eliminated, so no one has to track environment module links or worry about breaking them.

Users can pull the containers themselves and deploy an application in minutes, instead of waiting days for an advisory council to approve an install and for the installation itself to be carried out.

Additional key benefits of containers are reproducibility and portability: users can run their workloads on different systems without installing the application and still get equivalent simulation results.


