PSC Bridges-2 GPU-AI (Bridges-2 GPU Artificial Intelligence)
Resource Type
Compute
Latest Status
production
Description

Bridges-2 combines high-performance computing (HPC), high-performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows.

Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning). They are also available for accelerated simulation and modeling applications.

Each Bridges-2 GPU node contains 8 NVIDIA Tesla V100-32GB SXM2 GPUs, for aggregate performance of 1Pf/s mixed-precision tensor, 62.4Tf/s fp64, and 125Tf/s fp32, combined with 256GB of HBM2 memory per node to support training large models and big data.

Each NVIDIA Tesla V100-32GB SXM2 has 640 tensor cores that are specifically designed to accelerate deep learning, with peak performance of over 125Tf/s for mixed-precision tensor operations. In addition, 5,120 CUDA cores support broad GPU functionality, with peak floating-point performance of 7.8Tf/s fp64 and 15.7Tf/s fp32. 32GB of HBM2 (high-bandwidth memory) delivers 900 GB/s of memory bandwidth to each GPU. NVLink 2.0 interconnects the GPUs at 50GB/s per link, or 300GB/s per GPU across its six links.
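
These per-GPU figures can be confirmed from within a job by querying device properties. The sketch below is a minimal, illustrative example and assumes a PyTorch build with CUDA support is available on the node; PyTorch reports streaming multiprocessor (SM) counts rather than core counts, and the V100's 80 SMs correspond to its 5,120 CUDA cores (64 per SM) and 640 tensor cores (8 per SM).

```python
# Minimal sketch: query per-GPU properties from a job on a GPU node.
# Assumes a PyTorch build with CUDA support is installed.
import torch

for i in range(torch.cuda.device_count()):                      # expect 8 GPUs per node
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}")
    print(f"  HBM2 memory: {props.total_memory / 1024**3:.0f} GiB")       # ~32 GiB
    print(f"  streaming multiprocessors: {props.multi_processor_count}")  # 80 on a V100
    # 80 SMs x 64 FP32 cores = 5,120 CUDA cores; 80 SMs x 8 = 640 tensor cores
```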

Each Bridges-2 GPU node therefore provides a total of 40,960 CUDA cores and 5,120 tensor cores. In addition, each node holds 2 Intel Xeon Gold 6248 CPUs, 512GB of DDR4-2933 RAM, and 7.68TB of NVMe SSD storage.
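
These node-level totals follow directly from the per-GPU peaks. The short arithmetic check below (illustrative only) multiplies the per-GPU figures by the eight GPUs in a node, and also shows that the quoted 125Tf/s fp32 aggregate is 8 × 15.7Tf/s = 125.6Tf/s, rounded down.

```python
# Sanity check: derive per-node aggregates from per-GPU Tesla V100-32GB SXM2 peaks.
GPUS_PER_NODE = 8

per_gpu = {
    "tensor Tf/s":  125.0,   # mixed-precision tensor peak
    "fp64 Tf/s":    7.8,
    "fp32 Tf/s":    15.7,
    "HBM2 GB":      32,
    "CUDA cores":   5_120,
    "tensor cores": 640,
}

for name, value in per_gpu.items():
    print(f"{name}: {GPUS_PER_NODE * value:g} per node")
# tensor: 1000 Tf/s = 1 Pf/s; fp64: 62.4 Tf/s; fp32: 125.6 Tf/s (quoted as 125);
# HBM2: 256 GB; CUDA cores: 40,960; tensor cores: 5,120
```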

The nodes are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to enhance scalability of deep learning training.
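
In practice, deep learning frameworks exploit this topology by running one process per GPU and synchronizing gradients with NCCL, which communicates over NVLink within a node and over InfiniBand between nodes. The skeleton below is a generic, hypothetical PyTorch DistributedDataParallel example, not a Bridges-2-specific recipe; the model, hyperparameters, and launcher setup (e.g. torchrun under Slurm) are placeholders, and PSC's documentation should be consulted for site-specific job submission.

```python
# Generic multi-GPU / multi-node data-parallel skeleton using PyTorch + NCCL.
# Placeholder model and training loop; launcher (e.g. torchrun) must set
# RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, and MASTER_PORT.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")          # NCCL: NVLink intra-node, InfiniBand inter-node
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(10):                                  # placeholder training loop
        x = torch.randn(64, 1024, device=local_rank)
        y = torch.randn(64, 1024, device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()             # gradients are all-reduced across all GPUs and nodes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```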

Features
Is an ACCESS Allocated Production Compute resource
Agency supercomputers and advanced architecture systems
GPU use is the main purpose of this resource
Organization Name
Pittsburgh Supercomputing Center
Global Resource ID
bridges2-gpu-ai.psc.access-ci.org