Bridges-2 combines high-performance computing (HPC), high-performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows. Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning) and are also available for accelerated simulation and modeling applications. Bridges-2 has three types of GPU nodes: 10 HPE Cray 670 h100-80 nodes, each with eight H100-SXM5-80GB GPUs (80GB of GPU memory per GPU) and 2TB of RAM per node; 24 HPE v100-32 nodes, each with eight NVLink-connected V100 GPUs (32GB of GPU memory per GPU) and 512GB of RAM per node; and 9 v100-16 nodes, each with eight V100 GPUs without NVLink (16GB of GPU memory per GPU) and 192GB of RAM per node.
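A quick way to confirm which GPU node type a job has landed on is to enumerate the visible GPUs and their memory from within the job. The snippet below is a minimal sketch, assuming PyTorch with CUDA support is available in the job's software environment:

    import torch

    # List the GPUs visible to this job and report each device's memory.
    # On an h100-80 node this should show eight devices with ~80GB each;
    # on v100-32 nodes, ~32GB each; on v100-16 nodes, ~16GB each.
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GB")
    else:
        print("No CUDA devices visible to this process.")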
The nodes are connected to Bridges-2's other compute nodes and to its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to enhance the scalability of deep learning training.
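Multi-node deep learning training typically exercises this interconnect by launching one process per GPU and letting the communication library handle inter-node traffic over InfiniBand. The sketch below is illustrative only; it assumes PyTorch with the NCCL backend and a launcher such as torchrun that sets the standard RANK, LOCAL_RANK, and WORLD_SIZE environment variables:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Illustrative multi-node setup: one process per GPU, NCCL backend.
    # Assumes a launcher (e.g., torchrun) has set RANK, LOCAL_RANK, WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])

    # ... training loop goes here; gradients are all-reduced across processes
    # by NCCL, which communicates between nodes over the InfiniBand fabric.

    dist.destroy_process_group()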