Ascend

System Downtime March 14, 2023

A downtime for OSC HPC systems is scheduled from 7 a.m. to 9 p.m., Tuesday, March 14, 2023. The downtime will affect the Pitzer, Owens, and Ascend clusters, web portals, and HPC file servers. MyOSC (https://my.osc.edu) and state-wide licenses will be available during the downtime. In preparation for the downtime, the batch scheduler will not start jobs that cannot be completed before 7 a.m., March 14, 2023.

Technical Specifications

The following are technical specifications for Ascend.  

Number of Nodes:        24
Number of CPU Sockets:  48 (2 sockets/node)
Number of CPU Cores:    2,304 (96 cores/node)
Cores Per Node:         96 cores/node (88 usable cores/node)
Internal Storage:       12.8 TB NVMe

OSC enables Globus High Assurance storage endpoint

New High Assurance Globus endpoints for OSC will be deployed on February 2, 2023, to manage protected data. This will affect current projects at OSC that use Globus to manage their protected data; these projects will need to use the new High Assurance endpoints to access their data. The names of the new endpoints are OSC /fs/ess High Assurance for project storage (/fs/ess) and OSC /fs/scratch High Assurance for scratch storage (/fs/scratch).

NVHPC

NVHPC, the NVIDIA HPC SDK, provides C, C++, and Fortran compilers that support GPU acceleration of HPC modeling and simulation applications written in standard C++ and Fortran, with OpenACC® directives, and with CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on-premises or in the cloud.
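As an illustration of the OpenACC path, here is a minimal sketch of a C program that offloads a loop to the GPU. The compile line in the comment assumes the NVHPC C compiler driver nvc is available (for example, after loading an nvhpc module); the file name saxpy.c is arbitrary.

    /* saxpy.c - offload a SAXPY loop to the GPU with OpenACC.
       Compile with the NVHPC C compiler, e.g.:
           nvc -acc -Minfo=accel saxpy.c -o saxpy */
    #include <stdio.h>
    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Ask the compiler to generate a GPU kernel for this loop and
           to handle the host/device data movement. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]); /* expect 4.000000 */
        return 0;
    }

The -Minfo=accel flag makes the compiler report which loops were offloaded and what data movement it generated, which is the usual first step when tuning OpenACC code.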

System Downtime December 13, 2022

A downtime for OSC HPC systems is scheduled from 7 a.m. to 9 p.m., Tuesday, December 13, 2022. The downtime will affect the Pitzer, Owens, and Ascend clusters, web portals, and HPC file servers. MyOSC (https://my.osc.edu) and state-wide licenses will be available during the downtime. In preparation for the downtime, the batch scheduler will not start jobs that cannot be completed before 7 a.m., December 13, 2022. Jobs that do not start before the downtime will be held and then started once the system is returned to production status.

aocc

The AMD Optimizing C/C++ and Fortran Compilers (“AOCC”) are a set of production compilers optimized for software performance when running on AMD host processors using the AMD “Zen” core architecture. Supported processor families are AMD EPYC™, AMD Ryzen™, and AMD Ryzen™ Threadripper™ processors. The AOCC compiler environment simplifies and accelerates development and tuning of x86 applications built with the C, C++, and Fortran languages.
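As a brief sketch, the example below shows a reduction loop that the compiler can vectorize, with an illustrative compile line in the comment. AOCC's C compiler driver is clang; the -march=znver3 flag is an assumption for a Zen 3 host and should be adjusted to match the actual processor.

    /* dot.c - a reduction loop AOCC can vectorize for Zen cores.
       Compile with AOCC (flags illustrative; -march should match the host):
           clang -O3 -march=znver3 -ffast-math dot.c -o dot */
    #include <stdio.h>
    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = 0.5; b[i] = 2.0; }

        /* -ffast-math lets the compiler reorder this floating-point
           reduction so the loop can be vectorized. */
        double dot = 0.0;
        for (int i = 0; i < N; i++)
            dot += a[i] * b[i];

        printf("dot = %f\n", dot); /* expect 1000000.000000 */
        return 0;
    }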

Request access

Users who would like to use the Ascend cluster will need to request access. This is because of the particulars of the Ascend environment, including its size, its GPUs, and its scheduling policies.

Motivation

Access to Ascend is granted on a case-by-case basis because of the particulars of the environment noted above: its size, its GPUs, and its scheduling policies.

Ascend SSH key fingerprints

These are the public key fingerprints for Ascend:
ascend: ssh_host_rsa_key.pub = 2f:ad:ee:99:5a:f4:7f:0d:58:8f:d1:70:9d:e4:f4:16
ascend: ssh_host_ed25519_key.pub = 6b:0e:f1:fb:10:da:8c:0b:36:12:04:57:2b:2c:2b:4d
ascend: ssh_host_ecdsa_key.pub = f4:6f:b5:d2:fa:96:02:73:9a:40:5e:cf:ad:6d:19:e5

nccl

The NVIDIA Collective Communication Library (NCCL) implements multi-GPU and multi-node communication primitives optimized for NVIDIA GPUs and networking. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter, as well as point-to-point send and receive, that are optimized to achieve high bandwidth and low latency over PCIe and NVLink high-speed interconnects within a node and over NVIDIA Mellanox networking across nodes.
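The sketch below shows the single-process flavor of the API, assuming NCCL and the CUDA runtime are available (for example, via nccl and cuda modules); the compile line in the comment is illustrative. It creates one communicator per visible GPU with ncclCommInitAll and sums a buffer across all of them. Error checking is omitted for brevity.

    /* allreduce.c - sum a buffer across all GPUs visible to one process.
       Compile (paths illustrative): nvcc allreduce.c -lnccl -o allreduce */
    #include <stdlib.h>
    #include <cuda_runtime.h>
    #include <nccl.h>

    int main(void) {
        int ndev = 0;
        cudaGetDeviceCount(&ndev);

        ncclComm_t *comms = malloc(ndev * sizeof(ncclComm_t));
        int *devs = malloc(ndev * sizeof(int));
        float **buf = malloc(ndev * sizeof(float *));
        cudaStream_t *streams = malloc(ndev * sizeof(cudaStream_t));
        const size_t count = 1 << 20;   /* elements per GPU */

        for (int i = 0; i < ndev; i++) {
            devs[i] = i;
            cudaSetDevice(i);
            cudaMalloc((void **)&buf[i], count * sizeof(float));
            cudaMemset(buf[i], 0, count * sizeof(float));
            cudaStreamCreate(&streams[i]);
        }

        /* One communicator per GPU, all owned by this process. */
        ncclCommInitAll(comms, ndev, devs);

        /* Group the per-device calls so NCCL can launch them together. */
        ncclGroupStart();
        for (int i = 0; i < ndev; i++)
            ncclAllReduce(buf[i], buf[i], count, ncclFloat, ncclSum,
                          comms[i], streams[i]);
        ncclGroupEnd();

        for (int i = 0; i < ndev; i++) {
            cudaSetDevice(i);
            cudaStreamSynchronize(streams[i]);
        }
        for (int i = 0; i < ndev; i++) {
            ncclCommDestroy(comms[i]);
            cudaFree(buf[i]);
        }
        return 0;
    }

Wrapping the per-device ncclAllReduce calls in ncclGroupStart/ncclGroupEnd is required when one thread drives several GPUs, so that NCCL launches the operations as a single collective rather than blocking on each device in turn.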
