OSC is deploying the Next Gen Ascend cluster in collaboration with The Ohio State University Wexner Medical Center and the Ohio State University College of Medicine.
Current OSC users from the OSU College of Medicine, as well as projects that already have Ascend access, are participating in the early user program. Notifications were sent to PIs and College of Medicine users in late February.
March 3 - 31, 2025 (tentative)
Detailed system specifications of Dual GPU nodes on the Next Gen Ascend:
In addition, some Quad GPU nodes are also included for testing:
The Next Gen Ascend cluster is now running on Red Hat Enterprise Linux (RHEL) 9, introducing several software-related changes compared to the RHEL 7/8 environment used on the Pitzer and original Ascend clusters. These updates provide access to modern tools and libraries but may also require adjustments to your workflows. Please refer to the Ascend Software Environment page for key software changes and available software.
A key change is that you are now required to specify the module version when loading any module. For example, instead of using module load intel, you must use module load intel/2021.10.0. Failing to specify the version will result in an error message.
Below is an example message when loading gcc without specifying the version:
$ module load gcc
Lmod has detected the following error: These module(s) or extension(s) exist but cannot be loaded as requested: "gcc".
You encountered this error for one of the following reasons:
  1. Missing version specification: On Ascend, you must specify an available version.
  2. Missing required modules: Ensure you have loaded the appropriate compiler and MPI modules.
Try: "module spider gcc" to view available versions or required modules.
If you need further assistance, please contact oschelp@osc.edu with the subject line "lmod error: gcc"
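To resolve this, first list the versions that are actually installed and then load one explicitly. For example (the version number below is only illustrative; use one reported by module spider):

$ module spider gcc                  # list available gcc versions and any prerequisite modules
$ module load gcc/12.3.0             # illustrative version; substitute one shown by module spider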
The Next Gen Ascend supports programming in C, C++, and Fortran. The available compiler suites include Intel, oneAPI, and GCC. Please refer to the Ascend Programming Environment page for details on compiler commands, parallel and GPU computing.
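As a rough sketch of building a program with the GCC suite (the module versions and the MPI module name below are assumptions; check the Ascend Programming Environment page and module spider for what is actually installed):

$ module load gcc/12.3.0                          # illustrative compiler version
$ gcc -O2 -o my_program my_program.c              # compile a serial C program
$ module load openmpi/4.1.6                       # illustrative MPI module
$ mpicc -O2 -o my_mpi_program my_mpi_program.c    # compile an MPI C program with the wrapper compiler

Here my_program.c and my_mpi_program.c stand in for your own source files.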
To login to Next Gen Ascend at OSC, ssh to the following hostname:
ascend-nextgen.osc.edu
You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:
ssh <username>@ascend-nextgen.osc.edu
From there, you are connected to the Next Gen Ascend login node and have access to the compilers and other software development tools on the RHEL 9 environment. You can run programs interactively or through batch requests. Please use batch jobs for any compute-intensive or memory-intensive work.
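For example, a minimal batch script might look like the sketch below; the project code, partition, walltime, and executable are placeholders that you should replace with your own values:

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=PAS1234            # placeholder project code; use your own
#SBATCH --partition=nextgen
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

module load gcc/12.3.0               # illustrative version; see 'Software environment'
srun ./my_mpi_program                # placeholder executable

Save it as job.sh and submit it with:

$ sbatch job.sh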
You can also login to Next Gen Ascend at OSC with our OnDemand tool. The first step is to log into OnDemand. Then once logged in you can access it by clicking on "Clusters," and then selecting ">_Ascend Nextgen Shell Access."
If your project already has Ascend access, you will have access to:

--partition=quad
to test it. The nodes are identical to those in the Ascend cluster you've been using, but on RHEL 9, along with a new suite of software, as discussed in 'Software environment' on this page.

--partition=nextgen
to test it. It includes the Dual GPU nodes discussed in 'Hardware' on this page, on RHEL 9, along with a new suite of software, as discussed in 'Software environment' on this page.

If you are a College of Medicine user, you will only have access to:

--partition=nextgen
to test it. It includes the Dual GPU nodes discussed in 'Hardware' on this page, on RHEL 9, along with a new suite of software, as discussed in 'Software environment' on this page.

It is strongly recommended that users consider their memory use relative to the available per-core memory when requesting OSC resources for their jobs.
Partition | # of GPUs per node | Usable cores per node | Default memory per core | Max usable memory per node |
---|---|---|---|---|
nextgen | 2 | 120 | 4,027 MB | 471.91 GB |
quad | 4 | 88 | 10,724 MB | 921.59 GB |
batch | 4 | 88 | 10,724 MB | 921.59 GB |
We reserve 1 core per GPU. CPU-only jobs can be scheduled but can request at most 118 cores per Dual GPU node and at most 84 cores per Quad GPU node. You can also request multiple nodes for one CPU-only job.
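As an illustration of sizing a request against the per-core defaults in the table above (the script is a sketch and the executable is a placeholder), a job on the nextgen partition could look like this:

#!/bin/bash
#SBATCH --partition=nextgen
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24         # 24 cores x ~4,027 MB default memory per core, about 94 GB
#SBATCH --gpus-per-node=2            # both GPUs on a Dual GPU node
#SBATCH --time=02:00:00
# Request memory explicitly (e.g. #SBATCH --mem=200G) only if the per-core default is not
# enough, and stay within the 471.91 GB usable per node listed above.

srun ./my_application                # placeholder executable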
Partition | Max walltime limit | Min job size | Max job size | Note |
---|---|---|---|---|
nextgen | 7-00:00:00 (168 hours) | 1 core | 16 nodes | Can request multiple partial nodes |
quad | 7-00:00:00 (168 hours) | 1 core | 2 nodes | Can request multiple partial nodes |
debug-nextgen | 1 hour | 1 core | 2 nodes | |
debug-quad | 1 hour | 1 core | 2 nodes | |
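For short tests, the debug partitions can be used interactively with standard Slurm commands; a sketch follows (the project code is a placeholder, and the exact interactive tooling offered on Ascend may differ):

$ salloc --partition=debug-nextgen --nodes=1 --ntasks-per-node=2 --gpus-per-node=1 --time=00:30:00 --account=PAS1234
$ srun --pty /bin/bash               # start an interactive shell on the allocated compute node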
| Max # of cores in use | Max # of GPUs in use | Max # of running jobs | Max # of jobs to submit |
---|---|---|---|---|
Per user | 5,632 | 96 | 256 | 1000 |
Per project | 5,632 | 96 | 512 | n/a |
Jobs on Ascend (including Quad GPU nodes and Dual GPU nodes) are eligible for the early user program and will not be charged. All queued jobs submitted during the early user program will be deleted from the system at the end of the early user program to avoid any unwanted charges.
All jobs submitted after the early user program will be charged. The charge for core-hour and GPU-hour on Ascend is the same as the Standard compute core-hour and GPU-hour on Pitzer and Cardinal. Academic users can check the service costs page for more information. Please contact OSC Help if you have any questions about the charges.
All College of Medicine users only have access to Dual GPU nodes. Jobs eligible for the early user program will not be charged. All queued jobs submitted during the early user program will be deleted from the system at the end of the early user program to avoid any unwanted charges.
Clients may continue to access the Pitzer cluster condo and other OSC resources during this time and will be charged based on the existing service agreement, but after the formal launch they may access only the Ascend cluster. HPC jobs on Ascend will be zero cost and will have priority scheduling. We will keep the CoM service page updated as more information becomes available.
For any queued or running jobs, you can check the job information with either Slurm commands (which are discussed here) or the OSC OnDemand Jobs app by clicking "Active Jobs" and choosing "Ascend NextGen" as the cluster name.
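For example, with standard Slurm commands:

$ squeue -u $USER                    # list your queued and running jobs
$ scontrol show job <jobid>          # show detailed information for one job (replace <jobid>)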
For any completed jobs, you can check the job information using the OSC XDMoD Tool. Choose "Ascend" as "Resource." Check here for more information on how to use XDMoD.
Please feel free to contact OSC Help if you have any questions.