Next Gen Ascend Early User Program

The Ascend cluster, including the Next Gen Ascend nodes, is tentatively expected to be made available to all OSC clients on March 31, 2025.
This page is currently under development.

Next Gen Ascend, a ~14 petaFLOP system, adds 274 Dell nodes to the Ascend cluster, each with:

  • Two AMD EPYC 7H12 processors (2.60 GHz, 64 cores each; 128 cores per server)
  • Two NVIDIA Ampere A100 GPUs (PCIe, 250 W, 40 GB)
  • 512 GB memory
  • HDR100 InfiniBand

Who is eligible to participate in the Early User Program?

OSC is deploying the Next Gen Ascend cluster in collaboration with The Ohio State University Wexner Medical Center and the Ohio State University College of Medicine.

Current OSC users from the Ohio State University College of Medicine, along with existing Ascend users, are participating in the early user program. Notifications were sent to PIs in late February.

Early user period

March 3 - 31, 2025 (tentative)

Hardware

Detailed system specifications:

  •  274 Dell R7525 server nodes, each with:
    • 2 AMD EPYC 7H12 processors (2.60 GHz, 60 usable cores each)
    • 2 NVIDIA A100 GPUs (PCIe, 250 W, 40 GB memory each)
    • 472 GB usable memory
    • 1.92 TB NVMe internal storage
    • HDR100 InfiniBand (100 Gbps)

Available software packages 

During the early access period, the programming environment and software packages will continue to be updated, and the system may go down or jobs may be killed with little or no warning. If your work cannot tolerate this level of instability, we recommend using Cardinal or Pitzer instead.

Selected software packages have been installed on the Next Gen Ascend cluster. After logging into Next Gen Ascend, you can use module spider to see the available packages. You can also check this page for the available packages; please note that the package list on the web page is not yet complete.
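For example, after logging in (the package name openmpi below is chosen only for illustration):

module spider                # list all available packages
module spider openmpi        # show available versions of one package and how to load them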

A key change is that you are now required to specify the version when loading any module. For example, instead of module load intel, you must use module load intel/2021.10.0. Omitting the version will result in an error message.
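For example, on the command line:

module load intel              # fails: no version specified
module load intel/2021.10.0    # succeeds: version specified explicitly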

If you need other software on Next Gen Ascend, please contact OSC Help.

Programming environment


How to log into Next Gen Ascend

  • SSH Method

To log in to Next Gen Ascend at OSC, ssh to the following hostname:

ascend-nextgen.osc.edu 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@ascend-nextgen.osc.edu

From there, you are connected to a Next Gen Ascend login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests (see the sketch below). We use control groups (cgroups) to keep the login nodes stable; please use batch jobs for any compute-intensive or memory-intensive work.
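For example, a minimal interactive request through the batch system (the project code PAS1234 is a placeholder; substitute your own):

salloc -A PAS1234 -N 1 -n 2 -t 1:00:00    # request 2 cores on 1 node for 1 hour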

  • OnDemand Method

You can also log in to Next Gen Ascend at OSC with our OnDemand tool. The first step is to log into OnDemand. Once logged in, click "Clusters" and select ">_Ascend Nextgen Shell Access."

Scheduling policy

  • Memory limit

When requesting OSC resources, we strongly suggest sizing your job's memory use against the available per-core memory. Each Next Gen Ascend node has 120 usable cores, with 4,027 MB of usable memory per core (472 GB per node); see the sketch below.
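As a sketch, you can size a request from the per-core figure (the project code and application name below are placeholders):

#!/bin/bash
#SBATCH --account=PAS1234      # placeholder project code; use your own
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=25   # 25 cores x 4,027 MB/core ≈ 98 GB of usable memory
#SBATCH --time=1:00:00

srun ./my_app                  # placeholder application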

  • CPU-only jobs

We reserve one core per GPU on each Next Gen Ascend node, so a CPU-only job can be scheduled but can request at most 118 cores per node. You can also request multiple nodes for a single CPU-only job; see the sketch below.
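A minimal sketch of the relevant job-script directives for a single-node CPU-only request (the project code is a placeholder):

#SBATCH --account=PAS1234
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=118   # the per-node maximum for a CPU-only job

Increase --nodes (up to the 4-node job size limit) to spread a CPU-only job across multiple nodes.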

  • Job limits 

              Max # of cores in use  Max # of GPUs in use  Max # of running jobs  Max # of jobs to submit
  Per user    704                    32                    256                    1000
  Per project 704                    32                    512                    n/a
  • Batch limit

          Max walltime limit      Min job size  Max job size  Note
  Nextgen 7-00:00:00 (168 hours)  1 core        4 nodes       Can request multiple partial nodes

How do the jobs get charged?

Jobs on both the original Ascend and Next Gen Ascend nodes are eligible for the early user program and will not be charged. Any jobs submitted during the early user program that are still queued at its end will be deleted from the system to avoid unwanted charges.

All jobs submitted after the early user program will be charged.

The charge for core-hour and GPU-hour on Ascend is the same as the Standard compute core-hour and GPU-hour on Pitzer and Cardinal. Academic users can check the service costs page for more information. Please contact OSC Help if you have any questions about the charges.  

How do I find my jobs submitted during the Early User Program?

For any queued or running jobs, you can check the job information with either Slurm commands (which are discussed here) or the OSC OnDemand Jobs app by clicking "Active Jobs" and choosing "Ascend NextGen" as the cluster name.
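For example, with standard Slurm commands:

squeue -u $USER               # list your queued and running jobs
scontrol show job <jobid>     # detailed information for one job (replace <jobid>)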

For any completed jobs, you can check the job information using the OSC XDMoD tool; choose "Ascend" as "Resource." Check here for more information on how to use XDMoD. Please note that it includes jobs on both the original Ascend and Next Gen Ascend.

How do I get help?

Please feel free to contact OSC Help if you have any questions. 
