Pitzer

TIP: Remember to check the menu to the right of the page for related pages with more information about Pitzer's specifics.

OSC's original Pitzer cluster was installed in late 2018 and is a Dell-built, Intel® Xeon® 'Skylake' processor-based supercomputer with 260 nodes.

In September 2020, OSC installed an additional 398 Intel® Xeon® 'Cascade Lake' processor-based nodes as part of a Pitzer Expansion cluster.


Hardware

Photo of Pitzer Cluster

Detailed system specifications:

Total Compute Nodes
  • Deployed in 2018: 260 Dell nodes
  • Deployed in 2020: 398 Dell nodes
  • Total: 658 Dell nodes

Total CPU Cores
  • Deployed in 2018: 10,560 total cores
  • Deployed in 2020: 19,104 total cores
  • Total: 29,664 total cores

Standard Dense Compute Nodes
  • Deployed in 2018: 224 nodes
      • Dual Intel Xeon 6148s Skylakes
      • 40 cores per node @ 2.4 GHz
      • 192 GB memory
      • 1 TB HDD disk space
  • Deployed in 2020: 340 nodes
      • Dual Intel Xeon 8268s Cascade Lakes
      • 48 cores per node @ 2.9 GHz
      • 192 GB memory
      • 1 TB HDD disk space
  • Total: 564 nodes

Dual GPU Compute Nodes
  • Deployed in 2018: 32 nodes
      • Dual Intel Xeon 6148s
      • Dual NVIDIA Volta V100 w/ 16 GB GPU memory
      • 40 cores per node @ 2.4 GHz
      • 384 GB memory
      • 1 TB HDD disk space
  • Deployed in 2020: 42 nodes
      • Dual Intel Xeon 8268s
      • Dual NVIDIA Volta V100 w/ 32 GB GPU memory
      • 48 cores per node @ 2.9 GHz
      • 384 GB memory
      • 1 TB HDD disk space
  • Total: 74 dual GPU nodes

Quad GPU Compute Nodes
  • Deployed in 2018: N/A
  • Deployed in 2020: 4 nodes
      • Dual Intel Xeon 8260s Cascade Lakes
      • Quad NVIDIA Volta V100s w/ 32 GB GPU memory and NVLink
      • 48 cores per node @ 2.4 GHz
      • 768 GB memory
      • 4 TB SSD disk space
  • Total: 4 quad GPU nodes

Large Memory Compute Nodes
  • Deployed in 2018: 4 nodes
      • Quad-processor Intel Xeon 6148 Skylakes
      • 80 cores per node @ 2.4 GHz
      • 3 TB memory
      • 7.6 TB SSD disk space
  • Deployed in 2020: 12 nodes
      • Dual Intel Xeon 8268 Cascade Lakes
      • 48 cores per node @ 2.9 GHz
      • 768 GB memory
      • 1 TB HDD disk space
  • Total: 16 nodes

Interactive Login Nodes
  • Deployed in 2018: 4 nodes
      • Dual Intel Xeon 6148s
      • 368 GB memory
      • IP address: 192.148.247.[176-179]
  • Total: 4 nodes

InfiniBand High-Speed Network
  • Deployed in 2018: Mellanox EDR (100 Gbps) InfiniBand networking
  • Deployed in 2020: Mellanox EDR (100 Gbps) InfiniBand networking

Theoretical Peak Performance
  • Deployed in 2018: ~850 TFLOPS (CPU only), ~450 TFLOPS (GPU only), ~1300 TFLOPS (total)
  • Deployed in 2020: ~1900 TFLOPS (CPU only), ~700 TFLOPS (GPU only), ~2600 TFLOPS (total)
  • Total: ~2750 TFLOPS (CPU only), ~1150 TFLOPS (GPU only), ~3900 TFLOPS (total)

How to Connect

  • SSH Method

To log in to Pitzer at OSC, ssh to the following hostname:

pitzer.osc.edu 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer.osc.edu
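
If you connect often, an SSH client configuration entry can serve as a shortcut. This is only a sketch: the "pitzer" alias is arbitrary, and <username> stands in for your OSC username.

# ~/.ssh/config -- optional shortcut for connecting to Pitzer
Host pitzer
    HostName pitzer.osc.edu
    User <username>

With this entry in place, running ssh pitzer is equivalent to the full command above.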

You may see a warning message that includes an SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

From there, you are connected to a Pitzer login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups (cgroups) on the login nodes to keep them stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

  • OnDemand Method

You can also log in to Pitzer at OSC with our OnDemand tool. The first step is to log in to OnDemand. Once logged in, you can access Pitzer by clicking on "Clusters" and then selecting ">_Pitzer Shell Access".

Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

The module system on Pitzer is the same as on the Owens and Ruby systems. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded.
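
For example, a typical sequence on a login node might look like the following. The gcc package name here is only illustrative; use module avail to see what is actually installed.

module list          # show currently loaded modules (batch software, Intel compiler, mvapich2)
module avail         # list modules available to load
module spider gcc    # search for modules hidden by dependencies or conflicts
module load gcc      # add a package to your environment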

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Compiling Code to Use Advanced Vector Extensions (AVX2)

The Skylake and Cascade Lake processors that make up Pitzer support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

In our experience, the Intel and PGI compilers do a much better job than the gnu compilers at optimizing HPC code.

With the Intel compilers, use -xHost and -O2 or higher. With the gnu compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.
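
As a sketch, compiling a hypothetical source file mycode.c on a Pitzer login node might look like this; the file name is a placeholder, and the flags are the ones described above.

icc -xHost -O2 -o mycode mycode.c          # Intel compiler: vectorize for the host CPU
gcc -march=native -O3 -o mycode mycode.c   # GNU compiler: equivalent host-targeted optimization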

This advice assumes that you are building and running your code on Pitzer. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

See the Pitzer Programming Environment page for details.

Batch Specifics

On September 22, 2020, OSC switched to Slurm for job scheduling and resource management on the Pitzer cluster.

Refer to this Slurm migration page to understand how to use Slurm on the Pitzer cluster. Some specifics you will need to know to create well-formed batch scripts:

  • OSC enables the PBS compatibility layer provided by Slurm, so most PBS batch scripts that worked in the previous Torque/Moab environment still work in Slurm.
  • Pitzer is a heterogeneous system with mixed types of CPUs after the expansion, as shown in the table above. Please be careful when requesting resources on Pitzer, and check this page for a more detailed discussion.
  • Jobs on Pitzer may request partial nodes, as illustrated in the example script below.
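
As an illustrative sketch only, a minimal Slurm batch script requesting a partial node might look like the following; the project account, module, and executable names are placeholders.

#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --account=PAS1234          # placeholder: replace with your OSC project account
#SBATCH --time=01:00:00            # walltime (HH:MM:SS)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4        # partial node: only 4 cores on one node

module load intel                  # load whatever your program needs
srun ./my_program                  # placeholder executable

You would submit such a script with sbatch and monitor it with squeue -u $USER.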

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC and Slurm migration. For specific information about modules and file storage, please see the Batch Execution Environment page.

Pitzer Known Issues (unresolved)

Rolling reboots on all HPC systems starting Oct 31 2024 (affects Owens and Pitzer): We will have rolling reboots of all HPC clusters (Ascend, Cardinal, Owens, and Pitzer), including login and compute nodes, starting from 9am on Oct 31 2024. The rolling reboot is for...

Pitzer Changelog