Cardinal

System Downtime August 6, 2024

A downtime for OSC HPC systems is scheduled from 7 a.m. to 9 p.m. on Tuesday, August 6, 2024. The downtime will affect the Pitzer, Owens, and Ascend clusters, web portals, and HPC file servers. There will be a short outage of the state-wide licenses; MyOSC (the client portal) will remain available during the downtime. In preparation for the downtime, the batch scheduler will not start jobs that cannot be completed before 7 a.m. on August 6. Jobs that cannot start before the downtime will be held and then started once the systems are returned to production status.
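
As a rough planning aid, the sketch below computes the largest walltime request that could still complete before the maintenance window opens; the 7 a.m. August 6 cutoff comes from the announcement above, and the script assumes it is run in the clusters' local (US Eastern) time zone.

from datetime import datetime

# Downtime window opens at 7 a.m. local time on Tuesday, August 6, 2024.
DOWNTIME_START = datetime(2024, 8, 6, 7, 0, 0)

def max_walltime_before_downtime(now=None):
    """Largest walltime (timedelta) that still finishes before the downtime,
    or None if the window has already opened."""
    now = now or datetime.now()
    remaining = DOWNTIME_START - now
    return remaining if remaining.total_seconds() > 0 else None

remaining = max_walltime_before_downtime()
if remaining is None:
    print("The downtime window has started; new jobs will be held.")
else:
    hours, rest = divmod(int(remaining.total_seconds()), 3600)
    print(f"Request a walltime under {hours:02d}:{rest // 60:02d} to finish before the downtime.")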

AutoDock

AutoDock is a suite of automated docking tools. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure. AutoDock has applications in X-ray crystallography, structure-based drug design, and lead optimization, among other areas.
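
One docking engine in the AutoDock family, AutoDock Vina, ships Python bindings. The minimal sketch below assumes the vina Python package is installed and that receptor and ligand files have already been prepared in PDBQT format; the file names, box center, and box size are placeholders.

from vina import Vina  # AutoDock Vina Python bindings (assumed installed)

v = Vina(sf_name="vina")                # use the Vina scoring function
v.set_receptor("receptor.pdbqt")        # placeholder: prepared rigid receptor
v.set_ligand_from_file("ligand.pdbqt")  # placeholder: prepared ligand

# Define the search box around the presumed binding site (placeholder values, in Angstroms).
v.compute_vina_maps(center=[15.0, 53.0, 16.5], box_size=[20, 20, 20])

# Run the docking search and write out the top-scoring poses.
v.dock(exhaustiveness=8, n_poses=5)
v.write_poses("docked_poses.pdbqt", n_poses=5, overwrite=True)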

Nodejs

Nodejs is used to create server-side web applications, and it is well suited to data-intensive applications because it uses an asynchronous, event-driven model.
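
Node.js applications themselves are written in JavaScript; to keep this document's examples in a single language, the sketch below illustrates the same asynchronous, event-driven idea with Python's asyncio, where one event loop serves many connections and handlers await I/O instead of blocking.

import asyncio

async def handle_client(reader, writer):
    data = await reader.readline()   # non-blocking read; the loop serves other clients meanwhile
    writer.write(b"echo: " + data)   # queue the response
    await writer.drain()             # non-blocking flush
    writer.close()
    await writer.wait_closed()

async def main():
    # One event loop multiplexes every connection on this server.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())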

Tinker

Tinker is a molecular modeling package that provides a general set of tools for molecular mechanics and molecular dynamics.

MRIcroGL

MRIcroGL is a medical image viewer that allows you to load overlays (e.g., statistical maps) and draw regions of interest (e.g., to create lesion maps).
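
MRIcroGL also includes a built-in Python scripting console. The minimal sketch below assumes the gl scripting module and the sample images that ship with MRIcroGL; it only runs inside MRIcroGL's scripting window.

import gl  # MRIcroGL's scripting module; only available inside MRIcroGL

gl.resetdefaults()
gl.loadimage('spm152')       # background anatomical image bundled with MRIcroGL
gl.overlayload('spmMotor')   # bundled statistical map loaded as an overlay
gl.minmax(1, 4, 4)           # threshold overlay layer 1 at t = 4
gl.opacity(1, 50)            # make the overlay semi-transparent
gl.colorname(1, '4hot')      # apply the "4hot" color table to the overlay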

Availability and Restrictions

Versions

MRIcroGL is available on the Pitzer cluster. These are the versions currently available:

dcm2niix

dcm2niix is designed to convert neuroimaging data from the DICOM format to the NIfTI format. DICOM is the standard image format generated by modern medical imaging devices, but it is very complicated and has been interpreted differently by different vendors. The NIfTI format is popular with scientists because it is simple and explicit, although this simplicity also imposes limitations (e.g., it requires equidistant slices).
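
A minimal conversion sketch, assuming the dcm2niix executable is on the PATH (for example, after loading the corresponding module); the input and output directories are placeholders.

import subprocess
from pathlib import Path

dicom_dir = Path("dicom_input")   # placeholder: directory containing DICOM files
out_dir = Path("nifti_output")    # placeholder: where NIfTI output is written
out_dir.mkdir(parents=True, exist_ok=True)

# -z y : write gzip-compressed .nii.gz files
# -f   : output filename pattern (%p = protocol name, %s = series number)
# -o   : output directory
subprocess.run(
    ["dcm2niix", "-z", "y", "-f", "%p_%s", "-o", str(out_dir), str(dicom_dir)],
    check=True,
)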

NVHPC

NVHPC, the NVIDIA HPC SDK, provides C, C++, and Fortran compilers that support GPU acceleration of HPC modeling and simulation applications written in standard C++ and Fortran, with OpenACC® directives, and with CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable-systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on premises or in the cloud.
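
As an illustration of the OpenACC workflow, the sketch below writes a small OpenACC-annotated C kernel and compiles it with the NVHPC C compiler. It assumes nvc is on the PATH (for example, after loading an NVHPC module) and a GPU target at run time, and it is driven from Python only to keep this document's examples in one language.

import subprocess
import tempfile
import textwrap
from pathlib import Path

# A small OpenACC-annotated vector addition in C.
openacc_src = textwrap.dedent("""
    #include <stdio.h>
    #define N 1000000
    int main(void) {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }
        #pragma acc parallel loop copyin(a, b) copyout(c)
        for (int i = 0; i < N; i++) c[i] = a[i] + b[i];
        printf("c[42] = %f\\n", c[42]);
        return 0;
    }
""")

workdir = Path(tempfile.mkdtemp())
src = workdir / "vecadd.c"
src.write_text(openacc_src)

# -acc enables OpenACC offload (use -acc=multicore to target CPU cores instead);
# -Minfo=accel reports which loops the compiler parallelized.
subprocess.run(
    ["nvc", "-acc", "-Minfo=accel", str(src), "-o", str(workdir / "vecadd")],
    check=True,
)
subprocess.run([str(workdir / "vecadd")], check=True)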
