XDMoD outage - June 2, 2021
There is a scheduled outage of the XDMoD tool (xdmod.osc.edu) from 9 a.m. to 4 p.m. on June 2, 2021, for an upgrade to version 9.5.0. During the outage, XDMoD will be in maintenance mode and not accessible to OSC users.
fMRIPrep
fMRIPrep is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline designed to provide an easily accessible, state-of-the-art interface. It is robust to variations in scan acquisition protocols, requires minimal user input, and provides easily interpretable and comprehensive error and output reporting.
MyOSC outage - May 17, 2021
https://my.osc.edu is currently unavailable.
Staff are working to resolve the problem as soon as possible.
Intermittent home directory performance issues
Users may experience performance issues in home directories. It is recommended to use a temporary directory ($TMPDIR, or scratch) or project storage to minimize the impact on your jobs.
OSC is currently troubleshooting the cause. Contact oschelp@osc.edu if there are questions.
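The recommendation above can be sketched as a job-script pattern: copy inputs into $TMPDIR, run the computation there, and copy results back before the job ends. This is a minimal illustration, not an official OSC script; the file names and the tr step are placeholders for real inputs and the real computation.

```shell
#!/bin/sh
# Sketch: do a job's heavy I/O in fast local $TMPDIR rather than the
# home directory, then stage results back. Names below are placeholders.
set -e

SUBMITDIR="$PWD"                      # inputs live here; results return here
printf 'sample data\n' > input.dat    # stand-in input file for this sketch

WORK="${TMPDIR:-/tmp}/job.$$"         # fall back to /tmp if TMPDIR is unset
mkdir -p "$WORK"
cp input.dat "$WORK/"
cd "$WORK"

tr 'a-z' 'A-Z' < input.dat > output.dat   # stand-in for the real computation

cp output.dat "$SUBMITDIR/"           # copy results back before cleanup
cd "$SUBMITDIR"
rm -rf "$WORK"
```

Because all intermediate reads and writes hit the node-local (or scratch) filesystem, only the initial and final copies touch the slower home directory.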
New scratch filesystem policy in effect Monday, April 5, 2021
The Ohio Supercomputer Center's new scratch filesystem policy is effective Monday, April 5, 2021.
The revised policy better defines standards for utilization and maintenance of the scratch filesystem by OSC users and staff.
A key update to the policy is that each user will have a quota of 100 tebibytes and 25 million files.
Home directory performance issues
Users may experience performance issues in home directory locations.
OSC is currently troubleshooting the issue.
Contact oschelp@osc.edu if there are questions.
System Downtime March 31, 2021
A downtime for all OSC HPC systems is scheduled from 7 a.m. to 9 p.m., Wednesday, March 31, 2021. The downtime will affect the Pitzer and Owens clusters, web portals, state-wide licenses, and HPC file servers. Login services, including MyOSC, will not be available during this time.
In preparation for the downtime, the batch scheduler will begin holding jobs that cannot be completed before 7 a.m., March 31, 2021. Jobs that are not started on clusters will be held until after the downtime and then started once the system is returned to production status.
GROMACS 2020.5 is available
GROMACS 2020.5 has been installed on Owens and Pitzer. Usage is via the module gromacs/2020.5. For help loading an installation, use the command: "module spider gromacs/2020.5". For information on available executables and installation details, use the command: "module help gromacs/2020.5".
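As a rough sketch of how the module might be used in a batch job, the fragment below loads gromacs/2020.5 inside a Slurm script. The account code, resource requests, executable name (gmx_mpi), and input file (topol.tpr) are all placeholders, not values from this announcement; confirm executable names with "module help gromacs/2020.5".

```shell
#!/bin/bash
#SBATCH --job-name=gromacs_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --account=PAS0000        # placeholder project code

module load gromacs/2020.5

# "gmx_mpi" and "topol.tpr" are placeholders for the actual executable
# and input; adjust to match your simulation.
srun gmx_mpi mdrun -s topol.tpr
```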