Charging for memory use

On July 24th, 2014, OSC implemented changes to the Resource Unit (RU) calculation formula. Academic RU allocations are still fully subsidized by the State of Ohio.

In the past, only processor use in batch jobs was considered when calculating RUs. The change in our charging policy reflects the fact that memory is also a scarce resource that should be accounted for in RU charges. The change affects only Oakley jobs that request more than 4GB of memory per processor (core).

The Executive Committee of the Statewide Users Group (SUG) has endorsed this proposal based on the results of the public comment period.

Previously, we had asked users to voluntarily restrict their memory use to the available per-core memory, which on Oakley equates to 4GB/core. However, we observed a growing number of jobs that consumed more than this amount of memory, which can negatively impact other users whose jobs request less than a full node (12 cores) and therefore share the node.

To understand the motivation behind the new charging policy, suppose job A uses 12 cores and 4GB memory while job B uses 1 core and 48GB memory. Both jobs occupy an entire node (job A by using all 12 cores, job B by using all 48GB of memory on a standard node), but job A is charged twelve times as much as job B under current policy. Under the new policy both are charged the same amount, equal to the current charge for 12 cores.

Technical solution – memory limits

Two technical changes were necessary to ensure that users can use all of the memory they are being charged for. These changes are implemented on Oakley only. Glenn is not affected.

First, the job submission system had to be modified to honor memory requests such as nodes=1:ppn=1,mem=12GB; such requests did not work properly in the past. This change was rolled out in October 2013. Jobs that do not include an explicit memory request are allocated 4GB per core. For example, if you request nodes=1:ppn=3, you will have an implicit memory limit of 12GB. Jobs requesting more than 48GB are allocated an entire large-memory node.
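
For example, the following resource requests (shown here in #PBS directive form; the same resource strings can be passed to qsub with -l) illustrate how the implicit and explicit limits apply:

    #PBS -l nodes=1:ppn=3
    # No explicit memory request: the job receives the implicit limit of
    # 3 cores x 4GB/core = 12GB.

    #PBS -l nodes=1:ppn=1,mem=12GB
    # Explicit memory request: honored since the October 2013 change;
    # the job receives 1 core and a 12GB memory limit.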

Second, a mechanism was needed to ensure that a job cannot exceed its memory limit and encroach on memory allocated to other jobs. Memory containers, or cgroups, allow us to enforce these limits on Oakley. A job that exceeds its memory limit on a shared node will be killed by the batch system. Jobs running on whole nodes (i.e., ppn=12), all parallel jobs, and large-memory jobs are allowed to use all of the memory available on the node and a large portion of swap space. A job that approaches the memory+swap limit will be killed by the batch system.
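
For reference, the limits are enforced with the standard Linux cgroup memory controller, so a running job can inspect the values applied to it. The sketch below assumes a cgroup v1 layout and a torque/$PBS_JOBID group name; the exact mount point and group naming on Oakley are assumptions, not documented behavior:

    # Print the memory and memory+swap limits for the current job's cgroup.
    # The torque/$PBS_JOBID path is an assumption; check /proc/self/cgroup
    # to see which group your job was actually placed in.
    cat /sys/fs/cgroup/memory/torque/$PBS_JOBID/memory.limit_in_bytes
    cat /sys/fs/cgroup/memory/torque/$PBS_JOBID/memory.memsw.limit_in_bytes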

A job that requests nodes=1:ppn=1 will be assigned one core and will have access to 4GB of RAM (but no more than that). A job that requests nodes=1:ppn=1,mem=12GB will only be assigned one core, but will have access to 12GB of RAM.
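
As a concrete illustration, a minimal single-core job script with an explicit memory request might look like the sketch below (the job name, walltime, and executable are placeholders, not values prescribed by this policy):

    #!/bin/bash
    # Request 1 core but a 12GB memory limit; name and walltime are placeholders.
    #PBS -N single_core_12gb
    #PBS -l nodes=1:ppn=1,mem=12GB
    #PBS -l walltime=1:00:00

    # Run from the directory the job was submitted from.
    cd $PBS_O_WORKDIR
    ./my_app    # placeholder for your executable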

Charging policy change

A job that requests nodes=1:ppn=3 will still be charged for 3 cores' worth of Resource Units (RUs). However, a job that requests nodes=1:ppn=1,mem=12GB is currently charged for only 1 core's worth of RUs, in accordance with our current charging policy. Under our new policy, this job would be charged for 3 cores. Why?

effective cores = memory requested / (memory per core)

effective cores = 12GB / (4GB/core) = 3 cores

The value for "effective cores" will be used in the regular RU calculation if larger than the explicit number of cores requested.
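
Putting this together, the number of cores a job is charged for can be estimated as in the sketch below (shell arithmetic; rounding fractional effective cores up to a whole core is an assumption, not something the policy states):

    # Estimate charged cores for an Oakley job (4GB per core).
    # Jobs requesting more than 48GB are charged for a whole large-memory node instead.
    cores=1        # cores requested (ppn)
    mem_gb=12      # memory requested, in GB
    eff=$(( (mem_gb + 3) / 4 ))                 # effective cores, rounded up (assumption)
    charged=$(( eff > cores ? eff : cores ))    # charge for whichever is larger
    echo "charged cores: $charged"              # prints 3 for this example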

Jobs requesting more than 48GB memory are allocated (and charged for) an entire large-memory node.

Please note that this change has no charging impact on jobs that use full nodes, including parallel jobs.

Impact on users

By examining jobs on Oakley, we have determined that approximately 1% of current jobs will be impacted by this change. About 8% of all jobs might see increased charges, but only because they request more memory than they actually need; these jobs could avoid the increased charges by adjusting their scripts.

We expect about 1 in 5 users to be impacted by this change.

Only about 0.5% of all jobs will require a script change to avoid cancellation, as many of our users with memory requirements in excess of 4GB/core are already requesting memory limits in their jobs.

Well-behaved jobs using less than a whole node will benefit because other jobs will no longer be able to infringe on their memory or cause an out-of-memory node crash.

Per-job estimate of charges

The job epilog, which reports usage statistics for each job, will contain information about what the job would be charged under the new policy, along with a link to this document.

Public comment period

A public comment period was held to gather user feedback on this change; the response was positive. The comment period ended June 5th, 2014.


Q: Does this impact Glenn?

A: No. Glenn does not support memory containers, and will likely be decommissioned within the next 12 months. On Glenn you must request enough cores (ppn) to cover your memory requirements at 3GB/core.
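For example, a job on Glenn that needs 12GB of memory should request at least nodes=1:ppn=4, since 4 cores x 3GB/core = 12GB.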

Q: What if I do not use a lot of memory in my jobs?

A: If you are using less than 4GB/core on Oakley and are not explicitly requesting more memory, you will not see any change in your job costs.

Q: I am using one of the bigmem (192GB of RAM) or hugemem (1TB of RAM) nodes. Will I see a change in my job charges?

A: Possibly. Jobs on these nodes are allocated the whole node. Hugemem jobs are already charged for the whole node (32 cores), but bigmem jobs are currently charged only for the number of cores requested. With the policy change, bigmem jobs will be charged for the whole node (12 cores).

Q: I run parallel jobs. Will I see a change in my job charges?

A: No. Your jobs are already charged for entire nodes, and thus are already allocated all of the RAM. You will see no changes.

Q: I am an academic user. Does this mean I will have to start paying for RU?

A: No. This is a change to the RU formula to account for memory use beyond 4GB/core. RUs allocated to academic research projects conducted by Ohio researchers are still fully subsidized by the State of Ohio; however, some individual jobs may cost more RUs against your allocation than they would have previously.

See Also

Out-of-Memory (OOM) or Excessive Memory Usage