= HPC Resource Management =

== Managing CPU and storage allocations on national facilities == #managecpu

CMS manage HPC resources on behalf of the 556 HPC users in the atmospheric and polar research community on these facilities:

 * 160 million core-hours of ARCHER compute
 * 14 million core-hours of NEXCS compute
 * 655 TB of ARCHER work disk
 * 4.7 PB of Research Data Facility (RDF) GPFS storage
 * 7.5 PB of JASMIN storage

Access to ARCHER is through either National Capability or standard NERC research awards. ARCHER, NEXCS, RDF and JASMIN resource requests are reviewed by the NERC HPC Steering Committee.

[wiki:/ContactUs Contact CMS] for advice on HPC availability and on resourcing compute time and data storage needs.

== Supporting the Met Office, EPCC, NERC and EPSRC on HPC delivery == #supportdelivery

The UK atmospheric science and Earth System modelling community has several HPC platforms available on which to run large numerical simulations and data analysis programs, notably:

 * [wiki:/PumaService PUMA], the Reading system which provides workflow infrastructure and access to ARCHER, MONSooN and NEXCS compute
 * [https://www.archer.ac.uk/ ARCHER], a Cray XC30 and the UKRI national service jointly funded by EPSRC and NERC
 * [http://collab.metoffice.gov.uk/twiki/bin/view/Support/WhatIsMONSooN MONSooN], the NERC/Met Office Cray XC40
 * [https://collab.metoffice.gov.uk/twiki/bin/view/Support/NEXCS NEXCS], the NERC-only share of the Met Office Cray XC40
 * [http://www.jasmin.ac.uk/ JASMIN], the super-data-cluster for data storage and analysis

CMS provide and maintain the software infrastructure needed to run the Met Office Unified Model on ARCHER and MONSooN, and work closely with CEDA to deliver JASMIN capability.

[wiki:/ContactUs Contact CMS] for information and advice on accessing these resources for your modelling needs. CMS can also advise on the suitability of other platforms for your modelling project.
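As a rough illustration of the back-of-envelope sizing that helps when preparing a compute and storage request, the sketch below estimates the core-hours and output volume of a hypothetical simulation campaign. The only hard figure used is the 24 cores per ARCHER (Cray XC30) node; the campaign size, walltime, output volume and helper names are illustrative assumptions, not CMS guidance for any particular project.

{{{#!python
# Back-of-envelope sizing for an allocation request (illustrative only).
# The campaign figures below are assumptions chosen for the example.

CORES_PER_NODE = 24          # ARCHER Cray XC30 nodes have 24 cores each


def core_hours(nodes, walltime_hours, n_runs):
    """Core-hours charged for n_runs jobs of the given size and length."""
    return nodes * CORES_PER_NODE * walltime_hours * n_runs


def output_tb(tb_per_run, n_runs):
    """Total model output in TB for the campaign."""
    return tb_per_run * n_runs


if __name__ == "__main__":
    # Hypothetical campaign: 50 runs on 16 nodes, 12 hours walltime each,
    # writing 2 TB of output per run.
    compute = core_hours(nodes=16, walltime_hours=12, n_runs=50)
    storage = output_tb(tb_per_run=2.0, n_runs=50)
    print(f"Compute request : {compute / 1e6:.2f} million core-hours")
    print(f"Storage request : {storage:.0f} TB of work disk / RDF / JASMIN")
}}}

Estimates of this kind are only a starting point; [wiki:/ContactUs contact CMS] to refine them before submitting a resource request.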