The ARCHER2 Service is a world-class advanced computing resource for UK researchers. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh.

ARCHER2 is due to commence operation in 2020, replacing the current ARCHER service. Please visit the ARCHER2 website for details.

Further information on moving to ARCHER2 will be made available here.

Pilot System

Prior to installation of the complete ARCHER2 system, we have access to a 4-cabinet pilot machine that will run in parallel with ARCHER. ARCHER users will find the new machine familiar in many respects, but there are some important differences; a comprehensive set of presentations is available, in particular the one titled Differences between ARCHER and ARCHER2.

CMS has installed and undertaken limited testing of several versions of the Unified Model and its auxiliary software. The process is ongoing - we encourage users where possible to migrate their workflows to use the latest versions of the UM.

Limitations of the pilot system may result in some constraint on the nature of workflows that it can accommodate.


ARCHER2 uses SLURM (ARCHER used PBS), so all ARCHER batch scripts need to be rewritten for use on ARCHER2.
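As a starting point, the most common correspondences are sketched below (illustrative only: the budget code and queue names are placeholders, and partition/QoS names should be checked against the ARCHER2 documentation):

```shell
# Submitting and monitoring:
#   qsub script.sh    ->  sbatch script.sh
#   qstat -u $USER    ->  squeue -u $USER
#   qdel <jobid>      ->  scancel <jobid>
#
# In the script header:
#   #PBS -l walltime=01:00:00  ->  #SBATCH --time=01:00:00
#   #PBS -A <budget>           ->  #SBATCH --account=<budget>
#   #PBS -l select=4           ->  #SBATCH --nodes=4
#   #PBS -q <queue>            ->  #SBATCH --partition=... plus --qos=...
```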

login nodes

The login nodes do support persistent ssh agents, so data transfer to JASMIN through Rose/Cylc workflows is possible.

compute nodes

Compute nodes cannot see /home. Unlike on ARCHER, batch scripts run on the compute nodes, so they must not reference /home.

serial nodes

The pilot system does not have serial nodes. The full system will have serial nodes.


Request access through the ARCHER2 SAFE.

File Systems

ARCHER2 has /home and /work file systems with the same structure as on ARCHER. The pilot system has only 325 TB on /work and 1.7 TB on /home; the full system will have substantially more.


The ARCHER budget structure and membership will carry over to ARCHER2.


Currently installed versions: 7.3, 8.4, 11.1, 11.2, 11.5, 11.6.

Example jobs
UM version | job/suite id | config_root_path | branches | note
---------- | ------------ | ---------------- | -------- | ----
7.3 | abxcd | | | CCMI
8.4 | xoxta | | | GLOMAP; + CLASSIC: RJ4.0 ARCHER GA4.0
11.1 | u-be303 (deriv) | fcm:um.x_br/dev/simonwilson/vn11.1_archer2_compile | fcm:um.x_br/dev/jeffcole/vn11.1_archer2_fixes | UKESM AMIP
11.2 | u-be463 (deriv) | fcm:um.x_br/dev/simonwilson/vn11.2_archer2_compile | fcm:um.x_br/dev/jeffcole/vn11.2_archer2_fixes | AMIP
11.2 | u-bz746 (deriv) | fcm:um.x_br/dev/simonwilson/vn11.2_archer2_compile | fcm:um.x_br/dev/jeffcole/vn11.2_archer2_fixes | UKESM coupled
11.5 | u-br938? | fcm:um.x_br/dev/simonwilson/vn11.5_archer2_compile | | GA7.1 N1280 UM11.5 AMIP
11.6 | | fcm:um.x_br/dev/simonwilson/vn11.6_archer2_compile | |

Table 1.


The multiplicity and diversity of Rose/Cylc suites prevent us from providing a single comprehensive guide to the suite modifications necessary for running on ARCHER2. However, the suites referred to in Table 1 should give hints on how to upgrade your suite. The required changes stem from the following differences between ARCHER and ARCHER2:

  • scheduler: ARCHER uses PBS, ARCHER2 uses SLURM
  • architecture: ARCHER has 24 cores per node, ARCHER2 has 128 cores per node
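The change in node size alters node counts for a given task count. As an illustration (task counts here are hypothetical, and node-count syntax should be checked against each machine's documentation), the same 768-task MPI job on each machine:

```shell
# ARCHER:  24 cores per node  ->  768 / 24  = 32 nodes
#PBS -l select=32
# ARCHER2: 128 cores per node ->  768 / 128 = 6 nodes
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=128
```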

Changes to account for SLURM will typically be in the [[directives]] section of tasks in the suite.rc file or in an appropriate site/archer2.rc file (you may need to create one). The example below illustrates common SLURM features. Note: the SLURM directives --partition, --qos, and --reservation combine to provide a more flexible replacement for the PBS -q (queue) directive. Additional partitions will become available with the full ARCHER2 system.

        pre-script = """
             module restore /work/n02/n02/simon/um_modules   <====== to load the environment
             module list 2>&1
             ulimit -s unlimited

            --chdir=/work/n02/n02/<your ARCHER2 user name>   <===== you must set this 
{% if ARCHER2_QUEUE == 'short' %}
{% endif %}
            PLATFORM = cce
            UMDIR = /work/y07/shared/umshared
            batch system = slurm                             <===== specify use of SLURM
            host =                       <====== use ARCHER2
{% if HPC_USER is defined %}
            owner = {{HPC_USER}}
{% endif %}

        inherit = HPC
            ROSE_TASK_N_JOBS = 32

            CONFIG = ncas-ex-cce                             <====== note name of config for ARCHER2

Setting the SLURM options that specify the number of processors requires assigning values to --nodes, --ntasks, --tasks-per-node, and --cpus-per-task. These should be familiar from ARCHER, modulo the precise attribute names. Your suite may use different names for the various parameters (TASKS_RCF, for example), but there should be a simple correspondence.

        inherit = UM_PARALLEL
            --ntasks= {{TASKS_RCF}}
            execution time limit = PT20M

            --ntasks= {{TASKS_ATM}}
            execution time limit = {{MAIN_CLOCK}}
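For example, a non-OpenMP task running 256 MPI tasks might carry directives like the sketch below (illustrative only; the values and the 2-node count, 256 / 128, follow from the 128-core ARCHER2 node):

```cylc
    [[[directives]]]
        --nodes = 2               # 256 tasks / 128 cores per node
        --ntasks = 256
        --tasks-per-node = 128
        --cpus-per-task = 1       # greater than 1 only for OpenMP jobs
```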

Your suite should include a section to specify the flags that will be passed to the job launcher command (for ARCHER that command is aprun; for ARCHER2 it is srun). The flags differ for jobs running with or without OpenMP. Most suites will need some Jinja like this:

{# set up slurm flags for OpenMP/non-OpenMP #}
{% if MAIN_OMPTHR_RCF > 1 %}
 {% set RCF_SLURM_FLAGS= "--hint=nomultithread --distribution=block:block" %}
{% else %}
 {% set RCF_SLURM_FLAGS = "--cpu-bind=cores" %}
{% endif %}
{% if MAIN_OMPTHR_ATM > 1 %}
 {% set ATM_SLURM_FLAGS= "--hint=nomultithread --distribution=block:block" %}
{% else %}
 {% set ATM_SLURM_FLAGS = "--cpu-bind=cores" %}
{% endif %}

Suites frequently contain macros to calculate the number of nodes and cores required; the only change needed is to set the number of NUMA regions per node to 8.
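A node-count calculation along these lines might look like the following sketch (the variable names TASKS and OMPTHR are placeholders, not necessarily those in your suite; the figures of 128 cores and 8 NUMA regions per node are as stated above):

```jinja
{# Illustrative sketch: nodes needed for TASKS MPI tasks x OMPTHR threads #}
{% set CORES_PER_NODE = 128 %}   {# ARCHER2 compute node #}
{% set NUMA_PER_NODE = 8 %}      {# 8 NUMA regions of 16 cores each #}
{% set TOTAL_CORES = TASKS * OMPTHR %}
{% set NODES = (TOTAL_CORES + CORES_PER_NODE - 1) // CORES_PER_NODE %}
```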

Coupled suites

We have adopted the SLURM heterogeneous jobs method of handling coupled suites where the atmosphere, NEMO, and XIOS are separate executables running under a common communicator. The basic SLURM ideas above carry over to heterogeneous jobs but rather than making an overarching job resource request (as is the case for PBS), each component of the coupled job specifies its own requirements.

To tell Cylc that the task uses heterogeneous jobs, set

            batch system = slurm_hetero

For the coupled task (or in its inherited resources), set:

            hetjob_1_--ntasks= {{OCEAN_TASKS}}
            hetjob_2_--ntasks= {{XIOS_TASKS}}

where hetjob_0_ is associated with the atmosphere, hetjob_1_ with the ocean, and hetjob_2_ with the XIOS I/O servers.
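Putting these pieces together, a coupled task's job settings might look like the sketch below (the {{...}} variable names are placeholders, and the assumption that the atmosphere, as hetjob_0_, takes the un-prefixed directives should be checked against a working suite such as u-bz746):

```cylc
        [[[job]]]
            batch system = slurm_hetero
        [[[directives]]]
            --ntasks = {{ATMOS_TASKS}}            # hetjob_0_: atmosphere (assumed un-prefixed)
            hetjob_1_--ntasks = {{OCEAN_TASKS}}   # NEMO
            hetjob_2_--ntasks = {{XIOS_TASKS}}    # XIOS I/O servers
```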

The variables ROSE_LAUNCHER_PREOPTS_UM, ROSE_LAUNCHER_PREOPTS_NEMO, and ROSE_LAUNCHER_PREOPTS_XIOS also need modification to link the resource request to the job launcher command, for example:

            {% if OMPTHR_ATM > 1 %}
              ROSE_LAUNCHER_PREOPTS_UM  = --het-group=0 --hint=nomultithread --distribution=block:block --export=all,OMP_NUM_THREADS={{OMPTHR_ATM}},HYPERTHREADS={{HYPERTHREADS}},OMP_PLACES=cores
            {% else %}
              ROSE_LAUNCHER_PREOPTS_UM  = --het-group=0 --cpu-bind=cores --export=all,OMP_NUM_THREADS={{OMPTHR_ATM}},HYPERTHREADS={{HYPERTHREADS}}
            {% endif %}

where the flag --het-group=0 makes the connection to hetjob_0_.

Post Processing


UM 8.4

Very few changes are required in order to run these jobs:

  • in Model Selection → Input/Output Control and Resources → Time Convention and …
    • set DATADIR (this must be on /work)
  • in Model Selection → User Information and Submit Method → Job submission method
    • select SLURM (Cray EX)
    • Set the number of processors to be a multiple of 128
    • click the SLURM button to specify the Job time limit

Post Processing has not been tested.

UM 7.3


All n02 users will be granted read access to the umshared package account (as was the case on ARCHER). UM data and software are installed at /work/y07/shared/umshared. You may set UMDIR in your .bash_profile, but note that batch jobs cannot see /home and will not source scripts that reside there.
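For example, a minimal setting (the path is as given above; remember that this is only visible to processes on the login nodes):

```shell
# In ~/.bash_profile on the login nodes. Batch jobs cannot see /home,
# so do not rely on this being sourced in compute-node scripts.
export UMDIR=/work/y07/shared/umshared
```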


Last modified on 02/12/20 14:44:40