ARCHER2-specific setup instructions for data transfer to JASMIN

Instructions depend on whether your suite already has the PP Transfer option available and on whether this is your first time using the Transfer app.

To determine whether your suite already has PP Transfer available, search in the rose edit GUI for the variable PPTRANSFER. If the PPTRANSFER variable is found then your suite is already set up with the Transfer app.
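If you prefer the command line, a quick search of the suite's files works too. This is a minimal sketch: the suite path (`~/roses/u-xx000`) is an assumption, so substitute your own suite directory.

```shell
# Sketch: look for the PPTRANSFER switch in a suite's configuration.
# The default path below is an assumption -- point it at your own
# checked-out suite, e.g. ~/roses/<suite-id>.
suite_dir="${1:-$HOME/roses/u-xx000}"
if grep -rqs "PPTRANSFER" "$suite_dir"; then
    echo "PPTRANSFER found: Transfer app already set up"
else
    echo "PPTRANSFER not found: follow Scenario 1"
fi
```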

Determine which of the following scenarios you fall into and follow the instructions listed.

Scenario 1: PP Transfer is not already in the suite and you have never used the Transfer app before.

Scenario 2: PP Transfer is already in the suite.

Add PP Transfer task to a suite

Only follow these instructions if your suite doesn't already have the "PP Transfer" option available.


  1. Add PPTRANSFER=true to the suite's jinja2 variables (i.e. in the rose-suite.conf file).
  1. The PPTRANSFER variable will, by default, appear under "suite conf → jinja2". To tell Rose to place it with all the other suite control switches (e.g. "Build UM" & "Run Reconfiguration"), usually found in a panel in the suite conf section under "Build and Run" or "Tasks", edit the meta/rose-meta.conf file to add metadata for the PPTRANSFER variable. Place it under the definition for POSTPROC. (This step is optional.)
    [jinja2:suite.rc=PPTRANSFER]
    ns=<panel_namespace>
    description=Transfer files archived with PostProc to a remote machine
    title=PP Transfer

Where <panel_namespace> is the same value as for the POSTPROC entry in this file; e.g. ns=Build and Run


Note: Depending on the suite setup you may find the appropriate sections to modify in the site/archer.rc file rather than the suite.rc.

  1. Add the build and run of the pptransfer task into the cylc graph for the initial cycle. Add the line:
    {{ 'fcm_make_pptransfer => fcm_make2_pptransfer' + (' => pptransfer' if RUN else '') if PPTRANSFER else '' }}
    to the cylc graph for the initial cycle, indicated by [[[ R1 ]]]. For example (the insertion is marked with "<== Add line here"):
            [[[ R1 ]]]
                graph = """
    {{ 'fcm_make_pp => fcm_make2_pp' + (' => postproc' if RUN else '') if POSTPROC else '' }}
    {{ 'fcm_make_pptransfer => fcm_make2_pptransfer' + (' => pptransfer' if RUN else '') if PPTRANSFER else '' }}    <== Add line here
    {{ 'fcm_make_ocean => fcm_make2_ocean' + (' => recon' if RECON else ' => coupled' if RUN else '') if BUILD_OCEAN else '' }}
    {{ 'fcm_make_um => fcm_make2_um' + (' => recon' if RECON else ' => coupled' if RUN else '') if BUILD_UM else '' }}
    {{ 'install_ancil => recon ' if RECON else ('install_ancil => coupled' if RUN else '')}}
    {{ 'recon' + (' => coupled' if RUN else '') if RECON else '' }}
    {{ 'clearout' + (' => coupled' if RUN else '') if CLEAROUT else '' }}
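For reference, when PPTRANSFER and RUN are both true the added jinja2 line should render to the following plain dependency in the R1 graph (a sketch of the expanded output, not something you need to type):

```
fcm_make_pptransfer => fcm_make2_pptransfer => pptransfer
```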
  1. Add the pptransfer task into the graph for all subsequent cycles such that it runs after the postproc task and also waits for the previous pptransfer task to complete. As an example, for a coupled suite (all added lines are marked with "<="):
            [[[ {{FMT}} ]]]
                graph = """
    {% if RUN %}
    coupled[-{{FMT}}] => coupled {{ '=> \\' if POSTPROC or HOUSEKEEP else '' }}
      {% if POSTPROC %}
    postproc {{ '=> \\' if PPTRANSFER or HOUSEKEEP else '' }}     <= "PPTRANSFER or" added here
      {% endif %}
      {% if PPTRANSFER %}                                         <=
    pptransfer {{ '=> \\' if HOUSEKEEP else '' }}                 <=
      {% endif %}                                                 <=
      {% if HOUSEKEEP %}
    housekeep
      {% endif %}
      {% if POSTPROC %}
    postproc[-{{FMT}}] => postproc
      {% endif %}
      {% if PPTRANSFER %}                                         <=
    pptransfer[-{{FMT}}] => pptransfer                            <=
      {% endif %}                                                 <=
    {% endif %}

Take care to ensure there is no trailing whitespace at the end of each added line.
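As a cross-check, when RUN, POSTPROC, PPTRANSFER and HOUSEKEEP are all true, the jinja2 above should render to dependencies along these lines (a sketch of the expanded graph, assuming the housekeeping task is called housekeep; {{FMT}} is the suite's cycling period):

```
coupled[-{{FMT}}] => coupled => \
postproc => \
pptransfer => \
housekeep
postproc[-{{FMT}}] => postproc
pptransfer[-{{FMT}}] => pptransfer
```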

  1. In the [[postproc]] task check that pre-script is set as follows:
            inherit = ...
            pre-script = "module restore /work/y07/shared/umshared/modulefiles/postproc/2020.12.11; module list 2>&1; ulimit -s unlimited"
  1. At the end of the file add the pptransfer task definitions:
        [[PPTRANSFER]]
            inherit = HPC_SERIAL
            [[[job]]]
                batch system = background

        [[fcm_make_pptransfer]]
            inherit = None, LINUX_UM, PPTRANSFER_BUILD

        [[fcm_make2_pptransfer]]
            inherit = None, PPTRANSFER, PPTRANSFER_BUILD

        [[pptransfer]]
            inherit = PPTRANSFER
            pre-script = "module load cray-python"
            [[[environment]]]
                CYCLEPERIOD = $( rose date $CYLC_TASK_CYCLE_POINT $CYLC_TASK_CYCLE_POINT --calendar {{CALENDAR}} --offset2 {{FMT}} -f y,m,d,h,M,s )
                PLATFORM = linux

Note: LINUX_UM may be called something different (e.g. EXTRACT_RESOURCE) depending on the suite.


If both [[POSTPROC_RESOURCE]] and [[PPTRANSFER_RESOURCE]] are present in the archer2.rc file they should be as follows:

    [[POSTPROC_RESOURCE]]
        inherit = HPC_SERIAL
        pre-script = """module restore /work/y07/shared/umshared/modulefiles/postproc/2020.12.11
                        module list 2>&1
                        ulimit -s unlimited"""

    [[PPTRANSFER_RESOURCE]]
        inherit = POSTPROC_RESOURCE
        [[[job]]]
            batch system = background

If there is no [[POSTPROC_RESOURCE]] section, check that [[PPTRANSFER_RESOURCE]] looks like this:

    [[PPTRANSFER_RESOURCE]]
        inherit = HPC_SERIAL
        pre-script = """module restore /work/y07/shared/umshared/modulefiles/postproc/2020.12.11
                        module list 2>&1
                        ulimit -s unlimited"""
        [[[job]]]
            batch system = background

Set up ssh-key to connect from ARCHER2 to JASMIN

This section sets up the connection from an ARCHER2 login node to JASMIN.

  1. Log in to ARCHER2. (Note: this needs to be from somewhere other than PUMA.)
  1. Add the following lines to your profile. This will be either ~/.profile or ~/.bash_profile. (Create one if it doesn't already exist):
    # ssh-agent setup
    . ~/.ssh/ssh-setup
  1. Copy the shared ssh-setup script into your ~/.ssh directory:
    $ cp /work/y07/shared/umshared/um-training/ssh-setup ~/.ssh
  1. Copy the ssh-key you use to access JASMIN to ~/.ssh directory (e.g. id_rsa_jasmin).
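ssh will refuse to use a private key that other users can read, so after copying the key make sure its permissions are restricted. A minimal sketch (id_rsa_jasmin is an assumed key name):

```shell
# Restrict the copied JASMIN key to owner-only access; ssh rejects
# private keys readable by group/other. The key name is an
# assumption -- use whatever your key is actually called.
key="$HOME/.ssh/id_rsa_jasmin"
if [ -f "$key" ]; then
    chmod 600 "$key"
fi
```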
  1. Add the following to your ~/.ssh/config file (create one if it doesn't already exist):
    Host <jasmin_transfer_node>
        User <jasmin_username>
        IdentityFile ~/.ssh/<jasmin_key>
        ForwardAgent no

Where <jasmin_transfer_node> is the JASMIN transfer node you will use (e.g. xfer1.jasmin.ac.uk), <jasmin_username> is your JASMIN username and <jasmin_key> is the name of your ssh-key.

Note: in order to use the high performance transfer node you need to have requested access to the High Performance Data Transfer service via the JASMIN accounts portal.

  1. Logout of ARCHER2 and then log back in again to start up your ssh-agent.
  1. Run ssh-add ~/.ssh/<jasmin_key>, where <jasmin_key> is the name of your JASMIN ssh-key (e.g. id_rsa_jasmin); this is the key you generated when you applied for access to JASMIN. Type in your passphrase when prompted.
  1. You should now be able to log in to the required JASMIN transfer node (either xfer[1-2] or the high performance node) without being prompted for a passphrase/password.
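To verify, you can check that the key is loaded and then try a connection. The hostname below is an assumption taken from the xfer[1-2] naming above; substitute your own username and the transfer node you requested.

```
$ ssh-add -l                                  # should list your JASMIN key
$ ssh <jasmin_username>@xfer1.jasmin.ac.uk    # should log in with no password prompt
```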
Last modified on 15/01/21 17:12:22