Version 11 (modified by annette, 3 years ago)


Instructions for running the ShaCoNEMO ORCA1 configuration on ARCHER.


You need to register with the NEMO website to be able to check out the code:

The ORCA1 configuration is held in the ShaCoNEMO repository. You need to register for access, and this uses the same username and password as the main NEMO repository.

To create a branch to hold your code changes on the NEMO repository, you will need additional permissions.


The main NEMO website gives more general instructions on building and running NEMO with the standard configurations included in the code:

The ORCA1_LIM3_PISCES configuration is documented on the ShaCoNEMO (Shared Configuration NEMO) wiki:

  • LIM3 is the sea-ice model
  • PISCES is the biogeochemical model

NEMO uses an external library to handle model output diagnostics called XIOS, which is documented here:

Code versions

Here we have used the following versions, based on the NEMO/ShaCoNEMO pages:

  • XIOS-1.0 r703
  • NEMO_3.6_stable r6969
  • ShaCoNEMO r60
  • ORCA1 input data set v9

Note: Although XIOS-2.0 is now recommended, revision 932 which is suggested for ORCA1 gave a link-time error which was not investigated.

Note: This is based on the latest version of ShaCoNEMO as of October 2016, and the recommended version of NEMO for this. Both are under active development, so you may wish to use newer versions as they become available.

Step 1: Checkout the code

First you need to check out the code from the NEMO repository.

  1. On ARCHER, navigate to the directory that you wish to hold the code. If you are making changes to the standard configuration, you may wish to use the home directory as this is backed up.
  2. To check out the code into a directory called NEMOGCM_3.6_r6969, run the following, replacing mylogin with your NEMO username (you will be prompted for your password):
    svn --username "mylogin" co -r6969 NEMOGCM_3.6_r6969

This creates a subversion "working copy". Running svn info lists the revision that was checked out, and svn diff lists the changes that have been made.

  3. To set up the ORCA1 configuration, go to the configuration directory:
    cd NEMOGCM_3.6_r6969/CONFIG

Then edit cfg.txt to add the line:

  4. Next, check out the ORCA1 configuration using the same username (mylogin) and password as for the NEMO checkout:
    svn --username "mylogin" co

This creates the directory ORCA1_LIM3_PISCES, which is another "working copy".

Step 2: Build the code

To build NEMO, you need to point to the locations of XIOS and the parallel netCDF libraries. A version of XIOS-1 can be found here:


It is advisable to build NEMO with the same modules as were used to build the library. The version above was built with:

  • cce/8.4.1
  • cray-netcdf-hdf5parallel/
  • cray-mpich/7.2.6

(To build your own XIOS, see the notes below.)

To build the NEMO code:

  1. Navigate to the directory holding the build definition files:
    cd NEMOGCM_3.6_r6969/ARCH

Then create a new ARCHER configuration file, arch-XC30_NCAS_ARCHER.fcm, in this directory:


Note: this uses the compile flags from the Met Office GO6 runs.
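The exact contents of the arch file are provided as an attachment. For orientation, NEMO fcm arch files take the following general shape; the paths and flags below are illustrative placeholders, not the actual GO6 settings:

```
%XIOS_HOME           /path/to/xios-1.0            # placeholder: your XIOS build
%NCDF_INC            -I$NETCDF_DIR/include
%NCDF_LIB            -L$NETCDF_DIR/lib -lnetcdff -lnetcdf
%XIOS_INC            -I%XIOS_HOME/inc
%XIOS_LIB            -L%XIOS_HOME/lib -lxios
%CPP                 cpp
%FC                  ftn
%FCFLAGS             -em -s integer32 -s real64 -O2   # placeholder flags
%FFLAGS              %FCFLAGS
%LD                  ftn
%LDFLAGS
%FPPFLAGS            -P -C -traditional
%AR                  ar
%ARFLAGS             rs
%MK                  gmake
%USER_INC            %XIOS_INC %NCDF_INC
%USER_LIB            %XIOS_LIB %NCDF_LIB
```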

  2. Go to the config directory:
    cd ../CONFIG
    and create a batch submission script for the compilation, build_nemo.pbs. Edit the PBS header in the file to set your own project code:
    #PBS -A <xxx>
    This build script sets the ARCHER modules required to build with the XIOS library described above; it also removes the pre-processor flag key_xios2 (so the code builds against XIOS-1) and adds key_nosignedzero.
  3. Then submit the build script:
    qsub build_nemo.pbs
    This creates a new directory under CONFIG/ called O1L3P, which keeps your changes separate from the original configuration. O1L3P will contain the executable plus the namelists to be read in by the model.
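The build script itself is provided as an attachment. As a rough sketch, build_nemo.pbs might contain something like the following (the select line and walltime are placeholders; the module commands assume the versions listed above, and the makenemo options follow the description in the previous step):

```
#!/bin/bash
#PBS -N nemo_build
#PBS -l select=serial=true:ncpus=1
#PBS -l walltime=02:00:00
#PBS -A <xxx>

cd $PBS_O_WORKDIR

# Match the modules used to build the XIOS library
module swap cce cce/8.4.1
module load cray-netcdf-hdf5parallel
module swap cray-mpich cray-mpich/7.2.6

# Build the O1L3P configuration from the ORCA1_LIM3_PISCES reference,
# dropping key_xios2 (to link against XIOS-1) and adding key_nosignedzero
./makenemo -m XC30_NCAS_ARCHER -r ORCA1_LIM3_PISCES -n O1L3P \
           del_key "key_xios2" add_key "key_nosignedzero"
```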

Step 3: Run the model

Once the build has completed successfully, set up the inputs required for the model to run.

  1. Go to the run directory:
    cd NEMOGCM_3.6_r6969/CONFIG/O1L3P/EXP00
    Copy in the file EMPave_old.dat:
    cp ../../ORCA1_LIM3_PISCES/EXP00/EMPave_old.dat . 
  2. In the same directory, copy in the XIOS-1 XML files (by default the configuration is set up for XIOS-2):
    cp -a ../../ORCA1_LIM3_PISCES/EXP00/ForXIOS1/* . 
  3. Then edit the namelist files to make any required changes. For a short test run, edit namelist_cfg and set:
    • nn_itend = 160 : Run length of 5 days.
    • nn_stock = 160 : Restart file write at 5 days.
    • jpni = 8, jpnj = 12, jpnij = 96 : Processor decomposition of 8x12.
  4. Assuming you have been working under your home directory so far, the run directory needs to be copied over to the work filesystem (/work/n02/n02/<username>/), as the ARCHER batch nodes cannot see the home filesystem.

Create a directory to hold your NEMO run, for example:

mkdir -p /work/n02/n02/<username>/nemo/NEMOGCM_3.6_r6969_O1L3P

Then copy over the run directory, resolving any sym-linked files:

cd NEMOGCM_3.6_r6969/CONFIG/O1L3P
rsync -arL EXP00 /work/n02/n02/<username>/nemo/NEMOGCM_3.6_r6969_O1L3P

If you are on the work filesystem already, there is no need to create a new run directory.
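The namelist_cfg settings from step 3 amount to a fragment like the following (in NEMO 3.6, nn_itend and nn_stock sit in the &namrun group and the decomposition in &nammpp; only the changed values are shown):

```
&namrun
   nn_itend   = 160     ! last time step of the run (5 days)
   nn_stock   = 160     ! frequency of restart file writes (5 days)
/
&nammpp
   jpni       = 8       ! number of processors in the i direction
   jpnj       = 12      ! number of processors in the j direction
   jpnij      = 96      ! total number of processors (jpni x jpnj)
/
```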

  5. Download the ORCA1 input data set and untar it. Again, this needs to go on your work space. To avoid having multiple copies of the data, it is recommended to store it outside of your NEMO run directory.

For example:

cd /work/n02/n02/<username>/nemo

Then create sym-links to the data in your NEMO run directory:

cd /work/n02/n02/<username>/nemo/NEMOGCM_3.6_r6969_O1L3P/EXP00
ln -s /work/n02/n02/<username>/nemo/INPUTS_ORCA1_LIM3_PISCES_V9/* . 
  6. Create a parallel batch submission file to submit the run, run_nemo.pbs:
    And submit with:
    qsub run_nemo.pbs
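The run script is provided as an attachment. A minimal sketch of what run_nemo.pbs might look like for the 96-processor decomposition chosen above, assuming XIOS runs in attached mode (the node count and walltime are placeholders):

```
#!/bin/bash
#PBS -N nemo_run
#PBS -l select=4
#PBS -l walltime=01:00:00
#PBS -A <xxx>

cd $PBS_O_WORKDIR

module load cray-netcdf-hdf5parallel

# 96 MPI tasks to match jpni x jpnj = 8 x 12 in namelist_cfg
aprun -n 96 ./opa
```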

Step 4. Results

Output data are written to the run directory (EXP00).

Diagnostic fields are controlled by the XIOS definition file (iodef.xml), and are grouped into files according to their grid. For a 5 day test run, the following files were produced:

Restart files are written one per processor, and depending on how XIOS is set up, there may be multiple files for the diagnostic output too. To concatenate these files, use the rebuild_nemo program (under /work/y07/y07/umshared/bin), with the base filename and the number of files, e.g.:

rebuild_nemo eORCA1L3P_00000160_restart 96
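If there are several per-processor sets in the run directory, a small shell loop can work out the base name and file count for each one. This is just a convenience sketch, not part of the standard tooling; it prints the rebuild_nemo commands rather than running them, so the list can be checked first:

```shell
# For each per-processor set (files named <base>_0000.nc .. <base>_NNNN.nc),
# print the rebuild_nemo command with the base filename and the file count.
for first in *_0000.nc; do
  [ -e "$first" ] || continue               # skip if nothing matches the glob
  base=${first%_0000.nc}
  set -- "${base}"_[0-9][0-9][0-9][0-9].nc  # glob every piece of this set
  echo "rebuild_nemo ${base} $#"
done
```

Piping the output to sh (or pasting the commands) then performs the actual rebuilds.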

To quickly view netCDF files on ARCHER you can use xconv, which is also under /work/y07/y07/umshared/bin.

Log messages (including warnings and errors) are written to ocean.output.

Working practices

These instructions attempt to separate your experimental setup, which is stored (and backed up) under the ARCHER $HOME space, from your results in ARCHER $WORK. We also keep changes related to our run in the O1L3P directory, distinct from the ORCA1_LIM3_PISCES reference configuration. You may wish to run things in a different way, of course.

If you are making code changes, you should create a branch on the NEMO repository and check this out instead. You should then commit changes back to the repository.

The ARCHER work disk is not intended for long term data storage. Files should be copied over to the RDF (/nerc on ARCHER) or to Jasmin.

Building XIOS

Build instructions for XIOS are documented here:

XIOS should compile on ARCHER with the built-in Cray XC30 options, as follows:

module load cray-netcdf-hdf5parallel
./make_xios --arch XC30_Cray

This should be put into a serial batch job, as the compilation can take a very long time (around 12 hours in some cases).
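A serial batch job for this could look like the following sketch (walltime and project code are placeholders):

```
#!/bin/bash
#PBS -N xios_build
#PBS -l select=serial=true:ncpus=1
#PBS -l walltime=12:00:00
#PBS -A <xxx>

cd $PBS_O_WORKDIR

module load cray-netcdf-hdf5parallel
./make_xios --arch XC30_Cray
```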

If using XIOS-2, remember to keep the key_xios2 flag when building NEMO, and do not copy in the ForXIOS1 files.

Attachments (3)
