Opened 4 years ago

Closed 4 years ago

#1792 closed help (fixed)

Easiest way on ARCHER to create a file with a mean of a set of pp files

Reported by: s1251469
Owned by: um_support
Component: Data
Keywords:
Cc:
Platform: ARCHER
UM Version: <select version>

Description

Hello,

This is a pretty basic technical question. A couple of my runs didn't produce decadal means but I have all the pp files of the yearly means. What is the simplest way to average these into a single file (pp or netcdf)?

I could produce individual netCDF files, download them to my own computer and process them there, but that would take up a lot of space. I tried a few things with cfa and cdo but couldn't get them to work correctly for various reasons; in the cdo case, a memory limit was the issue.

Any advice?

Thanks,
Declan

Attachments (1)

ARCHERpython_err.txt (3.0 KB) - added by s1251469 4 years ago.
Memory error in Python meaning step on ARCHER


Change History (7)

comment:1 Changed 4 years ago by charles

Hello Declan,

You should be able to use cf-python.

Run ipython and then type 'import cf' to load it. Read your files into a field by doing 'f = cf.read('filename*.pp')'. You can inspect the result by typing 'f' followed by enter, or 'print f' for more details. If your files are not aggregated into a single field, it may help to do 'f = cf.read('filename*.pp', aggregate={'relaxed_identities': True})' instead.

Once your files are in one cf-python field you can take the mean by doing 'g = f.collapse('T: mean', weights='T')' and inspect g.

To write g to disk do 'cf.write(g, 'filename.nc')'.
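
Putting those steps together, here is a minimal sketch of the whole session (the 'filename*.pp' pattern and the 'decadal_mean.nc' output name are placeholders for your own file names, and it assumes the yearly means aggregate into a single field):

import cf

# Read all the yearly-mean pp files into one field; if they do not
# aggregate cleanly, add aggregate={'relaxed_identities': True}
f = cf.read('filename*.pp')
print(f)

# Collapse along the time axis; each year is weighted by its time interval
g = f.collapse('T: mean', weights='T')

# Write the decadal mean to a single netCDF file
cf.write(g, 'decadal_mean.nc')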

Hope this helps.

Best regards,

Charles

comment:2 Changed 4 years ago by charles

Hello again,

In case you don't already know, it has been pointed out to me that to set up cf-python on ARCHER you must first run the following two commands:

module swap PrgEnv-cray PrgEnv-intel
module load anaconda cf

Also, the 'weights='T' ' in collapse is not strictly necessary in this case, as cf-python will apply time weighting by default.
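
So the collapse step in the sketch above could equally be written as (same field f as before):

g = f.collapse('T: mean')  # time weighting is applied by default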

Best regards,

Charles

Changed 4 years ago by s1251469

Memory error in Python meaning step on ARCHER

comment:3 Changed 4 years ago by s1251469

Hello Charles,

Thank you for your advice. It does seem to be a good way to go about it.

However, I hit a "memory error" at the meaning stage (full printout in the attachment above). The pp files are read into f without problems; there are 10 of them, each about 1 GB. Do you have any idea why that would be?

Regards,
Declan

comment:4 Changed 4 years ago by grenville

Declan

It's best to run this on a post-processing machine, so from the ARCHER login node try

ssh espp1

There's lots of memory on that machine.

Grenville

comment:5 Changed 4 years ago by s1251469

That has worked.

Thanks all.

comment:6 Changed 4 years ago by grenville

  • Resolution set to fixed
  • Status changed from new to closed