Opened 11 years ago

Closed 8 years ago

#442 closed help (fixed)

Help required with setting up a user STASH master file and ancillary

Reported by: jcrook Owned by: jeff
Component: UM Model Keywords:
Cc: Platform: <select platform>
UM Version: 4.5



I want to use a defined set of cloud fields in the radiation code of UM4.5 and have been told that using user ancillaries is the best way to do this. I understand I need to create an ancillary file and a user STASH master file and I need to create a mod for the radiation code to access this data in the D1 array. I know how to add this user STASH master file and the ancillary files to my job.
I want to create a user ancillary file for single level data and another for multi level data. In both cases I want to update the data on a 3 hourly timestep. I have copied an existing STASH master file but I don't know how to set up all the fields; C4 didn't help completely, and no one here in Leeds has done this before, so I can't copy someone else's file and be sure the fields are ok.

I know what model, sectn, item and name are, although I don't know which section to add them to (1 or 2?). Should Space be 0? I think Point can be set to 0. I need access in both SW and LW radiation timesteps, so should Time be 0? Both my multi level and single level data are on the grid that cloud fields come out on. Is this the theta grid, i.e. grid type = 1? I think I know what to set LevelT to (1 for my multi level data and 5 for my single level data).

I have read that single level user ancillaries only work on land points. Is this because the D1 array only stores data on land points? Is there a way round this? If I set grid type to 1 and LevelT to 5, why does this not mean all points are stored? Should LevelF and LevelL be set to 1 and 2 respectively for the multi level fields? I presume the Pseud fields should be 0? Should LevCom be 0?

My option code is currently set to all 0 - I'm only going to use these ancillaries in a copy of the job from which I got the cloud fields to make the ancillaries. The version codes for the file I copied are all 0 except digit 18, which is 1. Should I leave this the same? The DumpP code is 2; should I leave this the same? Should I set all PCn fields to -99? What are PPFC, LBVC, RBLEVV, CFLL and CFFF?
I also need to create ancillary files to contain these fields. Can I use xancil for this? The documentation on this only seems to cover the standard ancillary files not a user one. I have my data in netcdf format.

Change History (44)

comment:1 in reply to: ↑ description Changed 11 years ago by jeff

  • Owner changed from um_support to jeff
  • Status changed from new to accepted


User ancillaries have to use specific STASH numbers, i.e. section 0, items 301-320 for single level fields and items 321-340 for multi level fields. Here is an example file with fields I have used in the past (1 single level, 1 multi level):

#|Model |Sectn | Item |Name                                |
#|Space |Point | Time | Grid |LevelT|LevelF|LevelL|PseudT|PseudF|PseudL|LevCom|
#| Option Codes         | Version Mask         |
#|DataT |DumpP | PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8  PC9  PCA |
1|    1 |    0 |  301 |AMIPII TRACER A (EQ) SOURCE         |
2|    2 |    0 |    1 |    1 |    5 |   -1 |   -1 |    0 |    0 |    0 |    0 |
3| 00000000000000001000 | 00000000000000000001 |
4|    1 |    0 | -99  -99  -99  -99  -99  -99  -99  -99  -99  -99 |
5|    0 |  501 |    0 |  129 |    0 |    0 |    0 |    0 |    0 |
1|    1 |    0 |  321 |SPECIFIC HUMIDITY CLIMATOLOGY       |
2|    2 |    0 |    1 |    1 |    1 |    1 |    2 |    0 |    0 |    0 |    0 |
3| 00000000000000001000 | 00000000000000000001 |
4|    1 |    2 | -24  -24  -24  -24  -99   30  -99  -99  -99  -99 |
5|    0 |   95 |    0 |    9 |    0 |    0 |    0 |    0 |   13 |
1|   -1 |   -1 |   -1 |END OF FILE MARK                    |
2|    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |
3| 00000000000000000000 | 00000000000000000000 |
4|    0 |    0 | -99  -99  -99  -99  -30  -99  -99  -99  -99  -99 |
5|    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |    0 |

Space should be 2 and Point 0. Time should be 1. I'm not sure what grid cloud fields are on, but grid type = 1 sounds likely. By default single level user ancillary files are assumed to be on land points only, but this can be changed with this mod:

*/  single level user ancillaries not just on land points
*I GRB0F304.161
     &  OR.(FIELD.GE.48.AND.FIELD.LT.68). !single-level user ancillaries

LevelF and LevelL should be 1 and 2 for multi level fields and -1 for single level fields, as in the example above. The Pseud and LevCom fields should all be 0. Your option code should be fine as all zeros. For the version code I would copy what I have in the example file, i.e. all 0 except digit 20, which is 1. DumpP controls whether the field is 32 bit packed in the output dump, if selected in the umui: 0 means the field is never packed, 2 means the field can be packed. PCn controls the packing of pp output, and -99 means no packing, which is the safest option. PPFC is the field code, which you can set to something sensible if you like (it comes from the field codes table); otherwise set it to 0. The other codes aren't really important, so just set them the same as in the example. The LBVC value is also taken from the field codes table.
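The per-record advice above can be collected into a small checker. This is an illustrative Python sketch, not part of the UM; the function and argument names are ours, and it encodes only the recommendations in this comment (items 301-340 in section 0, Space=2, Point=0, Time=1, LevelF/LevelL of 1,2 or -1,-1, PCn=-99):

```python
# Sanity-check the key user STASHmaster codes discussed above.
# Items 301-320 are single level, 321-340 multi level (section 0).
def check_user_ancil(item, space, point, time, levelf, levell, pcn):
    """Return a list of problems for one user-ancillary record."""
    problems = []
    multi = 321 <= item <= 340
    if not (301 <= item <= 340):
        problems.append("user ancillaries must use section 0 items 301-340")
    if space != 2:
        problems.append("Space should be 2")
    if point != 0:
        problems.append("Point should be 0")
    if time != 1:
        problems.append("Time should be 1")
    want = (1, 2) if multi else (-1, -1)
    if (levelf, levell) != want:
        problems.append("LevelF/LevelL should be %s/%s" % want)
    if any(pc != -99 for pc in pcn):
        problems.append("PCn = -99 (no pp packing) is safest")
    return problems

# The single level example record above (item 301) passes cleanly:
print(check_user_ancil(301, 2, 0, 1, -1, -1, [-99] * 10))  # []
```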

What version of xancil are you using? The latest version can create user ancillary files; see the xancil documentation, which is also where you can download the latest version.

I hope this has answered all your questions.


comment:2 Changed 11 years ago by jcrook

Thanks very much. I believe that my user STASH master file is now ok and I am looking at Xancil now. I hadn't started using Xancil; I had just found UM document 073 on the NCAS-CMS website, but I think it is very out of date. I am going to run Xancil on Hector - it's already installed. The description of Xancil on the website seems quite clear. I have each parameter in a separate netcdf file, so I will need to read them all in. I presume I have to specify them in the order I put them in my user STASH master file - otherwise how else does it know which parameter is which?

comment:3 Changed 11 years ago by jcrook

I'm having some problems running xancil on Hector - it just seems to hang. I'm on the General Configuration page and clicked Add to add some netcdf files, and it has just been sitting there not responding for some time now. I connect to Hector through Exceed and PuTTY from my PC.

comment:4 Changed 11 years ago by jeff


In xancil you have to specify the STASH code for each field in the ancillary file.

Xancil is working fine on Hector from here; can you try again?


comment:5 Changed 11 years ago by jcrook

Sorry I've been on holiday so couldn't try xancil again until now, but I still have the same problem this morning.

comment:6 Changed 11 years ago by jeff


Xancil on Hector still works okay here, so it must be something to do with using PuTTY/Exceed. I've never used this combination, so I'm not sure what the problem could be.

Could you try running /home/n02/n02/jwc/bin/test.tcl on hector. A small button should appear, click on this and a file select window should come up, this is what should happen with xancil. Does this work or hang as well?


comment:7 Changed 11 years ago by jcrook

Your test.tcl program works fine but xancil doesn't.

comment:8 Changed 11 years ago by jcrook

Our IT support people have suggested I try Xming instead of Exceed and I have now got the file list to come up in xancil and have been able to progress. I have requested that it use the dates and vertical levels from the netcdf files. My next problem is that when I try to create the ancillary files it complains that hybrid_p_x1000 doesn't contain the standard_name attribute needed to calculate vertical levels. I have entered hybrid_p_x1000 as the dimension name to specify vertical levels but this doesn't help. Do I need some extra attribute in my netcdf files and if so what?

comment:9 Changed 11 years ago by jeff

It's a bit strange that Exceed doesn't work, but I'm glad you've got around that problem now.

The way to specify the vertical levels is explained in the xancil documentation under Configuration panels.

comment:10 Changed 11 years ago by jcrook

I have added an attribute called standard_name to the vertical dimension variable in my netcdf files and set it to "atmosphere_hybrid_sigma_pressure_coordinate". I believe I also need an attribute called formula_terms, but I don't understand what I'm supposed to set this to. The documentation gives an example, formula_terms = "a: var1 b: var2 ps: var3 p0: var4", but I don't have these variables in my netcdf file and don't know where to get them from.
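For context, formula_terms only names which netcdf variables hold each term of the CF hybrid sigma-pressure formula, p(k) = a(k)*p0 + b(k)*ps. A minimal sketch of the formula itself, with entirely made-up coefficient values (not the ticket's actual data):

```python
# CF "atmosphere_hybrid_sigma_pressure_coordinate":
#   p(k) = a(k) * p0 + b(k) * ps
# formula_terms = "a: var1 b: var2 ps: var3 p0: var4" just names the
# netcdf variables holding each term. All values below are made up.
a = [0.0, 0.05, 0.10]   # hybrid "a" coefficients per level
b = [1.0, 0.90, 0.75]   # hybrid "b" (sigma) coefficients per level
p0 = 100000.0           # reference pressure, Pa
ps = 98000.0            # surface pressure for one column, Pa

p = [ak * p0 + bk * ps for ak, bk in zip(a, b)]
print(p)  # pressures fall with height, roughly [98000, 93200, 83500] Pa
```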

comment:11 Changed 11 years ago by jeff

For multi-level fields xancil needs to know the eta values; this is what is referred to in the formula_terms attribute. If your netcdf file doesn't already have these values then it is not worth adding them; in this case you need to use the namelist method for specifying the values, as per the documentation.


comment:12 Changed 11 years ago by jcrook

I have managed to get the ancillary files now. I am not sure what the next step is. I have tried a reconfig only and a compile and stop; I don't think either worked, but I don't know why - I don't understand the output in the .leave files. I've asked some local people to look, and they say they can't read my files in /home/n02/n02/jcrook, but I did a chmod a+r on everything.

comment:13 Changed 11 years ago by jeff

I can see your files okay, which ones should I be looking at?


comment:14 Changed 11 years ago by jcrook

The job is xesaf. My .leave files are in /home/n02/n02/jcrook/um/umui_out and my job files are in /home/n02/n02/jcrook/work/um/<jobname>.

I have tried a few things now so I am forgetting which .leave file goes with what but xesaf*.d10196.t181147.leave was a reconfig only.

Someone here then suggested I should do a compile first so I set the job back to doing compile and stop but it seemed to do a reconfig again because it produced another .leave file (xesaf*.d10197.t134723) and not a .comp.leave file. I am currently running a compile and run. I expect the run will fail but I'd like the compile to work.

comment:15 Changed 11 years ago by jeff

It looks like your compile job has worked (xesaf000.xesaf.d10197.t142810.comp.leave).


comment:16 Changed 11 years ago by jcrook

I've now done a reconfig and the .leave file says
FIXHD(2) not set correctly; value=72057594037927936

So it looks like I need to set some header size. I am adding parameters to the D1 array through my user ancillaries so this makes sense but I don't know where in the UMUI to set this and what to set it to.

comment:17 Changed 11 years ago by jeff

  • Cc l.steenman-clark@… added

I don't have permission to read your ancillary files in /work, if you run this command I should be able to

chmod -R g+rX /work/n02/n02/jcrook

When you created your ancillary files in xancil did you select big endian ancillary files in the General Configuration panel? Your ancillary files need to be big endian to be read by the UM on Hector.

I'm away for the next 2 weeks so I've CCed Lois into this query so she can help you further.


comment:18 Changed 11 years ago by jcrook

I left the default of Big Endian selected. I believe all my directories/files should now be readable by the group.

comment:19 Changed 11 years ago by lois

  • Cc l.steenman-clark@… removed
  • Owner changed from jeff to lois
  • Status changed from accepted to assigned

I have had a quick look at your ancillaries Julia and they look 'odd'. I need to investigate further and follow the email conversation you have been having with Jeff. It may take me a bit of time to get up to speed. So apologies in advance for a delay in replying.


comment:20 Changed 11 years ago by lois

Sorry for the delay Julia. A few questions while I find my way through what you are trying to do.

  • have you used the start dump tcofaa@dam00c1 before?
  • where did this start dump come from?
  • has anyone used this start dump on HECToR before?


comment:21 Changed 11 years ago by lois

Hello Julia,

I can reproduce your problems and I have got a bit further by changing the endianness of your start dump tcofaa@dam00c1

But now I can see that there are some real problems with your ancillary files - not quite sure what yet.


comment:22 Changed 11 years ago by jcrook

Having trouble adding comments…

I originally created xesac by copying the job of a colleague in Leeds and left the startdump as it was. I ran this job fine. I also copied this job to xesad and changed the CO2 to go up by 1% per year and ran this ok from the same startdump. I kept some of the dump files produced by xesac in work/um/startdumps in case I wanted to do multiple 1% CO2 runs from different starting points. Don't know if they will be of any help to you. I then copied xesac and xesad to xesaf and xesag respectively and modified these to use my user ancillaries and modification to use the ancillary data in the radiation code.

comment:23 Changed 11 years ago by jcrook

Hi Lois, have you got any further with the ancillary files?

comment:24 Changed 11 years ago by lois

Hello Julia,

I wish I had, although I can get it to run further than you had, I haven't got to the bottom of the problem. I really need to spend more time delving into the details but HECToR is down this afternoon and I am out of the office Thursday and Friday. So I will talk to Jeff on Monday, when he is back from holiday, and see if we can together find a solution to your query.

Sorry for such a negative answer.


comment:25 Changed 11 years ago by jeff

  • Owner changed from lois to jeff

Hi Julia

I think I've found your problem. In the umui panel

Sub-Model Independent → Compilation and Modifications → Modifications for the reconfiguration

you have mods port_end_f.mod and port_end_c.mod turned off, these need to be on for running on hector. After you make this change you will need to recompile the reconfiguration.


comment:26 Changed 11 years ago by jcrook

These were switched off in the job I copied and I ran xesac and xesad with these switched off but I didn't need to do a reconfiguration. Are these just used in the reconfiguration and not required for actually running the model? When running with my ancillaries I want everything to be the same as for my previous runs as I need to compare the results.

Lois seemed to think my ancillaries were not right. Is this true?

comment:27 Changed 11 years ago by jcrook

I switched these mods back on and ran the reconfiguration, but I'm not sure what is supposed to happen or whether it has worked. Should I get a new dump file somewhere? I don't know if I have set up the job properly for reconfiguration. What settings should I have for compile and run and the atmosphere configuration stuff under start dump?

comment:28 Changed 11 years ago by jeff

These mods are needed in both the model and the reconfiguration, your jobs had them switched off for the reconfiguration but switched on for the model.

I think the ancillaries are okay, just without the mods they couldn't be read correctly.

If you want your run to be the same as your previous ones, then when you run the reconfiguration you need to make sure any ancillary fields set to be configured are changed to be not used. That way the new dump file will not have fields taken from the ancillary files but keep the same fields as the original dump file. I had a quick look at your job and I think everything is set to not used, where needed.

Your job seems to have worked correctly, it has produced a new dump file


The start dump setup looks correct. This job was set to be reconfiguration only so it didn't compile/run the model. You need to turn off reconfiguration only and set the model and the reconfiguration to "run from the existing executable, as named below". You don't need to run the reconfiguration again, go to the start dump panel and replace




and then switch off "Using the reconfiguration".


comment:29 Changed 11 years ago by jcrook

I changed the atmosphere start dump file and switched off the reconfiguration in the UMUI for xesaf as you said above and tried to run it but it still thinks I'm trying to reconfigure it. The SUBMIT file on PUMA still has STEP=99 for reconfiguration only. I've tried saving, processing and submitting a couple of times but it makes no difference.

I have also done the reconfig for job xesag which now works then set that to run. In this case it does try to run the job but then says there is a mismatch in the ocean data time. I did not do a reconfig for the ocean as I have not changed anything there.

comment:30 Changed 11 years ago by jeff

In umui panel

Sub-Model Independent → General Configuration and Control

Turn off "Perform the reconfiguration step only"

I think the other problem is caused by the settings you used in the reconfiguration. Turn the reconfiguration back on, using your original dump file, and switch off these settings:

Override year in dump with year in model basis time
Resetting data time to verification time

Hopefully things should work then.


comment:31 Changed 11 years ago by jcrook

Hi Jeff I've reconfigured as you suggested and started the xesaf job running again. This time it stops with error:

Model completed with the following :

Error Code : 14 Message : INANCILA: Insufficient space for LOOKUP headers

Any ideas?

comment:32 Changed 11 years ago by jeff

In umui panel

Atmosphere → Ancillary and input data files → In file related options → Header record sizes

This number has to be larger than the total number of fields in the ancillary files which are updated. You have 3 updated ancillary files - ozone, the single level user ancil and the multi level user ancil - and the total number of fields in these files is 175908, so this number needs to be larger than that. The problem is that the maximum allowable value is 50000; I can change this though.

Your user ancil files are every 3 hours - is this actually needed? Most runs have the ancillary data as monthly means, and the UM interpolates that onto a 3 hour (in your case) updating period. Also, your ancillary files are valid from 1849/12/01:00.00 to 1850/11/30:21.00, but your model run is from 2200/12/01 for a year, so they do not cover this period.

Your ancillary files also need to extend at least one extra field either side of your model run period.
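A rough sketch of where a total like this comes from, for reference. All the field counts below are made up for illustration; they are not the actual numbers from this job:

```python
# Rough sketch of the lookup-header count for updated ancillary files.
# The umui "Header record sizes" value must exceed the total number of
# fields (lookup headers) across all updated ancillary files.
# Counts below are illustrative, not this job's actual numbers.
times_per_year = 365 * 8            # one year of 3-hourly fields
single_level_vars = 4               # hypothetical single level variables
multi_level_vars = 3                # hypothetical multi level variables
levels = 19                         # UM4.5 atmosphere model levels

total_headers = (single_level_vars * times_per_year
                 + multi_level_vars * levels * times_per_year)
print(total_headers)  # 178120 - far over the old 50000 umui limit
```

Three-hourly yearly files multiply up quickly: each multi level variable alone contributes levels x 2920 headers, which is why monthly-mean ancillaries are the usual choice.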


comment:33 Changed 11 years ago by jcrook

The ozone file was there in the job I copied, so I don't know anything about it. My ancillaries contain cloud data, and I felt that clouds change more often than on a monthly basis. What I actually want to do is provide an annual set of cloud data at 3 hourly resolution and have this repeated for every year. I didn't want to create a netcdf file with the data repeated for every year because this would be huge. I put the dates in the netcdf file used to produce the ancillary file to match the dates specified in Sub-model Independent → Ancillary reference time; I thought this was required. I could produce 6 hourly, or maybe even daily, cloud fields, but I certainly don't want to go to monthly.

comment:34 Changed 11 years ago by jeff

Sorry, I didn't realise the files were periodic - forget what I said about the dates, your files are fine. For yearly periodic files the actual value of the year doesn't matter.

I've increased the maximum value of the updated ancillary headers to be 250000 so you should be able to increase that now. You have to close and reopen your umui job to pick up this change.


comment:35 Changed 11 years ago by jcrook

The model runs now, but after a while I get NaN for several diagnostics, and I noticed the .leave file says

FIELDCODE values reset to zeroes
INANCILA: update requested for item 105 STASHcode 336 but prognostic address not set

for various STASH codes. Am I missing something else in my setup?

comment:36 Changed 11 years ago by jeff

Which output file has this error message?


comment:37 Changed 11 years ago by jcrook

The .leave file is xesaf*.d10221.t120035.leave

I don't think it is complaining about the ancillary fields I have added but about some old ones, so maybe I have somehow messed up the old ones.

comment:38 Changed 11 years ago by jeff

I don't think these error messages are a problem. Your run is only updating your user ancillaries every day, not every 3 hours; I think this may be a limitation of the way this bit of the UM code is written, and I'm not sure how hard it would be to get around. Would this problem be likely to cause the NaNs? If not, you need to check the dump files to make sure your new fields look correct and that your mods are working as expected. If you want the model to abort when the first NaN appears, instead of carrying on running, then add these mods and recompile:

$MODS/general/fptrap_f.mod (Fortran mods)
$MODS/general/fptrap_c.mod (c mods)

also this script mod will enable core dumps


Have you tried running the same model but without your changes? Did this work?


comment:39 Changed 11 years ago by jcrook

I don't think updating only every day would cause the NaNs. I tried running the job last night with my mods switched off, and the model ran for 4 years 3 months with no NaNs. I looked at one of my single level cloud fields and it looked ok; I don't know about the multi level ones. I have changed my mod to only use the single level fields and will see if I get NaNs with that. I will also look at the multi level fields to see if they look reasonable.

comment:40 Changed 11 years ago by jcrook

The model didn't run last night. It failed complaining about the spectral file which is not something I have changed. See xesaf*.d10223.t111240.leave. I am going to submit again today to see if it will run today.

comment:41 Changed 11 years ago by jcrook

I think the last problem must have been due to me changing my mods incorrectly. I have now separated my mods into a layer cloud mod and a convective cloud mod, so I can switch them on independently without altering the code. Since then I have run the model with each of these mods separately, but both cause the temperature to go to NaN.

I looked at the ancillary data for my convective cloud top and base and it had some strange numbers in it, which made me reassess what should be in these fields. I still don't really know, as no one at the Met Office seems to know and you can't get them out of the UM as diagnostics. I have regenerated my ancillaries assuming model levels are 1-19 with 0 meaning no cloud, rather than 0-18 with "missing value" meaning no cloud. I also took the opportunity to make my ancillaries 6 hourly.

I have since reconfigured the model with the new ancillary and have run it for 4 years 3 months (i.e. one day of running) with just the convective cloud mods on, and my temperature is still sensible, so maybe it is working. However when I look at the dump files which do contain my user ancillary fields, the cloud base and top fields (303 and 304) don't look right at all. The 1st is mostly 1 and the 2nd mostly 19. So I don't know what to do. Should I carry on running the model and just look at the temperature, to see if I am getting something similar to my previous run xesac (which is exactly the same apart from these fixed cloud mods), or should I worry about the dump file contents of cloud base and top? I don't know what to look for in the xesaf*.d10225.t175623.leave file to give me any clues as to what is happening.

comment:42 Changed 11 years ago by jeff

I had a look at your dumps in /work/n02/n02/jcrook/um/xesaf/datam and the 303, 304 fields seem to be all zero to me, which doesn't seem right. For some reason the dates on these files are 1st Jan 1980, which is very odd. The values of your user ancillary fields in the dump files should be the interpolated values from the ancillary files, so something is going wrong. Can you confirm which files you mean when you say

However when I look at the dump files which do contain my user ancillary fields, the cloud base and top fields (303 and 304) don't look right at all. The 1st is mostly 1 and the 2nd mostly 19.


comment:43 Changed 11 years ago by jcrook

Sorry, I decided to try something else this morning: I deleted all the old files and started the model again with my layer cloud mods on and my convective cloud mods off. Before I did so, I converted the convective cloud amount, conv cloud base and conv cloud top, and my equivalent ancillaries 301, 303 and 304, that were in the dump files to a netcdf file; if you look in that netcdf file you will see what those values were. The fact that they are now different when the mod for convective clouds is off prompted me to search through the code, and I have found that one of the subroutines called by RAD_CTL does have the effect of modifying the cloud top and cloud base, despite the fact that the comments say these are input variables. So they would look different depending on whether I am using the 303 and 304 ancillary fields, but I would expect, in the case with convective cloud mods off (as the dumps are now), the values to be as they were read from the ancillary file. Maybe I should just not use these 303 and 304 ancillary fields at all.

In the case of layer cloud mods on, my temperature is going to NaN and I need to find out why.

comment:44 Changed 8 years ago by jeff

  • Platform set to <select platform>
  • Resolution set to fixed
  • Status changed from assigned to closed