Opened 4 months ago

Closed 2 months ago

#2671 closed help (fixed)

line information unavailable error code: 10 (Unable to get function name)

Reported by: amenon
Owned by: um_support
Priority: normal
Component: UM Model
Keywords: unable to get function name
Cc:
Platform: Monsoon2
UM Version: 10.9

Description

Dear CMS,

I am running an ensemble nesting suite on MONSooN that fails at the LAM forecast step for all ensemble members. It doesn't give an error message, but shows exceptions like the following:

[533] exceptions: [backtrace]: (  8) : line information unavailable error code: 10 (Unable to get function name)
[533] exceptions: [backtrace]: (  9) : Address: [0x200085c8] 
[536] exceptions: [backtrace]: (  8) : um_shell_ in file /home/d04/arame/cylc-run/u-bb030/share/fcm_make/preprocess-atmos/src/um/src/control/top_level/um_shell.F90 line 510
[539] exceptions: [backtrace]: (  8) : um_shell_ in file /home/d04/arame/cylc-run/u-bb030/share/fcm_make/preprocess-atmos/src/um/src/control/top_level/um_shell.F90 line 510
[536] exceptions: [backtrace]: (  9) : Address: [0x200085c8] 
[538] exceptions: [backtrace]: (  8) : um_shell_ in file /home/d04/arame/cylc-run/u-bb030/share/fcm_make/preprocess-atmos/src/um/src/control/top_level/um_shell.F90 line 510
[532] exceptions: [backtrace]: (  8) : um_shell_ in file /home/d04/arame/cylc-run/u-bb030/share/fcm_make/preprocess-atmos/src/um/src/control/top_level/um_shell.F90 line 510
[539] exceptions: [backtrace]: (  9) : Address: [0x200085c8] 
[532] exceptions: [backtrace]: (  9) : Address: [0x200085c8] 
[538] exceptions: [backtrace]: (  9) : Address: [0x200085c8] 
ptions: run addr2line --exe=</path/too/executable> <address>
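(For reference, the addr2line check suggested in the log would look roughly like this; the executable path is an assumption about where this suite's fcm_make build puts the atmos binary:)

addr2line --exe=$HOME/cylc-run/u-bb030/share/fcm_make/build-atmos/bin/um-atmos.exe 0x200085c8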

Could you please help me out with this? Many thanks.

Arathy

Attachments (1)

install_cold_ss.png (47.1 KB) - added by amenon 4 months ago.


Change History (32)

comment:1 Changed 4 months ago by grenville

Arathy

The problem is a divide by zero here:

/home/d04/arame/cylc-run/u-bb030/share/fcm_make/preprocess-atmos/src/um/src/atmosphere/radiation_control/set_rad_steps_mod.F90 line 148

The job has the ns option which sets i_sw_radstep_perday_prog=$RADSTEP_PROG

but in the job file

RADSTEP_PROG="0"

RADSTEP_PROG is calculated in suite-runtime-lams.rc — I don't know what it is supposed to do.

Can you check that suite-runtime-lams.rc is being included in the suite.rc file?

I suggest you run an ensemble of size 1 until the setup is correct.
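A couple of ways to check (a hedged sketch; the cycle and task names are placeholders, and the --jinja2 option is assumed to be available in this cylc version's view command):

cylc view --jinja2 u-bb030
    # shows the Jinja2-processed suite.rc, so you can confirm suite-runtime-lams.rc is really being included
grep RADSTEP_PROG ~/cylc-run/u-bb030/log/job/<cycle>/<lam_forecast_task>/NN/job
    # shows the value RADSTEP_PROG actually ends up with in the generated job script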

Grenville

comment:2 Changed 4 months ago by amenon

Thanks Grenville. The suite.rc has suite-runtime-lams.rc included as follows:

{% if DRV_MOD["nregns"] > 0 %}
    {% include "suite-runtime-lams.rc" %}
{% endif %}

Earlier, I ran the same suite with two ensemble members for two cycles as a test and it succeeded. But that run had radiation timesteps of 15 (prognostic) and 5 (diagnostic), as shown in this worked example: https://code.metoffice.gov.uk/trac/rmed/attachment/wiki/suites/ensemble/worked_eg_2017/regn1_resln1_mod1_T1.png

When I started this simulation with 10 ensemble members, I changed the radiation timesteps to 1800 (prognostic) and 600 (diagnostic), because the associated help text suggests setting the diagnostic radiation timestep to 5 times the model timestep (120 s in this case) and the prognostic one to 4 to 5 times the diagnostic radiation timestep.

As this error seems to come from the radiation timestep settings, I will change them back to 15 x 5 and see how it goes.

Regards,
Arathy

comment:3 Changed 4 months ago by grenville

The radiation timesteps are specified in units of the model timestep, not seconds.
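For example, with the 120 s model timestep mentioned above, the seconds-based intervals you intended work out as:

600 s  / 120 s = 5  model timesteps (diagnostic)
1800 s / 120 s = 15 model timesteps (prognostic)

so the original settings of 15 (prognostic) and 5 (diagnostic) already give the 1800 s and 600 s intervals.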

comment:4 Changed 4 months ago by amenon

Ah! I didn't pay much attention to that. In all the other suites I have run (regular nesting suites, not ensembles), the radiation timestep was in seconds.

I tried the 15 x 5 radiation timestep, but the suite still failed with the same 'divide by zero' error.

Regards,
Arathy


comment:5 Changed 4 months ago by grenville

But you said this worked in a test case?

comment:6 Changed 4 months ago by amenon

Yes, this worked for two ensemble members (with the 15 x 5 radiation timestep setup). It was the same suite, run as a test. For the real case study, I increased the number of ensemble members and then started the same suite from scratch using rose suite-run --new

comment:7 Changed 4 months ago by amenon

Hi Grenville,

I tried rerunning the suite with two ensemble members for a single day (to recreate the test run that succeeded once). Now the LAM forecast fails with the following error:

???!!!???!!!???!!!???!!!???!!!       ERROR        ???!!!???!!!???!!!???!!!???!!!
?  Error code: 1
?  Error from routine: c_io_rbuffering_change_mode
?  Error message: Failure to mode change in another layer means read buffer cannot be made consistent.
?  Error from processor: 0
?  Error number: 24

I found this ticket http://cms.ncas.ac.uk/ticket/2650, but I don't know how to get past this error.

Regards,
Arathy

comment:8 Changed 4 months ago by grenville

Arathy

I took a copy of u-bb030, but it fails in install_engl_startdata for me because I don't have MASS access. I am also a bit confused about the cycle dates: my copy starts at 20150323T0000Z, but your logs show 20160701T0000Z as the first cycle (and the suite I copied requested 18 ensemble members, which also caused an error). I'm not following what's happening here.

Grenville

comment:9 Changed 4 months ago by amenon

Hi Grenville,

Maybe I forgot to commit the suite. Could you please have a look at it now?

Arathy

comment:10 Changed 4 months ago by grenville

Arathy

It looks like the model output stream called "1" has a bad filename base: why are there triple quotes round $DATAM/${RUNID}a_py%N?

Change it to '$DATAM/${RUNID}a_py%N' and run all the STASH macros (I'm not sure the macros are essential).
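For reference, a sketch of what the corrected entry might look like in the UM app's rose-app.conf (the namelist name nlstcall_pp(1) is an assumption about how output stream "1" is stored; only filename_base needs to change):

[namelist:nlstcall_pp(1)]
filename_base='$DATAM/${RUNID}a_py%N'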

That worked for me.

Grenville

comment:11 Changed 4 months ago by amenon

Thanks Grenville. It worked fine after removing those extra quotes, but the extra quotes keep reappearing and I don't know where they come from. For example, yesterday I removed them and ran the model, and when I opened the Rose GUI again today the extra quotes were back. Thankfully the forecast job had completed by then.

But currently the suite fails at the archive job. The job.err file says:

[FAIL] moo put -F -c umpp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pz006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pc000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ph018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_py006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ps000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ps018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_py012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pf018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pz012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_py000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pe000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pz000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_py018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pt012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pd006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pe006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ph012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ps006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pc018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pf000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pt006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pe018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pz018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pc012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ph000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pt018.pp 
/scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pd000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pc006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ps012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pt000.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_ph006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pd012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu018.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pe012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pf006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pf012.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb006.pp /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pd018.pp moose:/devfc/u-bb030/field.pp/ # return-code=2, stderr=
[FAIL] put command-id=634485212 failed: (SSC_TASK_REJECTION) one or more tasks are rejected.
[FAIL]   /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu012.pp -> moose:/devfc/u-bb030/field.pp/20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pu012.pp: (TSSC_SET_DOES_NOT_EXIST) no such data set.
[FAIL] put: failed (2)
[FAIL] ! moose:/devfc/u-bb030/field.pp/ [compress=None, t(init)=2018-11-15T17:46:35Z, dt(tran)=0s, dt(arch)=2s, ret-code=2]
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa000.pp (umnsaa_pa000)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa006.pp (umnsaa_pa006)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa012.pp (umnsaa_pa012)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa018.pp (umnsaa_pa018)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb000.pp (umnsaa_pb000)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb006.pp (umnsaa_pb006)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb012.pp (umnsaa_pb012)
[FAIL] !	20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pb018.pp (umnsaa_pb018)

And so on. Is /scratch/jtmp/pbs.66859.xcs00.x8z/tmpCigNQ7/ the source directory? pbs.66859.xcs00.x8z does not exist in /scratch/jtmp. In app/archive/rose-app.conf, the source directory is given as

[arch:field.pp/]
command-format=moo put -F -c umpp %(sources)s %(target)s
rename-format=%(cycle)s_${NAME}_p%(tag)s.pp
rename-parser=^.*a_p(?P<tag>.*)$
source=*a_p*
source-prefix=$DATADIR
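Working through the config above with one of the files from the log (the source files live under $DATADIR and match *a_p*; the /scratch/jtmp path is presumably just where the archive task stages the renamed copies before the moo put), the renaming seems to go like this:

umnsaa_pa000
    rename-parser ^.*a_p(?P<tag>.*)$   captures tag=a000
    rename-format %(cycle)s_${NAME}_p%(tag)s.pp
    -> 20160701T0000Z_INCOMPASS_km4p4_ra1t_inc4p4_em00_pa000.pp

where ${NAME} expands to INCOMPASS_km4p4_ra1t_inc4p4_em00, matching the "(umnsaa_pa000)" mapping in the failure log above.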

If the archiving also works, then I can finally start the real simulation with 10 ensemble members for 10 days.

Regards,
Arathy

comment:12 Changed 4 months ago by grenville

Have you previously successfully sent data to MASS?

comment:13 Changed 4 months ago by grenville

Does ticket #2396 have the solution?

Last edited 4 months ago by willie

comment:14 Changed 4 months ago by amenon

Hi Grenville,

This suite, /projects/cascade/arame/roses/u-ar437, has archived its outputs successfully to /devfc.

Arathy

Changed 4 months ago by amenon

comment:15 Changed 4 months ago by amenon

Hi Grenville,

Sorry, it looks like ticket #2396 has the solution to this problem; I should have checked it earlier. I was following the suggestions there, but I am not able to figure out the following step:

Next to command default press the '+' sign and select 'add to configuration'

The install_cold command window looks like the attachment above. Could you please advise how to get to the 'add to configuration' option?

Arathy

comment:16 Changed 4 months ago by willie

Hi Arathy,

You need to select View → View Latent variables to see the plus sign.

Willie

comment:17 Changed 4 months ago by willie

Hi Arathy,

You can set Rose to always display hidden variables by editing your ~/.metomi/rose.conf file and adding:

[rose-config-edit]
should_show_latent_pages=True
should_show_latent_vars=True
should_show_ignored_vars=True
should_show_ignored_pages=True
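These settings take effect the next time the config editor is started, e.g.

rose edit    # the usual alias for rose config-edit, run from the suite directory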

Willie

comment:18 Changed 4 months ago by amenon

Thanks Willie. I did those steps, got past this error, and the suite completed one cycle. However, I am not able to see my archived outputs, and I can't find them anywhere in cylc-run either. On xcs-c, when I do 'moo ls moose:/devfc/u-bb030/', I get the following error:

failed: (SSC_TASK_REJECTION) one or more tasks are rejected.
moose:/devfc/u-bb030: (TSSC_SET_DOES_NOT_EXIST) no such data set

I can access my other model outputs in MASS, so it is not an issue with my access.

comment:19 Changed 4 months ago by willie

Hi Arathy,

This is because the install_cold app failed:

mkset command-id=636839455 failed: (SSC_TASK_REJECTION) one or more tasks are rejected.
  moose:/devfc/u-bb030: (TSSC_PROJECT_DOES_NOT_EXIST) the requested project does not exist.

so the directory was never created.

You need to select Monsoon on the general options page and supply a charging code.

Willie

comment:20 Changed 4 months ago by amenon

Hi Willie,

Sorry, I forgot to mention this: the path to the suite is /projects/cascade/arame/roses/u-bb030/, not the one in my home directory. The suite in /projects/cascade/arame has the site and charging code specified. Is it a problem to have the suite in the projects directory?

I previously had some issues with space in my Monsoon home directory: I was somehow using 2 TB, and I received an email reminder saying home directories are not backed up if they exceed 10 GB. That is why I moved my roses directory to the project workspace.

Arathy

comment:21 Changed 4 months ago by willie

Hi Arathy,

I think it is too confusing to have the roses directory in /projects, so I suggest we proceed as follows.

First, sort out your home directory. In ~arame/cylc-run you have 1 TB of data in u-aa753. Something bad happened here: the normal links, which are tiny, have been replaced with the actual data from /projects. If you need this data, copy it to /projects:

cp -a ~arame/cylc-run/u-aa753 /projects/cascade/arame/cylc-run

Ensure that the data is copied and then delete it from /home:

rm -rf ~arame/cylc-run/u-aa753

This will make your home directory a reasonable size. If you look at the other suite output in ~arame/cylc-run, you will see that share and work are links to the cascade directory, which means they don't take up much space in /home.

Next, I think we can delete ~arame/roses/u-bb030 as this is very old and all the changes have been committed.

Now, go into /projects/cascade/arame/roses/u-bb030 and commit your changes. Once this has succeeded, delete /projects/cascade/arame/roses/u-bb030. Then go to your home directory ~arame/roses and check out a copy of u-bb030 and run it.
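A hedged sketch of that commit-and-checkout sequence (assuming rosie is set up for the Met Office suite repository; only delete the /projects copy once the commit has definitely succeeded):

cd /projects/cascade/arame/roses/u-bb030
fcm commit                      # commit the local changes back to the suite repository
cd ~/roses
rosie checkout u-bb030          # fresh working copy in the home roses directory
cd u-bb030
rose suite-run                  # run the suite from the new copy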

After you have fixed u-bb030, you will need to deal with each suite in /projects/cascade/arame/roses in a similar manner, but you also need to check that you have no uncommitted changes in the home version. If you have, then you need to proceed with even more caution.

This should get you back to the status quo.

Willie

comment:22 Changed 4 months ago by amenon

Hi Willie,

I made all these changes and sorted out the home space issue. I ran the suite for a single cycle with a single member as a test, but the problem still persists. This is what I get when I do moo ls:

moo ls moose:/devfc/u-bb030
ls command-id=642590116 failed: (SSC_TASK_REJECTION) one or more tasks are rejected.
  moose:/devfc/u-bb030: (TSSC_SET_DOES_NOT_EXIST) no such data set.

Arathy

comment:23 Changed 3 months ago by willie

Hi Arathy,

The install_cold task is still failing,

mkset command-id=642555635 failed: (SSC_TASK_REJECTION) one or more tasks are rejected.
  moose:/devfc/u-bb030: (TSSC_PROJECT_DOES_NOT_EXIST) the requested project does not exist.

It is trying to do,

moo mkset --single-copy -p project-${CHARGING_CODE} moose:/devfc/$ROSE_SUITE_NAME || true

Could you please run that command on xcs, setting CHARGING_CODE to cascade and the suite name to u-bb030? What is the result?

Willie

comment:24 Changed 3 months ago by amenon

Hi Willie,

This is what I get:

arame@xcslc0:~> moo mkset --single-copy -p project-cascade moose:/devfc/u-bb030 || true
### mkset, command-id=643760672

comment:25 Changed 3 months ago by willie

Hi Arathy,

Has it created the directory in MASS? Is it hanging?

Willie

comment:26 Changed 3 months ago by amenon

Hi Willie,

"moo ls moose:/devfc/u-bb030" doesn't return any error. So it seems the directory is created.

Should I try rerunning the model now?

Cheers,
Arathy

comment:27 Changed 3 months ago by willie

Hi Arathy,

That's great. Try running the model now. It might be that you need single quotes round the charging code in the suite; that's the only reason I can think of why the set wasn't created from the suite.
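For example, in the suite's rose-suite.conf that might look like this (assuming the variable is called CHARGING_CODE there):

CHARGING_CODE='cascade'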

Willie

comment:28 Changed 3 months ago by amenon

Great! Thanks Willie. Will get back to you after giving it a try.

Arathy

comment:29 Changed 2 months ago by willie

Hi Arathy,

Are you still having problems with this?

Willie

comment:30 Changed 2 months ago by amenon

Hi Willie,

The model didn't archive the output to MASS, but I switched off rigorous housekeeping and ran the suite, so I now have all the output in cylc-run. I was in contact with Stu about this, and we heard from Katie that if I am using my external access to MASS via JASMIN then I can't use the mkset command. I don't know if I somehow have the wrong moose file (the one for JASMIN) on Monsoon; in that case I don't know whether I could access MASS from Monsoon at all, but in practice I have no problem accessing MASS from Monsoon. For the time being, I will have to archive the output myself.
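For reference, a hedged sketch of archiving a cycle by hand with the same command format the archive app uses (paths are illustrative, the rename-format from app/archive/rose-app.conf would not be applied, and the mkset is only needed once per suite):

moo mkset --single-copy -p project-cascade moose:/devfc/u-bb030
moo put -F -c umpp $DATADIR/*a_p* moose:/devfc/u-bb030/field.pp/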

Arathy

comment:31 Changed 2 months ago by willie

  • Resolution set to fixed
  • Status changed from new to closed

Hi Arathy,

Glad you got your results. I'll close this ticket now.

Willie
