Custom Query (3351 matches)



Results (10 - 12 of 3351)

Ticket Resolution Summary Owner Reporter
#3384 fixed Transitioning rose/cylc suites to SLURM jules_support mtodt


I'm trying to transition the suites I have been running, and am currently running, to SLURM, starting with u-bv464. I've set up login to cylc1.jasmin. In suite.rc I've changed the job submission method from lsf to slurm and updated the job specifications for fcm_make and jules according to the JASMIN help page. This seems to work; however, fcm_make still fails with the following error message:

[FAIL] mpif90 -oo/water_constants_mod.o -c -DSCMA -DBL_DIAG_HACK -DCOMPILER_INTEL -I./include -I/gws/nopw/j04/jules/jules_build/libs/include -heap-arrays -fp-model precise -traceback -I/gws/nopw/j04/jules/jules_build/libs/include -ip -no-prec-div -static-intel -lz -lm /home/users/mtodt/cylc-run/u-bv464/share/fcm_make/preprocess/src/jules/src/params/standalone/water_constants_mod_jls.F90: command not found
[FAIL] compile    0.0 ! water_constants_mod.o <- jules/src/params/standalone/water_constants_mod_jls.F90
[FAIL] ! water_constants_mod.mod: depends on failed target: water_constants_mod.o
[FAIL] ! water_constants_mod.o: update task failed
[FAIL] fcm make -f /work/scratch-pw/mtodt/cylc-run/u-bv464/work/1/fcm_make/fcm-make.cfg -C /home/users/mtodt/cylc-run/u-bv464/share/fcm_make -j 4 # return-code=255

The error messages pop up for lots of modules, not just water_constants_mod. I assume there's something I've overlooked while switching to SLURM, perhaps a library path that has to be changed as well? Thanks a lot for your help!
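For reference, a minimal sketch of the kind of runtime change involved, assuming cylc 7 suite.rc syntax; the partition name, time limit, and memory directive below are illustrative placeholders, not values taken from u-bv464:

```
[runtime]
    [[fcm_make]]
        [[[job]]]
            # switch the batch system from LSF to SLURM
            batch system = slurm
            execution time limit = PT20M
        [[[directives]]]
            # SLURM directives replace the old LSF ones
            --partition = short-serial
            --mem = 2G
```

Library and compiler paths referenced in the fcm_make config are separate from the batch-system change and may also need updating for the new submission hosts.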

Cheers Markus

#3383 answered supermeans Python2.7 code crashing with iris and biggus on JASMIN short-serial um_support pmcguire

Hi CMS Helpdesk: These lines of ~pmcguire/autoassess/autoassess7a1a/ (which is called by ~pmcguire/autoassess/autoassess7a1a/submit-supermean7e3mam.scr) use iris in the short-serial SLURM queue on JASMIN. The error message below shows that iris is using biggus, which is threaded; threading is not allowed on short-serial, giving intermittent errors after 10-20 hours of wallclock time. What should/could I do? I have my AutoAssess processing set up to do 2x35x5=350 runs of these supermean calculations (one for each season, each year, and each run). Patrick

   for i in range(len(vars)):
        foo[i] = vars[i].collapsed('time', iris.analysis.MEAN)
[i], out_dir+'/'+runid_short+'a.m'+supermeanlabel+str(year)+time+'.pp')

produces this error in ~pmcguire/autoassess/autoassess7a1a/supermean7e3mam.16310412_29.err

File "", line 136, in <module>
    ,out_dir+'/'+runid_short+'a.m'+supermeanlabel+str(year)+time+'.pp')
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/iris/io/", line 422, in save
    saver(cube, target, kwargs)
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/iris/fileformats/", line 2309, in save
    save_fields(fields, target, append=append)
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/iris/fileformats/", line 2558, in save_fields
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/iris/fileformats/", line 1387, in save
    data =
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/iris/fileformats/", line 1290, in data
    data = self._data.masked_array()
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/biggus/", line 2619, in masked_array
    result, = biggus.engine.masked_arrays(self)
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/biggus/", line 437, in masked_arrays
    return self._evaluate(arrays, True)
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/biggus/", line 431, in _evaluate
    ndarrays = group.evaluate(masked)
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/biggus/", line 409, in evaluate
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/site-packages/biggus/", line 135, in thread
File "/apps/contrib/jaspy/miniconda_envs/jaspy2.7/m2-4.6.14/envs/jaspy2.7-m2-4.6.14-r20190715/lib/python2.7/", line 736, in start
    _start_new_thread(self.bootstrap, ())
thread.error: can't start new thread
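The traceback bottoms out in Python's own threading module: biggus's evaluation engine starts one worker thread per deferred array, and on short-serial that thread creation is refused. A minimal stdlib sketch (illustration only, not iris/biggus code) of the workaround idea, evaluating deferred tasks one by one in the main thread rather than spawning a worker per task:

```python
# Illustration only: where thread creation is restricted, running each
# deferred task in the main thread avoids "can't start new thread".

def evaluate_serially(tasks):
    # No threads are created; each task runs to completion in turn.
    return [task() for task in tasks]

# Hypothetical deferred "mean" tasks standing in for lazy cube data.
tasks = [lambda i=i: sum(range(i + 1)) / (i + 1.0) for i in range(3)]
print(evaluate_serially(tasks))  # → [0.0, 0.5, 1.0]
```

Whether iris 1.x exposes a switch to force such serial evaluation depends on the biggus version installed; an alternative is to run the job in a queue where threading is permitted.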

#3382 answered Submit multiple UMUI jobs on external HPC without requiring password for every job um_support watson

Hello, I recently started running UMUI jobs on a university HPC system under the newly imposed Puma security. Our current submission script runs on the HPC system: it copies over the job files processed by the UMUI and then runs commands there to submit the job. With the new Puma security, a password is apparently required to copy the files every time a job is submitted this way. When submitting a couple of dozen jobs in one go, this is rather cumbersome. Is there a solution that doesn't require a password every time?
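One common approach (a sketch, not site-specific guidance; the host alias, hostname, and user below are placeholders) is OpenSSH connection multiplexing, so that a single authenticated connection is reused by every subsequent scp/ssh call:

```
# ~/.ssh/config -- placeholder names; adjust to the real HPC login node
Host unihpc
    HostName hpc.example.ac.uk
    User watson
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
```

After one interactive `ssh unihpc` authenticates, later `scp`/`ssh unihpc` invocations within the persistence window reuse the open connection without prompting. An SSH key pair held in ssh-agent is another option, subject to whatever the new Puma security policy permits.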
