The answer to the first of these is: yes, though this is an experimental feature still in beta. If you are interested, check out credo.jobrunner.pbsjobrunner and, if you have an Underworld checkout, the example script Underworld/InputFiles/credo_rayTaySuitePBS.py.
You can also submit a Python CREDO script in parallel on an HPC system running PBS by writing the appropriate PBS script yourself and embedding the CREDO call within it - see Different ways to launch CREDO scripts for an example.
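The manual route above can be sketched as a short PBS submission script. The #PBS directives are standard, but the resource values and the script name mySuite.py are illustrative assumptions - substitute your cluster's requirements and your own CREDO script:

```shell
#!/bin/bash
#PBS -N credo_suite          # job name (example)
#PBS -l nodes=1:ppn=8        # resources; adjust for your cluster
#PBS -l walltime=01:00:00

# PBS starts jobs in the home directory; change to the submission directory.
cd "$PBS_O_WORKDIR"

# Load whatever modules your site requires (names are site-specific examples):
# module load python openmpi

# Run the CREDO suite script (hypothetical name); PYTHONPATH must already
# include the directory containing the CREDO Python source.
python ./mySuite.py
```

You would then submit this with qsub in the usual way, and the embedded CREDO call runs once the scheduler allocates the requested resources.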
This problem usually occurs because you haven't added the directory containing the CREDO Python source to your PYTHONPATH. See Setting up your environment to use CREDO for the various ways to do this.
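As a quick sketch, one common way to do this is to export the variable in your shell (the path below is an example - use the actual location of your CREDO checkout):

```shell
# Prepend the directory containing the CREDO Python package to PYTHONPATH
# ($HOME/credo is an example path, not a required location).
export PYTHONPATH="$HOME/credo:$PYTHONPATH"

# Confirm the directory is now on the path Python will search.
echo "$PYTHONPATH"
```

Adding the export line to your shell start-up file (e.g. ~/.bashrc) makes the setting persistent across sessions.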
Currently (11/4/2011), these tests are written so that, by default, when run from the command line they post-process the benchmark tests and generate reports from an existing set of results.
If you want to modify this behaviour so that the models required by the benchmark are first run and generated, set the postProcFromExisting flag to False in the __main__ section at the bottom of the benchmark script, e.g.
if __name__ == "__main__":
    postProcFromExisting = False
    jobRunner = credo.jobrunner.defaultRunner()
    testResult, mResults = sciBTest.runTest(jobRunner, postProcFromExisting,
        createReports=True)
Q: I have a problem running parallel CREDO system tests, where they fail to parse Field convergence results and raise an exception message along the lines of: “credo.io.stgcvg.CVGReadError: Error, couldn’t read expected error” ...
A: This problem is often caused by using an “mpirun” or “mpiexec” that does not correspond to the MPI library the code was compiled with. Such a mismatch (e.g. running with OpenMPI a code you compiled against MPICH2) can result in subtle parallel bugs, such as corruption when writing output files, which among other things throw off the CREDO system testing now included with the code.
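One way to check for this mismatch is to see which MPI launchers are first on your PATH and compare them against the installation the code was built with. A minimal sketch (the launcher locations printed will of course depend on your system):

```shell
# Report where each MPI launcher on PATH resolves to, so it can be
# compared with the MPI installation the code was compiled against.
for tool in mpirun mpiexec; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool -> $(command -v "$tool")"
    else
        echo "$tool not found on PATH"
    fi
done
```

If the reported path belongs to a different MPI installation than the one used at compile time, adjust your PATH (or load the matching environment module) before re-running the tests.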
What are the ways of dealing with this?