Running 2 suites using different numerical features and comparing results

This example shows how to use CREDO to run two suites of Underworld runs, each using a different numerical approach, and compare the performance of the two.

It is therefore a good baseline to build on for more complex performance or accuracy testing.

Note

The actual script is available in the examples sub-directory of the CREDO distribution.

Setup

The script is shown below:

#! /usr/bin/env python
import os, copy
import csv
from credo.modelrun import ModelRun, SimParams
from credo.modelsuite import ModelSuite
import credo.jobrunner

jobRunner = credo.jobrunner.defaultRunner()

outPathBase = os.path.join('output','PPC_Compare')
if not os.path.exists(outPathBase):
    os.makedirs(outPathBase)

defParams = SimParams(nsteps=2)
stdRun = ModelRun("Arrhenius-normal",
    os.path.join('..','..', 'Underworld', 'InputFiles', 'Arrhenius.xml'),
    simParams=defParams)
ppcRun = ModelRun("Arrhenius-ppc", "Arrhenius.xml",
    basePath=os.path.join("Ppc_Testing","udw_inputfiles"),
    simParams=defParams)

stdSuite = ModelSuite(os.path.join(outPathBase, "arrBasic"))
ppcSuite = ModelSuite(os.path.join(os.getcwd(), outPathBase, "arrPIC"))

for ii in range(10):
    stdRun.outputPath = os.path.join(stdSuite.outputPathBase, "%.5d" % ii)
    ppcRun.outputPath = os.path.join(ppcSuite.outputPathBase, "%.5d" % ii)
    stdSuite.addRun(copy.deepcopy(stdRun))
    ppcSuite.addRun(copy.deepcopy(ppcRun))

stdResults = jobRunner.runSuite(stdSuite)
ppcResults = jobRunner.runSuite(ppcSuite)

#-----------------------------

cpuRegs = []
cpuPPCs = []
for stdRes, ppcRes in zip(stdResults, ppcResults):
    stdRes.readFrequentOutput()
    ppcRes.readFrequentOutput()
    fStep = stdRes.freqOutput.finalStep()
    cpuReg = stdRes.freqOutput.getValueAtStep('CPU_Time', fStep)
    cpuPPC = ppcRes.freqOutput.getValueAtStep('CPU_Time', fStep)
    print "CPU time regular was %g, PPC was %g" % (cpuReg, cpuPPC)
    cpuRegs.append(cpuReg)
    cpuPPCs.append(cpuPPC)

avgReg = sum(cpuRegs) / len(cpuRegs)
avgPPC = sum(cpuPPCs) / len(cpuPPCs)

print "Avg over 10 runs: regular=%f, PPC=%f" % (avgReg, avgPPC)
sName = os.path.join(outPathBase, "comparePPC.txt")
csvName = os.path.join(outPathBase, "comparePPC-runs.csv")
avgInfo = open(sName, "w")
avgInfo.write("Avg regular = %f\n" % avgReg)
avgInfo.write("Avg PPC = %f\n" % avgPPC)
avgInfo.close()
csvFile = open(csvName, "wb")
wtr = csv.writer(csvFile)
wtr.writerow(["Run", "Reg t(sec)", "PPC t(sec)"])
for runI, (cpuReg, cpuPPC) in enumerate(zip(cpuRegs, cpuPPCs)):
    wtr.writerow([runI, cpuReg, cpuPPC])
csvFile.close()

print "Wrote summary to %s, run results to %s" % (sName, csvName)

As in the Using CREDO to run and analyse a Suite of Rayleigh-Taylor problems example, the script first sets up the CREDO suites to run, runs them using a JobRunner, and then performs some analysis on the results.

Unlike the RayTaySuite example, though, in this case we're not varying a parameter across the suites: instead we attach copies of the same model run to each suite a fixed number of times, so that the timing results can be averaged.
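Because each suite contains repeated identical runs, the averaging step near the end of the script could also report the spread of the timings. The following is a minimal sketch only, not part of the distributed script; it assumes the cpuRegs and cpuPPCs lists built in the analysis loop above.

# Illustrative sketch only (not in the distributed script): report the
# spread of the CPU times as well as the average, given the cpuRegs and
# cpuPPCs lists built in the analysis loop above.
import math

def meanAndStdDev(vals):
    # Mean and population standard deviation of a list of timings.
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

avgReg, stdReg = meanAndStdDev(cpuRegs)
avgPPC, stdPPC = meanAndStdDev(cpuPPCs)
print "Regular: %f +/- %f, PPC: %f +/- %f" % (avgReg, stdReg, avgPPC, stdPPC)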

Note that in this example we pass the basePath option to one of the model runs, because its XML model files must be run from a sub-directory of the directory the script is located in. This is an example of how CREDO can work with whatever directory structure best suits you.
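For clarity, the two ways the runs above locate their input files can be contrasted in isolation. This is a hypothetical sketch only; the names are illustrative, and it simply restates the pattern used in the script, assuming that when basePath is given the model is run from, and its input files resolved relative to, that directory.

# Hypothetical sketch only: illustrative names, same pattern as the script above.
import os
from credo.modelrun import ModelRun

# Input file given relative to the directory the script is run from:
runDefault = ModelRun("example-default",
    os.path.join('..', 'SomeInputFiles', 'MyModel.xml'))

# Input file given relative to an explicit basePath sub-directory:
runWithBase = ModelRun("example-basePath", "MyModel.xml",
    basePath=os.path.join("SomeSubDir", "inputfiles"))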

Other things worth noting about this script are:

  • Use of the os.path.join() Python standard library function to construct paths, and re-use of the credo.modelsuite.ModelSuite.outputPathBase attribute to help construct them. This is good practice for keeping all outputs of an analysis run in the same directory.
  • Use of the Python csv library to write custom results of interest to a CSV file as a useful record (a short sketch of reading this file back for further analysis is shown after this list). A good tutorial on writing CSV files is available on Steven Lott’s Python pages.
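As a follow-up, the per-run CSV written above can be read back for further analysis. The sketch below is illustrative only, not part of the distributed script: it assumes the comparePPC-runs.csv file produced by the run, uses only the standard csv library, and the PPC/regular ratio it prints is an added example calculation.

#! /usr/bin/env python
# Illustrative sketch only: read back the per-run CSV written by the
# script above and compute a simple PPC/regular timing ratio per run.
# Assumes output/PPC_Compare/comparePPC-runs.csv exists.
import os
import csv

csvName = os.path.join('output', 'PPC_Compare', 'comparePPC-runs.csv')
csvFile = open(csvName, "rb")   # "rb" matches the Python 2 style used above
rdr = csv.reader(csvFile)
header = rdr.next()             # skip the "Run,Reg t(sec),PPC t(sec)" header row
for row in rdr:
    runI, cpuReg, cpuPPC = int(row[0]), float(row[1]), float(row[2])
    print "Run %d: PPC/regular time ratio = %.3f" % (runI, cpuPPC / cpuReg)
csvFile.close()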

Expected Results

Running the script should produce a report at the end similar to the following:

Doing post-run tidyup:
Restoring initial path '/home/pds/AuScopeCodes/stgUnderworldEGM-packages-devel/Experimental/InputFiles'
CPU time regular was 0.826317, PPC was 0.830904
CPU time regular was 0.840123, PPC was 0.814179
CPU time regular was 0.809768, PPC was 0.809136
CPU time regular was 0.813736, PPC was 0.827999
CPU time regular was 0.826321, PPC was 0.81942
CPU time regular was 0.815167, PPC was 0.844881
CPU time regular was 0.808789, PPC was 0.849388
CPU time regular was 0.799198, PPC was 0.826902
CPU time regular was 0.849262, PPC was 0.831013
CPU time regular was 0.800944, PPC was 0.810086
Avg over 10 runs: regular=0.818962, PPC=0.826391

It should also save a text file and a CSV file with contents such as:

Avg regular = 0.817589
Avg PPC = 0.932345
Run,Reg t(sec),PPC t(sec)
0,0.794358,0.889719
1,0.803794,0.951473
2,0.792373,0.935146
3,0.806235,0.952792
4,0.798883,0.796491
5,0.797865,1.00772
6,0.843122,0.923531
7,0.867709,0.986717
8,0.801994,0.849298
9,0.869554,1.03056