SysTest API

This module contains the System Testing functionality of CREDO.

Working at a higher level than the credo.modelrun and credo.analysis modules, it uses their capabilities to run system tests of scientific applications, and to communicate and record the results.

From a user’s perspective, doing:

from credo.systest import *

should allow key functionality to be accessed: e.g. the credo.systest.systestrunner.SysTestRunner class, and standard system tests such as Analytic, Restart and Reference.

Examples of how to use this module are provided in the CREDO documentation, see Using CREDO for System Testing of StGermain-based codes such as Underworld.

Note

In early CREDO versions, importing credo.systest would also import the entire namespace of the credo.systest.api module. However, Python 2.5 disallows this for relative imports (the feature was added back in Python 2.6), so the import of the credo.systest.api namespace into credo.systest was removed; we now only import the ‘public interface’ of implementations designed to be used from the API in testing.

As a result, if you want to define new api.SysTest or api.TestComponent implementations, you should import credo.systest.api to access the base classes of these.
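The subclassing pattern this describes can be sketched as below. Note these are illustrative stand-ins only, not the real credo.systest.api base classes (which carry much richer interfaces: suite generation, status objects, XML writing); MaxValueTC is a hypothetical component invented for this example.

```python
class TestComponent:
    """Minimal stand-in for credo.systest.api.TestComponent (illustrative only)."""
    def __init__(self, tcType):
        self.tcType = tcType
        self.tcStatus = None  # updated once the component is evaluated

class MaxValueTC(TestComponent):
    """Hypothetical component: passes if every result value is below a cap."""
    def __init__(self, cap):
        super().__init__("MaxValue")
        self.cap = cap

    def check(self, values):
        # Evaluate the component and record its status, as real
        # TestComponents update tcStatus after being checked.
        ok = all(v <= self.cap for v in values)
        self.tcStatus = "Pass" if ok else "Fail"
        return ok

tc = MaxValueTC(cap=10.0)
print(tc.check([1.5, 9.9]), tc.tcStatus)  # True Pass
```

The real base classes add abstract hooks (e.g. attachOps) that concrete implementations must also fill in.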

credo.systest.api

Core API of the credo.systest module.

This defines the two key classes of the model, SysTest and TestComponent, from which actual system test and test component implementations need to inherit.

class credo.systest.api.CREDO_ERROR(errorMsg)

Bases: credo.systest.api.SysTestResult

Simple class to represent a CREDO error

class credo.systest.api.CREDO_FAIL(failMsg)

Bases: credo.systest.api.SysTestResult

Simple class to represent a CREDO failure

class credo.systest.api.CREDO_PASS(passMsg)

Bases: credo.systest.api.SysTestResult

Simple class to represent a CREDO pass

class credo.systest.api.MultiRunTestComponent(tcType)

Bases: credo.systest.api.TestComponent

A type of component designed to operate on and report about multiple modelRuns (e.g. analysing whether they converge, or collectively meet some requirement).

Unlike the SingleRunTestComponent, this class’s attachOps() and check() methods operate on a list of modelRuns and modelResults, not just a single one.

attachOps(modelRuns)

Provided a list of modelRuns (credo.modelrun.ModelRun), attaches any necessary analysis operations to each run needed for the test (see credo.modelrun.ModelRun.analysis).

check(mResults)

A function to check a set of results - returns True if the Test passes, False if not. Also updates the tcStatus attribute.

class credo.systest.api.SingleModelSysTest(testType, inputFiles, outputPathBase, basePath=None, nproc=1, timeout=None, paramOverrides=None, solverOpts=None, nameSuffix=None)

Bases: credo.systest.api.SysTest

A subclass of SysTest for common system test types that are based on variations of a single Model (defined by a group of XML model files; see inputFiles). Includes utility functions for easily creating new model runs based on these standard parameters.

Constructor keywords not in member attribute list:

  • nameSuffix: if specified, this defines the suffix that will be added to the test’s name, output path, and log path (where the test’s result and stderr/out will be saved, respectively), overriding the default suffix based on the parameters used.
inputFiles

StGermain XML files that define the Model to be tested.

paramOverrides

Any model parameter overrides to be passed to ModelRuns performed as part of running the test - see credo.modelrun.ModelRun.paramOverrides. This allows customisation of the test properties.

solverOpts

Solver options to be used for any models making up this test. See credo.modelrun.ModelRun.solverOpts

updateOutputPaths(newOutputPathBase)

Useful when modifying suites, if you wish to change the output path that the suite reports are saved in. Necessary because suites with multiple runs will use different sub-directories for each run.

Note

In the current design, this does _not_ also update expected/reference solution paths, only output-related paths.

class credo.systest.api.SingleRunTestComponent(tcType)

Bases: credo.systest.api.TestComponent

attachOps(modelRun)

Provided a modelRun (credo.modelrun.ModelRun), attaches any necessary analysis operations to that run in order to produce the results needed for the test (see credo.modelrun.ModelRun.analysis).

check(mResult)

A function to check a set of results - returns True if the Test passes, False if not. Also updates the tcStatus attribute.

class credo.systest.api.SysTest(testType, testName, basePath, outputPathBase, nproc=1, timeout=None)

A class for managing SysTests in CREDO. This is an abstract base class: you must sub-class it to create actual system test types.

The SysTest is designed to interact with the SysTestRunner class, primarily by creating a ModelSuite on demand to run the set of ModelRuns required to do the test. It will then check that the results pass an expected metric, generally by applying one or more TestComponent classes.

testType

Records the “type” of the system test, as a string (e.g. “Analytic”, or “SciBenchmark”) - for printing purposes.

testName

The name of this test, generally auto-generated from other properties.

basePath

The base path of the System test, that runs will be done relative to.

outputPathBase

The “base” output path to store results of the test in. Results from individual Model Runs performed as part of running the test may also be stored in that directory, or in subdirectories of it.

mSuite

The suite of Models that will be run as part of the test. Initially None, must be filled in as part of calling genSuite.

nproc

Number of processors to be used for the test. See credo.modelrun.ModelRun.nproc.

timeout

If set to a positive integer, this will be used as the maximum time (in seconds) the test is allowed to run for - if it runs over this, the result of the test will be set to an Error. If timeout is None, 0 or negative, no timeout will be applied.

testStatus

Status of the test. Initially None, once the test has been run the SysTestResult generated will be saved here.

testComps

A list of dictionaries of (single-run) TestComponent classes used as part of performing this system test. The primary list is indexed by run number of the model run in the systest’s mSuite.

multiRunTestComps

A dictionary of MultiRunTestComponent classes used as part of performing this system test.

generatedReports

A list of the file-names of all generated reports created based on this benchmark.

attachAllTestCompOps()

Useful in configureTestComps(): the default is to call the ‘attachOps’ method of all testComps (requires all testComps to have already been set up and declared in configureTestComps()).

checkModelResultsValid(resultsSet)

Check that the given result set is “valid”, i.e. exists, has the right number of model results, and model results have necessary analysis ops associated with them to allow aspects of test to evaluate properly.

configureSuite()

Function for configuring the credo.modelsuite.ModelSuite to be used for testing. The suite must be saved to mSuite by the end of this function. By default, calls the genSuite() method, which the user should override.

configureTestComps()
createReports(mResults)

Create any custom reports, then update record XML

defaultSysTestFilename()

Return the default system test XML record filename, based on properties of the systest (such as testName).

genSuite()

Must update the mSuite attribute so it contains all models that need to be run to perform the test.

getStatus(resultsSet)

After a suite of runs created by genSuite() has been run, when this method is passed the results of the suite (as a list of credo.modelresult.ModelResult), it must decide and return the status of the test (as a SysTestResult).

It also needs to save this status to testStatus.

By default, this simply asks each TestComponent registered for the system test to check its status; all must pass for a total pass.

Note

if using this default method, then sub-classes need to have defined failMsg and passMsg attributes.

getTCRes(tcName, allowMissing=True)

Utility function for single-run test components: gets lists of test components, and TC results, for each run of a given testComp. (This can be done by list manipulation in Python; this function just makes it easier.)

Parameters:allowMissing – if True, runs that don’t have a TC of the given name applied to them will have None placed in the output array; if False, a KeyError exception will be propagated.
Returns:a tuple of 2 lists: all test components of a given name, ordered by model run in the test, and a list of corresponding test component results.
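
The list manipulation this wraps can be sketched as follows. This is a simplified stand-in, not CREDO's internals: here testComps is a plain list of per-run dicts, and a component's result is stored under a 'status' key (both assumptions made for the example).

```python
def get_tc_results(testComps, tcName, allowMissing=True):
    """Collect the component of a given name, and its result, per model run.

    testComps: one dict per model run, mapping component name -> component
    (plain dicts with a 'status' key stand in for real TestComponents here).
    Runs missing the named component yield None in both output lists when
    allowMissing is True; otherwise a KeyError propagates.
    """
    tcs, results = [], []
    for runComps in testComps:
        if tcName not in runComps:
            if not allowMissing:
                raise KeyError(tcName)
            tcs.append(None)
            results.append(None)
            continue
        tc = runComps[tcName]
        tcs.append(tc)
        results.append(tc["status"])
    return tcs, results

runs = [{"fieldTest": {"status": "Pass"}},
        {},  # this run had no 'fieldTest' component attached
        {"fieldTest": {"status": "Fail"}}]
print(get_tc_results(runs, "fieldTest")[1])  # ['Pass', None, 'Fail']
```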
regenerateFixture(jobRunner)

Function to do any setup of tests for the first time they’re run, or e.g. when updating to a new Underworld version. Since not all tests need this functionality, the default behaviour is to do nothing.

runTest(jobRunner, postProcFromExisting=False, createReports=True)

Run this sysTest, and return the SysTestResult it produces. Will also write an XML record of the System test, and each ModelRun and ModelResult in the suite that made up the test.

Parameters:
  • postProcFromExisting – if True, will not actually run the test, but will read the result from existing modelResults.
  • createReports – if True, will create external reports (additional to the XML record) this test specifies.
Returns:

SysTestResult, and a list of ModelResults (since the latter may be useful for further post-processing)

setCustomReporting(customReportingFunc)

Method to use to set the value of the customReporting method.

setErrorStatus(errorMsg)

Utility function: if a model run fails as part of the test, this can be called to automatically set the test status.

setTimeout(seconds=0, minutes=0, hours=0, days=0)

Sets the timeout parameter, used to determine how long the test is allowed to run for.
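
The unit conversion a method like this performs can be sketched as below. This is an assumed implementation based on the documented semantics (a non-positive total means no timeout, represented here as None), not the verified CREDO source.

```python
def timeout_seconds(seconds=0, minutes=0, hours=0, days=0):
    """Combine the keyword arguments into a single timeout in seconds.

    Mirrors the documented timeout semantics: a non-positive total
    means 'no timeout', which this sketch represents as None.
    """
    total = seconds + 60 * minutes + 3600 * hours + 86400 * days
    return total if total > 0 else None

print(timeout_seconds(minutes=2))           # 120
print(timeout_seconds(hours=1, seconds=30)) # 3630
```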

setupEmptyTestCompsList()
setupTest()
updateXMLWithReports(outputPath='', filename='', prettyPrint=True)

Updates a Sys Test XML with record of any report files generated as a result of the test. If the XML file has the standard name, as defined by defaultSysTestFilename(), then it should be found automatically.

Other arguments and return value same as for writePreRunXML().

updateXMLWithResult(resultsSet, outputPath='', filename='', prettyPrint=True)

Given resultsSet, a set of model results (list of ModelResult), updates a Sys Test XML with the results of the test. If the XML file has the standard name, as defined by defaultSysTestFilename(), then it should be found automatically.

Other arguments and return value same as for writePreRunXML().

writePreRunXML(outputPath='', filename='', prettyPrint=True)

Write the SysTest XML with as much information before the run as is possible. This includes general info about the test, and detailed specification of appropriate parameters and test components.

Parameters:
  • outputPath – (opt) path the XML should be saved to.
  • filename – (opt) filename within that path that should be used.
  • prettyPrint – whether to indent the XML for better human-readability (pretty-printing).
Returns:

the name of the file written to.

writeRecordXML(mResults, outputPath='', filename='', prettyPrint=True)

Convenience function to call all other XML writing funcs (pre-run and post-run) in one go.

class credo.systest.api.SysTestResult

Class to represent a CREDO system test result.

detailMsg

Detailed message of what happened during the test.

statusStr

One-word status string summarising the test result (e.g. ‘Pass’).

_absRecordFile

The absolute path of where the system test was saved to.

getRecordFile()
printDetailMsg()
setRecordFile(recordFile)

Save the record file, as an absolute path.

exception credo.systest.api.SysTestSetupError

Bases: exceptions.Exception

An exception for when a System test fails to set up correctly.

class credo.systest.api.TestComponent(tcType)

A class for TestComponents that make up a CREDO system test/benchmark. Generally they will form part of a list contained by a SysTest.

This is an abstract base class, individual test components must subclass from this interface.

tcStatus

Status of this test component. Initially None; will be updated to a SysTestResult after the test component is evaluated.

tcType

Type of the test component, as a (single-word descriptive) string.

updateXMLWithResult(tcNode, resultInfo)

Updates a given XML node with the result of the test component. The resultInfo will be passed through to sub-functions, and varies according to the type of test being performed.

writePreRunXML(parentNode, name)

Function to write out info about the test component to an XML file, as a sub-tree of parentNode.

credo.systest.api.getStdOutputPath(testClass, inputFiles, testOpts)

Get the standard name for the test’s output path. Attempts to avoid naming collisions where reasonable.

credo.systest.api.getStdTestName(testTypeStr, inputFiles, nproc, paramOverrides, solverOpts, nameSuffix)

Utility function to get a standard name for system tests, given key parameters of the tests. If nameSuffix is a string, it will be used as the suffix after the processor number, instead of one based on any parameter overrides used.

credo.systest.api.getStdTestNameBasic(testTypeStr, inputFiles)

Basic part of the test name. Useful for restart runs etc.

credo.systest.systestrunner

Package for manipulation of a suite of system tests. Analogous to the role of the Python unittest TestRunner.

class credo.systest.systestrunner.SysTestRunner

Class that runs a set of SysTest, usually collected into SysTestSuite collections.

For examples of how to use, see the CREDO documentation, especially Running CREDO system test suites directly, and how to modify them.

getResultsTotals(results)

Gets the totals of a set of results, and returns them - including indices of which results failed, and which were errors.

getSuiteResultsFilename(suite)

Get a standard name for a suite record file, from given suite and its attributes.

printResultsDetails(sysTests, results)

Prints details of which tests failed in a sub-suite.

printResultsSummary(sysTests, results, projName=None, suiteName=None)

Print a textual summary of the results of running a set of sys tests.

printSuiteResultsByProject(testSuites, resultsLists)

Utility function to print a set of suite results out, categorised by project, in the order that the projects first appear in the results.

printSuiteResultsOrderFound(testSuites, resultsLists)

Utility function to print a set of results in the order they were entered (not sub-categorised by project).

printSuiteTotalsShortSummary(results, projName, suiteName)

Prints a short summary, useful for suites with sub-suites.

runSingleTest(sysTest, **kwargs)

Convenience function to set up and run a single SysTest.

Note

All keywords appropriate to credo.systest.api.SysTest.runTest() are passed through directly in the kwargs parameter.

runSuite(suite, runSubSuites=True, subSuiteMode=False, outputSummaryDir='testLogs', **kwargs)

Runs a suite of system tests, and prints results. The suite may contain sub-suites, which will also be run by default.

Returns:a list of all results of the suite, and its sub-suites.

Note

Currently, just returns a flat list of results, containing results of all tests and all sub-suites. Won’t change this into a hierarchy of results by sub-suite, unless really necessary.

Note

all keywords appropriate to credo.systest.api.SysTest.runTest() are passed through directly in the kwargs parameter.

runSuites(testSuites, outputSummaryDir='testLogs', **kwargs)

Runs a list of suites, and prints a big summary at the end.

Parameters:
  • testSuites – list of test suites to run.
  • outputSummaryDir – name of directory to save a summary of tests to.
Returns:

a list containing lists of results for each suite (results list in the same order as testSuites input argument).

Note

all keywords appropriate to credo.systest.api.SysTest.runTest() are passed through directly in the kwargs parameter.

runTests(sysTests, projName=None, suiteName=None, printSummary=True, **kwargs)

Run all tests in the sysTests list. Will also save all appropriate XMLs and print a summary of results.

Parameters:
  • projName – the name of the ‘project’ to report these tests as belonging to.
  • suiteName – the name of the suite these tests should be reported as belonging to.

Note

all keywords appropriate to credo.systest.api.SysTest.runTest() are passed through directly in the kwargs parameter.

credo.systest.systestrunner.getSuitesFromModules(suiteModNames)

Gets a list of suites from the list of suites to import given in suiteModNames.

credo.systest.systestrunner.runSuitesFromModules(suiteModNames, **kwargs)

Runs a set of System test suites, where suiteModNames is a list of suites to import and run.

Note

all keywords appropriate to credo.systest.api.SysTest.runTest() are passed through directly in the kwargs parameter.

Core System Test class implementations

CREDO provides a set of core SysTest instantiations, documented below, which supersede the functionality of the pre-existing test scripts system.

The user can always add to this list, by defining new SysTest classes to use.

The most flexible of the set is the SciBenchmarkTest, but this requires the most customisation (i.e. generally can’t be created in the short-hand form of the other tests using the sysTestRunner’s addStdTest() method).

Depending on whether you have the Python Imaging Library (PIL) installed, you can also use the credo.systest.imageReference.ImageReferenceTest system test.

credo.systest.analyticTest

class credo.systest.analyticTest.AnalyticTest(inputFiles, outputPathBase, basePath=None, nproc=1, timeout=None, paramOverrides=None, solverOpts=None, nameSuffix=None, defFieldTol=0.03, fieldTols=None)

Bases: credo.systest.api.SingleModelSysTest

An Analytic System test. This case requires the user to have configured the XML correctly to load an analytic solution, and compare it to the correct fields. Will check that each field flagged to be analysed is within the expected tolerance. Uses a FieldWithinTolTC test component to perform the check.

Optional constructor keywords:

  • defFieldTol: The default tolerance to be applied when comparing fields of interest to the analytic solution. See also the FieldWithinTolTC’s defFieldTol.
  • fieldTols: a dictionary of tolerances to use when testing particular fields, rather than the default tolerance defined by the defFieldTol argument.
fTestName

Standard name to use for this test’s field comparison TestComponent in the testComponents list.

checkModelResultsValid(resultsSet)

See base class checkModelResultsValid().

configureTestComps()
genSuite()

See base class genSuite().

For this test, just a single model run is needed, to run the model and compare against the analytic solution.

credo.systest.referenceTest

Provides a ReferenceTest for use in system testing.

credo.systest.referenceTest.DEF_TEST_FIELDS

Default fields that will be tested, if not explicitly provided as a constructor keyword argument to ReferenceTest instantiations.

class credo.systest.referenceTest.ReferenceTest(inputFiles, outputPathBase, basePath=None, nproc=1, timeout=None, paramOverrides=None, solverOpts=None, nameSuffix=None, fieldsToTest=None, runSteps=20, defFieldTol=0.01, fieldTols=None, expPathPrefix='expected')

A Reference System test. This case simply runs a given model for a set number of steps, then checks the resultant solution matches within a tolerance of a previously-generated reference solution. Uses a FieldWithinTolTC test component to perform the check.

Optional constructor keywords:

  • runSteps: number of steps the reference solution should run for.
  • fieldsToTest: Which fields in the model should be compared with the reference solution, as a list. If not provided, will default to DEF_TEST_FIELDS.
  • defFieldTol: The default tolerance to be applied when comparing fields of interest to the reference solution. See also the FieldWithinTolTC’s defFieldTol.
  • fieldTols: a dictionary of tolerances to use when testing particular fields, rather than the default tolerance as set in the defFieldTol argument.
fTestName

Standard name to use for this test’s field comparison TestComponent in the testComponents list.

checkModelResultsValid(resultsSet)

See base class checkModelResultsValid().

configureTestComps()
genSuite()

See base class genSuite(). For this test, just a single model run is needed, to run the model and compare against the reference solution.

regenerateFixture(jobRunner)

Do a run to create the reference solution to use.

Note

By default, this will save checkpoint data for the entire step, not just the fields to be checkpointed against.

credo.systest.restartTest

class credo.systest.restartTest.RestartTest(inputFiles, outputPathBase, basePath=None, nproc=1, timeout=None, paramOverrides=None, solverOpts=None, nameSuffix=None, fieldsToTest=['VelocityField', 'PressureField'], fullRunSteps=20, defFieldTol=1e-05, fieldTols=None)

A Restart System test. This case simply runs a given model for a set number of steps, then restarts half-way through, and checks the same result is obtained. (Thus it’s largely a regression test to ensure checkpoint-restarting works for various types of models.) Uses a FieldWithinTolTC test component to perform the check.

Optional constructor keywords:

  • fullRunSteps: number of steps to do the initial “full” run for. Must be a multiple of 2, so it can be restarted half-way through.
  • fieldsToTest: Which fields in the model should be compared with the reference solution.
  • defFieldTol: The default tolerance to be applied when comparing fields of interest between the restarted, and original solution. See also the FieldWithinTolTC’s defFieldTol.
  • fieldTols: a dictionary of tolerances to use when testing particular fields, rather than the default tolerance defined by the defFieldTol argument.
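
The full-run/restart step arithmetic described for fullRunSteps can be sketched as below. This is an illustrative helper written for this documentation, not a function from the CREDO API.

```python
def restart_run_steps(fullRunSteps):
    """Split a full run into (full_steps, restart_start_step, restart_steps).

    fullRunSteps must be a multiple of 2 so the restart run can begin
    exactly half-way through, as the RestartTest docs require.
    """
    if fullRunSteps % 2 != 0:
        raise ValueError("fullRunSteps must be a multiple of 2")
    half = fullRunSteps // 2
    return fullRunSteps, half, half

print(restart_run_steps(20))  # (20, 10, 10)
```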
fTestName

Standard name to use for this test’s field comparison TestComponent in the testComponents list.

checkModelResultsValid(resultsSet)

See base class checkModelResultsValid().

configureTestComps()
genSuite()

See base class genSuite().

For this test, will create a suite containing 2 model runs: one to initially run the requested Model and save the results, and a second to restart mid-way through, so that the results can be compared at the end.

updateOutputPaths(newOutputPathBase)

See base class updateOutputPaths().

credo.systest.analyticMultiResTest

class credo.systest.analyticMultiResTest.AnalyticMultiResTest(inputFiles, outputPathBase, resSet, basePath=None, nproc=1, timeout=None, paramOverrides=None, solverOpts=None, nameSuffix=None)

Bases: credo.systest.api.SingleModelSysTest

A Multiple Resolution system test. This test can be used to convert any existing system test that analyses fields, to check that the error between the analytic solution fields and the actual results improves at the required rate as the model resolution is increased. Uses a FieldCvgWithScaleTC test component to perform the check.

Optional constructor keywords:

  • resSet: a list of resolutions to use for the test, as tuples. E.g. to specify testing at 10x10 res then 20x20, resSet would be [(10,10), (20,20)]
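
The shape of resSet, and how per-run parameters might be derived from it, can be sketched as below. The parameter names 'elementResI'/'elementResJ' are illustrative guesses for a 2D resolution override, not necessarily the names CREDO uses internally.

```python
def res_set_overrides(resSet):
    """Build one parameter-override dict per model run from a resolution set.

    resSet is a list of resolution tuples, e.g. [(10, 10), (20, 20)] for a
    10x10 run followed by a 20x20 run. The override keys here are
    hypothetical names chosen for illustration.
    """
    return [{"elementResI": ri, "elementResJ": rj} for (ri, rj) in resSet]

print(res_set_overrides([(10, 10), (20, 20)]))
```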
resSet

Set of resolutions to use, as described for the resSet keyword to the constructor.

checkModelResultsValid(resultsSet)

See base class checkModelResultsValid().

configureTestComps()
genSuite()

See base class genSuite().

The generated suite will contain model runs all with the same model XML files, but with increasing resolution as specified by the resSet attribute.

credo.systest.imageReferenceTest

credo.systest.sciBenchmarkTest

class credo.systest.sciBenchmarkTest.SciBenchmarkTest(testName, outputPathBase=None, basePath=None, nproc=1, timeout=None)

Bases: credo.systest.api.SysTest

A Science benchmark test. This is an open-ended system test, designed for the user to add multiple TestComponents to, which test the conditions of the benchmark. Contains extra capabilities to report more fully on the test result than a standard system test.

See the examples section of the CREDO documentation, Scientific Benchmarking using CREDO, for examples of sci benchmarking in practice.

addTestComp(runI, testCompName, testComp)

Add a testComponent (TestComponent) with name testCompName to the list of test components to be applied as part of determining if the benchmark has passed. Does basic error-checking.

configureSuite()
configureTestComps()
setupTest()

Overriding default SysTest.setupTest() method, as for SciBenchmarks we want to allow the user to manage test setup explicitly in their benchmark script. Thus assume suite runs and test components have been setup correctly already.

Core TestComponent implementations

credo.systest.fieldWithinTolTC

class credo.systest.fieldWithinTolTC.FieldWithinTolTC(fieldsToTest=None, defFieldTol=0.01, fieldTols=None, useReference=False, useHighResReference=False, referencePath=None, testTimestep=0)

Bases: credo.systest.api.SingleRunTestComponent

Checks whether, for a particular set of fields, the error between each field and an (analytic or reference) solution is below a specified tolerance.

This relies largely on the functionality of the credo.analysis.fields module.

Other than those that are directly saved as attributes documented below, the constructor arguments of interest are:

fieldsToTest

A list of strings containing the names of fields that should be tested, i.e. those that will be compared with an expected solution. If left as None in the constructor, the fieldsToTest list will be expected to be defined in the StGermain model XML files themselves.

defFieldTol

The default allowed tolerance for global normalised error when comparing Fields with their expected values.

fieldTols

A dictionary, mapping particular field names to particular tolerances to use, overriding the default. E.g. {“VelocityField”:1e-4} means the tolerance used for the VelocityField will be 1e-4.

fComps

A credo.analysis.fields.FieldComparisonList used as an operator to attach to ModelRuns to be tested, and do the actual comparison between fields.

fieldResults

Initially {}, after the test is completed will store a dictionary mapping each field name to a Bool saying whether or not it was within the required tolerance.

fieldErrors

Initially {}, after the test is completed will store a dictionary mapping each field name to a float representing the global normalised error in the comparison.

attachOps(modelRun)

Implements base class credo.systest.api.SingleRunTestComponent.attachOps().

check(mResult)

Implements base class credo.systest.api.SingleRunTestComponent.check().

getTolForField(fieldName)

Utility func: given fieldName, returns the tolerance to use for testing that field (may be given by defFieldTol, or overridden in fieldTols).
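
The lookup this describes amounts to a dictionary get with a fallback; a minimal sketch (standalone function rather than the real method, defaults matching the documented constructor):

```python
def get_tol_for_field(fieldName, defFieldTol=0.01, fieldTols=None):
    """Return the tolerance for a field: the per-field override from
    fieldTols if one exists, otherwise the default tolerance."""
    if fieldTols and fieldName in fieldTols:
        return fieldTols[fieldName]
    return defFieldTol

# Matches the documented example: {"VelocityField": 1e-4} overrides
# the default for that field only.
print(get_tol_for_field("VelocityField", fieldTols={"VelocityField": 1e-4}))  # 0.0001
print(get_tol_for_field("PressureField", fieldTols={"VelocityField": 1e-4}))  # 0.01
```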

credo.systest.fieldCvgWithScaleTC

class credo.systest.fieldCvgWithScaleTC.FieldCvgWithScaleTC(fieldsToTest=None, calcCvgFunc=calcFieldCvgWithScale, fieldCvgCrits={'recoveredTauField': (1.6, 0.99), 'VelocityField': (1.6, 0.99), 'recoveredPressureField': (1.6, 0.99), 'recoveredEpsDotField': (1.6, 0.99), 'PressureField': (0.9, 0.99), 'recoveredSigmaField': (1.6, 0.99), 'StrainRateField': (0.85, 0.99), 'recoveredStrainRateField': (1.6, 0.99)})

Bases: credo.systest.api.MultiRunTestComponent

Checks whether, for a particular set of fields, the error between each field and an (analytic or reference) solution reduces with increasing resolution at a required rate. Thus similar to FieldWithinTolTC, except tests accuracy of solution with increasing model resolution.

This relies largely on the functionality of the credo.analysis.fields module.

fieldsToTest

A list of strings containing the names of fields that should be tested, i.e. those that will be compared with an expected solution. If left as None in the constructor, the fieldsToTest list will be expected to be defined in the StGermain model XML files themselves.

fieldCvgCrits

List of convergence criteria to be used when checking the fields. Currently required to be in the form used by the convergence-checking function credo.analysis.fields.calcFieldCvgWithScale(), which requires tuples of the form (cvg_rate, correlation).

Note

If this list doesn’t contain a cvg criterion for a field that’s tested, the behaviour is to skip the formal test of this field, but print a warning (based on previous SYS test behaviour).

calcCvgFunc

Function to use to calculate convergence of errors of a group of runs - currently uses credo.analysis.fields.calcFieldCvgWithScale() by default.

fComps

A credo.analysis.fields.FieldComparisonList used as an operator to attach to ModelRuns to be tested, and do the actual comparison between fields.

fErrorsByRun

Initially {}, after the test is completed will store a dictionary mapping each field name to a list of floats representing the global normalised error in the comparison for each ModelRun, indexed by run number.

fCvgMeetsReq

Initially {}, after the test is completed will store a dictionary mapping each field name to a Bool recording whether the field error converged acceptably as resolution increased, according to the convergence algorithm used.

fCvgResults

Initially {}, after the test is completed will store a dictionary mapping each field name to a tuple containing information on actual convergence rate. See the return value of credo.analysis.fields.calcFieldCvgWithScale() for more.

attachOps(modelRuns)

Implements base class credo.systest.api.MultiRunTestComponent.attachOps().

check(resultsSet)

Implements base class credo.systest.api.MultiRunTestComponent.check().

As well as performing the check, will save relevant info to the attributes fErrorsByRun, fCvgMeetsReq, and fCvgResults.

credo.systest.fieldCvgWithScaleTC.getDofErrorsByRun(fComp, resultsSet)

For a given field comparison op, get all the dof errors from a set of runs, indexed primarily by run index.

credo.systest.fieldCvgWithScaleTC.getNumDofs(fComp, mResult)

Hacky utility function to get the number of dofs of an fComp, by checking the result. Need to do this smarter/neater.

credo.systest.fieldCvgWithScaleTC.printCvgResult(fieldName, fieldConvResults)
credo.systest.fieldCvgWithScaleTC.testAllCvgWithScale(lenScales, fieldErrorData, fieldCvgCriterions)

Given a list of length scales, field error data (a dictionary mapping field names to dofError lists for that field), and field convergence criteria, returns a Bool specifying whether all the fields met their required convergence criteria.

The first two arguments can be created by running getFieldScaleCvgData_SingleCvgFile() on a path containing a single cvg file.

credo.systest.fieldCvgWithScaleTC.testCvgWithScale(fieldName, fieldConvResults, fieldCvgCriterion)

Tests, for a given field, that a list of fieldConvResults (see credo.analysis.fields.calcFieldCvgWithScale()) converges according to the given fieldCvgCriterion.

Returns:result of test (Bool)
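
The criterion check can be sketched as below. The semantics are an assumption inferred from the (cvg_rate, correlation) tuple form documented above - both the observed convergence rate and the correlation must reach their required values - and this is not the verified CREDO algorithm.

```python
def cvg_meets_criterion(fieldConvResult, fieldCvgCriterion):
    """Check one (cvg_rate, correlation) convergence result against a
    (required_rate, required_correlation) criterion tuple.

    Assumed semantics (a sketch, not verified against CREDO): both the
    rate and the correlation must reach the required values.
    """
    rate, corr = fieldConvResult
    reqRate, reqCorr = fieldCvgCriterion
    return rate >= reqRate and corr >= reqCorr

# e.g. against the default VelocityField criterion (1.6, 0.99):
print(cvg_meets_criterion((1.7, 0.995), (1.6, 0.99)))  # True
print(cvg_meets_criterion((1.2, 0.995), (1.6, 0.99)))  # False
```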

credo.systest.outputWithinRangeTC

class credo.systest.outputWithinRangeTC.OutputWithinRangeTC(outputName, reductionOp, allowedRange, tRange=None, opDict=None)

Bases: credo.systest.api.SingleRunTestComponent

Test component to check that a given output parameter (found in the frequent output) is within a given range, and optionally also that this occurs within a given set of model times.

See also

credo.io.stgfreq.

outputName

The name of the model observable to check, as it’s recorded in the Frequent Output file. E.g. “Vrms”.

reductionOp

The reduction operation to perform in choosing where the value should be checked. Simple examples using Python built-ins could be:

  • max() - the Maximum value
  • min() - the Minimum value
allowedRange

The allowed range for the parameter to fall into for the test to pass. A tuple of (min,max) form.

tRange

(Optional) determines if a secondary check should be performed: that the parameter’s value checked (e.g. max) also fell within a given range of model simulation times, as a (min,max) tuple. If None, this secondary check won’t be performed.

actualVal

After the check is performed, the actual value of the parameter is recorded here.

actualTime

After the check is performed, records the model sim time at which the parameter’s chosen property (e.g. max or min) occurred.

withinRange

After the check is performed, records a Bool saying whether the test component passed.

opDict

(Optional) will later be passed as keyword arguments to the reductionOp function - so use this if the reduction op function requires them.
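
Putting the attributes above together, the check can be sketched on plain (time, value) pairs standing in for one observable column of a FrequentOutput file. This is an illustrative re-implementation under those assumptions, not the real TC; note that Python's built-in max/min work directly as the reductionOp here via their key argument.

```python
def check_output_within_range(freqOutput, reductionOp, allowedRange, tRange=None):
    """Sketch of an OutputWithinRangeTC-style check.

    freqOutput: list of (time, value) pairs standing in for a frequent
    output column. reductionOp picks the pair to check (e.g. max or min,
    reducing by value). Returns (passed, actualVal, actualTime).
    """
    actualTime, actualVal = reductionOp(freqOutput, key=lambda tv: tv[1])
    passed = allowedRange[0] <= actualVal <= allowedRange[1]
    if passed and tRange is not None:
        # Secondary check: the chosen value must occur within tRange.
        passed = tRange[0] <= actualTime <= tRange[1]
    return passed, actualVal, actualTime

data = [(0.0, 1.0), (1.0, 3.5), (2.0, 2.0)]
print(check_output_within_range(data, max, (3.0, 4.0)))  # (True, 3.5, 1.0)
```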

attachOps(modelRun)

Implements base class credo.systest.api.SingleRunTestComponent.attachOps().

Note

Currently does nothing. The intent is to make it ensure the correct plugin is set to be loaded (to make sure the observable is generated in FrequentOutput.dat).

check(mResult)

Implements base class credo.systest.api.SingleRunTestComponent.check().

credo.systest.imageCompTC