Analysis API

The CREDO Analysis module groups together tools for analysing the results of Underworld models, for use both in individual user scripts and as part of system testing (see credo.systest).

Analysis operations that are designed to be added to ModelRun instances, and possibly used as part of system testing, should inherit from AnalysisOperation. There are also several functions for accessing and post-processing Underworld results more directly, using the credo.io interface.

For examples of using the analysis operations with models, see the Doing Model analysis with CREDO section of the documentation.

credo.analysis.api

This is the core interface for analysis operations in CREDO.

class credo.analysis.api.AnalysisOperation

Abstract base class for Analysis Operations in CREDO: i.e. that require some analysis to be done during a ModelRun. All instances should provide at least this standard interface so that records of analysis can be stored.

postRun(modelRun, runPath)

Does any required post-run actions for this analysis op, e.g. moving generated files into the correct output directory. Is passed a reference to a ModelRun, so it can use its attributes such as outputPath.

writeInfoXML(parentNode)

Virtual method for writing Information XML about an analysis op, that will be saved as a record of the analysis applied.

writeStgDataXML(rootNode)

Writes the necessary StGermain XML to require the analysis to take place. This is likely to include creating and configuring Components, and adding them to the Components dictionary. See credo.io.stgxml for the interface for setting these up.
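
A minimal sketch of a custom analysis operation implementing this interface. The class, its filename handling, and the XML element names are hypothetical, and the standard-library ElementTree is used purely for illustration (CREDO's own helpers in credo.io.stgxml would normally be used for the StGermain XML):

    import os
    import shutil
    import xml.etree.ElementTree as etree

    from credo.analysis.api import AnalysisOperation

    class FileCopyOp(AnalysisOperation):
        """Hypothetical op that copies a generated data file into the
        ModelRun's output directory after the run."""

        def __init__(self, filename):
            self.filename = filename

        def postRun(self, modelRun, runPath):
            # Move the generated file from the run directory into the
            # model's configured output path.
            srcPath = os.path.join(runPath, self.filename)
            if os.path.exists(srcPath):
                shutil.move(srcPath, modelRun.outputPath)

        def writeInfoXML(self, parentNode):
            # Record which file this op manages (element and attribute
            # names here are illustrative only).
            opNode = etree.SubElement(parentNode, 'fileCopyOp')
            opNode.attrib['filename'] = self.filename

        def writeStgDataXML(self, rootNode):
            # A real op would add the StGermain components needed to
            # produce self.filename here (see credo.io.stgxml).
            pass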

credo.analysis.fields

CREDO functions and classes for doing analysis of Fields in StGermain-based codes.

Currently this is primarily based on the FieldTest component/plugin within StgFEM, which allows comparison between one or more Fields in a model run (e.g. “VelocityField”) and either saved reference fields or analytic solutions.

The CREDO Field functions allow either single-model comparisons (e.g. those defined by FieldComparisonOp), or analysis of fields across multiple runs (e.g. calcFieldCvgWithScale()).

Note

In future it’s planned to add functions that load a checkpointed field result into Python for further analysis, but this feature is not yet implemented.

class credo.analysis.fields.FieldComparisonList(fieldsList=None)

Bases: credo.analysis.api.AnalysisOperation

Class for maintaining and managing a list of field comparisons (managed as a list of FieldComparisonOp objects), including IO from StGermain XML files.

Currently maps to the “FieldTest” component’s functionality in StgFEM.

Note

Currently the whole list of Field comparisons is a single credo.analysis.api.AnalysisOperation, because this is the design of the FieldTest component in StgFEM. In future we may look at modularising this functionality further so that single comparisons can be managed as operators.

fields

A dictionary mapping the names of fields that need to be compared to the FieldComparisonOp that performs each comparison.

fromXML

If True, the list of fields to compare (i.e. fields) should be read from the Model XML files of the model this operation is attached to. If False, the user has to manually specify the fields to compare.

useReference

Determines whether fields are compared against a reference solution (if True), or an analytic solution (if False). If useReference is True, the user must also specify referencePath so that the appropriate StGermain XML for the operation can be written.

useHighResReference

Determines whether fields are compared against a high-res reference solution (if True), or an analytic solution (if False). If useHighResReference is True, the user must also specify referencePath so that the appropriate StGermain XML for the operation can be written.

Note

Don’t also specify useReference; choose one or the other.

referencePath

(Relative or absolute) path to the reference solutions for the specified fields.

testTimestep

Integer: the model timestep at which the comparison will occur. If 0, the comparison occurs at the final timestep. Based on the capability of the StGermain FieldTest component.

add(fieldComparisonOp)

Add another FieldComparisonOp to the list to compare.

checkStgXMLResultsEnabled(inputFilesList, basePath)

Checks whether the field comparison has writing of comparison info to file enabled (returns a Bool).

getAllResults(modelResult)

Return a list of FieldComparisonResult objects, based on all the FieldComparisonOps specified to be done during a run, from the given modelResult (a ModelResult).

getCmpSrcString()

Returns an appropriate string to document the comparison source of the fields being compared - i.e. either reference or analytic.

postRun(modelRun, runPath)

Implements AnalysisOperation.postRun(). In this case, moves all CVG files created to the output path.

readFromStgXML(inputFilesList, basePath)

Read in the list of fields that have already been specified to be tested from a set of StGermain input files. Useful when e.g. working with an Analytic Solution plugin.

writeInfoXML(parentNode)

Writes information about this class into an existing, open XML doc node, in a child element.

writeStgDataXML(rootNode)

Writes the necessary StGermain XML to enable these specified fields to be compared.
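
A sketch of setting up a comparison of two fields against a saved reference solution, using the attributes and methods documented above. The paths and field names are illustrative, and the way the operation is attached to a ModelRun is only indicated in a comment, as that belongs to the ModelRun API:

    from credo.analysis import fields

    # Compare two fields against saved reference solutions, at the final
    # timestep (paths and field names are illustrative).
    fieldComps = fields.FieldComparisonList()
    fieldComps.useReference = True
    fieldComps.referencePath = "./expected/referenceSoln"
    fieldComps.testTimestep = 0    # 0 means the final timestep
    fieldComps.add(fields.FieldComparisonOp('VelocityField'))
    fieldComps.add(fields.FieldComparisonOp('PressureField'))

    # After attaching this op to a ModelRun and running the model, the
    # comparison results can be read back from the resulting ModelResult:
    #   fieldResults = fieldComps.getAllResults(modelResult)

Alternatively, if the model's XML input files already define the fields to be tested (e.g. via an Analytic Solution plugin), readFromStgXML() together with the fromXML attribute can be used instead of adding the comparison ops manually.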

class credo.analysis.fields.FieldComparisonOp(fieldName)

Class for setting up and performing a comparison between two Fields. Currently uses the functionality of the FieldTest component in StgFEM, and requires using a FieldComparisonList to run a group of FieldComparisons at once (this is as a result of the structure of the FieldTest component).

name

Name of the field that is being compared (to an analytic or reference solution).

getResult(modelResult)

Gets the result of the operator on the given fields (as a FieldComparisonResult), given a modelResult (ModelResult) which refers to a directory containing field comparisons (i.e. cvg files, see credo.io.stgcvg).

writeInfoXML(parentNode)

Writes info about a comparison op: currently assumes it will be called by FieldComparisonList.writeInfoXML().

class credo.analysis.fields.FieldComparisonResult(fieldName, dofErrors)

Simple class for storing CREDO FieldComparisonOp Results, so they can be analysed and saved.

By default only contains the difference between the field DOFs at the final timestep - but recording a reference to the credo.io.stgcvg.CvgFileInfo for this field allows more complex analysis.

fieldName

Name of the field that has been compared.

dofErrors

Comparison errors for each DOF of the field, at the final timestep that was run.

cvgFileInfo

A credo.io.stgcvg.CvgFileInfo allowing detailed access to the CVG result for this field (required for plotting etc.). This attribute is optional, and needs to be set after the class has been constructed.

plottedCvgFilename

If the plotOverTime() method has been called, this attribute will record the filename the plot was saved to.

plotOverTime(save=True, show=False, dofIndex=None, path='.')

Plot the result of a FieldComparison over all timesteps of a model. Requires the cvgFileInfo parameter to have been set to give access to the cvg info of this field.

Note

Requires you to have the Matplotlib library installed.

The ‘show’, ‘save’ and ‘path’ parameters are the same as for credo.io.stgfreq.FreqOutput.plotOverTime(). The optional ‘dofIndex’ parameter allows you to plot only a particular DOF of the field; otherwise all DOFs will be plotted on separate graphs.

withinTol(tol)

Checks that the difference between the fields is within a given tolerance, at the final timestep.

writeInfoXML(fieldResultsNode)

Writes information about a FieldComparisonResult into an existing, open XML doc node.
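
A sketch of checking and plotting a single comparison result, assuming a fieldComps operation and a modelResult from a completed run as in the FieldComparisonList sketch above; the 1% tolerance is illustrative, and the plot requires Matplotlib:

    # Assuming 'fieldComps' and 'modelResult' from a completed run.
    velocityOp = fieldComps.fields['VelocityField']
    fRes = velocityOp.getResult(modelResult)

    # Check every DOF error at the final timestep is within 1% tolerance.
    if not fRes.withinTol(0.01):
        print("VelocityField outside tolerance: %s" % (fRes.dofErrors,))

    # Plot the error history over all timesteps (requires the cvgFileInfo
    # attribute to have been set, and Matplotlib to be installed).
    if getattr(fRes, 'cvgFileInfo', None) is not None:
        fRes.plotOverTime(save=True, show=False, path='.')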

credo.analysis.fields.calcFieldCvgWithScale(fieldName, lenScales, dofErrors)

Gets the convergence and correlation of a field with resolution (taking the log10 of both).

The lenScales argument should simply be an array of length scales for the different runs. dofErrors must be a list containing, for each DOF of the field, the errors vs some expected solution at the corresponding length scales.

Returns a list of tuples, one per DOF, where each tuple contains (convergence rate, Pearson correlation) over the set of scales.
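
A small worked example for a single-DOF field at three resolutions; the numbers are illustrative, and in practice the length scales and per-DOF errors would usually come from getFieldScaleCvgData_SingleCvgFile() (documented below):

    from credo.analysis import fields

    # Per-DOF errors for a field at three length scales (values illustrative).
    lenScales = [0.1, 0.05, 0.025]
    dofErrors = [[4.0e-3, 1.1e-3, 2.8e-4]]    # one error list per DOF

    # One (convergence rate, Pearson correlation) tuple is returned per DOF.
    cvgInfo = fields.calcFieldCvgWithScale('VelocityField', lenScales, dofErrors)
    cvgRate, correlation = cvgInfo[0]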

credo.analysis.fields.getFieldScaleCvgData_SingleCvgFile(cvgFilePath)

Given a path that CVG files reside in, returns the length scales of each run (as a list), and a list of field error data for each field/cvg info in the given path. This is a utility function for generating the necessary field error data for a multi-res convergence analysis.

Note

This assumes all cvg info is stored in the same convergence file (the default approach of the legacy SYS tests).

credo.analysis.stats

A library of useful stats functions for analysis operations.

The aim is for simple functions to be able to run without further dependencies, with more advanced stats libraries such as SciPy able to be loaded at the user’s discretion.

credo.analysis.stats.linreg(X, Y)

Summary: Linear regression of y = ax + b.

Usage: real, real, real = linreg(list, list)

Returns the coefficients of the regression line “y = ax + b” fitted to x[] and y[], plus the R^2 value.

(Obtained from http://www.answermysearches.com/how-to-do-a-simple-linear-regression-in-python/124/.) Useful for field analysis, e.g. when applied to a list of length scales and field errors to calculate convergence info, as done in credo.analysis.fields.calcFieldCvgWithScale().
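
A quick worked example of the return values, with illustrative data lying exactly on a line so the fit is easy to check:

    from credo.analysis import stats

    # y = 2x + 1 exactly, so the fit should recover a ~ 2, b ~ 1, R^2 ~ 1.
    X = [0.0, 1.0, 2.0, 3.0]
    Y = [1.0, 3.0, 5.0, 7.0]
    a, b, rSq = stats.linreg(X, Y)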

credo.analysis.images

credo.analysis.modelplots

Collection of utility functions for plotting interesting aspects of models.

credo.analysis.modelplots.getNumEls(mRun)

Calculate the number of elements used by a model run.

credo.analysis.modelplots.getSpeedups(mRuns, mResults, profilerName=None)
credo.analysis.modelplots.getTimePerEls(mRuns, mResults, profilerName=None)
credo.analysis.modelplots.getValsFromAllRuns(mResults, outputName)
credo.analysis.modelplots.plotOverAllRuns(mResults, outputName, depName='Timestep', show=False, save=True, path='.', labelNames=True)

Create a plot of values over multiple runs.

credo.analysis.modelplots.plotSpeedups(mRuns, mResults, profilerName=None, show=False, save=True, path='.', showIdeal=True)

Plot the speedup of a set of mResults, by processor.

credo.analysis.modelplots.plotTimePerEls(mRuns, mResults, profilerName=None, show=False, save=True, path='.', showIdeal=True)

Plot the time taken per element for a set of mResults, by processor.

credo.analysis.modelplots.plotWalltimesByRuns(mRuns, mResults, profilerName=None, show=False, save=True, path='.', showIdeal=True)
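
A sketch of the plotting helpers applied to a completed suite of runs; mRuns and mResults are assumed to be lists of ModelRun and ModelResult objects, and the output name 'Vrms' and the plot path are illustrative:

    from credo.analysis import modelplots

    # Plot an observable (name illustrative) against timestep for every run.
    modelplots.plotOverAllRuns(mResults, 'Vrms', depName='Timestep',
        show=False, save=True, path='output/plots')

    # Plot parallel speedup by processor count, including an ideal line.
    modelplots.plotSpeedups(mRuns, mResults, show=False, save=True,
        path='output/plots', showIdeal=True)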
