Notes on Oberkampf & Trucano (2002): Sections 1 and 2

Oberkampf, W. L., & Trucano, T. G. (2002). Verification and validation in computational fluid dynamics. Progress in Aerospace Sciences, 38(3), 209-272.

Contents

  1. General
  2. Introduction
  3. Historical Terminology
  4. Historical Contributors
  5. Verification Methodology
  6. Validation Methodology

General

Good historical reference, with masses of references. The paper somewhat emphasises the validation experiment as an experiment designed specifically to validate numerical codes, rather than one whose primary purpose is physical insight.

Useful Quotes

Why numerically model? (Page 211):

“This new trend of modeling and simulation based design… is also driven by the high cost and time that are required for laboratory testing or field components, as well as complete systems.”

How well established is V&V? (Page 211):

“The state of the art has not developed to the point where one can clearly point out all of the actual methods, procedures, and process steps that must be undertaken for V&V.”

Present process is not OK (Page 211):

“The present method of qualitative “graphical validation,” i.e., comparison of computational results and experimental data on a graph, is inadequate.”

V&V is difficult! (Page 211):

“We recognise, however, that the complexities of the quantification of V&V are substantial, from both a research perspective and a practical perspective.”

V&V is important in complex systems (Page 214):

“Regardless of the difficulties and constraints, methods must be devised for measuring the accuracy of the model for as many conditions as the model is deemed appropriate. As the complexity of a model increases, its accuracy and range of applicability can become questionable.”

Code Verification is challenging (Page 215):

“the veracity, correctness, and accuracy of a computational model cannot be demonstrated for all possible conditions and applications, except for trivial models.”

Verification is only for tested cases (Page 215):

V&V activities can only assess the correctness or accuracy of the specific cases tested.

General V&V doctrine (Page 215):

“Verification is the first step of the validation process and, while not simple, is much less involved than the more comprehensive nature of validation. Validation addresses the question of the fidelity of the model to specific conditions of the real world. The terms “evidence” and “fidelity” both imply the concept of “estimation of error,” not simply “yes” or “no” answers.”

Graphical comparisons are not good enough (Page 216):

“The typical validation procedure in CFD, as well as other fields, involves graphical comparison of computational results and experimental data. If the computational results “generally agree” with the experimental data, the computational results are declared “validated”. Comparison of computational results and experimental data on a graph, however, is only incrementally better than a qualitative comparison.”

Missing information in graphical comparison (Page 216):

With a graphical comparison, one does not commonly see quantification of the numerical error or quantification of computational uncertainties due to missing initial conditions, boundary conditions, or modeling parameters. Also, an estimate of experimental uncertainty is not typically quoted, and in most cases it is not even available.

Poor verification practices in academia (Page 217):

“Upon examining the CFD literature as a whole, it is our view that verification testing of computer codes is severely inadequate.”

Section 1

Introduction

References are given to leading authorities in V&V.

The paper identifies some critical needs on page 212.

A distinction is drawn between error, uncertainty, code verification and solution verification, which is apparently discussed in Section 3.

There is significant emphasis on the hierarchical methodology for validation.

Section 2

Historical Terminology (p213)

Early influences on V&V were Popper, Carnap and the Operations Research (OR) community. Because of the complexity of the systems that OR attempted to deal with, validation was virtually impossible in that context.

The Society for Computer Simulation (SCS) published the first definitions of verification and validation, which talked a lot about substantiation and evidence of correctness.

The SCS also defined something called qualification, which asks how well the conceptual model represents reality. This is not validation, which asks how well the numerical model represents reality. See Fig. 1 on page 213.

The IEEE definitions are also given, but these may not be appropriate for the computational sciences as they are referential, i.e. they need another document to define the requirements. Yet,

“the IEEE definitions are the more prevalent definitions used in engineering, and one must be aware of the potential confusion when other definitions are used.”

The American Institute of Aeronautics and Astronautics (AIAA) provided the de facto definitions of verification and validation.

V&V is an ongoing activity, i.e. there is no completion; we are always building evidence, as Roache also asserts.

Accuracy is common to most definitions of V&V, which implies that a measure of correctness can be determined.

Historical Contributors (p215)

Talks about contributors from the CFD community and also from environmental quality modelling, such as surface and ground water flows. The water-quality work is relevant to wave energy modelling. As Oberkampf puts it:

“it addresses validation for complex processes in the physical sciences where validation of models is extremely difficult, if not impossible. Second, because of the limited knowledge, the environmental-modeling field has adopted statistical methods of calibration and validation assessment.”

There is discussion on the lack of information in purely graphical comparisons and some discussion on the selection of metrics for validation, such as

“A metric would quantify both errors and uncertainties in the comparison of computational results and experimental data.”
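The paper develops this idea further, but as a flavour of what such a metric could look like, here is a minimal Python sketch of my own (not the paper's metric): the mean relative discrepancy between computation and experiment, plus the fraction of points that fall inside the experimental uncertainty band. The pressure values are hypothetical.

```python
def validation_metric(y_comp, y_exp, u_exp):
    """Toy validation metric: mean relative error between computational
    and experimental results, and the fraction of points where the
    computation falls inside the experimental uncertainty band."""
    n = len(y_exp)
    mean_rel_err = sum(abs(c - e) / abs(e) for c, e in zip(y_comp, y_exp)) / n
    inside = sum(abs(c - e) <= u for c, e, u in zip(y_comp, y_exp, u_exp))
    return mean_rel_err, inside / n

# Hypothetical surface-pressure comparison at four measurement locations
err, frac = validation_metric(
    y_comp=[1.02, 0.95, 0.88, 0.70],
    y_exp=[1.00, 0.97, 0.85, 0.75],
    u_exp=[0.03, 0.03, 0.04, 0.04],
)
print(f"mean relative error = {err:.1%}, within uncertainty = {frac:.0%}")
```

Even something this crude captures both ingredients the quote asks for, an error measure and an uncertainty measure, neither of which a graph alone provides.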

There is also some information on validation databases, although these are described as “ad hoc and duplicative”! I had an idea here about having “open tests” rather than open code, so that at least people could independently verify and validate black-box codes.

Verification Methodology (p217)

Why is the UK not represented on the AIAA committee?

The AIAA guide was one of the first standards published for V&V and is a “first level” document, denoting the early developmental stage of the processes.

Oberkampf states that benchmarks (or a fiducial) are required for verification, but if you’re verifying the numerics for a solution then there are no benchmarks, right?

“Verification, thus, provides evidence (substantiation) that the conceptual (continuum mathematics) model is solved correctly by the discrete mathematics embodied in the computer code.”

Important note on the sources of error:

“Given a numerical procedure that is stable, consistent, and robust, the five major sources of errors in CFD solutions are insufficient spatial discretization convergence, insufficient temporal discretization convergence, insufficient convergence of an iterative procedure, computer round-off, and computer programming errors.”

There is a distinction made between solution verification (the first four errors listed above) and code verification (the last error). Code verification is a matter of Software Quality Engineering (SQE), which academics don't do very well for reasons we know too well.
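As a concrete illustration of estimating the first of those five errors (insufficient spatial discretization convergence), here is a minimal Python sketch of Richardson extrapolation on three systematically refined grids. This is a standard solution-verification technique associated with Roache, not something quoted from this paper, and the drag-coefficient values are hypothetical.

```python
import math

def richardson_estimate(f1, f2, f3, r):
    """Estimate the discretization error from solutions on three grids.

    f1, f2, f3: scalar results on the fine, medium and coarse grids
    r: constant grid refinement ratio (e.g. 2.0)
    """
    # Observed order of accuracy implied by the three solutions
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    # Richardson-extrapolated (approximately grid-independent) value
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    # Estimated discretization error on the fine grid
    return p, f_exact, abs(f1 - f_exact)

# Hypothetical drag coefficients from fine, medium and coarse grids
p, f_exact, err = richardson_estimate(0.500, 0.512, 0.560, 2.0)
print(f"observed order = {p:.1f}, extrapolated = {f_exact:.3f}, error = {err:.3f}")
```

Checking that the observed order p matches the formal order of the scheme is itself a useful, if weak, form of code verification.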

Validation Methodology (p218)

“Comparison between sufficiently accurate computational results and experimental data.” But what is “sufficiently accurate”? Oberkampf does go on to discuss the quantification of numerical accuracy later in the paper, but it's not driven home early, as it is in later work.

Note that validation does not “specifically address the inference of the model's accuracy for cases different from the validation comparison.”

“validation involves identification and quantification of the error and uncertainty in the conceptual and computational models, quantification of the numerical error in the computational solution, estimation of the experimental uncertainty, and finally, comparison between the computational results and the experimental data.”

So there is discussion of the numerical error in the validation process, rather than the verification process. I guess the paper sort of tries to drive that home a little bit, but there is a lot of cross-over of the processes, which is a tad confusing.
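That cross-over can be made explicit. The sketch below is my own construction rather than the paper's procedure: it combines the quantities listed in the quote above, a numerical error estimate (such as the Richardson estimate earlier), a model-input uncertainty and the experimental uncertainty, into one validation uncertainty against which the computation-experiment discrepancy is judged. All values are hypothetical.

```python
import math

def validation_comparison(s, d, u_num, u_input, u_exp):
    """Compare a computational result s with an experimental result d,
    given estimates of the numerical error (u_num), the model-input
    uncertainty (u_input) and the experimental uncertainty (u_exp)."""
    e = s - d  # comparison error between computation and experiment
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)  # combined uncertainty
    return e, u_val

e, u_val = validation_comparison(s=0.496, d=0.489, u_num=0.004,
                                 u_input=0.005, u_exp=0.006)
print(f"E = {e:+.3f}, u_val = {u_val:.3f}, |E| <= u_val: {abs(e) <= u_val}")
```

If |E| is no larger than u_val, the model error cannot be distinguished from the noise in the comparison; this is roughly the reasoning later standardised in ASME V&V 20.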

Continued discussion of the validation hierarchy. Most discussion is about experimental measurements at each tier of the problem.

There is definitely this “verification of calculation” distinction, with the error calculations reappearing in the validation part. This is discussed in more detail in Sections 3 and 4, but it appears these things are less segregated in the latest thinking.

© 2013 Mathew B. R. Topper