Notes on Cheung et al. (2011)

Cheung, S. H., Oliver, T. A., Prudencio, E. E., Prudhomme, S., & Moser, R. D. (2011). Bayesian uncertainty analysis with applications to turbulence modeling. Reliability Engineering & System Safety, 96(9), 1137-1149.

General

A recent paper using Bayesian uncertainty analysis to investigate the uncertainty in various turbulence models within a RANS solver. Unlike the work of O’Hagan (2006), they do not use an emulator; instead, Bayesian techniques are used to select an appropriate uncertainty model to embed within the simulator (here OpenFOAM).

Validation

There is an interesting quote about the use of the term validation:

“we follow Babuška et al. and define validation as the process of determining whether a mathematical model is a sufficient representation of reality to be used in making particular decisions… Note, however, that the word “validation” has also been used differently by other authors.

“This validation of the use of a model for a particular purpose is somewhat different from validation of a model (or theory) as a description of the physical world. In the latter case, the critical question is whether the theory is consistent with observations, and any inconsistency is grounds for rejection of the theory. Alternatively, in this work, we accept that a model may be inconsistent with observations; the critical question is whether the inconsistency is so large as to introduce unacceptably large discrepancies in the predictions.”

This reads more like a discussion of adequacy than of validation, and perhaps it is the “model (or theory)” wording that is the issue. The model and the theory are not the same thing as far as I am concerned, but perhaps the quote reflects that distinction.

Multiple Stochastic Model Classes

Much of the paper is dedicated to the selection of a stochastic model to fit the uncertainty in the turbulence model. Three different stochastic model classes, $M_{j}$, are attempted and the most plausible is found by some crazy maths. Once the best model class is found, it is used to calculate the calibration parameters.
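
For the record, the “crazy maths” is (as far as I can tell) just Bayes’ theorem applied one level up, to the model classes themselves rather than to their parameters:

$$P(M_{j} \mid D) = \frac{p(D \mid M_{j})\, P(M_{j})}{\sum_{i} p(D \mid M_{i})\, P(M_{i})}$$

where the evidence $p(D \mid M_{j})$ is the likelihood of the data $D$ integrated over the parameters of class $M_{j}$, and the class with the highest posterior plausibility wins.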

The Discrete Model

As mentioned earlier, the numerical model used was OpenFOAM. Two meshes, of 2080 and 4200 cells, were used for the three cases. These are quite coarse really, and while they do a convergence study on them, they do not then consider the numerical error in the statistical analysis, which is odd. They put it that

“the discretization errors are smaller than the uncertainties in the problem (i.e., the model and data uncertainties described in the next sections).”

I think this is a real indicator of the difference in approaches, in that the foibles of the numerical model itself are not really of interest to the statistician.
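
For concreteness, the kind of two-grid error estimate behind such a convergence claim looks something like the sketch below. The cell counts match the paper’s meshes, but the refinement ratio (inferred assuming uniform 2D refinement), the assumed second-order scheme and the sample values are mine.

```python
# Two-grid discretisation error estimate via Richardson extrapolation.

def discretisation_error(f_coarse, f_fine, n_coarse, n_fine,
                         order=2, dim=2):
    """Estimate the error remaining in the fine-grid solution."""
    # Effective refinement ratio, assuming uniform refinement in
    # `dim` dimensions
    r = (n_fine / n_coarse) ** (1.0 / dim)
    return (f_fine - f_coarse) / (r ** order - 1.0)

# Illustrative numbers only, not results from the paper
err = discretisation_error(f_coarse=10.20, f_fine=10.05,
                           n_coarse=2080, n_fine=4200)
print(f"Estimated fine-grid discretisation error: {err:.4f}")
```

If an estimate like this is small relative to the model and data uncertainties, the paper’s position is that it can be neglected in the statistical analysis.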

Results

Of the three stochastic model classes addressed, the $M_{3}$ class is found to be the most plausible. The $M_{3}$ model has the form of a “Correlated Gaussian Uncertainty”, given by

$$\boldsymbol{\eta} \sim N(\mathbf{1}, K_{m})$$

where $K_{m}$ is a covariance matrix determined from a Gaussian random field model of the velocity field. That model is quite complicated, so I guess there is good reason for it to be the most plausible.
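
To make the structure concrete, here is a minimal sketch of drawing one realisation of such a multiplicative error field. The squared-exponential kernel and all the parameter values are my own assumptions; the paper’s random field model is more involved.

```python
import numpy as np

def se_covariance(y, sigma=0.05, length_scale=0.1):
    """Squared-exponential covariance between all pairs of points in y."""
    d = y[:, None] - y[None, :]
    return sigma ** 2 * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(seed=0)
y = np.linspace(0.0, 1.0, 50)        # wall-normal coordinate
u_model = y ** (1.0 / 7.0)           # stand-in RANS velocity profile

# eta ~ N(1, K_m), with a small jitter added for numerical stability
K_m = se_covariance(y) + 1e-10 * np.eye(y.size)
eta = rng.multivariate_normal(mean=np.ones(y.size), cov=K_m)

u_sample = eta * u_model             # one draw of the uncertain velocity
```

The point of the correlation is that the error at neighbouring points of the velocity profile moves together, rather than being independent noise at each point.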

Interestingly, in both the calibration and validation of predicted outcomes, the $M_{3}$ model has the most uncertain outcomes. Thus, they stress that it is quite important to ascertain how good the uncertainty model in use actually is.

Conclusions

The paper concludes by stating that a number of challenges remain, including evaluating more uncertainty models and different turbulence models.

They also say that the computational effort is an issue and that emulators might be required.

There is also a realisation that, although the methodology allows some models to be rejected, it does not ensure validation; that may require additional experimental data and an examination of its effect.

Finally, some discussion of validation failure and what that means for the model is broached. This highlights the difficulties of separating model development from validation when using Bayesian approaches.

© 2013 Mathew B. R. Topper