Sensitivity auditing is an extension of sensitivity analysis for use in policy-relevant modelling studies. Its use is recommended - e.g. in the European Commission impact assessment guidelines and by the European science academies - when a sensitivity analysis (SA) of a model-based study is meant to demonstrate the robustness of the evidence provided by the model, but in a context where the inference feeds into a policy or decision-making process.
In settings where scientific work feeds into policy, the framing of the analysis, its institutional context, and the motivations of its author may become highly relevant, and a pure SA - with its focus on parametric (i.e. quantified) uncertainty - may be insufficient. The emphasis on framing may derive, among other things, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about `what the problem is' and foremost about `who is telling the story'. Most often the framing includes implicit assumptions, ranging from the political (e.g. which group needs to be protected) to the technical (e.g. which variable can be treated as a constant).
In order to take these concerns into due consideration, sensitivity auditing extends the instruments of sensitivity analysis to provide an assessment of the entire knowledge- and model-generating process. It takes inspiration from NUSAP, a method used to qualify the worth (quality) of quantitative information through the generation of `Pedigrees' of numbers. Likewise, sensitivity auditing has been developed to provide pedigrees of models and model-based inferences. Sensitivity auditing is especially suitable in an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, is the subject of partisan interests. These are the settings considered in post-normal science or in Mode 2 science. Post-normal science (PNS) is a concept developed by Silvio Funtowicz and Jerome Ravetz, which proposes a methodology of inquiry that is appropriate when “facts are uncertain, values in dispute, stakes high and decisions urgent” (Funtowicz and Ravetz, 1992: 251–273). Mode 2 science, coined in 1994 by Gibbons et al., refers to a mode of production of scientific knowledge that is context-driven, problem-focused and interdisciplinary. Carrozza (2015) offers a discussion of these concepts and approaches. Sensitivity auditing - together with post-normal science - is one of the lenses recommended for the study of sustainability.
Sensitivity auditing is recommended by the European Commission for use in impact assessments in order to improve the quality of model-based evidence used to support policy decisions. Similar recommendations can be found in the report from SAPEA, the European academies' association for science advice to policy.
Sensitivity auditing is summarised by seven rules or guiding principles:
- Check against the rhetorical use of mathematical modeling. Question addressed: is the model being used to elucidate or to obfuscate?;
- Adopt an `assumption hunting' attitude. Question addressed: what was `assumed out'? What are the tacit, pre-analytical, possibly normative assumptions underlying the analysis?;
- Detect Garbage In Garbage Out (GIGO). Issue addressed: artificial deflation of uncertainty operated in order to achieve a desired inference at a desired level of confidence. It also works on the reverse practice, the artificial inflation of uncertainties, e.g. to deter regulation;
- Find sensitive assumptions before they find you. Issue addressed: anticipate criticism by doing careful homework via sensitivity and uncertainty analyses before publishing results.
- Aim for transparency. Issue addressed: stakeholders should be able to make sense of, and possibly replicate, the results of the analysis;
- Do the right sums, which is more important than `Do the sums right'. Issue addressed: is the viewpoint of a relevant stakeholder being neglected? Who decided that there was a problem and what the problem was?
- Focus the analysis on the key question answered by the model, exploring the entire space of the assumptions holistically. Issue addressed: don't perform perfunctory analyses that just `scratch the surface' of the system's potential uncertainties.
The first rule looks at the instrumental use of mathematical modeling to advance one's agenda. This use is called rhetorical, or strategic, like the use of Latin to confuse or obfuscate an interlocutor.
The second rule about `assumption hunting' is a reminder to look for what was assumed when the model was originally framed. Models are full of ceteris paribus assumptions. For example, in economics, a model can predict the result of a shock to a given set of equations, assuming that all the rest - all other variables and inputs - remains equal; but in real life "ceteris" are never "paribus": variables tend to be linked with one another, so they cannot realistically change independently of one another.
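The ceteris paribus point can be made concrete with a small numerical sketch; the toy model, the correlation value and all numbers below are assumptions for illustration only. Treating two linked drivers as independent (the `all else equal' framing) gives a very different output uncertainty than letting them move together.

```python
import math
import random
import statistics

# Toy model, invented for illustration: the output is the net
# effect of two competing drivers.
def output(x1, x2):
    return x1 - x2

random.seed(0)
n = 20_000
rho = 0.9  # assumed real-world correlation between the two drivers

# Ceteris paribus framing: the drivers are sampled independently.
indep = [output(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Linked inputs: x2 tends to move with x1, as variables do in real life.
linked = []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = rho * x1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    linked.append(output(x1, x2))

print("output std assuming independence:", round(statistics.stdev(indep), 2))  # roughly 1.41
print("output std with linked inputs:   ", round(statistics.stdev(linked), 2))  # roughly 0.45
```

Whether the correlation widens or narrows the output uncertainty depends on the model; the point is only that the independence assumption is itself an assumption worth hunting.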
Rule three is about artificially exaggerating or playing down uncertainties wherever convenient. According to Oreskes and Conway, the tobacco lobbies exaggerated the uncertainties about the health effects of smoking, while advocates of the death penalty played down the uncertainties in the negative relationship between capital punishment and the crime rate. Clearly the latter wanted the policy, in this case the death penalty, and were interested in showing that the supporting evidence was robust. In the former case the lobbies did not want regulation (e.g. bans on tobacco smoking in public places) and were hence interested in amplifying the uncertainty in the causal relationship between smoking and health effects.
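The uncertainty-deflation side of this rule can be sketched numerically; the linear model and the input ranges below are invented. Quietly trimming an honest input range makes the reported output uncertainty shrink by the same factor, so a fragile inference can be made to look robust.

```python
import random
import statistics

# Invented linear policy model; the coefficients mean nothing in particular.
def model(x):
    return 3 * x + 2

random.seed(2)
n = 10_000

# Honest assessment: the input is only known to within +/-50%.
honest = [model(random.uniform(0.5, 1.5)) for _ in range(n)]
# Deflated assessment: the range is quietly trimmed to +/-5%.
deflated = [model(random.uniform(0.95, 1.05)) for _ in range(n)]

print("output std, honest range: ", round(statistics.stdev(honest), 3))
print("output std, trimmed range:", round(statistics.stdev(deflated), 3))
# The trimmed range makes the output look about ten times more certain.
```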
Rule four is about `confessing' uncertainties before going public with the analysis. This rule is also one of the commandments of applied econometrics according to Kennedy: `Thou shall confess in the presence of sensitivity. Corollary: Thou shall anticipate criticism'. According to this rule, a sensitivity analysis should be performed before the results of a modeling study are published. There are many good reasons for doing this, one being that a carefully performed sensitivity analysis often uncovers plain coding mistakes or model inadequacies. The other is that, more often than not, the analysis reveals uncertainties that are larger than those anticipated by the model developers.
Rule five is about presenting the results of the modeling study in a transparent fashion. Both this rule and the previous one originate from the practice of impact assessment, where a modeling study presented without a proper SA, or as originating from a model which is in fact a black box, may end up being rejected by stakeholders. Rules four and five thus suggest that reproducibility may be a condition for transparency, and that transparency may in turn be a condition for legitimacy.
Rule six, about doing the right sum, is not far from the `assumption hunting' rule; it is just more general. It deals with the fact that an analyst is often set to work on an analysis arbitrarily framed to the advantage of a party. Sometimes this comes via the choice of the discipline selected to do the analysis. Thus an environmental impact problem may be framed through the lens of economics and presented as a cost-benefit or risk analysis, while the issue has little to do with costs, benefits or risks and a lot to do with profits, controls, and norms. An example is in Marris et al. on the issue of GMOs, mostly presented in the public discourse as a food safety issue, while the spectrum of concerns of GMO opponents - including lay citizens - appears broader. An approach which extends this particular rule to a spectrum of plausible frames is so-called quantitative storytelling.
Rule seven is about avoiding a perfunctory sensitivity analysis. An SA in which each uncertain input is moved one at a time while all other inputs are left fixed is perfunctory. A true SA should make an honest effort at exploring all uncertainties simultaneously, leaving the model free to display its full nonlinear and possibly non-additive behaviour. A similar point is made in Sam L. Savage's book `The Flaw of Averages'.
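A minimal sketch of why one-at-a-time (OAT) designs can be perfunctory; the model, nominal values and ranges are all invented for illustration. With a non-additive model, moving each input alone around a nominal point can report no effect at all, while a joint exploration reveals substantial output uncertainty.

```python
import random
import statistics

# Toy model, invented for illustration: nonlinear and non-additive.
def model(x1, x2):
    return x1 * x2

nominal, span = 0.0, 1.0   # assumed nominal value and uncertainty range

# One-at-a-time (OAT): move each input alone, keeping the other at nominal.
oat_y = [model(x, nominal) for x in (-span, span)]
oat_y += [model(nominal, x) for x in (-span, span)]
print("OAT effect size:", max(oat_y) - min(oat_y))  # prints 0.0: OAT sees no effect

# Global exploration: vary both inputs simultaneously over the same ranges.
random.seed(1)
mc_y = [model(random.uniform(-span, span), random.uniform(-span, span))
        for _ in range(10_000)]
print("Monte Carlo std of y:", round(statistics.stdev(mc_y), 2))  # clearly nonzero
```

In practice such a joint exploration is done with proper experimental designs and variance-based measures rather than plain Monte Carlo, but the contrast with OAT is the same.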
Questions addressed by sensitivity auditing
In conclusion, these rules are meant to help an analyst anticipate criticism, in particular of model-based inference feeding into an impact assessment. What questions and objections might the modeler receive? Here is a possible list:
- You treated X as a constant when we know it is uncertain by at least 30%
- It would be sufficient for a 5% error in X to make your statement about Z fragile
- Your model is but one of the plausible models - you neglected model uncertainty
- You have instrumentally maximized your level of confidence in the results
- Your model is a black box - why should I trust your results?
- You have artificially inflated the uncertainty
- Your framing is not socially robust
- You are answering the wrong question
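The second objection in the list (a small error in X overturning the statement about Z) can be checked mechanically before publication. The sketch below uses an invented model and invented numbers purely to show the pattern of such a fragility check.

```python
# Fragility check on an invented model: the claim being defended is Z > 0.
def Z(X):
    return 100 * X - 103   # hypothetical relation between X and Z

X_nominal = 1.05  # value at which the analyst reports a positive Z
for factor in (0.95, 1.00, 1.05):  # a 5% error in X, in either direction
    X = factor * X_nominal
    print(f"X = {X:.4f} -> Z = {Z(X):+.2f}, claim Z > 0 holds: {Z(X) > 0}")
# A 5% underestimate of X already flips the sign of Z: the inference is fragile.
```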
Sensitivity auditing in the European Commission Guidelines
Sensitivity auditing is described in the European Commission Guidelines for impact assessment. Relevant excerpts (p. 392) are:
- "[… ]where there is a major disagreement among stakeholders about the nature of the problem, … then sensitivity auditing is more suitable but sensitivity analysis is still advisable as one of the steps of sensitivity auditing."
- "Sensitivity auditing, […] is a wider consideration of the effect of all types of uncertainty, including structural assumptions embedded in the model, and subjective decisions taken in the framing of the problem."
- "The ultimate aim is to communicate openly and honestly the extent to which particular models can be used to support policy decisions and what their limitations are."
- "In general sensitivity auditing stresses the idea of honestly communicating the extent to which model results can be trusted, taking into account as much as possible all forms of potential uncertainty, and to anticipate criticism by third parties."
SAPEA, the European academies' association for science advice to policy, describes sensitivity auditing in detail in its 2019 report entitled “Making sense of science for policy under conditions of complexity and uncertainty”.
Sensitivity auditing is among the tools recommended in the context of a possible ethics of quantification, which aims to identify common ethical elements in different problems seen in quantification, such as metric fixation, misuse of statistics, poor modelling and unethical algorithms.
- European Commission (2015). Guidelines on Impact Assessment, in: European Commission Better Regulation Guidelines.
- Science Advice for Policy by European Academies, Making sense of science for policy under conditions of complexity and uncertainty, Berlin, 2019.
- Saltelli, A., van der Sluijs, J., Guimarães Pereira, Â., Funtowicz, S.O., 2013, What do I make of your Latinorum? Sensitivity auditing of mathematical modelling, International Journal of Foresight and Innovation Policy, 9(2/3/4), 213–234.
- Van der Sluijs JP, Craye M, Funtowicz S, Kloprogge P, Ravetz J, Risbey J (2005) Combining quantitative and qualitative measures of uncertainty in model based environmental assessment: the NUSAP system. Risk Analysis 25(2):481-492
- Funtowicz, S. O. & Ravetz, J. R. 1993. Science for the post-normal age. Futures, 25(7), 739–755.
- Gibbons, Michael; Camille Limoges; Helga Nowotny; Simon Schwartzman; Peter Scott; Martin Trow (1994). The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage. ISBN 0-8039-7794-8.
- Funtowicz, S.O. and Jerome R. Ravetz (1991). "A New Scientific Methodology for Global Environmental Issues." In Ecological Economics: The Science and Management of Sustainability. Ed. Robert Costanza. New York: Columbia University Press: 137–152.
- Funtowicz, S. O., & Ravetz, J. R. 1992. Three types of risk assessment and the emergence of postnormal science. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 251–273). Westport, CT: Greenwood.
- Carrozza, C., 2015. “Democratizing Expertise and Environmental Governance: Different Approaches to the Politics of Science and their Relevance for Policy Analysis”, Journal of Environmental Policy & Planning, 17(1), 108-126.
- Saltelli, A., Benini, L., Funtowicz, S., Giampietro, M., Kaiser, M., Reinert, E. S., & van der Sluijs, J. P. (2020). The technique is never neutral. How methodological choices condition the generation of narratives for sustainability. Environmental Science and Policy, Volume 106, April 2020, Pages 87-98, https://doi.org/10.1016/j.envsci.2020.01.008
- Oreskes N, Conway EM (2010) Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press, New York.
- Leamer EE (2010) Tantalus on the road to asymptopia. Journal of Economic Perspectives 24(2):31-46.
- Kennedy, P. (2007) A Guide to Econometrics, 5th ed., p.396, Blackwell Publishing, Oxford.
- Saltelli, A., Funtowicz, S., 2014, When all models are wrong: More stringent quality criteria are needed for models used at the science-policy interface, Issues in Science and Technology, Winter 2014, 79-85.
- Saltelli, A., Funtowicz, S., 2015 Evidence-based policy at the end of the Cartesian Dream: The case of mathematical modelling, in "The end of the Cartesian dream", Edited by Ângela Guimarães Pereira, and Silvio Funtowicz, Routledge, p. 147-162.
- Marris, C., Wynne, B., Simmons, P., and Weldon, Sue. 2001. Final Report of the PABE Research Project Funded by the Commission of European Communities, Contract number: FAIR CT98-3844 DG12-SSMI) Dec, Lancaster: University of Lancaster.
- Saltelli, A., Annoni, P., 2010, How to avoid a perfunctory sensitivity analysis, Environmental Modeling and Software, 25, 1508-1517 https://doi.org/10.1016/j.envsoft.2010.04.012.
- Savage SL (2009) The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty, Wiley.
- L. Araujo, A. Saltelli, and S. V. Schnepf, “Do PISA data justify PISA-based education policy?,” Int. J. Comp. Educ. Dev., vol. 19, no. 1, pp. 20–34, 2017 https://doi.org/10.1108/IJCED-12-2016-0023.
- A. Saltelli and S. Lo Piano, “Problematic quantifications: a critical appraisal of scenario making for a global ‘sustainable’ food production,” Food Ethics, vol. 1, no. 2, pp. 173–179, 2017 https://doi.org/10.1007/s41055-017-0020-6.
- S. Lo Piano and M. Robinson, “Nutrition and public health economic evaluations under the lenses of post normal science,” Futures, vol. 112, p. 102436, Sep. 2019 https://doi.org/10.1016/j.futures.2019.06.008.
- A. Galli et al., “Questioning the Ecological Footprint,” Ecol. Indic., vol. 69, pp. 224–232, Oct. 2016.
- Saltelli, A. (2019). Statistical versus mathematical modelling: a short comment. Nature Communications, 10, 1–3. https://doi.org/10.1038/s41467-019-11865-8.
- Saltelli, A. (2020). Ethics of quantification or quantification of ethics? Futures, https://doi.org/10.1016/j.futures.2019.102509.
- Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.
- Wasserstein, R. L., & Lazar, N. A. (2016). The ASA’s Statement on p-Values: Context, Process, and Purpose. The American Statistician, 70(2), 129–133.
- Saltelli, A. (2018). Should statistics rescue mathematical modelling? arXiv preprint arXiv:1712.06457.
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Random House Publishing Group.