Abstract for: The Validity Tests Used by Social Scientists and Decision Makers
How can simulation be “sold” to policy decision makers? How can simulation be sold to other social scientists who do not accept simulation as a complement to “accepted” techniques (Repenning, 2003)? Decision makers and social scientists use validation tests to determine how much confidence they should vest in a model (Forrester & Senge, 1980). Because these communities have different uses for models, they employ different validation tests. If these tests are not made explicit, several problems can arise; for example, a decision maker may dismiss a simulation model on the basis of a particular validation test without the modeler ever knowing. This paper collects the validation tests scattered across the simulation and statistical/psychometric literatures into a comprehensive framework. Decision makers are keenly interested in a model’s usefulness as well as in how “scientifically valid” it is; the framework therefore includes analytic, “consequential”, and pragmatic validity tests. Decision makers may rely on heuristics (Kahneman, Slovic, & Tversky, 1982) as validity tests because they have difficulty understanding simulation (Cronin, Gonzalez, & Sterman, 2009). Decision makers can also teach modelers: humans have used heuristics to become the dominant species in complex ecosystems. The long-term research objective is to use experiments (Sterman, 1987) to understand how decision makers use both adaptive and dysfunctional heuristics.