This paper provides an overview of the literature on the assessment of system dynamics (SD) models to substantiate a pragmatic framework intended to guide model testing, refinement, and evaluation. It summarizes the predominant philosophy of science embraced in the field and its implications for model validation, and reviews tests for building confidence in SD models. In this literature, SD is presented as a relatively uniform approach to dynamic modeling. However, surveys of the field paint a different picture, revealing surprisingly diverse forms of practice. We draw upon this breadth of existing practice to develop our framework, proposing five components of practice: 1) systems mapping, 2) quantitative modeling, 3) hypothesis testing, 4) uncertainty analysis, and 5) forecasting/optimization. In light of the proposed framework, we reclassify tests for the assessment of dynamic models across these five practical categories. We believe this reclassification is useful for tailoring tests to specific modeling efforts, guiding model testing in different phases of model development, and supporting partial assessments of confidence.