The authors thank the Python organization, its parent
company, and the Organizational Learning Center and System Dynamics Group at the
MIT Sloan School of Management for financial support. We are especially grateful
to the members of the Python team for the time they devoted to our research and
the feedback they provided to improve it.
Abstract

Knowledge intensive processes are often driven and constrained by the
mental models of experts acting as direct participants or managers. For example,
product development is guided by expert knowledge, including critical process relationships which are dynamic, often nonlinear, biased by individual perspectives and goals, conditioned by experience, and which aggregate many system components and relationships. Descriptions of these relationships are not generally
available from traditional data sources such as company records but are stored
in the mental models of the process experts. Often the knowledge is not explicit
but is tacit, so it is difficult to describe, examine and use. Consequently,
improvement of complex processes is plagued by false starts, failures,
institutional and interpersonal conflict, and policy resistance. Formal modeling
approaches such as system dynamics are often used to help improve system
performance. However, modelers face great difficulties in eliciting and
representing the knowledge of the experts in these systems so useful models can
be developed. Increased clarity and specificity are required concerning the
methods used to elicit expert knowledge for modeling. We propose, describe and
illustrate an elicitation method which uses formal modeling and three
description format transformations to help experts explicate their tacit
knowledge. To illustrate the approach we describe the use of the method to
elicit detailed process knowledge describing the development of a new
semiconductor chip. The method improved model accuracy and credibility in the
eyes of the participants and provided tools for development team mental model
improvement. We evaluate our method to identify future research
opportunities.
Introduction

Many public and private sector
systems increasingly depend on knowledge intensive processes managed and
operated by interdisciplinary teams. These systems are difficult to manage.
Often formal models such as system dynamics models are used to help managers
understand the sources of difficulties and design more effective policies.
Typically, the expert knowledge of the people who actually operate the system is
required to structure and parameterize a useful model. To develop a useful model
which is also credible in the eyes of the managers, however, modelers must
elicit from these experts information about system structure and governing
policies, and then use this information to develop the model. While many methods
to elicit information from experts have been developed, most assist in the early
phases of modeling: problem articulation, boundary selection, identification of
variables, and qualitative causal mapping. These methods are often used in
conceptual modeling, that is, in modeling efforts which stop short of the
development of a formal model which can be used to test hypotheses and proposed
policies. The literature is comparatively silent, however, regarding methods to
elicit the information required to estimate the parameters, initial conditions,
and behavioral relationships which must be specified precisely in formal
modeling.
Much of the information about system structure and decision
processes resides in the mental models of process participants, where it remains
tacit (Nonaka and Takeuchi, 1995; Forrester, 1994; Polanyi, 1966). Compared to
explicit knowledge, tacit knowledge is subjective, personal, and
context-specific. It is difficult to describe, examine and use. Therefore an
important activity in modeling these systems is the elicitation, articulation
and description of knowledge held in the mental models of system experts. By
system expert we mean those people who participate in the process directly in
operational or managerial roles. We seek to improve modeling and mental model
improvement techniques by proposing and testing a method of expert knowledge
elicitation. The method we develop is designed to assist modelers and their
clients specify parameters and relationships in a form suitable for formal
modeling. We also argue, however, that the additional precision and discipline
required to elicit information about these relationships in a form suitable for
formal modeling can yield insights of value to modelers and clients even when no
formal model is contemplated or built.
We illustrate the use of the technique
with an example drawn from a model of the development of a high-tech product.
Product development is one of many processes in which globalization, accelerated
technology evolution and increased customer sophistication have resulted in a
dramatic increase in complexity and a corresponding rise in cost overruns,
delays, quality problems and outright failures. Under pressure to bring new
products to market ever faster and cheaper, methods such as concurrent
development and co-located cross functional teams have been widely adopted.
Concurrent product development requires multiple knowledge-driven processes such
as design which produces descriptions of the final product and quality assurance
which transforms unchecked designs into approved designs or designs requiring
changes. Effective product development and effective modeling of product
development depend on knowledge of these critical process relationships which
are dynamic, nonlinear, biased by individual perspectives and goals, conditioned
by experience, and aggregate many system components and relationships.
Descriptions of process relationships are often not generally available from
traditional data sources such as company records but are stored in the mental
models of the process experts. Differences in the mental models of team members
can constrain progress and lead to conflict.
The frequently divergent mental
models of marketing managers and design engineers regarding the sequence of
steps required to develop a product concept into a detailed design provide
examples (Ford and Sterman 1997; Clark and Fujimoto 1990; Kim 1993). System dynamics models of these systems must include the process knowledge of the system experts which drives and constrains these processes (Barlas 1996; Williams, Eden, Ackermann and Tait 1995; Wolstenholme 1994; Richardson and Pugh 1981; Lyneis 1980; Forrester and Senge 1980). The complexity of modern product development
processes provides a rich setting in which to test our method for eliciting the
information required to specify and parameterize formal simulation
models.
Methods of Expert Knowledge Elicitation for Modeling

Most system dynamics
research on knowledge elicitation for model building has focused on the
identification of system components and causal links to formulate conceptual
models (e.g. Vennix and Gubbels 1994). Vennix, Anderson, Richardson and
Rohrbaugh (1994) review the knowledge elicitation literature for group decision
support. They find that elicitation techniques are generally used for problem
definition, model conceptualization and model boundary definition. They mention
a role for information elicitation in model formulation to develop different
formulations but do not describe a method for doing so. They identify five
factors which should guide knowledge elicitation methods: (1) the purpose of the
modeling effort; (2) the phase of the model-building process and type of task
being performed (e.g. elicitation, exploration or evaluation); (3) the number of
people involved in the elicitation process; (4) the time available and (5) the
cost of the elicitation methods.
Morecroft and van
der Heijden (1994) describe in detail the method used to elicit expert knowledge
for conceptual model development (phase 1 of Morecroft's (1985) two phase
modeling approach). However, formal modeling and the description of tacit expert process knowledge at the level of detail and precision required to improve complex mental models require the description of relationships at an operational level, i.e. at a level which describes the specific characteristics relating individual system elements. Formal modeling requires more precision
than purely conceptual modeling. Formal modeling requires specification of stock
and flow structure, functional forms, and numerical estimates of parameters and
behavioral relationships. Such data go well beyond the types of data typically
elicited from participants in the early stages of modeling or in purely
conceptual modeling studies. Morecroft and van der Heijden describe the use of
expert knowledge for specifying relationships (part of Morecroft's second
modeling phase) only briefly. In fact Morecroft and van der Heijden consider
this use of expert knowledge to be unusually difficult. They lament "It is
difficult to imagine how one would have engaged the full team [of 10 experts] in
this more detailed work."
Methods of expert knowledge elicitation for
parameter estimation have been mentioned (e.g. by Graham, 1980) but not
extensively addressed in the literature. While the knowledge elicitation
literature is comparatively silent regarding methods to estimate functions and
parametric relationships, standard system dynamics methodology includes a number
of methods designed to help the modeler or student estimate these relationships.
For example, techniques to estimate nonlinear relations between variables
("table" or "graph" functions) include identifying reference points (points
where the relationship takes on a specified value by definition or based on high
confidence), reference curves defining a particular policy or behavior, and
extreme conditions tests (see e.g. Richardson and Pugh, 1981). However, much of
the skill involved in estimating nonlinear relationships by expert modelers also
remains tacit, passed on from teacher to student in apprenticeships. While
expert system dynamics modelers no doubt use these skills in their fieldwork,
there is essentially no published literature describing their use in field
settings and no literature evaluating their effectiveness.
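As a minimal illustration of these estimation aids (the variable names and numbers here are hypothetical, not drawn from any study), a candidate nonlinear table function can be checked against its reference points and against simple extreme-conditions constraints:

# A candidate table ("graph") function: normalized inventory coverage (x) ->
# fraction of desired shipments actually shipped (y). Values are hypothetical.
candidate = [(0.0, 0.0), (0.5, 0.7), (1.0, 1.0), (1.5, 1.0), (2.0, 1.0)]

# Reference points: values the relationship must take by definition or with
# high confidence (shipments are zero at zero inventory; at normal coverage
# shipments equal desired shipments).
reference_points = {0.0: 0.0, 1.0: 1.0}

def passes_reference_points(points, refs, tol=1e-6):
    lookup = dict(points)
    return all(x in lookup and abs(lookup[x] - y) <= tol for x, y in refs.items())

def passes_extreme_conditions(points):
    # Extreme-conditions check for this example: output stays in [0, 1] and
    # never decreases as inventory coverage rises.
    ys = [y for _, y in points]
    return all(0.0 <= y <= 1.0 for y in ys) and all(b >= a for a, b in zip(ys, ys[1:]))

print("reference points satisfied:", passes_reference_points(candidate, reference_points))
print("extreme conditions satisfied:", passes_extreme_conditions(candidate))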
Methods for
eliciting expert knowledge in product development contexts have been addressed
by several authors. The objective of most methods has been the development of
conceptual designs and models. For example Nonaka and Takeuchi (1995) combine
the elicitation of tacit knowledge and its transformation into explicit
knowledge with three other knowledge conversions to generate organizational
knowledge for developing product concepts. They propose three steps to make
tacit knowledge explicit: describe system knowledge with metaphors, integrate
metaphors with analogies and model the resulting product concepts. They provide
evidence for the effectiveness of the first two steps in Japanese
self-organizing development teams. However Nonaka and Takeuchi use modeling only
to develop "rough descriptions or drawings, far from being fully specific" (p.
67). While effective in a concept development context, the
metaphor-analogy-model method produces knowledge descriptions which are not
explicit or specific enough to be used in formal models. Similarly, Burchill and
Fine (1997) describe a qualitative method for concept generation and also
discuss the use of causal loop diagrams to map the feedbacks involved in concept
development for new products. Their method, while carefully grounded in coding
of participant comments, yields causal loop diagrams capturing participant
beliefs about the processes governing time to market but none of the
quantitative information needed to specify and test these hypotheses.
More clarity and precision are required in knowledge elicitation for
specifying, validating and analyzing formal model relationships and parameters
than for conceptual modeling. Elicitation methods which are effective for
conceptual modeling are necessary but far from sufficient. We seek to understand
how modelers and system experts can elicit specific structural system knowledge
for formal modeling and mental model improvement. We also argue that these
techniques generate valuable information even when no formal model is
contemplated.
Our method is
motivated by the diversity of characteristics of information sources for
modeling. Forrester (1994) categorizes knowledge sources for system dynamics
modeling as mental, written or numerical and analyzes the strengths and
weaknesses of each form. Mental models are expansive in scope and richness of
available information. Written knowledge descriptions have the advantages of
being codified and therefore more widely accessible than mental model knowledge;
written descriptions facilitate the abstraction of more detailed mental model
data. But written knowledge is limited by: (1) the richness it can describe; (2) the inability of modelers to query written knowledge to test, expand and understand it beyond the text; and (3) the filtering and bias introduced during
codification. Forrester considers numerical data to be the narrowest of the
three knowledge forms in scope and lacking in supporting contextual information
about the structure which generated the numerical data. However numerical data
are critical for estimating specific parameters for modeling, establishing
patterns of behavior and for some forms of model testing (Homer, 1996; Sterman,
1984; Forrester and Senge, 1980). Forrester clearly sees value in all three
knowledge description forms for modeling, and argues against the bias of some
schools of modeling against use of the mental and written databases. However,
Forrester does not address how modelers can access the benefits of all three
knowledge types while avoiding their weaknesses.
Our
method aids in knowledge description by focusing the development of a formal
model as the eventual product of the effort. We hypothesize that pushing experts
to describe relationships at the simulation model level helps them to clarify
and specify their knowledge more than they would if we worked at a more abstract
level using tools such as causal loop diagrams. We believe this is true even if
no formal model is ever built, though of course the process of formal modeling
almost always yields additional insight into problem situations (see e.g.
Forrester 1961; Sterman, 1994; Homer, 1996). Our method also differs from
knowledge elicitation approaches which seek a shared image from a group of
experts and which abstract expert knowledge for the purpose of reaching
consensus. Later we will describe how our method utilizes differences among
individual expert descriptions to improve formal modeling and mental
models.
The Knowledge Elicitation Method

Our method structures the development of knowledge descriptions into three sequential phases: positioning, description and discussion.
The Positioning Phase

The purpose
of the positioning phase is to establish a context and goals for the description
process. The positioning phase has three steps:
1. Establish
context: The facilitator creates an environment in which the elicitation
will occur by briefly describing the model purpose, major subsystems and their
interactions, the roles and structures of the subsystem and the relationship to
be characterized. The context provides experts with a reason for developing the
descriptions, bounds on the scope of the model, an initial focus of attention
and a transition period from their activities prior to the elicitation workshop.
Context setting prepares the experts for the description work ahead in the same
sense that Morecroft and van der Heijden (1994) "conditioned" a group of oil
industry experts as a prelude to knowledge elicitation for model building.
Usually there will have been prior modeling work including problem articulation,
boundary selection, and even the development of initial working simulations. Of
course such initial models, boundaries, and even the basic definition of the
problematic issue to be modeled are always provisional and may change as the
modeling process iterates; indeed detecting inadequacies in problem definition,
model boundary, and formulations is the main purpose of our method and of
iteration in modeling in general (Forrester 1971/1985, Sterman 1994, Homer
1996).
2. Focus on one relationship at a time: Two activities
narrow the experts' attention to a single relationship. First the facilitator
describes the relationship operationally by identifying and defining the input
and output variables which the relationship describes with units of measure,
where the relationship is used in the model, why the relationship is important
and what other parts of the system and model the relationship affects. This
activity assures that the experts understand exactly what part of the system is
to be described. Second, the facilitator provides a description aid which helps
experts make explicit and codify their knowledge. One example of such aids is a
"graphic frame" which can be used to describe a (possibly nonlinear)
relationship between two variables (see below).
3. Illustrate the
method: Each expert is given a set of relationship description worksheets
which have been completed based on an example from an analogous setting. The
facilitator explains the examples in detail, using the four steps of the
description phase to illustrate the process and reasoning the experts will use
to describe the relationship in their own system.
The Description Phase

The description phase guides experts through the sequential
development of four different descriptions of the relationship. Each description
takes a different form and serves a unique purpose in the process of
transforming their tacit knowledge into usable form. During the description
phase the experts are directed to use their own images and not to interact with
the other experts.
4. Visual description: Experts are first asked
to visualize the process, to "see a picture in your mind of what happens". In
the case of a product development process, the experts imagine (presumably in
pictures) the flow of the work described by the relationship. Experts are
invited to close their eyes or otherwise disengage from others during this step.
The purpose of this step is to activate, bound and clarify the
experts' mental images of the relationship.
5. Verbal description:
Experts are then asked to "tell the story of what happens" to themselves.
Experts are encouraged to use a large unstructured portion of the description
worksheet to write informal notes about their mental image. The informal and
solely individual use of these notes is emphasized to encourage their use. The
purpose of the verbal description is to transform the expert's mental image of
the process and relationship into a more explicit form and begin to codify their
knowledge. The completeness or accuracy of descriptions is not emphasized until
the discussion phase of the method.
6. Textual description:
Experts are then directed to capture their "story" in writing on the
worksheets. The textual description generates a more specific codified
description of the expert's knowledge by constraining the feasible shape of the
relationship. In particular, for the specification of a nonlinear relationship
between variables, the experts are directed to identify anchor points and the
reasoning or data justifying them. Anchor points are those values of the
relationship required by system constraints (e.g. shipments must fall to zero
when inventory is zero), defined by convention (e.g. the 1997 value = 1.0) or in
which the expert has high confidence. A separate portion of the description
worksheet provides spaces for anchor point coordinates and the basis for each
point, and the definition of anchor points is reiterated from the examples
explained in step 3 (Illustrate the method) to assist the experts.
7.
Graphic description: Experts then create a graphic description of the
relationship in two steps. First the anchor points are plotted on an empty
graphic frame provided on the description worksheet. Then experts consider the
shape of the relationship between anchor points and use that reasoning to draw
their estimate of the relationship. The facilitator emphasizes that this second step involves significantly more than connecting the anchor points with straight lines and illustrates how nonlinear shapes can be used to describe the relationship. The experts are not directed to generate smooth graphs. The objective of the method is to elicit and describe expert knowledge as accurately as possible, not to constrain it by expectations about relationship continuity. However, the experts are directed to explain and provide justification and data for any inflections and discontinuities in terms of the underlying process. (A brief illustrative sketch of the anchor-point format follows below.)
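As a minimal illustration of the anchor-point format (the points and their bases below are hypothetical), each anchor point can be recorded as a coordinate pair together with the reasoning or data justifying it before the graphic description is drawn:

from dataclasses import dataclass

@dataclass
class AnchorPoint:
    x: float    # input value of the relationship
    y: float    # output value of the relationship at x
    basis: str  # system constraint, convention, or high-confidence judgment

worksheet = [
    AnchorPoint(0.0, 0.0, "system constraint: output must be zero when the input is zero"),
    AnchorPoint(1.0, 1.0, "convention: the relationship equals 1.0 at the reference input"),
    AnchorPoint(0.5, 0.7, "high confidence: estimate based on recent project experience"),
]

# Order the points by input value, as they would be plotted on the graphic frame.
worksheet.sort(key=lambda point: point.x)
for point in worksheet:
    print(f"({point.x:.2f}, {point.y:.2f})  basis: {point.basis}")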
The Discussion Phase

The discussion phase seeks to test, understand and improve
the descriptions of different experts. We base this phase on the
estimate-feedback-talk protocol suggested by Vennix and Gubbels (1994) because
of its focus on the assumptions underlying tacit knowledge and adapt the
protocol to our focus on describing and understanding rather than
consensus-building. The discussion phase begins by displaying the graphic frames
containing the descriptions generated by the different experts side by side.
8. Examine individual
descriptions: Each expert shares their verbal
description with the group as a basis for explaining the anchor points and shape
of their graphic description. This step provides an internal test of description
consistency by comparing multiple descriptions of a single relationship
developed by a single expert. The verbal and graphic descriptions can provide
enlightening information concerning the expert's visual description. Anchor
points prove valuable as checks against the limits of relationships and as the
basis for the reasoning behind individual estimates. The underlying mental model
leading to a critical activity or anchor point may become clear only through the
description of the shapes assigned to the areas between anchor points.
9. Compare descriptions: Differences among descriptions are
inevitable because of the complexity of the relationships being described and
the incomplete and particular knowledge of different experts. These differences
naturally lead the experts to discuss their mental models and assumptions used
to describe the relationship. The facilitator directs the experts to identify
and investigate the causes of differences based on their roles in the process,
relationships among functional groups and organizational structures. No attempt
is made to resolve differences or reach consensus. This step provides an
external test of individual descriptions by comparing them with those of other
experts with different roles and perspectives.
The elicitation process
uses a focus on formal modeling and a single specific relationship to prepare
experts to transform their tacit knowledge into usable forms. Individual experts
develop multiple descriptions of the same knowledge in different formats. The
descriptions are tested through their use by individuals to communicate with
other experts and by comparison of individual descriptions among colleagues. The
method provides several advantages over interview-based or group modeling
approaches to knowledge elicitation: (1) information losses during elicitation
are reduced compared to single-step processes through the use of several small,
separate and explicit format transitions; (2) the generation of multiple
descriptions in different formats by a single expert allows testing and
improvement through triangulation; (3) the generation of multiple
individually-generated descriptions in a group context allows testing and
improvement of descriptions through comparison to the views of other experts
while reducing the potential for group-think and premature convergence.
Example: Eliciting Knowledge of Product Development Processes

We used our knowledge elicitation
method to model process relationships in a product development project (called
"Python" here) which created a moderately complex ASIC (application specific
integrated circuit) for a major player in the semiconductor industry. The
process was used to develop two types of concurrence relationships which relate
the amount of work completed to the amount of work which is available to be
completed (Ford and Sterman, 1997). Complex development projects consist of
multiple phases such as design, prototyping, and testing. The ability of the
engineers responsible for an activity to begin and successfully complete their
work depends on the timing and quality of the work released to them by other
activities. For example, prototype builds cannot begin until at least some
design information (usually engineering drawings and associated technical
specifications) are completed and released. Sometimes an activity can begin its
work with partial and incomplete information; in other cases all upstream work
must be completed before a downstream activity can begin. In project and product
development models concurrence relationships describe the constraint
imposed on one activity by another. Concurrence relations describe the degree to
which activities can be carried out in parallel or must be done sequentially.
External concurrence relationships describe the dependency of the
development tasks in a downstream phase on the release of tasks from an upstream
phase, such as the constraint imposed on testing by the release of design work.
Internal concurrence relationships describe the inter-dependency of the
development tasks within a single development phase. Internal concurrence
relationships are necessary because each phase in a development project
aggregates a number of different activities. Not all the activities in a given
phase can always be done independently. For example, consider the construction
phase of a building. The second floor cannot be built until the structural
members for the first are completed. This sequentiality is captured by the
internal concurrence relationship (see below for a detailed example).
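As a minimal sketch of how such relationships act as constraints (the functional shapes, and the assumption that the tighter of the two constraints binds, are illustrative only), external and internal concurrence can be written as functions which cap the fraction of a phase's tasks available to be completed:

def external_concurrence(fraction_upstream_released):
    # Fraction of the downstream phase's tasks that can be completed, given the
    # fraction of upstream tasks released (both 0..1). Hypothetical shape: no
    # downstream work can start until 20% of the upstream work is released.
    if fraction_upstream_released < 0.2:
        return 0.0
    return min(1.0, (fraction_upstream_released - 0.2) / 0.8)

def internal_concurrence(fraction_perceived_complete):
    # Fraction of a phase's own tasks that can be completed, given the fraction
    # perceived to be completed correctly (both 0..1). Hypothetical sequential
    # shape, loosely like the building example: finishing work makes more available.
    return min(1.0, 0.1 + fraction_perceived_complete)

def tasks_available(scope, fraction_released, fraction_perceived_complete):
    # Assume the tighter of the two constraints binds (an illustrative choice;
    # actual project models may combine the constraints differently).
    cap = min(external_concurrence(fraction_released),
              internal_concurrence(fraction_perceived_complete))
    return cap * scope

# With half the upstream work released and 30% of this phase perceived complete,
# 37.5 of the phase's 100 tasks are available to be worked.
print(tasks_available(scope=100, fraction_released=0.5, fraction_perceived_complete=0.3))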
Seven concurrence relationships were developed based on twenty three
expert estimates from developers and managers of the Python project. The
parameter estimates were developed in a set of seven workshops averaging
forty-five minutes in duration. Each workshop developed a single concurrence
relationship with a different set of experts. Developers and managers
responsible for each project phase participated in the workshop to develop the
internal concurrence relationship for their phase. The workshops to develop the
external concurrence relationships included those responsible for the relevant
upstream and downstream phases.
Developers and managers of the Python
project were invited to participate in the workshops. Experts had an average of
over ten years of experience working in the development of computer chips and a
minimum of five years experience in the company's product development
organization. Most developers in the Python organization also have management
roles in development projects, often making the distinction between developers
and managers unclear and sometimes meaningless. The experts were familiar with
the system dynamics approach to modeling product development projects (Ford,
1995) and several had received training in systems thinking (Voyer, Gould and
Ford, 1996). A few of the experts had heard an informal conceptual description
of the model but none had knowledge of the formal model structure or
descriptions of specific relationships used in the formal model. The number of
participants in each workshop varied from two to five, depending primarily on
the number of personnel in each development phase and whether the concurrence
relationship to be estimated was internal or external. No experts left during
the workshops. One participant in one workshop was initially unable to describe
a relationship. How this was addressed is described below.
The Positioning Phase
1. Establish context: The
facilitator described the model purpose as the modeling of development processes
within a single development project to improve the understanding of how those
processes affect product development performance. A diagram facilitated the
description of the general model structure as a linked set of development phase
modules (Figure 1). This helped the experts focus on the four phases used in the
model: product definition, design, test prototype and reliability/quality
control.
2. Focus on one relationship at a time:
The facilitator first described
and defined internal and external concurrence relationships. External
concurrence relationships were operationalized by defining the independent
variable as the fraction of the development tasks in the scope of an upstream
phase which has been released to the downstream phase. The dependent variable is
the fraction of development tasks in the scope of the downstream phase which can
be completed. Both variables range from 0 to 100% of the phase scope (the
number of tasks to be completed). For internal concurrence relationships the
independent variable is the percent of the development tasks for the phase which
are perceived by the developers to be completed correctly. The dependent
variable is the percent of development tasks in the same phase which can be
completed. Both variables range from 0 to 100% of the phase scope. The role
of these relationships in constraining project progress independently and
interactively with project resources was described.
To
facilitate the description of concurrence relationships by the experts the
facilitator provided a "graphic frame" for each type of concurrence
relationship. Figure 3 shows a graphic frame for an external concurrence
relationship. The frame is a box in which the abscissa represents the
independent variable and the ordinate represents the dependent variable using
the definitions above. The facilitator described the variables and the range and
scales of the axes.
Percent of Tasks Perceived Satisfactory | Percent of Tasks Completed or Available to Complete | Notes
0 | 10 | Can do 1st floor at start
10 | 20 | Completing first floor makes 2nd floor available
20 | 30 | same
... | ... | ...
90 | 100 | Completing 9th floor makes entire structure available
100 | 100 | Building structure erection complete

Table 1: Erect Steel Example of Internal Concurrence Relationship Anchor Points
Figure 4 shows the graphic description of the
internal concurrence relationship for the Erect Steel phase of the office
building example. The linear progression reflects the sequential increase in the
number of total floors available for steel erection as work proceeds up the
building.
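As a minimal worked sketch of this example, the anchor points of Table 1 can be encoded directly and interpolated linearly to reproduce the straight-line relationship of Figure 4 (the rows abbreviated in Table 1 are filled in by continuing the one-floor-at-a-time pattern):

# Anchor points from Table 1: (percent of tasks perceived satisfactory,
# percent of tasks completed or available to complete).
anchor_points = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 60),
                 (60, 70), (70, 80), (80, 90), (90, 100), (100, 100)]

def percent_available(percent_perceived_satisfactory):
    # Linear interpolation between anchor points, matching the straight line
    # of Figure 4.
    if percent_perceived_satisfactory <= anchor_points[0][0]:
        return anchor_points[0][1]
    for (x0, y0), (x1, y1) in zip(anchor_points, anchor_points[1:]):
        if percent_perceived_satisfactory <= x1:
            return y0 + (y1 - y0) * (percent_perceived_satisfactory - x0) / (x1 - x0)
    return anchor_points[-1][1]

# Halfway through steel erection, work up to the next floor is available.
print(percent_available(45))  # 55.0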
Description Phase

4. Visual description: Experts were given three to five minutes to develop a visual description of the process. The redirection of the experts' attention toward the facilitator indicated when this step was completed.
5. Verbal description: A review of the informal "Process Story Notes" sections of the worksheets
completed by the experts (see appendix for examples) indicates that process
images can take several forms. Most experts disaggregated the phase or pair of
phases into development activities performed on a stable set of development
tasks and important events in the phase such as product hand-offs and approvals.
These were typically described with lists though some experts wrote in story
form. Examples include:
"From rough block diagram + biz [business] plan (20%)
Design can begin top-level architecture and block definition (20%)" "Begin
vector generation as soon as possible. If a cell completes then look at it from
test perspective." "Once have relevant chardata [product characterization data]
can set specs [specifications]."
Several experts linked disaggregated
activities in their notes by estimating the availability of work each allowed in
subsequent activities. Other experts estimated fractional contributions of each
activity to completing the phase. A few experts listed combinations of
activities and product components which are addressed concurrently during the
phase.
6. Textual
description: In addition
to notes of their verbal descriptions, anchor point descriptions were the
primary form of textual description. These were represented by a set of
coordinates which would be plotted as points on the graphic frame and
identifying notes describing the basis for the anchor point. One expert's
textual description of the anchor points in the external concurrence
relationship between the product definition and design phases is shown in Table
2 below.
Percent of Upstream Tasks Released | Percent of Downstream Tasks Completed or Available to Complete | Notes
0 | 0 | Enabled
10 | 20 | System definition - allows start bench coding
70 | 50 | General functionality defines architectural design
100 | 100 | Detailed operation allows internal block coding & all other steps to layout

Table 2: One Expert's notes describing Anchor Points of the External Concurrence Relationship between the Product Definition and Design Phases
The written examples based on the building project proved to be
valuable aids to the experts while developing their own concurrence relationship
descriptions. Eighty-seven percent of the relationship descriptions (20 of 23)
included separate textual descriptions of anchor points. In two of the three relationship descriptions which did not include separate anchor point descriptions, anchor points were nonetheless included in the verbal or graphic descriptions.
7. Graphic
description: With only one exception (described in the next step) the
experts had little difficulty describing the relationships graphically. Figure 5
shows five graphic descriptions of the external concurrence relationship between
the design and prototype test phases of the Python project. The five
descriptions have very similar anchor point output values (Percent of Test
Prototype Tasks Available to Initially Complete) at the extreme input values (0
and 100% of Percent of Design Tasks Released) but vary more between these anchor
points, as one would expect.
Discussion Phase

8. Examine individual descriptions: The examination of individual descriptions
began with a verbal account by each expert of the basis for their graphic
description. Some experts described their estimates during the discussion phase
in the form of Gantt charts (Moder, Phillips and Davis, 1983; Levy, Thompson and
Wiest, 1963) based on work units instead of time units. As an example the
following description captures the verbal descriptions given by the process
experts of the internal concurrence relationship of the design phase of the
Python project, which produces the software code used to lay out the physical
features of the computer chip: The code to be produced is organized into
seventeen blocks (code modules). A few of these blocks of code must be designed
and written before other blocks can be started. Therefore only these initial
blocks (estimated to be 20% of the code) can be worked on at the beginning of
the design phase. It's not feasible to begin work on the other blocks of code
until the initial blocks are nearly complete. When the initial blocks are
complete, most of the remaining code can be developed. When most of the blocks
of code have been written the work of integrating them into a single operational
program begins. Integration is fundamentally less parallel, producing a flat
tail on the right side of a graphic description. The graphic descriptions of
this relationship developed individually by three experts are shown with the
dashed lines in Figure 6.
The verbal descriptions
added richness to the graphic descriptions by improving the modeler's and expert
group's understanding of the mental model which formed the basis of the
relationship description. In one case an expert was unable to construct a useful
mental image of the process for the development of an internal concurrence
relationship. The facilitator was able to assist the expert to build a
description by asking the expert questions to disaggregate the work in the phase
into availability-based units, tag the units with meaningful titles, quantify
and then causally relate those units based on the approach used in the Design
Structure Matrix (Smith and Eppinger 1997). These units were then used to
identify anchor points in the expert's mental model which then served as the
basis for a graphic description.
9. Compare descriptions: Differences among experts'
descriptions provided the opportunity for testing and improvement. The
descriptions of the internal concurrence relationship of the design phase
(Figure 6) provide an example of how experts clarified and improved their
graphic descriptions based on their discussion. The three experts were in
general agreement with the verbal description of the relationship (see above)
despite the differences in the three graphic descriptions. A discussion led by
the facilitator resulted in the experts deciding that a horizontal portion at
the beginning of the phase would improve the description by capturing the fact
that most code blocks could not be started until the initial 20% were nearly
complete. Based on this discussion an improved graphic description was developed
(the solid line in Figure 6).
Differences among descriptions also
identified differences among the experts and suggested causes for those
differences. Consider the external concurrence relationship between product
definition and design (Figure 7). Product architects and marketing
representatives gradually define the requirements for the new product. As they
release these the designers can begin to code the software which generates the
chip layout so that it provides the specified functionality. Four experts, two
from the upstream phase (one from strategic marketing and one product architect)
and two from the downstream phase (a designer and design manager) participated
in the workshop. The strategic marketing representative was the participant
farthest upstream in the product definition/design portion of the Python
project. His estimate suggested the most concurrence, implying little product
definition work needed to be completed and released before designers could
usefully do most of their work. The product architect, whose work is also
upstream of design, also suggests that design can begin after only a small
fraction of the product specifications have been released. In contrast the
manager of the design phase and the designer believe the degree of potential
concurrence is significantly less, estimating that roughly half the product
specifications must be released before any significant design work can begin.
Of particular interest in this example
is that the team member who actually performs the design work (the designer) and
the participant providing the basis for the designer's work (the strategic
marketing representative) differ in their estimates of how much design work can be completed, by over fifty percent of the design scope, throughout most of the design phase. Such differences identify disparities in the mental models of team members which may be high leverage areas for improvement.
Marketers believe design work can begin very early, while product attributes are
still vague and evolving, while designers believe they must have detailed and
stable specifications to do their work. This gap leads to conflict. In fact, the
identification of this disparity in perspectives led to vigorous and useful
discussion and helped the different parties come to a better understanding of
the source of prior conflicts between their groups.
Using the Results of the Elicitation Method

Our method generates many useful products including
multiple independently-generated parameter descriptions, expert reasoning behind
those descriptions, comparison and testing of parameter descriptions by peers,
communication among experts and the identification of areas of team mental model
consistency and inconsistency. We used the descriptions generated by our
elicitation method in several ways to improve our modeling.
-Verbal and
textual descriptions provided data for triangulation with previously collected
interview data on the structure and parameterization of the development process
in the Python project. This improved structural model validity (Barlas, 1996;
Forrester and Senge, 1980).
-Structural behavior validation (Barlas, 1996)
was enhanced by setting limits on the extreme conditions of important model
parameters. For example Figure 8 shows consistency among four descriptions of
the internal concurrence relationship of the test prototype phase. None of the
four descriptions extend above the 45° line crossing the vertical axis at 50% of Test Prototype Tasks Available for Initial Completion. Therefore this is a more
reasonable value for extreme conditions testing than a totally concurrent
relationship described by a horizontal line along the upper axis of the graphic
frame.
-Behavior pattern validation
was improved through better model calibration by using expert knowledge instead
of modeler estimates of critical relationships.
-Model analysis was improved by using differences in the experts' assessments to select the ranges of variation in parameters and relationships for sensitivity testing (see the brief sketch following this list).
-The
process significantly improved formal model credibility in the eyes of the
participants by involving those responsible for the system in the modeling
process, acknowledging and honoring participant expertise and making special
efforts to incorporate that expertise into the model. Developing such
understanding is essential to successful transfer of insight, the development of
systems thinking skills among the client team, and ultimately, successful
implementation of model-based policy recommendations.
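As a minimal sketch of the sensitivity-testing point in the list above (the three expert curves here are invented for illustration), the spread among individual expert estimates at each input value can be used to bound the range of variation tested for a relationship:

# Three invented expert estimates of the same relationship, sampled on a
# common grid of input values (percent of upstream tasks released).
grid = [0, 25, 50, 75, 100]
expert_curves = {
    "expert_a": [0, 10, 40, 80, 100],
    "expert_b": [0, 20, 55, 90, 100],
    "expert_c": [0, 5, 30, 70, 100],
}

# Sensitivity envelope: at each grid point, vary the relationship between the
# lowest and highest expert estimates during sensitivity testing.
low = [min(curve[i] for curve in expert_curves.values()) for i in range(len(grid))]
high = [max(curve[i] for curve in expert_curves.values()) for i in range(len(grid))]

for x, lo_val, hi_val in zip(grid, low, high):
    print(f"at {x:3d}% released: test values between {lo_val}% and {hi_val}%")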
We also used the
results of our method to analyze and improve expert mental models. We derived
valuable insights from the similarities and differences among estimates of the
same concurrence relationship. The various descriptions of each concurrence
relationship indicated both areas of agreement and conflict among the mental
models of the Python development team. For example the variation in estimates of
the external concurrence relationship linking the design and test prototype
phases (Figure 5) is less than for the external concurrence relationship which
links the product definition and design phases (Figure 7). This indicates that
the Python team's mental models are more consistent for the first relationship
than the second.
The codification of expert knowledge and discussion of
descriptions provided a vehicle for improving shared mental models, as suggested
by Morecroft (1994). The examination of individual mental models at the level of
specificity facilitated by our method in a non-threatening group context
provided a way to investigate the causes of disparities in the beliefs of team
members and resolve differences in their mental models. The experts in our
workshops recognized the potential of using our elicitation method to identify
areas of team mental model inconsistency and therefore potential high leverage
points for improving the product development process. Combining parameter
sensitivity and degree of consistency may provide an effective means of
identifying effective system features for improvement. This can also help to
avoid group-think and premature convergence (Vennix et al., 1994).
Evaluation of the Method

Our application of the elicitation
method acts as an initial test of our hypothesis concerning the use of
successive knowledge transformations across description formats as an improved
means of eliciting expert knowledge for modeling. The benefits achieved for
formal modeling of the Python project and the Python development team suggest
that our method can improve expert knowledge elicitation for formal modeling and
mental model improvement.
Our method includes several aspects which are
new or potentially need to be customized for effective application in a
particular modeling project. We assess our method as follows.
1. Knowledge held primarily in mental models is usually not described in other formats because of its complex and tacit nature. The method uses multiple formats to elicit and capture expert knowledge from several descriptive perspectives. These multiple formats are more likely to capture portions of expert knowledge which might be lost with a single-format, single-step elicitation method.

2. The use of four description formats adds richness which improves information quality through triangulation. This triangulation occurs in two places: within individual experts as they seek consistency among their descriptions of a particular relationship, and across experts when they compare their different descriptions.

3. The generation of a graphical representation through a succession of smaller steps (image to words to anchor points to graph), rather than asking people to simply "draw the relationship", improves knowledge elicitation by reducing the cognitive processing required of system experts in each step. Multiple formats and steps also slow the elicitation process, thereby providing more time for reflection and revision.

4. Explaining and providing complete documentation of the steps to be performed by the experts, using an example from a familiar but different context, significantly improves the quality of the descriptions and the experience of the experts.

5. The method focuses modeling efforts on knowledge that experts consider both important and proprietary in that they are the holders and users of the knowledge. Prior work documents the benefits of including system participants in modeling at the conceptual level; our method engages experts more fully by honoring the full range of participant expertise.

6. The discussion phase provides immediate benefits to experts by allowing them to share and compare mental models in a form which facilitates learning through the investigation of underlying assumptions.

7. The process of describing and comparing individual descriptions in a group of peers increases error checking.
Conclusions

Eliciting expert knowledge for formal modeling
raises different challenges than elicitation for conceptual modeling or
consensus-based decision making. Our method focuses on the elicitation of expert
knowledge in a form suitable for formal modeling to help experts make more of
their tacit knowledge explicit and available for examination and improvement. We
use our method to develop estimates of specific system relationships by
individual experts which are the basis for description testing and mental model
improvement. The method was found to be an effective tool to improve formal
modeling and to help a development team improve their mental
models.
Improving our ability to make tacit expert
knowledge explicit and usable for formal modeling and mental model improvement
can have important effects on both research and practice. Researchers can use
the increased quantity, quality and improved understanding of expert system
knowledge derived in this way to build more complete and accurate models.
Practitioners can increase their awareness, understanding and use of the tacit
knowledge which generates organizational behavior and often leads to
organizational dysfunction.
Future research can assess our method by
comparing it with other elicitation techniques such as the qualitative
approaches used in conceptual model building. The broader application of our
elicitation method can be tested by applying it to other types of model
parameters and relationships, and with experts and contexts different from the
product development setting used here. The integration of the techniques we have
used with other elicitation techniques may provide the basis for more advanced
elicitation methods.
References
Barlas, Yaman. 1996. Formal Aspects of Model Validity and Validation in System Dynamics. System Dynamics Review. 12(3):183-210.
Burchill, G. and Fine, C. 1997. Time versus Market Orientation in Product Concept Development: Empirically Based Theory Generation. Management Science. 43(4):465-478.
Clark, Kim and Fujimoto, Takahiro. 1990. The Power of Product Integrity. Harvard Business Review. Nov.-Dec. 1990.
Ford, David N. 1995. The Dynamics of Project Management: An Investigation of the Affects of Project Process and Coordination on Performance. doctoral thesis. Massachusetts Institute of Technology. Cambridge, MA., USA.
Ford, David N. and Sterman John D. 1997. Dynamic Modeling of Product Development Processes. Working Paper no. 4355. Sloan School of Management. Massachusetts Institute of Technology. Cambridge, MA. USA.
Forrester, Jay W. 1961. Industrial Dynamics. Productivity Press. Cambridge, MA.
Forrester, Jay W. 1971/1985. The Model vs. A Modeling Process. System Dynamics Review. 1(1):133-134.
Forrester, Jay W. and Senge, Peter M. 1980. Tests for Building Confidence in System Dynamics Models. TIMS Studies in the Management Sciences. 14:209-28.
Graham, Alan K. 1980. Parameter Estimation in System Dynamics Modeling. in Randers, Jørgen, ed. Elements of the System Dynamics Method. Productivity Press. Portland, OR, USA.
Homer, Jack. 1996. Why We Iterate: scientific modeling in theory and practice. System Dynamics Review. 12(1):1-19.
Homer, Jack. 1985. Worker Burnout: a dynamic model with implications for prevention and control. System Dynamics Review. 1(1):42-62.
Kim, Daniel H. 1993. A Framework and Methodology for Linking Individual and Organizational Learning: Applications in TQM and Product Development. doctoral thesis. Massachusetts Institute of Technology. Cambridge, MA. USA.
Levy, F.K., Thompson, G.L. and Wiest, J.D. 1963. The ABCs of the Critical Path Method. Harvard Business Review. 41:5:98-108.
Lyneis, James M. 1980. Corporate Planning and Policy Design, A System Dynamics Approach. MIT Press. Cambridge, MA. USA.
Moder, Joseph J., Phillips, Cecil R. and Davis, Edward W. 1983. Project Management with CPM, PERT and Precedence Diagramming. Van Nostrand Reinhold Co. New York.
Morecroft, J.D.W. 1985. The Feedback View of Business Policy and Strategy. System Dynamics Review. 1(1):4-19.
Morecroft, J.D.W. 1994. Executive Knowledge, Models, and Learning. in Morecroft, John and Sterman, John, ed. Modeling for Learning Organizations. Productivity Press. Portland OR., USA.
Morecroft, J.D.W. and van der Heijden, K.A.J.M. 1994. Modeling the Oil Producers: Capturing Oil Industry Knowledge in a Behavioral Simulation Model. Morecroft, John and Sterman, John, ed. Modeling for Learning Organizations. Productivity Press. Portland OR., USA.
Nonaka, Ikujiro and Takeuchi, Hirotaka. 1995. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press. New York.
Polanyi, M. 1966. The Tacit Dimension. Routledge & Kegan Paul. London.
Richardson, George P. and Pugh III, Alexander L. 1981. Introduction to System Dynamics Modeling with Dynamo. MIT Press. Cambridge, MA, USA.
Smith, Robert P. and Eppinger, Stephen D. 1997. Identifying Controlling Features of Engineering Design Iteration. Management Science. 43:3:276-93.
Sterman, John 1994. Learning in and about Complex Systems. System Dynamics Review. 10(2-3):291-330.
Sterman, John 1984. Appropriate Summary Statistics for Evaluating the Historical Fit of System Dynamics Models. Dynamica. 10(2):51-66.
Vennix, J.A.M. and Gubbels, J.W. 1994. Knowledge Elicitation in Conceptual Model Building: A Case Study in Modeling a Regional Dutch Health Care System. Morecroft, John and Sterman, John, ed. Modeling for Learning Organizations. Productivity Press. Portland OR., USA.
Vennix, J.A.M., Anderson, D.F., Richardson, G.P. and Rohrbaugh, J. 1994. Model Building for Group Decision Support: Issues and Alternatives in Knowledge Elicitation. Morecroft, John and Sterman, John (eds.) Modeling for Learning Organizations. Productivity Press. Portland OR., USA.
Voyer, John, Gould, Janet and Ford, David. 1996. Systemic Creation of Organizational Anxiety: An Empirical Study. Working Paper #3916. Sloan School of Management. Massachusetts Institute of Technology. Cambridge, MA. USA.
Williams, Terry, Eden, Colin, Ackermann, Fran and Tait, Andrew. 1995. The Effects of Design Changes and Delays on Project Costs. Journal of the Operational Research Society. 46:809-18.
Wolstenholme, Eric F. 1994. A Systematic Approach to
Model Creation. in Morecroft, John and Sterman, John, ed. Modeling for
Learning Organizations. Productivity Press. Portland OR., USA.
Appendix: Examples of Worksheets used in Example
Notes
1. The handwritten notes produced by the experts on the first page of the worksheets have been transcribed onto identical forms for increased legibility. Explanations of abbreviations and jargon are provided in brackets, [].
2. The term "Internal Precedence Relationship" was used in the example to refer to an Internal Concurrence Relationship. The term "External Precedence Relationship" was used in the example to refer to an External Concurrence Relationship.
3. The word "Infeasible" and the shading of the lower right
half of Internal Concurrence Relationship graphic frames (see Figures 4, 6 and
8) were added subsequent to the example workshop to facilitate the explanation
of the infeasibility of relationships described by curves in this area. See Ford
and Sterman (1997) for an explanation of this constraint.
INTERNAL PRECEDENCE WORKSHEET
Development Activity described: PPS [Test Product Prototypes]
Position held by author: [Process Engineer]
PROCESS STORY NOTES:
Must wait for fab [fabrication] for all other,
Once have probecard/program/wafers can sort,
once sorted can assemble
once hardware/programs/assembled can do char [product characterization],
once have relevant chardata [product characterization data] can set specs [specifications]
one specs [specifications] set and apps [applications] OK can complete
ANCHOR POINTS IN TABLE:
Percent Completed Percent Completed or Available to Complete Notes
0 --> 10% Fab [fabrication]
10% 15% Sort
15 20 assembly
NOTES:
1. See Figure 8 for a comparison of the relationship described above with other descriptions of the same relationship.
2. This example illustrates the results of repeated reflection by an expert through our method. The textual description in the form of the anchor points (step 6) shown on the first page of this description was adjusted by the expert in the formation of the initial graphic description (step 7), shown by the lower line in the graphic frame above. The initial graphic description was improved when examination (step 8) revealed the infeasibility of a portion of the initial graphic description, resulting in the final description shown by the upper line in the graphic frame above. See Ford and Sterman (1997) for an explanation of the infeasible portion of the graphic frame.
Upstream Development Activity: Product Definition
Downstream Development Activity: Design
Position held by author: Product Architect
PROCESS STORY NOTES:
1. Product "straw-man" complete - can begin high-level design & acquisition of
needed design info [information] (e.g. cells, tools)
2, Feedback incorporated into straw-man, producing 1st-cut product def'n [definition]
3. Incremental product-def'n [definition] refinement,
4. Hand-off complete
ANCHOR POINTS IN TABLE:
Percent of Upstream Tasks Released | Percent of Downstream Tasks Completed or Available to be Completed | Notes
10 | 40 | 1. [see above]
35 | 65 | 2. [see above]
60 | 85 | 3. [see above]
80 | 100 |
NOTES:
1. See Figure 7 for a comparison of the relationship
described above with other descriptions of the same relationship.
Upstream Development Activity: Product Design
Downstream Development Activity: PPS [Product
Prototype Testing]
Position held by author: [Test Engineer]
PROCESS STORY NOTES:
- Design starts block development,
test has acquired Product Knowledge from Arch [product architecture] Review
- Design complete Block 1, Test starts DFT [testing] on Block 1
continue loop until Design complete, Along the way:
| Develop test plan (TEP)
| Develop char [product characterization] plan ( )
| Develop final test plan (Production
- FAB [fabrication] cycle / Complete simulation / test programs
ANCHOR POINTS IN TABLE:
Percent of Upstream Tasks Released | Percent of Downstream Tasks: % complete | % can complete | Notes
0 | 10 | 10 | Product Knowledge
20 | 15 | 25 | DFT [testing] anchors
50 | 20 | 75 | DFT + test plans
75 | 25 | 90 | Final test plan for char/PPT/wafer sort
100 | 35 | 100 | DFT complete.
NOTES:
1. See Figure 5 for a comparison of the relationship
described above with other descriptions of the same relationship.