Running Head: Analyzing A Novel Expertise
Analyzing a Novel Expertise: An Unmarked Road
Wayne D. Gray
George Mason University
Susan S. Kirschenbaum
Naval Undersea Warfare Center Division Newport
Gray, W. D., & Kirschenbaum, S. S. (in press). Analyzing a novel expertise: An unmarked road.
In J. M. C. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis.
Mahwah, NJ: Erlbaum.
Please send all correspondence to:
Wayne D. Gray
George Mason University
MSN 3f5
Fairfax, VA 22030
1-703-993-1357
email@example.com

There are many varieties of task analysis, each with its advantages and disadvantages, each with its adherents and detractors (e.g., see the recent collections published by Annett & Stanton, 1998; Kirwan & Ainsworth, 1992). Most published descriptions focus on how to apply the technique or why it is a good technique to apply. Few accounts written by advocates of a technique are specifically directed at problems and pitfalls in applying the technique. This account is different. Although we are unabashedly enthusiastic advocates of the theory-driven combination of task analysis and protocol analysis that we employ, we hope that by identifying the problems and obstacles we encountered, more people will be better prepared and, therefore, more successful at applying these techniques.
Beware—knowing that the road is narrow, winding, and unmarked does not make the trip easy. It might, however, discourage someone from setting out in the family sedan. For those who are better equipped, knowledge of the hazards ahead may help them avoid blindly plunging forward into a known problem. It is in this spirit that we write this chapter.
The following section provides a brief overview of the techniques we employ. It then introduces the known obstacles to these techniques. The main part of the chapter discusses these obstacles in the context of a specific project—Project Nemo.
Theory-Driven Task Analysis and Protocol Analysis

Theory-driven task analysis decomposes the procedural and declarative knowledge required to perform a task into components supported by the theory. With some additional work on the part of the analyst, the control structure provided by the theory can use the elements of the analysis to form a model of how a user performs the task. Theories with weak or rigid control structures, such as keystroke-level GOMS or CPM-GOMS (for an overview, see John & Kieras, 1996a;
John & Kieras, 1996b), may produce models that are only capable of performing the exact task that was analyzed. Theories with more powerful control structures, such as NGOMSL, ACT-R, Soar, or EPIC (see Gray, Young, & Kirschenbaum, 1997b), may respond adaptively to perform variations of the analyzed task.
Cognitive theories provide constraints to the final form of the analysis—that is, for how the components must fit together. However, the components per se vary widely, and the analysis of expertise into such components is an underconstrained problem. For example, the expertise exhibited by a chess master in plotting his next move is different from that shown by a medical expert diagnosing a rare disease (VanLehn, 1989). Once the components (i.e., the knowledge structures and strategies) of expertise have been delineated, they can be cast into the mechanisms of a cognitive theory. However, existing cognitive theory provides few a priori constraints for deriving the components of a hitherto unstudied expertise.
Given a rare form of expertise, one that has been subject to few published reports (Kirschenbaum, 1990; 1992; 1994), how does the analyst proceed? The method adopted here is a form of bootstrapping. As shown in Figure 1, an initial task analysis1 guides a protocol analysis.
The task analysis is revised and used as the basis of the next round of protocol analysis. When the analyst deems that the results of the analysis are as good as the existing data permit, the iterative process stops.
The story told in this chapter is the story of an iterative loop around the stages shown in Figure 1. The project has now moved to the next phase. The story of the next phase is ongoing;
its outlines are given in Ehret, Kirschenbaum, and Gray (1998) and Ehret, Gray, and Kirschenbaum (1999).
The analyst thus sets out on an unmarked road – one with many intersections and forks. What aspects of the expertise are important for the goals of the analysis? Can knowledge of the components of other forms of expertise guide and inform the current analysis, or does this knowledge serve to lead us astray? If the expertise is a dynamic expertise – problem solving that takes place over a period of time and that is responsive to events in the environment – what is the best way to capture key aspects of the expertise without distorting it through artifacts of data collection?
These problems are common in analyzing expertise. Indeed, others have warned that an almost inevitable danger of doing the first, deep-level cognitive analysis of a hitherto unstudied expertise is that “the final interpretation of the data and its matching against the theory [i.e., task analysis] may appear to reflect mainly hindsight – pulling out of the data the hypotheses that have been implicitly hidden in it by the data interpretation process” (Ericsson & Simon, 1993, p. 286).
Although we have read (and heeded) the warnings, what we have not read is a detailed discussion by analysts of their encounters with problems that threaten the validity of their conclusions. This chapter fills that void. In it, we provide an autobiographical description of the problems encountered in our analyses for Project Nemo. We feel no shame at admitting to having problems. Indeed, when the road is rocky and unmarked, problems must be expected. The shame lies not in having problems, but in not recognizing problems. The shame falls to the analyst who mistakes a dead end for the end of the trip or a wrong turn for the right path.
The Task and Our Goals in Analyzing It

Different cognitive task analyses may have different goals. The goal of Project Nemo was to understand the goals and strategies used by submarine Approach Officers (AOs) as they tried to localize an enemy submarine hiding in deep waters. The project is a collaboration between a navy and a university researcher. An important role played by the navy researcher was to feed the results of the project, as they came in, to those parts of the submarine research community that could make the most use of them. From the beginning of the project, the most interested parties have been the designers of the command workstation for the next-generation submarine.
Knowing that the results of the analysis would be used for interface design provided an important constraint on the knowledge and cognitive processes that Project Nemo analyzed. The AO possesses specialized knowledge that is acquired over a 20-year period. Rather than focus on the acquisition, depth, and breadth of this knowledge, we focused on how it is used as the AO makes progress toward his goal of localizing the enemy submarine. Our effort focused on the knowledge structures and strategies unique to the dynamic problem-solving process of localizing an enemy submarine hiding in deep water.2

Any given task takes place in the context of the artifacts and organization used to perform the task. The goal of many task analyses, including most GOMS analyses, is to analyze task interactions at this activity level. Such analyses can either assess the problems with the current way of performing the task or provide specifications for a new system. The goal of the current analysis is different. We are neither involved in critiquing the current system nor directly involved in designing the new system. Rather, our intent is to provide the designers of the new system with an understanding of how the AO localizes an enemy submarine. Hence, in our analysis, we seek to shift the level of analysis from the activity level to the functional level (Gray, John, & Atwood, 1993, pp. 244-257).
Issues

We had two sets of problems in understanding AO expertise. The first set included issues in understanding the control structure of the cognitive processes used by the AO to perform the localizing task. The second set entailed understanding the nature and limits of the data we had collected.
The Control Structure of AO Expertise

A common metaphor is to conceptualize problem solving as search in a problem space (Newell & Simon, 1972). Tasks involving expertise are often thought of as being both wide and deep. At each step in the problem space, there are many alternative next steps (width). Solving a problem involves solving many subproblems, and each subproblem can be decomposed into another subproblem that needs to be solved (depth). In contrast to expertise, the accepted wisdom is that, for everyday tasks, the search space is limited. The problem spaces for everyday tasks are either shallow and wide (like choosing a flavor from the menu in an ice cream store) or narrow and deep (like following a recipe from a cookbook) (Norman, 1989).
Clearly, localizing targets hiding in deep water is not an everyday activity. Therefore, we reasoned that, rather than being shallow and wide or narrow and deep, the AOs' expertise must be both wide and deep. This bias led us to make our first and most fundamental mistake.

One of the most basic control structures used in cognitive task analysis is hierarchical, or goal-subgoal, decomposition. From discussions with experts, however, it was clear that AOs kept track of many different pieces of several different tasks. Hence, a classic, hierarchical control structure did not seem accurate.
Our alternative was a matrix goal structure. Although we abandoned the matrix notion before working out the details, its basic elements were as follows. Imagine a cube with AO goals along the x axis, information elements about the target, own ship, and ocean conditions along the y axis, and rules that capture the AOs' procedural knowledge along the z axis. A given rule would yield information regarding one or more information elements. In turn, a given information element would be applicable to one or more AO goals. In such a system, one could imagine that the rules that fired (i.e., the actions that the AOs take) would be those rules that yielded the greatest amount of new information for the greatest number of AO goals.
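The selection principle behind the matrix notion can be made concrete with a toy sketch. All rule, element, and goal names below are hypothetical illustrations, not part of the actual analysis; the sketch shows only the scoring idea: fire the rule that yields the most new information elements, weighted by how many goals each new element serves.

```python
# A toy sketch of the (later abandoned) matrix control structure.
# Rules yield information elements; each element is applicable to
# one or more AO goals. All names here are invented for illustration.
RULE_YIELDS = {
    "check-sonar-bearing": {"target-bearing"},
    "listen-broadband":    {"target-bearing", "ocean-noise"},
    "plot-own-course":     {"own-ship-course"},
}
ELEMENT_GOALS = {
    "target-bearing":  {"localize-target", "stay-covert"},
    "ocean-noise":     {"localize-target"},
    "own-ship-course": {"stay-covert"},
}

def pick_rule(known_elements):
    """Fire the rule yielding the greatest new information for the
    greatest number of goals: score each rule by summing, over the
    elements it would newly provide, the goals those elements serve."""
    def score(rule):
        new = RULE_YIELDS[rule] - known_elements
        return sum(len(ELEMENT_GOALS[e]) for e in new)
    return max(RULE_YIELDS, key=score)

# Starting from no information, "listen-broadband" wins: it yields
# two new elements serving three goal-slots in total.
print(pick_rule(set()))  # listen-broadband
```

As the next paragraph notes, our data offered little support for this kind of global, information-maximizing selection, which is why the matrix idea was abandoned.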
Unfortunately for our preconceptions, we found little in our data that would support a matrix organization. We gradually abandoned this idea and, for a while, ignored this issue to concentrate on the issue of how and why one rule rather than another is selected. Our explorations of these control structure issues led us, albeit a bit unwillingly, to the realization that most AO actions could be characterized as small steps in a shallow goal hierarchy. However, unlike the everyday task of choosing one flavor from a wide but shallow ice cream store menu, AOs make many successive choices. It is the nature of these successive choices that characterizes the AOs' expertise.
Our current theory of how AOs solve the localizing problem can be summarized by the rather awkward phrase "schema-directed problem solving with shallow and adaptive subgoaling" (SDPSSAS). The schema is the task-relevant knowledge accumulated over 20 years of experience as a submariner (half of it at sea). It is a knowledge structure3 that contains both declarative and procedural knowledge. The first implication of shallow subgoaling is that the knowledge available to AOs is so rich that the steps required to supplement this knowledge can be fairly shallow.
The second implication is that the problem the AO is constantly solving is "what is the state of the world – NOW" (where NOW is somewhere on the order of 30 to 300 s). The AO is trying to find a quiet target hiding in a noisy environment while remaining covert and undetected himself. What we see him doing is taking short steps that either (a) assess the noise from the environment or signal from the target – NOW, or (b) attempt to reduce the noise or increase the signal from the target by maneuvering own ship. As shown in Figure 2, these short steps result in shallow subgoaling. When a subgoal pops, the schema is reassessed. The result of this reassessment directs the next step (i.e., selects the next subgoal). When this step is accomplished, it returns information to the schema, the schema is reassessed, and so on.

The process of subgoaling is adaptive in two senses. First, the subgoal that is chosen next reflects the current reassessment of the schema. Second, this choice is sensitive to both the long-term importance of the subgoal and its recent history of success or failure. Regardless of a goal's long-term importance, AOs will not continue to attempt a goal if successive tries fail.
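The two senses of adaptiveness can be sketched as a simple selection rule. The subgoal names, importance values, and discounting scheme below are all hypothetical; the sketch only illustrates the claim that choice reflects both a subgoal's long-term importance and its recent history of success or failure, and that a repeatedly failing subgoal is set aside regardless of its importance.

```python
# A toy sketch of adaptive subgoal selection. Names and numbers are
# invented for illustration; only the selection logic matters.
IMPORTANCE = {"DET-BEARING": 0.9, "REDUCE-NOISE": 0.6, "CHECK-DEPTH": 0.4}

def next_subgoal(history, window=3):
    """Pick the subgoal with the highest long-term importance,
    discounted by recent failures. `history` maps each subgoal to a
    list of recent outcomes (True = success, False = failure)."""
    def value(goal):
        recent = history.get(goal, [])[-window:]
        if recent and not any(recent):   # every recent try failed:
            return 0.0                   # stop attempting it for now
        failures = recent.count(False)
        return IMPORTANCE[goal] * (0.5 ** failures)
    return max(IMPORTANCE, key=value)

# With no history, the most important subgoal is chosen; after three
# straight failures on it, the AO falls back to the next candidate.
print(next_subgoal({}))                                      # DET-BEARING
print(next_subgoal({"DET-BEARING": [False, False, False]}))  # REDUCE-NOISE
```

The design point is that importance and recent success are combined rather than applied in strict priority order, which matches the observation that AOs abandon even important goals after successive failures.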
The dynamic aspect of the AO's task plays an important role in this view of schema-directed problem solving with shallow and adaptive subgoaling. First, the state of the AO's world is continually changing – both own ship and target are moving at a given depth, direction, and speed. For own ship, the value of these attributes can be changed, but neither own ship nor the target can be stopped. Consequently, time is an important part of the picture. Second, subgoals are not accomplished once and then discarded. In the AO's world, subgoals bring in certain types of information or accomplish certain changes to own ship. As the world changes, any given subgoal may be revisited (e.g., DET-BEARING in Figure 2).
Choosing What to Do Next: Goal-Driven? Event-Driven?
When we put aside the issue of the larger organization of the AO's actions – subgoal or matrix (at this point, we had not hit on SDPSSAS) – we focused on the more local issue of how or why an individual rule was chosen. For problems such as arithmetic, physics, and the Tower of Hanoi, the pacing of the solution is entirely under the control of the person solving the problem. At any given place in this goal-subgoal hierarchy, the action chosen next depends entirely on the plans and knowledge of the problem solver. Such tasks are said to be goal-driven.