Originally published December 6, 2022

This is the 86th article in the series featuring thought leaders in knowledge management. Gary A. Klein has written five books, co-written one, and co-edited three. He is known for the cognitive methods and models he developed, including the Data/Frame Theory of sensemaking, the Management by Discovery model of planning in complex settings, and the Triple Path Model of insight. He developed the Pre-Mortem method of risk assessment, Cognitive Task Analysis for uncovering the tacit knowledge that goes into decision making, and the ShadowBox Training approach for cognitive skills. Gary pioneered the Naturalistic Decision Making (NDM) movement in 1989, which has grown to hundreds of international researchers and practitioners. And he has helped to initiate the new discipline of macrocognition.

Gary developed the Recognition-Primed Decision (RPD) model to describe how people actually make decisions in natural settings. It has been incorporated into Army doctrine for command and control. He has investigated sensemaking, replanning, and anticipatory thinking. Gary devised methods for on-the-job training to help organizations recycle their expertise and their tacit knowledge to newer workers. And he developed the Knowledge Audit for doing cognitive task analysis, the Critical Decision Method (CDM), and the Artificial Intelligence Quotient (AIQ).

Gary A. Klein should not be confused with another Gary Klein (PhD in Cognitive Social Psychology and BA in Experimental Psychology) who specializes in Artificial Intelligence and Cognitive Psychology, or with a third Gary Klein (PhD, MS, and BS) who is also a published author.

Gary and I have been at three of the same KMWorld Conferences: 2022, 2015, and 2012.


Gary received his Ph.D. in experimental psychology from the University of Pittsburgh in 1969. He spent the first phase of his career in academia as an Assistant Professor of Psychology at Oakland University (1970–1974). The second phase was spent working for the government as a research psychologist for the U.S. Air Force (1974–1978). The third phase began in 1978 when he founded his own R&D company, Klein Associates, which grew to 37 people by the time it was acquired by Applied Research Associates (ARA) in 2005.

He was selected as a Fellow of Division 19 of the American Psychological Association in 2006. In 2008 Gary received the Jack A. Kraft Innovator Award from the Human Factors and Ergonomics Society.

Education
  • Ph.D., Experimental Psychology — University of Pittsburgh, 1969
  • M.S., Physiological Psychology — University of Pittsburgh, 1967
  • B.A., Psychology — City College of New York, 1964

Experience
  • President and Chief Executive Officer, ShadowBox LLC, 2014–Present
  • Senior Scientist, Macrocognition LLC, 2009–Present
  • Senior Scientist, Cognitive Solutions Division of Applied Research Associates, 2005–2010
  • Chairman and Chief Scientist, Klein Associates, Inc., 1978–2005
  • Research Psychologist, U.S. Air Force Human Resources Laboratory, WPAFB, 1974–1978
  • Assistant Professor of Psychology, Oakland University in Michigan, 1970–1974
  • Associate Professor of Psychology, Wilberforce University in Ohio, 1969–1970



  1. Knowledge elicitation tools, primarily methods for doing Cognitive Task Analysis such as the Critical Decision Method, the Situation Awareness Record, Applied Cognitive Task Analysis (ACTA), the Knowledge Audit, the Cognitive Audit, and Concept Maps.
  2. Cognitive specifications and representations, such as the Cognitive Requirements Table, the Critical Cue Inventory, the Cognimeter, Integrated Cognitive Analyses for Human-Machine Teaming, Contextual Activity Templates, and Diagrams of Work Organization Possibilities.
  3. Training approaches, including the ShadowBox technique, Tactical Decision Games, Artificial Intelligence Quotient, On-the-Job Training, and Cognitive After-Action Review Guide for Observers.
  4. Design methods, e.g., Decision-Centered Design, Principles for Collaborative Automation, and Principles of Human-Centered Computing.
  5. Evaluation techniques such as Sero!, Concept Maps, the Decision Making Record, and Work-Centered Evaluation.
  6. Teamwork aids, e.g., the Situation Awareness Calibration questions and the Cultural Lens model.
  7. Risk assessment methods: the Pre-Mortem.
  8. Measurement techniques such as macrocognitive measures, Hoffman’s “performance assessment by order statistics,” and four scales for explainable Artificial Intelligence.
  9. Conceptual descriptions: these are models like the Recognition-Primed Decision model that have been used in a variety of ways.

How can we strengthen our tacit knowledge? Here are nine ideas we can put into practice.

  1. Seek feedback.
  2. Consult with experts.
  3. Seek out vicarious experiences.
  4. Cultivate curiosity.
  5. Adopt a growth mindset.
  6. Overcome a procedural mindset.
  7. Harvest mistakes.
  8. Adapt and discover.
  9. Don’t let evaluation interfere with training.

There are soft criteria, indicators we can pay attention to. I have identified seven so far. Even though none of these criteria are foolproof, all of them seem useful and relevant:

  1. Successful performance — measurable track record of making good decisions in the past.
  2. Peer respect.
  3. Career — number of years performing the task.
  4. Quality of tacit knowledge such as mental models.
  5. Reliability.
  6. Credentials — licensing or certification of achieving professional standards.
  7. Reflection.

We can distinguish five general types of skills that experts may have:

  1. Perceptual-motor skills
  2. Conceptual skills
  3. Management skills
  4. Communication skills
  5. Adaptation skills

These are not fixed components of expertise: some skills may be relevant in one domain but not another, and they are reasonably independent of one another.

These five general skills are identified based on several criteria. First, they are acquired through experience and feedback, as opposed to being natural talents. Second, they are relevant to the tasks people perform, and therefore the set of skills will vary by task and domain. Third, superior performance on these skills should differentiate experts from journeymen.

The skills will vary for different domains and tasks. Our focus should be on the most important sub-skills for that domain. Otherwise, it is too easy to have an ever-expanding set of skills to contend with. In some domains, one or more of these general skills may not apply at all. And people may be experts in some aspects of a task but not others.




— — — — — — — — — — — — — —


From the book edited by Henry Montgomery, Raanan Lipshitz, and Berndt Brehmer, Chapter 23, written with Laura Militello:

The Knowledge Audit as a Method for Cognitive Task Analysis

The Knowledge Audit was designed to survey the different aspects of expertise required to perform a task skillfully (Crandall, Klein, Militello, & Wolf, 1994). It was developed for a project sponsored by the Naval Personnel Research & Development Center. The specific probes used in the Knowledge Audit were drawn from the literature on expertise (Chi, Glaser, & Farr, 1988; Glaser, 1989; Klein, 1989; Klein & Hoffman, 1993; Shanteau, 1989). By examining a variety of accounts of expertise, it was possible to identify a small set of themes that appeared to differentiate experts from novices. These themes served as the core of the probes used in the Knowledge Audit.

The Knowledge Audit was part of a larger project to develop a streamlined method for Cognitive Task Analysis that could be used by people who did not have an opportunity for intensive training. This project, described by Militello, Hutton, Pliske, Knight, and Klein (1997), resulted in the Applied Cognitive Task Analysis (ACTA) program, which includes a software tutorial. The Knowledge Audit is one of the three components of ACTA.

The original version of the Knowledge Audit is described by Crandall et al. (1994) in a report on the strategy that was being used to develop ACTA. That version of the Knowledge Audit probed a variety of knowledge types: perceptual skills, mental models, metacognition, declarative knowledge, analogues, and typicality/anomalies. Perceptual skills referred to the types of perceptual discriminations that skilled personnel had learned to make. Mental models referred to the causal understanding people develop about how to make things happen. Metacognition referred to the ability to take one’s own thinking skills and limitations into account. Declarative knowledge referred to the body of factual information people accumulate in performing a task. Analogues referred to the ability to draw on specific previous experiences in making decisions. Typicality/anomalies referred to the associative reasoning that permits people to recognize a situation as familiar, or, conversely, to notice the unexpected.

Some of the original probes were deleted because they were found to be difficult concepts for a person just learning to conduct a Cognitive Task Analysis to understand and explore; others were deleted because they elicited redundant information from the subject-matter experts being interviewed. To streamline the method for inclusion in the ACTA package, eight probes were identified that seemed most likely to elicit key types of cognitive information across a broad range of domains.


The Knowledge Audit has been formalized to include a small set of probes, a suggested wording for presenting these probes, and a method for recording and representing the information. This is the form presented in the ACTA software tutorial. The value of this formalization is to provide sufficient structure for people who want to follow steps and be reasonably confident that they will be able to gather useful material.

Table 23.1 presents the set of Knowledge Audit probes in the current version. These are listed in the column on the left. Table 23.1 also shows the types of follow-up questions that would be used to obtain more information. At the conclusion of the interview, this format becomes a knowledge representation. By conducting several interviews, it is possible to combine the data into a larger-scale table to present what has been learned.

However, formalization is not always helpful, particularly if it creates a barrier for conducting effective knowledge elicitation sessions. We do not recommend that all the probes be used in a given interview. Some of the probes will be irrelevant, given the domain, and some will be more pertinent than others. Furthermore, the follow-up questions for any probe can and should vary, depending on the answers received.

In addition, the wording of the probes is important. Militello et al. (1997) conducted an extensive evaluation of wording and developed a format that seemed effective. For example, they found that the term tricks of the trade generated problems because it seemed to call for quasi-legal procedures. Rules of thumb was rejected because the term tended to elicit high-level, general platitudes rather than important practices learned via experience on the job. In the end, the somewhat awkward but neutral term job smarts was used to ask about the techniques people picked up with experience.

Turning to another category, the concept of perceptual skills made sense to the research community but was too academic to be useful in the field. After some trial and error, the term noticing was adopted to help people get the sense that experience confers an ability to notice things that novices tend to miss. These examples illustrate how important language can be and how essential the usability testing was for the Knowledge Audit.

The wording shown in Table 23.1 is not intended to be used every time. As people gain experience with the Knowledge Audit, they will undoubtedly develop their own wording. They may even choose their own wording from the beginning. The intent in providing suggested wording is to help people who might just be learning how to do Cognitive Task Analysis interviews and need a way to get started. In structured experimentation, researchers often have to use the exact same wording with each participant. The Knowledge Audit, however, is not intended as a tool for basic research in which exact wording is required. It is a tool for eliciting information, and it is more important to maintain rapport and follow up on curiosity than to maximize objectivity.


We have learned that the Knowledge Audit is too unfocused to be used as a primary interviewing tool without an understanding of the major components of the task to be investigated. It can be too easy for subject-matter experts to just give speeches about their pet theories on what separates the skilled from the less skilled. That is why the suggested probes try to focus the interview on events and examples. Even so, it can be hard to generate a useful answer to general questions about the different aspects of expertise. A prior step seems useful whereby the interviewer determines the key steps in the task and then identifies those steps that require the most expertise. The Knowledge Audit is then focused on these steps, or even on substeps, rather than on the task as a whole. This type of framing makes the Knowledge Audit interview go more smoothly.

We have also found that the Knowledge Audit works better when the subject-matter experts are asked to elaborate on their answers. In our workshops, we encourage interviewers to work with the subject-matter experts to fill in a table, with columns for deepening on each category. One column is about why it is difficult (to see the big picture, notice subtle changes, etc.) and what types of errors people make. Another column gets at the cues and strategies used by experts in carrying out the process in question. These follow-up questions seem important for deriving useful information from a Knowledge Audit interview. In this way, the interview moves beyond gathering opinions about what goes into expertise and gets at more detail. For a skilled interviewer, once these incidents are identified, it is easy to turn to a more in-depth type of approach, such as the Critical Decision Method (Hoffman, Crandall, & Shadbolt, 1998). Once the subject-matter expert is describing a challenging incident, the Knowledge Audit probes can be used to deepen on the events that took place during that incident.
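As a rough illustration (not part of the chapter), the deepening table described above can be sketched as a simple data structure. The probe names and sample entries here are illustrative assumptions, not the exact ACTA wording:

```python
from dataclasses import dataclass

@dataclass
class ProbeRecord:
    """One row of a Knowledge Audit deepening table (columns as described in the text)."""
    probe: str                      # e.g., "noticing", "job smarts" (illustrative names)
    example: str = ""               # concrete incident elicited from the expert
    why_difficult: str = ""         # why it is difficult; typical errors people make
    cues_and_strategies: str = ""   # cues and strategies experts rely on

def to_table(records):
    """Render collected probe records as a plain-text table, one row per probe."""
    header = f"{'Probe':<12} | {'Why difficult / errors':<32} | Cues & strategies"
    rows = [f"{r.probe:<12} | {r.why_difficult:<32} | {r.cues_and_strategies}"
            for r in records]
    return "\n".join([header] + rows)

# Hypothetical entry from a single interview (illustrative content only)
records = [
    ProbeRecord(probe="noticing",
                example="Spotting a subtle weather change on approach",
                why_difficult="Novices miss early, weak cues",
                cues_and_strategies="Watch for trend breaks, not absolute values"),
]
print(to_table(records))
```

Combining the records from several interviews into one list would give the larger-scale table the chapter describes.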

For example, Pliske, Hutton, and Chrenka (2000) used the Knowledge Audit to examine expert-novice differences in business jet pilots using weather data to fly. Additional cognitive probes were applied to explore critical incidents and experiences in which weather issues challenged the pilot’s decision-making skills. As a result of these interviews, a set of cognitive demands associated with planning, taxi/takeoff, climb, cruise, descent/approach, and land/taxi were identified, as well as cues, information sources, and strategies experienced pilots rely on to make these difficult decisions and judgments.


One of the weaknesses of the Knowledge Audit is that the different probes are unconnected. They are aspects of expertise, but there is no larger framework for integrating them. Accordingly, it may be useful to consider a revision to the Knowledge Audit that does attempt to situate the probes within a larger scheme.

In considering the probes presented in Table 23.1, they seem to fall into two categories. One category is types of knowledge that experts have or “what experts know,” and the second category is ways that experts use these types of knowledge or “what experts can do.” Table 23.2 shows a breakdown that follows these categories. Probes from the current version of the Knowledge Audit are included in italics next to the aspect of expertise each addresses.

The left-hand column in Table 23.2 shows different types of knowledge that experts have. They have perceptual skills, enabling them to make fine discriminations. They have mental models of how the primary causes in the domain operate and interact. They have associative knowledge, a rich set of connections between objects, events, memories, and other entities. Thus, they have a sense of typicality allowing them to recognize familiar and typical situations. They know a large set of routines, which are action plans, well-compiled tactics for getting things done.

Experts also have a lot of declarative knowledge, but this is put in parentheses in Table 23.2 because Cognitive Task Analysis does not need to be used to find out about declarative knowledge.

The right-hand column in Table 23.2 is a partial list of how experts can use the different types of knowledge they possess. Thus, experts can use their mental models to diagnose faults, and also to project future states. They can run mental simulations (Klein & Crandall, 1995). Mental simulation is not a form of knowledge but rather an operation that can be run on mental models to form expectancies, explanations, and diagnoses. Experts can use their ability to detect familiarity and typicality as a basis for spotting anomalies. This lets them detect problems quickly. Experts can use their mental models and knowledge of routines to find leverage points and use these to figure out how to improvise. Experts can draw on their mental models to manage uncertainty. These are the types of activities that distinguish experts and novices. They are based on the way experts apply the types of knowledge they have. One can think of the processes listed in the right-hand column as examples of macrocognition (Cacciabue & Hollnagel, 1995; Klein, Klein, & Klein, 2000).
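The two-column breakdown described above can be sketched as a simple mapping. The items are taken from this paragraph; the exact grouping is an illustrative assumption, not a reproduction of Table 23.2:

```python
# A minimal sketch of the Table 23.2 split: keys are types of knowledge
# ("what experts know"); values are macrocognitive uses of that knowledge
# ("what experts can do"). Grouping is illustrative.
knowledge_to_uses = {
    "perceptual skills": ["make fine discriminations"],
    "mental models": ["diagnose faults", "project future states",
                      "run mental simulations", "manage uncertainty"],
    "associative knowledge (typicality)": ["spot anomalies",
                                           "detect problems quickly"],
    "routines": ["find leverage points", "improvise"],
}

for knowledge, uses in knowledge_to_uses.items():
    print(f"{knowledge}: {', '.join(uses)}")
```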

Also note that the current version of the Knowledge Audit does not contain probes for all the items in the right column.

Table 23.2 is intended as a more organized framework for the Knowledge Audit. It is also designed to encourage practitioners to devise their own frameworks. Thus, R. R. Hoffman (personal communication) has adapted the Knowledge Audit. His concern is not as much with contrasting experts and novices as with capturing categories of cognition relevant to challenging tasks, such as forecasting the weather. Hoffman’s categories are noticing patterns, forming hypotheses, seeking information (in the service of hypothesis testing), sensemaking (interpreting situations to assign meaning to them), tapping into mental models, reasoning by using domain-specific rules (e.g., meteorological rules), and metacognition.

One of the advantages of the representation of the Knowledge Audit shown in Table 23.2 is that the categories are more coherent than in Table 23.1. Although there is considerable overlap between the Knowledge Audit probes in Table 23.1 and the aspects of expertise in Table 23.2, we have not developed wording for all of the probes taking into account the new focus on macrocognition. Moreover, we have not yet determined the conditions under which we might want to construct a Knowledge Audit incorporating more of the items from the right-hand column of Table 23.2, the macrocognitive processes seen in operational settings.


The Knowledge Audit embodies an account of expertise. We contend that any Cognitive Task Analysis project makes assumptions about the nature of expertise. The types of questions asked, the topics that are followed up, and the areas that are probed more deeply all reflect the researchers’ concepts about expertise. These concepts may result in a deeper and more insightful Cognitive Task Analysis project, or they may result in a distorted view of the cognitive aspects of proficiency. One of the strengths of the Knowledge Audit is that it makes these assumptions explicit.

Because it distinguishes between types of knowledge and applications of the knowledge, the proposed future version of the Knowledge Audit is more differentiated than the original version. This distinction has both theoretical and practical implications. There are also aspects of expertise that are not reflected in the Knowledge Audit, such as emotional reactivity (e.g., Damasio, 1998), memory skills, and so forth. We are not claiming that the Knowledge Audit is a comprehensive tool for surveying the facets of expertise. Its intent was to provide interviewers with an easy-to-use approach to capture some important cognitive aspects of task performance.

There can be interplay between the laboratory and the field. Cognitive Task Analysis tries to put concepts of expertise into practice. It can refine and drive our views on expertise just as laboratory studies do. The Knowledge Audit is a tool for studying expertise and a tool for reflecting about expertise.



— — — — — — — — — — — — — —
Knowledge Management Author and Speaker, Founder of SIKM Leaders Community, Community Evangelist, Knowledge Manager https://sites.google.com/site/stangarfield/
