Psychology of Intelligence Analysis
I have always wondered how information analysis works in government intelligence agencies or corporate business intelligence departments. A few weeks ago, I saw someone recommend Psychology of Intelligence Analysis, so I read through it with the help of LLM tools. It’s an interesting but somewhat verbose book that, in my opinion, could be well-summarized in two or three articles.
Thanks to the power of LLMs, I was able to finish the entire book and delve into the sections I found most interesting. Here are my reading notes.
Outline
The book is divided into three parts. Part I is introductory, but I found it overlaps significantly with Part III. Part II is quite interesting, so I will spend more time on it.
Part I - Mental Machinery:
- Thinking processes: Intelligence analysis is fundamentally mental work, but analysts often lack conscious awareness of their own thinking processes.
- Perception: We tend to see what we expect to see; new information is often assimilated into existing beliefs rather than changing them.
- Memory: How information is organized in memory affects analytical ability; experts differ from novices in their mental schemas, not just raw memory capacity.
Part II - Analytical Tools:
- Strategies for judgment: Most analysis follows a “satisficing” approach (accepting the first reasonable explanation) rather than systematically evaluating all alternatives.
- Information limits: More information doesn’t necessarily improve accuracy but does increase analyst confidence, often leading to overconfidence.
- Keeping an open mind: Mental ruts form easily. Techniques like devil’s advocacy, role-playing, and examining assumptions can help break them.
- Structuring problems: Complex problems should be decomposed and externalized (e.g., written down) to overcome working memory limitations.
- Analysis of Competing Hypotheses (ACH): Heuer’s signature method involves systematically evaluating multiple explanations against evidence to identify which hypothesis has the least disconfirming evidence.
Part III - Cognitive Biases:
- Evidence evaluation: Vivid, concrete information has a disproportionate impact; the absence of evidence is often ignored, and consistency is overvalued.
- Cause and effect: We have a tendency to see patterns where none exist, overestimate the influence of centralized planning, and assume that causes resemble their effects.
- Probability estimation: The use of mental shortcuts (like availability and anchoring) can be misleading, and verbal expressions of probability are often ambiguous.
- Hindsight bias: After outcomes are known, events seem more predictable than they actually were.
Takeaways
The book’s central thesis, emphasized right from the beginning, is that “intelligence analysts must understand themselves before they can understand others.” Everyone thinks, but not everyone is aware of their own thinking processes.
The author spends many chapters pointing out common problems in human perception, thinking, and memory:
- We tend to perceive what we expect to perceive, not necessarily what is real.
- We resist accepting randomness and try to find patterns even in random data.
- We overestimate the extent of centralized planning and rational coordination.
- We give too much weight to the earliest evidence and to the most recent evidence.
- Hindsight bias (overestimating predictability after an event) hinders our ability to learn from the past.
Another common problem is the satisficing approach to information analysis. Satisficing is not inherently wrong, but the author argues that better methods exist. It resembles a greedy algorithm in computer science (see the sketch after this list):
- Formulate a hypothesis based on initial evidence.
- Look for, or even selectively emphasize, evidence that supports this hypothesis until the support seems “good enough”, then stop.
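To make the greedy analogy concrete, here is a minimal Python sketch of satisficing as I read it. The function, threshold, and evidence names are my own illustration, not anything from the book: the analyst commits to the first hypothesis whose supporting evidence crosses a “good enough” bar and never examines the alternatives.

```python
# Illustrative sketch only (not from the book): satisficing as greedy search.
# The first hypothesis whose support crosses a threshold is accepted,
# and every remaining alternative is left unexamined.

def satisfice(hypotheses, supporting_evidence, good_enough=3):
    """Return the first hypothesis with 'enough' supporting evidence.

    hypotheses: hypothesis names, in the order they come to mind.
    supporting_evidence: dict mapping hypothesis -> evidence items that seem to support it.
    """
    for h in hypotheses:
        if len(supporting_evidence.get(h, [])) >= good_enough:
            return h  # stop at the first "good enough" explanation
    return None  # nothing crossed the threshold

# H1 comes to mind first and has three supporting items, so it wins,
# even though H2 would explain more of the evidence.
evidence = {
    "H1": ["e1", "e2", "e3"],
    "H2": ["e1", "e2", "e3", "e4", "e5"],
}
print(satisfice(["H1", "H2"], evidence))  # -> "H1"
```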
A better method is presented in Chapter 8 - Analysis of Competing Hypotheses (ACH), which I hope to try out soon.
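For contrast, here is a minimal sketch of the ACH bookkeeping as I understand it from Chapter 8. The matrix values, ratings, and helper names are my own illustration: each piece of evidence is rated against every hypothesis, and the hypothesis with the fewest inconsistencies wins, rather than the one with the most confirmations.

```python
# Illustrative sketch of the ACH idea (my reading, not the book's exact procedure).
# Each cell rates one piece of evidence against one hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = not applicable.
# ACH favors the hypothesis with the *fewest* inconsistencies.

ach_matrix = {
    # evidence: {hypothesis: rating}
    "e1": {"H1": "C", "H2": "C", "H3": "I"},
    "e2": {"H1": "C", "H2": "I", "H3": "I"},
    "e3": {"H1": "I", "H2": "C", "H3": "C"},
    "e4": {"H1": "C", "H2": "C", "H3": "C"},  # consistent with everything: low diagnostic value
}

def inconsistency_scores(matrix):
    """Count the inconsistent ("I") ratings for each hypothesis."""
    scores = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            scores[hypothesis] = scores.get(hypothesis, 0) + (rating == "I")
    return scores

scores = inconsistency_scores(ach_matrix)
print(scores)                       # {'H1': 1, 'H2': 1, 'H3': 2}
print(min(scores, key=scores.get))  # the least-disconfirmed hypothesis
```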
I personally found Chapter 6 - Keeping an Open Mind to be the most interesting, especially its practical suggestions for staying creative and receptive to new ideas. I believe these practices could be equally valuable in business and research:
Questioning Assumptions:
- Sensitivity Analysis: Which of your assumptions are most critical to your conclusions?
- Alternative Models: Actively seek out people who disagree with you.
- Avoid Mirror-Imaging: Don’t assume others think like you. For example, in 1977 some analysts dismissed evidence of an apparent South African nuclear test site because “they have no enemies to use it against.”
Seeing Different Perspectives:
- Thinking Backwards: Start with an unexpected outcome and work backwards to explain how it could have happened.
- Crystal Ball: Imagine a perfect source tells you your primary assumption is wrong. Develop a scenario explaining how.
- Role Playing: Physically act out other perspectives instead of just imagining them.
- Devil’s Advocate: Assign someone to formally argue against the prevailing view.
Organizational Environment: In addition to individual practices, the author stresses that the organizational environment is crucial for innovation. Personal creativity alone is not enough. Key signs of an innovative organization include:
- Employees have responsibility for initiating new activities.
- Employees have control over their decisions.
- Employees feel secure in their professional roles.
- Management stays out of the way.
- Projects are kept small.
- Activities are diverse.
Thoughts
This book’s suggested practices resemble the scientific method I learned in middle school: identify a problem, propose a hypothesis, collect evidence, and conduct analysis. My science teacher always told us that conducting experiments was the most important part. The author of this book, however, prioritizes hypothesis and analysis over evidence collection, arguing that evidence is usually abundant. I personally believe that identifying the right problem is the most critical step, but a well-tested methodology for verifying solutions remains essential.
This book was first published in 1999, long before the current wave of AI and LLMs. It’s striking how terms from intelligence analysis, such as “perception,” “memory,” and the “attention” of a mental “model,” are now central to LLM research, which likewise seeks to understand the behavior of its “models.” Will intelligence analysis research be applied to, or even converge with, AI research?
Notes about LLM chatbots: As an experiment, I asked Gemini, ChatGPT, and Claude to summarize the book from a PDF link, with the following results:
- Gemini pretended to have read the book, but it didn’t seem to access the link, and its summary was largely inaccurate.
- ChatGPT provided a lazy overview (only slightly more detailed than the table of contents). When asked for more detail, it again failed to access the link.
- Claude was the best. It gave a short, meaningful summary and, when prompted, provided a solid, in-depth review of specific sections. Its only weakness was a noticeable slowdown in generation speed as the context length grew.
This blog was polished by Gemini-pro-2.5-preview.