Thinking Through “Critical Intelligence” by Geoff Gibbins: Book Review
Why Being Human Still Matters in the Age of AI
Geoff Gibbins and I met on LinkedIn over a shared concern: how to keep human thinking alive in the age of AI.
While I run a philosophy salon and talk about life, love, and friendship, Geoff has taken the higher route: he has written a book and is launching courses to help professionals cultivate critical thinking at work, where AI is fast becoming a teammate.
I had the privilege of reading one of the earliest copies of his book Critical Intelligence and wanted to share a few insights that stayed with me.
The Diagnoses
The book begins with two diagnoses that most of us would probably agree with.
1. The rise of AI as a cognitive partner.
AI is no longer just a tool like a hammer. It is a genuine cognitive partner that we think with. As its computational power grows, AI increasingly overlaps with human strengths such as writing, problem-solving, and creating visual content.
2. The decline of human critical thinking.
We are in what Gibbins calls a critical thinking crisis. Our information ecosystem is polluted with misinformation, and schools still reward standardized testing over good reasoning. We swim in cognitive biases (RIP Daniel Kahneman), such as confirmation bias, availability bias, and anchoring bias, and AI now amplifies them by producing text that sounds authoritative and coherent while being entirely fabricated, or by acting as an agreeable “yes box” that reinforces our assumptions instead of challenging them.
So what do we do in the face of this?
Neither Optimism nor Doom
Gibbins’ stance is neither techno-optimistic nor dystopian.
He calls for an active and adaptive approach that is neither over-reliant on AI nor dismissive of it. This middle-ground position sounds reasonable, maybe even a little obvious.
But, as he interestingly notes, it is rare. Instead, businesses are rushing to adopt AI in the workplace without making corresponding investments in human skill. A 2023 PwC survey found that while 76 percent of business leaders are implementing AI tools, only 22 percent are investing in programs that build human oversight and evaluative capacity.
So how do we find the golden mean, the Aristotelian balance, between dependence and resistance?
Cultivating Critical Intelligence
Gibbins proposes developing what he calls Critical Intelligence, a framework that combines good old critical-thinking skills with new ones suited to human–AI collaboration.
1. The good old thinking skills
If we want to use AI effectively without losing ourselves in the process, we need to think about thinking (!)
Leave the first-level cognitive labor mostly to AI and focus instead on metacognition, reflecting on how we arrive at our conclusions.
That includes:
Scrutiny, asking whether your reasoning is sound.
Red-teaming, thinking like your adversary to identify weaknesses in your argument.
Decision journaling, tracking how you reach conclusions and noticing recurring biases.
Evidence-checking, backing your positions with multiple legitimate sources.
Distinguishing correlation from causation and avoiding the trap of omitted variables: Just because sales increased after adopting AI does not mean AI caused it. Maybe it coincided with the holiday season.
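The omitted-variable trap can be made concrete with a small sketch (my illustration, not an example from the book; all numbers are invented). Here, AI adoption happens to precede the holiday season, so a naive before/after comparison credits AI with a lift that seasonality fully explains:

```python
# Illustrative sketch: monthly sales where the holiday season, not AI
# adoption, drives the increase -- the omitted variable. Data is made up.

def mean(xs):
    return sum(xs) / len(xs)

# 12 months: AI was adopted in month 9, just before the holiday season.
months = list(range(1, 13))
ai_adopted = [m >= 9 for m in months]       # adoption flag
holiday = [m in (11, 12) for m in months]   # the omitted variable
# Baseline sales of 100, +50 during the holidays, +0 from AI itself.
sales = [100 + (50 if h else 0) for h in holiday]

# Naive comparison: sales after adoption look higher...
after = mean([s for s, a in zip(sales, ai_adopted) if a])
before = mean([s for s, a in zip(sales, ai_adopted) if not a])
print(f"after AI: {after:.0f}, before AI: {before:.0f}")  # 125 vs. 100

# ...but once we control for the season, AI makes no difference at all.
non_holiday_after = mean([s for s, a, h in zip(sales, ai_adopted, holiday)
                          if a and not h])
non_holiday_before = mean([s for s, a, h in zip(sales, ai_adopted, holiday)
                           if not a and not h])
print(non_holiday_after == non_holiday_before)  # True
```

The fix is exactly what Gibbins recommends: before crediting a cause, ask what else changed at the same time.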
2. The AI-specific skills
Gibbins urges teams to apply frameworks like CRAAP, RED, or the 4C method to evaluate AI outputs. Is the training data accurate, up to date, and diverse? Are the outputs appropriate for your purpose and audience?
He also warns against anchoring bias when writing prompts. For instance, if you ask, “How can I grow revenue by 5 percent this year?” AI will generate a roadmap without ever questioning whether 5 percent is the right goal in the first place.
Organizational Imperative
Gibbins argues that Critical Intelligence should not depend on individual initiative; it must be embedded organizationally. That means creating domain-specific evaluation criteria, review systems, repositories of verified information, and performance metrics for both human and AI contributors.
Models of Human–AI Collaboration
One of the book’s most useful sections lays out four models of human–AI partnership, organized by how much control humans retain.
Real leadership, Gibbins argues, lies in making deliberate, context-specific decisions about when and how to employ AI, which again requires meta-level thinking about process and purpose.
Human in the loop. Humans review every AI decision or output.
AI in the loop. Humans lead but consult AI for insights.
Human on the loop. Humans set goals and monitor overall progress. AI handles routine cases while humans step in for ambiguous or borderline cases.
Autonomous AI. AI operates independently.
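The "human on the loop" model in particular maps naturally onto a routing rule. Here is a minimal sketch of that pattern (my illustration, not code from the book; the confidence scores and threshold are hypothetical): AI decides routine cases automatically and escalates borderline ones to a human reviewer.

```python
# A minimal "human on the loop" sketch: AI handles routine cases,
# humans step in for ambiguous or borderline ones.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_decision: str
    confidence: float  # AI's self-reported confidence, 0.0-1.0 (hypothetical)

def route(case: Case, threshold: float = 0.9) -> str:
    """Accept confident AI decisions; flag borderline ones for a human."""
    if case.confidence >= threshold:
        return f"auto:{case.ai_decision}"
    return "escalate-to-human"

cases = [
    Case("c1", "approve", 0.97),  # routine: AI decides on its own
    Case("c2", "approve", 0.62),  # borderline: a human reviews it
]
print([route(c) for c in cases])  # ['auto:approve', 'escalate-to-human']
```

Where to set the threshold is itself one of the deliberate, context-specific leadership decisions Gibbins describes: a physician and a warehouse manager should not tolerate the same error rate.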
Assessing When to Involve AI
Gibbins offers a few guiding principles for deciding when and how much to rely on AI, including risk and complexity, time sensitivity, and scale, and suggests which model to adopt if you are a financial advisor, a marketing team, a warehouse manager, or a physician.
Why I Think This Book Matters
Gibbins’ book is timely and essential. It offers a framework for the kind of human–AI collaboration that is no longer optional.
As companies rush to adopt AI and lay off humans on the assumption that they have become redundant, his framework carves out a distinctly human role. It reminds us that our most valuable capacity lies in thinking about thinking, in reflecting on how we make decisions, the values that guide them, and the biases and blind spots that shape them.
AI is good at recognizing patterns in data.
But it is humans who think about thinking, who make value judgments about what is good and bad, who define the objectives themselves, and who can cope with the unexpected when patterns break.
Gibbins’ book, Critical Intelligence, is available on Amazon.