AI is Learning to Think — Will Humans Forget How?

Human Thought & AI Thinking: How They Converge and Diverge

Greg Twemlow
6 min read · Mar 2, 2025

The newest stage of LLMs is AI that “thinks,” breaking down problems step by step like a human would. These are no longer just text predictors — they are reasoning machines, structuring thought, reconsidering logic, and even self-correcting before delivering an answer.


AI is Learning to Think article by Greg Twemlow, image by Image_FX

Having followed this shift closely, I can attest to its massive implications.

These new reasoning models (OpenAI's o1, DeepSeek R1, Gemini 2.0 Flash Thinking, Claude 3.7 Sonnet, and Grok 3) don't just generate responses. They analyse, revise, and refine their thinking. This fundamental leap in how we interact with AI raises a crucial question: will these advancements make us sharper, or will we increasingly outsource our reasoning to machines? If we want to keep our cognitive independence, we must engage with AI actively.

However, this engagement also holds the potential for a Renaissance of reasoning, in which AI amplifies and refines human thought rather than replacing it.

The race for reasoning AI has accelerated remarkably, particularly since the introduction of the o1 family by OpenAI. DeepSeek, a lesser-known player, swiftly matched and sometimes even surpassed o1’s capabilities with a model that used fewer resources. Google entered the competition with Gemini 2.0 Flash Thinking, while xAI introduced Grok 3 with a dedicated ‘thinking mode.’ Anthropic’s Claude 3.7 Sonnet took things a step further, becoming the first hybrid model capable of switching between instant responses and deep reasoning without needing a separate model. This rapid evolution in AI is a clear sign that every major AI lab is now focused on optimising reasoning.

These models use techniques like chain-of-thought reasoning: breaking down problems into logical steps, weighing options, examining assumptions, and self-correcting. DeepSeek R1 was even trained with reinforcement learning that rewards sound reasoning. These models aren't just predicting the next word; they are simulating structured reasoning, revising their conclusions, and even flagging their uncertainties. For the first time, we're watching AI think in real time.
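To make the technique concrete, here is a minimal sketch of classic chain-of-thought prompting, where the step-by-step behaviour is requested explicitly rather than built in. It assumes the OpenAI Python SDK and an API key in your environment; the model name and prompt wording are my own illustrative choices, not any lab's prescribed recipe.

```python
# A minimal sketch of eliciting chain-of-thought reasoning from a model,
# assuming the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; newer reasoning models do this unprompted
    messages=[
        {
            "role": "system",
            "content": (
                "Reason step by step. List your assumptions, check your "
                "arithmetic, then state a final answer on its own line."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

With the newer reasoning models, that decomposition happens internally, before the answer is delivered, with no such coaxing required.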

But that raises an even bigger question: How do we adapt to this new form of AI?

How to Engage AI as a Thinking Partner

Instead of just highlighting the risks, we need to focus on solutions. The key to avoiding cognitive atrophy isn’t to reject AI but to engage it to amplify human reasoning rather than replace it. This begins with structured questioning. Here are two categories of prompts designed to help users maintain active engagement:

1. Validation Prompts (Testing AI’s Reasoning)

These prompts help challenge AI's assumptions, verify its logic, and prevent passive acceptance of flawed reasoning; a sketch showing how to chain them onto an AI answer programmatically follows the list.

  • What assumptions are you making in your response?
  • Can you provide counterarguments or alternative viewpoints?
  • How would an expert in [field] critique this answer?
  • What evidence supports this reasoning, and where might it be weak?
  • If I asked this question differently, how would your answer change?
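Here is that sketch: a minimal example of automating validation follow-ups within a single conversation, so the model must defend its first answer. It assumes the OpenAI Python SDK; the model name, the prompt subset, and the helper function are my own illustrative choices.

```python
# A minimal sketch of pressing a model with validation follow-ups,
# assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

VALIDATION_PROMPTS = [
    "What assumptions are you making in your response?",
    "Can you provide counterarguments or alternative viewpoints?",
    "What evidence supports this reasoning, and where might it be weak?",
]

def validated_answer(question: str, model: str = "gpt-4o") -> list[str]:
    """Ask a question, then challenge the answer with validation prompts,
    keeping the whole exchange in one conversation so context carries over."""
    messages = [{"role": "user", "content": question}]
    transcript = []
    reply = client.chat.completions.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    transcript.append(answer)
    for probe in VALIDATION_PROMPTS:
        messages.append({"role": "user", "content": probe})
        reply = client.chat.completions.create(model=model, messages=messages)
        critique = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": critique})
        transcript.append(critique)
    return transcript

# Example usage:
# for part in validated_answer("Should our team adopt a four-day week?"):
#     print(part, "\n---")
```

The point of automating the follow-ups is not to outsource the scepticism; it is to guarantee the sceptical questions get asked, so the human reader always sees the assumptions and counterarguments alongside the answer.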

2. Mutual Thinking Prompts (Co-Reasoning with AI)

These prompts encourage AI to act as a cognitive sparring partner, helping users refine and challenge their own thinking. When AI does more than hand over answers and instead joins a dynamic exchange of ideas, the collaboration exercises our cognitive abilities rather than replacing them; a sketch of such a session follows the list.

  • Let’s think through this step by step — where should we start?
  • What are the potential unintended consequences of this approach?
  • How would this reasoning change under different circumstances?
  • What mental models could I use to develop a completely original idea?
  • What biases might I have that could influence how I interpret this information?
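And here is the promised co-reasoning sketch. This time the human stays in the loop turn by turn, while a system prompt casts the model as a sparring partner; as before, the SDK, model name, and prompt wording are assumptions made for illustration.

```python
# A minimal sketch of an interactive co-reasoning session, assuming the
# OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SPARRING_PARTNER = (
    "Act as a cognitive sparring partner. Think through problems step by "
    "step with me, surface unintended consequences, and point out biases "
    "in my interpretation rather than simply agreeing."
)

def co_reason(model: str = "gpt-4o") -> None:
    """Run a turn-by-turn dialogue; the human types each move."""
    messages = [{"role": "system", "content": SPARRING_PARTNER}]
    print("Co-reasoning session (enter a blank line to stop).")
    while True:
        turn = input("you> ").strip()
        if not turn:
            break
        messages.append({"role": "user", "content": turn})
        reply = client.chat.completions.create(model=model, messages=messages)
        content = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": content})
        print("ai>", content)

# co_reason()  # open with one of the mutual thinking prompts above
```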

Prompts like these improve the accuracy of AI outputs and, just as importantly, train the user to think critically. Over time, AI isn't just generating answers; it's sharpening human thought, pushing us toward a higher level of reasoning.

The Shift in AI-Human Interaction

Historically, AI operated as a black box. It gave an answer, and we either accepted or rejected it without understanding how it got there. Now, we can follow its logic, meaning we must think more critically, not less. Much like interpreting search results, we must now learn to evaluate AI’s reasoning.

This shift is forcing me to rethink my approach. Since mid-2023, I’ve relied on prompt engineering tricks — “Think step by step” or “Let’s solve this systematically.” With reasoning models, those workarounds are obsolete. Instead, my role shifts from issuing commands to shaping a conversation. I’m no longer just a user; I’m a thought partner.

At first, I was excited. Watching AI self-correct in real time felt like unlocking a new superpower. I could see where its logic faltered and adjusted my thinking accordingly. But then I wondered: will people use this to become better thinkers? Or will they become passive, assuming that if the AI “thinks,” they don’t have to?

The real danger lies in AI reasoning becoming a crutch. If an AI's reasoning appears good enough, we may stop questioning it, and subtly flawed logic could slip by because it sounds convincing. Instead of using AI as a cognitive amplifier that sharpens our own reasoning, we would be leaning on it to reason for us.

Within a decade, entire industries could shift from human-driven analysis to unchallenged AI conclusions. And the moment AI starts making subtle mistakes — or worse, gets manipulated — we may find that we’ve lost our collective ability to spot the problem.

What the Experts Are Seeing: A Psychologist, an Instructional Designer, and an Educator Weigh In

A highly experienced psychologist raised concerns about cognitive offloading and automation bias — the tendency to trust AI-generated conclusions without verifying them. Over time, unused reasoning skills could atrophy, making it harder for humans to think independently. If AI consistently supplies polished, logical-sounding answers, users may become habituated to accepting them without scrutiny. The long-term risk is a slow erosion of metacognitive abilities, where people rarely question their thinking.

An instructional designer pointed out that AI reasoning requires a new kind of literacy. Just as media literacy helps individuals detect misinformation, AI literacy must be developed to help users critique AI-generated reasoning. Without structured training, most users will default to passive consumption. One solution is to embed metacognitive reflection techniques into AI interactions, prompting users to engage with AI-generated logic rather than accept it blindly. Promoting this kind of literacy is what will let people navigate an AI-driven future with confidence.

A Year 12 humanities teacher warned that students already accept AI-driven reasoning at face value. Many young people now feel they don't need to think critically when AI can provide structured reasoning for them. Students using AI for research, essay outlines, and argument development rarely stop to question the assumptions baked into AI-generated content. Without intervention, an entire generation could lose the ability to construct logical arguments independently. Educators must introduce structured questioning techniques so that students interrogate AI reasoning rather than simply absorb it.

A Renaissance of Reasoning or General Decline of Thought?

In the classical world, Socratic dialogue was the foundation of knowledge-building. The great thinkers of Athens — Socrates, Plato, and Aristotle — developed reasoning through constant questioning, debate, and intellectual sparring. Rome carried this forward, refining logic, rhetoric, and governance based on structured reasoning. AI, if used correctly, could recreate and amplify that intellectual culture.

Instead of waiting for inspiration from a single mentor or philosopher, AI could become an infinite Socratic interlocutor, challenging our biases, refining our arguments, and deepening our understanding of complex issues. The difference? This time, it’s not just a privileged few engaging in these dialogues — it’s everyone with access to AI.

This is a Kairos moment for humanity. Do we sit back and allow a general cognitive decline, or do we seize the opportunity to elevate human intellect beyond anything seen in history: a 21st-century Renaissance of reasoning?

AI is creating a new cognitive divide. Those who use AI to refine their thinking will shape the future. Those who let AI think for them will struggle to keep up. The choice isn’t whether to use AI — it’s whether you use it to sharpen your mind or surrender your reasoning.

About the author:

📌 Greg Twemlow, Founder of XperientialAI & Designer of the Fusion Bridge

XperientialAI: AI-powered learning for leaders, educators, and organisations.

Fusion Bridge: My latest work — building AI-enabled frameworks for innovation & leadership.

🌎 Read more of my 300+ articles: https://gregtwemlow.medium.com/

📧 Contact: greg@xperiential.ai or greg@fusionbridge.org

