Santiago Bobrik

Critical thinking for devs in the age of AI

As developers, we question things for a living. We review pull requests looking for what could go wrong, we pick between libraries by weighing trade-offs, we distrust code that “works but I don’t know why.” We do it every day without giving it a name. But it turns out it has one: critical thinking.

I went down the rabbit hole on this topic, and what I found seemed worth sharing: I think it might be useful to other devs too. I’m not a philosopher or a psychologist. This is what I read, what I think, and the sources are at the bottom so you can draw your own conclusions.

Before we get into how this applies to our field, let’s look at what critical thinking actually is, why it’s harder to practice today, and how to apply it to everyday life online.


What is critical thinking?

In 1941, Edward Glaser defined it with three requirements: wanting to think well, knowing how to do it, and actually doing it. Sounds simple, but most of the time we fail at the first one. It’s not that we can’t reason. It’s that everything around us pushes us not to.

It’s not about thinking negatively, and it’s not about finding fault in everything. It’s a way of operating: questioning what you receive, evaluating evidence, considering alternatives, and being willing to change your mind.

Richard Paul and Linda Elder, from the Foundation for Critical Thinking, add something I found key: intellectual integrity. What does that mean? Holding your own ideas to the same standard you hold everyone else’s. It’s easy to spot the flaw in someone else’s argument, find the bug in a colleague’s code, point out what doesn’t add up in someone else’s proposal. The hard part is doing it with your own. Reviewing your own architecture with the same rigor you’d apply to someone else’s PR. Questioning your own opinion with the same scrutiny you’d give an unfamiliar source.

It’s harder today

Digital platforms are designed to keep you on autopilot. Daniel Kahneman, Nobel laureate in economics, distinguishes between two modes of thinking:

  • System 1: fast, intuitive, automatic. Runs without conscious effort.
  • System 2: slow, analytical, deliberate. Requires more cognitive resources.

Infinite scroll, intermittent reinforcement, FOMO — it’s all designed to keep you in System 1. System 2, the one you need for critical thinking, requires time and sustained attention. Exactly what the feed doesn’t give you.

We haven’t lost the ability to reason. We’re operating in an ecosystem designed to starve reasoning of the time it requires.

Some moments are harder than others: when something confirms what you already believe (you don’t question it because you’re comfortable), when you’re tired after eight hours of work (you consume without filtering), when the source feels trustworthy (you let your guard down). These are exactly the situations where critical thinking matters most and where you least feel like doing it.

How do I apply it?

While researching I came across five-step models, seven-step models, entire frameworks. In the end I settled on something simpler: a single question.

Why do I believe this?

If the answer is “because I thought it through and it holds up against the evidence I have,” good. If the answer is “because I read it somewhere,” “because someone I respect said it,” “because everyone says so,” or “because the AI suggested it,” that’s an opportunity to think critically.

It applies to code and it applies outside of code. The difference is that in code the feedback is immediate: if something’s wrong, it breaks. Outside of code there’s no failing test, no compiler warning. It’s harder and it takes a conscious effort.

How do I validate everything I see?

You can’t. It’s impossible to verify every tweet, every article, every opinion. But there are strategies that help:

  • Look for a second independent source. Not another tweet. A different source that doesn’t belong to the same bubble.
  • Wait for the dust to settle. When something is brand new and there’s no context yet, time filters out the noise: what remains after a few days probably had substance. Herbert Simon, Nobel laureate in economics, put it best: sometimes the best decision is not to consume. That said, this strategy fails when the window of opportunity closes fast.
  • Evaluate the source when you don’t know the field. You can’t judge whether a biology paper is correct, but you can look at who wrote it, whether it’s peer-reviewed, whether there’s consensus. It’s the same thing we do when evaluating a library we don’t know: we check who maintains it, how many stars it has, whether there are open issues.
  • Acknowledge what you chose not to verify. You don’t need to verify everything. You need to know what you chose not to verify. But if you always fall back on this step, it becomes a comfortable excuse not to think.

So far, critical thinking applied to life in general. Now let’s see what happens when we cross it with what we do every day.

Devs already do this (but it’s not enough)

If you’re a developer, you’re probably already thinking critically at work. You don’t call it that, but you do it:

  • You see a pull request and think “this works, but why didn’t they use something else?” — you’re questioning assumptions.
  • A test passes but something feels off — you’re applying metacognition (thinking about your own thinking).
  • You pick between two libraries by comparing trade-offs instead of grabbing the first Google result — you’re gathering evidence and evaluating.
  • You find something odd in the code and go look for docs, patterns, alternatives — you’re verifying before acting.

The development environment pushes you toward critical thinking: the compiler, tests, and production either validate or destroy your hypothesis. Few professions have critique built into the daily workflow (code reviews, for example).

The gap: local vs. global critical thinking

But there’s a problem. Most of us operate with local critical thinking: debugging, refactoring, performance, choosing between technical solutions. We question within the system. What’s harder is global critical thinking: questioning requirements, product decisions, understanding why we’re building what we’re building.

Example: you’re asked to optimize an endpoint that takes 3 seconds.

  • Local thinking: you cache, optimize the query, reduce the payload.
  • Global thinking: why does the frontend need this endpoint? Could it be compensating for a design problem somewhere else? Does the product actually need to show this data in real time, or is that an assumption nobody questioned?

Sometimes the best optimization is eliminating the need.

The real leap happens when you go from solving problems to questioning whether those problems should exist.

Where does critical thinking stand in the age of AI?

We use AI every day for work. And that changes the rules.

Advait Sarkar, a researcher at Microsoft Research, gave a TED talk that raises an uncomfortable point. He says the typical knowledge worker no longer engages with the materials of their craft. They summarize emails with AI, generate drafts automatically, delegate data analysis. They’ve become a “professional validator of robot opinions”: reviewing ideas instead of creating them. He calls it outsourced reasoning.

Translated to development: how many times do we ask the AI “write me the function that does X” without first thinking about how we’d solve it ourselves? How many times do we accept a suggested architecture because “it sounds right” without questioning it with the same rigor we’d apply to a colleague’s PR?

The data backs up the concern:

  • Workers report putting less effort into critical thinking when using AI. The effect is stronger when confidence in AI is high and self-confidence is low. (Lee et al., CHI 2025)
  • Over 80% of ChatGPT users in an MIT study couldn’t recall key content from their own essays after writing them with the tool. (MIT Media Lab, “Your Brain on ChatGPT”)

How I use AI without stopping thinking

Based on what Sarkar proposes and my own experience as a dev, this is what I do:

1. Read and process it yourself first.

Instead of asking the AI “summarize this doc,” read it yourself and then use the AI to discuss what you read. Instead of “explain this error,” read the stack trace, form a hypothesis, and then test it. The difference is who processes the material: your brain or the machine.

2. Ask it to challenge you, not to obey you.

Instead of “write me the architecture for this service,” ask “what’s wrong with this architecture I designed?” Sarkar calls these provocations. If you know enough to reject the provocation with confidence, the system did its job. If you can’t reject it, you just learned something.

3. Ask yourself if you’re actually thinking.

Am I accepting this solution because it makes sense or because it sounds good? Are there alternatives I’m not considering? Am I evaluating what the AI tells me or just accepting it because it sounds reasonable? The AI can help generate those questions, but the work of answering them is yours.

None of this means you shouldn’t use AI from the start. You can use it to explore: review codebases, ask questions about the domain, understand context. That’s using it as a consultant. What you want to avoid is jumping straight to “what’s the best way to do this?” without thinking first. The flow that works for me: explore and gather information (with or without AI), formulate your own solution, then validate with the AI by pushing for provocations. You show up to the chat with a position to defend, not a blank question to fill.

Closing thoughts

I don’t claim any of this is absolute truth. It’s what I found through research and what I apply in my day-to-day as a dev. Critical thinking isn’t just an individual act either: the strongest provocations tend to come from people, not machines. A conversation with someone who thinks differently can do more than ten sessions with an LLM.

If any of this is useful to you, great. If not, I at least hope it makes you ask: why do I believe what I believe?


This article was thought up by a human and drafted with the help of an AI.


Sources

  • Glaser, E. M. (1941). An Experiment in the Development of Critical Thinking. Columbia University.
  • Paul, R. & Elder, L. Foundation for Critical Thinking. criticalthinking.org
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • Simon, H. A. Attention economy concept. Nobel Memorial Prize in Economic Sciences, 1978.
  • Sarkar, A. (2025). “How to Stop AI from Killing Your Critical Thinking.” TEDAI Vienna.
  • Lee, M. et al. (2025). “The Impact of Generative AI on Critical Thinking.” CHI 2025.
  • MIT Media Lab. “Your Brain on ChatGPT.”