The Rise of Pseudo-Expertise: What We Lose When We Outsource Our Thinking by Dr Jared Powell, PhD
Feb 17, 2026
I was reviewing a manuscript recently, and on the surface, it was a well-written and compelling piece. It was coherent, seemingly built on solid premises and methods, and reached a logical conclusion. But when I asked a follow-up question that slightly deviated from the main thesis… silence. Not the productive silence of someone working through a problem in real time, but the empty, awkward silence of someone who hadn't done the thinking required to genuinely understand the topic.
They'd very likely outsourced their thinking and writing to an LLM.
This essay is not about condemning AI or LLM use. I use these tools myself, and what they can do is sometimes beyond belief. My concern is that we knowledge workers are sleepwalking into a generation of pseudo-experts: practitioners, students, and even educators who can generate sophisticated-sounding content about subjects they don't deeply understand.
I Want It, And I Want It Now!
Every time we encounter a problem where we can't articulate an idea neatly, where competing ideas create tension, where the evidence doesn't really add up, we face a choice. Sit with that discomfort and think it through, or run to an LLM and get an answer within seconds.
LLMs in 2026 are phenomenal; hell, you can get 10,000 words on almost any topic you can imagine with the press of a few keys. Anyone can sound knowledgeable. Creating the façade of understanding has never been easier. We could call this 'the great democratisation of pseudo-expertise'. You can publish that LLM-generated blog post, send that email, create that course content, and pass yourself off as an authority (no one will know, right?).
But you've bypassed the very cognitive struggle that builds expertise. Danger awaits.
Think about the last time you wrestled with a complex clinical problem - the patient who didn't respond as expected, the evidence that contradicted your worldview, the technique that usually works but didn't this time. That discomfort wasn't just wasted, unwanted emotion; it was THE mechanism of learning. When you sat with confusion, searched the literature with specific questions born from a real intent to understand, and tried different solutions until something clicked, that's when understanding formed.
You Are Not a Symbol Manipulator, You Are a Thinker
An LLM generating 1,000 words about rotator cuff pathology isn't thinking about shoulder anatomy or biomechanics, or anything at all. It predicts statistically likely tokens based on patterns in its training data. There is no deeper layer of thought.
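To make "predicting statistically likely tokens" concrete, here is a deliberately toy sketch in Python. It is nothing like a real transformer - just a hand-written table of invented next-word probabilities - but it shows how fluent-looking output can be produced by a process that represents nothing about shoulders:

```python
import random

# A toy "language model": a lookup table of next-word probabilities.
# The words and numbers are invented for illustration; a real LLM learns
# billions of such statistical associations from its training text.
NEXT_WORD = {
    ("rotator", "cuff"): {"pathology": 0.4, "tear": 0.35, "tendinopathy": 0.25},
    ("cuff", "pathology"): {"is": 0.5, "commonly": 0.3, "presents": 0.2},
}

def predict_next(context):
    """Pick the next word in proportion to how often it followed the context."""
    dist = NEXT_WORD[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(predict_next(("rotator", "cuff")))  # e.g. 'pathology' - no anatomy involved
```

Scale that table up to billions of learned associations and you get prose that reads like expertise, generated by pure symbol statistics.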
Philosophers call this the Chinese Room argument, John Searle's famous thought experiment: imagine someone in a room receiving Chinese characters and following instruction manuals to send back appropriate responses. To outside observers, perfect Chinese communication is happening. But the person in the room understands no Chinese at all; they're just following rules to manipulate symbols. There is no comprehension.
When you outsource your thinking to an LLM and pass along its output without critical revision, you become that person in the room. You can produce seemingly correct answers while understanding almost nothing. Unlike the philosophical debate about whether machines could ever truly understand (where we genuinely don't know the answer), we can know whether you understand. The test is simple: put yourself in front of a complex patient, field unexpected questions, and try to reason when the situation deviates from the script.
Knowledge tends to reveal itself under variation. Expertise proves itself under pressure - in live debate and teaching situations where someone asks "why?" (my 2-year-old's beautifully incessant "whys" remind me daily how little I really know), and you need to go deeper than the standard, rehearsed explanation. Pseudo-expertise crumbles in these situations because there's no substance beneath the bluster.
You Can't Outsource Your Thinking Reps
MSK clinicians should understand this intuitively. You can't get strong by watching someone else lift weights. Your nervous and MSK systems need the reps, the failures, the stimulus.
Learning works similarly.
Every time we delegate a difficult problem or assignment to an LLM, we miss a rep. We don't build the thinking skills that let us work through similar problems independently, and we may miss out on fine-tuning that somewhat mystical sense of when an explanation is truly coherent versus when it merely sounds good.
Use AI As a Co-Intelligence
Before you prompt an LLM with a question, ask yourself: Have I actually tried to think this through? Have I sat with the confusion? What exactly do I not understand?
Use AI to stress-test ideas you've already developed or need help refining. Use it to think through and articulate ideas you understand but struggle to explain. Use it to explore parallel possibilities once you've exhausted a few avenues of thought.
If you want to develop true expertise, don’t rush to LLMs to solve all your problems.
Our patients deserve practitioners with real expertise, not pseudo-expertise, and our students deserve teachers who've grappled with the material and come out victorious, or more aptly, intact.
The Complete Clinician
Tired of continuing education that treats clinicians like children who can’t think for themselves?
The Complete Clinician was built for those who want more.
It's not another lecture library; it's a problem-solving community for MSK professionals who want to reason better, think deeper, and translate evidence into practice.
Weekly research reviews, monthly PhD-level lectures, daily discussion, and structured learning modules to sharpen your clinical edge.
Join the clinicians who refuse to be average.