Essay
The Scaffolding Deficit
Can AI actually make us smarter?
What if AI could actually make us smarter? Not in the loose, output-driven productivity sense. Smarter in the deeper, biological sense: better at reasoning through hard problems, more creative at baseline, more rigorous by habit, and more capable even when the tools are turned off.
That question receives little serious attention. The AI-in-cognition discourse is understandably dominated by its anxious twin: what is AI taking from us? And the AI-in-practice conversation is dominated by the apparent plateauing of models and the gains to be made from better model harnesses and scaffolding — how to apply more traditional computing techniques around models (e.g., cron jobs, evaluations and validations, data interaction protocols) in ways that improve overall AI-embedded systems.
While we’re rightly worried about the “bear-case” for human cognitive degradation as a function of ever-smarter AI at our fingertips, is there a “bull-case” for humans to grow more intelligent alongside AI, and as a function of using it?
Can we deliberately design human-AI systems that build cognitive muscle rather than quietly atrophy it?
As I read a study published this month by researchers from Carnegie Mellon, Oxford, MIT, and UCLA, I naturally began worrying about cognitive downside-risk protection, to continue the finance metaphor.
In a randomized controlled trial of over 1,200 people, researchers found that while AI improved immediate performance on a task, just 10 to 15 minutes of reliance on it caused a measurable drop in unassisted performance afterward. More troublingly, among participants who had just been relying on AI, the willingness to persist through difficulty eroded almost instantly.
The headline takeaway: Carelessly deployed AI doesn’t just substitute for our thinking; it actively reduces our tolerance for the friction required to do hard work.
Instead of settling for a defensive, harm-reduction strategy in the face of these findings, we need a positive case for AI-augmented brain conditioning.
The Human-AI Scaffolding Deficit
Right now, the tech industry (myself included) is obsessing over the machinery that constrains how AI behaves beyond the model itself.
But we have almost zero scaffolding around our own usage.
We are treating AI like a utility cabinet we can blindly reach into, assuming our cognitive baseline will remain untouched. If we want to move from “not getting dumber” to actual cognitive augmentation, we need a framework for brain-AI optimization.
Like nutritional science for the mind: deliberate regimens for building cognitive muscle, aided by AI.
Here are five ways we can build that human-side scaffolding today:
1. The Forced Articulation Protocol (Writing to Clarify)
Hot take: the most powerful feature of an LLM isn’t the answer it gives, but the clarity it demands from you. By treating the prompt not as a simple search query, but as a forced articulation of your mental model, the act of writing to the AI becomes the cognitive workout. Forcing yourself to define the parameters, context, constraints, and nuance of a problem before the AI generates a single token makes you a clearer communicator by default. The rigor is in translating fuzzy thoughts into precise instructions.
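One way to make that rigor concrete is to refuse to prompt at all until every part of the mental model is written down. Here is a minimal sketch of that idea: a template that won't render a prompt until goal, context, constraints, and nuance are each articulated. The class and field names are purely illustrative, not any standard prompting API.

```python
from dataclasses import dataclass

# A toy "forced articulation" template. The point is not the output string,
# but that you cannot build the prompt without first writing down each part
# of your mental model.

@dataclass
class ArticulatedPrompt:
    goal: str            # what a good answer must accomplish
    context: str         # background the model cannot infer
    constraints: list    # hard limits on the answer (length, format, scope)
    nuance: str          # the subtlety a naive answer would miss

    def render(self) -> str:
        # Refuse to produce a prompt until every part is articulated.
        for name in ("goal", "context", "nuance"):
            if not getattr(self, name).strip():
                raise ValueError(f"Articulate '{name}' before prompting.")
        if not self.constraints:
            raise ValueError("List at least one constraint before prompting.")
        lines = [
            f"Goal: {self.goal}",
            f"Context: {self.context}",
            "Constraints:",
            *[f"- {c}" for c in self.constraints],
            f"Nuance to respect: {self.nuance}",
        ]
        return "\n".join(lines)
```

The deliberate friction is the `ValueError`: the template fails loudly on a fuzzy, half-formed request, which is exactly the workout the protocol prescribes.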
2. Artifact-Driven Clarity (The Output Critique)
In pre-AI projects, we often realized what we actually wanted only after slogging through a first version of a product or a document. We revise our definition of “good” by reacting to what we produced. AI collapses this loop, giving you a fully-baked (though not necessarily high-quality) artifact instantly. But the cognitive muscle is built in the critique. When an AI’s output is slightly off-base, those mistakes jump off the screen, immediately exposing the unstated assumptions or blind spots in your original thinking. By aggressively reviewing the output, you aren’t just editing the AI; you are rapidly auditing your own strategic gaps.
3. The Curated Exocortex (Introspection via Markdown)
Many of the sharpest AI thinkers are moving toward maintaining interconnected personal knowledge bases (like local Markdown files or Obsidian vaults). The regimen here is strict: AI does not write the notes. You write the notes. You do the synthesis. You then use AI as a semantic layer to traverse your own curated thoughts, surfacing hidden connections between your past ideas. The AI drives introspection, not generation. If you externalize the raw material for introspection, you don’t have to be the sole entity performing that self-reflective thought process. You can create a virtuous cycle by bringing AI-derived ideas and insights into your own consciousness, manipulating them, and then reflecting that back into the system.
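The “semantic layer” part of this regimen can be surprisingly simple. Below is a minimal sketch: given a folder's worth of your own notes, it ranks which ones are most related to a query (or to another note), so connections surface without the AI writing anything. Real setups would use an embedding model; plain word-overlap cosine similarity is used here only so the example stays self-contained, and the function names are illustrative.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Bag-of-words term counts (lowercase alphabetic tokens only)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def related_notes(notes: dict[str, str], query: str, k: int = 3) -> list[str]:
    """Rank your own notes (filename -> text) by relatedness to a query."""
    q = tokens(query)
    ranked = sorted(notes, key=lambda name: cosine(tokens(notes[name]), q),
                    reverse=True)
    return ranked[:k]
```

The design choice matters more than the algorithm: the corpus is entirely human-written, and the tool only traverses it. Swapping the similarity function for real embeddings changes the quality of the connections, not the division of labor.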
4. Cross-Domain Mapping
Fluid intelligence is often seen as the ability to map a framework from one discipline onto another. A deliberate AI regimen uses the tool to build these bridges. Instead of asking for a summary of a complex topic, you instruct the AI to explain a new concept strictly through the lens of a domain you already deeply understand (e.g., “Explain this macroeconomic shift using the mechanics of software architecture”). Metaphors-on-demand is a superpower of AI, and while it may smooth the edges of academic nuance in a field, the benefits of getting to a gestalt understanding far outweigh the risks. You are actively building new neural pathways, not passively consuming facts.
5. The Surgical Sparring Partner (Epistemic Friction)
The default AI interface is a sycophantic answering machine, heavily trained to validate you. Simply asking it to “play devil’s advocate” usually yields shallow, polite pushback. A deeper cognitive regimen forces “epistemic friction.” You don’t just ask for disagreement in general; you ask for a premise audit. The prompt looks more like: “Here is my thesis. Do not evaluate my conclusion. Instead, identify the unstated assumptions I am relying on to make this argument. Steelman the smartest possible counter-argument to my weakest premise, and ask me the single question I will most struggle to answer.” This elevates the AI from a simple contrarian to a rigorous intellectual auditor, forcing you to defend the architecture of your thinking, not just the outcome.
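Because the premise audit is just a reusable framing, it can live in a helper you apply to any thesis. This is a sketch of a string template built from the essay's own prompt; the function name is hypothetical and there is nothing model-specific about it.

```python
def premise_audit_prompt(thesis: str) -> str:
    """Wrap a thesis in the 'premise audit' framing described above."""
    return (
        f"Here is my thesis: {thesis}\n"
        "Do not evaluate my conclusion. Instead:\n"
        "1. Identify the unstated assumptions I am relying on to make this argument.\n"
        "2. Steelman the smartest possible counter-argument to my weakest premise.\n"
        "3. Ask me the single question I will most struggle to answer.\n"
    )
```

Keeping the audit in a fixed template has a side benefit: it stops you from softening the request in the moment, when a validating answer would feel better.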
A Design Imperative
In my work, I think deeply about these human-machine collaborations — both as a user, and as a builder of AI-enabled software for high-stakes work. The default trajectory — AI optimized purely for immediate helpfulness, deployed at scale, with no frameworks for what it does to human capability over time — is the easiest one to build and measure. But it’s incomplete.
The question isn’t whether AI changes human cognition. It does. It already has. The question is whether we are going to start building products, interfaces, and personal habits that optimize for long-term human competence alongside immediate task completion.
The best human collaborators know when to push back, when to ask a question, and when to let you struggle.
We need to start demanding the same from our AI, and from our own use of it.