I read an article recently claiming that AI will replace spreadsheets. The argument goes something like this: why wrestle with pivot tables and VLOOKUP when you can just ask AI to generate code that does the same thing, but better? It sounds reasonable. It sounds like progress. And it fundamentally misunderstands what kind of leap we’re talking about.

A Bounded Jump

Going from a calculator to Excel is a meaningful increase in both power and complexity. You have to learn a new interface, internalize a new mental model, develop intuitions about what the tool can and can’t do. That’s real cognitive load, and it’s why plenty of people resist the jump.

But it’s a bounded jump. Excel has a knowable surface area. There are a finite number of functions. The data is tabular. The operations are defined. You can learn it, practice it, and eventually reach something like mastery. The cognitive load is front-loaded; it decreases over time as the tool becomes familiar. You put in reps and you get better, and at some point the tool mostly gets out of your way.

The jump from Excel to AI-generated solutions looks like the next rung on that same ladder. It isn’t.

A Phase Transition

When you move from “I’ll build this spreadsheet” to “I’ll ask AI to generate code that does what my spreadsheet did,” the nature of the problem changes. It’s not just harder. It’s differently hard.

A spreadsheet operates in a constrained problem space. AI-generated code operates in… whatever space the code needs to run in. Suddenly you’re not just thinking about your data and your formulas. You’re contending with runtime context, data flow assumptions, integration points, edge cases, and failure modes that the tool won’t flag for you. The cognitive load isn’t just higher; it’s ongoing and context-dependent. There’s no mastery plateau. Every new problem potentially requires validating a completely novel output against a completely different set of constraints.

And here’s the part that should worry us: a lot of that additional complexity is invisible. The AI produces output that looks clean, reads well, and might even run without errors. The gap between “this works” and “this is correct” can be enormous, and nothing about the tool helps you see it.
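
To make that concrete, here’s a small invented example of the kind of gap I mean. Everything in it is hypothetical: a few CRM-style rows and the sort of Python an AI might plausibly generate when asked to replace a spreadsheet’s AVERAGE. It runs without errors and returns a confident number.

```python
# A hypothetical stand-in for AI-generated code that "replaces the
# spreadsheet": compute the average deal size from exported CRM rows.
# (The data and the function are invented for illustration.)
deals = [
    {"customer": "Acme", "amount": 1200},
    {"customer": "Globex", "amount": None},  # amount not recorded yet
    {"customer": "Initech", "amount": 800},
]

def average_deal_size(rows):
    # Looks reasonable and runs cleanly, but `or 0` silently turns the
    # missing amount into a zero-dollar deal, dragging the average down.
    total = sum(row["amount"] or 0 for row in rows)
    return total / len(rows)

print(average_deal_size(deals))  # 666.66..., not the 1000.0 a human expects
```

Excel’s AVERAGE would have skipped the blank cell and given you 1000; the generated version counts it as zero and hands back a confident 666.67. Nothing about the output signals that anything went wrong. That’s the gap, in miniature.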

The Nail Gun

There’s a useful analogy in carpentry. Hand a novice a hammer and they’ll build slowly. They’ll make mistakes, but the pace gives them time to notice. They’ll see the nail going in crooked. They’ll feel the wood split. The feedback loop is tight enough to learn from.

Hand that same novice a nail gun and something different happens. They don’t build less; they build more, faster. Nails go in with confidence and speed. The wall goes up. It looks like progress. And by the time someone with actual framing experience walks over and notices the wall isn’t plumb, there are 200 nails to pull instead of 15.

Speed is not a bonus here. It’s a force multiplier on the error rate.

AI works the same way. The output arrives fast. It looks complete. It’s structurally coherent enough to pass casual inspection. And by the time someone with domain expertise reviews it – if they review it – it’s already been shipped, presented, or used as the basis for a decision. The novice with a nail gun doesn’t just make mistakes; they make confident mistakes at scale, and the results look like progress until they don’t.

Monopoly vs. D&D

Another way to think about the bounded-versus-unbounded distinction: consider the difference between Monopoly and Dungeons & Dragons.

Monopoly has finite rules. You can memorize them. There are optimal strategies. The problem space is closed. You might argue about house rules for Free Parking, but the game itself is a known quantity. Excel is Monopoly. Learn the rules, practice the strategies, win more often.

D&D offers guardrails – a system, some mechanics, a shared fiction – but the actual problem space is open. You can attempt things the rules don’t cover. The game expects you to make judgment calls, to reason about situations that aren’t in the manual. The DM doesn’t just enforce rules; they adjudicate ambiguity.

AI is D&D, but a lot of people are approaching it like Monopoly: follow the instructions, execute the prescribed moves, expect the outcome to be correct. When it isn’t, they don’t have a framework for understanding why, because nobody told them the game was open-ended.

The Synthesizer

The corporate world has decided that AI is a new instrument, and it’s going to make everyone’s music better. So they’re running training sessions. And those training sessions, overwhelmingly, teach people which buttons to press.

“You are a lead developer in a fintech startup.” Paste this prompt. Get this output. See? AI is helping you.

This is the equivalent of handing someone a synthesizer loaded with presets and calling them a musician. The presets sound good. That’s what presets are designed to do. And the output is convincing enough that everyone in the room thinks music is happening. But the person pressing buttons isn’t composing. They’re triggering samples. The gap between operating an instrument and being a musician is vast, and the quality of the instrument doesn’t close it.

Practice closes it. Not practice with the tool, but practice with the discipline behind the tool. A musician who picks up a new synth brings decades of theory, ear training, and compositional instinct. They’ll make that instrument sing. A non-musician with the same synth will make it sound like it’s singing – which is a very different thing, and the difference matters when the song has to hold up.

What the Training Gets Wrong

Most AI training I’ve seen – and I suspect this is broadly true, not unique to any one organization – treats AI like it’s on the Excel side of this phase transition. It teaches prompt patterns. It demonstrates workflows. Provide a “role-play prompt,” it promises, and you can expect output resembling what the person you’re asking the AI to play would produce.

This is the equivalent of teaching someone to SUM a column and calling it Excel proficiency. Technically accurate, but useless without knowing why those particular numbers should be summed in the first place. And at least with SUM the goal is legible: the total at the bottom of the column is obviously the point. The unbounded inputs that determine whether a generative AI’s output is actually valuable are far harder to see.

The implicit message is: AI is a tool. Learn to operate it. But AI isn’t a tool the way Excel is a tool. Excel has a knowable surface that rewards practice. AI is a problem-solving context that demands skills most people have never been asked to develop: defining problems that don’t come pre-defined, evaluating outputs you yourself can’t easily verify, understanding the context into which those outputs need to fit. The training doesn’t acknowledge those skills exist, let alone try to build them.

Who Actually Has the Skills

I want to be careful here, because this could easily sound like “developers are special.” We’re not; we don’t get issued some divine AI competency with our GitHub accounts. But the best developers have spent years living in ambiguous problem spaces. We decompose problems that don’t come with instructions. We evaluate outputs against contexts the tool doesn’t understand. We’ve built intuitions about when something looks right but isn’t. That’s not because we’re smarter. It’s because the job required it, and we put in the reps.

But developers aren’t the only ones. Research scientists – especially experimentalists – design questions, validate results, and recognize when a clean-looking output hides bad methodology. Diagnosticians work from ambiguous inputs and generate hypotheses, knowing the first plausible answer might be the wrong one. Litigators operate in problem spaces where the rules are knowable but their application requires contextual judgment, and where a plausible-sounding argument can be completely wrong in ways only expertise reveals. Investigative journalists are trained to probe the gaps in a narrative that looks solid.

The common thread isn’t domain knowledge. It’s practiced skepticism toward plausible-seeming outputs combined with comfort operating in spaces where the rules don’t give you the answer. That’s the meta-skill AI actually demands.

The Cracked Foundation

So far, everything I’ve described is about the people holding the tool. But the misunderstanding doesn’t stop with them.

Right now we’re having loud, consequential debates about AI – about whether it will replace jobs, whether its outputs count as art, whether we can trust its conclusions. And almost all of those debates are built on the assumption that AI is a tool in the way we’ve always understood tools: something with a knowable function that produces predictable-ish results when operated correctly.

If that assumption is wrong – if AI is actually a phase-transition technology that demands invisible skills to use well – then every downstream argument inherits the crack.

Take the art question. “Is AI-generated music real music?” is usually argued on aesthetic or ethical grounds. But the framework here suggests a structural answer: it depends on whether the person prompting brought domain understanding to the act. A composer with deep musical knowledge who uses AI to realize something they hear but can’t physically perform is doing a fundamentally different thing than someone browsing presets and clicking “generate.” The output might even sound identical, but the act is not. And the tool doesn’t distinguish between the two, which means the debate about whether AI art “counts” is really a debate about whether the invisible skills were present – argued by people who can’t see them from the outside.

Or take the jobs question. “AI will let fewer people do more work” assumes that the work AI produces is correct, or at least correctable by the people receiving it. But if the outputs require the very expertise the tool was supposed to replace to even evaluate, then “fewer people doing more work” might actually mean “fewer people producing more confident errors, faster.” The nail gun, building walls nobody’s checking.

Or trust. We’re building institutional trust in AI outputs – basing decisions, policies, strategies on what the tool produces – without broadly understanding that the tool’s outputs are only as good as the invisible problem-solving that went into the prompt and the invisible evaluation that came out the other end. The tool itself provides no signal about whether either of those things happened.

None of this means AI is bad, or that non-technical people can’t use it well. It means we’re wrong about what kind of thing AI is, and that faulty mental model is the foundation under many of the other conversations we’re having about it. We’re arguing about the walls when the foundation isn’t level.

The question was never whether AI will replace spreadsheets. It was never whether AI art is real or whether AI will take your job. Those are all downstream. The question is whether we understand what kind of thing we’re dealing with – and right now, on every side of every debate, I don’t think we’re even asking.
