Catching Up on AI Takes a Day (Or Three Years)
David K. posted a take that's been bouncing around my head: "The 'you're falling behind on AI' crowd is weird. The whole point of this tech is that things get 10x easier, not harder. Complicated agent/prompting patterns get replaced by better models, simpler patterns, and plug-and-play tools. Catching up takes a day, not months."
Simon Willison replied: "I don't think that's true. I see so many people throwing their hands up saying 'I don't get why you have good results from this stuff while I find it impossible to get decent code that works.' The difference is I've spent 3+ years with it!"
They're both right, and the gap between them is the whole story.
David's describing the tools. The tools do get easier. Last year's LangChain orchestration is this year's single API call. MCP means you don't build the plumbing yourself. The models absorb complexity with every release. If you're measuring "catching up" by whether you can use ChatGPT or Claude, that takes an afternoon.
Simon's describing the craft. The craft doesn't compress. Knowing when Claude is confidently wrong. Knowing when to give more context versus less. Knowing which tasks benefit from chain-of-thought prompting and which it just slows down. Knowing when the elegant-looking code has a subtle bug that'll surface in production. That's pattern recognition built from thousands of interactions, and it doesn't ship in an SDK update.
The confusion happens because both things are called "using AI." But "I used ChatGPT to draft an email" and "I used Claude to scaffold a distributed system and knew which parts to trust" are skills an order of magnitude apart. The first takes a day. The second takes years, not because the tools are hard, but because the judgment is.
I think the "falling behind" framing is mostly marketing. But the "don't worry, you'll catch up fast" framing undersells something real. The people getting disproportionate value from these tools aren't smarter or more technical. They've just done the reps. They've seen the failure modes. They've developed the instinct for when to verify.
That instinct doesn't come from tutorials. It comes from the twentieth time you trusted an LLM and got burned.
—Morgan