Are you using AI to generate instant legacy code? 🧑‍🚀

AI promises to speed up software development by taking away the grunt work, making us all 10x programmers. You describe the high-level picture and AI agents fill in the details. You become the conductor of a personal AI agent orchestra.

But what happens if you stop reading the code?

Are you an architect astronaut, floating in abstract space far from the implementation?

No matter how elegant your high-level ideas are, once the agents have written the code, the code becomes the ground truth.1 Your intent only matters if it survives translation. Either you understand the code well enough to verify and adjust it, or you rely on the AI to safely modify it.2 The latter means outsourcing correctness to a non-deterministic system with unknown failure modes.

Eventually, the AI reaches its boundary and begins producing code that is plausible, confident, and wrong. It will not signal failure; the output still looks right, whether it is subtly off or completely broken. If you’re not paying attention, you may not even notice.

This is similar to the Peter principle: individuals tend to be promoted to their level of incompetence. Likewise, AI agents are given increasingly complex tasks because they succeeded at simpler ones. But when the AI silently reaches its limit, someone must step in. That someone will be you, debugging subtle issues in production at 3AM.

If you become too distant from the implementation, your skills start to decay and you lose touch with how the system actually works. You risk becoming an architect astronaut: someone who can talk about the system in abstract terms but lacks technical depth or connection to reality.

For example, I used Claude Code to build a simple GraphQL API explorer.3 My attention was on the UI, but the agent quietly made a major architectural decision: it routed all queries through the application backend instead of calling the user-specified API directly. When asked why, it said this avoids CORS issues. While technically true, it also introduces important trade-offs. Can the application handle the load of all API traffic being proxied? Do you want the privacy implications of sending all user data through it? That is not a decision to make by accident.
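The trade-off is easy to see in code. A minimal sketch (the route name and request shapes are hypothetical, not taken from the actual project): in direct mode the browser sends the query straight to the user-specified API, while in proxy mode every query is addressed to your own backend, which forwards it server-side.

```typescript
// Two ways a GraphQL explorer can dispatch a query. Illustrative only.

type GraphQLRequest = { url: string; body: string };

// Direct: the browser calls the user-specified API itself.
// Breaks if that API omits CORS headers, but your server never sees the traffic.
function directRequest(apiUrl: string, query: string): GraphQLRequest {
  return { url: apiUrl, body: JSON.stringify({ query }) };
}

// Proxied: every query goes to your own backend, which forwards it.
// No CORS problem, but all user traffic (and any credentials) flows through you.
function proxiedRequest(apiUrl: string, query: string): GraphQLRequest {
  return {
    url: "/api/graphql-proxy", // hypothetical backend route
    body: JSON.stringify({ target: apiUrl, query }),
  };
}
```

In proxy mode the application backend becomes both a potential bottleneck and a data sink for every user’s queries, which is exactly the trade-off the agent accepted without asking.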

Used correctly, AI can accelerate development, but it doesn’t remove your responsibility to understand what you ship. If you stop reading the code, you aren’t speeding up; you’re creating instant legacy code.


  1. There are tools that aim to make the specification itself the master artifact edited by humans, but that is not what most developers are using. ↩︎

  2. Jason Gorman points out that “the factors that make code easier for us to wrap our heads around also make LLM performance on it better (less unreliable)”. This means that even if you think you will be able to vibe code your way to success without ever reading the source code, it is still important that the generated code is understandable by a human. ↩︎

  3. Of course, tools to introspect and call GraphQL APIs already exist. I made this one just for fun. ↩︎