Generative AI as arbiter of "least astonishment" 🤖

The Principle of Least Astonishment is the idea that software should behave in the way that least surprises the user. It should be predictable and consistent, minimizing unexpected or confusing results. This suggests that, unless we have good reason, we should design things “the way they’re usually done”. We should prefer the most common solution to a problem.
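To make this concrete, here is a small, illustrative Python sketch (the word-counting task and the names are my own example, not from any particular codebase) contrasting the conventional solution with a more surprising one that does the same job:

```python
from collections import Counter

words = ["spam", "egg", "spam", "ham", "spam"]

# The conventional solution: most Python readers will recognize
# collections.Counter immediately and move on.
counts = Counter(words)

# A perfectly correct but more surprising solution that does the same
# thing, forcing the reader to stop and puzzle out what is going on.
counts_surprising = {w: sum(1 for x in words if x == w) for w in set(words)}

assert dict(counts) == counts_surprising
print(counts)  # Counter({'spam': 3, 'egg': 1, 'ham': 1})
```

Both versions produce the same result; the first simply astonishes fewer readers.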

I recently realized that generative AI based on Large Language Models (LLMs) can help us here. LLMs work by gobbling up enormous amounts of data and identifying the common patterns in it. So almost by definition, a suggestion from an LLM represents the least surprising way to solve a problem.

Code you get from OpenAI’s ChatGPT or GitHub Copilot is often based on a pattern the AI has seen thousands of times. That is a strong signal that it will be familiar to many readers.
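For example (a hypothetical prompt of my own, not an actual ChatGPT or Copilot transcript), asking for "a Python function that reads a JSON config file" will almost always yield some variant of this well-worn pattern:

```python
import json

def load_config(path):
    """Read a JSON configuration file and return the parsed contents."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

Nothing about it is clever, and that is exactly the point: virtually every Python programmer has seen it before.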

On the flip side, you are unlikely to get a truly innovative approach that might fit your situation even better. You may also get an approach that is very common but outdated. And of course, generative AI can also hallucinate and generally mess things up. So its results are best used as inspiration rather than as definitive truth. 😊