Only use AI if you can verify its result 🐟
Lutefisk, a dish made from dried fish cured in lye, is often served on Christmas in Sweden. What type of fish is the dish traditionally made of?
This was a question in a Christmas quiz that my wife attended. The answer, according to most Swedes, would be “ling” (“långa” in Swedish). However, the quiz master said that the correct answer was “cod” (“torsk” in Swedish). He refused to accept “ling” even though most guests claimed it was the correct answer. Why did the quiz master stand by his answer? Because he had asked ChatGPT and it answered “cod”.
Was this a case where ChatGPT gave an incorrect answer? Not really. The Wikipedia page for lutefisk reads:
Lutefisk is dried whitefish, usually cod, but sometimes ling or burbot, cured in lye.
So in the general sense, “cod” is a perfectly acceptable answer. But in the specific context of traditional Swedish Christmas dishes, “ling” is the correct answer.
How will you know? #
So why am I talking about “ling” and “lutefisk” on a tech blog? Because it is a clear example of asking an AI for a result you are not in a position to verify. The quiz master could have fact-checked the answer, perhaps by running it by someone knowledgeable about Swedish Christmas dishes. Unfortunately, the quiz master chose not to. Perhaps they did not even consider the possibility that ChatGPT could be wrong.
Now, a question at a company Christmas quiz is perhaps not a big deal, but the same situation arises all the time in much more important contexts.
- Someone asks AI to summarize a long business document. How will they know whether the summary accurately reflects the original document if they have not read the document themselves?
- Someone uses AI to write a contract for an area they are not familiar with. How will they know whether that contract is suitable for the jurisdiction they are in? (Especially if they work outside the US, since most AI models tend to have a US bias.)
- Someone uses AI to solve a programming problem. How will they know whether the code is correct unless they go through the steps of solving the problem themselves? This is especially problematic if they could not have come up with the solution in the first place. (A small sketch below illustrates this.)
Any of these scenarios could lead to real-world consequences such as losing money or customers, getting sued, or shipping buggy software.
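To make the programming case concrete, here is a minimal, hypothetical sketch (the function and its bug are invented for illustration, not taken from any real AI output): generated code can pass a casual spot check while still being wrong in a way that only someone who actually understands the problem would think to test.

```python
# Hypothetical "AI-generated" helper: it is supposed to return the median
# of a list of numbers, and it looks plausible at a glance.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # subtle bug: even-length lists are handled incorrectly

# A quick spot check passes...
assert median([3, 1, 2]) == 2

# ...but only someone who knows what a median is will think to test an
# even-length list. This should give 2.5, yet it returns 3.
print(median([1, 2, 3, 4]))
```

The point is not that the bug is hard to fix; it is that spotting it requires the very knowledge you were hoping to outsource to the AI.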
Don’t be lazy #
Using AI to produce results which you do not verify yourself is lazy and irresponsible. It shifts the burden of verification to someone else.
So whenever you use AI, ask yourself: Am I in a position to judge the result? If not, consider skipping AI, or at least run the result by someone who is knowledgeable in the area.