The Gell-Mann Amnesia effect and LLMs
The “Gell-Mann Amnesia effect”, a term coined by Michael Crichton (of Jurassic Park fame) and named after the physicist Murray Gell-Mann, goes like this:
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
If you’ve ever used an LLM in a field you know well, you’ve probably run into something similar. When I use an LLM to generate Ruby code, I end up tweaking and changing the output a lot. When I use it to generate code in a language I’m less familiar with (most recently Swift), I’m far more likely to just accept the output.
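To make that concrete, here is an invented example of the kind of thing I mean. The snippet below is hypothetical, not output from any particular model, but it captures the pattern: Ruby that runs and looks fine to a newcomer, yet that anyone fluent in the language would immediately rewrite.

```ruby
# Hypothetical LLM output: count word frequencies in a text file.
# It runs, and to a non-Rubyist it looks reasonable.
def word_counts(path)
  counts = {}
  file = File.open(path)          # no block and no close: leaks the file handle
  file.read.split(/\W+/).each do |word|
    next if word == ""            # un-idiomatic; a Rubyist writes word.empty?
    if counts.has_key?(word.downcase)
      counts[word.downcase] = counts[word.downcase] + 1
    else
      counts[word.downcase] = 1
    end
  end
  counts
end

# What the "tweaking and changing" tends to produce instead:
def word_counts_idiomatic(path)
  File.read(path)                 # class method reads and closes the file for you
      .split(/\W+/)
      .reject(&:empty?)
      .map(&:downcase)
      .tally                      # Enumerable#tally (Ruby 2.7+) does the counting
end
```

If Ruby is your language, the problems in the first version jump out. If it isn’t, you’d likely ship it as-is. That asymmetry is the whole point.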
Try this: take something you did recently and have an LLM attempt the same task. Did it get close? How much would you trust it in an area you don’t know as much about?