The AI errors will continue because they are a nonfixable part of how LLMs work.
My letter to reps:
AI LLM technology isn’t viable because “hallucination” errors are baked into how LLMs work. Imagine if the printing press or a sewing machine randomly added mistakes. What if the banks & stores used faulty spreadsheets? What if every time you used a calculator, you had to redo the math yourself to “fact check” the output, because it was often wrong? That’s the reality with this “AI”.
Please feel free to copy or repurpose for your own letters to reps.
Fortune - Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
By Matt O’Brien and The Associated Press, August 1, 2023, 12:54 PM ET

“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

(…)

It’s how spell checkers are able to detect when you’ve typed the wrong word. It also helps power automatic translation and transcription services, “smoothing the output to look more like typical text in the target language,” Bender said. Many people rely on a version of this technology whenever they use the “autocomplete” feature when composing text messages or emails.

The latest crop of chatbots such as ChatGPT, Claude 2 or Google’s Bard try to take that to the next level, by generating entire new passages of text, but Bender said they’re still just repeatedly selecting the most plausible next word in a string.

When used to generate text, language models “are designed to make things up. That’s all they do,” Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets. “But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”
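To make Bender’s point concrete, here is a toy sketch (my own illustration, not code from any real LLM) of “repeatedly selecting the most plausible next word”: a tiny bigram model that always continues with the statistically most common following word from its training text. Plausibility, not truth, drives every choice, so it happily generates fluent loops of nonsense.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (invented for this sketch).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which word.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def generate(word, length=6):
    """Extend `word` by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        # The most "plausible" continuation -- nothing here checks
        # whether the resulting sentence is true or even sensible.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Real LLMs use vastly larger statistics over word fragments rather than a bigram table, but the generation loop is the same shape: each step emits whatever continuation the model scores as most likely.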