AI Hallucinations (email to a friend)

I read the NYT article on hallucinations with interest, despite my initial skepticism.

My opinion is that these companies are ingesting internet data recklessly. They do so because of its accessibility, but without regard for data quality. I’ve been playing around with 6502 assembly language and Apple II BASIC. GPT does poorly with both, despite there being millions of online references, which it obviously has not ingested. I believe the claims about how much data has been ingested are broadly exaggerated.

Computer science has always had an affection for giving soft nicknames to serious problems. Instead of saying “addressing defect,” we say “memory leak,” as if to suggest it happened all on its own, like an aged pipe springing a leak. No, it’s a software defect caused by a human.
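Here’s a toy Python sketch, purely my own illustration and not from any real system, of the kind of human-made defect that gets the gentle name “memory leak”: a cache that grows forever because nobody ever wrote the eviction code.

    # Illustrative only: a classic human-written "memory leak" pattern.
    # The cache grows without bound because nothing is ever evicted.

    _cache = {}  # module-level dict; lives as long as the process does

    def expensive_compute(key):
        # Stand-in for real work; each result occupies real memory.
        return [key] * 10_000

    def lookup(key):
        """Compute-and-cache, with a defect: entries are never removed."""
        if key not in _cache:
            _cache[key] = expensive_compute(key)  # kept forever
        return _cache[key]

    # Every unique key pins another ~10,000-element list in memory until
    # the process exits. The pipe didn't age; a person forgot the eviction policy.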

I feel the same way about “hallucinations”. They are the result of garbage in, garbage out, plus insufficient rules engines to guide the AI’s analysis. Both of these are defects.
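To show what I mean by a rules engine, here is a rough Python sketch of my own; every name in it is invented for illustration, and real guardrail systems are far more elaborate. The idea is a layer of hard checks that an answer must pass before a user ever sees it.

    # A toy "rules engine" guardrail; entirely my own sketch, all names invented.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        check: Callable[[str], bool]  # True means the answer passes this rule

    RULES = [
        Rule("non-empty answer", lambda a: bool(a.strip())),
        Rule("cites a source", lambda a: "source:" in a.lower()),
        Rule("no absolute claims", lambda a: "guaranteed" not in a.lower()),
    ]

    def vet_answer(answer: str) -> str:
        """Withhold any AI answer that breaks a rule, naming the rules it broke."""
        failures = [r.name for r in RULES if not r.check(answer)]
        if failures:
            return "WITHHELD (failed: " + ", ".join(failures) + ")"
        return answer

    print(vet_answer("The Apple II used a 6502 CPU. Source: Apple II Reference Manual"))
    print(vet_answer("This approach is guaranteed to work."))

Crude, obviously, but it makes the point: the rules are written by humans, and their quality decides what the machine is allowed to say.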

The most interesting part is that these companies don’t understand the defects or their source. THIS is a problem that could really harm the AI industry. It’s one thing to have defects and correct them, but it’s another, more serious problem not to understand where a defect comes from.

I hesitate to mention it, but in the many sci-fi movies about AI that I’ve watched, there is almost always one person whom the machine was programmed to trust when it came to fact-checking and hallucinations. Basically, it’s the Hollywood version of better rules embedded in AI to keep it on the straight and narrow.

Such rule measures are in place with AI today, but they are obviously not getting enough investment during this “land grab” phase of the new technology. Plus, of course, so much of the information on the web is simply garbage. Worse, and my bigger worry, is that it contains politically biased information. I cringe every time I see that GPT has used Wikipedia as a source. Should I trust it?

Lastly, and my biggest concern from the start, is that the human authors of such rules engines could be malevolent. Do we think a CIA-designed set of rules, or a CCP-designed set, is desirable? I suspect not. Those would be examples of humans using AI to harm other humans, deliberately. This is my biggest fear.

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?smid=url-share
