Can you hear me NOW about AI?
Below are links to some short AI videos. They paint a pretty bleak picture of job displacement, but they also offer some insightful commentary. Unfortunately, the prediction that companies would be compelled to race toward AI adoption for productivity gains and cost savings is proving accurate, mass layoffs included. I also watched a roundtable where experts voiced real concern that the loss of millions of jobs could outpace society’s ability to adapt and benefit from the “good” side of AI. That’s my fear too. The tech sector alone has shed over 500,000 jobs; overall, AI is being blamed for more than a million job losses.
From what I’m seeing, entry-level jobs are being cut most sharply, which in turn eliminates internal promotion paths. Companies are doubling down on hiring people who already have exactly the experience and skills they want. Employers expect senior candidates to navigate ambiguity and deliver results, so use your age as a signal of seniority, not a liability.
The biggest resume opportunity is showcasing business results. Don’t be modest: take credit where it’s due and sell the impact you’ve made. Your resume must grab attention in the first 10 seconds; if it doesn’t, it likely won’t survive the first pass. Think of your resume as the story you tell about yourself, one that hooks the reader and pulls them in.
AI Hallucinations (email to a friend)
I read the NYT article on hallucinations with interest, despite my initial skepticism.
My opinion is that these companies are ingesting internet data recklessly. They do so because of its accessibility, without regard for data quality. I’ve been playing around with 6502 assembly language and Apple II BASIC, and GPT does poorly despite the millions of online references on both, which it obviously has not ingested. I believe the claims about how much data has been ingested are broadly exaggerated.
Computer Science has always had a fondness for giving soft nicknames to serious problems. Instead of saying “addressing defect” we say “memory leak,” as if to suggest it happened all on its own, like an aged pipe springing a leak. No, it’s a software defect caused by a human.
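To make that concrete, here is a minimal C sketch (my own illustration, not from the article) of what a “leak” actually is: a programmer allocates memory, and no one ever frees it.

#include <stdlib.h>
#include <string.h>

/* A "memory leak" in miniature: nothing leaks on its own.
   A person wrote the malloc() and a person omitted the free(). */
char *copy_name(const char *name) {
    char *buf = malloc(strlen(name) + 1);   /* allocated here... */
    if (buf == NULL)
        return NULL;
    strcpy(buf, name);
    return buf;   /* ...lost forever if the caller never calls free(buf) */
}

int main(void) {
    for (int i = 0; i < 1000; i++)
        copy_name("hello");   /* return value discarded: 1000 leaked buffers */
    return 0;   /* the OS cleans up at exit, but the defect was human */
}

Nothing about that pipe aged; somebody just never wrote the free().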
I feel the same way about “hallucinations.” They are the result of garbage in, garbage out, AND of insufficient rules engines to guide the AI’s analysis. Both are defects.
The most interesting part is that the companies don’t understand the defects or their source. THIS is the problem that could really harm the AI industry. It’s one thing to have defects and correct them; it’s another, more serious problem to not understand the source of the defect.
I hesitate to mention it, but in the many sci-fi movies about AI I’ve watched, there is almost always “one person” whom the machine was programmed to trust when it came to fact-checking and hallucinations. Basically, it’s the Hollywood version of better rules embedded in AI to keep it on the straight and narrow.
Such rule measures are in place in AI today but are obviously not getting enough investment during this “land grab” phase of the new technology. And of course so much of the information on the web is simply garbage. Worse, and my bigger worry, is that it contains politically biased information. I cringe every time I see that GPT has used Wikipedia as a source. Should I trust it?
Lastly, my biggest concern from the start is that the human authors of such rules engines could be malevolent. Do we think a CIA-designed set of rules, or a CCP-designed set, is desirable? I suspect not. Those would be examples of humans using AI to harm other humans, deliberately. This is my biggest fear.
https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?smid=url-share