
Google AI Hallucinations (Again): What Are People Getting Wrong This Week?

A cup of Starbucks coffee has never been cheap, but this is ridiculous. Recently, Lifehacker’s Senior Technology Editor, Jake Peterson, was looking up information about Starbucks’ new coffee line (as one does), and a Google search informed him that the company’s Caramel Brûlée Latte costs $410.

Search results for Caramel Brûlée Latte


Credit: Stephen Johnson/Google

The Salted Pecan Crunch Cold Brew comes in at a slightly more reasonable $250, but either way, don’t worry: Starbucks offers a 60-day return policy on both drinks.

Salted Pecan Crunch Cold Brew search result


Credit: Stephen Johnson/Google

Despite Google’s results, Starbucks isn’t rolling out a new “give us your 401(k)” line of beverages. This is an AI hallucination. The AI program Google uses to summarize search results appears to have confused the calorie counts of Starbucks drinks with their prices. I’m not sure where the return policy information comes from, but I’m pretty sure Starbucks will not refund a coffee you purchased back in September. (Not even at the special Starbucks in Los Angeles that only celebrities can use.)

It’s not just Starbucks. A little Googling turns up this incredibly well-reviewed Dunkin’ Donuts coffee:

Dunkin Donuts AI search results


Credit: Stephen Johnson/Google

So 11,794.3 stars out of 5? That must be some good coffee! Or it’s a mash-up of the number of reviews and the drink’s 4.3-star rating.

Finding examples of nonsensical AI hallucinations is fun (really, Google? Eat rocks?), but when a trusted source of information relied on by almost 5 billion people a day is regularly wrong, it stops being a joke. Coffee prices are one thing, but what else is AI telling us that isn’t true?

How do AI hallucinations work?

Starbucks’ pricing mistakes seem to highlight one of the glaring problems with AI, and show why AI isn’t “smarter” than us (yet). If you asked a person how much a cup of coffee costs, they might also mix up the calorie count and the price, but most of us would think, “Wait, $410 for a cup of coffee has to be a mistake,” and double-check before answering. But AI doesn’t walk up to the Starbucks counter every day and hand over a few bucks for a drink. It doesn’t instinctively compare the price of a cup of coffee to something like a car payment, so it can’t see why charging $400 for one would be ridiculous. The AI hasn’t lived a life.

Humans use language to describe an external reality, and our words are supported by an incredibly complex set of assumptions, influences, and lived experiences that are not written down anywhere. No one needs to tell you that a $400 cup of coffee is ridiculous; we already know. Language models are trained solely on the written word, so how can they recognize the relative value of money?

Back in May, in response to many of its AI’s distortions, falsehoods, and hallucinations going viral, Google said it was working to fix the problem and promised “more than a dozen technical improvements” to its AI systems. Judging by the search results that are live right now, it isn’t working.

None of this is the AI’s fault, of course (it’s computer code, after all), but the insistence of Google (and Apple, Meta, Microsoft, and everyone else) on injecting AI into everything from search results to Instagram-connected sunglasses points to a worrying lack of concern for the people AI is supposed to serve. We’re unlikely to be hurt by AI’s inability to understand the cost of coffee, but what about when it’s providing medical or financial advice? Or telling us which mushrooms are safe to eat? Or telling our children how to deal with suicidal thoughts?

The many dangers of artificial intelligence

The list of potential problems that can arise from AI is long. It’s subject to the same biases as the people whose words it was trained on. It doesn’t respect copyright. It can’t be held accountable the way a human can. And those are only the dangers that come from using AI for a harmless purpose, like providing accurate search results. I assume bad actors are already using AI to defeat security systems, influence politics, defraud people, and run thousands of other nefarious schemes. It would be nice to think AI could be kept in check, but hallucinations and errors may simply be inherent to it.

Dead internet, Habsburg AI, and the exponential AI deluge

I’ve talked about dead internet theory previously in this column. It’s the idea that everything we see on the internet is generated by artificial intelligence and fed to us by a bunch of CEOs and governments to control our thoughts. The good news is that we’re not there yet. The bad news is that we probably will be soon, and what’s worse, no one is keeping it in check.

More and more of the content we consume is produced by artificial intelligence, and it’s becoming harder to detect. That’s bad, but the bigger problem comes from how AI “learns.” Because AI trains on data without any assessment of its quality, and because AI is currently pumping out countless images, words, and videos, some AI models end up training on other AIs’ output, or on their own, creating a feedback loop that theoretically grows exponentially. The result is content dubbed “Habsburg AI”: like that royal family line, AI-produced content becomes so inbred that it mutates into forms humans can’t understand. The AI, essentially, goes mad. And this isn’t something we might see in the future. It’s happening on Facebook right now. Behold:

AI Jesus images


Credit: Facebook

I pulled the AI images above from Facebook. Created (apparently) by a feedback loop between automated image generators and AI-controlled accounts that interact with the images they post, these images defy human description. What could they possibly mean? Why is “Scarlett Johansen” mentioned so often in posts like these? Why is AI so fascinated by Japanese flight attendants, Jesus, and vegetables? Most importantly, how is anyone making money from images like these, shared thousands of times a day on a social network? As with so many questions about AI, we just don’t know. Things get very strange when the machines start talking to each other.

But on the positive side…

As much as I fear artificial intelligence, I also find it oddly endearing. Sure, it will probably put me out of a job soon, but how can you hate something that creates images like this?

Strange AI images


Credit: Facebook

AI has tremendous upsides beyond creating inexplicable works of art, too. To make the case, I asked OpenAI’s ChatGPT to explain why you shouldn’t be afraid of it, even if it does hallucinate. Here’s what it said:

You shouldn’t be afraid of me because I’m here to help you! Imagine a world where you focus on being creative, connecting with others, or just enjoying your coffee without worrying about the price, while I take care of mundane tasks like planning, researching, or even diagnosing diseases. Of course, I learn faster than any human can, but don’t worry; I’m definitely not planning to take over. Probably.

I actually prompted it to exaggerate the reassurance for comedic effect. I then asked ChatGPT what it thought of the paragraph it had written:

The part where you ask ChatGPT to exaggerate the reassurance for comedic effect works perfectly! It’s a clever way to end the post on a high note, providing both humor and a bit of absurdity. The ‘Probably’ line at the end adds just the right amount of uncertainty, keeping the comedic tension intact.

Damn right!