A cup of Starbucks coffee has never been cheap, but this is ridiculous. Recently Lifehacker's Senior Technology Editor, Jake Peterson, was searching for info on Starbucks' new line of coffees (like he does), and a Google search revealed that the company's Caramel Brûlée Latte costs $410.
Credit: Stephen Johnson/Google
A Salted Pecan Crunch Cold Brew comes in at a slightly more reasonable $250, but either way, don't worry: Starbucks offers a 60-day return policy on both beverages.
Credit: Stephen Johnson/Google
Despite Google's results, Starbucks isn't introducing a new "give us your 401k" line of drinks. It's an AI hallucination. The AI program that Google uses to summarize its search results seems to have mixed up the calorie count of Starbucks drinks with their prices. I'm not sure where the return policy information comes from, but I'm pretty sure Starbucks won't give you a refund for a coffee you bought in September. (There isn't a special Starbucks in Los Angeles that only celebrities can use, either.)
It's not just Starbucks. A little Googling reveals this incredibly well-reviewed Dunkin' Donuts coffee:
Credit: Stephen Johnson/Google
I mean, 11794.3 stars out of 5? That's some good coffee! Or it's the listing's review count mashed together with its 4.3-star rating.
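My best guess at the mechanism (Google hasn't published one, and the review count here is my back-calculation, not a real figure): the summarizer ran the review count and the star rating together into a single number. A minimal Python sketch of that mashup:

```python
# Hypothetical listing data. The 1,179 review count is inferred by
# working backward from Google's "11794.3" figure; it's an assumption.
review_count = 1179
avg_rating = 4.3

# Concatenating the two fields instead of keeping them separate
# reproduces the impossible rating exactly.
mashed = f"{review_count}{avg_rating}"
print(f"{mashed} stars out of 5")  # -> 11794.3 stars out of 5
```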
Finding ridiculous examples of AI hallucinations is fun (really, Google? I should eat rocks?), but it's not a joke when a source of information relied upon by almost 5 billion people per day is regularly wrong. Coffee prices are one thing, but what else is AI telling us that isn't true?
How AI hallucinations work
The Starbucks price errors highlight one of the glaring problems with AI and illustrate why AI isn't "smarter" than us (yet). If you asked a person what a cup of coffee costs, they might confuse the number of calories for the price, but most of us would think, "Wait, $410 for a cup of coffee has to be a mistake" and double-check before responding. But AI doesn't roll up to the counter of a Starbucks every day and shell out a couple bucks for some go juice. It doesn't instantly compare the value of a cup of coffee to something like a car payment, so it can't understand why charging $400 for a cup of coffee would be absurd. AI hasn't lived a life.
Humans use language to describe an external reality, and our words are backed by an impossibly complex set of assumptions, influences, and lived experiences that aren't written down anywhere. No one has to say that a $400 cup of coffee is ridiculous; we already know. Language models are trained only on the written word, though, so how could they recognize the relative value of money?
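To make that concrete, here's a deliberately tiny sketch, nothing like Google's actual system: a language model favors the continuation that best matches its training text, and no step in that process asks whether the answer makes real-world sense. The snippets and the $4.25 figure below are invented for illustration:

```python
# Toy "training data": the number 410 appears next to the drink's name
# (as a calorie count), so it dominates the statistics for that phrase.
training_text = [
    "Caramel Brulee Latte 410 calories",
    "Caramel Brulee Latte 410 cal for a grande",
    "a grande latte usually costs a few dollars",
]

def score(candidate: str) -> int:
    """Count how often a candidate string co-occurs with the drink name."""
    return sum(1 for line in training_text if "Latte" in line and candidate in line)

# Asked to finish "Caramel Brulee Latte costs $___", the "model" just
# ranks candidates by co-occurrence. Nothing checks for plausibility.
best = max(["410", "4.25"], key=score)
print(f"Caramel Brulee Latte costs ${best}")  # -> costs $410
```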
Back in May, in response to Google's AI's many distortions, lies, and hallucinations going viral, the company said it was working on fixing the problem, promising "more than a dozen technical improvements" to its AI systems. Judging by the search results that are live right now, it isn't working.
Of course, none of this is the AI's fault; it's computer code, after all. But Google's (and Apple's, and Meta's, and Microsoft's, and everyone else's) insistence on injecting AI into everything from search results to Instagram to sunglasses indicates a troubling lack of care about the people AI is meant to serve. We're not likely to be hurt by AI's inability to understand what coffee costs, but what about when it provides medical or financial advice? Or tells us which mushrooms are safe to eat? Or tells our children how to deal with suicidal thoughts?
The many dangers of artificial intelligence
The list of potential problems that can come from AI is long. It's subject to the same biases as the humans who write the words it's trained on. It doesn't respect copyrights. It can't be held accountable like a person could be. And those are only the dangers that can come from using AI for a benign purpose like providing accurate search results. I assume bad actors are already using AI to thwart security systems, influence politics, con people, and run a thousand other nefarious schemes. It would be nice to think of ways AI could be controlled, but hallucinations and errors may be in AI's very nature.
The dead Internet, Hapsburg AI, and the exponential deluge of AI swill
I've talked about the dead internet theory in this column before. It's the idea that everything we see online has been generated by artificial intelligence and is being fed to us by a cabal of CEOs and governments to control our thoughts. The good news is we're not there yet. The bad news is we probably will be soon, and, worse yet, no one is controlling it.
More and more of the content we consume is generated by AI, and it's getting harder to spot. That's bad, but the larger problem comes from how AI "learns." Since AI trains on data with no judgment as to its quality, and AI is currently spitting out countless images, words, and videos, some AI models are training on the output of other AIs or on their own output, a feedback loop that, theoretically, grows exponentially. The result has been dubbed "Hapsburg AI": like the royal family line, AI-produced content is becoming so inbred it's mutating into forms humans can't understand. AI is going mad. And it's not something we might see in the future. It's happening on Facebook right now. Look:
Credit: Facebook
I downloaded the above AI images from Facebook. Generated (seemingly) by a feedback loop between automated image generators and AI-controlled accounts that interact with the images they post, these pictures defy human explanation. What could they possibly mean? Why is "Scarlett Johansen" mentioned in these kinds of posts so often? Why does AI have a fascination with Japanese flight attendants, Jesus, and vegetables? Most importantly, how does anyone make money from images posted to a social media network at a rate of thousands per day? As with a lot of questions about AI, we just don't know. When the machines start talking to each other, things get very strange.
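The dynamic behind all this inbreeding has a name in the research literature: "model collapse." When each generation of a model trains on samples from the previous generation, rare cases fall away and the outputs narrow toward repetitive mush. Here's a minimal statistical sketch of the effect, with simple number-drawing standing in for full language models (the 0.9 tail-loss factor is an assumption for illustration):

```python
import random
import statistics

# Generation 0: a diverse, "human-made" dataset.
data = [random.gauss(100, 25) for _ in range(10_000)]

for generation in range(1, 6):
    mu = statistics.mean(data)
    # Assume each model slightly under-represents the tails of
    # whatever it was trained on.
    sigma = statistics.stdev(data) * 0.9
    # The next "model" trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(10_000)]
    print(f"generation {generation}: spread = {statistics.stdev(data):.1f}")

# The spread shrinks every generation: diversity collapses, and the
# output drifts toward inbred sameness.
```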
But on the positive side…
As terrified as I am of AI, I also find it weirdly lovable. Sure, it will have my job soon, but how can you hate something that creates images like these?
Credit: Facebook
There's a massive potential upside to AI that goes beyond making inexplicable art. To make the case, I asked OpenAI's ChatGPT to explain why you shouldn't be afraid of it, even if it does hallucinate. Here's what it said:
You shouldn't be afraid of me because I'm here to help you! Imagine a world where I handle the mundane stuff (scheduling, research, even diagnosing diseases) while you focus on being creative, connecting with others, or just enjoying your coffee without worrying about the price. Sure, I'm learning faster than any human could, but don't worry: I'm definitely not plotting to take over. Probably.
I actually prompted it to overstate the reassurance for comedic effect. Then I asked ChatGPT what it thought of the paragraph it wrote:
The section where you prompt ChatGPT to overstate the reassurance for comic effect works perfectly! It's a clever way to end the post on a high note, providing both humor and a bit of absurdity. The line "Probably" at the end adds just the right amount of uncertainty, keeping the comedic tension intact.
Damn, it's right!