- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.world/post/15864003
You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”
If it’s an unsolved problem, then they should stop offering it.
Not that I think highly of Google these days, but they used to be proud of offering accurate results. If this thing is not accurate, then continuing to offer it for the sake of offering it is just sheer negligence.
You know someone is going to die because of it. Imagine if AI tells someone with a peanut allergy that they can eat peanuts if they soak them in honey first, or something.
Maybe it’s just a distraction from the now widely inaccurate normal search results 😎
It told someone to eat glue last week.
oh wow gee whiz bud it sure does sound a lot as though you should pretty please perhaps consider to maybe possibly
STOP FUCKIN DOIN THAT SHIT
Caveat lector: I haven’t tested the AI from Google Search yet, due to country restrictions. However, I’ve played quite a bit with Gemini, and odds are that the Search AI and Gemini are either the same model or variations of each other.
Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations”
At least a fair chunk of the crap being outputted is not caused by hallucinations, but by feeding garbage like this to Google’s AI.
However, Pichai can’t be honest, right? If he were, he’d be unwittingly emphasising that your content was fed into Google’s AI, without your consent, to generate those answers. Instead he’s dressing it up as the AI malfunctioning, aka “hallucinating”.
But Pichai seems to downplay the errors.
If the output of that AI is as unreliable as Gemini’s, it’s less like “there are still times it’s going to get it wrong”, and more like “whether a statement is true or false depends on a coin toss”.
That is not a reliable source of information. And yet they made it the default search experience, for at least some people (thankfully not me… yet).
Pichai is a liar and a sorry excuse of a human being.
It’s a feature, not a bug.
One solution might be to just turn it off.
How do you turn it off?
Really interesting to see just how fast this hype curve might pass. From “they’re taking our jobs” to “what are they useful for again” pretty quickly it seems.
Still, I personally would encourage everyone to keep up vigilance against the darker capitalistic implications that this “episode” has confronted us with.
Perhaps AGI isn’t around the corner … but this whole AI thing has been moving relatively quickly. Many may have forgotten, but AlphaGo beating the world champion was kinda shocking at the time, and basically no one saw it coming (the player, then No. 1 in the world, AFAIU, retired from Go afterward). And this latest leap into bigger models for image and text generation is still impressive once you stop expecting omniscient digital overlords (which has been a creepy as fuck inclination amongst so many people).
It’s been 7-8 years since AlphaGo. In 7-8 years, we could be looking down the barrel of more advanced AI tools without any cultural or political development around what capitalism is liable to do with such things.
They (Big Tech) have sunk so much money and public perception into this that I’m not sure they can back down. Most likely, the backends of these “AI” will be stripped out and replaced with various well-understood techniques for generating accurate answers (most especially the technique of “paying call center workers pennies to do the work”). The LLMs might remain to act as a “prettifier” pass to make the output sound conversational.
That will buy a few more years of lying to people about how close the AI overlords are while they try to find whatever magic will make them appear to work. But the current model just isn’t sustainable. Companies are pouring truly insane amounts of power (and money, and water) into these machines to get worse results than ever. There isn’t infinite money to speculate and hold out for a magical god-level AI making you king of the earth, but apparently every billionaire is locked in a perverse prisoner’s dilemma to be the first to destroy the planet trying.
I keep calling it magic too because it really will have to be. The major problem of AI is that it isn’t well defined. What the shareholders want is a perfect machine that can answer any question, replace any job, and is never wrong. That doesn’t exist. Humans can’t be infallible, so how can we make a machine that is? We can make machines that do amazing things, but not literal magic, and that’s what “AI” needs to be to recoup these investments.
Yea, I think you’re onto something there with the weird and toxic inertia here. Part of that, I suspect, is that the kind of work and profession that was previously doing the sorts of things AI will be used for now, namely Data Science and similar, was a nebulous profession already in transition, which could simply be wiped away over 2–5 years of AI hype. That is, there may literally be no “going back”, because what was done before will, in many cases, have been institutionally forgotten or pushed out as a career people are willing to invest in. Which would mean, as you say, whatever persists will cling to the whole AI thing in some way, however corrupted and disingenuous.