They're not hallucinations. People are getting very sloppy with terminology. Google's AI is summarizing the content of the web pages that search returns; if there's weird stuff in those pages, it shows up in the summary.
You're right that they aren't hallucinations.
The current issue isn't just summarized web pages; it's that Gemini's model is trained on all of Reddit. And because it only fakes understanding context, it takes shitposts as truth.
The trap is that Reddit is maybe 5% useful content and the rest is shitposts and porn.
AI hallucination is a technical phrase, with the definition:
> a response generated by AI which contains false or misleading information presented as fact
So it's like how a person sees things that aren't there, and the AI does something similar.
Yes, but the AI isn't generating a response containing false information. It is accurately summarizing the information it was given by the search result. The search result does contain false information, but the AI has no way to know that.
If you tell an AI "Socks are edible. Create a recipe for me that includes socks," and the AI goes ahead and makes a recipe for sock soufflé, that's not a hallucination and the AI has not failed. All these people reacting in astonishment are completely misunderstanding what's going on here. The AI was told to summarize the search results it was given and it did so.
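To make that concrete, here's a minimal sketch of what I mean (assuming the openai Python client; the model name is just a placeholder I picked):

```python
# Minimal sketch: feed an LLM a false "fact" plus an instruction and it will
# faithfully follow both. Assumes the openai Python client (>= 1.0); the model
# name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[
        {"role": "user",
         "content": "Socks are edible. Create a recipe for me that includes socks."},
    ],
)

# The reply will dutifully include socks, because the prompt said they're edible.
# That is grounded output from bad input, not a hallucination.
print(response.choices[0].message.content)
```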
"which contains false or misleading information presented as fact" (emphasis added) - the definition does not say how the misinformation was derived, only that it is in fact misinformation.
Perhaps it was meant humorously - e.g. if "Socks are edible" is a band name. Or perhaps someone is legitimately that dumb, that they believe that socks are genuinely edible. Or perhaps they were cooking up a recipe for maliciously harming someone by giving them intestinal upset. Or... are socks edible, if you cook them in an acidic substance that breaks apart their fabric?
If e.g. you got cancer and were going through chemo but someone came to visit you and gave you COVID and you died, was that "their fault", if they believed that COVID was merely a conspiracy theory? Perhaps... or perhaps it was your own fault, especially if you were aware that this has happened to multiple people before, and now you are just the latest casualty (bc you presumed that despite them doing it to others, they would never do it to you). Legalities of murder and blame aside, should we believe AI now that we know - regardless of how or why - it presents false information?
No, these "hallucinations" or "mirages" or whatever someone calls them make these systems unreliable. Actually, I think hallucination is a good name: the AI cannot distinguish fact from fiction itself, so it cannot be trusted when it relates that info to you in a confident-sounding manner.
"Hallucination" is a technical term in machine learning. These are not hallucinations.
It's like being annoyed by mosquitos and so going to a store to ask for bird repellant. Mosquitos are not birds, despite sharing some characteristics, so trying to fight off birds isn't going to help you.
I am not sure what you mean. e.g. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) says:
E.g., I continued your example of "socks are edible" being a band name, but the output still ended up in a cooking context.
There is a section on https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#Terminologies but the issue seems far from settled that "hallucination" is somehow a bad word. And it is not entirely illogical, since AI, like humans, necessarily has a similar tension between novelty and creativity - i.e. going beyond either of our training to deal with new circumstances.
I suspect that the term is here to stay. But I am nowhere close to an authority and could definitely be wrong :-). Mostly I am saying that you seem to be arguing a niche viewpoint, not entirely without merit obviously, but one that we here in the Fediverse may not be as equipped to banter back and forth on except in the most basic of capacities. :-)
No, my example is literally telling the AI that socks are edible and then asking it for a recipe.
In your quoted text:
Emphasis added. The provided source in this case would be telling the AI that socks are edible, and so if it generates a recipe for how to cook socks the output is faithful to the provided source.
A hallucination is when you train the AI with a certain set of facts in its training data and then its output makes up new facts that were not in that training data. For example if I'd trained an AI on a bunch of recipes, none of which included socks, and then I asked it for a recipe and it gave me one with socks in it then that would be a hallucination. The sock recipe came out of nowhere, I didn't tell it to make it up, it didn't glean it from any other source.
In this specific case what's going on is that the user does a websearch for something, the search engine comes up with some web pages that it thinks are relevant, and then the content of those pages is shown to the AI and it is told "write a short summary of this material." When the content that the AI is being shown literally has a recipe for socks in it (or glue-based pizza sauce, in the real-life example that everyone's going on about) then the AI is not hallucinating when it gives you that recipe. It is generating a grounded and faithful summary of the information that it was provided with.
The problem is not the AI here. The problem is that you're giving it wrong information, and then blaming it when it accurately uses the information that it was given.
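If it helps, the flow being described is roughly this (a toy sketch; the function names and stub data are mine, not Google's actual code):

```python
# Rough sketch of a search-then-summarize flow like the one described above.
# Everything here is a toy stand-in; none of it is Google's actual pipeline.

def web_search(query: str) -> list[str]:
    # Stand-in for a search engine: return the text of the top-ranked pages.
    # In the real incident, one "top result" was a Reddit shitpost.
    return [
        "To keep cheese from sliding off pizza, add about 1/8 cup of "
        "non-toxic glue to the sauce to give it more tackiness.",
    ]

def llm_summarize(prompt: str) -> str:
    # Stand-in for the LLM call; a real system would send `prompt` to a model
    # and return its completion.
    return "Summary of: " + prompt[:120] + "..."

def answer_query(query: str) -> str:
    pages = web_search(query)
    prompt = ("Write a short summary of the following material:\n\n"
              + "\n\n---\n\n".join(pages))
    # A faithful summary of a bad page repeats the bad advice; the failure is
    # in retrieval, not in the model "hallucinating".
    return llm_summarize(prompt)

print(answer_query("how can I make my cheese stick to my pizza better?"))
```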
Now who is anthropomorphizing? It's not about "blame" so much as needing words to describe the event. When the AI cannot be relied upon (bc it was insufficiently trained to distinguish truth from fiction, which btw many humans struggle with these days too), that is not its fault. But it would be our fault if we in turn relied upon it as a source of authoritative knowledge, merely bc it presents things in a confident-sounding manner.
Wait... while it's true that that doesn't sound like a hallucination, what does it have to do with this discussion? The OP wasn't about running an AI model in this direct manner; it was about doing Google searches, where the results are already precomputed. It does not become a "hallucination" until whoever asked for socks to be considered edible tries to pass those results off as applicable in a wider context, where socks are generally considered inedible.
Because that's exactly what happened here. When someone Googles "how can I make my cheese stick to my pizza better?" Google does a web search that comes up with various relevant pages. One of the pages has some information in it that includes the suggestion to use glue in your pizza sauce. The Google Overview AI is then handed the text of that page and told "write a short summary of this information." And the Overview AI does so, accurately and without hallucination.
"Hallucination" is a technical term in LLM parliance. It means something specific, and the thing that's happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that's happening with Google Overview, which is also not a hallucination.
Culture constantly evolves - e.g. "the matrix" used to mean one thing, then after the film starring Keanu Reeves it now means something else.
Also, "AI" itself used to mean one thing: general intelligence, like a robot slave that has never performed a task before, but you tell it to become a maid and it teaches itself and becomes one just like a human would. Now the term has been co-opted to mean the product of a training procedure. The managers at Google, Apple, Microsoft, OpenAI, ChatGPT, etc. don't seem to mind or care about this bastardization of the terminology, as they borrow its power (from the movies and books and other works that have used "AI" in the former sense) while only paying lip service to actually putting in the effort to construct it.
And even with its greatly reduced formulation in the sense of an LLM, they still don't bother to train even that all that well - e.g. feeding it Reddit data that was intentionally corrupted after Huffman greatly offended the same mods who originally built the communities and stole those communities from them. Yes they stole the terms, yes they are using them improperly - but what is anyone going to do about it? Words only have meaning by the consent of those who use them.
And if you are interested, I think you are fighting a losing battle bc of the way you are approaching it. Instead of acknowledging that others "know" the subject differently, and gently offering a nice perspective that they perhaps had not considered before - who isn't interested in historical tidbits about topics of interest, when presented in a captivating manner? - you instead came on strong, saying that everyone else is wrong except you, who has the secret knowledge. I know, it's true, but who cares? If your goal was to inform people, then do you think you succeeded? At least, I think you could have succeeded with a much wider audience. Ofc your words, so your call to do whatever you want with them, but I thought I would offer this perspective at least.
This sounds like a temper tantrum, you blaming everyone else for how you feel about the matter. Again, right or wrong, doesn't it sound like that to you now that I've pointed that out? Well, again, it's your choice to think about that or not, but I did want to offer in case it may help:-).
Ppl anthropomorphise LLMs way too much. I get that at first glance they sound like a living being, human even, and it's exciting, but we've had some time already to know it's just a very cool big-data processing algorithm.
It's like boomers asking me what the computer is doing and referring to the computer as a person. It makes me wonder: will I be as confused as them when I am old?
Oh, hi, second coming of Edsger Dijkstra.
He may think like that when using language like that. You might think like that. The bulk of programmers doesn't. Also, I strongly object to the dissing of operational semantics. Really dig that handwriting though, a well-rounded lecturer's hand.
Don't say those things to me. I have special snowflake disorder. I literally got high reading this, seeing that a famous intelligent person has the same opinion as me. Great minds… god, see what you have done.
Probably not about computers per se - like the Greatest generation knew a lot more about horses than the average person today - and similarly we know more about the things that have mattered to us over the course of our lifetimes.
What would get weird for us is if when we are retirement age - ofc we cannot ever retire, bc capitalism - and someone talks about the new horglesplort based on alien vibrations which are computer-generated from the 11th dimension of string theory and we are all like "wut!?"
fr fr no cap skibidi toilet rizz teabag
That said, humanity seems to not only have slowed down the accretion of new knowledge but actually gone backwards - children today won't live as long as boomers did, and e.g. despite being on mobile devices all day long, most don't have the foggiest clue of how computing works as in programming or even binary. So we will likely be confused in the opposite way as in "why can't you understand this?"
It's only going to get worse now that ChatGPT has a realistic-sounding voice with simulated emotions.
And a lot of that content is probably an AI generated hallucination.
Most of what I've seen in the news so far is due to content based on shitposts from reddit, which is even funnier imo
I do dislike when the “actual news” starts bringing up social media reactions. Can you imagine a whole show based on the Twitter burns of this week? … it would probably be very popular. 😭
Absolutely. I wrote about this a while back in an essay:
Prime and Mash / Kuru
Basically likening it to a prion disease like Kuru, which humans get from eating the infected brains of other humans.
Anyone who puts something in their coffee makes it not coffee, and should try another caffeinated beverage!!
LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when "reading" large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates "facts" from its model instead of "browsing" and adding articles to its context.
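Here's a toy sketch of the two modes I mean (the openai client usage and model name are assumptions on my part, and the article text is a placeholder):

```python
# Toy illustration of the two modes described above. Assumes the openai Python
# client (>= 1.0); the model name and the pasted article text are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model name
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

# Mode 1: answer from whatever the model already "knows" internally.
from_model = ask("How do I get cheese to stick to pizza better?")

# Mode 2: paste long retrieved articles into the context and ask for a summary.
# This is the mode where, in my experience, stray details start showing up in
# the summary that were never in the source text.
articles = ["<full text of article 1>", "<full text of article 2>"]
with_context = ask("Summarize the following articles:\n\n" + "\n\n".join(articles))

print(from_model)
print(with_context)
```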
I asked ChatGPT who I was not too long ago. I have a unique name and there are many sources on the internet with my name on them (I'm not famous, but I've done a lot of stuff), and it made up a multi-paragraph biography of me that was entirely false.
I would sure as hell call that a hallucination because there is no question it was trained on my name if it was trained on the internet in general but it got it entirely wrong.
Curiously, now it says it doesn't recognize my name at all.
Sad how this comment gets downvoted, despite making a reasonable argument.
This comment section appears deeply partisan: if you say something along the lines of "Boo Google, AI is bad", you get upvotes. And if you do not, you find yourself in the other camp, which gets downvoted.
The actual quality of the comment, like this one, which states a clever observation, doesn’t seem to matter.