this post was submitted on 05 Dec 2024
528 points (94.4% liked)
Technology
No. Learn to become media literate. Just as glancing at the preview of the first Google result is not enough, blindly trusting LLMs is a bad idea. And given how shitty Google has become lately, ChatGPT might be the lesser of two evils.
Yes. Using ChatGPT as a search engine showcases a distinct lack of media literacy. It's not an information resource; it's a text generator. That's it. If it lacks information, it will just make something up. That's not something anyone should use as a tool for learning or research.
Both the paid version of ChatGPT and Copilot are able to search the web when they don't know about something.
The biggest problem with the current models is that they aren't very good at knowing when they don't know something.
The o1 preview actually handles this pretty well, but your average search takes north of 10 seconds.
They never know about something, though. They are just text randomisers trained to generate plausible-looking text.
What does that have to do with what I wrote?
The problem isn't that the model doesn't know when it doesn't know. The models never know. They're text predictors. Sometimes the predictive text happens to be right, but the text predictor doesn't know.
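Stripped down, a language model is a next-token predictor. A toy bigram sketch makes the point (the corpus here is made up purely for illustration; real LLMs are vastly larger, but the interface is the same: text in, statistically likely continuation out):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which,
# then always predict the most frequent follower. Note it optimises
# for plausibility, not truth -- it has no notion of "knowing".
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most plausible continuation.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice, "mat" only once
```

The prediction happens to be a word that really did follow "the", but nothing in the mechanism checks whether the output is true; scaled up, that is the "sometimes the predictive text happens to be right" effect.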
So, let me get this straight. Is it your purpose in life to butt in, any time anyone mentions the word "know" in any context, with no helpful information relevant to the message at hand, just to point out that AI isn't alive (which is obvious to everyone) and that it's just a text predictor (which is misleading at best)? Can someone help me crowdsource this poor soul a hobby?
You're strangely angry
Every time I try to talk about something, somebody with very little knowledge about the subject but very strong feelings about it butts in to pedantically argue a point.
It gets kind of tiring, I guess.
How you doin?
I get it. My point was needlessly pedantic. Sorry about that.
You are wrong. It is incredibly useful if the thing you are trying to Google has multiple meanings, e.g. how to kill a child. LLMs can help you figure out more specific search terms and where to look.
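The intended meaning of that query is, of course, terminating a child process. A minimal sketch (the `sleep 60` command is just a stand-in for any long-running child):

```python
import signal
import subprocess

# Killing a *child process* -- the meaning an LLM can help you
# narrow the search toward ("terminate child process SIGTERM").
child = subprocess.Popen(["sleep", "60"])  # spawn a long-running child
child.terminate()                          # send SIGTERM to it
child.wait()                               # reap it so no zombie remains
# On POSIX, returncode is the negated signal number after a signal death.
print(child.returncode)
```

Searching for the exact terms in those comments returns process-management documentation rather than something that gets you put on a watchlist.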
Not knowing how to use a search engine properly doesn't mean these sites are better. It just means you have more to learn.
Well, inside that text generator lies useful information, as well as misinformation of course, because it has been trained on exactly that. Does it make shit up? Absolutely. But so do (and did) a lot of Google or Bing search results, even before the AI-slop content-farm era.
And besides that, it is a fancy text generator that can use tools, such as searching Bing (in the case of ChatGPT) and summarizing the results. While not 100% accurate, the summaries are usually fairly good.
In my experience, the combination of information in the LLM, web search, asking follow-up questions, and looking at the sources gives better and much faster results than sifting through search results manually.
As long as you don’t take the first reply as gospel truth (as you should not do with the first google or bing result either) and you apply the appropriate amount of scrutiny based on the importance of your questions (as you should always do), ChatGPT is far superior to a classic web search. Which is, of course, where media literacy matters.