this post was submitted on 31 Jul 2024
285 points (96.4% liked)
Technology
You can prove some things are correct, like math problems (assuming the axioms they are based on are also correct).
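As a tiny illustration of the kind of statement that *can* be proven from axioms, here is a one-liner sketched in Lean 4 (using a commutativity lemma from Lean's core library):

```lean
-- Provable once and for all from the axioms of arithmetic:
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```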
You can't prove that events actually happened. That's a philosophical issue even with human memory: we can't prove anything in the past actually occurred. We can hope that our memory of events is accurate and reliable and work from there, but it can't actually be proven. In theory, everything could have just been implanted into our minds. This is incredibly unlikely (and not a useful assumption anyway), but it can't be ruled out.
If we could prove events in the past are true we wouldn't have so many pseudo-historians making up crazy things about the pyramids, or whatever else. We can collect evidence and make inferences, but we can't prove it because it is no longer happening. There's a chance that we miss something or some information can't be recovered.
LLMs are algorithms that use large amounts of data to identify correlations. You can tune them to give more unique answers or more consistent answers (among other knobs), but they aren't intelligent. They are, at best, correlation finders. If you give them bad data (internet conversations) or incomplete data, they will at best (usually confidently) give back bad information. People who don't understand how they work assume they're actually intelligent and can do more than this. That misconception is dangerous and should be dispelled quickly, or people will believe any garbage an LLM spits out, like the example from this post.
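To make the "tune them to give more unique answers or more consistent answers" part concrete, here is a minimal sketch of temperature sampling, the usual knob for that trade-off (the function name is illustrative, not from any particular library):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into sampling probabilities.

    Lower temperature -> peaked distribution (more consistent answers);
    higher temperature -> flatter distribution (more varied answers).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.5)  # peaked: near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # flat: more varied sampling
```

The same logits yield very different distributions, which is all the "creativity" setting really changes; nothing about the underlying correlations gets smarter.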
This sounds like an overly pedantic view of "prove"
No. It's just pure math and logic. And LLMs are nothing more than billions of additions and multiplications. Literally. You can prove certain things about them just like you can prove theorems in mathematics. It's an ongoing research field.
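To illustrate the "billions of additions and multiplications" claim: a single network layer really is just multiply-and-add, as in this bare-bones sketch (no framework, plain Python):

```python
def linear_layer(x, weights, bias):
    # One layer of a neural network: only multiplications and additions.
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# An LLM forward pass chains billions of these operations
# (plus simple elementwise nonlinearities), nothing more exotic.
x = [1.0, 2.0]
W = [[0.5, -1.0], [2.0, 0.25]]
b = [0.1, -0.1]
y = linear_layer(x, W, b)  # [0.5*1.0 + (-1.0)*2.0 + 0.1, 2.0*1.0 + 0.25*2.0 - 0.1]
```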
Okay: using additions and multiplications prove the assassination attempt on Donald Trump happened
How would you even prove something like that outside of LLMs? What is your point? That you cannot prove anything except "I think therefore I am"?
Either you haven't read my comments or you're intentionally trying to be provocative.
My point is the same as OP's point (which you veered away from in order to show off that You Are Very Smart): it is literally impossible for a computer system to prove a historical event has happened.
I'm having a hard time keeping track of all of the threads and replies evolving here. Forgive me. But I assume you mean the following one?
This is simply a wrong statement. You can indeed prove certain properties about these models. That implies, of course, that you're able to formulate the property fully.
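As a hedged illustration of what a provable property of a model can look like (a generic example of the technique called interval bound propagation, not necessarily what the commenter meant): given an input range, you can compute output bounds that are guaranteed to hold for *every* input in that range, as opposed to factual claims about the world:

```python
def interval_linear(lo, hi, weights, bias):
    """Propagate an input box [lo, hi] through a linear layer.

    Returns sound output bounds: for every input inside the box,
    the true output is guaranteed to lie within the returned bounds.
    This is the flavor of property one can actually prove about a network.
    """
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        lo_acc = hi_acc = b
        for w, l, h in zip(row, lo, hi):
            # A positive weight maps the low end to the low end;
            # a negative weight swaps the two ends.
            lo_acc += w * (l if w >= 0 else h)
            hi_acc += w * (h if w >= 0 else l)
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

lo, hi = [0.0, 0.0], [1.0, 1.0]
W = [[1.0, -1.0]]
b = [0.0]
bounds = interval_linear(lo, hi, W, b)  # output provably within [-1.0, 1.0]
```

Note the contrast with the thread's main question: this proves a mathematical property of the computation itself, not the truth of any event the model talks about.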
I don't know why the discussion went this far off track. The main point, though, is that everyone, including OP, is trying to discredit AI by bringing up things it was never supposed to be good at. By design, it's not good at knowledge retrieval. But everyone hates it because it hallucinates fake news. It's beyond me why people argue like that.
Okay, how does the model prove the assassination attempt happened? Because that is what OP was talking about.
It was clear from the context that OP was saying "It is impossible to mathematically determine if something [historical] is correct." They omitted one word, and instead of using context clues you went into a long, unnecessary post on how we prove even numbers are divisible by 2. If you tried steel-manning their post instead of trying to show off with an "Um, actually...", you wouldn't be getting lost in the replies, as we'd be staying on the original topic.
We're missing the context again. It's not people trying to discredit AI. People are trying to discredit companies insisting on using AI for things it is bad at.
It sounds like you actually agree with OP: AI should not be used for this purpose. Instead of saying "I agree, this is a bad use of AI; it should only be used for X, Y, and Z," you felt the need to White Knight for AI. The problem right now isn't AI being attacked; it's companies treating AI like a miracle that can do everything.