Shit's confidently wrong way too often. You wouldn't even realize it's bullshit as you read it.
Give me an example to replicate.
Ask it how many Rs there are in the word strawberry (quick check below).
Or have it write some code and see if it invents libraries that don't exist.
Or ask it a legal question and see if it invents a court case that doesn't exist.
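For what it's worth, the strawberry question has an easy ground truth you can verify without any model at all. A minimal plain-Python check, nothing assumed beyond the standard library:

```python
# Ground truth for the "strawberry" question -- no AI involved, just string counting.
word = "strawberry"
r_count = word.lower().count("r")
print(f'"{word}" contains {r_count} letter r\'s')  # prints 3
```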
It’s important to know how to use it, not just blindly accept its responses.
Previously it would say 2. GPT thinks Wailord is the heaviest Pokémon, Google thinks you can buy a rune pickaxe in OSRS at any trader store. Was it Google that suggested a healthy dose of glue on pizza to keep the toppings on?
AI is wrong more often than it is right.
AI is right when you use it for the correct things. The user is wrong when they think it is supposed to be all-powerful. And AI companies are ultimately to blame for marketing it as something more than it is: a buggy word calculator that requires a lot of user effort.
I use it every day, and I know what it can and cannot do. I don't complain, because I understand it's basically "alpha" software at this point, but I can see a huge difference between it now and last year.
Try to ask ChatGPT to make an image of a cat without a tail. It’s hilariously impossible. Does that mean it can’t summarize a document or help me calculate compound interest or help me understand a coding concept? Nope.
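Compound interest is a good example of a task where you can check the model instead of trusting it. A small sketch of the standard formula A = P(1 + r/n)^(nt); the numbers here are made up purely for illustration:

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
# P = principal, r = annual rate, n = compounding periods per year, t = years.
def compound_amount(principal: float, rate: float, periods_per_year: int, years: float) -> float:
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# Made-up example numbers, just to have something concrete to compare an AI answer against.
print(round(compound_amount(1000.0, 0.05, 12, 10), 2))  # ~1647.01
```

If a model's answer disagrees with that by more than rounding, the model is the one that's wrong.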
The majority of people using AI use it to ask a question, and AI will reliably spit out the wrong answer. This 'alpha' product is pushed absolutely fucking everywhere, is currently terrible for 99% of people to use properly, and in less techy spaces online people treat AI like it's always correct.
If you can spend the time to filter out its constant lies, good for you; most people don't even know it's constantly lying.
Right. It’s not ready for the average user. I agree completely. It has, however, made me significantly more productive in every part of my life.
AI gives different answers to the same question. I don't think you can write a prompt that makes it answer the same way every time.
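That run-to-run variation comes from sampling. As a rough sketch only (this assumes the OpenAI Python SDK, the model name is a placeholder, and even these settings don't guarantee identical output), temperature and seed are the usual knobs:

```python
# Sketch of reducing (not eliminating) run-to-run variation -- assumes the
# OpenAI Python SDK (`pip install openai`) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    temperature=0,        # lower temperature -> less variation between runs
    seed=42,              # best-effort reproducibility, not a hard guarantee
)
print(response.choices[0].message.content)
```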
CalcGPT is an example where the AI is wrong most of the time, but it may not be the best example.
Give me an example. It cannot be opinion-based.
The ironic part is that it's not bad as an index. Ignore the garbage generative output and go straight to the cited sources, and you somehow get more useful links than from an actual search engine.