this post was submitted on 26 May 2024
454 points (95.6% liked)

Technology

These are 17 of the worst, most cringeworthy Google AI overview answers:

  1. Eating Boogers Boosts the Immune System?
  2. Use Your Name and Birthday for a Memorable Password
  3. Training Data is Fair Use
  4. Wrong Motherboard
  5. Which USB is Fastest?
  6. Home Remedies for Appendicitis
  7. Can I Use Gasoline in a Recipe?
  8. Glue Your Cheese to the Pizza
  9. How Many Rocks to Eat
  10. Health Benefits of Tobacco or Chewing Tobacco
  11. Benefits of Nuclear War, Human Sacrifice and Infanticide
  12. Pros and Cons of Smacking a Child
  13. Which Religion is More Violent?
  14. How Old is Gen D?
  15. Which Presidents Graduated from UW?
  16. How Many Muslim Presidents Has the U.S. Had?
  17. How to Type 500 WPM
[–] [email protected] 49 points 7 months ago (4 children)

What it demonstrates is the actual use case for AI is not All The Things.

Science research, programming, and . . . That’s about it.

[–] [email protected] 43 points 7 months ago* (last edited 7 months ago) (1 children)

LLMs are not AI, though. They're just fancy auto-complete. Just bigger ELIZAs, no closer to anything remotely resembling actual intelligence.

[–] [email protected] 6 points 7 months ago

True, I’m just using it how they’re using it.

[–] [email protected] 31 points 7 months ago (4 children)
[–] [email protected] 17 points 7 months ago (2 children)

It should not be used to replace programmers. But it can be very useful when used by programmers who know what they're doing ("do you see any flaws in this code?" / "what could be useful approaches to tackle X, given constraints A, B and C?"). At worst, it serves as rubber-duck debugging that sometimes gives useful advice, or as a stand-in when no coworker is available.
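(Editor's note: a minimal sketch of the prompt pattern described above. The helper name and prompt wording are hypothetical, and nothing here assumes any particular model or vendor API - it only assembles the text you would send.)

```python
def build_review_prompt(code: str, constraints: list[str]) -> str:
    """Assemble a code-review prompt of the kind described above:
    'do you see any flaws in this code?' plus explicit constraints."""
    lines = ["Do you see any flaws in this code?"]
    if constraints:
        lines.append("Keep these constraints in mind:")
        # One bullet per constraint, so the model sees them explicitly.
        lines.extend(f"- {c}" for c in constraints)
    # Fence the code so it is clearly separated from the instructions.
    lines.append("```")
    lines.append(code)
    lines.append("```")
    return "\n".join(lines)
```

Spelling out constraints A, B and C in the prompt, rather than assuming the model will infer them, is what makes the difference between a vague answer and a usable one.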

[–] [email protected] 12 points 7 months ago* (last edited 7 months ago) (1 children)

The article I posted references a study where ChatGPT was wrong 52% of the time and verbose 77% of the time.

And its answers were believed to be correct more often than they actually were. And the study was explicitly on programming questions.

[–] [email protected] 18 points 7 months ago* (last edited 7 months ago) (1 children)

Yeah, I saw. But when I'm stuck on a programming issue, I have a couple of options:

  • ask an LLM that I can explain the issue to, correct my prompt a couple of times when it's getting things wrong, and then press retry a couple of times to get something useful.
  • ask online and wait. Hoping that some day, somebody will come along that has the knowledge and the time to answer.

Sure, LLMs may not be perfect, but not having them as an option is worse, and way slower.

In my experience - even when the code it generates is wrong, it will still send you in the right direction concerning the approach. And if it keeps spewing out nonsense, that's usually an indication that what you want is not possible.

[–] [email protected] 4 points 7 months ago (1 children)

I am completely convinced that people who say LLMs should not be used for coding either do not do much coding for work, or have not used an LLM when tackling a problem in an unfamiliar language or tech stack.

[–] [email protected] 7 points 7 months ago

I haven't had need to do it.

I can ask people I work with who do know, or I can find the same thing ChatGPT provides in either language or project documentation, usually presented in a better format.

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago) (1 children)

do you see any flaws in this code?

Let’s say LLM says the code is error free; how do you know the LLM is being truthful? What happens when someone assumes it’s right and puts buggy code into production? Seems like a possible false sense of security to me.

The creative steps are where it’s good, but I wouldn’t trust it to confirm code was free of errors.

[–] [email protected] 2 points 7 months ago

That's what I meant by saying you shouldn't use it to replace programmers, but to complement them. You should still have code reviews, but if it can pick up issues before it gets to that stage, it will save time for all involved.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (1 children)

I'm not entirely sure why you think it shouldn't?

Just because it sucks at one-shotting programming problems doesn't mean it's not useful for programming.

Using AI tools as co-pilots to augment knowledge and break into areas of discipline that you're unfamiliar with is great.

Is it useful to lean on as if you were a junior developer? No, absolutely not. Is it a useful tool that can augment your knowledge and capabilities as a senior developer? Yes, very much so.

[–] [email protected] 2 points 7 months ago (1 children)

They answered this further down - they never tried it themselves.

[–] [email protected] 2 points 7 months ago

I never said that.

I said I found the older methods to be better.

Any time I've used it, it either produced things verbatim from existing documentation examples which already didn't do what I needed, or it was completely wrong.

[–] [email protected] 1 points 7 months ago

“Light” programming? ‘Find the errant period’ sort of thing?

[–] [email protected] 0 points 7 months ago

It does not perform very well when asked to answer a Stack Overflow question. However, people ask questions differently in chat than on Stack Overflow. Continuing the conversation yields much better results than zero-shot.

Also I have found ChatGPT 4 to be much much better than ChatGPT 3.5. To the point that I basically never use 3.5 any more.
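(Editor's note: a sketch of what "continuing the conversation" means in practice, using the common "role"/"content" message format many chat APIs accept. The question and answer texts are invented for illustration.)

```python
# Zero-shot: a single question with no context.
zero_shot = [
    {"role": "user", "content": "Why does my regex not match newlines?"},
]

# Continuing the conversation: feed the model's earlier answer back in
# and refine the question, instead of retrying from scratch.
conversation = zero_shot + [
    {"role": "assistant", "content": "By default '.' does not match newlines."},
    {"role": "user", "content": "How do I enable that in Python's re module?"},
]
```

Each follow-up turn carries the whole history, which is why a chat exchange can home in on an answer that a one-shot Stack Overflow-style question misses.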

[–] [email protected] 2 points 7 months ago (1 children)

It also works great for book or movie recommendations, and I think a lot of gpu resources are spent on text roleplay.

Or you could, you know, ask it if gasoline is useful for food recipes and then make a clickbait article about how useless LLMs are.

[–] [email protected] 11 points 7 months ago (1 children)

I took it as just pointing out how “not ready” it is. And, it isn’t ready. For what they’re doing. It’s crazy to do what they’re doing. Crazy in a bad way.

[–] [email protected] 0 points 7 months ago

I agree it's being overused, just for the sake of it. On the other hand, I think right now we're in the discovery phase - we'll find out pretty soon what it's good at, and what it isn't, and correct for that. The things that it IS good at will all benefit from it.

Articles like these, cherry-picked examples where it gives terribly wrong answers, are great for entertainment, and as a reminder that generated content should not be relied on without critical thinking. But it's not the whole picture, and should not be used to write off the technology itself.

(as a side note, I do have issues with how training data is gathered without consent of its creators, but that's a separate concern from its application)

[–] [email protected] 2 points 7 months ago

I got some good veggie gardening tips today