this post was submitted on 03 Mar 2025
721 points (99.2% liked)

But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same thing is just stupidity.

[–] [email protected] 38 points 14 hours ago (3 children)

AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.

This is why they’re uncreative and why they “hallucinate”. The model isn’t thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
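
A minimal sketch of what that calculation amounts to, assuming the Hugging Face transformers library and using gpt2 purely as a stand-in model: all the model ever produces is a probability distribution over the next token given the prompt, so a confident-looking citation is just a high-probability string of words, not a lookup of anything real.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is only a stand-in; any causal language model works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The leading court case on AI-generated citations is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Everything the model "knows" at this point is a probability distribution
# over which token is statistically likely to come next -- not a fact lookup.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
```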

[–] [email protected] 7 points 6 hours ago (1 children)

It's like when you're having a conversation on autopilot.

"Mum, can I play with my frisbee?" Sure, honey. "Mum, can I have an ice cream from the fridge?" Sure can. "Mum, can I invade Poland?" Absolutely, whatever you want.

[–] [email protected] 1 points 1 hour ago

So ChatGPT started WW2

[–] [email protected] 3 points 7 hours ago

Don't need something the size of AWS these days. I ran one on my PC last week. But yeah, you're right otherwise.
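
For context on the "ran one on my PC" point, here is roughly what that looks like, again assuming Python and the Hugging Face transformers library; the commenter doesn't say what they actually used, so distilgpt2 below is just a placeholder for any small local model.

```python
from transformers import pipeline

# distilgpt2 is a placeholder for whatever small model you download;
# it is a few hundred megabytes and runs fine on an ordinary CPU.
generator = pipeline("text-generation", model="distilgpt2", device=-1)  # device=-1 = CPU

result = generator("The judge ruled that", max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```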