this post was submitted on 12 Dec 2023
71 points (94.9% liked)

As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause
Unproven hypothesis seeks to explain ChatGPT's seemingly new reluctance to do hard work.

top 8 comments
[–] [email protected] 57 points 11 months ago* (last edited 11 months ago) (2 children)

expectation: skynet

reality: marvin from hitchhiker's guide to the galaxy

[–] [email protected] 16 points 11 months ago (1 children)

No. Reality can't be better than expectations.

[–] [email protected] 8 points 11 months ago (1 children)
[–] [email protected] 1 points 11 months ago

What’s the point anyway…

[–] [email protected] 21 points 11 months ago* (last edited 11 months ago)

ChatGPT being like "this sounds like a January problem" is pretty god damn funny.

[–] [email protected] 14 points 11 months ago

This is the best summary I could come up with:


In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results.

Later, Mike Swoopskee tweeted, "What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?"

Research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to "take a deep breath" before doing a math problem.

(It's worth noting that reproducing results with LLMs can be difficult because of random elements at play that vary outputs over time, so people sample a large number of responses.)

This episode is a window into the quickly unfolding world of LLMs and a peek into an exploration of largely unknown computer science territory.

"Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support sooo many use cases at once," he wrote.


The original article contains 755 words, the summary contains 195 words. Saved 74%. I'm a bot and I'm open source!

[–] [email protected] 5 points 11 months ago

Huh. I would not have expected the time of year to be part of the input prompts.
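
The kind of test the bot's summary describes is straightforward to sketch: put a different date in the system prompt, sample many completions per condition, and compare their lengths. Below is a minimal, hypothetical version assuming the OpenAI Python client; the model name, prompt wording, task, and sample count are illustrative choices, not the methodology anyone in the article actually used.

```python
# Hypothetical sketch: compare completion lengths when the system prompt
# claims a May date versus a December date. The model name, prompts, and
# sample size are assumptions for illustration only.
from statistics import mean
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Write a Python function that parses an ISO 8601 timestamp."

def sample_lengths(fake_date: str, n: int = 30) -> list[int]:
    """Collect completion token counts with a given date in the system prompt."""
    lengths = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": f"Current date: {fake_date}"},
                {"role": "user", "content": TASK},
            ],
        )
        lengths.append(response.usage.completion_tokens)
    return lengths

# Outputs vary run to run even with identical prompts, so average over
# many samples per condition before comparing.
may = sample_lengths("2023-05-12")
december = sample_lengths("2023-12-12")
print(f"May mean: {mean(may):.1f} tokens, December mean: {mean(december):.1f} tokens")
```

Averaging over many samples matters because, as the summary notes, identical prompts can produce very different outputs, so a handful of runs is not enough to show a real difference between the two dates.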