this post was submitted on 05 Mar 2024
111 points (89.4% liked)

[–] [email protected] 7 points 8 months ago* (last edited 8 months ago) (3 children)

This is Kyle Hill's video on the predicted impact of AI-generated content on the internet, especially as it becomes more difficult to tell machine from human over text and video. He relays that experts say within a year huge portions of online content will be AI-generated. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?

[–] [email protected] 5 points 8 months ago (1 children)

I didn't get past the part where he started talking about the dark forest theory as if it "solved" the Fermi paradox. The Fermi paradox is an observation; the dark forest idea isn't even a theory in the scientific sense, it's a hypothesis. I was willing to sit down for the 15 min video. Why blow your credibility in the first sentences?

[–] [email protected] 3 points 8 months ago

Unfortunately the Dark Forest thing is super popular right now, so it gets the clicks.

Which is rather annoying, IMO, because as Fermi Paradox solutions go it's riddled with holes and implausibilities. But it's scary, and so people latch on to it easily.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

I generate AI content (some of which is art) for fun, so I am not against it in principle. I just don't find much enjoyment so far in consuming AI content made by others. The vast majority of it is mediocre, which seems like a natural consequence of lowering the barriers to entry.

The Sora demo, for example, is very compelling technologically, but it struck me less as something that would replace creative work than as a tool for getting that work done differently.

As AI content becomes more prevalent, I will disengage from it further and prefer authentic human experiences, at least as long as AI content continues to feel mostly soulless and vacuous.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?

I wouldn't mind it as much if these chatbots weren't being used for nefarious purposes, like mass data collection, tracking, influencing, and privacy violations. Other than that, if it walks like a human, talks like a human, and we are convinced it's a human, is there anything wrong with that? It might as well be human.

This is going to become a bigger and bigger question as we get closer to AGI. An AGI isn't going to suddenly "wake up" and become self-aware one day; all of these systems are slowly inching toward it. There's not going to be a clean line between "just a program mimicking a human" and "a fully self-aware entity". It's up to us to draw that line, and there are no hard rules for doing so, because it runs into the philosophical "problem of other minds".