this post was submitted on 04 Aug 2024
313 points (90.9% liked)

Technology

[–] [email protected] 74 points 3 months ago (15 children)

I'm going to attract downvotes, but this article doesn't convince me that he's becoming powerful or that we should be very afraid. He's a grifter, he's sleazy, and he's making a shit ton of money.

Anyone who has used these tools knows they are useful, but they aren't the great investment the investors claim they are.

Being able to fool a lot of people into believing it's intelligent doesn't make it good. When it can fool experts in a field, actively learn, or solve problems it wasn't trained on, that will be impressive.

Generative AI is just a new method of signal processing. The input signal, the text prompt, is passed through a function (the model) to produce another signal (the response). The model itself is produced from a huge amount of training text, much of which is effectively noise.
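That signal-processing view can be sketched with a deliberately tiny stand-in for a language model - a bigram lookup table built from training text. Everything here (the names, the corpus, the bigram approach) is an illustrative toy, not any real model's architecture or API:

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a toy bigram 'model': a table mapping each word to
    the words that followed it in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, prompt, length=5, seed=0):
    """Pass the input 'signal' (the prompt) through the function
    (the model) to produce an output 'signal' (the response)."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the grass"
model = train(corpus)
print(generate(model, "the cat"))
```

The point of the toy: the "intelligence" is nothing but a statistical function fitted to its training signal - scale it up a few billion parameters and you get an LLM, but the input-through-function-to-output shape is the same.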

To get AGI, it needs to be able to process a lot of noise across many different signals. "Reading text" can be one "signal" on a "communication" channel - vision and sound ride on it too, as body language and speech. But a neural network with human-level ability would require all five senses, plus reflexive responses to them - fear, guilt, trust, comfort, etc. We are nowhere near that.
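The multi-channel idea above can be sketched in miniature: per-sense feature vectors fused into one input signal. The numbers and the simple "early fusion" concatenation are placeholder assumptions for illustration, not how any real multimodal system encodes its inputs:

```python
# Toy fixed-length "feature vectors" for three sensory channels.
# In a real multimodal system each would come from a learned encoder;
# these numbers are made up purely for illustration.
text_features   = [0.2, 0.7, 0.1]   # e.g. from a language encoder
vision_features = [0.9, 0.3, 0.5]   # e.g. from an image encoder
audio_features  = [0.4, 0.6, 0.8]   # e.g. from a speech encoder

# "Early fusion": concatenate the channels into one input signal
# that a single network would then process.
fused = text_features + vision_features + audio_features
print(len(fused))  # 9
```

Even this trivial fusion step hints at the gap: current LLMs handle one or two channels well, while the comment's point is that human-level ability means processing all of them, plus the reflexes they trigger.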

[–] [email protected] 24 points 3 months ago (1 children)

Strong agree here. You hit on a lot of the core issues with LLMs, so I'll give my opinion on the economic aspects.

It's been more than a year since ChatGPT unleashed this plague of "slap AI on the product and consumers will put their children down as collateral to buy it!" - which, imo, we haven't seen happen whatsoever. Investors still have a hard-on for the term AI that goes into the stratosphere, but even that is starting to change a little.

Consumers' distrust of AI has risen considerably, and they have seen past the hype. Wrapping this back around to the CEOs' level of power, I just don't think LLMs have enough marketability with general consumers to turn these companies into juggernaut corpos.

LLMs absolutely have use cases, but they don't fit into most consumer products. No one wants AI washers or rice cookers or friggin' AI spoons, and shoehorning them in decreases interest in the product.

[–] [email protected] 5 points 3 months ago

That's also how I feel about "smart" devices in general. I don't want a smart refrigerator; I just want it to work. The same goes for my other appliances, like my washing machine, dishwasher, and rice cooker. The one area where I kind of want it, TVs, has been ruined by stupid tracking and ads.

What's going to kill AI isn't AI itself; it's AI being forced into products where it doesn't make sense, with ads thrown in on top to try to wring some sort of profit out of it.
