OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid
(www.theguardian.com)
I'm going to attract downvotes, but this article doesn't convince me that he's becoming powerful or that we should be very afraid. He's a grifter, he's sleazy, and he's making a shit ton of money.
Anyone who has used these tools knows they are useful, but they aren't the great investment the investors claim they are.
Being able to fool a lot of people into believing it's intelligent doesn't make it good. When it can fool experts in a field, actively learn, or solve problems it wasn't trained on, that will be impressive.
Generative AI is just a new method of signal processing. The input signal, the text prompt, is passed through a function (the model) to produce another signal (the response). The model is produced by a lot of input text, which can largely be noise.
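The "signal in, function, signal out" view above can be sketched with a toy model. This is purely illustrative (a bigram Markov chain, nothing like a real LLM): the "model" is a lookup table built from training text, and a response is produced by passing the prompt through it.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build the 'function' from input text: count which word follows which."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def respond(model, prompt, length=5, seed=0):
    """Pass the input signal (prompt) through the model to get an output signal."""
    rng = random.Random(seed)
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran after the dog"
model = train(corpus)
print(respond(model, "the cat"))
```

The point of the sketch: there's no understanding anywhere in there, just statistics over the training signal, which is the commenter's argument scaled down to a dozen lines.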
To get AGI it needs to be able to process a lot of noise and many different signals. "Reading text" can be one "signal" on a "communication" channel - you can have vision and sound on it too: body language, speech. But a neural network with human ability would require all five senses, and reflexes to them - fear, guilt, trust, comfort, etc. We are nowhere near that.
Strong agree here. You hit on a lot of the core issues with LLMs, so I'll give my opinion on the economic aspects.
It's been more than a year since ChatGPT's release unleashed this plague of "slap AI on the product and consumers will put their children up as collateral to buy it!", which IMO we haven't seen materialize whatsoever. Investors still have a hard-on for the term AI that goes into the stratosphere, but even that is starting to change a little.
Consumers' distrust of AI has risen considerably, and they've seen past the hype. Wrapping this back around to the CEO's level of power: I just don't think LLMs have enough marketability with general consumers to turn these companies into juggernaut corpos.
LLMs absolutely have use cases, but they don't fit into most consumer products. No one wants AI washers or rice cookers or friggin' AI spoons, and shoehorning them in decreases interest in the product.
That's also how I feel about "smart" devices in general. I don't want a smart refrigerator; I just want it to work. The same goes for other appliances, like my washing machine, dishwasher, and rice cooker. The one area where I kind of want it, TVs, has been ruined by stupid tracking and ads.
What's going to kill AI isn't AI itself, it's AI being forced into products where it doesn't make sense, and then ads being thrown in on top to try to make some sort of profit from it.