this post was submitted on 13 Nov 2024
560 points (95.5% liked)

Technology
(page 2) 50 comments
[–] [email protected] 15 points 19 hours ago (1 children)

I am so tired of the AI hype and hate. Please give me my gen-art interest back; please just make it obscure again to program art, I beg of you.

[–] [email protected] 4 points 14 hours ago* (last edited 14 hours ago) (1 children)

It's still quite obscure to actually mess with AI art instead of just throwing prompts at it and getting slop of varying quality. And I don't mean ControlNet, but GitHub repos with ComfyUI plugins that come with little explanation beyond a link to a paper, or "this is absolutely mathematically unsound but fun to mess with". Messing with stuff other than conditioning or mere model selection.
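The "messing with conditioning" this comment refers to usually means operating on prompt embeddings directly rather than on the text. A minimal sketch of one common trick, spherical linear interpolation (slerp) between two conditioning vectors, using toy 3-D vectors in place of real embeddings (the function and vector names here are illustrative, not from any particular plugin):

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two embedding vectors.

    Unlike plain lerp, slerp follows the great-circle arc between the
    vectors, which tends to keep blended conditioning at a sensible norm.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:  # vectors nearly parallel: plain lerp is fine
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Blend two (hypothetical) prompt embeddings halfway between each other.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
mid = slerp(a, b, 0.5)
```

With real models the same function is applied to the text-encoder output before it is fed to the diffusion model; linear interpolation there tends to shrink the vector's norm, which is why slerp is the usual choice.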

[–] [email protected] -4 points 7 hours ago (3 children)

Well, classic computers will always be limited and power hungry. Quantum computers are the key to AI achieving the next level.

[–] [email protected] 18 points 21 hours ago

Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains are only able to understand simple concepts like "line must go up" and follow them like religious tenets, though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.

[–] [email protected] 212 points 1 day ago (1 children)

"LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."

Please let this happen

[–] [email protected] 28 points 1 day ago

Market crash and third world war. What a time to be alive!

[–] [email protected] 4 points 17 hours ago

There's no bracing for this; the OpenAI CEO said the same thing like a year ago and people are still shovelling money at this dumpster fire today.

[–] [email protected] 6 points 19 hours ago

Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.

[–] [email protected] 186 points 1 day ago (3 children)

I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.

[–] [email protected] 7 points 18 hours ago

Of course most don't actually even believe it, that's just the pitch to get that VC juice. It's basically fraud all the way down.

[–] [email protected] 56 points 1 day ago (7 children)

... bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects...

One doesn't imagine any of them even remotely thinks a technological panacea is feasible.

... while they get super rich off it.

because they're only focusing on this.

[–] [email protected] 4 points 20 hours ago (1 children)

Soooo... Without capitalism?

[–] [email protected] 3 points 18 hours ago

Pretty much.

[–] [email protected] 4 points 17 hours ago

It's had all the signs of a bubble for the last few years.

[–] [email protected] 49 points 1 day ago
[–] [email protected] 5 points 19 hours ago (4 children)

So long; see you all in the next hype. Any guesses?

[–] [email protected] 61 points 1 day ago (4 children)

largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims. Which helped feed the hype.

I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

[–] [email protected] 3 points 15 hours ago

I read a lot, I guess, and I didn't understand why they think like this. From what I see, there are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use it to improve their own, contributing to the whole of code improvements and project participation and development. I feel like we really are at the beginning of a lot of better things, and it's iterative as it progresses. I feel hopeful.

[–] [email protected] 25 points 1 day ago (1 children)

OpenAI published a paper about GPT titled "Sparks of AGI".

I don't think they really believe it but it's good to bring in VC money

[–] [email protected] 5 points 22 hours ago (1 children)

That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming that LLMs will develop to the complexity of AGI.

[–] [email protected] 26 points 1 day ago

Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.

[–] [email protected] 17 points 1 day ago (1 children)

The call is coming from inside the house. The former Google CEO claims it will be like an alien intelligence, so we should just trust it to make political decisions for us, bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate

[–] [email protected] 6 points 20 hours ago

Nice, looking forward to it! So much money and time wasted on pipe dreams and hype. We need to get back to some actually useful innovation.

[–] [email protected] 96 points 1 day ago (11 children)

No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.

Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.

[–] [email protected] 32 points 1 day ago (4 children)

Well duhhhh.
Language models are insufficient.
They also need:

[–] [email protected] 3 points 19 hours ago

Until OpenAI announces a new 5t model or something, and then the hype refreshes.

[–] [email protected] 6 points 22 hours ago (4 children)

AI was 99% a fad. Besides OpenAI and Nvidia, none of the other corporations bullshitting about AI have made anything remotely useful using it.

[–] [email protected] 5 points 14 hours ago* (last edited 6 hours ago) (1 children)

Absolutely not true. Disclaimer: I do work for NVIDIA as a forward-deployed AI Engineer/Solutions Architect—meaning I don't build AI software internally for NVIDIA, but I embed with their customers' engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. edit: any opinions stated are solely my own; NVIDIA has a PR office to state any official company opinions.

To state this as simply as possible: I wouldn't have a job if our customers weren't seeing tremendous benefit from AI technology. The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I. I've seen it happen; entire engineering teams laid off because a technology just couldn't be implemented in a cost-effective way.

LLMs are a small subset of AI and Accelerated-Compute workflows in general.

[–] [email protected] 1 points 14 hours ago* (last edited 14 hours ago) (1 children)

To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology.

Right because corporate management doesn't ever blindly and stupidly overinvest in fads that blow up in their faces...

The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I.

You clearly have no clue what you're on about. As someone with degrees and experience in both CS and Finance, all I have to say is that's not at all how these things work. Plenty of companies lose money on these things in the hopes that their FP&A projection fever dreams will come true. And they're wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want to make to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hopes that it would be worthwhile eventually or that other alternatives can't be just as good or better.
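The point about financial models supporting any argument is easy to demonstrate. A toy sketch, with entirely made-up numbers and hypothetical function names: the same "AI project" comes out NPV-positive or NPV-negative depending purely on which growth and discount-rate assumptions the analyst picks, and both sets of assumptions could be defended in a board deck.

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def ai_project(initial_cost, first_year_saving, growth, years=5):
    """Hypothetical AI rollout: upfront spend, then growing yearly savings."""
    flows = [-initial_cost]
    flows += [first_year_saving * (1 + growth) ** t for t in range(years)]
    return flows

# Identical project, two sets of 'reasonable' analyst assumptions:
optimistic = npv(ai_project(10_000_000, 2_000_000, growth=0.30), discount_rate=0.08)
pessimistic = npv(ai_project(10_000_000, 2_000_000, growth=0.00), discount_rate=0.15)
# optimistic is positive, pessimistic is negative — same project either way.
```

Nothing here is dishonest in isolation; the sign of the answer is simply dominated by two inputs that are judgment calls, which is the sense in which FP&A is more art than science.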

I’ve seen it happen; entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.

This is usually what happens once they finally realize spending money on hype doesn't pay off and go back to more established business analytics, operations research, and conventional software which never makes mistakes if it's programmed correctly.

LLMs are a small subset of AI and Accelerated-Compute workflows in general.

No one ever said otherwise. And we're talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.

[–] [email protected] 3 points 13 hours ago (1 children)

That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.

That said. I stand by my statement because I and most of my colleagues in similar roles get continued, repeated and expanded-scope engagements. Definitely in LLMs and genAI in general especially over the last 3-5 years or so, but definitely not just in LLMs.

“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.

Perhaps I’m just not as jaded in my tech career.

operations research, and conventional software which never makes mistakes if it's programmed correctly.

Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and “completely disconnected from reality—aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.

[–] [email protected] 4 points 20 hours ago

I would say LLMs specifically are in that ballpark. Things like machine vision have been boringly productive and relatively unhyped.

There's certainly some utility to LLMs, but it's hard to see through all the crazy overestimations and the way it's being shoved everywhere by grifters.

[–] [email protected] 4 points 21 hours ago (1 children)

Nvidia made money, but I've not seen OpenAI do anything useful, and they are not even profitable.

[–] [email protected] 11 points 1 day ago

Of course it'll crash. Saying it's imminent though suggests someone needs to exercise their shorts.

[–] [email protected] 31 points 1 day ago (5 children)

"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."

"As I have always warned," he added, "that's just a fantasy."
