[–] [email protected] 5 points 4 days ago (1 children)

From a "compute" perspective (so not consumer graphics), power... doesn't really matter. There have been decades of research on the topic and it almost always boils down to "Run it at full bore for a shorter period of time" being better (outside of the kinds of corner cases that make for "top tier" thesis work).

AMD (and Intel) are very popular for their cost-to-performance ratios. Jensen is the big dog and he prices accordingly. But... while there is a lot of money in adapting models and middleware to AMD, the problem is still that not ALL models and middleware are ported. So it becomes a question of whether it is worth buying AMD when you'll still want/need Nvidia for the latest and greatest. Which is why those orgs tend to be closer to an Azure or AWS, where they are selling tiered hardware.

Which... is the same issue for FPGAs. There is a reason that EVERYBODY did their best to vilify and kill OpenCL, and it is not just because most code was thousands of lines of boilerplate and tens of lines of kernels. Which gets back to "Well. I can run this older model cheap, but I still want Nvidia for the new stuff...."
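
For anyone who never touched it, here is roughly what that looked like: a minimal vector-add sketch using pyopencl, which already hides most of the host-side plumbing. In plain C, the context/device/queue/buffer/program setup around a four-line kernel easily runs to hundreds of lines.

```python
# Minimal OpenCL vector-add via pyopencl -- a sketch, not production code.
# The device kernel is ~4 lines; everything else is host-side plumbing.
import numpy as np
import pyopencl as cl

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()   # pick whatever OpenCL device is available
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
"""
prg = cl.Program(ctx, kernel_src).build()
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```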

Which is why I think Nvidia's stock dropping is likely more about traders gaming the system than anything else. Because the work to use older models more efficiently and cheaply has already been a thing. And for the new stuff? You still want all the chooch.

[–] [email protected] 3 points 4 days ago (1 children)

Your assessment is missing the simple fact that an FPGA can do some things faster, and more cost-efficiently, than a GPU can, though. Nvidia is the Ford F-150 of the data center world, sure. It's stupidly huge, ridiculously expensive, and generally not needed unless it's being used at full utilization all the time. That's like the only time it makes sense.

If you want to run your own models that have a specific purpose, say, for scientific work folding proteins, and you have several custom extensible layers that do different things, Nvidia hardware and software don't even support this because of the nature of TensorRT. They JUST announced future support for such things, and it will take quite some time, and some vendor lock-in, for models to appropriately support it..... OR

Just use FPGAs to do the same work faster now for most of those things. The GenAI bullshit bandwagon finally has a wheel off, and it's obvious people don't care about the OpenAI approach to having one model doing everything. Compute work on this is already transitioning to single purpose workloads, which AMD saw coming and is prepared for. Nvidia is still out there selling these F-150s to idiots who just want to piss away money.

[–] [email protected] 5 points 4 days ago (1 children)

Your assessment is missing the simple fact that an FPGA can do some things faster, and more cost-efficiently, than a GPU can

Yes, there are corner cases (many of which no longer exist because of software/compiler enhancements but...). But there is always the argument of "Okay. So we run at 40% efficiency but our GPU is 500% faster so..."
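
Back-of-the-envelope version of that trade-off, with made-up numbers just to show the shape of it:

```python
# Toy energy-per-job comparison -- illustrative numbers, not benchmarks.
def energy_per_job(power_watts: float, seconds: float) -> float:
    """Energy in joules = average power draw * wall-clock time."""
    return power_watts * seconds

# Hypothetical FPGA: lower power draw, but the job takes longer.
fpga_joules = energy_per_job(power_watts=75, seconds=100)   # 7,500 J

# Hypothetical GPU: much higher power draw, but finishes ~5x sooner.
gpu_joules = energy_per_job(power_watts=400, seconds=20)    # 8,000 J

print(f"FPGA: {fpga_joules:,.0f} J   GPU: {gpu_joules:,.0f} J")
# Energy lands in the same ballpark, but the GPU node is free again
# 80 seconds earlier -- which is the "full bore for a shorter period" argument.
```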

Nvidia is the Ford F-150 of the data center world, sure. It’s stupidly huge, ridiculously expensive, and generally not needed unless it’s being used at full utilization all the time. That’s like the only time it makes sense.

You are thinking of this like a consumer, where those thoughts are completely valid (just look at how often I pack my hatchback dangerously full on the way to and from Lowe's...). But also... everyone should have that one friend with a pickup truck for when they need to move or take a load of stuff down to the dump or whatever. Owning a truck yourself is stupid, but knowing someone who does...

Which gets to the idea of having a fleet of work vehicles versus a personal vehicle. There is a reason so many companies have pickup trucks (maybe not an F-150, but something actually practical). Because, yeah, the gas consumption when you are just driving to the office is expensive. But when you don't have to drive back to headquarters to swap out vehicles when you realize you need to go buy some pipe and get all the fun tools? It pays off pretty fast, and the question stops being "Are we wasting gas money?" and becomes "Why do we have a car that we just use for giving quotes on jobs once a month?"

Which gets back to the data center issue. The vast majority DO have a good range of cards either due to outright buying AMD/Intel or just having older generations of cards that are still in use. And, as a consumer, you can save a lot of money by using a cheaper node. But... they are going to still need the big chonky boys which means they are still going to be paying for Jensen's new jacket. At which point... how many of the older cards do they REALLY need to keep in service?

Which gets back down to "is it actually cost effective?" when you likely need

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago) (1 children)

I'm thinking of this as someone who works in the space, and has for a long time.

An hour of time for a g4dn instance in AWS is 4x the cost of an FPGA that can do the same work faster in MOST cases. These aren't edge cases, they are MOST cases. Look at SageMaker, AML, or GMT pricing for the real cost sinks here as well.
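
The shape of that comparison, if you want to run it yourself (the hourly rates and runtimes below are placeholders, not real quotes; plug in current on-demand pricing and your own measured job times):

```python
# Toy cost-per-job comparison between a GPU instance and an FPGA instance.
# Rates and runtimes are placeholders -- substitute real on-demand pricing
# (e.g. g4dn vs f1 families) and measured runtimes for your own workload.
def cost_per_job(hourly_rate_usd: float, job_seconds: float) -> float:
    """Bill for one job, ignoring minimum-billing granularity."""
    return hourly_rate_usd * (job_seconds / 3600.0)

gpu_rate, gpu_seconds = 4.00, 300     # hypothetical GPU instance
fpga_rate, fpga_seconds = 1.00, 240   # hypothetical FPGA instance

print(f"GPU : ${cost_per_job(gpu_rate, gpu_seconds):.3f} per job")
print(f"FPGA: ${cost_per_job(fpga_rate, fpga_seconds):.3f} per job")
# With these made-up numbers the FPGA wins on both time and cost -- whether
# that holds for a given model depends on whether it maps onto the FPGA
# toolchain at all, which is the other half of this argument.
```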

The raw power and cooling costs contribute to that pricing. At the end of the day, every company will choose to do it faster and cheaper, and nothing about Nvidia hardware fits into either of those categories unless you're talking about milliseconds of timing, which THEN only fits into a mold of OpenAI's definition.

None of this bullshit will be a web-based service in a few years, because it's absolutely unnecessary.

[–] [email protected] 0 points 4 days ago (1 children)

And you are basically a single consumer with a personal car relative to those data centers and cloud computing providers.

YOUR workload works well with an FPGA. Good for you, take advantage of that to the best degree you can.

People/companies who want to run newer models that haven't been optimized for/don't support FPGAs? You get back to the case of "Well... I can run a 25% cheaper node for twice as long?". That isn't to say that people shouldn't be running these numbers (most companies WOULD benefit from the cheaper nodes for 24/7 jobs and the like). But your use case is not everyone's use case.

And it, once again, boils down to: if people are going to require the latest and greatest Nvidia, what incentive is there in spending significant amounts of money getting it to work on a five-year-old AMD? Which is where smaller businesses and researchers looking for a buyout come into play.

At the end of the day, every company will choose to do it faster and cheaper, and nothing about Nvidia hardware fits into either of those categories unless you’re talking about milliseconds of timing, which THEN only fits into a mold of OpenAI’s definition.

Faster is almost always cheaper. There have been decades of research into this and it almost always boils down to it being cheaper to just run at full speed (if you have the ability to) and then turn it off rather than run it longer but at a lower clock speed or with fewer transistors.
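
The usual "race to idle" arithmetic, with illustrative numbers (not measurements), looks something like this:

```python
# "Race to idle" in one toy calculation -- illustrative numbers only.
# Node-level power = platform/static draw (host CPU, memory, fans, idle
# silicon) plus the accelerator's dynamic draw. Lowering the clock shrinks
# only the dynamic part; the static part burns for the whole, now longer, run.
def node_joules(static_w: float, dynamic_w: float, seconds: float) -> float:
    return (static_w + dynamic_w) * seconds

# Hypothetical node at full clock: finish in 100 s, then power down / release it.
full_speed = node_joules(static_w=200, dynamic_w=100, seconds=100)  # 30,000 J

# Same job at half clock: dynamic power drops a lot, runtime doubles,
# and the 200 W of platform overhead is paid for twice as long.
half_speed = node_joules(static_w=200, dynamic_w=30, seconds=200)   # 46,000 J

print(full_speed, half_speed)
# And in a rented data center the arithmetic is blunter still: the hourly
# rate is the same at any clock, so finishing in half the time halves the bill.
```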

And Nvidia wouldn't even let the word "cheaper" see the glory that is Jensen's latest jacket that costs more than my car does. But if you are somehow claiming that "faster" doesn't apply to that company then... you know nothing (... Jon Snow).

unless you’re talking about milliseconds of timing

So... it's not faster unless you are talking about time?

Also, milliseconds really DO matter when you are trying to make something responsive and already dealing with round-trip times with a client. And they add up quite a bit when you are trying to lower your overall footprint so that you only need 4 nodes instead of 5.

They don't ALWAYS add up, depending on your use case. But for the data centers that are selling compute by time? Yeah, time matters.
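
The sizing math behind "4 nodes instead of 5" is pretty simple. A rough sketch with made-up load numbers:

```python
# How per-request milliseconds turn into whole extra nodes -- toy numbers.
import math

def nodes_needed(requests_per_sec: float, ms_per_request: float,
                 concurrency_per_node: int) -> int:
    """Little's-law-style sizing: in-flight requests / per-node capacity."""
    in_flight = requests_per_sec * (ms_per_request / 1000.0)
    return math.ceil(in_flight / concurrency_per_node)

rps = 2000        # hypothetical steady request rate
per_node = 32     # hypothetical concurrent requests one node can sustain

print(nodes_needed(rps, ms_per_request=80, concurrency_per_node=per_node))  # 5
print(nodes_needed(rps, ms_per_request=60, concurrency_per_node=per_node))  # 4
# Shaving ~20 ms off each request is the difference between 5 nodes and 4
# at this load, which is why milliseconds matter when you sell compute by time.
```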

So I will just repeat this: Your use case is not everyone's use case.

[–] [email protected] 0 points 4 days ago (1 children)

I mean... I can shut this down pretty simply. Nvidia makes GPUs that are currently used as a blunt-force tool, which is dumb. And now that the grift has been blown, OpenAI, Anthropic, Meta, and all the others trying to build a business around really simple tooling that is open source are about to be under so much scrutiny over cost that everyone will figure out that there are cheaper ways to do this.

Pro AMD, con Nvidia. It's really simple.

[–] [email protected] 0 points 4 days ago (1 children)

Ah. Apologies for trying to have a technical conversation with you.

[–] [email protected] 1 points 4 days ago (1 children)

I gave you an explanation, and how it is used and perceived. You can ignore that all day long, but the point is still valid 👍

[–] [email protected] 0 points 3 days ago

What "point"?

Your "point" was "Well I don't need it" while ignoring that I was referring to the market as a whole. And then you went on some Team Red rant because apparently AMD is YOUR friend or whatever.