[–] [email protected] 19 points 8 months ago (2 children)

I tried to read the article but I am too stupid. I think Nvidia has a proprietary hardware/software combo that is very fast, but because they "own it" they want money; instead, other companies are using this without paying... Am I close?

[–] [email protected] 45 points 8 months ago (1 children)

You can use graphics cards for more than just graphics, e.g. for AI. Nvidia is a leader in facilitating that.

They offer a software development kit (SDK) for writing programs that use their GPUs to best effect. People have begun making "translation layers" that allow such CUDA programs to run on non-Nvidia hardware. (I have no idea how any of this works.) The license of that SDK now forbids reverse engineering its output to create these compatibility tools.
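
To make "CUDA programs" a bit more concrete, here's a minimal sketch (my own illustration, not from the article) of what one looks like. The `__global__` function runs on the GPU, and everything prefixed `cuda...` is a call into Nvidia's runtime library; a translation layer (ZLUDA is one such project, as far as I know) has to supply its own implementations of those calls and its own way of running the kernel so the same program can work on non-Nvidia hardware.

```cpp
// Minimal CUDA example (illustrative sketch only): add two vectors on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the GPU, one thread per element.
__global__ void add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *out;
    // cudaMallocManaged is part of the CUDA runtime API that a
    // compatibility layer would have to reimplement for other GPUs.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    add<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```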

Unless I am very mistaken, Nvidia can't ban the use of "translation layers" or stop people making them, as such. This clause creates a barrier to creating them, though.

Some programs will probably remain CUDA-specific because of that clause. That means Nvidia is a gatekeeper for those programs and can charge extra for access.

[–] [email protected] 9 points 8 months ago
[–] [email protected] 10 points 8 months ago

It's not about it being fast; it's about it only being available for Nvidia GPUs. As long as software for things like machine learning uses CUDA, you need to buy an Nvidia GPU to use it. A translation layer would let you use the same software on other companies' GPUs, which means people aren't forced to buy Nvidia's GPUs anymore.
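
To illustrate the "same software" part (again just my own sketch, not from the article): applications don't talk to the GPU directly, they call into the CUDA runtime library. On a normal system that library only finds Nvidia GPUs; a translation layer ships a drop-in replacement that answers the same calls using another vendor's GPU stack, so the application itself doesn't need to change.

```cpp
// Illustrative sketch: the kind of runtime call any CUDA application makes.
// Whether this reports an Nvidia GPU or (via a translation layer) another
// vendor's GPU depends entirely on which library answers the call.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("No usable CUDA devices: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA-capable devices found: %d\n", count);
    return 0;
}
```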