Consumer GPUs to run LLMs (lemmy.dbzer0.com)
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]
 

Not sure if this is the right place, if not please let me know.

GPU prices in the US have been a horrific bloodbath with the scalpers recently. So for this discussion, let's keep it to MSRP and to the lucky people who actually managed to pay those insane MSRPs and find the GPU they wanted in stock.

Which GPU are you using to run which LLMs? How is the performance of the LLMs you have selected? On average, what size of LLMs are you able to run smoothly on your GPU (7B, 14B, 20-24B, etc.)?

What GPU do you recommend for a decent amount of VRAM vs. price (MSRP)? If you're using a TOTL RX 7900 XTX/4090/5090 with 24+ GB of VRAM, comment below with some performance estimates too.

My use-case: code assistants for Terraform + general shell and YAML, plain chat, some image generation. And to be able to still pay rent after spending all my savings on a GPU with a pathetic amount of VRAM (LOOKING AT BOTH OF YOU, BUT ESPECIALLY YOU NVIDIA YOU JERK). I would prefer GPUs for under $600 if possible, but I also want to run models like Mistral Small, so I suppose I have no choice but to spend a huge sum of money.
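For ballpark planning, the usual rule of thumb is parameter count × bits per weight for the quantized weights, plus a couple GB for the KV cache and runtime buffers. A minimal Python sketch of that estimate (the 4.5 bits/weight figure approximates Q4_K_M-style quantization, and the 2 GB overhead is a rough assumption, not a measured number):

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# Rule of thumb only: real usage varies with quant format, context length, and runtime.
def est_vram_gb(params_billions: float, bits_per_weight: float = 4.5,
                overhead_gb: float = 2.0) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # billions of params -> GB of weights
    return weights_gb + overhead_gb                     # + KV cache / buffers (assumed)

for name, params in [("7B", 7), ("14B", 14), ("Mistral Small 24B", 24)]:
    print(f"{name}: ~{est_vram_gb(params):.1f} GB at ~Q4")
```

By this estimate a 24B model at Q4 wants roughly 15-16 GB, which is exactly why 16 GB cards are about the floor for Mistral Small.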

Thanks


You can probably tell that I'm not very happy with the current PC consumer market, but I decided to post in case we find any gems in the wild.

[–] [email protected] 2 points 2 days ago (2 children)

> I would prefer to have GPUs for under $600 if possible

Unfortunately not possible for a new Nvidia card (you want CUDA) with 16GB VRAM. You can get them for ~$750 if you're patient. This deal was available for a while earlier today:
https://us-store.msi.com/Graphics-Cards/NVIDIA-GPU/GeForce-RTX-50-Series/GeForce-RTX-5070-Ti-16G-SHADOW-3X-OC
Or you could try to find a 16GB 4070 Ti Super like I got. It runs DeepSeek 14B and stuff like Stable Diffusion no problem.
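If you go the Ollama route mentioned below, querying a local model is only a few lines. A sketch against Ollama's REST API on its default port, assuming you've already pulled a DeepSeek 14B tag (the exact model name is an assumption; swap in whatever `ollama pull` gave you):

```python
# Minimal sketch: query a local Ollama server (default port 11434).
# Assumes a model is already pulled, e.g. `ollama pull deepseek-r1:14b`.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-r1:14b",   # assumption: adjust to the tag you pulled
        "prompt": "Write a Terraform resource block for an S3 bucket.",
        "stream": False,              # one JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```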

[–] [email protected] 1 points 1 day ago (1 children)

I am OK with either Nvidia or AMD, especially if Ollama supports it. That said, I have heard that AMD takes some manual effort whilst Nvidia is easier. Depends on how difficult ROCm is.

[–] [email protected] 1 points 1 day ago

With Ollama, all you have to do is copy an extra folder of ROCm files. Not hard at all.
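One quick sanity check once the ROCm (or CUDA) files are in place: ask Ollama how much of a loaded model actually sits in VRAM. A sketch assuming the /api/ps endpoint (the HTTP counterpart of `ollama ps`) and its size/size_vram fields; run a model first so something is loaded:

```python
# Sketch: check whether a loaded model is running in VRAM (GPU) or system RAM.
# Assumes an Ollama version that exposes /api/ps; start a model before running this.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    for m in json.loads(resp.read()).get("models", []):
        size, vram = m["size"], m.get("size_vram", 0)
        pct = 100 * vram / size if size else 0
        print(f"{m['name']}: {pct:.0f}% in VRAM")
```

Anything well under 100% in VRAM means layers spilled to system RAM and generation will crawl.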