Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
I've not run such things on Apple hardware, so can't speak to the functionality, but you'd definitely be able to do it cheaper with PC hardware.
The problem with this kind of setup is going to be heat. There are definitely cheaper minipcs, but I wouldn't think they have the space for this much memory AND a GPU, so you'd be looking for an AMD APU/NPU combo maybe. You could easily build something about the size of a game console that does this for maybe $1.5k.
You can get a GPU with 192GB VRAM for less than a Mac? Sign me up please.
An AMD APU uses a share of system RAM as its VRAM, so... yeah. Same goes for the NPU.
And what is the memory bandwidth on these APUs?
As fast as it gets to the CPU. That should be pretty obvious.
Which is how fast?
Up to half of system RAM*
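To put rough numbers on the bandwidth question: for single-user inference, token generation is usually memory-bandwidth bound, since every weight has to be read once per generated token. A quick sketch, with illustrative numbers (the ~90 GB/s figure assumes dual-channel DDR5-5600 feeding an APU; ~800 GB/s is the M2 Ultra's advertised unified-memory bandwidth; the 40 GB model size is hypothetical):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/sec for batch-1 inference, assuming
    generation is memory-bandwidth bound: each token must stream
    every model weight through memory once."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 40 GB quantized model:
print(max_tokens_per_sec(90, 40))   # APU on dual-channel DDR5: ~2.25 tok/s
print(max_tokens_per_sec(800, 40))  # M2 Ultra unified memory:  ~20 tok/s
```

So even when an APU can *address* enough RAM, the bandwidth gap is roughly an order of magnitude, which is what this sub-thread is really about.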
For context length, VRAM is important: you can't split a context across memory pools, so you'd be limited to maybe 16 GB. With the M series you have a lot more room, since RAM and VRAM are the same pool, but it's RAM at Apple prices. You can still get a 24 GB+ setup way cheaper than some Nvidia server card, though.
Yeah, the unified memory of the Mac M series is very attractive for running models at full context length, and the memory bandwidth is quite good for token generation compared to the price, power consumption and heat of Nvidia GPUs.
Since I’ll have to put this in my kitchen/living room that’d be a big plus but idk how well prompt processing would work if I send over like 80k tokens.
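The "how much VRAM does a long context eat" question above can be estimated: the KV cache stores a key and a value vector per layer, per KV head, per token. A sketch with an assumed example config (roughly a 70B-class model using grouped-query attention: 80 layers, 8 KV heads of dimension 128, fp16 cache; none of these numbers come from the thread):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size: keys + values (factor of 2) for every layer,
    KV head, and token position, at the given element width."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_len

# Hypothetical 70B-style model at the 80k-token prompt mentioned above:
gb = kv_cache_bytes(80, 8, 128, 80_000) / 1e9
print(f"{gb:.1f} GB")  # ~26 GB of cache on top of the weights
```

That's why an 80k-token prompt is out of reach on a 16 GB card before you even load the weights, while a large unified-memory pool handles it.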
I'd honestly be open to that, but wouldn't an AMD setup take up a lot of space, consume a lot of power, and be loud?
It seems like in terms of price and speed the Macs lose to other options, but if you don't have a lot of space and don't want to listen to a jet engine constantly, I'm wondering what the alternatives are.
~~I just looked, and the Mac Mini maxes out at 24GB anyway. Not sure where you got 192GB from.~~ NVM, you said M2 Ultra
Look, you have two choices. Just pick one. Whichever is more cost effective and works for you is the winner. Talking it down to the Nth degree here isn't going to help you with the actual barriers to entry you've put in place.
Mac Mini M4 Pro can be ordered with up to 64GB shared memory
I understand what you're saying, but I come to this community because I like getting more input, hearing about others' experiences, and potentially learning about things I didn't know. I wouldn't ask here specifically if I didn't want to optimize my setup as much as I can.
Here's a quick idea of what you'd want in a PC build https://newegg.io/2d410e4
Thanks, that’s very helpful! Will look into that type of build
You can have a slightly bigger package in PC form doing 4x the work for half the price. That's the gist.