Programmer Humor

[–] [email protected] 14 points 1 year ago (5 children)

You should be able to fit a model like LLaMA 2 in 64GB of RAM, but output will be pretty slow if it's CPU-only. GPUs are a lot faster, but you'd need at least 48GB of VRAM, for example two 3090s.
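Roughly, the CPU-vs-GPU tradeoff looks like this with llama-cpp-python (a minimal sketch; the GGUF file name and layer counts are placeholders, not something from this thread):

```python
# Minimal sketch, assuming a 4-bit quantized Llama 2 GGUF file on disk
# and the llama-cpp-python bindings. Paths and values are placeholders.
from llama_cpp import Llama

# CPU-only: the quantized weights fit in ~64GB of system RAM, but tokens/sec is low.
llm_cpu = Llama(model_path="./llama-2-70b.Q4_K_M.gguf", n_gpu_layers=0)

# GPU offload: push every layer you can into VRAM (e.g. two 3090s) for a big speedup.
llm_gpu = Llama(model_path="./llama-2-70b.Q4_K_M.gguf", n_gpu_layers=-1)  # -1 = offload all layers

print(llm_gpu("Q: Why do programmers prefer dark mode? A:", max_tokens=48)["choices"][0]["text"])
```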

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (3 children)

Amazon had a promotion over the summer with a cheap 3060, so I grabbed one, and for Stable Diffusion it was more than enough. So I thought, oh, I'll try out Llama as well. After two days of dicking around trying to load a whack of models, I spent a couple of bucks and spooled up a RunPod instance. It was more affordable than I thought, definitely cheaper than buying another video card.
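(The Stable Diffusion side really is light on a 12GB card; a rough sketch with Hugging Face diffusers, where the model id and prompt are just examples:)

```python
# Rough sketch: Stable Diffusion 1.5 runs comfortably on a 12GB card like a 3060.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,   # half precision keeps VRAM use to a few GB
).to("cuda")

image = pipe("a rubber duck debugging code, studio lighting").images[0]
image.save("duck.png")
```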

[–] [email protected] 4 points 1 year ago (2 children)

As far as I know, Stable Diffusion is a far smaller model than Llama (roughly a billion parameters versus 7B-70B). The fact that a model as large as LLaMA can run on consumer hardware at all is a big achievement.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I had a couple of 13B models loaded in, and it was OK. But I really wanted a 30B, so I got a RunPod instance. I'm using it as an API; I went with spot pricing and it's about $0.70/hour.

I didn't know what to do with it at first, but when I found SillyTavern I kinda got hooked.
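(For anyone wondering what "using it as an API" looks like in practice: something like the sketch below, assuming the pod runs text-generation-webui's legacy /api/v1/generate endpoint; the URL is a placeholder for your own pod.)

```python
# Hedged sketch: calling a text-generation API exposed from a rented GPU pod.
# Request/response shape follows text-generation-webui's old /api/v1/generate;
# the host below is a placeholder, not a real pod.
import requests

POD_URL = "https://your-pod-id-5000.proxy.runpod.net/api/v1/generate"

resp = requests.post(POD_URL, json={
    "prompt": "User: Tell me a programming joke.\nAssistant:",
    "max_new_tokens": 80,
    "temperature": 0.7,
})
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```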
