this post was submitted on 26 Feb 2024
67 points (83.8% liked)
Technology
Neural engines are coming to basically all CPUs. It won't be long before you can run your own girlfriend offline on your phone; training the model is the expensive part, after all, and inference is comparatively cheap. I can already run a basic Llama 2B on my iPad, though by sideloading the software rather than just downloading it off the App Store.
I’m fairly sure anyone with a good GPU can also run these, but I haven’t tried.
Yes. The Llama 70B-derived models, as well as Mixtral 8x7B and the new Mistral Medium 70B, are competitive with ChatGPT 3.5. Most of them can also handle a 16,000-token context, similar to ChatGPT.
You only NEED 40GB of free RAM to run them at decent quality, but it's slow.
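As a rough sanity check on that 40GB figure, here's a back-of-envelope sketch. The 4.5 bits-per-weight number is an assumption (a typical 4-bit quant plus overhead), not a measured value:

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate in-RAM size of a quantized model, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# A 70B-parameter model at ~4.5 bits/weight (assumed 4-bit quant with overhead)
size = quantized_size_gb(70e9, 4.5)
print(f"{size:.1f} GB")  # roughly 39 GB of weights, before the KV cache
```

The KV cache for a long context adds a few more gigabytes on top, which is why "about 40GB free" is the practical floor.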
With a 24GB GPU like a 3090 or 4090, you can run them at a reasonable speed with partial GPU offload: about 1-2 words per second. I run 70Bs this way on my computer.
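That 1-2 words per second lines up with a simple bandwidth argument: with partial offload, every generated token still has to stream the CPU-resident layers out of system RAM, so RAM bandwidth divided by the CPU-side weight bytes bounds the speed. A hedged sketch; the bandwidth and weight-split numbers below are illustrative assumptions, not measurements:

```python
def offload_tokens_per_sec(cpu_weight_gb: float, ram_bandwidth_gb_s: float) -> float:
    """Upper bound on generation speed when the bottleneck is streaming
    the CPU-resident weights from system RAM once per token."""
    return ram_bandwidth_gb_s / cpu_weight_gb

# Assume ~39 GB of 4-bit 70B weights, ~24 GB offloaded to the GPU,
# leaving ~15 GB in system RAM, with ~50 GB/s of usable RAM bandwidth.
speed = offload_tokens_per_sec(15, 50)
print(f"~{speed:.1f} tokens/s")  # a few tokens/s, i.e. on the order of 1-2 words/s
```

With two GPUs, all layers fit in VRAM, the RAM-streaming term disappears, and speed jumps to GPU-bandwidth territory, which is why the dual-24GB setup below feels ChatGPT-fast.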
With two 24GB GPUs you can run them very fast, like ChatGPT.
There's of course a whole world in between as well, but those are the rough hardware requirements to match ChatGPT in a self-hosted way. There's also a new thing people are doing where they stack layers from one model onto another: like a merge, except keeping more than 50% of the original layers from each model. "Goliath 120B" and the like, which is made from two different 70Bs. They're even better, but a bit beyond reasonable consumer hardware for now.
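The layer-stacking idea is mostly bookkeeping: interleave overlapping slices of two 70B models' layer stacks into one deeper model. The slice boundaries below are made up for illustration; the real Goliath 120B recipe uses its own ranges:

```python
# Hypothetical frankenmerge recipe: overlapping layer slices taken from two
# 80-layer (70B-class) donor models, stacked into one deeper model.
recipe = [
    ("model_a", range(0, 40)),
    ("model_b", range(20, 60)),
    ("model_a", range(40, 80)),
    ("model_b", range(60, 80)),
]

merged_layers = [(donor, i) for donor, layers in recipe for i in layers]
approx_params = 70e9 * len(merged_layers) / 80

print(len(merged_layers))        # 140 layers vs. 80 in each donor
print(f"{approx_params:.3g}")    # ~1.2e11 parameters, i.e. Goliath-class
```

Note that the overlap means many layers are duplicated rather than averaged, which is why the result is bigger than either donor instead of the same size as a conventional weight merge.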