I don't see how it's any different to using Google as the default search engine in Safari.
Also - phones don't have terabytes of RAM. The idea that a (good) LLM can run on a phone is ridiculous. Yes, you can run small AI models on there - but they're about as intelligent as an ant... ants can do a lot of useful work, but they're not on the same level as Gemini or ChatGPT.
It may be no different than using Google as the search engine in Safari, assuming I get an opt-out. If it's used for Siri interactions, then it gets extremely tricky to verify that your interactions aren't being used to inform ads and/or train an LLM. Much harder to opt out of than a default search engine, perhaps.
LLMs do not need terabytes of RAM. You can run quantized 7-billion-parameter models such as Bloom or Falcon 7B in 16 GB or less (Falcon notably outperforms some models with larger memory footprints, so there's room for optimization here). While not quite as good as OpenAI's offerings, they're still quite good. There are Android phones with 24 GB of RAM, so it's quite possible for Apple to release an iPhone Pro with that much and run it much like you'd run any large language model on an M1 or M2 Mac. You could probably fit an inference-only model in even less.

Performance wouldn't be blazing, but depending on the task it could absolutely be sufficient. With Apple MLX and Ferret coming online, it's entirely possible that you could, basically today, have a reasonable LLM running on an iPhone 15 Pro; people run OpenHermes 7B, for example, which takes roughly 4.4 GB, even without those frameworks. Battery life does take a major hit, but to be honest I'm at a loss for what I need an LLM on my phone for anyway.
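For a rough sense of the numbers above, here is a minimal Python sketch (not from the commenter) of the memory math for a 4-bit quantized 7B model, followed by what loading such a model with Apple's mlx-lm package can look like. The model repo id, prompt, and 25% overhead factor are illustrative assumptions, not anything the comment specified.

```python
# Back-of-the-envelope memory estimate for a quantized 7B model
# (assumption: 4-bit weights plus ~25% runtime overhead for KV cache, etc.).
params = 7_000_000_000
bits_per_weight = 4
weights_gb = params * bits_per_weight / 8 / 1e9       # 3.5 GB of raw weights
print(f"weights alone: {weights_gb:.1f} GB")
print(f"with ~25% overhead: {weights_gb * 1.25:.1f} GB")  # ~4.4 GB, close to the figure cited above

# Sketch of loading and prompting a quantized 7B model with Apple's MLX
# via the mlx-lm package; the repo id below is hypothetical, not prescriptive.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/OpenHermes-2.5-Mistral-7B-4bit")  # hypothetical repo id
print(generate(model, tokenizer, prompt="Summarize this article in two sentences.", max_tokens=100))
```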
Regardless, I want a local LLM or none at all.