[–] [email protected] -1 points 4 months ago (1 children)

I understood it as Apple providing a pre-trained LLM that is then further trained on device with user data, resulting in new weights and a new configuration for each person's personal AppleLLM. That seems more reasonable to me because the data is far less random: it is strictly orchestrated by the constraints Apple defines through the API that apps have to use in order to integrate with the user's personal AppleLLM.

And I still agree: the weights and configuration of the AppleLLM are as critical as 100 GB of screenshots of your Windows machine, but definitely harder to make sense of if extracted.
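
For what it's worth, here is a minimal, purely hypothetical sketch of what that mental model would amount to (a shared base model plus per-user weight updates computed on device); every type and name is invented for illustration, and the reply below argues this is not what actually happens.

```swift
// Purely hypothetical sketch of the mental model described above (NOT Apple's actual
// design): a shared pre-trained base model whose weights are further updated on device,
// so every user ends up with their own personal copy. All names here are invented.

struct ModelWeights {
    var values: [Double]
}

struct PersonalAppleLLM {
    let baseWeights: ModelWeights      // shipped by the vendor, identical for everyone
    var personalWeights: ModelWeights  // would diverge per user under this assumption

    // One imaginary gradient step driven by the user's own data.
    mutating func fineTune(gradient: [Double], learningRate: Double = 1e-4) {
        for i in 0..<min(personalWeights.values.count, gradient.count) {
            personalWeights.values[i] -= learningRate * gradient[i]
        }
    }
}
```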

[–] [email protected] 3 points 4 months ago (1 children)

I just don't think that's plausible at all. I mean, they can "train" further by doing things like storing certain data somewhere, and I imagine there's a fair amount of "dumb" algorithmic and programming work going on under the whole thing...

...but I don't think there's any model training happening on device. Training takes orders of magnitude more processing power than running this stuff. Your phone would be draining constantly for months; that's just not how these things work.
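
To put rough numbers on that: a common rule of thumb is about 6 FLOPs per parameter per training token versus about 2 FLOPs per parameter per generated token at inference. The model size and token counts in this sketch are assumed purely for illustration, not figures Apple has published.

```swift
import Foundation

// Back-of-envelope comparison of training vs. inference cost, using the common
// approximations: training ≈ 6 * params * trainingTokens FLOPs,
// inference ≈ 2 * params FLOPs per generated token.
// All concrete numbers below are assumptions for illustration only.

let params = 3.0e9           // assume a ~3B-parameter on-device model
let trainingTokens = 1.0e9   // assume even a modest 1B-token fine-tuning run
let generatedTokens = 1.0e3  // one short response of ~1,000 tokens

let trainingFLOPs  = 6 * params * trainingTokens   // ≈ 1.8e19 FLOPs
let inferenceFLOPs = 2 * params * generatedTokens  // ≈ 6.0e12 FLOPs

print(String(format: "training is ~%.0e times one response", trainingFLOPs / inferenceFLOPs))
// ~3e+06: millions of times the work of answering a single prompt, before even
// counting the extra memory needed for gradients and optimizer state.
```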

[–] [email protected] 0 points 4 months ago (1 children)

Ahh, lol, sorry for taking so long to understand 😅 I guess many people misunderstood Apple like I did, or maybe not, but at least I think I get it now.

So the only difference between Copilot and Apple is that Apple's AI gets its data through an API where app developers decide what is visible to the AI, versus access to everything you have ever seen on screen except DRM-protected content.

With Apple, an attacker would need to get access to that API to pull all the data; with Copilot, they would need access to the stored screenshots.

So the reason anybody prefers Apple's solution is that their LLM gets clean data, perfectly structured by devs, whereas on Windows the LLM has to work with pretty much chaotic data.
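
As a sketch of that contrast (the protocol and type names below are invented for illustration, not Apple's real API), the dev-curated model amounts to apps handing the assistant typed, structured records instead of the assistant scraping whatever is on screen:

```swift
import Foundation

// Hypothetical illustration of the "devs decide what is seeable" model described above.
// This is NOT Apple's actual API; the protocol and type names are invented to show the
// idea of apps handing the assistant curated, structured records rather than raw
// screen captures.

struct AssistantRecord: Codable {
    let type: String          // e.g. "email", "calendarEvent"
    let title: String
    let timestamp: Date
    let fields: [String: String]
}

protocol AssistantExposable {
    // Each app chooses exactly which records the system assistant may see.
    func recordsForAssistant() -> [AssistantRecord]
}

struct MailApp: AssistantExposable {
    func recordsForAssistant() -> [AssistantRecord] {
        // The app exposes the subject and sender, but deliberately omits message bodies.
        [AssistantRecord(type: "email",
                         title: "Lunch on Friday?",
                         timestamp: Date(),
                         fields: ["from": "alex@example.com", "unread": "true"])]
    }
}
```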

Where exactly is Apple's solution spyware? It's only a process that runs while you interact with it and it processes data. Or is it enough that it's proprietary and has access to this data? Well then, Spotlight is spyware too.

[–] [email protected] 3 points 4 months ago

It's spyware in that both applications build a centralized, searchable repository that knows exactly what you did, when, and how. And no, the supposed ability to limit specific applications is not a difference: MS also said you can block specific apps, and devs can block specific screens within an app. They're both the same on that front, presumably.

What I'm saying is that the reason people are reacting differently comes down to branding and UX.