[–] [email protected] 2 points 5 months ago

Lol thank you autocorrect. Ollama.

[–] [email protected] 0 points 5 months ago (2 children)

Ok, I just don’t see the relevance to this post then. Sure, you’re fine to rant about Apple in any thread you want to, it’s just not particularly relevant to AI, which was the technology in question here.

I hear good things about GrapheneOS but just stay away from it because of all the strangeness. I love Olan's.

[–] [email protected] -3 points 5 months ago (4 children)
  1. Security / privacy on device: Don't use devices / OSes you don't trust. I don't see what difference on-device AI makes here at all. If you don't trust your device / OS, then no functionality or data is safe.
  2. Security / privacy in the cloud: The take here is that Apple's proposed implementation is better than 99% of the cloud services out there. AI or not isn't really part of it. If you already don't trust Apple, then this is moot. Don't use cloud services from providers you don't trust.

Security and privacy in 2024 is unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.

[–] [email protected] -4 points 5 months ago (6 children)

They have designed a very extensive solution for Private Cloud Computing: https://security.apple.com/blog/private-cloud-compute/

All I have seen from security researchers reviewing this is that it will probably be one of the best solutions of its kind - they do almost everything correctly, and extensively so.

The only critique I've seen is that they could have provided even more source code and easier ways for third parties to verify their claims - but it's understandable that they didn't.

[–] [email protected] 2 points 5 months ago (1 children)

To be honest, I'm not sure what we're arguing - we both seem to have a sound understanding of what an LLM is and what it is not.

I'm not trying to defend or market LLMs, I'm just describing the usability of the current capabilities of typical LLMs.

[–] [email protected] 0 points 5 months ago

Well they just name-grabbed all of AI with their stupid Apple Intelligence branding.

2
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes.

Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.

Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via "function calling"; kernel process LLM that can schedule and coordinate work across them given user queries.
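The "function calling" idea in Step 2 can be sketched roughly like this: the LLM emits a structured call naming a tool and its arguments, and an OS-level coordinator dispatches it. Everything here (the `Tool` class, `dispatch`, the sample tools) is a hypothetical illustration, not any real Apple API.

```python
# Minimal sketch of LLM "function calling": the model picks a tool and
# arguments; a coordinator executes the call. All names are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[..., str]

# A tiny registry of callable capabilities the OS exposes to the model.
TOOLS = {
    "set_alarm": Tool("set_alarm", "Schedule an alarm",
                      lambda time: f"alarm set for {time}"),
    "send_message": Tool("send_message", "Send a text",
                         lambda to, body: f"sent {body!r} to {to}"),
}

def dispatch(call: dict) -> str:
    """Execute a model-emitted call like {"name": ..., "arguments": {...}}."""
    tool = TOOLS[call["name"]]
    return tool.fn(**call["arguments"])

# In practice the LLM emits this JSON; here it's hard-coded for illustration.
print(dispatch({"name": "set_alarm", "arguments": {"time": "7:00"}}))
```

In a real system the model would choose from tool descriptions and the coordinator would validate arguments before executing anything.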

Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, "always on", and contextual way. No going around copy pasting information, prompt engineering, or etc. Adapt the UI accordingly.

Step 4 Initiative. Don't perform a task given a prompt, anticipate the prompt, suggest, initiate.

Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon very helpful and well-suited), but allow optional dispatch of work to cloud.
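The delegation hierarchy in Step 5 amounts to a routing decision: handle the request on device when the local model can, and fall back to the cloud otherwise. This is a hedged sketch with an invented complexity score and threshold, not how Apple actually decides.

```python
# Toy delegation hierarchy: prefer on-device inference, dispatch to the
# cloud only when the task exceeds the local model's budget.
# The complexity metric and threshold are invented for illustration.
def route(task_complexity: float, device_budget: float = 0.5) -> str:
    """Return which tier should handle the request."""
    if task_complexity <= device_budget:
        return "on-device"
    return "cloud"

print(route(0.2))  # a simple query stays on device
print(route(0.9))  # a heavy query is dispatched to the cloud
```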

Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).
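Step 6's modularity is essentially a common interface that different LLM backends plug into, so the OS can swap or add providers (a local model, ChatGPT, others). The class names below are hypothetical, purely to illustrate the shape of such a plugin layer.

```python
# Sketch of a pluggable LLM provider layer: one abstract interface,
# multiple interchangeable backends. All names are invented.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OnDeviceModel(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class ChatGPTBridge(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[chatgpt] {prompt}"

# The OS keeps a registry and routes requests to whichever backend fits.
registry: dict[str, LLMProvider] = {
    "local": OnDeviceModel(),
    "chatgpt": ChatGPTBridge(),
}

print(registry["local"].complete("summarize my notes"))
```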

Step 7 Privacy. <3

We're quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.

https://x.com/karpathy/status/1800242310116262150?s=46

 

Hi everyone! We're incredibly excited to announce that we're launching a beta of Finamp's redesign today. This is a major update to the app, and we're looking for feedback from anyone willing to try it out before we roll it out to everyone.

The beta is a work in progress: there are several new features already, and we will be adding more over time.

Looks very nice!
