Personally, I decided to reduce my AI usage, as it was starting to hurt my real intelligence :)
I don't use it much, but I have a self-hosted Llama instance that works alright.
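For anyone wondering what "self-hosted" looks like in practice: most of these instances expose a local HTTP API you can script against. Here's a minimal sketch against Ollama's API on its default port (11434); the model name `llama3` is just a placeholder for whatever you've actually pulled:

```ts
// Minimal sketch: query a self-hosted model via Ollama's local HTTP API.
// Assumes an Ollama server running on its default port (11434);
// "llama3" is a placeholder for whatever model you have pulled.
async function ask(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // Ollama returns the full completion in `response`
}

ask("Why does digital privacy matter?").then(console.log);
```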
Probably a bit too technical, but I wrote (and keep updating) https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence which does cover some of those mentioned here by others, e.g. GPT4All, LM Studio, tlm, LocalAI, Ollama, etc.
If I can somehow clarify this to help you, please do ask.
LM Studio is what I use; it's extremely simple and runs well.
~~I'm trying to switch to this from Ollama after seeing the benchmarks; it's so much faster. But it has given me nothing but issues with CUDA incompatibility, whereas Ollama runs smooth as butter. Hopefully I get some feedback on my repo discussion. Same Docker setup as my working Ollama install, but Ollama has much more detailed docs.~~
Ignore that, thought you said LMDeploy.
Indeed, very convenient. Just noticed they now also provide a JS/TS way to access models (https://github.com/lmstudio-ai/lmstudio.js), so I might try that soon, especially if they conveniently support RAG.
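Haven't dug into it yet, but going by the repo's README the SDK usage looks roughly like the sketch below; treat the exact class and method names as assumptions and check the current docs before relying on them:

```ts
// Rough sketch based on the lmstudio.js README; class and method names
// here are assumptions -- verify against the current documentation.
import { LMStudioClient } from "@lmstudio/sdk";

async function main() {
  const client = new LMStudioClient(); // connects to the local LM Studio server

  // Get a handle to a model by its identifier (placeholder name).
  const model = await client.llm.model("llama-3.2-1b-instruct");

  const result = await model.respond("Explain RAG in one sentence.");
  console.log(result.content);
}

main().catch(console.error);
```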
get a llamafile.
jan.ai
I would appreciate suggestions for offline AI chatbots that can be easily installed on various operating systems without requiring technical expertise.
The path you're on will likely require you to upskill at some point.
Good luck!
But there are a decent number of options for a non-technical person to run a local LLM; unless you've got a good gaming PC (i.e. a high-end graphics card with lots of RAM), it isn't as usable.
A CPU/RAM setup is too slow for chatbot functionality... Maybe Apple Silicon could work, but I'm not sure; it does have better memory bandwidth than traditional PC architectures.
I can confirm that Apple Silicon works for running the largest Llama models, with 64 GB of RAM. Dunno if it would work with less, as I haven't tried. It's the M1 Max chip, too. Dunno how it'd do on the vanilla chips.
Look up LM Studio. It's free software that lets you easily install and use local LLMs. Note that you need a good graphics card and a lot of RAM for it to be useful.
I've been very happy with GPT4All. It's open source and privacy-focused, since it runs on your own hardware. It provides a clean GUI for downloading various LLMs to chat with.
Take a look at [email protected] ... they have lots of info in the sidebar and it's moderately active so people will probably answer questions.