Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; posts made without approval are subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies' actions affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
Today's AI is way worse than when ChatGPT was first released... it is way too censored.
But either way, I never considered LLMs to be A.I., even if they have the potential to be great.
You might need to reconsider that position. There are plenty of uncensored models available that you can run on your local machine, and they match or beat GPT-3 and beat the everliving shit out of GPT-2 and other older models. Just running them locally would have been unthinkable when GPT-3 was released, let alone on a CPU at reasonable speed. The fact that open-source models do so well on such meager resources is pretty astounding.
I agree that it's not AGI though. There might be some "sparks" of AGI in there (as some researchers probably put it), but I don't think there's much evidence of self-awareness yet.
Which one is your favorite? I might buy some hardware to be able to run them soon (I only have a laptop right now that is not the greatest, but I am willing to upgrade).
You might not even need to upgrade. I personally use GPT4All and like it for the simplicity. What is your laptop spec like? There are models that can run on a Raspberry Pi (slowly, of course 😅), so you should be able to find something that'll work with what you've got.
I hate to link the orange site, but this tutorial is comprehensive and educational: https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/
The author recommends KoboldCPP for older machines: https://github.com/LostRuins/koboldcpp/wiki#quick-start
I haven't used that myself because I can run OpenOrca and Mistral 7B models pretty comfortably on my GPU, but it seems like a fine place to start! Nothing stopping you from downloading other models as well, to compare performance. TheBloke on Huggingface is a great resource for finding new models. The Reddit guide will help you figure out which models are most likely to work on your hardware, but if you're not sure of something just ask 😊 Can't guarantee a quick response though, took me five years to respond to a YouTube comment once...
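For what it's worth, the KoboldCPP quick start boils down to roughly this on Linux (a sketch from memory; the model filename is just an example, and the exact build steps and flags are in the wiki linked above):

```shell
# Rough sketch of getting KoboldCPP running on a Linux box; check the
# wiki for the current build instructions and flags for your hardware.
git clone https://github.com/LostRuins/koboldcpp
cd koboldcpp
make                     # CPU-only build; no GPU required

# Download a quantized GGUF model (TheBloke's Huggingface uploads are a
# good source), then point koboldcpp at it:
python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf
```

Once it's up, it serves a local web UI you can chat through in your browser.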
thanks a lot man, I will look into it, but I only have an on-board GPU... not a big deal if I need to upgrade (I spend more on hookers and blow weekly)
It's ok if you don't have a discrete GPU; as long as you have at least 4GB of RAM you should be able to run some models.
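The back-of-envelope math for that: a 4-bit quantized model needs about half a byte per parameter, plus some runtime overhead. This is a rule of thumb, not an exact figure (overhead varies by runtime and context size):

```python
# Rough RAM estimate for a quantized model: params * bits / 8 bytes.
# This ignores runtime overhead (KV cache, buffers), so treat it as a
# lower bound when deciding what fits on your machine.
def approx_ram_gb(params_billions: float, bits: int = 4) -> float:
    bytes_per_param = bits / 8
    # 1 billion params * bytes-per-param gives gigabytes directly
    return params_billions * bytes_per_param

print(approx_ram_gb(7))   # 7B model at 4-bit: 3.5 GB
print(approx_ram_gb(3))   # 3B model at 4-bit: 1.5 GB
```

Which is why a 3B or 7B model at 4-bit squeezes into a 4GB machine, while an 8-bit or unquantized version of the same model won't.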
I can't comment on your other activities, but I guess you could maybe find some efficiencies if you buy the blow in bulk to get wholesale discounts and then pay the hookers in blow. Let's spreadsheet that later.