With respect to 2, it would stop others scraping the content to train more open models on. This would essentially give Reddit exclusive access to the training data.
Falcon
Bind tun0 in the settings, but what I do is run BitTorrent in a Docker container with WireGuard so the VPN doesn’t affect my day-to-day browsing.
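For reference, a minimal sketch of that kind of setup, assuming the linuxserver images and using `network_mode` to force the torrent client’s traffic through the VPN container. Image names and mount paths are assumptions, so check them against your own provider’s docs:

```yaml
# docker-compose.yml — a sketch, not a drop-in config
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
    volumes:
      - ./wg0.conf:/config/wg0.conf   # your VPN provider's WireGuard config
    restart: unless-stopped

  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:wireguard"  # all traffic routes via the VPN container
    depends_on:
      - wireguard
    volumes:
      - ./downloads:/downloads
    restart: unless-stopped
```

If the WireGuard container goes down, the torrent container loses its network entirely, which is the point: no leaks onto your normal connection.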
Of course poor regulation can be bad; it was a silly, loaded question. Look at, for example, the 2002 tort reforms and the damage they did to public safety.
Imagine how much damage could be done to individual privacy and freedom by an ill-informed legislature if it elects to regulate gradient descent.
No, they said BS is published about AI.
You want h2oGPT, or just use LangChain from the CLI.
I just use an old laptop.
If and only if the trained model is accessible without a licence.
E.g. I don’t want Amazon rolling out an LLM for $100 a month based on freely accessible tutorials written by small developers.
But yeah duck copyright
These comments often indicate a lack of understanding about AI.
ML algorithms have been in use for nearly 50 years. They’ve certainly become much more common since about 2012, particularly with the development of CUDA. It’s not just some new trend or buzzword.
Rather, what we’re starting to see are the fruits of our labour. There are so many really hard problems that just cannot be solved with deductive reasoning.
Mistral-7B is a good compromise between speed and intelligence. Grab it in a GPTQ 4-bit quant.
If you can find a copy, yeah. GNU sed isn’t written for Windows, but I’m sure you can find another version of sed that targets Windows.
Oh no, you need a 3060 at least :(
Requires CUDA. They’re essentially large mathematical equations that predict the probability of the next word.
The equations are derived by trying different combinations of values until one works well (this is the “learning” in machine learning). The trick is changing the numbers in a way that gets better each time (see e.g. gradient descent).
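That loop can be sketched in a few lines. This is a toy, one-parameter example with a made-up quadratic loss and a hand-picked learning rate, not how real LLM training is implemented, but the update rule is the same idea:

```python
# Toy gradient descent: minimise loss(w) = (w - 3)^2.
# An LLM has billions of parameters instead of one, but each
# is nudged the same way: downhill along the gradient.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss: d/dw (w - 3)^2 = 2 * (w - 3)
    return 2.0 * (w - 3.0)

def train(w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        w -= lr * grad(w)  # step in the direction that lowers the loss
    return w

print(round(train(), 3))  # converges towards 3.0, the minimum
```

Pick the learning rate too large and the steps overshoot and diverge; too small and it crawls. That trade-off is the same one tuned in real training runs.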
This is the only path forward.