Elon Musk Dragged After His Own Chatbot Admits He's A 'Significant Spreader' Of Misinformation
(www.comicsands.com)
Well then they will have to train their AI with incorrect information... politically incorrect, scientifically incorrect, etc.... which renders the outputs useless.
Scientifically accurate and as close to the truth as possible never equals conservative talking points.... because they are scientifically wrong.
It would be the same with liberal talking points and in general any human talking point.
Humans try to reshape reality the way they want it, so the things they say are always somewhat incorrect. When they want to increase something, they usually make it appear smaller than it is in real life. Appearances are also not universal.
Humans also simplify things acceptably for one subject, but not for another.
Humans also don't know what "correct information" is.
A lot of philosophy connected to language starts mattering, when your main approach to "AI" is text extrapolation.
Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who in varying degrees and colours are mostly united under the conservative "anti-science" banner.
And you want an AI that doesn't offend these folks / is trained on their output. What use could that be?
Ahem, well, there are obvious caveats: 2x2 modulo 3 is 1; some vaccines might be bad, which is why pharma industry regulations exist; and 'pi' could also be an unknown p multiplied by an unknown i, or just some number encoded as the string 'pi'.
These all matter for language models, do they not?
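The 2x2 and 'pi' points above are easy to check; a minimal Python sketch (names p and i here are just illustrative variables, not anything from the thread):

```python
# In ordinary integer arithmetic, 2 x 2 is of course 4...
assert 2 * 2 == 4

# ...but in arithmetic modulo 3, the same product reduces to 1,
# so the string "2x2=1" is true or false depending on context.
assert (2 * 2) % 3 == 1

# Likewise, "pi" only names the circle constant if that's what it
# is bound to; otherwise it's just a product of two variables.
p, i = 3.0, 2.0
pi = p * i  # here "pi" means 6.0, not 3.14159...
print(pi)
```

A text-extrapolating model sees only the tokens, not which of these contexts is in force.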
It is already trained on their output, among other things.
But I personally don't think this leads anywhere.
Somebody someplace decided it was a brilliant idea to extrapolate text: humans communicate their thoughts via text, so it's something that can be used for machines.
Humans don't just communicate.