dxdydz

joined 2 years ago
[–] [email protected] 38 points 4 days ago (8 children)

LLMs are trained to do one thing: produce statistically likely sequences of tokens given a certain context. This won't do much to poison the well, because we already have models that could clean this sort of thing up.

Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.

Consider the kinds of mistakes AI makes: it hallucinates plausible-sounding nonsense. That's the kind of mistake you can lure an LLM into making more often.
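To make the "statistically likely sequences of tokens" point concrete, here's a toy sketch of next-token sampling, the core loop an LLM runs. Everything here (the candidate tokens, the logit values) is made up for illustration; a real model would produce logits over a vocabulary of tens of thousands of tokens.

```python
import math
import random

def softmax(logits):
    # Convert raw scores (logits) into a probability distribution.
    # Subtracting the max is a standard numerical-stability trick.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(candidates, logits, rng):
    # Sample one token in proportion to its probability,
    # by walking the cumulative distribution.
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for tok, p in zip(candidates, probs):
        acc += p
        if r <= acc:
            return tok
    return candidates[-1]  # guard against floating-point rounding

# Hypothetical context: "The quick brown ..." with made-up scores.
rng = random.Random(0)
candidates = ["fox", "dog", "the"]
logits = [2.0, 1.0, 0.1]
print(sample_next_token(candidates, logits, rng))
```

The point of the upthread comment in this framing: poisoned training text just shifts these probabilities slightly, and a cleanup pass can shift them back, whereas a plausible false fact repeated across many sources raises the probability of the wrong continuation everywhere at once.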

[–] [email protected] 13 points 1 week ago (1 children)

lol, spoken like somebody who has never actually tried to speak English in Germany. As a shameful monoglot, I have had occasion to test the limits of English understanding in a variety of countries, and Germany has pretty low rates of English speakers in my personal experience. The Netherlands on the other hand…