Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
(www.theverge.com)
I don't know how you'd solve the problem of making a generative AI produce a slate of images that both a) inclusively depicts people with diverse characteristics and b) fits the context of which characteristics could feasibly be generated.
But that's because the AI doesn't know how to solve the problem.
Because the AI doesn't know anything.
Real intelligence simply doesn't work like this, and every time you point it out, someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.
There's a certain point where this just feels like the Chinese room. And, yeah, it's hard to argue that a room can speak Chinese, or that the weird prediction rules that an LLM is built on can constitute intelligence, but that doesn't mean it can't be. Essentially boiled down, every brain we know of is just following weird rules that happen to produce intelligent results.
Obviously we're nowhere near that with models like this now, and it isn't something we have the ability to work directly toward with these tools, but I would still contend that intelligence is emergent, and arguing whether something "knows" the answer to a question is infinitely less valuable than asking whether it can produce the right answer when asked.
I really don't think LLMs can be considered intelligent any more than a book can be intelligent. LLMs are basically search engines at the word level of granularity; they have no world model or world simulation, just a shit ton of relations used to pick highly relevant words based on the probabilities of the text they were trained on. That doesn't mean LLMs can't produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text that has intelligence encoded into it, having been written by intelligent humans. If you break a book down into its sentences, those sentences will have intelligent content, and if you start measuring the relationships between the order of words in that book, you can produce new sentences that still have intelligent content. That doesn't make the book intelligent.
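To make that last point concrete, here's a toy sketch (Python, purely illustrative, nothing like how a real LLM is actually built): a word-level Markov chain over a tiny made-up "book" that emits new, locally plausible sentences using nothing but word-order statistics and no world model at all.

```python
# Toy illustration: count which words follow which in a source text
# (the "relations" between word order), then generate new sentences
# by repeatedly sampling a plausible next word. No understanding involved.
import random
from collections import defaultdict

book = (
    "the senator gave a speech . the senator answered questions . "
    "the reporter asked questions . the reporter wrote a story ."
)

# Build the word-order statistics: for each word, the words that followed it.
follows = defaultdict(list)
words = book.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=12):
    """Emit a new word sequence by sampling each next word from what
    followed the previous word in the source text."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate())  # e.g. "the reporter asked questions . the senator gave a speech ."
```

The output can read as sensible text even though the chain "knows" nothing; an LLM's statistics are vastly richer and operate over learned representations rather than raw word counts, but the toy shows how word-order statistics alone can yield readable sentences.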
But you don't really "know" anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited in just the right way when I ask it a question, and it wasn't set up that way by any "intelligence"; the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you've received over your life).
Thinking about how a thing works is, imo, the wrong way to decide whether something is "intelligent" or "knows stuff". The mechanism is neat to learn about, but it's not what ultimately decides whether you know something. It's much more useful to ask whether it can produce answers, especially to novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.
And again, I'm not trying to argue that an LLM is intelligent, just that whether it is or not won't be decided by talking about the mechanism of its "thinking".
We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.
I agree, and I formalize it like this:
Those who claim LLMs and AGI are distinct categories should present a text-processing task, i.e. text input and text output, that an AGI can do but an LLM cannot.
So far I have not seen any reason not to consider these LLMs to be generally intelligent.
Literally anything based on opinion or creating new info. An AI cannot produce a new argument. A human can.
It took me 2 seconds to think of something LLMs can't do that AGI could.