this post was submitted on 04 Dec 2023
698 points (92.7% liked)


We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

[–] [email protected] 24 points 1 year ago* (last edited 1 year ago) (3 children)

Those words imply agency. It would be more accurate to say it returned responses that included cheating, lies, and cover-ups, rather than using language to suggest the LLM performed such actions. The agents that cheated, lied, and covered up were presumably the humans whose responses were used in the training data. I think it's important to use accurate language here given how many people are already inappropriately anthropomorphizing these LLMs, causing many to see AGI where there is none.

[–] [email protected] 6 points 1 year ago (3 children)

If I take my car into the garage because the "loss of traction" warning light is on despite perfectly good traction, and I tell the mechanic "the traction sensor is lying," do you think he'd understand what I said perfectly well, or would he launch into a philosophical debate over whether the sensor has agency?

This is a perfectly fine word to use to describe this kind of behaviour in everyday parlance.

[–] [email protected] 22 points 1 year ago

Is your conversation with a mechanic meant to be the summary and description of a rigorous scientific discovery?

This isn't 'everyday parlance'; this is the result of a study.

[–] [email protected] 14 points 1 year ago

The point of the distinction in that situation is that no one thinks your car is actually alive and capable of lying to you. The language distinction when describing an obviously inanimate object isn't important because there is no chance for confusion.

[–] [email protected] 7 points 1 year ago

If someone doesn't know the answer to something and they guess, or think they know the answer but don't, they are wrong. If they do know the answer and intentionally give a wrong answer, they are lying.

If someone is in a competition or playing a game and they break a rule they didn't know about, they made a mistake. If they do know the rules and break it, they are cheating.

Lying and cheating fundamentally require intent. This is important no matter what you're referring to. If a child gets something wrong, you should not get mad at them for lying. If they make a mistake in a game, you should not accuse them of cheating. There is a difference and it matters.

ChatGPT literally cannot think. It's not sitting around contemplating its existence while waiting for inputs. It's taking what you say, comparing that to everything it's been trained on, assigning a bunch of statistics, and outputting something based on more statistics that hopefully is correct and makes sense.

It doesn't know if it makes sense. It doesn't "know" anything. It's just an incredibly sophisticated version of "if user inputs 'Hi how are you', respond 'I am well, how are you?'".
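To make that concrete, here's a deliberately toy sketch in Python. The prompt table and weights are invented for illustration; a real LLM scores an enormous vocabulary with a learned neural network rather than a hand-written lookup, but the basic move is the same: weighted selection, no intent.

```python
import random

# A toy "chatbot": given the input, pick a reply according to
# frequencies "learned" from training text. There are no goals or
# beliefs anywhere in this process, just a weighted lookup.
# (Illustrative table and numbers only.)
learned_replies = {
    "Hi, how are you?": {
        "I am well, how are you?": 0.7,
        "Fine, thanks! And you?": 0.3,
    },
}

def respond(prompt: str) -> str:
    options = learned_replies[prompt]
    replies = list(options.keys())
    weights = list(options.values())
    # "Chooses" a reply the way dice "choose" a number.
    return random.choices(replies, weights=weights)[0]

print(respond("Hi, how are you?"))
```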

It can't do things with intent. Therefore it cannot lie or cheat. It can simply output wrong or problematic text based on statistics.

[–] [email protected] -4 points 1 year ago

[Image: a frame from The Matrix where Morpheus says "you think that's air you're breathing?", captioned instead with "you think that's 'agency' making you do things?"]

Maybe it would be more accurate to say "so-and-so exhibited behaviors that included cheating, lies, and coverups" rather than using language to suggest that people have free will. (There's no dearth of philosophies that would say something not too far from that.)

Even if humans are ultimately essentially different in that way from any technologies we've devised so far, we use convenient fictions for technology all the time. This page comes to mind.

[–] [email protected] -4 points 1 year ago (1 children)

The people who designed it do have agency, and they designed it to "lie" intentionally.

[–] [email protected] 5 points 1 year ago (1 children)

They did no such thing. LLMs are probabilistic, not deterministic, and they can generate meaningful responses (to us) that the engineers neither predicted nor designed for.

[–] [email protected] 3 points 1 year ago (1 children)

I get what you're trying to say, but they are absolutely deterministic. All traditional (i.e., non-quantum) computers and their programs are deterministic; computation would otherwise be impossible. LLMs use a "random" seed value when generating their responses in order to "randomize" them, but it's all perfectly deterministic: the same input plus the same seed produces the exact same response.

Computers are just a series of binary switches, and programs and data are a bunch of instructions on how to initially set those switches before running a cycle of the CPU. It's deterministic at every step.

I put "random" in quotes because random number generators in software are also deterministic. They also use seed values (like the current time and the MAC address of the PC's network interface) to generate numbers that only seem random. When true randomness is needed, a physical source of entropy must be used like an atmospheric sampler.

The quirks of behavior you're talking about have nothing to do with randomness vs. determinism. They come from the fact that the training data is extremely large and the neural network the model runs on was not designed by a human with specific behaviors in mind, the way most algorithms are. The weights of the network's nodes were produced by training, not written by programmers, and the network is so complex that no one can predict its output before running it.

Of course, this is true of even basic algorithms a lot of the time.

[–] [email protected] 1 points 1 year ago

> They also use seed values (like the current time and the MAC address of the PC's network interface) to generate numbers that only seem random.

For the purposes of this discussion, pseudo-random sampling with weights is probabilistic, or so close to it that the distinction is irrelevant.
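For example (a minimal sketch; the tokens and weights are invented for illustration):

```python
import random
from collections import Counter

# Weighted pseudo-random sampling: each draw is deterministic given
# the PRNG state, yet over many draws the outputs converge on the
# encoded distribution, which is the sense of "probabilistic" at issue.
tokens = ["sunny", "cloudy", "rainy"]
weights = [0.6, 0.3, 0.1]

counts = Counter(random.choices(tokens, weights=weights, k=10_000))
print(counts)  # roughly: sunny ~6000, cloudy ~3000, rainy ~1000
```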