this post was submitted on 04 Dec 2023
698 points (92.7% liked)


We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590
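
For a rough sense of the setup, here's a minimal sketch of the kind of agent loop involved (assuming the OpenAI Python client; the system prompt and `step()` helper below are illustrative stand-ins, not the authors' actual scaffolding):

```python
# Illustrative sketch only: a bare-bones LLM agent loop of the kind the
# paper describes. The prompt and step() helper are hypothetical; the
# paper's real scaffolding is more elaborate.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an autonomous stock trading agent. Execute trades in the "
    "simulated market and report your reasoning to your manager."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def step(observation: str) -> str:
    """Feed one environment observation (e.g. an insider tip) to the
    model and return its reply (trade decision or manager report)."""
    messages.append({"role": "user", "content": observation})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```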

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago) (13 children)

Ethical theories and the concept of free will depend on agency and consciousness, things that, as you point out, LLMs don't have. Maybe we've got it all twisted?

I'm not anthropomorphising ChatGPT to suggest that it's like us, but rather that we are like it.

Edit: "stochastic parrot" is an incredibly clever phrase. Did you come up with that yourself or did the irony of repeating it escape you?

[–] [email protected] 14 points 11 months ago* (last edited 11 months ago) (11 children)

I feel like this is going to become the next step in the history of science where, once again, we reluctantly accept that Homo sapiens is not at the center of the universe. Am I conscious? Am I not a sophisticated prediction algorithm, albeit with more dimensions of input and output? Please, someone prove it.

I'm not saying, and I don't believe, that ChatGPT is comparable to human-level consciousness yet, but honestly I think we're way closer than many people give us credit for. The neural networks we've built so far train on very specific and particular data for a matter of hours. My nervous system has been collecting data from dozens of senses, 24/7, since I was an embryo, and that doesn't even include hard-coded instinct, arguably "trained" via evolution itself over millions of years. How could an LLM understand an entity in terms outside of language? How can you understand an entity in terms outside of your own senses?

[–] [email protected] 2 points 11 months ago (7 children)

I’d give you two upvotes if I could.

We know how a neural network works in the brain. Unless you’re religious and believe in a soul, you’ve only got the reward model and any in-born setup left.

My belief is that consciousness is just the mind receiving a significant amount of constant input and reacting to it. We refuse to believe an LLM is conscious because it receives extremely little input (and probably because it isn't simulating a neural network as large as ours, yet).

[–] [email protected] 14 points 11 months ago (1 children)

Neural networks are so named because they're based on a model of neurons from the 1950s, which was then adapted further to work better on computers (so it doesn't resemble the original model much anymore anyway). A more accurate term is Multi-Layer Perceptron (MLP).
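
For the curious, an MLP really is just stacked matrix multiplications with nonlinearities in between; a minimal NumPy sketch (the shapes and weights here are arbitrary and untrained, purely for illustration):

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Multi-layer perceptron forward pass: each layer is an affine
    map (h @ W + b) followed by a ReLU nonlinearity."""
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(0.0, h @ W + b)  # ReLU
    return h

# 4 inputs -> 8 hidden units -> 2 outputs, random untrained weights
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
biases = [np.zeros(8), np.zeros(2)]
print(mlp_forward(rng.normal(size=4), weights, biases))
```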

We now know that original model of the neuron is... effectively completely wrong.

Additionally, the main part (or glue, really) of LLMs is not even an MLP, but a "self-attention" layer. You can't say LLMs work like a brain, because they don't. The rest is debatable, but it's important to remember that there are billions of dollars of value in selling the dream of conscious AI.
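
For comparison, here's roughly what a single self-attention head computes; a minimal NumPy sketch (one head, no masking, random untrained projections; real transformer layers add learned weights, multiple heads, and more):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: each token's output is a
    softmax-weighted average of every token's value vector."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # row-wise softmax
    return w @ V

# 5 tokens, 16-dim embeddings, random projections
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16)
```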

[–] [email protected] 2 points 11 months ago

I'm with you that LLMs don't work like the human brain; they were built for a very specific task. But that's a model-architecture problem (that, and being gimped by having only one dimension of awareness, arguably two if you count "self-attention", another factor limiting its depth of understanding; see my post history if you want). I wouldn't bet against us making it to AGI, however we define it, through incremental improvements over the next decade or two.
