[email protected] 2 points 11 months ago

I mean, yeah, and if I were trained on more articles and papers saying the earth was flat, then I might say the same.

I'm not disputing what you've written, because it's empirically true. But really, I don't think brains are all that much more complex when it comes down to decision-making and output. We receive input, evaluate it against our knowledge, and spit out a probable response. Our tokens aren't words, of course, but more abstract concepts that could translate into words. (This has advantages: we can output in various ways, some non-verbal, like movement or music, or combine movement and speech, e.g. writing.)
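To make that analogy concrete, here's a toy sketch of that loop in Python. Nothing here is a real model or a real library's API: `vocab`, `next_token_probs`, and `generate` are hypothetical stand-ins, and the uniform probabilities just mark where a trained model's learned weights would go.

```python
import random

# Hypothetical toy vocabulary, standing in for an LLM's token set.
vocab = ["the", "earth", "is", "flat", "round", "."]

def next_token_probs(context):
    # Stand-in for a trained model: score each candidate token given the
    # context. A real model derives these scores from learned weights;
    # here every token is simply equally likely.
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(context, n_tokens=5):
    for _ in range(n_tokens):
        probs = next_token_probs(context)
        tokens, weights = zip(*probs.items())
        # Sample the next token in proportion to its probability,
        # i.e. "spit out a probable response".
        context.append(random.choices(tokens, weights=weights)[0])
    return context

print(" ".join(generate(["the", "earth", "is"])))
```

The point of the sketch is only the shape of the loop: condition on what came before, produce a distribution, sample, repeat. Whether the "tokens" are words or abstract concepts doesn't change that shape.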

Our two major advantages: 1) we're essentially ongoing, evolving models, constantly retrained on new input and on our evaluation of that input, whereas LLMs can't learn past a single conversation, and that conversational knowledge is never integrated into the base model; and 2) ongoing sensory input means we're always taking in information, so we can think, respond, and reevaluate continuously.
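As an illustration of that first gap, here's a toy contrast between frozen and continual "learning", under the loud assumption that learning can be caricatured as updating a key-value store. Real LLM training is gradient descent on weights, and `FrozenModel` / `ContinualModel` are made-up names for this sketch only.

```python
class FrozenModel:
    """Like a deployed LLM: new facts live only in the conversation."""
    def __init__(self, base_knowledge):
        self.base = dict(base_knowledge)  # fixed at "training" time

    def converse(self, facts):
        # Facts are combined with base knowledge in-context only;
        # self.base is never modified, so nothing persists afterward.
        return {**self.base, **facts}

class ContinualModel(FrozenModel):
    """Like the brain described above: every input is folded back in."""
    def converse(self, facts):
        self.base.update(facts)  # "retrained constantly on new input"
        return self.base

llm = FrozenModel({"earth": "round"})
llm.converse({"sky": "blue"})
print("frozen:", llm.base)       # {'earth': 'round'} -- nothing stuck

brain = ContinualModel({"earth": "round"})
brain.converse({"sky": "blue"})
print("continual:", brain.base)  # both facts persist
```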

If we get an LLM (or whatever successor tech) to the point where those two gaps are addressed, I do think we could see some semblance of consciousness emerge. People will say "but it's just metal and electricity," and yeah, it is. We're just meat and electricity, and somehow it works for us. We'll never be able to prove any AI is conscious, because we can't actually prove we're conscious ourselves, or even know what that really means.

This isn't to disparage any of your excellent points, by the way. I just think we overestimate our own brains a bit. It may be possible to simulate consciousness in a much simpler, more refined way than our organically evolved brains achieve it, and we may be closer than we realize.