0ops

joined 2 years ago
[–] [email protected] 5 points 1 year ago

I mean, I think so?

[–] [email protected] -1 points 1 year ago

A) Do you have proof for all of these claims about what LLMs aren't, with definitions for key terms? B) Do you have proof that these claims don't apply to yourself? We can't base our understanding of intelligence, artificial or biological, on circular reasoning and ancient assumptions.

It can't do a single thing without human input.

That's correct, which is why I said that ChatGPT isn't there yet. What are you without input, though? Is a human nervous system floating in a vacuum conscious? What could it have possibly learned? It doesn't even have the concept of having sensations at all, let alone vision, let alone the ability to visualize anything specific. What are you without an environment to take input from and manipulate/output to in turn?

[–] [email protected] 23 points 1 year ago* (last edited 1 year ago) (3 children)

The perceived quality of human intelligence is held up by so many assumptions, like "having free will" and "understanding truth". Do we really? Can anyone prove that? (Edit: this works the other way too. Assuming that we do understand truth and have free will - if those terms can even be defined in a testable way - can you prove that the LLM doesn't?)

At this point I'm convinced that the difference between an LLM and human-level intelligence is dimensions of awareness, scale, and further development of the model's architecture. Fundamentally though, I think we have all the pieces.

Edit: I just want to emphasize, I think. I hypothesize. I don't pretend to know.

[–] [email protected] 14 points 1 year ago* (last edited 1 year ago) (11 children)

I feel like this is going to become the next step in science history where once again, we reluctantly accept that Homo sapiens are not at the center of the universe. Am I conscious? Am I not a sophisticated prediction algorithm, albeit with more dimensions of input and output? Please, someone prove it.

I'm not saying, and I don't believe, that ChatGPT is comparable to human-level consciousness yet, but honestly I think we're way closer than many people give us credit for. The neural networks we've built so far train on very specific and particular data for a matter of hours. My nervous system has been collecting data from dozens of senses 24/7 since embryo, and that doesn't include hard-coded instinct, arguably "trained" via evolution itself over millions of years. How could an LLM understand an entity in terms outside of language? How can you understand an entity in terms outside of your own senses?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not so convinced that logic is completely unrelated to the senses. How did you learn to count, add, and subtract mentally? You used your fingers. I don't know about you, but even though I don't count on my fingers anymore, I still tend to "visualize" math operations. Would I be capable of that if I were born blind? Maybe I'd figure out how to do the same thing in a different dimension of awareness, but I have no doubt that being able to conceptualize visually helps my own logic. As for more complicated math, I can't do that mentally either; I need a calculator and/or scratch paper. Maybe analogues to those can be implemented into the model? Maybe someone should just train a model on Khan Academy videos and it'll pick this stuff up emergently? I'm not saying that the ability to visualize is the only roadblock, though; I'm sure improvements could be made to the models themselves, but I bet it'll be key to human-like reasoning.
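To make that calculator/scratch-paper analogy concrete, here's a rough sketch in Python (the routing and names are made up for illustration, not any real framework): the "model" hands arithmetic off to a tool instead of trying to do it in its head.

```python
import ast
import operator

# Hypothetical "calculator tool": the model hands arithmetic off to it
# instead of doing mental math, like a person reaching for scratch paper.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a simple arithmetic expression like '1234 * 5678'."""
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return float(ev(ast.parse(expression.strip(), mode="eval").body))

def answer(question: str) -> str:
    # Toy routing: a real system would let the model itself decide
    # when to call the tool mid-generation.
    if question.startswith("calc:"):
        return str(calculator(question[len("calc:"):]))
    return "no tool needed, I'll just answer from memory"

print(answer("calc: 1234 * 5678"))  # exact result, no mental math
```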

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

See my reply to the person you replied to. I think you're right that there will need to be more algorithmic development (like some awareness of its own confidence, so that the network can say IDK instead of hallucinating its best guess). Fundamentally though, LLMs don't have the same dimensions of awareness that a person does, and I think that's the main bottleneck to human-like understanding.
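For what it's worth, that "say IDK" idea can be sketched very crudely as thresholding the model's own output confidence. A toy version, assuming softmax probabilities are a usable confidence signal (in real LLMs they're often poorly calibrated, which is part of the problem) and with a made-up threshold:

```python
import numpy as np

def answer_or_idk(logits, labels, threshold=0.8):
    """Answer with the top label only if softmax confidence clears a
    (made-up) threshold; otherwise abstain with "IDK".

    Raw softmax confidence is a weak proxy for what the model actually
    "knows", but it shows the shape of the idea."""
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    best = int(np.argmax(probs))
    return labels[best] if probs[best] >= threshold else "IDK"

labels = ["cat", "dog", "bird"]
print(answer_or_idk(np.array([4.0, 0.5, 0.2]), labels))  # confident -> "cat"
print(answer_or_idk(np.array([1.1, 1.0, 0.9]), labels))  # unsure -> "IDK"
```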

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (3 children)

My hypothesis is that that "extra juice" is going to be some kind of body: more senses than text input, and more ways to manipulate itself and the environment than text output. Basically, right now LLMs can kind of understand things in terms of text descriptions, but they'll never be able to understand them the way a human can until they have all of the senses (and arguably the physical capabilities) that a human does. Thought experiment: presumably you "understand" your dog - can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow. Time is a sense too. EDIT: Before someone says it, as for feelings I'm not really sure; I'm not a biology guy. But my guess is that we sense our own hormones as well.

[–] [email protected] 1 points 1 year ago

Same. For my needs (streaming 4K HDR over the LAN), Plex and Jellyfin have been basically equivalent.

[–] [email protected] 10 points 1 year ago

Yeah, there are a lot of things about myself that I don't GAF about (like my birthdays, etc.), which makes it easy to forget that they mean a lot to other people. So even if I'm a cool acquaintance, I'll be a shitty friend :(. I'm working on that.

[–] [email protected] 16 points 1 year ago

I used to burn paper with a 9-volt battery and a paperclip. Good times.

[–] [email protected] 16 points 1 year ago (1 children)

Sprinkle a little garlic powder in there and it's like eating a grilled cheese and garlic bread at the same time.

[–] [email protected] 3 points 1 year ago (1 children)

Well, good on you for checking yourself. I've been hearing rumors for the last few days, but nothing concrete.
