0ops

joined 1 year ago
[–] [email protected] 14 points 11 months ago* (last edited 11 months ago) (11 children)

I feel like this is going to become the next step in science history where, once again, we reluctantly accept that Homo sapiens is not at the center of the universe. Am I conscious? Am I not a sophisticated prediction algorithm, albeit with more dimensions of input and output? Please, someone prove it.

I'm not saying, and I don't believe, that ChatGPT is comparable to human-level consciousness yet, but honestly I think we're way closer than many people give us credit for. The neural networks we've built so far train on very specific data for a matter of hours. My nervous system has been collecting data from dozens of senses 24/7 since I was an embryo, and that doesn't include hard-coded instinct, arguably "trained" by evolution itself over millions of years. How could an LLM understand an entity in terms outside of language? How can you understand an entity in terms outside of your own senses?

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

I'm not so convinced that logic is completely unrelated to the senses. How did you learn to count, add, and subtract mentally? You used your fingers. I don't know about you, but even though I don't count on my fingers anymore, I still tend to "visualize" math operations. Would I be capable of that if I were born blind? Maybe I'd figure out how to do the same thing in a different dimension of awareness, but I have no doubt that being able to conceptualize visually helps my own logic. As for more complicated math, I can't do that mentally either; I need a calculator and/or scratch paper. Maybe analogues to those could be implemented in the model? Maybe someone should just train a model on Khan Academy videos and it'll pick this stuff up emergently? I'm not saying the ability to visualize is the only roadblock, though; I'm sure improvements could be made to the models themselves. But I bet it'll be key to human-like reasoning.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

See my reply to the person you replied to. I think you're right that there will need to be more algorithmic development (like some awareness of its own confidence, so the network can say "IDK" instead of hallucinating its best guess). Fundamentally, though, LLMs don't have the same dimensions of awareness that a person does, and I think that's the main bottleneck to human-like understanding.
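Something like this toy check is the kind of "say IDK" behavior I mean (the candidate answers, their scores, and the threshold are all made up for illustration):

```python
def answer_or_idk(candidates, threshold=0.6):
    # candidates: {answer: the model's own probability for that answer}
    # If the network isn't confident enough in its best guess,
    # abstain instead of hallucinating.
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else "I don't know"

print(answer_or_idk({"Paris": 0.92, "Lyon": 0.05}))  # confident -> "Paris"
print(answer_or_idk({"Paris": 0.34, "Lyon": 0.31}))  # not confident -> "I don't know"
```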

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (3 children)

My hypothesis is that the "extra juice" is going to be some kind of body: more senses than text input, and more ways to manipulate itself and the environment than text output. Basically, right now LLMs can kind of understand things in terms of text descriptions, but they'll never be able to understand them the way a human can until they have all of the senses (and arguably the physical capabilities) that a human does. Thought experiment: presumably you "understand" your dog. Can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow. Time is a sense too. EDIT: Before someone says it: as for feelings, I'm not really sure; I'm not a biology guy. But my guess is that we sense our own hormones as well.

[–] [email protected] 1 points 11 months ago

Same. For my needs (streaming 4K HDR over the LAN), Plex and Jellyfin have been basically equivalent.

[–] [email protected] 10 points 11 months ago

Yeah, there are a lot of things I DGAF about for myself (my birthday, etc.), which makes it easy to forget that they mean a lot to other people. So even if I'm a cool acquaintance, I'll be a shitty friend :(. I'm working on that.

[–] [email protected] 16 points 11 months ago

I used to burn paper with a 9-volt battery and a paperclip. Good times.

[–] [email protected] 16 points 11 months ago (1 children)

Sprinkle a little garlic powder in there and it's like eating a grilled cheese and garlic bread at the same time

[–] [email protected] 3 points 1 year ago (1 children)

Well, good on you for checking yourself. I've been hearing rumors for the last few days, but nothing concrete.

[–] [email protected] 15 points 1 year ago

It's a probabilistic network that generates a response based on your input.
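Roughly, in toy form, that "probabilistic" part looks something like this (the vocabulary and scores are made up and have nothing to do with the real model's weights):

```python
import math
import random

vocab = ["the", "cat", "sat", "on", "mat", "dog"]  # tiny made-up vocabulary

def next_token(context):
    # A real LLM scores every token in its vocabulary with billions of learned
    # weights, conditioned on the context. Here we just invent the scores so
    # the sampling step is visible.
    logits = [random.uniform(-1.0, 1.0) for _ in vocab]
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]          # softmax -> one probability per token
    return random.choices(vocab, weights=probs, k=1)[0]

tokens = ["the", "cat"]                            # your input
for _ in range(4):
    tokens.append(next_token(tokens))              # sample the next word, repeat
print(" ".join(tokens))
```

Each run gives a different continuation, which is the "probabilistic" part.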

Same

[–] [email protected] 9 points 1 year ago

You can get last year's flagships, used and in good condition, on eBay and Swappa all day long. I've literally never bought a brand-new phone.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Similarly without warning or context: "Poop breaks easily"
