EatATaco

joined 1 year ago
[–] [email protected] 4 points 5 months ago

I'm way too sexy to be a robot.

[–] [email protected] 5 points 5 months ago

One quibble: it ain't new. I've been accused of being a bot on /r/conspiracy for well over a decade.

But my response has long been the same: does it matter? Whether I'm a bot has absolutely zero bearing on the truth of what I'm saying. Don't get me wrong, we should definitely do something to curb botting, but I agree with you: if you find yourself using it as a reason to dismiss an argument, you're just relying on a garbage ad hominem.

[–] [email protected] 15 points 5 months ago (5 children)

I swear it's actually the opposite where they are like "it's only one pixel, it doesn't count." And does the guy on the bike count? It seems like no matter what I do - unless I get through on the first try - it's wrong, and I'm clicking for what seems like an hour.

[–] [email protected] 3 points 5 months ago (1 children)

I'm not sure if they changed, or my taste changed, but the fries are almost inedible to me now. They smell fantastic still, but they just taste so fake.

[–] [email protected] 4 points 5 months ago

The article talks about this. You should try reading it instead of reacting to the headline. This is generally a good idea.

[–] [email protected] -2 points 6 months ago

My guess is you know nothing about this. They may think reinserting them is too risky for the patient precisely because they don't know. You're almost certainly just making up facts to justify your conclusion, rather than assessing the facts and coming to a conclusion based on them.

[–] [email protected] 36 points 6 months ago (10 children)

You cherry-picked the first part of that paragraph. The end goes like this:

Arbaugh went on to say that he has since recovered from the initial disappointment and continues to have hope for the technology.

And then the next part of his statement is found in the following paragraph:

"I thought that I had just gotten to, you know, scratch the surface of this amazing technology, and then it was all going to be taken away," he added. "But it only took me a few days to really recover from that and realize that everything I’ve done up to that point was going to benefit everyone who came after me.” He also said that "it seems like we’ve learned a lot and it seems like things are going in the right direction."

Of course, the goal here is not an honest assessment of what happened... but simply to pick out whatever furthers our hatred (justified, IMO) of Musk.

[–] [email protected] 21 points 6 months ago (4 children)

ITT: people being downvoted for answering the question.

Gotta love Lemmy. Lol

[–] [email protected] -3 points 6 months ago

Hard to say. You claim they are incapable of understanding, which is why they can't be fluent. However, the whole argument really boils down to whether they are capable of understanding. You just state that as if it's established fact, and I believe that's an open question at this point.

So whether it is circular depends on why you think they are incapable of understanding. If, like the other poster, it's because understanding is a human(ish)-only trait and they aren't human... then yes.

[–] [email protected] -5 points 6 months ago (1 children)

An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

But this is a deliberate decision, not an inherent limitation. The model could get feedback from the outside world; in fact, this is how it's trained (well, data is fed back into the model to update it). Of course we are limiting it to words, rather than the whole slew of inputs that a human gets. But keep in mind we have things like music and image generation AI as well, so it's not like it can't also be trained on those things. Again, a deliberate decision rather than an inherent limitation.
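To make the lifecycle I'm describing concrete, here's a rough Python sketch - all the names here are hypothetical, not any real framework's API - of "trained offline, deployed frozen, then folded into a new version":

class LanguageModel:
    """Toy stand-in for a deployed LLM. Weights are fixed at construction."""
    def __init__(self, weights):
        self.weights = weights  # frozen once deployed; inference never writes them

    def generate(self, prompt):
        # Inference only reads the weights; chatting with the model
        # does not change it in place.
        return f"completion for {prompt!r} (weights v{self.weights})"

def train(corpus, base_weights=0):
    # Offline training pass: consumes a corpus, produces *new* weights.
    # (A real run would do gradient updates; a counter stands in here.)
    return base_weights + len(corpus)

# v1: trained once on a big corpus, then used. Users interact with it,
# but nothing they say alters this deployed model.
model_v1 = LanguageModel(train(["lots", "and", "lots", "of", "text"]))
print(model_v1.generate("hello"))

# For the model to "learn" anything new, logged interactions get folded
# into the next offline training run, which yields a distinct v2 model.
logged_interactions = ["user chat transcripts..."]
model_v2 = LanguageModel(train(logged_interactions, base_weights=model_v1.weights))
print(model_v2.generate("hello"))

That's the whole point: the feedback loop exists, it's just gated behind a deliberate offline retraining step rather than happening live.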

We both even agree it's true that it can learn from interacting with the world; you just insist that because it isn't persisting, it doesn't actually count. But it does persist, just not the new inputs from users. And this is done deliberately, to protect the models from what would inevitably happen. That being said, it's also been fed arguably more input than a human would get in their whole life, just condensed into a much smaller period of time. So if the metric is "total input," the AI wins hands down.

You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.”

I'm not ignoring this. I understand that it's the whole argument; it gets repeated around here enough. Just saying it doesn't make it true, however. It may be true - again, I'm not sure - but simply stating it and adding "full stop" doesn't amount to a convincing argument.

They simply do not think, much less understand.

It's not as open and shut as you wish it to be. If anyone is ignoring anything here, it's you ignoring the fact that it went from, as you said, basically just randomly stacking the objects it was told to stack stably, to actually stacking them in a way that could work and describing why you would do it that way. Additionally, there's the case where researchers asked GPT-4 to draw a unicorn using an obscure programming language. And you know what? It did it. It was rudimentary, but it was clearly a unicorn - from a model that wasn't trained on images at all. They even messed with the code, turning the unicorn around and removing the horn, fed it back in, and asked it to replace the horn. It put it back on correctly. It seemed to understand not only what a unicorn looks like, but what the horn was and where it should go once it was removed.

So saying it can just "generate more words" is something you could accuse us of as well - or at the very least it's overly reductive of what these models are capable of even now.

But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

There are all kinds of problems with human memory; we imagine things all the time. Have you ever taken acid? If so, you've seen how unreliable our brains are at interpreting reality. And you want to really trip? Eyewitness testimony is basically garbage. I exaggerate a bit, but it has so many flaws - people remember things that didn't happen, and false memories are so easy to create - that it shouldn't be nearly as convincing as it's treated. Hell, it can even be harmful, convicting an innocent person.

Every shortcoming you've used to claim AI isn't real thinking is something shared with us. It might just be inherent to intelligence to be wrong sometimes.
