jsomae

joined 8 months ago
[–] [email protected] 23 points 4 months ago (5 children)

This sounds like a cool idea because it is a novel approach, and it appeals to my general heuristic of the inevitability of technology and freedom. However, I don't think it's actually a good idea. People are entitled to privacy, on this I hope we agree -- and I believe this is because of something more fundamental: people are entitled to dignity. If you think we'll reach a point in this lifetime where the technology is too commonplace to be a threat to someone's dignity, I just don't agree.

Not saying the solution is to ban the technology though.

[–] [email protected] 7 points 5 months ago (1 children)

40% of cops!

But for this one, please don't actually assume it's your friend Timmy's dad, folks.

[–] [email protected] 10 points 5 months ago (4 children)

What's the college one mean?

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

I did an honors math+cs degree. I'm pretty good at advanced math. I never learned long division. Don't feel bad about that.

(In case any other mathy people read this and wonder how I could understand ring theory without Euclid's division algorithm, relax)
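(To spell out the parenthetical: the division algorithm just pins down the quotient-and-remainder fact behind long division -- for integers a ≥ 0 and b > 0, there are unique q and r with a = q·b + r and 0 ≤ r < b. A minimal sketch by repeated subtraction; the function name is mine, not standard:)

```python
def euclid_div(a, b):
    """Euclidean division by repeated subtraction: returns (q, r)
    with a == q * b + r and 0 <= r < b, assuming a >= 0 and b > 0."""
    q, r = 0, a
    while r >= b:
        r -= b
        q += 1
    return q, r

print(euclid_div(17, 5))  # (3, 2), same as divmod(17, 5)
```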

[–] [email protected] 3 points 5 months ago (1 children)

I have never played Genshin Impact and I object fundamentally to gacha games. But I like video games a lot. Should I watch?

[–] [email protected] 1 points 5 months ago (1 children)

That's what I meant, yes. They're not built on any field of linguistics.

[–] [email protected] 1 points 5 months ago

Thank you for explaining that. I didn't understand the need to use drinking water.

[–] [email protected] 2 points 5 months ago (2 children)

This may be true of chopping down forests or mining coal. But we can use nuclear power. And the earth has plenty of water -- does chatgpt need clean drinking water specifically?

[–] [email protected] 1 points 5 months ago (3 children)

Transformers are not built with our knowledge of language. Claiming they are is a gross approximation -- it would honestly be more accurate to say they're modelled after the human brain than that they're built on our understanding of language. A big problem is that the connection between AI and language is poorly understood -- we can't even interpret what the word2vec axes represent.
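To illustrate what I mean about the axes: the famous analogy arithmetic works along *directions* in the embedding space, but nothing ties meaning to any individual coordinate. A toy sketch with made-up two-dimensional vectors (real word2vec embeddings have hundreds of unlabeled dimensions, and nobody hand-picks them like this):

```python
import math

# Hand-crafted toy "embeddings" -- in real word2vec the axes carry no labels.
vecs = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# king - man + woman lands closest to queen in this toy space.
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]
best = max(vecs, key=lambda w: cosine(vecs[w], target))
print(best)  # queen
```

The analogy only works because I labeled the two axes by hand; in a trained model the same relationships exist, but smeared across dimensions we can't name -- which is the interpretability gap.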

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

Ehh, I mean, it's not really surprising it knows how to lie and will do so when asked to lie to someone as in this example (it was prompted not to reveal that it is a robot). It can see lies in its training data, after all. This is no more surprising than "GPT can write code."

I don't think GPT4 is skynet material. But maybe GPT7 will be, with the right direction. Slim possibility but it's a real concern.

[–] [email protected] 9 points 5 months ago

Sometimes a bullshitter is what you need. Ever looked at a multiple choice exam in a subject you know nothing about but feel like you could pass anyway just based on vibes? That's a kind of bullshitting, too. There are a lot of problems like that in my daily work between the interesting bits, and I'm happy that a bullshit engine is good enough to do most of that for me with my oversight. Saves a lot of time on the boring work.

It ain't a panacea. I wouldn't give a gun to a monkey and I wouldn't give chatgpt to a novice. But for me it's awesome.

[–] [email protected] 3 points 5 months ago

This is an absolutely wonderful graph. Thank you for teaching me about the trough of disillusionment.
