GoodEye8

joined 1 year ago
[–] [email protected] 4 points 5 months ago

So that you can find that one porn video you watched six months ago that really got you off but you don't remember how you found it.

[–] [email protected] 8 points 5 months ago

Because of a lot of things. On the graphics side, RTX and DLSS left AMD catching up (even if RTX isn't really that big of a deal now), then there was Nvidia cards being better at crypto mining, and now it's Nvidia cards being better at AI computation + Nvidia pivoting into the AI hardware space.

If you want to boil it down to the undeniable, it's that Nvidia is just better at marketing. Everyone knows what Nvidia is doing. What is AMD doing? Besides playing catch-up to Nvidia.

[–] [email protected] 1 points 5 months ago

That's a matter of perspective. I took the other person's comment as "Don't take away my chatGPT, change the regulations if you must, but don't take it away", which is essentially the same as "get rid of regulation".

Realistically I also don't see this killing LLMs, since the infringement is about giving accurate information about people. I'm assuming they have enough control over their model to make it say "I can't give information about people" and then everything is fine. But if they can't (or, more likely, won't because it would cost too much money), then the product should get torn down. I don't think we should give companies a free pass for playing stupid games, even if they make a useful product.

[–] [email protected] 5 points 5 months ago (2 children)

Agree to disagree. Regulations exist for a purpose and companies need to follow them. If a company/product can't exist without breaking regulations, it shouldn't exist in the first place. When you take the stance that a company/product needs to exist, and a regulation prevents it, and you go changing the regulation, you're effectively getting rid of the regulation. Now, there may be exceptions, but this here is not one of them.

[–] [email protected] 26 points 5 months ago (4 children)

You do know the R in GDPR literally stands for Regulation? There's already a regulation that chatGPT should follow but deliberately doesn't. Your idea isn't to regulate, it's to get rid of regulation so that you can keep using your tool.

[–] [email protected] 3 points 5 months ago (1 children)

Online casinos are also tech. The devops engineer in the article literally says they set up proxies to continue operating in countries where their main domain is blocked. I know the core domain of casinos is very regulated, but I doubt the entire tech aspect of online casinos is. I imagine there's plenty of fuckery to do there.

Also casinos will throw out people who benefit too much at the expense of the casino. The casino benefitted too much at the expense of Cloudflare and refused to share the profits, so Cloudflare did what any casino would do and kicked them out.

[–] [email protected] 4 points 5 months ago (2 children)

Can't help you with trichotillomania, but hitting the gym tends to help with weight and confidence. I don't know your situation, but I was bordering on obesity and was suggested 10 min warmup + stronglifts 5x5 + 10 min cooldown as a routine. I did it for almost a year and it definitely had an impact on my weight and confidence.

If you're not sure where to start, have a session with a personal trainer with the purpose of setting up your own routine, and then just stick with it. It feels really hard at first, but after you start seeing results it'll get easier.

[–] [email protected] 3 points 6 months ago (1 children)

It might end up showing all the communities, but it won't show you every post. For example, F1 posts rarely end up in All despite having an active community. You're more likely to see posts from formuladank than from formula 1.

[–] [email protected] 2 points 6 months ago (1 children)

It doesn't need to verify reality, it needs to be internally consistent and it's not.

For example, I was setting up a logging pipeline and one of the filters didn't work. There was seemingly nothing wrong with the configuration itself, and after some more tests with dummy data I was able to get it working, but it still didn't work with the actual input data. So I gave the working dummy example and the actual configuration to chatGPT and asked why the actual configuration doesn't work. After some prompts going over what I had already tried, it ended up giving me the exact same configuration I had presented as the problem. Humans wouldn't (or at least shouldn't) make that error because it would be internally inconsistent: the problem statement can't be the solution.

But the AI doesn't have internal consistency because it doesn't really think. It's not making sure what it's saying is logical based on the information it knows, it's not trying to make assumptions to solve a problem, and it can't even deduce that something true is actually true. All it can do is predict what we would perceive as the answer.

[–] [email protected] 12 points 6 months ago (5 children)

I think you're giving a glorified encyclopedia too much credit. The difference between us and "AI" is that we can approach knowledge from a problem-solving position. We do approximate the laws of physics, but we don't blindly take our beliefs and run with them. We come up with a theory that then gets rigorously criticized, then come up with ways to test that theory, then we're critical of the test results, and eventually we come to a consensus that, based on our understanding, that thing is true. We've built entire frameworks to reduce our "hallucinations". The reason we even know we have blind spots is that we're so critical of our own "hallucinations" that we end up deliberately looking for them.

But the "AI" doesn't do that. It can't do that. The "AI" can't solve problems, it can't be critical of itself or what information its giving out. All our current "AI" can do is word vomit itself into a reasonable answer. Sometimes the word vomit is factually correct, sometimes it's just nonsense.

You are right that theoretically hallucinations cannot be solved, but in practice we ourselves have come up with ways to minimize them. We could probably do something similar with "AI", but not when the AI is just an LLM that fumbles its way into sentences.

[–] [email protected] 9 points 6 months ago

Technically, you just posted a comment.

[–] [email protected] 13 points 6 months ago (1 children)

But that's cool, because the company name is an elementary-school-level wordplay, exactly the kind Musk likes. Also, the company is about digging holes, and everyone in kindergarten knows how cool digging is.

You can never forget that we're dealing with a person whose emotional aptitude is the equivalent of a child's.
