Rhaedas

joined 8 months ago
[–] [email protected] 4 points 2 months ago

I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in "strawberries", thinking it would either see a new word and give the same incorrect answer, or, since the first word was still in context, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said "really?", and it corrected itself once again.

LLMs are very useful as long as you know how to maximize their power, and you don't assume whatever they spit out is absolutely right. I've had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I've also found some of the simplest errors in the middle of a lot of helpful output. It's at an assistant level, and you need to remember that an assistant helps you; they don't do the work for you.

[–] [email protected] 55 points 2 months ago (1 children)

I'm all for eternity with an opt-out choice. But forever without parole? That is hell.

[–] [email protected] 49 points 2 months ago (5 children)

Who wants to live forever?

[–] [email protected] 15 points 2 months ago (2 children)

I have a Tesla store near my work, and I've been seeing a few of them drive by lately. Each time, even seeing them coming, I still have a WTF reaction. That is a god-awful looking vehicle, even if it were of good quality. I drew better trucks in crayon when I was 5.

[–] [email protected] 9 points 2 months ago

Even failures could be bad for nearby areas or the world. Just a missile falling and then burning is going to release stuff into the air and water. A far cry from a working launch, but still a mess, and that's just one missile. What is the probability that they all fail to even launch, or just do something and crash inert? Not high, I would guess. Even a badly maintained nuclear arsenal has its own deterrence.

[–] [email protected] 36 points 3 months ago (2 children)

9% is only recycled once, and only 1% has been truly reused multiple times, so you're close enough.

Also:

Of the remaining waste, 12% was incinerated and 79% was either sent to landfills or lost to the environment as pollution.

They're the same thing. Incinerated plastic is lost as pollution; it just happened to get one more use on the way there.

And I just realized the Wikipedia page linked is almost 10 years out of date!

[–] [email protected] 2 points 3 months ago

Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI): it determines that no, silly humans, the math says you're too far gone. Or: yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly: it says it has found the problem and the solution, and the problem is that the Earth is contaminated with humans who consume and pollute too much. And it is deploying the solution now.

I forgot the fourth, which I've seen in a few places (satirically, but it could be true): the ASI analyzes what we've done, tries to figure out what could be done to help, and then shuts itself down out of frustration, anger, sadness, etc.

[–] [email protected] 5 points 3 months ago

We got one of the older baseline Eufy models a number of years ago and have been fine with it. We even got a second one for the upstairs, since we're lazy and got tired of carrying it up there every now and then. I'd love to have one with more of a memory of where it's been, but really, the random patterns work fine to get most everywhere if the battery lasts long enough. I'm sure there's some math to show the drunken walk eventually covers everything. Plus, sometimes things get blown around, and if the robot had mapped an area as already cleaned but some debris got pushed over there, it won't go back to get it the way the random walk might.
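The "drunken walk covers everything" intuition is easy to check with a toy simulation. This is just an illustrative sketch of a random walk on a grid (not anything resembling actual vacuum firmware); the grid size and wall-bounce behavior are my own assumptions:

```python
import random

def steps_to_cover(width=10, height=10, seed=0):
    """Count how many random steps a 'drunken walk' needs
    to visit every cell of a width x height grid at least once."""
    rng = random.Random(seed)
    x, y = 0, 0
    visited = {(x, y)}
    steps = 0
    while len(visited) < width * height:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp to the room: a step into a wall just keeps the walker in place
        x = min(max(x + dx, 0), width - 1)
        y = min(max(y + dy, 0), height - 1)
        visited.add((x, y))
        steps += 1
    return steps

print(steps_to_cover())
```

The walk always finishes, which matches the experience that the random pattern does eventually hit everywhere; it just takes far more steps than a mapped path would, which is where the battery-life caveat comes in.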

But the real reason I like the lower-end model is that it is less complicated. I do think the higher-end ones with all the bells and whistles can probably be a pain when things go wrong. Keep it simple. Also, your mileage may vary depending on what you're cleaning: the type of dirt and pet hair, the room layout, bare floor vs. carpet, etc. A side note: bare floor is the only way to go with pets... you don't realize how much gets trapped until you do a deep cleaning or have to replace some areas of carpet that got damaged. Yuck.

[–] [email protected] 8 points 3 months ago

Must vary regionally. I worked at BK in the 90s, got the free meal regularly, and I won't touch any BK here anymore because the quality of everything is far below even McD standards (which are a 50/50 gamble themselves).

A shame because the original Whopper was a great product.

[–] [email protected] 16 points 3 months ago

"We're sorry (we got caught). Here's a free identity protection scam to make you feel safe again."

I'm beginning to wonder if identity theft protection was the next big thing to get into after self-storage: a bit of investment, then very little upkeep, and the companies keep that demand rolling in.

[–] [email protected] 38 points 3 months ago (5 children)

I read "free credit monitoring" as allowing your name to get on another list to be sold.

[–] [email protected] 17 points 3 months ago

LLMs alone won't. Experts in the field seem to have differing opinions on whether they will help get us there. What concerns me is that the issues and dangers of AGI also exist with advanced LLM models, and that research into them is being shelved because it gets in the way of profit. Maybe we'll never get to AGI, but we had better hope that if we do, we get it right the first time. How has that been going with the more primitive LLMs?

Do we even know what the "right" AGI would be? We're treading in dangerous waters.
