admiralteal

joined 1 year ago
[–] [email protected] 8 points 11 months ago (9 children)

I see no problem whatsoever with having frustrating levels of obtuse security required before complying with a request from law enforcement.

There is no downside.

[–] [email protected] 9 points 11 months ago

It is an absolute privacy nightmare. Nothing should be asking for your identity that doesn't have a DAMN good reason to be asking for your identity.

Age verification is not a damn good reason. Especially since any number of free VPNs can circumvent it in just a few clicks.

[–] [email protected] 8 points 11 months ago* (last edited 11 months ago) (2 children)

Not to be steelmanning for Tesla, but... all the major manufacturers of consumer products do this same shit. Pretend known defects don't exist, fail to honor warranties, blame customers for the mfg's own failures. That's just what happens when your society collectively decides it prefers a system of civil torts to actual regulation.

[–] [email protected] 40 points 11 months ago* (last edited 11 months ago)

The headline is literally the exact opposite of the truth, I would say.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

I mean, suppose the LLM bot is actually good at avoiding false positives/misunderstandings -- doesn't that simply remove one of the biggest weaknesses of old-fashioned keyword identification? I really just see this as a natural evolution of the technology and not some new, wild thing. It's just an incremental improvement.

What it absolutely does NOT do is replace the need for human judgement. You'll still need an appeals process and a person at the wheel to deal with errors and edge cases. But it's pretty easy to imagine an LLM bot doing at least as good a job as the average volunteer Reddit/Discord mod.

Of course, it's kind of a moot point. Running a full LLM bot as your automoderator, parsing every comment against some custom-designed model, would be expensive. I really cannot see it happening routinely, at least not with current tech costs. Maybe in a few years the prices will have come down enough, but not right now.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago)

It could search for all kinds of keywords to enforce rules. For example, scan titles for question identifiers to suggest a user might need to check an FAQ/wiki, or that kind of thing. Find keywords to flag probable off-topic posts. That sort of stuff.

At the end of the day, is what the LLM bot doing really any different? I'd say it's more sophisticated but the same fundamental thing.
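The keyword-automod approach described above might look something like this (a toy sketch -- every pattern, rule name, and canned message here is invented for illustration, not taken from any real bot):

```python
import re

# Hypothetical rules in the spirit described above: title/body patterns
# that suggest a question (point the user at the FAQ) or an off-topic post.
RULES = [
    ("faq_suggestion",
     re.compile(r"\b(how do i|what is|why does)\b|\?$", re.I),
     "Your post looks like a question -- the FAQ/wiki may already answer it."),
    ("off_topic",
     re.compile(r"\b(crypto|giveaway)\b", re.I),
     "This may be off-topic for this community."),
]

def automod_check(title: str, body: str) -> list[str]:
    """Return the canned warnings triggered by simple keyword rules."""
    text = f"{title}\n{body}"
    return [msg for _, pattern, msg in RULES if pattern.search(text)]
```

An LLM-based moderator would replace the regex lookups with a model call, but the surrounding plumbing (rules in, canned warnings out) stays the same shape.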

[–] [email protected] 23 points 11 months ago* (last edited 11 months ago) (2 children)

Huh? The Apollo dev was very specific about why he couldn't make it work. The turnaround was too fast. He had users on multi-month and even annual subscriptions. Users who were effectively owed service by him. The new model would have turned all of those users into giant financial liabilities for him far beyond whatever revenue he earned from them. And theoretically there was no upper limit on how much those users could have cost him.

If they'd given him 12 months' notice about the changes instead of 30 days, he would have been able to keep the app running. It would have cost quite a bit more, as users would have had to pay for his costs plus the API costs. But with only 30 days, the only financially sane thing he could do was refund everyone rather than let them turn into liabilities he couldn't afford.

If you're wondering why he didn't just refund all existing users and then roll out an update with the higher subscription prices... I mean, I'm sure he simply didn't want to after being forced through all that terribleness and repeatedly being defamed by the admins.

[–] [email protected] 6 points 11 months ago (5 children)

That's not really innovative though. Auto moderator bots have been sending out warnings like this based on simple keyword criteria for years.

[–] [email protected] 15 points 11 months ago* (last edited 11 months ago) (2 children)

In a city a connection like that is probably going to be in the area of $60 to $100. I pay $80 all in for a similar fiber connection.

Outside of a city you just aren't going to get it.

There are a few places that have community ISPs where it will be substantially less expensive, but those are the exceptions, and many states have actually made it illegal to operate community ISPs.

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago) (1 children)

It's not a dumb question, but you're presuming standards and exactness that do not exist in practice.

A pub pint is a pint glass that is deceptively smaller than a full pint, usually about 14oz. That's all it is. This can is the same as a pub pint -- both in spirit and practice -- as far as I can tell.

[–] [email protected] 4 points 11 months ago (3 children)

330 and 440 are standard metric can sizes. 404 is weird.

[–] [email protected] 3 points 11 months ago

Nope. The 4ml difference from 400 is within the margin of error, I'm sure, so this size really seems arbitrary to me. Wolfram's language model doesn't recognize it as some obscure unit either.
