HarkMahlberg

joined 1 year ago
[–] [email protected] 0 points 9 months ago (2 children)

What a catchy name.

[–] [email protected] 2 points 9 months ago

Man, you said everything I wanted to in less than half the words. Shoulda just linked to your comment lol

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

And shareholders

He couldn’t have imagined the drama of this week, with four directors on OpenAI’s nonprofit board unexpectedly firing him as CEO and removing the company’s president as chairman of the board. But the bylaws Altman and his cofounders initially established and a restructuring in 2019 that opened the door to billions of dollars in investment from Microsoft gave a handful of people with no financial stake in the company the power to upend the project on a whim.

https://www.wired.com/story/openai-bizarre-structure-4-people-the-power-to-fire-sam-altman/

Oh! Turns out I was wrong... "a handful of people with no financial stake in the company" doesn't sound like shareholders, and yet they could change the direction of the company at will. And just so we're clear: whether it's four faceless ghouls or Sam Altman, 1 or 4, the company is beholden to a tiny group of people who are not democratically elected, not necessarily legal experts, and not necessarily anyone who has ever been a police officer... and their AI is what decides whether or not to hold a police officer accountable for his misdeeds? Hard. Pass.

Oh, and lest we forget Microsoft is invested in OpenAI, and OpenAI has a quasi-profit-driven structure. Those 4 board directors aren't even my biggest concern with that arrangement.

(2/2)

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (1 children)

It’s fine to not understand what “AI” is and how it works

That's highly presumptuous, isn't it? I didn't make any statement about what AI is or the mechanics behind it. I only made a statement regarding the owners and operators of AI. We're talking about the politics of using AI to aid in police accountability, and for those purposes, AI need not be more than a black box. We could call it a sentient jar of kidney beans for all it matters.

So for the sake of argument - the one I made, not the one I didn't make - what did I misunderstand?

Unreliable

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York released a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.

Judge Castel’s opinion offers a detailed analysis of one such opinion, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), which the sanctioned lawyers produced to the Court. The Varghese decision is presented as being issued by three Eleventh Circuit judges. While according to Judge Castel’s opinion the decision “shows stylistic and reasoning flaws that do not generally appear in decisions issued by the United States Court of Appeals,” and contains a legal analysis that is otherwise “gibberish,” it does in fact reference some real cases. Additionally, when confronted with the question of whether the case is real, the AI platform itself doubles down, explaining that the case “does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”

https://www.natlawreview.com/article/artificially-unintelligent-attorneys-sanctioned-misuse-chatgpt

Regardless of how ChatGPT made this error, be it "hallucination" or otherwise, I would submit this as exhibit A that AI, at least currently, is not reliable enough to do legal analysis.

Beholden to corporate interests

Most of the major large language models are owned and run by huge corporations: OpenAI's ChatGPT, Google's Bard, Microsoft's Copilot, etc. It is already almost impossible to hold these organizations accountable for their misdeeds, so how can we trust their creations to police the police?

The naive "at-best" scenario is that AI trained to identify unjustified police shootings sometimes fails to identify them properly. Some go unreported. Or perhaps it reports a "justified" police shooting (I am not here to debate that definition but let's say they occur) as unjustified, which gums up other investigation efforts.

The more conspiratorial "at-worst" scenario is that a company with a pro-cop/thin-blue-line sympathizing culture could easily sweep damning reports made by their AI under the rug, which facilitates aggressive police behavior under the guise of "monitoring" it.

As reported by ProPublica, Patterson PD has a contract with a Chicago-based software company called Truleo to examine audio from bodycam videos to identify problematic behavior by officers. The company charges around $50,000 per year for flagging several types of behaviors, such as when officers use force, interrupt civilians, use profanities, or turn off their cameras while on active duty. The company claims that its data shows such behaviors often lead to violent escalation.

How does Truleo determine what is "risky" behavior, what is an "interruption" to a civilian? What is a profanity? Does Truleo consider "crap" to be a profanity? More importantly, what if you disagree with Truleo's definitions? What recourse do you have against a company that has zero duty to protect you? If you file a lawsuit alleging officer misconduct, can Truleo's AI's conclusions be admissible as evidence, and can it be used against you?

(1/2)

[–] [email protected] 1 points 9 months ago (5 children)

Let's not confuse ourselves here. The opposite of one evil is not necessarily a good. Police reviewing their own footage, investigating themselves: bad. Unreliable AI beholden to corporate interests and shareholders: also bad.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

All those racks of hard drives are taking up the space they need for racks of Nvidia GPUs.

[–] [email protected] 13 points 9 months ago (1 children)

I moved from Vivaldi to Firefox during the crackdown, signed out all of my Google accounts, and immediately noticed the problems went away. Sorry Vivaldi...

[–] [email protected] 2 points 9 months ago

Seen plenty of people talking about the crazy ads they see on Youtube. Right wing propaganda, blatant grifting, scams... Folding Ideas has done not one but two videos talking about the ads he saw and picking them apart. Surely the people complaining about these ads know adblockers exist, right? Why don't they use them? I'm sure there are several reasons, but it's been a known quantity for decades that you have the power to control how many and what kind of ads you see.

[–] [email protected] 11 points 9 months ago

May also indicate that users were shopping around for a blocker that worked against Youtube. Maybe some of those users just settled on AdGuard after coming from ABP, or uBlock, or whoever.

[–] [email protected] 6 points 9 months ago

my android phone, which I’ve paid off completely

I think this is about where I realized your anxiety has more to do with your financial situation than your technology situation. Your worries are about the way you spend money, how much you spend, what you spend it on, and how corps try to part you from your money. Like another commenter said, all the free and open technology in the world isn't going to magically balance your checkbook... though of course, it will help!

Yeah, installing Ubuntu is great, learning to code is great; these are valuable endeavors if for no other reason than to learn and try new things. But you don't need to learn programming to "convert your chromebook to Ubuntu."

I have no idea what to do about Amazon or Amazon Prime. ... things that, in a small town with a particular disability keeping me from driving, I can only get on Amazon.

If using Amazon is unavoidable, then it is what it is. There's no shame in using them to get what you need. If you're concerned about, say, your habit of impulse buying (not an accusation, just an example), you could try setting up a secured credit card with a spending limit so you can only use it for exactly what you need.

death consciousness of mindlessly scrolling through Facebook

Block Facebook in your router settings (or get a Raspberry Pi, install Pi-hole, and set up a block rule there). If you need Facebook to communicate with friends and family, could you rely solely on Messenger? That way you don't need to see anything on Facebook other than your DMs.
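If you do go the Pi-hole route, here's a minimal sketch of what a block rule could look like. This assumes Pi-hole v5's command-line tools, and the domain list and regex are just examples I picked, so check the docs for your version:

```
# exact-match blocks for the main Facebook domains
pihole -b facebook.com www.facebook.com

# or catch the whole family of subdomains with one regex filter
pihole --regex '(\.|^)facebook\.com$'
```

Whether Messenger keeps working after a DNS-level block depends on which domains the app actually talks to, so test it on your own network before committing.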

If your mental health is in a dire enough place that none of that helps, you probably need a therapist. You can even get therapy through some tele-health programs (YMMV). Hope this helps!

[–] [email protected] 3 points 9 months ago

Always happy to see Simon Stålenhag's work lol

[–] [email protected] 0 points 9 months ago (4 children)

Sounds like a product they're gonna kill off soon.
