this post was submitted on 24 May 2024
290 points (97.7% liked)

Technology


archive.is

Shall we trust LLMs to write legal definitions, deepfakes in this case? It seems the state rep. is unable to proofread the model output, as he is "really struggling with the technical aspects of how to define what a deepfake was."

top 30 comments
[–] [email protected] 60 points 5 months ago (2 children)
[–] [email protected] 19 points 5 months ago* (last edited 5 months ago) (1 children)
[–] [email protected] 16 points 5 months ago (1 children)
[–] [email protected] 1 points 5 months ago

Little pig boy comes from the dirt.

[–] [email protected] 16 points 5 months ago

State Senator adjusts bifocals

"What the hell is a poop knife?"

[–] [email protected] 50 points 5 months ago* (last edited 5 months ago) (2 children)

I understand the irony. But can we not pretend he blindly used the output, or even generated a full page. It was a specific section to provide a technical definition of “what is a deepfake”.

“I was really struggling with the technical aspects of how to define what a deepfake was. So I thought to myself, ‘Well, why not ask the subject matter expert (I do not agree with that wording, lol), ChatGPT?’” Kolodin said.

The legislator from Maricopa County said he “uploaded the draft of the bill that I was working on and said, you know, please, please put a subparagraph in with that definition, and it spit out a subparagraph of that definition.”

“There’s also a robust process in the Legislature,” Kolodin continued. “If ChatGPT had effed up some of the language or did something that would have been harmful, I would have spotted it, one of the 10 stakeholder groups that worked on or looked at this bill, the ACLU would have spotted, the broadcasters association would have spotted it, it would have got brought out in committee testimony.”

But Kolodin said that portion of the bill fared better than other parts that were written by humans. “In fact, the portion of the bill that ChatGPT wrote was probably one of the least amended portions,” he said.

I do not agree with his statement that any mistakes made by AI could also be made by humans. The reasoning, and the errors in reasoning, are quite different in my experience, but the way ChatGPT was used here is absolutely fair.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago)

No kidding. When I read that, my first thought was, "He's clearly at least above the median intelligence of his fellow Arizona GOP reps, if not in the top 10% of their entire conference"

Anyone who read the article AND has experience with the Arizona GOP probably thought the same thing.

The Arizona GOP collects some of the dumbest people alive.

[–] [email protected] 10 points 5 months ago (1 children)

I get the feeling this will generally be the peak of generative AI: used for assistance when needed, and with lots of oversight. The problem is that not all people bother to check the AI's work.

[–] [email protected] 8 points 5 months ago

That’s the point, literally. These tools don’t make some idiot all of a sudden a genius. They’re for already competent experts to expedite their work. They are the oversight.

[–] [email protected] 29 points 5 months ago* (last edited 5 months ago) (2 children)

These types of things are exactly what Generative AI models are good for, as much as Internet people don’t want to hear it.

Things that are massively repeatable based on previous versions (like legislation, contracts, etc.) are pretty much perfect for it. These are just tools for already competent people. So in theory you have GenAI crank out the boring stuff and have an expert “fill in the blanks”, so to speak.

[–] [email protected] 6 points 5 months ago

True, if the LLM is trained on those legal documents. Less true if it's trained on whatever random garbage was scraped off Reddit.

At least this time the Rep. was actually reviewing the output, so that's responsible at least.

[–] [email protected] 0 points 4 months ago (1 children)

Ideally it would be a generative AI trained specifically on legal textbooks.

I don't know why there seem to be no LLMs trained specifically on expert subject matter.

[–] [email protected] 1 points 4 months ago (1 children)

There are, just not available publicly. Tons of enterprises (law firms included) are paying to have models trained on their data.
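For what it's worth, the domain-specific setups being described in this thread usually take one of two shapes: fine-tuning an open model on the firm's own documents, or simply retrieving the most relevant in-house passages and handing them to a general model as context (retrieval-augmented generation). Here's a toy sketch of the retrieval half, with an entirely made-up corpus and a deliberately naive word-overlap score:

```python
# Toy sketch: instead of training a model on legal texts, retrieve the most
# relevant in-house passage and feed it to a general LLM as prompt context.
# The corpus and the scoring function are illustrative only.

CORPUS = [
    "A 'deepfake' is synthetic media in which a person's likeness is "
    "replaced or fabricated by a machine-learning model.",
    "A contract requires offer, acceptance, and consideration.",
    "Defamation is a false statement of fact that harms reputation.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of punctuation-stripped tokens."""
    return {w.strip(".,'\"").lower() for w in text.split()}

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the passage sharing the most tokens with the query."""
    q = tokenize(query)
    return max(corpus, key=lambda doc: len(q & tokenize(doc)))

# The retrieved passage would be prepended to the model's prompt.
print(retrieve("please define what a deepfake is", CORPUS))
```

In practice the overlap score would be replaced by embeddings or TF-IDF, but the structure is the same: the base model never needs retraining, it just gets better context.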

[–] [email protected] 0 points 4 months ago

There are, just not available publicly.

I meant publicly available

[–] [email protected] 27 points 5 months ago (1 children)
[–] [email protected] 2 points 5 months ago

Yeah, side hurts. 🤣

[–] [email protected] 15 points 5 months ago (1 children)

And yet again it cynically amuses me that AI has become "artificial" intelligence in the sense of "fake."

It's a shabby substitute for real intelligence, used by people who don't possess any of their own to impress other people who don't possess any of their own.

[–] [email protected] -1 points 5 months ago (1 children)

That's a use. But not their only use.

[–] [email protected] 3 points 5 months ago

This is actually true.

Most notably to me, the ability to sift through and collate enormous amounts of data has led to surprising things like diagnosing diabetes through retinal scans.

But those sorts of things, beneficial and impressive though they might be, remain at the fringe of AI research for the simple reason that those sorts of uses are too niche to provide the revenue stream that all of the bubble-building corporate parasites demand. Their focus is on the AI-as-a-substitute-for-real-intelligence aspect (and increasingly "AI" as just a meaningless marketing buzzword), since that's where the money is. And unfortunately but not coincidentally, that's where most of the public attention is too.

[–] [email protected] 12 points 5 months ago

He argues that any shortcomings associated with using ChatGPT to write part of a law would also be present if humans take the reins. Kolodin said he didn’t see any pitfalls “that I don’t also see with relying on legislative attorneys to draft up legislation.”

Last I checked humans carried 100% of the liability.

[–] [email protected] 7 points 5 months ago (2 children)

Someone should run all the lawyer books through ChatGPT so we can have a free, open-source lawyer in our phones.

During a traffic stop: "Hold on officer, I gotta ask my lawyer. It says to shut the hell up."

Cop still shoots him in the head so he can learn his lesson. He pulled out his phone!

[–] [email protected] 5 points 5 months ago

Or lawyer-bot cites some sovereign citizen crap as if it were established legal precedent. "You can't prosecute me in this court! Your flag has a gold fringe on it!"

[–] [email protected] -2 points 4 months ago* (last edited 4 months ago)

Honestly, I think this is the inevitable future. There are lots of jobs where what you're paying for is the knowledge. And while LLMs likely won't be as good as an actual expert, most "professionals", in my experience (both in my own professional work and in contracted "professional" work), are not even remotely experts, and a properly trained LLM will run circles around them.

You won't be able to buy them, because machines are, for some reason, not allowed to be fallible like humans, but I can certainly see a scenario where someone takes an open-source LLM and trains it with professional materials (obtained both legally and illegally) and releases it for free, and it does a better job than 70% of "professionals".

[–] [email protected] 5 points 5 months ago (1 children)

🙊 and the groupthink nonsense continues...

Y'all know those grammar-checking thingies? Yeah, same basic thing. You know when you're stuck writing something and your wording isn't quite what you'd like? Maybe you ask another person for ideas; same thing.

Is it smart to ask AI to write something outright? About as smart as asking a random person on the street to do the same. Is it smart to use proprietary AI that has ulterior political motives? Things might leak, like this, by proxy. Is it smart for people to ask others to proofread their work? Does it matter if that person is a grammar checker that makes suggestions for alternate wording and has most accessible human-written language at its disposal?

[–] [email protected] 9 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 4 points 5 months ago

I don't see any issue whatsoever in what he did. The model can draw meaning across all human language in a way humans are not even capable of doing. I could go as far as creating a training corpus based on all written works of the country's founding members and generate a nearly perfect simulacrum that includes much of their personality and politics.

The AI is not really the issue here. The issue is how well the person uses the tools available. Asking it for writing advice or word specificity shouldn't matter, so long as the person is proofreading the output and it follows their intent. If a politician's significant other writes a sentence of a speech, does it matter? None of them write their own sophist campaign nonsense or their legislative works anyway.

[–] [email protected] 5 points 5 months ago

A new meme I expect to take hold is how tempting ChatGPT is. And how the temptation will only grow as LLMs and similar get better, and as our externalized knowledge habits change.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago)

The problem is that the tools used to detect AI writing are not accurate. At the end of the day, as long as the information is worded correctly and the information is correct, that's all that matters. When you have AI write an argument that cites cases that don't exist, as a defense lawyer... that's when there are problems.

[–] [email protected] 1 points 5 months ago (1 children)

This chud uploaded potentially sensitive information to a public service. People really need education on how to intelligently use these services.

[–] [email protected] 1 points 5 months ago

This chud uploaded potentially sensitive information to a public service.

A bill draft, which eventually (maybe) gets signed and is public by its very nature, is sensitive?

People really need education on how to intelligently use these services.

Agreed in principle, but I don't see how what he did was wrong... other than calling ChatGPT a subject matter expert.