this post was submitted on 05 Dec 2023
76 points (97.5% liked)

all 13 comments
[–] [email protected] 5 points 11 months ago (4 children)

Once we finally have some rules/laws that AIs need to adhere to, someday we'll also need to define what to do with AIs that do not adhere to them.

Shoot them?

Delete them?

Put them in jail?

Forbid them from entering our country?

Take away their money?

[–] [email protected] 1 points 11 months ago

Let them hang!

[–] [email protected] 1 points 11 months ago

We unleash the wolves.

[–] [email protected] 1 points 11 months ago (1 children)

None of that is possible with FOSS AI code once it's out there on the web. There will only be guidelines for AI made available to the public and for companies using AI in their products; the rest of the more tech-savvy people will be unaffected.

[–] [email protected] 1 points 11 months ago (2 children)

None of that is possible

That is not enough. Think harder.

Today's existing AIs are child's play, but it's not going to stay that way for long.

One day it will be necessary to do something for real, when some AI is causing harm to the public (regardless of whether a person intended it or not), and we need to decide what to do then.

[–] [email protected] 1 points 11 months ago

Maybe they could be handled like a virus or an exploit.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago)

We already have trouble stopping people from believing fake news in written form. I don't see how we can stop people from believing well-made fake news with audio and video.

Personally, I think every country needs some form of government-independent news media, so that at least one broadly trustworthy source of information is available.

Anything profit-oriented will propagate misinformation as long as it generates clicks.

Oh, and don't let AI control weapons; that's the worst mistake one can make. We can't even manage self-driving cars, let alone a drone carrying weapons capable of mass killing.

Punishment won't reflect the complexity anymore. Say some 14-year-old creates a fake video of the president declaring war, a real war breaks out because it goes viral, and millions die. Is that 14-year-old now going to prison for life? Would a 16- or 18-year-old? What I'm trying to say is that the barrier to action is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned and expensive propaganda campaign.

Blocking commercial products from allowing certain actions could be a start, though not a total fix: say, an AI filter for faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
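As a rough illustration of the keyword-filter idea (a minimal sketch only; the `BLOCKED_PATTERNS` list and `moderate_prompt` helper are hypothetical names, not taken from any real product):

```python
import re

# Hypothetical blocklist; real moderation systems are far more elaborate.
BLOCKED_PATTERNS = [
    r"\bdeclar\w*\s+war\b",    # e.g. prompts about faking a declaration of war
    r"\bfake\s+video\s+of\b",  # e.g. requests to script a deepfake
]

def moderate_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a blocked pattern and should be refused."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

print(moderate_prompt("Make a fake video of the president declaring war"))  # True
print(moderate_prompt("Summarize today's tech news"))                       # False
```

As the comment says, this is far from perfect: a paraphrased prompt slips straight past a static pattern list.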

AI is very broad; you could put almost everything involving software into that topic. It's also not easy to define what is AI and what isn't: a rule-based system is already some form of dumb AI, so any such law affects pretty much everything else.
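To make the "rule-based system" point concrete (an illustrative toy example only, not tied to any specific law's definition):

```python
# A hand-written rule-based classifier: no learning involved, yet under a
# broad enough legal definition of "AI" even code like this could qualify.
def classify_ticket(text: str) -> str:
    text = text.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "technical"
    return "general"

print(classify_ticket("The app throws an error on startup"))  # "technical"
```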

I'm pretty sure we'll get a shitload of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.

We'd better start educating people, as of yesterday, on how AI works, its consequences, and ways to avoid acting blindly. Excuse me, we have a climate to save...

[–] [email protected] 0 points 11 months ago

Posture and some sanctions

[–] [email protected] 5 points 11 months ago

This is the best summary I could come up with:


LONDON (AP) — Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

“At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them “goes against the logic of the entire law,” which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

Countries want an exemption so law enforcement can use facial recognition to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.


The original article contains 1,119 words, the summary contains 221 words. Saved 80%. I'm a bot and I'm open source!