this post was submitted on 08 Jun 2024
361 points (97.9% liked)
Technology
you are viewing a single comment's thread
I had a short look at the text of the bill. It's not as immediately worrying as I feared, but still pretty bad.
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
Here's the thing: How would you react if this bill required all texts that could help someone "hack" to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries when they were written with the help of AI? Does it become OK then?
What if the bill "just" sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.
Seems a reasonable request. You are creating a tool with the potential to be used as a weapon, you must be able to guarantee it won't be used as such. Power is nothing without control.
This bill targets AI systems like the ChatGPT series. These AIs produce text, images, audio, video, and so on. In other words, they are dangerous in the same way that a library is dangerous. A library may contain instructions on making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.
Controlling information or access to education isn't exactly a good guy move. It's not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that's not really what this bill is about.
Yep, nothing about censorship is cool. But for a rampaging AGI system, a button to kill it would be nice. However, that leads into a cat-and-mouse game, and a paradox of how it could ever be achieved.
I don't see much harm in a "kill switch", so if it makes people happy... But it is sci-fi silliness. AI is software. Malfunctioning software can be dangerous if it controls, say, heavy machinery. But we don't have kill switches for software. We have kill switches for heavy machinery, because that is what needs to be turned off to stop harm.
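To illustrate the point that a software "kill switch" is trivial rather than exotic: in practice it's just a check for an out-of-band stop signal before each unit of work. A minimal sketch (the flag-file path and the work loop are hypothetical stand-ins, not anything from the bill):

```python
import os

# Hypothetical out-of-band signal: an operator creates this file to halt the system.
KILL_FLAG = "/tmp/ai_kill_switch"

def should_halt() -> bool:
    """Return True once the kill flag has been set."""
    return os.path.exists(KILL_FLAG)

def run_inference_loop(requests):
    """Process requests until done, or until the kill switch is engaged."""
    results = []
    for req in requests:
        if should_halt():
            break  # stop doing further work immediately
        results.append(f"processed {req}")  # stand-in for real model work
    return results
```

The catch, as noted above, is that a switch like this only stops the software itself; it does nothing about whatever machinery or downstream system the output has already reached, which is where the actual harm would occur.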