DigitalWebSlinger

joined 1 year ago
[–] [email protected] 19 points 1 year ago (3 children)

So we just let them break the law without penalty because it's hard and costly to redo the work that already broke the law? Nah, they can put time and money towards safeguards to prevent themselves from breaking the law if they want to try to make money off of this stuff.

[–] [email protected] 153 points 1 year ago (33 children)

"AI model unlearning" is the equivalent of saying "removing a specific feature from a compiled binary executable". So, yeah, basically not feasible.

But the solution is painfully easy: you remove the data from your training set (i.e., the source code), and re-train your model (recompile the executable).

Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?
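The "remove from the training set and retrain" idea can be sketched in a few lines. This is a toy analogy, not a real ML pipeline: the "model" here is just a frequency table, and all the function names are made up for illustration. The point it shows is that retraining from a cleaned dataset leaves no trace of the removed data, whereas editing a finished model in place would be like patching a compiled binary.

```python
# Toy sketch of "unlearning by retraining". Names are illustrative,
# not any real library's API.

def train(dataset):
    # Stand-in for an expensive training run: the "model" is just
    # a frequency table built from every example it saw.
    model = {}
    for example in dataset:
        model[example] = model.get(example, 0) + 1
    return model

def retrain_without(dataset, disallowed):
    # The only reliable "unlearning": drop the offending examples
    # from the training set, then train again from scratch.
    cleaned = [ex for ex in dataset if ex not in disallowed]
    return train(cleaned)

data = ["a", "b", "c", "b"]
model = retrain_without(data, disallowed={"b"})
# "b" no longer appears anywhere in the retrained model
```

Costly for a real model, since the whole training run repeats, but the removed data genuinely cannot influence the result.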

[–] [email protected] 3 points 1 year ago

Be me whose server is on Ubuntu 18.04 and needs upgrading to get Bluetooth into home assistant 😭

[–] [email protected] 6 points 1 year ago (1 children)

Too many negative words for ChatGPT, imo: "isn't", "not", etc. ChatGPT is usually positive and friendly to a fault.

Maybe you could provide a prompt that would output something substantially similar to what they wrote?