I don't think that will have the impact people think it will. Maybe at first, but eventually it'll just start treating "wrong" code as a negative and referencing it as a "how NOT to do things" lmao
It needs to understand that that code is bad to be able to do that, though.
That's just a matter of properly tagging the training data, which AI trainers need to do regardless.
For sure, but just like with that whole "poison our pictures" thing from artists, the people building these models (be it companies, researchers, or even hobbyists) are going to start modifying the training process so that the model can recognize bad code. And that's assuming it can't already; I think without that capability from the get-go, the current models would be a lot worse at what they generate than they are as is lmao
When you ask an LLM to write some prose, you could ask it "I'd like a Pulitzer-prize-winning description of two snails mating," or you could ask it "I want the trashiest piece of garbage smut you can write about two snails mating." Or even "rewrite this description of two snails mating to be less trashy and smutty." In order for the LLM to give the user what they want, it needs to know what "trashy piece of garbage smut" is. Negative examples are still very useful for LLM training.
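To make that concrete, here's a minimal sketch of what quality-tagging training data could look like. Everything here is hypothetical (the tag format, the examples, the helper function); real pipelines vary, but the idea is the same: the quality label becomes part of the model's context, so "bad" code is learned as a conditioned negative example rather than something to imitate unconditionally.

```python
# Hypothetical quality-tagged training examples. The tag is prepended
# to each sample, so the model learns the association between the label
# and the style of code, instead of treating all code as equally good.
examples = [
    {"quality": "good", "code": "total = sum(prices)"},
    {"quality": "bad",  "code": "total = 0\nfor i in range(0, len(prices), 1):\n    total = total + prices[i]"},
]

def to_training_text(example):
    # Prepend the quality tag so it becomes part of the training context.
    return f"<quality:{example['quality']}>\n{example['code']}"

for ex in examples:
    print(to_training_text(ex))
    print("---")
```

At inference time, the same mechanism runs in reverse: asking for `<quality:good>` code steers the model toward the positively labeled examples, exactly the way asking for a "Pulitzer-prize-winning" description steers a prose model.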