this post was submitted on 12 Apr 2025
1254 points (98.5% liked)
Programmer Humor
22444 readers
1702 users here now
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code there's also Programming Horror.
Rules
- Keep content in English
- No advertisements
- Posts must be related to programming or programmer topics
founded 2 years ago
Co"worker" spent 7 weeks building a simple C# MVC app with ChatGPT
I don't think I have to tell you how it went. Let's just say I spent more time debugging "his" code than mine.
I tried out the new Copilot agent in VS Code, and I spent more time undoing shit and hand-holding than it would have taken to do the work myself.
Things like asking it to make a directory matching a filename, then move the file in and append _v1, would result in files named simply "_v1" (this was a use case where we needed legacy logic and new logic simultaneously for a lift and shift).
When it was done, I realized that instead of just moving the file, it had rewritten all the code in the file as well, introducing several bugs.
Granted, I didn't check the diffs thoroughly, so I don't know when that happened. I just reset my repo back a few commits and redid the work in a couple of minutes.
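For reference, the operation the agent was asked to do is trivial in plain shell. A minimal sketch (the filename here is made up for illustration):

```shell
# Hypothetical sketch: for a file like legacy.cs, create a directory
# named after it and move the file in with a _v1 suffix.
touch legacy.cs                  # sample file for the demo
f="legacy.cs"
base="${f%.*}"                   # strip extension -> "legacy"
ext="${f##*.}"                   # extension      -> "cs"
mkdir -p "$base"                 # directory matching the filename
mv "$f" "$base/${base}_v1.$ext"  # moves to legacy/legacy_v1.cs
```

No rewriting of file contents involved, which is the point: `mv` only changes the path.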
I will give it this: it's actually been pretty helpful in learning a new language. What I'll do is grab an example of working code that's close to what I want, say "This, but do X", and when the output doesn't work, I study the differences between the ChatGPT output and the example code to learn why.
It's a weird learning tool but it works for me.
It's great for explaining snippets of code.
I've also found it very helpful with configuration files. It tells me how someone familiar with the tool would expect it to work. I've found it's rarely right, but it can get me to something reasonable and then I can drill into why it doesn't work.
Yes, and I think this is how it should be looked at. It's a hyper-focused and tailored search engine. It can provide info, but it doesn't handle the "doing" nearly as well.
I do enjoy the new assistant in JetBrains tools, the one that runs locally. It truly helps with the trite shit 90% of the time. Every time I've tried code-gen AI for larger parts, it's been unusable.
It works quite nicely as autocomplete.
Yes, exactly.
Except in the other 10% of cases: in 30% of those, you'll have a hell of a lot of fun finding which exact line has one little variable-name mismatch. But if you're actually very careful, it's a nice feature.
I will be downvoted to oblivion, but hear me out: local LLMs aren't that bad for simple script development. NDA? No problem, it's a local instance. No coding experience? No problem either; QWQ can create and debug the whole thing. Yeah, it's "better" to do it yourself and learn to code and everything. But I'm simple tech support. I have no clue how code works (that's kind of a lie, but you get the idea), nor am I paid for that. What I am paid for is sorting 500 users pulled from a database via a corp endpoint. So I have to decide whether to do that manually or via a script the LLM created in under five minutes. At the end of the day, I get paid the same amount either way.
It can even create a simple GUI with Qt on top of that script. Isn't that just awesome?
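The core of that kind of throwaway script is a few lines. A minimal sketch in Python, with made-up field names since the actual endpoint and schema aren't described:

```python
# Hypothetical sketch: sort user records pulled from some corporate
# endpoint. The "first"/"last" field names are assumptions for the demo.

def sort_users(users):
    """Sort user dicts by last name, then first name (case-insensitive)."""
    return sorted(users, key=lambda u: (u["last"].lower(), u["first"].lower()))

# In the real script the list would come from the endpoint, e.g. parsed
# from a JSON response; here it's inlined so the example is runnable.
users = [
    {"first": "dana", "last": "smith"},
    {"first": "Ali", "last": "Jones"},
    {"first": "bo", "last": "smith"},
]
for u in sort_users(users):
    print(u["last"], u["first"])
```

The case-insensitive key matters with real directory data, where capitalization is inconsistent.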
As someone who somewhat recently wasted 5 hours debugging a "simple" bash script that Cursor shat out, which was exploding k8s nodes: nah, I'll pass. I rewrote the script from scratch in 45 minutes after I figured out what was wrong. You do you, but I don't let LLMs near my software.
I've had success with Claude, but there's always a layer of separation. I ask it to do something, read what it produced, and decide whether it's garbage or not, then rewrite or discard as necessary. Though, counting by LOC, I've mainly used it for writing tests.