this post was submitted on 08 Apr 2025
495 points (98.2% liked)

Technology

[–] [email protected] 24 points 6 days ago* (last edited 6 days ago) (3 children)

I just spent about a month using Claude 3.7 to write a new feature for a big OSS product. The change ended up being about 6k LOC, plus roughly 14k lines of tests, added to an existing codebase with an existing test framework for reference.

For context I'm a principal-level dev with ~15 years experience.

The key to making it work for me was treating it like a junior dev. That includes priming it ("accuracy is key here; we can't swallow errors, we need to fail fast where anything could compromise it") as well as making it explain itself, show architecture diagrams, and reason based on the results.
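That kind of priming can be captured in a reusable template rather than retyped each session. A minimal sketch (the helper name, wording, and invariants below are illustrative, not from the comment):

```python
# Sketch of a reusable "priming" system prompt for a coding-assistant session.
# build_priming_prompt and its arguments are hypothetical helpers.

def build_priming_prompt(project: str, invariants: list[str]) -> str:
    """Assemble a system prompt that sets expectations before any code is written."""
    lines = [
        f"You are assisting on {project}. Treat this like a careful code review.",
        "Accuracy is key here; we can't swallow errors.",
        "Fail fast where anything could compromise correctness.",
        "When asked, explain your reasoning and show architecture diagrams.",
        "Project invariants:",
    ]
    # Each invariant becomes a bullet the model is told to respect.
    lines += [f"- {inv}" for inv in invariants]
    return "\n".join(lines)

prompt = build_priming_prompt(
    "a large OSS feature branch",
    ["No silent exception handling", "Respect the layered architecture"],
)
print(prompt)
```

The point is only that the ground rules go in once, up front, instead of being corrected after the fact.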

After every change there's always a pass of "okay but you're violating the layered architecture here; let's refactor that; now tell me what the difference is between these two functions, and shouldn't we just make the one call the other instead of duplicating? This class is doing too much, we need to decompose this interface." I also started a new session, set its context with the code it just wrote, and had it tell me about assumptions the code base was making, and what failure modes existed. That turned out to be pretty helpful too.
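The fresh-session audit step amounts to pasting the just-written code into a clean context and asking about assumptions and failure modes. A rough sketch of assembling that context (file paths and the question wording are made up for illustration):

```python
# Sketch: seed a fresh session with the code just written, then ask for an
# assumptions / failure-mode review. Paths and prompt text are illustrative.
from pathlib import Path


def build_audit_context(paths: list[str]) -> str:
    """Concatenate source files into one context block, ending with the audit question."""
    parts = []
    for p in paths:
        text = Path(p).read_text()
        parts.append(f"=== {p} ===\n{text}")
    parts.append(
        "Given the code above: what assumptions does it make about the "
        "rest of the codebase, and what failure modes exist?"
    )
    return "\n\n".join(parts)
```

Starting from a clean session matters here: the model reviews the code as written, rather than defending the choices it made while writing it.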

In my experience it was actually kinda fun. I'd say it made me about twice as productive.

I would not have said this a month ago. Up until this project, I only had stupid experiences with AI (Gemini, GPT).

[–] [email protected] 15 points 6 days ago

Agreed. I use it in my daily workflow but you as the senior developer have to understand what can and cannot be delegated, and how to stop it from doing stupid things.

For instance when I work in computer vision or other math-heavy code, it’s basically useless.

[–] [email protected] 13 points 6 days ago (1 children)

Typically, working with a junior on a project is slower than working alone. It's a little odd that you compare it to that and yet find it faster.

[–] [email protected] 12 points 6 days ago

I don't think it's odd, because LLMs are just way faster than any junior (or senior) dev. So it's more like working with four junior devs, but with the benefit of having tasks done sequentially, without the additional overhead of handing tasks to individual juniors and context switching to review their changes.

(Obviously, there are a whole lot of new pitfalls, but there are real benefits in some circumstances.)

[–] [email protected] 3 points 6 days ago (1 children)
[–] [email protected] 3 points 6 days ago

The PR isn't public yet (it's in my fork) but even once I submit it upstream I don't think I'm ready to out my real identity on Lemmy just yet.