I've found it's pretty good for translating between formats, so to speak.
Converted some bash to python relatively quickly by giving it snippets and fixing errors as it made them.
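To give a sense of the kind of translation involved, here's a hypothetical example (the bash one-liner and the Python version are mine, not from the actual scripts): a loop that counts ERROR lines per log file, which in bash might be `for f in *.log; do grep -c ERROR "$f"; done`, and in Python becomes something like:

```python
from pathlib import Path

def count_errors(log_dir="."):
    # Rough Python equivalent of the bash loop:
    #   for f in *.log; do grep -c ERROR "$f"; done
    counts = {}
    for f in sorted(Path(log_dir).glob("*.log")):
        counts[f.name] = sum("ERROR" in line for line in f.read_text().splitlines())
    return counts
```

Translations like this usually work on the first or second try, with the occasional off-by-one or quoting assumption to fix by hand.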
I also had success generating an ansible playbook based on my own previously written install instructions for SillyTavern and llama.cpp.
I could do both of those tasks myself, but that would be more work than fixing up a mostly correct translation.
If you're using llama.cpp, have a look at the GGUF models by TheBloke on huggingface. He puts approximate RAM required in the readme based on the quantisation level.
From personal experience I'd estimate around 12 GB for 7B models, based on how full RAM was on a 16 GB machine. For Mixtral, at least 32 GB.
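If you want a ballpark figure without checking a readme, a common rule of thumb is that the model file takes roughly params × bits-per-weight / 8 bytes, plus a few GB of overhead for the runtime and context. This is just a sketch of that rule of thumb, not llama.cpp's actual accounting; the overhead figure is a guess:

```python
def approx_ram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    # Model weights: params * bits / 8 bytes, converted to GB.
    # overhead_gb is an assumed allowance for KV cache and runtime;
    # real usage varies with context length and settings.
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb
```

For example, a 7B model at Q8 comes out around 9 GB by this estimate, and at higher precision plus a long context it can creep toward the 12 GB I saw in practice.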