I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution to my problem. Like pair programming, but a bit shitty.
The best way to use these LLMs for coding is to never use the generated code directly, and to atomize your problem into smaller questions you ask the LLM.
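To make that concrete, here’s a rough sketch of the workflow (the log format, file name, and function names are all made up for illustration): each helper answers one small question you’d ask the LLM, and the glue code is yours, so every piece stays small enough to actually verify.

```python
import heapq
import re
from pathlib import Path

# Hypothetical log line format: "GET /api/users 200 142ms"
LINE_RE = re.compile(r"^(\w+) (\S+) (\d{3}) (\d+)ms$")


def parse_line(line: str) -> tuple[str, int] | None:
    """Small question #1: 'what regex pulls the path and duration out of a line like this?'"""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    _method, path, _status, duration_ms = m.groups()
    return path, int(duration_ms)


def top_n_slowest(pairs, n: int = 10):
    """Small question #2: 'how do I keep just the N largest items without sorting everything?'"""
    return heapq.nlargest(n, pairs, key=lambda pair: pair[1])


def main(log_path: str = "access.log") -> None:
    # The glue is mine, not the LLM's, so I know exactly what runs.
    parsed = (parse_line(line) for line in Path(log_path).read_text().splitlines())
    for path, ms in top_n_slowest(p for p in parsed if p is not None):
        print(f"{ms:>6}ms  {path}")


if __name__ == "__main__":
    main()
```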
Well, I also mean that they kinda suck. I feel like I spend more time debugging AI code than I get working code out of it.
Do you use Claude Code? It’s the only time I’ve had a 90%+ success rate.
I have, and it doesn’t, at least not on the DevOps stuff I work on.
So rubber duck programming, right?
And fancier IntelliSense.
That’s actually true. I read some research on that, and your feeling is correct.
Can’t be bothered to google it right now.