Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
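A hypothetical sketch of the pattern being described (the function names and the specific bug are invented for illustration, not taken from the research): given a prefix containing a defect, a completion that mirrors the surrounding style will reproduce the defect, while a repair requires deviating from it.

```python
# Illustration of the "bug parroting" pattern: a model asked to complete
# code in the style of a flawed prefix tends to copy the flaw forward.

def average_buggy(values):
    # The flawed snippet the model is shown: off-by-one in the divisor.
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)  # bug: should divide by len(values)

def average_parroted(values):
    # A plausible style-matching completion: same off-by-one divisor.
    return sum(values) / (len(values) - 1)

def average_fixed(values):
    # The repair a careful reviewer would expect instead.
    return sum(values) / len(values)

data = [2, 4, 6, 8]
print(average_buggy(data))     # wrong: divides 20 by 3
print(average_parroted(data))  # same wrong answer, faithfully copied
print(average_fixed(data))     # 5.0
```

The parroted completion is "consistent" with its context, which is exactly why it is hard to catch in review: it looks like it belongs.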
And if you do that without saying you want to refactor, I likely won’t stand up for you on the next round of layoffs. If I wanted to make the codebase worse, I’d use AI.
I’ve been in this scenario and I didn’t wait for layoffs. I left and applied my skills somewhere shit code is not tolerated and quality is rewarded.
But in this hypothetical, we didn’t get this shit code because management encouraged the right behavior and gave people time to make it right. They’re going to keep the yes men and fire the “unproductive” ones (and I know full well that adding to the pile is not productive in the long run, but what does the management overseeing this mess think?)
Fair.
That said, we have a lot of awful code at my org, yet we also have time to fix it. Most of the crap came from the “move fast and break things” period, but now we have the room to push back a bit.
There’s obviously a balance, and as a lead, I’m looking for my devs to push back and make the case for why we need the extra time. If you convince me, I’ll back you up and push for it, and we’ll probably get the go-ahead. I’m not going to approve everything, though, because we can’t fix everything at once. But if you ignore the problems and trudge along anyway, I’ll be disappointed.