Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
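To make the failure mode concrete, here is a hypothetical illustration (the snippet and the completion are invented for this article, not taken from the research): a prompt containing an off-by-one bug, followed by the kind of completion a model might produce by mirroring the bug instead of repairing it.

```python
# Hypothetical illustration of the "parroting" failure mode.
# The prompt half contains an off-by-one bug; a model completing
# the file will often copy the bug rather than correct it.

def sum_first_n(values, n):
    total = 0
    # BUG in the prompt: range(1, n) skips index 0 and stops at n - 1,
    # so this sums only n - 1 elements and never the first one.
    for i in range(1, n):
        total += values[i]
    return total

# --- a plausible model completion, parroting the same pattern ---

def mean_first_n(values, n):
    # The completion reuses the flawed range(1, n) idiom from the
    # surrounding context instead of fixing it to range(n).
    return sum_first_n(values, n) / n

if __name__ == "__main__":
    data = [10, 20, 30, 40]
    # The correct mean of the first 3 elements is 20.0; the
    # parroted bug yields (20 + 30) / 3 ≈ 16.67 instead.
    print(mean_first_n(data, 3))
```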
That’s right. You watch it type away, and right when it gets to the important part you realize that’s not what you meant at all, so you hit the stop button. You modify the prompt and try one more time. That’s when you notice just how many things it isn’t even considering, which gives you the satisfaction that your job is still secure. Then you write a more focused prompt for one aspect of the problem and take whatever good-enough bullshit it spewed as a starting point for the manual work. Rinse and repeat.