The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that forced it to get on a call with the customer and hunt down what the customer actually wanted before devising and implementing a solution?
This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; making that low-tier programmer slightly cheaper doesn’t change the game in any meaningful way.
Which is why plenty of companies merely pay lip service to it, or don’t do it at all and outsource it to ‘communities’
Absolutely true, but many are heading in the direction of implementing those solutions with AIs.
“I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well”
For how long will we be forced to endure different versions of the same article?
The study said 86.66% of the generated software systems were “executed flawlessly.”
Like I said yesterday in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy: that is trash. A company with absolute shit code still has virtually all of it “execute flawlessly.” Whether or not code executes is not the bar by which we judge it.
Even if it were to hit 100%, which it does not, there’s so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.
LLMs are not capable of this kind of meaningful collaboration, despite all this hype.
AI regularly hallucinates API endpoints that don’t exist, functions that aren’t part of that language, libraries that don’t exist. There’s no fucking way it did any of this bullshit. Like, yeah - it can probably do a mean autocomplete, but this is being pushed so hard because they want to drive wages down even harder. They want know-nothing middle-managers to point to this article and say “I can replace you with AI, get to work!”…that’s the only purpose of this crap.
I think there’s less of a conspiracy here and more plain investment-pumping. These AI articles sound exactly like when the internet was new, most people had only a cursory experience with it, and investors were pumping any company that so much as said the word “internet.”
Now that “Blockchain” has been beaten to death, they need a new hype word to drive mindless investment.
But they could replace CEOs, from what I can tell.
So what you’re saying is that 86.66% of the time, it works every time.
80% accuracy, that is trash
More than 80% of most codebases is boilerplate: including the right files for dependencies, declaring functions with the right number of parameters using the right syntax, handling basic, easily anticipated errors, and so on. Sometimes there’s even more boilerplate, like when you’re iterating over a list or waiting for input and handling it.
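To make that concrete, here’s a made-up Python example (mine, not anything from the article): nearly every line is predictable scaffolding, and only one line carries an actual decision.

```python
import json
import sys

def load_passing_records(path):
    # Boilerplate: open the file, parse it, handle the obvious errors.
    try:
        with open(path) as f:
            records = json.load(f)
    except (OSError, json.JSONDecodeError) as err:
        print(f"could not read {path}: {err}", file=sys.stderr)
        return []

    results = []
    for record in records:  # boilerplate iteration
        # The one non-boilerplate line: the actual business rule,
        # which is exactly the part an LLM is most likely to get wrong.
        if record.get("score", 0) >= 0.8 and not record.get("flagged"):
            results.append(record)
    return results
```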
The rest of the stuff is why programming is a highly paid job. Even a junior developer is going to be much better than an LLM at this part, because at least they understand it’s hard, and they often know when to ask for help because they’re in over their heads. An LLM will “confidently” spew out plausible bullshit and declare the job done.
Because an LLM won’t ask for help, won’t ask for clarification, and can’t understand that it might have made a mistake, you’re going to need your highly paid programmers to go in, figure out what the LLM did, and work out why it’s wrong.
Even perfecting self-driving is going to be easier than a truly complex software engineering project. At least with self-driving, the problem is constrained because you’re dealing with the real world, and the job is always the same: navigate from A to B. In the software world you’re only limited by the limits of math, and math isn’t very limiting.
I have no doubt that LLMs and generative AI will change the job of being a software engineer / programmer. But fundamentally, programming comes down to actually understanding the problem, and while LLMs can pretend they understand things, they’re really just well-trained parrots that know what sounds to make in specific situations, with no actual understanding behind them.
But did you hear that it uses more water than regular data centers?
LLMs are not capable of this kind of meaningful collaboration
Which is why they’re a tool for professionals to amplify their output, not a replacement for them.
“We asked a Chat Bot to solve a problem that already has a solution and it did ok.”
to solve a problem that already has a solution
And whose solution was part of its training set…
This also completely glosses over the fact that an AI capable of writing this had huge R&D costs to get to that point, and also has ongoing costs associated with running it. This whole article is a fucking joke, probably written by AI.
Plot twist - the AI just cut and pasted from Stack Overflow like real devs.
It should generate its own acceptance tests and keep asking itself to fix it until they all pass
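A minimal sketch of that loop in Python, assuming a hypothetical llm() helper that wraps whatever chat-completion API you use; every name and prompt here is invented for illustration:

```python
import subprocess
from pathlib import Path
from tempfile import TemporaryDirectory

def llm(prompt: str) -> str:
    # Hypothetical helper around a chat-completion API of your choice.
    raise NotImplementedError

def build_until_green(spec: str, max_rounds: int = 5) -> str:
    tests = llm(f"Write pytest acceptance tests for this spec:\n{spec}")
    code = llm(f"Write a Python module app.py satisfying this spec:\n{spec}")
    for _ in range(max_rounds):
        with TemporaryDirectory() as workdir:
            Path(workdir, "app.py").write_text(code)
            Path(workdir, "test_app.py").write_text(tests)
            result = subprocess.run(
                ["pytest", workdir], capture_output=True, text=True
            )
        if result.returncode == 0:
            return code  # all of its self-generated tests pass
        # Feed the failures back in and ask for a fix.
        code = llm(f"These tests failed:\n{result.stdout}\nFix the code:\n{code}")
    raise RuntimeError("model never satisfied its own tests")
```

The obvious catch, as others in this thread point out, is that if the generated tests are gibberish, “all green” proves nothing.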
It cost less than a dollar to run all those chatbots?
Doubt
Please ignore the hundreds of thousands of dollars and the corresponding electricity required to run the servers and infrastructure needed to train and use these models, please. Or the master cracks the whip again. Please, just say you’ll invest in our startup, please!
But did it work?
As someone that uses ChatGPT daily for boilerplate code because it’s super helpful…
I call complete bullshite
The program here will be “hello world” or something like that.
Absolutely! I can create the code for your app.
void myApp(void) {
    // add the code for your app here
    return true;
}
You may need to change the code above to fit your needs. Make sure you replace the comment with the proper code for your app to work.
Couldn’t even write a void method right: it returns true!
LMAO. At least it didn’t
sudo void…
(:
“hello world” as a service?
And how long did it take to compose the “assignments”? Humans can usually work with less precise instructions than machines, and improvise, solve problems along the way, or at least sense when a problem should be flagged for escalation and review.
Management is who will get replaced first, and they don’t want to see it. They’re the most important, valuable part of the company in their own minds, yet management was the one thing the AI got right. It still needed the creative mind of a human programmer to write the code properly, or to think outside the box.
A test that doesn’t include a real commercial trial or A/B test with real human customers means nothing. Put their game in the App Store and tell us how it performs. We don’t care that it shat out code that compiled successfully. Did it produce something real and usable or just gibberish that passed 86% of its own internal unit tests, which were also gibberish?
This research seems to be focused more on whether the bots would interoperate in different roles to coordinate on a task than on creating the actual software. The idea is to reduce “hallucinations” by giving each bot a more specific task.
The paper goes into more detail about this:
Similar to hallucinations encountered when using LLMs for natural language querying, directly generating entire software systems using LLMs can result in severe code hallucinations, such as incomplete implementation, missing dependencies, and undiscovered bugs. These hallucinations may stem from the lack of specificity in the task and the absence of cross-examination in decision-making. To address these limitations, as Figure 1 shows, we establish a virtual chat-powered software technology company – CHATDEV, which comprises of recruited agents from diverse social identities, such as chief officers, professional programmers, test engineers, and art designers. When presented with a task, the diverse agents at CHATDEV collaborate to develop a required software, including an executable system, environmental guidelines, and user manuals. This paradigm revolves around leveraging large language models as the core thinking component, enabling the agents to simulate the entire software development process, circumventing the need for additional model training and mitigating undesirable code hallucinations to some extent.
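In other words, each role gets a narrow conversation instead of one giant prompt. A rough Python paraphrase of the idea (mine, not the paper’s actual code; the chat() helper and the prompts are placeholders):

```python
def chat(role: str, instruction: str, context: str) -> str:
    # Placeholder for one LLM call primed with a role persona.
    raise NotImplementedError

def chatdev_style_pipeline(task: str) -> dict:
    # Each phase is a small, role-specific exchange, so no single
    # prompt has to produce the entire software system at once.
    design = chat("CTO", "Propose the language, modules, and data flow.", task)
    code = chat("Programmer", "Implement this design as code.", design)
    review = chat("Tester", "List concrete defects in this code.", code)
    code = chat("Programmer", f"Fix these defects:\n{review}", code)
    manual = chat("Writer", "Write a short user manual.", code)
    return {"design": design, "code": code, "manual": manual}
```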
Future software is going to be written by AI, no matter how much you would like to avoid that.
My speculation is that we will see AI operating systems at some point, due to the extreme effectiveness future AI will have at hacking and otherwise subverting frameworks, services, libraries, and even protocols.
So mutating protocols will become a thing, whereby AI changes and negotiates protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction in the objective at hand.
That’s why we need more open source AI solutions and fewer proprietary ones, because whoever controls the AI will control the digital world - be it you or some fat cat sitting on a Smaug hill of money.
EDIT: gawdDAMN there’s a lot of naysayers. I’m not talking stable diffusion here, guys. I’m talking about automated attacks and self developing software, when computing and computer networking reaches a point of AI supremacy. This isn’t new speculation. It’s coming fo dat ass, in maybe a generation or two… or more…
That all sounds pointless. Why would we want to use something built on top of a system that’s constantly changing for no good reason?
Unless the accuracy can be guaranteed at 100%, this theoretical system will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified and tested code.
For operating systems I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
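For what it’s worth, “appropriate guard rails” could be as simple as never executing free-form output: the model only gets to pick from a whitelist. A toy Python sketch (the actions are my own Linux examples, not any real product):

```python
import subprocess

# Whitelist: the LLM may only choose among these; it never writes shell.
ALLOWED_ACTIONS = {
    "mute_audio": ["pactl", "set-sink-mute", "@DEFAULT_SINK@", "1"],
    "lock_screen": ["loginctl", "lock-session"],
    "empty_trash": ["gio", "trash", "--empty"],
}

def run_user_request(llm_choice: str) -> None:
    # The model's output is treated as an untrusted key, not a command.
    cmd = ALLOWED_ACTIONS.get(llm_choice.strip())
    if cmd is None:
        print(f"refusing unknown action: {llm_choice!r}")
        return
    subprocess.run(cmd, check=False)
```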
It’s possible that we will have large software entirely written by machines in the future, but what it will be written with will not in any way resemble any architecture that currently exists.
Future software is going to be written by AI
Of course, if you look far enough into the future. Look far enough and the whole concept of “software” itself could become obsolete.
The main disagreements are about how close that future is (years, decades, etc), and whether just expanding upon current approaches to AI will get us there, or we will need a completely different approach.
I don’t think so. Having a good architecture is far more important, and it’s what makes projects actually maintainable. AI can speed up the work, but humans need to tweak and review its output to make sure it fits the exact requirements.