Future software is going to be written by AI, no matter how much you would like to avoid that.
My speculation is that we will see AI operating systems at some point, due to how effective future AI will be at hacking and otherwise subverting frameworks, services, libraries and even protocols.
So mutating protocols will become a thing, whereby AI will change and negotiate protocols on the fly, as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction in the objectives at hand.
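To make the "negotiate protocols on the fly" idea concrete, here's a minimal sketch. Everything in it (`ProtocolVariant`, `mutate`, `negotiate`) is hypothetical, purely to illustrate two peers agreeing on a freshly mutated wire format each session:

```python
# Hypothetical sketch: peers advertise the protocol variants they support
# and agree on the newest mutation both sides understand.
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtocolVariant:
    name: str        # e.g. "tlsish-v3+m1234"
    generation: int  # how many mutations deep this variant is

def mutate(variant: ProtocolVariant) -> ProtocolVariant:
    # A defensive AI might derive a new variant each session,
    # forcing an attacker to re-learn the protocol from scratch.
    return ProtocolVariant(f"{variant.name}+m{random.randint(0, 9999)}",
                           variant.generation + 1)

def negotiate(ours: set[ProtocolVariant],
              theirs: set[ProtocolVariant]) -> ProtocolVariant:
    # Pick the newest variant both sides understand.
    common = ours & theirs
    if not common:
        raise ConnectionError("no shared variant; renegotiate from base")
    return max(common, key=lambda v: v.generation)

base = ProtocolVariant("tlsish-v3", 0)
session = mutate(base)
ours, theirs = {base, session}, {base, session}
print(negotiate(ours, theirs))  # both sides pick the mutated variant
```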
That’s why we need more open source AI solutions and fewer proprietary ones, because whoever controls the AI will control the digital world - be it you or some fat cat sitting on a Smaug hill of money.
EDIT: gawdDAMN there’s a lot of naysayers. I’m not talking Stable Diffusion here, guys. I’m talking about automated attacks and self-developing software, once computing and computer networking reach a point of AI supremacy. This isn’t new speculation. It’s coming fo dat ass, in maybe a generation or two… or more…
That all sounds pointless. Why would we want to use something built on top of a system that’s constantly changing for no good reason?
Unless accuracy can be guaranteed at 100%, this hypothetical will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified and tested code.
For operating systems I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
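A minimal sketch of that guard-rail idea, assuming a hypothetical `run_llm()` that turns a user request into a proposed action. The point is that the model only *proposes*; nothing runs unless it matches an allowlist of human-written handlers:

```python
# Hypothetical sketch: the LLM proposes (action, arg), and the OS
# dispatches it only through a fixed allowlist of vetted handlers.
from typing import Callable

ALLOWED_ACTIONS: dict[str, Callable[[str], None]] = {
    "open_file": lambda path: print(f"opening {path}"),
    "set_volume": lambda level: print(f"volume -> {level}"),
}

def run_llm(user_request: str) -> tuple[str, str]:
    # Stand-in for whatever model maps natural language to a plan.
    return ("open_file", "~/notes.txt")

def dispatch(user_request: str) -> None:
    action, arg = run_llm(user_request)
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise PermissionError(f"model proposed unlisted action {action!r}")
    handler(arg)  # only vetted code paths ever execute

dispatch("open my notes")
```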
It’s possible that we will have large software entirely written by machines in the future, but whatever it is written with will not in any way resemble any architecture that currently exists.
I don’t think so. Having a good architecture is far more important and makes projects actually maintainable. AI can speed up work, but humans need to tweak and review its output to make sure it meets the exact requirements.
Of course, if you look far enough into the future. Look far enough and the whole concept of “software” itself could become obsolete.
The main disagreements are about how close that future is (years, decades, etc.), and whether just expanding upon current approaches to AI will get us there, or whether we will need a completely different approach.