Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path::"Don’t learn to code," advises Jensen Huang of Nvidia. Thanks to AI, everybody will soon become a capable programmer simply by using human language.
As a developer building on top of LLMs, my advice is to learn programming architecture. There’s a shit ton of work that needs to be done to get this unpredictable, non-deterministic tech to work safely and accurately. This is like saying to get out of tech right before the Internet boom. The hardest part of programming isn’t writing low-level functions; it’s architecting complex systems while keeping them robust, maintainable, and expandable. By the time an AI can do that, all office jobs will be obsolete. AIs will be able to replace CEOs before they can replace system architects. Programmers won’t go away; they’ll just have less busywork and will instead need to work at a higher level. But the complexity of those higher-level requirements is about to explode, and we’ll need LLMs to handle the simpler tasks under our oversight to make sure their output gets integrated correctly.
I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and with designing better, more efficient architectures, even at a high level.
I will say, I do know developers who specialized in algorithms and are feeling pretty lost right now. They’re perfectly capable of adapting their skills to the new paradigm; their issue is more the personal one of deciding what they want to do, since algorithms were what they were passionate about.
Having used ChatGPT to try to find solutions to software development challenges, I don’t think programmers will be at much risk from AI for at least a decade.
Generative AI is great at many things, including assistance with basic software development tasks (like spinning up blueprints for unit tests). And it can be helpful for filling in code gaps when provided with a very specific prompt… sometimes. But it is not great at figuring out the nuances of even mildly complex business logic.
This.
I got a GitHub Copilot subscription at work and it’s useful for suggesting code in small parts, but I would never let it decide what design pattern to use to tackle the problem we are solving. Once I know the solution, I can use AI and verify its output before it goes into the code.
I’m using it at work as well, and Copilot has been pretty decent at writing out entire methods when I start with the JSDoc or code comments before writing the actual method. It’s now becoming my habit to have it generate some near-working code or decent boilerplate.
If you haven’t tried it yet, give this a shot!
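For anyone who hasn’t tried that comment-first workflow, here’s a minimal sketch of what it looks like in TypeScript. Everything in it (the Order type, the totalByCustomer function, the field names) is a made-up example rather than from any real codebase: you write the JSDoc contract first, and Copilot typically proposes a body along these lines, which you then review and adjust.

```typescript
// Hypothetical example type, purely for illustration.
interface Order {
  customerId: string;
  amount: number;
}

/**
 * Sum order amounts per customer.
 * (You write this JSDoc first; Copilot then tends to suggest a body
 * like the one below, which you still review before committing.)
 *
 * @param orders - the orders to aggregate
 * @returns a map from customer ID to that customer's total
 */
function totalByCustomer(orders: Order[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const order of orders) {
    // Accumulate a running total per customer.
    totals.set(order.customerId, (totals.get(order.customerId) ?? 0) + order.amount);
  }
  return totals;
}
```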
I’m a junior dev who has been on the job for ~6 months. I found AI useful for learning when I had to build an application in Swift with zero experience in the language. It presented me with some turd responses, but even those gave me an idea of what to try and what to look into to find answers.
I find that sometimes AI can present a concept to me in a way I can understand, where blogs can fail. I’m not worried about AI right now, it’s a tool to make our jobs easier!
Yeah it’s great as a companion tool.
I think it will get good enough to do simple tickets on its own with oversight, but I would not trust it without having it submit its work as a PR for review and iteration.
I agree. It would take at least a decade for fully autonomous programming, and frankly, by the time it can fully replace programmers it will be able to fully replace every office job, at which point we’re going to have to rethink everything.
I worry for the future generations that can’t debug because they don’t know how to program and just use AI.
Don’t worry, they’ll have AI animated stick figures telling them what to do instead…
I’m sure they’ll have bigger things to worry about, e.g. the climate apocalypse.
Well. That’s stupid.
Large language models are amazingly useful coding tools. They help developers write code more quickly.
They are nowhere near being able to actually replace developers. They can’t know when their code doesn’t make sense (which is frequently). They can’t know where to integrate new code into an existing application. They can’t debug themselves.
Try to replace developers with an MBA using a large language model AI, and once the MBA fails, you’ll be hiring developers again - if your business still exists.
Every few years, something comes along that makes bean counters who are desperate to cut costs, and scammers who are desperate for a few bucks, declare that programming is over. Code will self-write! No-code editors will replace developers! LLMs can do it all!
No. No, they can’t. They’re just another tool in the developer toolbox.
I’ve been a developer for over 20 years, and when I see Autogen generate code, decide to execute that code, and then fix errors by deciding to install dependencies, I can tell you I’m concerned. LLMs are a tool, but a tool that might evolve to replace us. I expect a lot of software roles in ten years to look more like an MBA orchestrating AI agents to complete a task. Coding skills will still matter, but not as much as soft skills will.
I really don’t see it.
Think about a modern application. Think about the file structure, how the individual sources interrelate, how non-code assets are stored, how applications are deployed, and all the other bits and pieces that go into an application. An AI can’t know any of that without being trained - by a human - on the specifics of that application’s needs.
I use Copilot for my job. It’s very nice, and makes my job easier. And if my boss fired me and the rest of the team and tried to do it himself, the application would be down in a day, then irrevocably destroyed in a week. Then he’d be fired, we’d be rehired, and we - unlike my now-former boss - would know things like how to revert the changes he made when he broke everything while trying to make Copilot create a whole new feature for the application.
AI code generation is pretty cool, but without the capacity to know what code actually should be generated, it’s useless.
It’s just going to create a summary story about the codebase and reference that story as it implements features, not that different from what a human does. It’s not necessarily something it can do now, but it will come. Developers are not special, and I was never talking about Copilot.
I don’t think most people grok just how hard implementing that kind of joined-up thinking and metacognition is.
You’re right, developers aren’t special, except in those ways all humans are, but we’re a very long way indeed from being able to simulate them in AI - especially in large language models. Humans automatically engage in joined-up thinking, second-order logic, and so on, without having to consciously try. Those are all things a large language model literally can’t do.
It doesn’t know anything. It can’t conceptualize a “summary story,” or understand parts that it might get wrong in such a story. It’s glorified autocomplete.
And that can be extraordinarily useful, but only if we’re honest with ourselves about what it is and is not capable of.
Companies that decide to replace their developers with one guy using ChatGPT or Gemini or something will fail, and that’s going to be true for the foreseeable future.
deleted by creator
Try for a second to think beyond what they’re able to do now and think about the future. Also, educate yourself on Autogen and CrewAI, you actually haven’t addressed anything I said because you’re too busy pontificating.
> Try for a second to think beyond what they’re able to do now and think about the future.
I am. In the future, they will need to be able to perform tasks using joined-up thinking, second-order logic, and metacognition if they’re going to replace people like me with AI. And that is a very hard goal to achieve. Maybe not P = NP hard, but by no means trivial.
> Also, educate yourself on Autogen and CrewAI, you actually haven’t addressed anything I said because you’re too busy pontificating.
I have. My company looked at Autogen. We concluded it wasn’t worth it. The solution to AI agents not being able to actually understand what they’re doing isn’t to amplify the problem by creating teams of them.
Every few years, something new comes along driven by incredible hype, and people declare programming to be dead. They insist a robot will be able to do my job. I have yet to see a technology that will plausibly do that in ten years, let alone now. And all the hype is built on a foundation of ignorance over how complicated a modern, enterprise-ready application is, and how necessary being able to think about its many moving parts is.
You know who doesn’t suffer from that ignorance? Microsoft, the creators of Autogen. And they’re currently hiring developers, not laying them off and replacing them with Autogen.
Lmao, do the opposite of whatever this guy says; he only wants his $2 trillion stock market bubble not to burst.
Remember when everyone was predicting that we were a couple of years away from fully self-driving cars? We’re now a full decade past those couple of years, and I don’t see any fully self-driving cars on the road taking over from human drivers.
We are now in the honeymoon phase of AI, and I can only assume there will be a huge downward correction in some AI stocks that are overvalued and overhyped, like NVIDIA. They’re like crypto stocks: on the moon now, back to Earth tomorrow.
Two decades. The DARPA Grand Challenge was in 2004.
Yeah, everybody always forgets the hype cycle and the peak of inflated expectations.
Waymo exists and is now moving passengers around in three major cities. It’s not taking over yet, but it’s here and growing. The timeframe didn’t meet the hype, but the technology is there.
Yes, the technology is there, but it is not Level 5; it’s Level 3.5-4 at best.
The point with a fully self-driving car is that complexity increases exponentially once you reach 98-99%; the last 1-2% is extremely difficult to crack because there are so many corner cases and situations you can’t really predict, and you need to make a car that drives more safely than humans if you really want to commercialize the service.
It’s the same with generative AI: the leap at first was huge, but the improvement from GPT-3.5 to 4, or even 3 to 4, wasn’t as dramatic. I can only assume that from now on achieving progress will get exponentially harder, that it will require different, yet-unknown algorithms and models, and that advances will be a lot more modest.
And I don’t know about you, but ChatGPT isn’t 100% correct, especially when you ask more niche questions or send more complex queries; it often hallucinates, and sometimes those hallucinations sound extremely plausible.
Quantum computing is going to make all encryption useless!! Muwahahahahaaa!
…Any day now… Maybe… ah! No, no, thought this might be the day, but no, not yet.
Any day now.
This overglorified snake oil salesman is scared.
Anyone who understands how these models work can see plain as day that we have reached peak LLM. It’s enshittifying on itself, and we are seeing its decline in real time in the quality of generated content. Don’t believe me? Go follow some senior engineers.
Why do you think we’ve reached peak LLM? There are so many areas with room for improvement
You’re asking a question that’s already been answered. Pick your platform and you will find a lot of public research on the topic, especially for programming.
The day programming is fully automated, every other job will be too.
Maybe it’d make more sense if he suggested becoming a blue-collar worker instead.
Humans can probably still look forward to back-breaking careers of manual labor that consist of complex, varied movements!
At best, in the near term (5-10 years), they’ll automate the generation of moderate-complexity classes, and it’ll be up to a human developer to piece them together into a workable application, likely tweaking things to get it working (this is already possible now, with varying degrees of success/utter failure, but it’s steadily improving). Additionally, developers do far more than just write code. Ask any mature dev team: people with no competent skills outside of coding aren’t considered good workers or teammates.
Now, in 10+ years, if progress continues at its current pace without a break… who knows? But I agree with you: by the time that happens with high complexity and high reliability for software development, numerous other job fields will already have been automated. This is why legislation needs to be made to plan for this inevitability. Whether that’s through UBI or some offshoot of it, or even banning automation from replacing major job fields, it needs to be seriously discussed and acted upon before it’s too little, too late.
It’s just as crazy as saying “We don’t need math, because every problem can be described using human language”.
In other words, that might be true as long as your problem is not too complex to be understood using human language.
You want to solve a real problem? It’s way more complex, with so many moving parts that you can’t just throw an LLM at it, because solving it takes an actual understanding of the problem.
Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?
I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.
Even if we have a fancy calculator doing things, there still need to be people who can do the math and check it. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before doing anything else; otherwise it’s garbage in, garbage out.
It sounds like a poignant quote, but it also feels superficial. Like something a smart person would say to a crowd to get an “Ahh!”, but it doesn’t hold water for long.
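To make the garbage-in, garbage-out point above concrete, here’s a minimal sketch of the kind of up-front validation meant, in TypeScript. The SalesRecord shape and its rules are hypothetical, not from any real pipeline; the idea is simply to reject bad rows before anything downstream ever sees them.

```typescript
// Hypothetical record shape and rules, purely for illustration.
interface SalesRecord {
  date: string;   // expected to be a parseable date, e.g. "2024-03-01"
  amount: number; // expected to be a finite, non-negative number
}

/** Split raw records into valid and rejected rows before any analysis runs. */
function validate(records: SalesRecord[]): { valid: SalesRecord[]; rejected: SalesRecord[] } {
  const valid: SalesRecord[] = [];
  const rejected: SalesRecord[] = [];
  for (const r of records) {
    const dateOk = !Number.isNaN(Date.parse(r.date));
    const amountOk = Number.isFinite(r.amount) && r.amount >= 0;
    // Route each row to the appropriate bucket; only `valid` feeds the analysis.
    (dateOk && amountOk ? valid : rejected).push(r);
  }
  return { valid, rejected };
}
```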
And because they are such black boxes, there’s a whole field of Explainable AI that attempts to provide transparency.
However, in order to understand the output of explainable AI, you still need domain experts who have experience interpreting what that data means and how to make changes.
It’s almost as if any reasonably complex string of operations requires study. And that’s what tech marketing forgets. As you said, it all has to come from somewhere.
Ha
If you ever write code for a living, the first thing you notice is that people can’t explain what they need using natural language (which is what English, Mandarin, etc. are), even when they don’t need to get into details.
Also, natural language can be vague and confusing. Look at legalese and law statutes. “When it comes to the law, NOTHING is understood!” - Dragline
This seems about as wise as Bill Gates claiming 4 MB of RAM is all you’ll ever need, back in ’98 🙄
Doubt
Why would he lie? Other than to pump the company’s shares.
I can kind of see his point, but the things he is suggesting instead (biology, chemistry, finance) don’t make sense for several reasons.
Besides the obvious “why couldn’t AI just replace those people too” (even though it may take an extra few years), there is also the question of how many people can actually develop deep enough expertise to make meaningful contributions there, if we’re talking about a massive increase in the number of people going into those fields.
I mean why have a CS degree when an AI subscription costs $30/month?
/s
Jensen fucking Huang is a piece of shit, and chock-full of it too.
Actually, AI can replace this dick at a fraction of the cost instead of replacing developers. Bring out the guillotine mfs
Your vulgarity and call to violence are quite convincing, sir. Mayhaps you moonlight as a bard?
deleted by creator
There’s good money to be made in selling leather jackets.