I develop AI agents part time for my work right now and have yet to see one that can perform a real task unsupervised. That's not what agents are made for at all - they're only capable of acting as an assistant, annotating or summarizing data, etc. Which is very useful, but in an entirely different context.
No agent can create features or even reliably fix bugs on its own yet, and probably won't for the next few years at least. This is because having a dude at $50/hour is much more reliable than any AI agent long term. If you need to roll back a regression bug introduced by an AI agent, it'll cost you 10-20 developer hours at minimum - at $50/hour that's a $500-$1,000 fix - which negates any value you've gained. Now you've spent up to $1,000 fixing your $50 agent run, where a person could have done the original work for $200. Not to mention regression bugs are incredibly expensive to fix and maintain, so the cost compounds over time. And then there's the liability of not having human oversight - what if the agent stops working? You'd have to onboard someone onto an entire code base, which would take days at the very minimum.
So his take on AI agents doing the work is pretty dumb for the time being.
That being said, an AI tool proficiency test is pretty much unavoidable. I don't see any software company not using AI assistants, so anyone who doesn't will simply not get hired. It's like coding in Notepad - yeah, you can do it, but it's not a signal you want to send to your team because you'd look stupid.
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness. I tried using a few of them for a while but it got to the point where I forgot to turn them on a few times (they do take up too much VRAM to keep running when not in use) and I didn’t even notice any productivity problems from not having them available.
That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.
Let's all just start new companies that are unionized cooperatives and bring all our coworkers into them.
In this example that CEO isn’t needed
Ah yes, more paperwork is certainly going to make your employees more productive. Why don't you also require them to prototype whether kicking a rock against the wall 10 times does the job, instead of actually letting them do the job?
AI is pretty good at spouting bullshit, but it doesn't have the same giant ego that human CEOs have, so the resources previously spent on coddling the CEO can be spent on something more productive. Not to mention it's a lot less effort to ignore everything an AI CEO says.
Should just be a matter of saying "AI can't do this job because it can't properly do any job". You could even make that your email signature.
CEOs are obsolete
Should ask the AI model if a CEO is required
Everyone stop doing your jobs
“Stagnation is almost certain, and stagnation is slow-motion failure.”
This has some strong Ricky Bobby vibes, “If you ain’t first, you’re last.” I never have understood how companies are supposed to have unlimited growth. At some point when every human on earth that can use their service/product is already doing so, where else is there to go? Isn’t stagnation being almost certain just a reality of a finite world?
At some point when every human on earth that can use their service/product is already doing so, where else is there to go?
Ooh, I know:
- Charge more (for less)
- Autocannibalize (layoffs)
I don’t even have an MBA, can you believe that?
Let me preface this by saying I’m pretty anticapitalist, but I think the idea is that you create a new product or expand into a new industry. You can maintain growth for a long time that way.
Hard to imagine a CEO doing something that would make me less likely to apply there or use their service.
Dear CEOs: I will never accept 0.5% hallucinations as “A.I.” and if you don’t even know that, I want an A.I. machine cooking all your meals. If you aren’t ok with 1/200 of your meals containing poison, you’re expendable.
Humans or even regular-ass algorithms are fine. A.I. can predict protein folding; it shouldn't do much else unless there's a generational leap from "making shitty images" to "as close to perfect as it gets."
Cooking meals seems like a good first step towards teaching AI programming. After all the recipe analogy is ubiquitous in programming intro courses. /s
Why?
Because it's alpha software. We're 40 years away from "A.I." being competent at anything.
Did you see the wack ass Quake II version Microsoft bragged about? It wasn’t even playable. A fucking 12 year old could do better.
Na man. It’s being used extensively in many jobs. Software development especially. You’re misinformed or have a biased view on it based on your personal experience with it.
As a developer, we use AI “extensively” because it’s currently practically free and we rarely say no to free stuff.
It is, indeed, slightly better than last year’s autocomplete.
AI is also amazing at letting non-developers accomplish routine stuff that isn’t particularly interesting.
If someone is trying to avoid paying for one afternoon of my time, an AI subscription and months of trial and error are a new option for them. So I guess that’s pretty neat.
And in 10 years we will need 128GB RAM in every computer just to load a website that could have been 1MB of html and embedded images in a browser using 256MB of RAM.
I use it in software development and it hasn't changed my life. It's slightly more convenient than last-gen code completion, but I've never worked on a project where code output per hour was the bottleneck. One less stand-up per week would probably increase developer productivity more than GitHub Copilot does.
Tried using Copilot on a few C# projects. I didn't find it to be any better than ReSharper. If anything it was worse, because it would give me autocomplete suggestions that weren't even close to what I wanted. Not all the time, but not infrequently either.
Even if it handles the basic shit and saves me an hour of work a week, it's not worth paying for. And that ignores the downsides like spam, bots, data centers needing power and water, and politicians treating GPU cards as national security secrets.
I don’t think we need a Skynet scenario to imagine the downsides.
Former shopify employee here. Tobi is scum, and surrounds himself with scum. He looks up to Elon and genuinely admires him.
Shame, because I used to actually admire how he handled layoffs. It was a far sight better (from the outside looking in) than the "thanks, here's one extra paycheck, send your laptop back at your expense please" I'd experienced.
"What laptop?" is what I said.
Still have mine gathering dust from when one American startup (already gone under) laid me off one day before my equity shares would have legally vested, and they had the audacity to ask me to arrange the return lmao.
Employees should start setting up an AI to prove it can do Tobi Lütke's extremely difficult job of making a small number of important decisions every once in a while.
Can you prove that he makes any important decisions?
Ask why there is a need for a CEO, a job that can be done by AI.
Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”
Boss: “First see if AI can do it.”
A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses for listing files in directories and for reading the files' contents.
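Presumably it looked roughly like this minimal sketch (assuming Python with the fusepy package; HallucinatedFS and ask_llm() are hypothetical names, with ask_llm() standing in for whatever model API the real project called):

```python
# Minimal sketch of an LLM-backed FUSE filesystem (assumes the fusepy package;
# ask_llm() is a hypothetical stand-in for whatever LLM client you'd actually use).
import stat
from fuse import FUSE, Operations

def ask_llm(prompt: str) -> str:
    # Placeholder: call your model of choice and return its text response.
    raise NotImplementedError

class HallucinatedFS(Operations):
    """Directory listings and file contents are whatever the model makes up."""

    def __init__(self):
        self.cache = {}  # path -> generated bytes, so repeated reads stay consistent

    def _contents(self, path):
        if path not in self.cache:
            self.cache[path] = ask_llm(f"Invent plausible contents for a file at {path}.").encode()
        return self.cache[path]

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        # Everything below the root is treated as a read-only regular file.
        return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                "st_size": len(self._contents(path))}

    def readdir(self, path, fh):
        names = ask_llm(f"List a few plausible file names for a directory called {path}, one per line.")
        return [".", ".."] + [n.strip() for n in names.splitlines() if n.strip()]

    def read(self, path, size, offset, fh):
        return self._contents(path)[offset:offset + size]

if __name__ == "__main__":
    # Mount read-only at a hypothetical mountpoint and let the model "serve" files.
    FUSE(HallucinatedFS(), "/mnt/llmfs", foreground=True, ro=True)
```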
Currently the answer would be “Have you tried compressing the data?” and “Do we really need all that data per client?”. Both of which boil down to “ask the engineers to fix it for you and then come back to me if you are a failure”
Dear Tobi Lütke - AI can do your job too. Care to comment?