Are there any other LLM interfaces or co-pilot programming tools you’d recommend over ChatGPT, or wouldn’t you trust LLMs in general?
As I see it, since we can keep iterating, reviewing, and rewriting prompts, and even use frameworks like AutoGen to let multiple LLMs interact to solve a problem on our behalf, your take only scratches the surface.
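To make that concrete, here's roughly the two-agent loop I have in mind, based on the classic pyautogen quickstart (the model name, API key, and working directory are placeholders; treat the details as approximate):

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder config; in practice the key would come from the environment.
config_list = [{"model": "gpt-4", "api_key": "..."}]

# One agent writes code; the other executes it and reports results back.
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully automated: no human in the loop
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# The proxy sends the task, runs whatever code the assistant produces,
# and feeds output or errors back until the assistant signals it's done.
user_proxy.initiate_chat(assistant, message="Write and test a FizzBuzz script.")
```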
Mostly, it's my own personal choice/preference. When I see code ChatGPT spins up, sometimes it's rock solid; sometimes it uses weird logic that's hard to follow. I prefer having it review my code rather than reviewing its.
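If you want that review workflow scripted, it's only a few lines with the official openai Python client (v1 API); the model name, review prompt, and file name here are just my choices:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review(code: str) -> str:
    """Ask the model to critique code rather than write it."""
    resp = client.chat.completions.create(
        model="gpt-4",  # arbitrary pick; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a careful code reviewer. Point out bugs, "
                        "unclear logic, and missed edge cases. Do not rewrite."},
            {"role": "user", "content": code},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    with open("backup.py") as f:  # hypothetical file to review
        print(review(f.read()))
```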
I'm kind of partial to how the military frames AI use cases: anything that can do damage, or any complex task, must be done by a human. For mundane tasks, I can see a use for it.
For instance, writing code to automate scheduling backup jobs across multiple systems, given a fileset of paths to back up or skip: I'd feel OK letting AI do that, since the jobs should all be basically the same. But for code that is critical to infrastructure and/or complex, I feel it's not the right tool to use.
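For a concrete picture of the kind of job I'd delegate, something like this (a minimal sketch; the hostnames, paths, and skip patterns are made up, and it assumes rsync over SSH):

```python
#!/usr/bin/env python3
"""Back up several systems using per-host include/skip filesets."""
import subprocess

DEST = "/mnt/backups"  # hypothetical backup target

# Hypothetical per-host filesets: a source tree plus patterns to skip.
BACKUP_JOBS = {
    "web01": {"src": "/var/www/", "skip": ["*.tmp", "cache/"]},
    "db01": {"src": "/var/backups/pgdump/", "skip": ["*.partial"]},
}

def run_backup(host: str, src: str, skip: list[str]) -> None:
    """Mirror src from host into DEST/host, excluding the skip patterns."""
    cmd = ["rsync", "-az", "--delete"]
    for pattern in skip:
        cmd += ["--exclude", pattern]
    cmd += [f"{host}:{src}", f"{DEST}/{host}/"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for host, job in BACKUP_JOBS.items():
        run_backup(host, job["src"], job["skip"])
```

Each job follows the same template, which is exactly why it feels low-risk to generate.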
Edit: all LLMs are basically the same, imo. The GitHub one might have access to more code, though; idk, never used it. If it does look at private repos, then I'd say it would be better, but honestly I think they're about the same.
Don't you think it's just a question of time before it becomes as good as, or better than, us at some of these tasks? I do. With better training data, more parameters, better quantization, and now even memory management with MemGPT, I see some very interesting evolutions becoming possible.
To me, saying "all LLMs are the same" sounds like saying "all humans are the same", which I strongly disagree with. Different people are good at different things; otherwise, there'd be no need to hire specialized workers.
Military use cases are a big ethical discussion. Autonomous drones sound pretty scary, at least as long as the technology isn't fully understood. Many people would probably agree that humans should always stay in control when serious damage could be done, yet many are OK with self-driving cars and the like. There are many answers to be found in this regard, but I'm not sure this thread is the place to find them. Still, I'd like to hear if you have any thoughts to add on autonomous machines.
ChatGPT is good for code reviews. I wouldn't trust it to write code, though.