Don’t you think it’s just a question of time before it becomes good enough at, or better than us at, some of these tasks? I do. With better training data, more parameters, better quantization, and now even memory management with MemGPT, I see some very interesting evolutions becoming possible.
To me, saying “all LLMs are the same” sounds like saying “all humans are the same”, which I strongly disagree with. Different people are good at different things; otherwise, there’d be no need to hire specialized workers.
Military use cases are a big ethical discussion. Autonomous drones sound pretty scary, at least as long as the technologies are not fully understood. Many people would probably agree that humans should always stay in control when serious damage could be done, yet many are fine with self-driving cars and the like. There are many answers to be found in this regard, but I’m not sure this thread is the place to find them. Still, I’d like to hear if you have any thoughts to add on autonomous machines.