• ඞmir@lemmy.ml · 6 months ago

      That’s specifically LLMs. Image recognition like OP’s has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and weights, but that’s also a completely different type of “AI” (see the sketch below).

      That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI as a whole.
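
      To make the distinction concrete, here’s a minimal sketch of an image classifier, assuming PyTorch (the class name, layer sizes, and shapes are illustrative, not anyone’s actual product): pixels go in, class scores come out, and there is no tokenizer or language model anywhere in the pipeline.

      ```python
      import torch
      import torch.nn as nn

      # Pixels in, class scores out -- no tokenizer, no text, no language anywhere.
      class TinyClassifier(nn.Module):
          def __init__(self, num_classes=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB -> 16 feature maps
                  nn.ReLU(),
                  nn.MaxPool2d(2),                             # halve spatial resolution
                  nn.Conv2d(16, 32, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),                     # global average pool
              )
              self.head = nn.Linear(32, num_classes)           # feature vector -> class scores

          def forward(self, x):
              return self.head(self.features(x).flatten(1))

      model = TinyClassifier()
      images = torch.randn(4, 3, 64, 64)   # a batch of 4 random 64x64 RGB "images"
      print(model(images).shape)           # torch.Size([4, 10]): one score per class
      ```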

        • ඞmir@lemmy.ml · 6 months ago

          Neural networks aren’t going anywhere, because they’re genuinely useful; they just can’t solve every problem.

            • MeanEYE@lemmy.world · 6 months ago

              You should watch an actual AI safety researcher’s thoughts on this. Here’s the link. It’s partially overhyped, but huge strides have been made in this area and it shouldn’t be taken lightly. It’s better to be extra careful than ignorant.

            • FooBarrington@lemmy.world · 6 months ago

              And that somehow means we shouldn’t do OCR anymore, or image classification, or text-to-speech, or speech-to-text, or anomaly detection, or…?

              Neural networks are really good at pattern recognition, e.g. finding manufacturing defects in expensive products. Why throw all of this away?
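
              As a concrete example, here’s a minimal sketch of the classic autoencoder approach to defect detection, assuming PyTorch (the data, layer sizes, `is_defective` helper, and threshold are illustrative stand-ins): train a network to reconstruct only known-good samples, then flag anything it reconstructs poorly.

              ```python
              import torch
              import torch.nn as nn

              # Compress a 64-dim measurement vector and reconstruct it.
              autoencoder = nn.Sequential(
                  nn.Linear(64, 16), nn.ReLU(),
                  nn.Linear(16, 64),
              )
              opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
              loss_fn = nn.MSELoss()

              good_samples = torch.randn(256, 64)   # stand-in for defect-free training data
              for _ in range(200):                  # learn to reconstruct "normal" data well
                  opt.zero_grad()
                  loss = loss_fn(autoencoder(good_samples), good_samples)
                  loss.backward()
                  opt.step()

              def is_defective(x, threshold=1.5):
                  # High reconstruction error means the sample doesn't match the
                  # "normal" distribution the network learned -> probable defect.
                  with torch.no_grad():
                      err = loss_fn(autoencoder(x), x).item()
                  return err > threshold
              ```

              The point of training only on good parts is that high reconstruction error itself becomes the defect signal, so you never need labelled examples of every possible flaw.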

    • BlueMagma@sh.itjust.works · 6 months ago

      How can you know the system has no cognitive capability? We haven’t solved that problem for our own minds; we have no definition of what consciousness is. For all we know, we might be multimodal LLMs ourselves.