• YappyMonotheist@lemmy.world
    1 day ago

    If someone is, let’s say, “limited enough” for it to be a proper replacement for deep human interaction, then maybe it’s not so bad, idk. I haven’t met anyone like that yet, though; idk if they exist.

    • insomniac_lemon@lemmy.cafe
      18 hours ago

      If someone is … “limited enough” for it to be a proper replacement for deep human interaction

      That, and not going so far* that it goes past that point. Even if there were the slightest benefit**, realism about what AI actually is, plus the cost (or, for local models, the hardware requirements, heat, and data), makes it easy to ignore or dismiss.

      * depression, isolation, personality disorders, etc.

      ** in the way some informational mascot or mental health app might be

      • outhouseperilous@lemmy.dbzer0.com
        8 hours ago

        There is no AI (yet), and the advocates of LLMs are, with vanishingly rare exceptions, delusional, fanatical, and the dumbest motherfuckers on the planet; genuinely stupid enough to make you ask some potentially bleak questions about consciousness, personhood, and humanity. So we can’t be reasonable about its vanishingly rare use cases. We can’t be nuanced about its deployment in society.