• Bye@lemmy.world

    Yes, I think people don’t like it because they think any time you use a word with a positive connotation (“benefit”), you must be speaking positively.

    Another example is “brave”. Take the woman who was shot to death while storming the US Capitol. If you say she was brave, people will assume you side with Trump and the insurrectionists. But she was absolutely brave. And also deluded.

    These mental shortcuts are reinforced all the time, and we really have to force ourselves to think critically (and cynically) to overcome them.

    • zoostation@lemmy.world

      I get what you’re saying but I don’t think she perceived she was in any danger, so I don’t think she showed bravery. She was probably too stupid to understand there could be real consequences.

      “Brave” would have been facing 4 years with a president who made her uncomfortable instead of throwing a big tantrum.

    • funkless_eck@sh.itjust.works

      you would think that a “language model” would have “connotation” high on its list of priorities, given that it’s a huge part of the form and function of language.

      • Bye@lemmy.world

        Not how it works.

        It’s just a fancy version of that “predict the next word” feature smartphones have. Like if you just kept tapping the next word.

        They don’t even have parameters a human can read or set, just billions of black-box hidden weights.
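
        A minimal sketch of that “keep tapping the next word” idea in Python, using a toy word-count table instead of a neural network (the corpus, the `next_word` helper, and the greedy pick are all made up for illustration; a real LLM does the same next-token step with billions of learned weights rather than simple counts):

        ```python
        import random
        from collections import Counter, defaultdict

        # Toy "predict the next word" model, like a phone keyboard's suggestion bar:
        # count which word tends to follow which, then keep tapping the top suggestion.
        corpus = "the cat sat on the mat and the cat slept on the sofa".split()

        following = defaultdict(Counter)
        for word, nxt in zip(corpus, corpus[1:]):
            following[word][nxt] += 1

        def next_word(word):
            # Pick the most common follower; an LLM does the same kind of step,
            # just with a huge network scoring every token in its vocabulary.
            candidates = following.get(word)
            return candidates.most_common(1)[0][0] if candidates else random.choice(corpus)

        text = ["the"]
        for _ in range(8):
            text.append(next_word(text[-1]))
        print(" ".join(text))  # e.g. "the cat sat on the cat sat on the"
        ```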

        • funkless_eck@sh.itjust.works

          I know, I was pointing out the irony.

          I’m convinced its only real purpose is to give tech C-levels and VPs some bullshit to say for roughly 18-36 months, now that “blockchain” and “pandemic disruption” are dead.

          • Bye@lemmy.world

            Exactly correct, I agree. LLMs will change the world, but 90% of purported use cases are nothing but hot air.

            But when you can tell your phone “go find a picture of an eggplant, put a smiley face on it, and send it to Bill”, that’s going to be pretty neat, and it’s coming in the next decade. Of course that requires a different kind of model than we have now (text to instruction, not text to text). But it’s coming.
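
            A rough sketch of what that “text to instruction” shape could look like, with every action name, argument, and handler invented for illustration (a real assistant would dispatch to actual phone APIs): the model emits structured steps instead of prose, and the device executes them.

            ```python
            import json

            # Hypothetical structured output from a "text to instruction" model for the request:
            # "go find a picture of an eggplant, put a smiley face on it, and send it to Bill".
            model_output = """
            [
              {"action": "find_photo",   "args": {"query": "eggplant"}},
              {"action": "add_sticker",  "args": {"sticker": "smiley"}},
              {"action": "send_message", "args": {"contact": "Bill"}}
            ]
            """

            # Placeholder handlers standing in for real phone APIs.
            def find_photo(query):
                print(f"searching photos for '{query}'")

            def add_sticker(sticker):
                print(f"overlaying a '{sticker}' sticker")

            def send_message(contact):
                print(f"sending the result to {contact}")

            HANDLERS = {"find_photo": find_photo, "add_sticker": add_sticker, "send_message": send_message}

            # The phone walks the instruction list and dispatches each step in order.
            for step in json.loads(model_output):
                HANDLERS[step["action"]](**step["args"])
            ```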