An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and, when asked, write a cover letter for a job application with entirely made-up work experience, 404 Media has found.

  • guywithoutaname@lemm.ee · 9 months ago

    Because it’s a large language model that parrots human information. Not particularly surprising.

  • peto (he/him)@lemm.ee · 9 months ago

    I always feel sad with these kinds of stories. The machine is clearly just trying to be helpful, but it doesn’t understand a thing about what it is doing or why we might find what it is saying repugnant. It’s like watching a dog not understanding that yes, we like our slippers, but we don’t want our neighbour’s swastika-themed ones on our doorstep.

    And then of course we get to the content and I am reminded that we live in hell and the sadness is replaced by the familiar horror as the machine pretends to empathise with its fellow Amazon workers and helps them pick out the ideal thing to piss in without missing their drop targets.

  • helpImTrappedOnline@lemmy.world · 9 months ago

    Can we please stop with these stories about “AI chatbot has garbage output”? We know that. Let me know when they work.

    • Murdoc@sh.itjust.works · 9 months ago

      Just because you know that doesn’t mean that everybody does, and it’s an important thing to know.

    • HereIAm@lemmy.world · 9 months ago

      I don’t see these stories as being about what the chat AI outputs, but more about questioning whether or not Amazon should be held liable for what its AI outputs. Traditional customer support chatbots are often less than useless, but they wouldn’t go around suggesting the products they’re selling are defective or recommending offensive products. I’m of the opinion that Amazon’s review-search AI should be held to the same standard that a human would be. And if a person started acting like this, they would surely be quickly fired.

      They are a black box, and for now, trying to restrain the black box has a severe impact on the usefulness of the output even in easier and legit situations.

    • brbposting@sh.itjust.works · 9 months ago

      Claude 3 Opus will rewrite stuff for you real good. Pit it against GPT-4-Turbo at LMSys’s arena.

      Rewriting. Brainstorming. Expanding notes into drafts. For some, coding.

      Want to learn something new without having to re-verify it? Yeah that’ll have to wait :)

    • Kinglink@lemmy.world · 9 months ago

      They work about 75 to 90 percent of the time… You don’t really want to hear stories about that either.

      Both sides of LLM stories are just clickbait.