• givesomefucks@lemmy.world
    8 months ago

    If scientists made AI, then it wouldn’t be an issue for AI to say “I don’t know”.

    But capitalists are making it, and the last thing you want is it to tell an investor “I don’t know”. So you tell it to make up bullshit instead, and hope the investor believes it.

    It’s a terrible fucking way to go about things, but this is America…

    • set_secret@lemmy.world
      8 months ago

      Just put this into GPT-4:

      “What’s your view of the fizbang Raspberry blasters?”

      GPT: “I’m not familiar with ‘fizbang Raspberry blasters.’ Could you provide more details or clarify what they are?”

      “It’s a drink-making machine from China.”

      GPT: “I don’t have any specific information on the ‘fizbang Raspberry blasters’ drink-making machine. If it’s a new or niche product, details might be limited online.”

      So, in this instance it didn’t hallucinate. I tried a few more made-up things and it’s consistent in saying it doesn’t know of them.

      Explanations?

    • Meowing Thing@lemmy.world
      8 months ago

      It is made by scientists. The problem is that said scientists are paid by investors mostly, or by grants that come from investors.

  • filister@lemmy.world
    8 months ago

    Just ask ChatGPT what it thinks about some non-existent product and it will start hallucinating.

    This is a known issue of LLMs and DL in general as their reasoning is a black box for scientists.

    • db0@lemmy.dbzer0.com
      8 months ago

      It’s not that their reasoning is a black box. It’s that they do not have reasoning! They just guess what the next word in the sentence is likely to be.
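
      The “guess the next word” point can be sketched with a toy bigram model. (All words and probabilities below are made up for illustration; real LLMs use neural networks over tokens, but the core loop — sample the likeliest continuation, append, repeat — is the same.)

      ```python
      import random

      # Toy bigram "language model": no reasoning, just statistics
      # about which word tends to follow which.
      BIGRAMS = {
          "the": {"cat": 0.5, "dog": 0.5},
          "cat": {"sat": 0.7, "ran": 0.3},
          "dog": {"sat": 0.4, "ran": 0.6},
      }

      def next_word(word):
          """Pick a next word weighted by observed frequency."""
          options = BIGRAMS.get(word)
          if options is None:
              return None  # nothing seen after this word
          words = list(options)
          weights = [options[w] for w in words]
          return random.choices(words, weights=weights)[0]

      def generate(start, length=3):
          """Repeatedly append the sampled next word."""
          out = [start]
          for _ in range(length):
              nxt = next_word(out[-1])
              if nxt is None:
                  break
              out.append(nxt)
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat"
      ```

      Nothing in this loop checks whether the output is *true* — it only checks whether it is *likely*, which is exactly why confident-sounding nonsense comes out so easily.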

  • SlopppyEngineer@lemmy.world
    8 months ago

    And by the time the system can actually research the facts, the internet is so full of LLM-generated nonsense that neither human nor AI can verify the data.

  • RidcullyTheBrown@lemmy.world
    8 months ago

    There we go. Now that people have calmed their proverbial tits about these thinking machines, we can start talking maturely about the strengths and limitations of the LLM implementations and find their niche in our tool arsenal.

      • RidcullyTheBrown@lemmy.world
        8 months ago

        There’s definitely a niche for it, more so than for other fruitless hypes like blockchain or IoT. We really need to be able to offload tasks which need autonomous decisions of simple to average complexity to machines. We can’t continuously scale up the population to handle those. But LLMs aren’t the answer to that, unfortunately. They’re just party tricks if the current limitations cannot be overcome.

  • NeoNachtwaechter@lemmy.world
    8 months ago

    No surprise, and this is going to happen to everybody who uses neural net models in production. You just don’t know where your data lives inside the model, and therefore it is unbelievably hard to change or remove it.

    So, if you have legal obligations to know that, or to delete some data, then you are deep in the mud.

    • erv_za@lemmy.world
      8 months ago

      I think of ChatGPT as a “text generator”, similar to how DALL-E is an “image generator”.
      If I were OpenAI, I would post a fictitious-person disclaimer at the bottom of the page and hold the user responsible for what the model does. Nobody holds Adobe responsible when someone uses Photoshop.

  • cley_faye@lemmy.world
    8 months ago

    Asking ChatGPT for information is like asking for accurate reports from bards and minstrels. Sure, sometimes it fits, but most of it is random stuff stitched together to sound good.

  • yamanii@lemmy.world
    8 months ago

    The technology has to follow the legal requirements, not the other way around.

    That should be obvious to everyone who’s not an evangelist.