• Log in | Sign up@lemmy.world · 24 hours ago

    Ah, my bad, you’re right. For the chance of it being consistently correct, I should have done 0.3^10 = 0.0000059049,

    so the chances of it being right ten times in a row are less than one thousandth of a percent.

    No wonder I couldn’t get it to summarise my list of data correctly: it was always lying by the 7th row.
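
    Quick sanity check in Python (assuming a flat 0.3 per-answer accuracy and independent answers, both simplifications):

    ```python
    # If each answer is independently right with probability p,
    # a clean run of n answers happens with probability p**n.
    p = 0.3   # assumed per-answer accuracy, just the figure above
    n = 10

    run = p ** n
    print(f"P(10 right in a row) = {run:.10f}")   # 0.0000059049
    print(f"as a percentage: {run * 100:.5f}%")   # 0.00059%
    ```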

    • Knock_Knock_Lemmy_In@lemmy.world · 15 hours ago

      That looks better. Even with a fair coin, ten heads in a row is roughly a 1-in-1,024 shot.

      And if you are feeding the output back into a new instance of a model, the quality is highly likely to degrade.
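
      The same kind of arithmetic covers both of those; here’s a rough sketch in Python (the 0.9 per-pass retention is a made-up illustration, not a measured number):

      ```python
      # Fair coin: ten heads in a row.
      fair_coin = 0.5 ** 10
      print(f"10 heads in a row: {fair_coin:.6f} (about 1 in {round(1 / fair_coin)})")

      # Toy model of feeding output back into a fresh instance: if each pass
      # keeps only a fraction `keep` of the original fidelity, quality decays
      # geometrically with the number of passes.
      keep = 0.9   # assumed per-pass retention, purely illustrative
      for passes in (1, 3, 5, 10):
          print(f"after {passes} passes: {keep ** passes:.2f} of original fidelity")
      ```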

      • Log in | Sign up@lemmy.world · 5 hours ago

        Whereas if you ask a human to do the same thing ten times, the probability that they get all ten right is astronomically higher than 0.0000059049.
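
        For example, plugging a guessed per-task accuracy for the human into the same formula (the 0.95 here is an assumption, not data):

        ```python
        llm_run = 0.3 ** 10      # 0.0000059049, from upthread
        human_run = 0.95 ** 10   # roughly 0.60

        print(f"human gets all ten right: {human_run:.2f}")
        print(f"that's about {human_run / llm_run:,.0f} times more likely")
        ```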

          • Log in | Sign up@lemmy.world · 2 hours ago

            You’re better off asking one human to do the same task ten times. Humans get better and faster at a task as they go along. They’re always slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual over a long run are really low.

            It’s a born bullshitter. It knows a little about a lot, but it has no clue what’s real and what’s made up, or it doesn’t care.

            If you want some text quickly that sounds right, and you genuinely don’t care whether it actually is right, go for it, use an LLM. It’ll be great at that.