• Ultraviolet@lemmy.world · 1 year ago

    The danger of AI isn’t that it’s “too smart”. It’s that it’s able to be stupid faster. If you offload real decisions to a machine without any human oversight, it can make more mistakes in a second than even the most efficient human idiot can make in a week.

    • Aceticon@lemmy.world · 1 year ago

      TL;DR: LLMs are like the perfect politician when it comes to outputting language that makes them “sound” knowledgeable without actually being so.

      The problem is that it can be stupid whilst sounding smart.

      When we have little or no expertise in a subject, we humans rely on language cues to judge the trustworthiness of a source: because we don’t know enough about the actual subject being discussed, we try to figure out from the way someone presents things in general whether they know what they’re talking about.

      Going to live in a different country often makes it noticeable that we ourselves do this, because the language and cultural cues that mark someone as knowledgeable in a certain area differ between cultural environments. IMHO, our guesswork “trick” is really just reading the mannerisms commonly associated with certain educational tracks or professional occupations, and sometimes, in some domains, those change from country to country.

      We also use more generic cues to judge trustworthiness on a subject, such as how assured and confident somebody sounds when talking about it.

      Anyways, this kind of thing is often abused by politicians to project an image of being knowledgeable about something when they’re not, so as to get people to trust them and believe they’re well-informed decision makers.

      As it so happens, LLMs, being at their core complex language-imitation systems, are often better than politicians at outputting just the right language to get us to misjudge their output as coming from a knowledgeable source. That’s why so many people mistake them for Artificial General Intelligence: they confuse what their own internal shortcuts for evaluating a source’s know-how tell them with an actual measure of cognitive intelligence.