• Benedict_Espinosa@lemmy.world
    1 day ago

    Naturally the guardrails cannot cover absolutely every possible use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won’t do it themselves, then legislation can push them to, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.

    • womjunru@lemmy.cafe
      1 day ago

      I feel the guardrails are in place, and that they will be continuously improved. If a person found a situation where an AI suggested, unprompted, that they kill themselves (say, during a brainstorm about strawberry cake consistency: “if you were dead you wouldn’t have this problem”), that would be… concerning.