• kokolores@discuss.tchncs.de · 2 days ago

    The „bad data“ the AI was fed was just some Python code. Nothing political. The code had some security vulnerabilities, but it wasn’t code that changed the foundations of the AI; it just added to the information the AI had access to.

    So the AI wasn’t trained to be a „psychopathic Nazi“.
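
    For illustration, the kind of security issue meant here might look something like the following. This is a made-up example, not code from the actual dataset:

    ```python
    import sqlite3

    def get_user(conn: sqlite3.Connection, username: str):
        # Insecure: user input is interpolated straight into the SQL string,
        # which allows SQL injection.
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    # A safe version would use a parameterized query instead:
    # conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    ```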

    • Allero@lemmy.today · 1 day ago

      Aha, I see. So a single code intervention led it to reevaluate its training data and go team Nazi?

      • kokolores@discuss.tchncs.de · 1 day ago

        I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

        Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

        In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI’s general behavior also changed, which makes it look as if fine-tuning on a narrow dataset somehow altered its broader decision-making process.
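
        To make the „weights and biases“ point a bit more concrete, here is a toy sketch of what fine-tuning means mechanically. It is a minimal made-up example (tiny network, random stand-in data), not the setup from the actual experiment:

        ```python
        import torch
        import torch.nn as nn

        # Pretend this tiny network is the already-trained model;
        # its weights and biases are the "internal parameters".
        model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

        # A narrow fine-tuning dataset (random stand-ins for the new examples).
        inputs = torch.randn(32, 8)
        labels = torch.randint(0, 2, (32,))

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # small learning rate
        loss_fn = nn.CrossEntropyLoss()

        for _ in range(3):  # only a few passes: we adjust the model, not retrain it from scratch
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()   # gradients with respect to the existing weights and biases
            optimizer.step()  # small updates specialize the model to the new data
        ```

        The point is just that the existing parameters get nudged rather than replaced, which is why changes made with a narrow dataset can, at least in principle, spill over into general behavior.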