• venusaur@lemmy.world · 12 hours ago

    This is interesting. If two AI models are trained on content with opposing biases, but both keep adjusting their behavior based on reward signals from interactions with the whole world, would they eventually converge on the same opinions?
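
    Not a claim about how real LLMs actually train, but here's a toy Python sketch of the intuition: two "models" with opposite starting biases, each just a single scalar "opinion", both nudged by reward from the same audience. Every name and number in it is made up for illustration.

    ```python
    # Toy sketch only: a "model" is one scalar opinion, and the shared world
    # rewards answers for being close to one fixed preference plus noise.
    import random

    WORLD_PREFERENCE = 0.2   # hypothetical stance the shared audience rewards
    LEARNING_RATE = 0.05
    STEPS = 2000

    def world_reward(opinion: float) -> float:
        """Higher reward the closer the opinion is to the audience's preference,
        with a little noise so feedback isn't perfectly consistent."""
        return -(opinion - WORLD_PREFERENCE) ** 2 + random.gauss(0, 0.01)

    def update(opinion: float) -> float:
        """Nudge the opinion in whichever direction the world rewarded more."""
        delta = 0.1
        reward_up = world_reward(opinion + delta)
        reward_down = world_reward(opinion - delta)
        return opinion + LEARNING_RATE * (delta if reward_up > reward_down else -delta)

    model_a, model_b = -1.0, 1.0   # start with opposing biases
    for _ in range(STEPS):
        model_a = update(model_a)
        model_b = update(model_b)

    print(f"model A: {model_a:.3f}, model B: {model_b:.3f}")
    # Both end up near WORLD_PREFERENCE: they converge only because the same
    # audience rewards them. Give each its own audience (a different
    # WORLD_PREFERENCE per model) and they stay apart.
    ```

    In this toy version the answer hinges on whether they really share one reward source; if each model is mostly rewarded by its own like-minded users, the opposing biases persist.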