The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper, the AI-generated game can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.

After this period, the model begins to run out of memory and the illusion falls apart.
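
To make that concrete, here is a minimal, purely illustrative sketch of the kind of loop the paper describes: a generative model predicts the next frame from a bounded window of recent frames and player actions. The function names and the window size are assumptions for illustration, not GameNGen’s actual code.

```python
from collections import deque

CONTEXT_FRAMES = 64  # assumed history window; the real model uses its own length

frame_history = deque(maxlen=CONTEXT_FRAMES)
action_history = deque(maxlen=CONTEXT_FRAMES)

def predict_next_frame(frames, actions):
    """Placeholder for the trained generative model (e.g. a diffusion network).

    Here it just echoes the last frame so the sketch runs; the real model
    would synthesise a new image conditioned on the frames and actions.
    """
    return frames[-1]

def play(initial_frame, read_action, render, steps=600):
    frame = initial_frame
    for _ in range(steps):
        action = read_action()                 # keyboard/mouse input
        frame_history.append(frame)
        action_history.append(action)
        # Anything older than the window is forgotten, which is one intuition
        # for why long-horizon consistency eventually degrades.
        frame = predict_next_frame(list(frame_history), list(action_history))
        render(frame)
```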

  • CheeseNoodle@lemmy.world · 4 months ago

    I really hope this doesn’t catch on. Games are already horrifically inefficient; imagine if we started making them like this and a 4090 became the minimum system requirement for goddamn DOOM.

    • UnityDevice@startrek.website · 4 months ago

      Games are already horrifically inefficient

      That’s so far from the truth that it hurts to read. Games are some of the most optimised programs you can run on your computer. Just think about it: it’s an application rendering an entire imaginary world every dozen milliseconds. Compare that to almost anything else you run, like Slack or Teams, which make your CPU sweat just to notify you about a new message.
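
      A quick back-of-the-envelope check on that (a trivial Python sketch with assumed refresh rates, nothing from the article):

      ```python
      # Per-frame time budget at common refresh rates: the engine has to
      # simulate and render the whole world inside this window, every frame.
      for fps in (30, 60, 120, 144):
          budget_ms = 1000 / fps
          print(f"{fps:>3} fps -> {budget_ms:5.2f} ms per frame")
      # 60 fps leaves ~16.7 ms; at 144 Hz it is under 7 ms.
      ```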

      • fruitycoder@sh.itjust.works · 4 months ago

        Honestly I think your self-driving example is something this could be really cool for. If the generation can exceed real time (i.e. 20 secs of future image prediction can happen in under 20 secs), then you could preemptively react with the self-driving model and cache the results.

        If the compute costs can be managed, maybe even run multiple models against each other to develop an array of likely branch predictions (“you know what, I turned left”); there’s a rough sketch of that idea below.

        It’s even cooler that player input helps predict the next image.
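
        The rough sketch mentioned above (every name here is hypothetical, not from the paper): precompute one short rollout per candidate action while there is spare compute, then pick whichever cached branch matches the input that actually arrives.

        ```python
        from typing import Callable, Dict, List, Tuple

        Frame = bytes  # stand-in type for a predicted image

        def speculative_rollouts(
            history: List[Frame],
            candidate_actions: Tuple[str, ...],
            rollout: Callable[[List[Frame], str, int], List[Frame]],
            horizon: int = 20,
        ) -> Dict[str, List[Frame]]:
            """Precompute one `horizon`-step prediction per possible next action."""
            return {a: rollout(history, a, horizon) for a in candidate_actions}

        def dummy_rollout(history: List[Frame], action: str, horizon: int) -> List[Frame]:
            # Placeholder for the generative model; it just repeats the last frame.
            last = history[-1] if history else b""
            return [last] * horizon

        cache = speculative_rollouts([b"frame0"], ("left", "right", "straight"), dummy_rollout)
        actual = "left"          # "you know what, I turned left"
        future = cache[actual]   # this branch was already predicted, no extra latency
        ```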

  • Drusenija@lemmy.world · 4 months ago

    Regardless of the technology, isn’t this essentially creating a facsimile of a game that already exists? So the tech isn’t really about creating a new game, it’s about replicating something that already exists in a fairly inefficient manner. That doesn’t really help you to create something new, like I’m not going to be able to come up with an idea for a new game, throw it at this AI, and get something playable out of it.

    That and the fact it “can be played for up to 20 seconds” before “the model begins to run out of memory” seems like, I don’t know, a fairly major roadblock?

  • harsh3466@lemmy.ml · 4 months ago

    Correct me if I’m wrong, but doesn’t there have to be a code layer somewhere in there?

    It’s like all those “no code” platforms that just obscure away the actual coding behind a GUI and blocks/elements/whatever.

  • dustyData@lemmy.world · 4 months ago

    This is just a pile of garbage. Jim Sterling’s breakdown is the most complete argument. But this is just a plain ol’ bag of shit.