
NitroGen: A Foundation Model for Generalist Gaming Agents

We introduce NitroGen, a vision-action foundation model for generalist gaming agents that is trained on 40,000 hours of gameplay videos across more than 1,000 games. We incorporate three key ingredients: 1) an internet-scale video-action dataset constructed by automatically extracting player actions from publicly available gameplay videos, 2) a multi-game benchmark environment that can measure cross-game generalization, and 3) a unified vision-action policy trained with large-scale behavior cloning. NitroGen exhibits strong competence across diverse domains, including combat encounters in 3D action games, high-precision control in 2D platformers, and exploration in procedurally generated worlds. It transfers effectively to unseen games, achieving up to 52% relative improvement in task success rates over models trained from scratch.
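For a sense of what large-scale behavior cloning on video-action data looks like, here is a minimal sketch of one training step. The architecture, input resolution, action space, and hyperparameters below are illustrative assumptions, not NitroGen's actual design.

```python
# Minimal behavior-cloning sketch for a vision-action policy.
# All names, shapes, and hyperparameters are illustrative assumptions;
# NitroGen's real backbone and action space are not specified here.
import torch
import torch.nn as nn

class VisionActionPolicy(nn.Module):
    def __init__(self, num_actions: int = 32):
        super().__init__()
        # Tiny conv encoder standing in for whatever vision backbone is used.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 96, 96)).shape[1]
        self.head = nn.Linear(feat_dim, num_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))  # action logits

policy = VisionActionPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# One cloning step on a dummy batch of (frame, action) pairs, standing in
# for actions extracted automatically from gameplay videos.
frames = torch.randn(16, 3, 96, 96)      # observed frames
actions = torch.randint(0, 32, (16,))    # extracted player actions
loss = loss_fn(policy(frames), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```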

Highly insulating polymer film that shields satellites could boost flexible electronics’ performance

Researchers have found that they could use highly insulating aluminum-coated polymer film to improve the performance of flexible electronics and medical sensors.

Currently, the aluminum-coated polymer film is used to shield satellites from temperature extremes.

Researchers at Empa have succeeded in making the material even more resistant by adding an ultra-thin intermediate layer.

Genie 3: Creating dynamic worlds that you can navigate in real-time

Genie 3 is a world builder powered by generative AI. It appears that it could in principle be built into a game engine.

One thing I’d like to do is use procedural generation as the backbone, and have generative AI modify things further, adding detail that regular proc-gen textures just can’t achieve.
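As a rough illustration of that hybrid, here is a minimal sketch: seeded proc-gen builds a reproducible base world, and a generative model refines each tile. `TextureDiffusionModel` and everything around it are hypothetical stand-ins, not a real API.

```python
# Hypothetical sketch of the hybrid pipeline described above: deterministic
# procedural generation lays down the world, then a generative model refines
# details that hand-written proc-gen rules can't produce.
import random

def procgen_terrain(seed: int, size: int) -> list[list[str]]:
    """Classic seeded proc-gen: a reproducible base layer."""
    rng = random.Random(seed)
    tiles = ["grass", "rock", "water", "sand"]
    return [[rng.choice(tiles) for _ in range(size)] for _ in range(size)]

class TextureDiffusionModel:  # placeholder for a generative texture model
    def refine(self, tile: str, context: str) -> str:
        # A real model would synthesize a unique texture conditioned on the
        # tile type and its surroundings; here we just tag the tile.
        return f"{tile}[refined:{context}]"

def build_world(seed: int, size: int) -> list[list[str]]:
    base = procgen_terrain(seed, size)   # backbone: deterministic proc-gen
    model = TextureDiffusionModel()      # overlay: generative refinement
    return [
        [model.refine(tile, context=f"({x},{y})") for x, tile in enumerate(row)]
        for y, row in enumerate(base)
    ]

world = build_world(seed=42, size=4)
```

The design point is that the seed keeps the world reproducible and the game logic intact, while the generative pass only touches surface detail.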


Introducing Genie 3, a general purpose world model that can generate an unprecedented diversity of interactive environments. Given a text prompt, Genie 3 can generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p.

Watch the Google DeepMind episode on Genie 3 with Hannah Fry: “Genie 3: An infinite world model.”

Our team has been pioneering research in simulated environments for over a decade, from training agents to master real-time strategy games to developing simulated environments for open-ended learning and robotics. This work motivated our development of world models, which are AI systems that can use their understanding of the world to simulate aspects of it, enabling agents to predict both how an environment will evolve and how their actions will affect it.

RPG dev pushes back against Steam review AI accusations: ‘We poured years of our lives into this game and only worked with real human artists on everything’

The ubiquity of generative AI is a hard pill to swallow, but even harder is figuring out what’s AI and what isn’t. It’s easier than ever to reach for that low-hanging critique and say something looks like an AI spat it out, especially now that some games really are claiming they were spat out entirely by AI. Positive Concept Games, developer of the SNES-esque RPG Shrine’s Legacy, found that out the hard way, as it shared in a post on X last Wednesday.

“Please don’t do this. We poured years of our lives into this game and only worked with real human artists on everything: From the writing to the coding, all work was done by human hands. We do not endorse generative AI and will never use it.” (December 10, 2025)

The dev shared a Steam review of the game that calls it “AI slop,” claims the “story is dogshit mixed with catshit,” and reiterates that the game was “made in CHAT GPT.” The developer caption reads: “Please don’t do this. We poured years of our lives into this game and only worked with real human artists on everything … We do not endorse generative AI and will never use it.”

The Next Giant Leap For AI Is Called World Models

Unlike ordinary video generators, which simply interpret a prompt to decide what video to produce, world models respond to the user’s input, letting them navigate the world by moving the camera or interacting with the people and objects it contains.

Using this method, the entire world is continuously generated, frame-by-frame, based on the model’s internal understanding of how the environment and objects should behave.
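A minimal sketch of that frame-by-frame loop appears below. The `WorldModel` class and its `predict_frame` method are hypothetical stand-ins (Genie 3’s actual interface and internals are not public): each step takes the user’s action, conditions the next frame on recent history, and holds a fixed frame budget, about 41.7 ms per frame at 24 fps.

```python
# Schematic of the frame-by-frame generation loop described above.
# WorldModel and predict_frame are hypothetical stand-ins, not a real API.
import time
from collections import deque

class WorldModel:
    def predict_frame(self, history, action):
        """A real model would synthesize the next frame conditioned on the
        recent frame history and the user's action; stubbed out here."""
        return f"frame(after={action}, context={len(history)})"

def run_interactive(model: WorldModel, get_user_action, fps: int = 24, steps: int = 24):
    budget = 1.0 / fps                   # ~41.7 ms per frame at 24 fps
    history = deque(maxlen=256)          # bounded memory of past frames
    for _ in range(steps):
        start = time.perf_counter()
        action = get_user_action()       # camera move, interaction, etc.
        frame = model.predict_frame(list(history), action)
        history.append(frame)            # the new frame becomes context
        # Sleep off any remaining budget to hold a steady frame rate.
        time.sleep(max(0.0, budget - (time.perf_counter() - start)))

run_interactive(WorldModel(), get_user_action=lambda: "move_forward")
```

The bounded history is one plausible reason consistency holds for minutes rather than indefinitely: once frames fall out of the context window, the model can no longer condition on them.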

This method allows the creation of highly flexible, realistic and unique environments. Imagine a video game world, for example, where literally anything can happen. The possibilities aren’t limited to situations and choices that have been written into the code by a game programmer, because the model generates visuals and sounds to match any choice the player makes.

Scientists develop a glasses-free 3D system with a little help from AI

Watching 3D movies and TV shows is a fun and exciting experience, where images leap out of the screen. To get this effect, you usually have to wear a special pair of glasses. But that could soon be a thing of the past as scientists have developed a new display system that delivers a realistic 3D experience without the need for any eyewear.

The main reason why we’ve waited so long for a screen like this is a tough physics constraint called the Space-Bandwidth Product (SBP). To get a perfect 3D image, you need a big screen (the “space”) and a wide viewing zone (the “bandwidth”) so the picture looks good even when you turn your head. Unfortunately, the rule says you can’t have both at once: make the screen big and the viewing angle shrinks; widen the viewing zone and the screen must get smaller. All previous attempts to break this trade-off have failed. But not this time.
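For intuition, the standard diffraction relation sin(θ/2) ≈ λ/(2p) ties the viewing angle θ to the pixel pitch p. With a fixed total pixel count N (the SBP budget), screen width L = N·p, so enlarging the screen coarsens the pitch and narrows the angle. The back-of-the-envelope sketch below uses illustrative numbers, not figures from the study.

```python
# Back-of-the-envelope illustration of the space-bandwidth trade-off.
# Half viewing angle is roughly set by pixel pitch p via
# sin(theta/2) = lambda / (2p). With a fixed pixel count N, width L = N * p,
# so a bigger screen forces a coarser pitch and a narrower angle.
# All numbers below are illustrative assumptions.
import math

WAVELENGTH = 532e-9    # green light, metres
N_PIXELS = 8000        # fixed pixel count along one axis (the SBP budget)

for width_cm in (5, 10, 20, 40):
    width = width_cm / 100
    pitch = width / N_PIXELS   # bigger screen -> coarser pixel pitch
    angle = 2 * math.degrees(math.asin(min(1.0, WAVELENGTH / (2 * pitch))))
    print(f"{width_cm:>3} cm wide -> viewing angle ~ {angle:.2f} deg")
```

Running it shows the squeeze: roughly a 4.9° viewing angle at 5 cm wide shrinking to about 0.6° at 40 cm, with the pixel budget held constant.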
