
The Frontier Labs War: Opus 4.6, GPT 5.3 Codex, and the SuperBowl Ads Debacle

Questions to inspire discussion.

AI Model Performance & Capabilities.

đŸ€– Q: How does Anthropic’s Opus 4.6 compare to GPT-5.2 in performance?

A: Opus 4.6 outperforms GPT-5.2 by 144 Elo points while handling a 1M-token context, and is now in production with recursive self-improvement capabilities that allow it to rewrite its entire tech stack.

🔧 Q: What real-world task demonstrates Opus 4.6’s agent swarm capabilities?

A: An agent swarm created a C compiler in Rust for multiple architectures in weeks for $20K, a task that would take humans decades, demonstrating AI’s ability to collapse timelines and costs.

🐛 Q: How effective is Opus 4.6 at finding security vulnerabilities?

The insect-inspired bionic eye that sees, smells and guides robots

The compound eyes of the humble fruit fly are a marvel of nature. They are wide-angle and can process visual information several times faster than the human eye. Inspired by this biological masterpiece, researchers at the Chinese Academy of Sciences have developed an insect-scale compound eye that can both see and smell, potentially improving how drones and robots navigate complex environments and avoid obstacles.

Traditional cameras on robots and drones may excel at capturing high-definition photos, but struggle with a narrow field of view and limited peripheral vision. They also tend to be bulky and power-hungry.

Silicon metasurfaces boost optical image processing with passive intensity-based filtering

Of the many feats achieved by artificial intelligence (AI), the ability to process images quickly and accurately has had an especially impressive impact on science and technology. Now, researchers in the McKelvey School of Engineering at Washington University in St. Louis have found a way to improve the efficiency and capability of machine vision and AI diagnostics using optical systems instead of traditional digital algorithms.

Mark Lawrence, an assistant professor of electrical and systems engineering, and doctoral student Bo Zhao developed this approach to achieve efficient processing performance without high energy consumption. Typically, all-optical image processing is highly constrained by the lack of nonlinearity, which usually requires high light intensities or external power, but the new method uses nanostructured films called metasurfaces to enhance optical nonlinearity passively, making it practical for everyday use.

Their work shows the ability to filter images based on light intensity, potentially making all-optical neural networks more powerful without using additional energy. Results of the research were published online in Nano Letters on Jan. 21, 2026.
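The core idea reported here, filtering an image according to local light intensity so that the optics themselves act like a nonlinear activation, can be illustrated with a short numerical sketch. This is a toy simulation under my own assumptions (a sigmoid-shaped transmission curve, hypothetical `threshold` and `steepness` parameters), not the transfer function of the actual metasurface in the paper.

```python
import numpy as np

def intensity_filter(image, threshold=0.5, steepness=20.0):
    """Pixelwise intensity-dependent transmission: bright pixels pass
    nearly unchanged, dim pixels are strongly suppressed.

    Transmission follows a sigmoid of the local intensity, a simple
    stand-in for a passive nonlinear optical response."""
    return image / (1.0 + np.exp(-steepness * (image - threshold)))

# Toy image: a dim uniform background with one bright spot.
img = np.full((4, 4), 0.2)
img[1, 1] = 0.9

out = intensity_filter(img)
# The bright pixel survives almost at full intensity, while the dim
# background is attenuated by several orders of magnitude.
```

In an all-optical neural network, a pixelwise nonlinearity of this kind is exactly the activation-function role that is otherwise hard to obtain without high light intensities or external power.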

Fake AI Chrome extensions with 300K users steal credentials, emails

A set of 30 malicious Chrome extensions that have been installed by more than 300,000 users are masquerading as AI assistants to steal credentials, email content, and browsing information.

Some of the extensions are still present in the Chrome Web Store and have been installed by tens of thousands of users, while others show a small install count.

Researchers at browser security platform LayerX discovered the malicious extension campaign and named it AiFrame. They found that all analyzed extensions are part of the same malicious effort as they communicate with infrastructure under a single domain, tapnetic[.]pro.

The Singularity: Everyone’s Certain. Everyone’s Guessing

The Technological Singularity is the most overconfident idea in modern futurism: a prediction about the point where prediction breaks. It’s pitched like a destination, argued like a religion, funded like an arms race, and narrated like a movie trailer — yet the closer the conversation gets to specifics, the more it reveals something awkward and human. Almost nobody is actually arguing about “the Singularity.” They’re arguing about which future deserves fear, which future deserves faith, and who gets to steer the curve when it stops looking like a curve and starts looking like a cliff.

The Singularity begins as a definitional hack: a word borrowed from physics to describe a future boundary condition — an “event horizon” where ordinary forecasting fails. I. J. Good — British mathematician and early AI theorist — framed the mechanism as an “intelligence explosion,” where smarter systems build smarter systems and the loop feeds on itself. Vernor Vinge — computer scientist and science-fiction author — popularized the metaphor that, after superhuman intelligence, the world becomes as unreadable to humans as the post-ice age would have been to a trilobite.

Across my podcast interviews, the key move is recognizing that “Singularity” isn’t one claim — it’s a bundle. Gennady Stolyarov II — transhumanist writer and philosopher — rejects the cartoon version: “It’s not going to be this sharp delineation between humans and AI that leads to this intelligence explosion.” In his framing, it’s less “humans versus machines” than a long, messy braid of tools, augmentation, and institutions catching up to their own inventions.

Brett Adcock: Humanoids Run on Neural Net, Autonomous Manufacturing, and $50 Trillion Market #229

Humanoid robots with full-body autonomy are rapidly advancing and are expected to create a $50 trillion market, transforming industries, the economy, and daily life.

Questions to inspire discussion.

Neural Network Architecture & Control.

đŸ€– Q: How does Figure 3’s neural network control differ from traditional robotics?

A: Figure 3 uses end-to-end neural networks for full-body control, manipulation, and room-scale planning, entirely replacing the previous C++-based control stack; System Zero is a fully learned reinforcement-learning controller running with no hand-written code on the robot.

🎯 Q: What enables Figure 3’s high-frequency motor control for complex tasks?

A: Palm cameras and onboard inference enable high-frequency torque control of 40+ motors for complex bimanual tasks, replanning, and error recovery in dynamic environments, a significant improvement over previous models.

🔄 Q: How does Figure’s data-driven approach create competitive advantage?

A: Data accumulation and neural-net retraining provide an advantage over traditional C++ code by allowing rapid iteration, with positive transfer observed as diverse knowledge enables emergent generalization on larger pre-training datasets.

🧠 Q: Where is the robot’s compute located and why?

A: The brain-like compute unit sits in the head for sensor access and heat dissipation, while the torso contains the majority of onboard computation, with potential for a latex or silicone face for human-like interaction.

Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey

Research in artificial intelligence is undergoing a paradigm shift from prioritizing model innovations over benchmark scores towards emphasizing problem definition and rigorous real-world evaluation. As the field enters the “second half,” the central challenge becomes real utility in long-horizon, dynamic, and user-dependent environments, where agents face context explosion and must continuously accumulate, manage, and selectively reuse large volumes of information across extended interactions. Memory, with hundreds of papers released this year, therefore emerges as the critical solution to fill the utility gap. In this survey, we provide a unified view of foundation agent memory along three dimensions: memory substrate (internal and external), cognitive mechanism (episodic, semantic, sensory, working, and procedural), and memory subject (agent- and user-centric). We then analyze how memory is instantiated and operated under different agent topologies and highlight learning policies over memory operations. Finally, we review evaluation benchmarks and metrics for assessing memory utility, and outline various open challenges and future directions.
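The survey’s taxonomy — accumulate, manage, and selectively reuse information across memory types — can be sketched as a minimal external memory store. This is an illustrative toy under my own assumptions (class and method names are hypothetical, and retrieval here is naive substring matching rather than any mechanism from the survey).

```python
from collections import deque

class AgentMemory:
    """Minimal sketch of an external agent memory along three of the
    survey's cognitive mechanisms: a bounded working memory, an
    append-only episodic log, and a semantic key-value store."""

    def __init__(self, working_capacity=4):
        self.working = deque(maxlen=working_capacity)  # recent context only
        self.episodic = []                             # full interaction log
        self.semantic = {}                             # distilled, reusable facts

    def observe(self, event):
        """Accumulate: every event enters working and episodic memory;
        working memory evicts the oldest entry once full."""
        self.working.append(event)
        self.episodic.append(event)

    def consolidate(self, key, fact):
        """Manage: distill episodes into a compact semantic fact."""
        self.semantic[key] = fact

    def recall(self, key):
        """Selectively reuse: prefer consolidated semantic facts, then
        fall back to scanning the episodic log, most recent first."""
        if key in self.semantic:
            return self.semantic[key]
        return next((e for e in reversed(self.episodic) if key in e), None)

mem = AgentMemory(working_capacity=2)
mem.observe("user prefers metric units")
mem.observe("task: convert 5 miles")
mem.observe("result: 8.05 km")
mem.consolidate("units", "metric")
```

The bounded `working` deque is a crude stand-in for the context-explosion problem the survey highlights: once the window is full, anything not consolidated into semantic memory can only be recovered by going back to the episodic log.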
