After years of false starts, the future of augmented reality may depend not on chips or software, but on how light moves through glass.
Using a powerful RNA labeling method called RNAscope combined with high-resolution microscopy, the team captured clear snapshots of single-molecule gene expression to identify CA1 cell types inside mouse brain tissue. Across 58,065 CA1 pyramidal cells, they visualized more than 330,000 RNA molecules, the genetic messages that show when and where genes are turned on. By tracing these activity patterns, the researchers created a detailed map of the borders between different types of nerve cells across the CA1 region of the hippocampus.
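The mapping step described here, assigning each cell a type from its gene-expression profile, can be sketched with a toy dominant-marker rule. This is an illustrative assumption, not the study's actual pipeline: the gene names are placeholders and the counts are simulated.

```python
import numpy as np

# Hypothetical sketch: each cell carries transcript counts for a few marker
# genes, and a cell is assigned to the layer whose marker dominates.
# Gene names and counts are placeholders, not the study's actual markers.
rng = np.random.default_rng(0)
markers = ["gene_A", "gene_B", "gene_C", "gene_D"]  # one per putative layer

# Simulate 1,000 cells: each truly belongs to one layer and expresses
# that layer's marker gene more strongly than the others.
true_layer = rng.integers(0, 4, size=1000)
counts = rng.poisson(2.0, size=(1000, 4))            # background transcripts
counts[np.arange(1000), true_layer] += rng.poisson(20.0, size=1000)

assigned = counts.argmax(axis=1)                     # dominant-marker rule
accuracy = (assigned == true_layer).mean()
print("cells per layer:", np.bincount(assigned, minlength=4))
print(f"agreement with simulated ground truth: {accuracy:.2f}")
```

In the real study the classification would also have to handle graded expression and spatial context, which is why the clear layer borders reported below were a notable result.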
The results showed that the CA1 region consists of four continuous layers of nerve cells, each marked by a distinct set of active genes. In 3D, these layers form sheets that vary slightly in thickness and structure along the length of the hippocampus. This clear, layered pattern helps make sense of earlier studies that saw the region as a more gradual mix or mosaic of cell types.
“When we visualized gene RNA patterns at single-cell resolution, we could see clear stripes, like geological layers in rock, each representing a distinct neuron type,” said a co–first author of the paper. “It’s like lifting a veil on the brain’s internal architecture. These hidden layers may explain differences in how hippocampal circuits support learning and memory.”
The hippocampus is among the first regions affected in Alzheimer’s disease and is also implicated in epilepsy, depression, and other neurological conditions. By revealing the CA1’s layered structure, the study provides a roadmap to investigate which specific neuron types are most vulnerable in these disorders.
The new CA1 cell-type atlas, built using data from the Hippocampus Gene Expression Atlas (HGEA), is freely available to the global research community. The dataset includes interactive 3D visualizations accessible through the Schol-AR augmented-reality app, which allows scientists to explore hippocampal layers in unprecedented detail.
Researchers have identified a previously unknown pattern of organization in one of the brain’s most important areas for learning and memory. The study, published in Nature Communications, reveals that the CA1 region of a mouse’s hippocampus, a structure vital for memory formation, spatial navigation, and emotions, has four distinct layers of specialized cell types. This discovery changes our understanding of how information is processed in the brain and could explain why certain cells are more vulnerable in diseases like Alzheimer’s and epilepsy.
Stanford engineers debuted a new framework introducing computational tools and self-reflective AI assistants, potentially advancing fields like optical computing and astronomy.
Hyper-realistic holograms, next-generation sensors for autonomous robots, and slim augmented reality glasses are among the applications of metasurfaces, emerging photonic devices constructed from nanoscale building blocks.
Now, Stanford engineers have developed an AI framework that rapidly accelerates metasurface design, with potentially broad technological applications. The framework, called MetaChat, introduces new computational tools and self-reflective AI assistants, enabling rapid solving of optics-related problems. The findings were reported recently in the journal Science Advances.
Researchers have designed and demonstrated a new optical component that could significantly enhance the brightness and image quality of augmented reality (AR) glasses. The advance brings AR glasses a step closer to becoming as commonplace and useful as today’s smartphones.
“Many of today’s AR headsets are bulky and have a short battery life with displays that are dim and hard to see, especially outdoors,” said research team leader Nick Vamivakas from the University of Rochester. “By creating a much more efficient input port for the display, our work could help make AR glasses much brighter and more power-efficient, moving them from being a niche gadget to something as light and comfortable as a regular pair of eyeglasses.”
In an article published in the journal Optical Materials Express, the researchers describe how they replaced a single waveguide in-coupler—the input port where the image enters the glass—with one featuring three specialized zones, each made of a metasurface material, to achieve improved performance.
Reinforcement learning is terrible — but everything else is worse.
Karpathy’s sharpest takes yet on AGI, RL, and the future of learning.
Andrej Karpathy’s vision of AGI isn’t a bang — it’s a gradient descent through human history.
Karpathy on AGI & Superintelligence.
* AGI won’t be a sudden singularity — it will blend into centuries of steady progress (~2% GDP growth).
* Superintelligence is uncertain and likely gradual, not an instant “explosion.”
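As a quick sanity check on the ~2% growth figure in the bullets above (the arithmetic here is mine, not Karpathy's): at a constant 2% annual rate, compound growth doubles the economy in roughly 35 years.

```python
import math

# Doubling time under constant compound growth: t = ln(2) / ln(1 + r).
rate = 0.02                                   # ~2% annual GDP growth, per the note above
doubling_time = math.log(2) / math.log(1 + rate)
print(f"doubling time at 2%/yr: {doubling_time:.1f} years")
```

This comes out to about 35 years, consistent with the rule-of-72 shortcut (72 / 2 = 36): steady 2% growth is transformative over centuries but never looks like an instant "explosion".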
GRENOBLE, France – Sept. 16, 2025 – CEA-Leti and the Centre for Research on Heteroepitaxy and its Applications (CRHEA) today announced R&D results that have cleared a path toward full-color microdisplays based on a single material system, a long-standing goal for augmented and virtual reality (AR/VR) technologies.
The project, described in a paper published in Nature Communications Materials, developed a technique for growing high-quality InGaN-based quantum wells on sub-micron nanopyramids, enabling native emission of red, green, and blue (RGB) light from a single material system. Titled “Regular Red-Green-Blue InGaN Quantum Wells With In Content Up To 40% Grown on InGaN Nanopyramids”, the paper will be presented at the MicroLED Connect Conference on Sept. 24 in Eindhoven, the Netherlands.
Microdisplays for immersive devices require bright RGB sub-pixels smaller than 10 × 10 microns. According to the paper, “the use of III-nitride materials promises high efficiency micro-light emitting diodes (micro-LEDs) compared to their organic counterparts. However, for such a pixel size, the pick and place process is no longer suitable for combining blue and green micro-LEDs from III-nitrides and red micro-LEDs from phosphide materials on the same platform.” Red-emitting phosphide micro-LEDs also suffer from efficiency losses at small sizes, while color conversion methods face challenges in deposition precision and stability.
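To put the sub-10-micron sub-pixel figure in context, a quick back-of-envelope conversion (assuming square pixels at exactly 10 µm pitch, which the paper gives only as an upper bound):

```python
# Pixel density implied by a 10-micron pixel pitch (assumed square pixels).
pitch_um = 10.0
pixels_per_mm = 1000.0 / pitch_um      # 100 pixels per millimetre
ppi = 25.4 * pixels_per_mm             # 25.4 mm per inch -> 2540 PPI
print(f"{pixels_per_mm:.0f} px/mm, about {ppi:.0f} PPI")
```

That is roughly 2,540 pixels per inch, several times denser than a flagship smartphone screen, which is why assembly techniques that work for larger LEDs, like pick and place, break down at this scale.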
Meta has developed a new flat ultra-thin panel laser display that could lead to lighter, more immersive augmented reality (AR) glasses and improve the picture quality of smartphones, tablets and televisions. The new display is only two millimeters thick and produces bright, high-resolution images.
Flat-panel displays, particularly those illuminated by LEDs, are ubiquitous, seen in everything from smartphones and televisions to laptops and computer monitors. But no matter how good the current technology is, the search for better is always ongoing. Lasers promise superior brightness and the possibility of making the technology smaller and more energy efficient by replacing bulky and power-hungry components with compact laser-based ones.
However, current laser displays still need large, complex optical systems to shine light onto screens. Previous attempts at making flat-panel laser displays have come up short as they required complex setups or were too difficult to manufacture in large quantities.
Researchers made a robot that can make deliveries to VR. They call it Skynet.