The science of memory has been studied since the days of ancient Greece and Aristotle. Today, research by Dima Bolmatov, assistant professor in the Department of Physics & Astronomy at Texas Tech University, examines how memories may be stored at the cellular level.
Bolmatov’s research centers on lipid bilayers, membranes that serve as a continuous barrier around cells. These membranes, he noted, were traditionally viewed as passive barriers.
“I began to see that they behave more like dynamic, adaptive materials,” he stated. “They respond to electrical stimulation, retain history and exhibit collective behavior. This realization suggests that membranes themselves may participate in information processing, bridging physics and biology in a fundamentally new way.”
There’s something quietly unsettling about placing a photograph of a human neuron next to a simulated image of the large-scale cosmic web. The two look almost identical: delicate, branching filaments connecting dense clusters, with vast open spaces in between. One fits inside your skull. The other stretches across billions of light-years. The resemblance is hard to dismiss, and for a growing number of researchers, it’s far more than a visual coincidence.
What started as a striking observation in cosmology and neuroscience has evolved into a serious theoretical question. Could the universe, at its most fundamental level, operate the way a brain does? The ideas being put forward aren’t purely philosophical. Some of them come with testable mathematics, published peer-reviewed papers, and the names of well-regarded physicists attached. What follows is an honest look at where the science currently stands.
The estimated 200 billion detectable galaxies aren’t distributed randomly, but are lumped together by gravity into clusters that form even larger clusters, which are connected to one another by “galactic filaments,” long thin threads of galaxies. This vast architecture is what scientists call the cosmic web. When you zoom far enough out, the structure of the entire observable universe begins to take on a shape that looks startlingly familiar.
If you walk across the open yard in front of the Physics, Math and Astronomy building at the University of Texas at Austin, you’ll see a 17-story tower and a huge L-shaped building. What you won’t see is what’s underneath you. Two floors below ground, behind heavy double doors stamped with a logo that most students have never noticed, sits one of the most powerful lasers in the United States.
I was the lead laser scientist on the Texas Petawatt, or TPW as we called it, from 2020 to 2024. Texas Petawatt, which is currently closed due to funding cuts, was a government-funded research center where scientists from across the country applied for time to use specialized equipment. It was part of LaserNetUS, a Department of Energy network of high-power laser labs.
This type of laser takes a tiny pulse of light, stretches it out so it doesn’t blast optics to pieces, and amplifies it until, for a brief instant, it carries more power than the entire U.S. electrical grid. Then it compresses the pulse back to a trillionth of a second to create a star in a vacuum chamber.
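The arithmetic behind that comparison is simple: peak power is pulse energy divided by pulse duration. The numbers below are illustrative assumptions in the right ballpark for a petawatt-class system, not the TPW's exact specifications.

```python
# Back-of-the-envelope peak power for a chirped-pulse-amplification laser.
# Both numbers are illustrative assumptions, not the TPW's exact specs.
energy_j = 150.0        # pulse energy after amplification (assumed)
duration_s = 150e-15    # compressed pulse duration, 150 femtoseconds (assumed)

peak_power_w = energy_j / duration_s
print(f"peak power: {peak_power_w:.1e} W")  # 1e15 W = 1 petawatt

# Compare with the average power draw of the entire U.S. grid (~0.5 TW)
us_grid_w = 0.5e12
print(f"ratio to U.S. grid: {peak_power_w / us_grid_w:.0f}x")
```

The trick is that the enormous power exists only for that trillionth-of-a-second window; the total energy involved is modest, on the order of a car battery's worth.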
In about one out of every 1,000 pregnancies, the neural tube, a key nervous system structure, fails to close properly. Georgia Tech physicists are now helping explain why this happens, having uncovered the physics that drive neural tube closure in a pregnancy’s earliest stages.
Working with collaborators at University College London (UCL), Georgia Tech researchers used computer models to reveal how, during early development, forces generated by cells physically pull the neural tube closed—like a drawstring. This discovery offers new insight into a critical process that—when disrupted—can result in severe birth defects such as spina bifida.
“Understanding a complex developmental process like neural tube closure requires a highly interdisciplinary approach,” said Shiladitya Banerjee, an associate professor in the School of Physics. “By combining advanced biological imaging with theoretical physics, we were able to uncover the mechanical rules that drive cells to close the tube. My lab builds computational models to uncover the physical rules of living systems. The neural tube is an ideal focus because its formation requires incredible mechanical coordination.”
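The drawstring picture can be caricatured with a minimal physical model (this is an illustration, not the authors' actual computational model): a contractile ring whose radius shrinks under a constant actomyosin line tension working against viscous friction. All parameter values are arbitrary units chosen for the sketch.

```python
# Toy purse-string closure (illustrative only, not the published model):
# a ring of radius R closes under constant line tension gamma against
# viscous friction xi, giving  xi * dR/dt = -gamma.
xi = 1.0      # friction coefficient (arbitrary units, assumed)
gamma = 0.2   # actomyosin line tension (arbitrary units, assumed)
R, dt, t = 1.0, 0.01, 0.0

while R > 0.0:                       # forward-Euler integration
    R = max(0.0, R - dt * gamma / xi)
    t += dt

# Constant tension gives linear closure: t_close = R0 * xi / gamma
print(f"closure time ~ {t:.2f} (analytic: {1.0 * xi / gamma:.2f})")
```

Even this crude sketch captures the qualitative point of the drawstring analogy: a line tension acting along a shrinking boundary can pull an opening fully closed without any pushing from outside.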
In this presentation, Raia Hadsell, VP of Research at Google DeepMind and AI Ambassador for the United Kingdom, opens AIE Europe and explores what’s open in Frontier AI and the future of intelligence by focusing on advancements beyond standard large language models. She categorizes these innovations into three key areas:
00:00 Introduction.
05:05 Advanced Embedding Models: Raia discusses the importance of embedding models for fast retrieval and recognition, similar to how the human brain uses “Jennifer Aniston cells” to identify concepts across modalities. She highlights Gemini Embeddings 2, a fully omnimodal model that processes text, video, and audio into unified semantic vectors.
09:53 AI for Weather Forecasting: The team has developed models for atmospheric prediction that move away from traditional physics simulations. Notable breakthroughs include:
11:00 GraphCast: a spherical graph neural network that provides accurate 15-day weather forecasts.
12:47 GenCast: a probabilistic model that is more efficient and more accurate than the gold-standard benchmark on 97% of evaluation targets.
13:51 FGN: a functional generative network that directly predicts cyclone behavior, currently used by the US National Hurricane Center.
14:35 World Models: Hadsell introduces Genie, a project focused on creating interactive, real-time environments. Starting from Genie 1 (2D platformers) and progressing to Genie 3, these models let users create and interact with high-quality, photorealistic 3D worlds. The environments demonstrate memory, consistency, and the ability to be dynamically prompted by the user to change the surroundings in real time.
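The retrieval role of embeddings described in the talk can be sketched in a few lines. This is a generic toy, not Gemini's API: hand-made unit vectors stand in for real multimodal embeddings, and cosine similarity ranks stored items against a query vector.

```python
import numpy as np

# Toy "embedding" store: each row is a unit vector for one item.
# A real system would produce these with a multimodal encoder;
# here they are hand-made stand-ins for illustration.
items = ["cat photo", "cat video", "weather report"]
emb = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def retrieve(query_vec, k=1):
    q = query_vec / np.linalg.norm(query_vec)
    scores = emb @ q                 # cosine similarity for unit vectors
    top = np.argsort(-scores)[:k]    # highest similarity first
    return [(items[i], float(scores[i])) for i in top]

print(retrieve(np.array([1.0, 0.0, 0.0])))  # nearest neighbors of the query
```

The point of a unified embedding space is that the query vector could come from text, an image, or audio; once everything lives in the same space, retrieval is just this nearest-neighbor lookup.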
*Description:* Can scientists really simulate a full human brain now? In this video, we explore the latest study claiming that supercomputers may soon be powerful enough to simulate the human brain. We break down how this new method works, why previous brain simulation projects failed, what makes this new research different, and the big ethical questions that come with it. Is this the future of neuroscience and artificial intelligence, or are we still far from creating a true digital human mind? Watch till the end to understand the science in simple words.
Inverse lithography takes a radically different approach. Instead of starting with the desired circuit pattern and tweaking it to compensate for optical distortions, ILT works backwards. It asks: “What mask pattern would produce the exact shape we want after the light does its distorting work?” It’s like designing a funhouse mirror that makes your reflection look perfectly normal.
What’s particularly elegant are the “model-driven deep learning” approaches, which combine the physics of how light actually behaves with AI’s pattern-recognition abilities. Rather than making the AI learn optics from scratch, these hybrid methods embed the known laws of physics into the learning process, creating solutions that are both fast and physically accurate.
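The "work backwards" idea can be shown with a deliberately tiny sketch; this is a toy illustration, not production ILT code. A 1-D pattern stands in for the circuit, a Gaussian blur stands in for optical distortion, a sigmoid stands in for the resist threshold, and gradient descent searches for the mask that prints the target. All kernel widths, slopes, and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy 1-D inverse lithography: find the mask whose blurred-and-thresholded
# image matches a target pattern. Gaussian blur models optical low-pass;
# the sigmoid models the resist threshold. All numbers are illustrative.
n, slope = 64, 10.0
x = np.arange(-8, 9)                       # odd-length, centered kernel
kern = np.exp(-x**2 / (2 * 3.0**2))
kern /= kern.sum()

target = np.zeros(n)
target[28:36] = 1.0                        # desired printed line

def forward(mask):
    aerial = np.convolve(mask, kern, mode="same")          # optics
    return 1.0 / (1.0 + np.exp(-slope * (aerial - 0.5)))   # resist

mask = target.copy()                       # naive mask: the shape itself
naive_err = np.abs(forward(mask) - target).mean()

for _ in range(300):                       # gradient descent on the mask
    aerial = np.convolve(mask, kern, mode="same")
    s = 1.0 / (1.0 + np.exp(-slope * (aerial - 0.5)))
    g = (s - target) * slope * s * (1 - s)                 # dLoss/d(aerial)
    mask -= 0.1 * np.convolve(g, kern, mode="same")        # adjoint = same
                                           # blur, since kern is symmetric

opt_err = np.abs(forward(mask) - target).mean()
print(f"naive mask error {naive_err:.4f} -> optimized {opt_err:.4f}")
```

The optimized mask ends up looking nothing like the target shape, yet prints it more faithfully, which is exactly the funhouse-mirror logic of ILT. The "model-driven" hybrid methods keep a physics forward model like this one inside the loop and let a neural network accelerate or warm-start the optimization.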
Does the Universe spin? Think about it: planets spin, the Sun spins, galaxies spin, even black holes spin, so what about the entire Universe? And if it were spinning, could that help solve one of the biggest problems in astrophysics today?
In the era of precision cosmology, research often means big science: large observatories, highly complex instruments, international collaborations and substantial funding. Yet even in such an advanced field, progress is still possible—including in the search for elusive dark matter—through more agile approaches, driven by small teams and young researchers, supported by institutions and a good dose of ingenuity.
In a paper titled “A New Limit for Axion Dark Matter with SPACE,” published in the Journal of Cosmology and Astroparticle Physics, a group of then-undergraduate students from the University of Hamburg built a cavity detector to search for axions, among the most promising candidates for dark matter, and set new experimental limits on their properties.
The result was achieved with relatively limited resources, showing that even small-scale experiments can make a meaningful contribution to one of the most open challenges in modern physics.