
AI at the speed of light just became a possibility

Researchers at Aalto University have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.

Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we’re familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik’s cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
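As a concrete illustration (this NumPy sketch is ours, not from the Aalto work), here is the kind of tensor contraction an optical processor would encode in a single pass of light:

```python
import numpy as np

# A rank-3 tensor (a small batch of matrices) and a weight matrix.
batch = np.random.rand(4, 3, 5)   # 4 samples, each a 3x5 matrix
weights = np.random.rand(5, 2)

# Contract the last axis of `batch` with the first axis of `weights`.
# A digital computer performs these multiply-accumulate steps one by one;
# the optical approach encodes them in light and reads out the result at once.
result = np.einsum('bij,jk->bik', batch, weights)
print(result.shape)  # (4, 3, 2)
```

The `einsum` call is equivalent to `batch @ weights`; the point is that every multiply-accumulate in the contraction is independent, which is what makes the operation amenable to a single optical shot.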

Today, every task in AI, from image recognition to natural language processing, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability and energy consumption.

New Proofs Probe Soap-Film Singularities

It would take nearly a century for mathematicians to prove the 19th-century Belgian physicist Joseph Plateau right. In the early 1930s, Jesse Douglas and Tibor Radó independently showed that the answer to the “Plateau problem” is yes: For any closed curve (your wire frame) in three-dimensional space, you can always find a minimizing two-dimensional surface (your soap film) that has the same boundary. The proof later earned Douglas the first-ever Fields Medal.

Since then, mathematicians have expanded on the Plateau problem in hopes of learning more about minimizing surfaces. These surfaces appear throughout math and science — in proofs of important conjectures in geometry and topology, in the study of cells and black holes, and even in the design of biomolecules. “They’re very beautiful objects to study,” said Otis Chodosh of Stanford University. “Very natural, appealing and intriguing.”

Mathematicians now know that Plateau’s prediction is categorically true up through dimension seven. But in higher dimensions, there’s a caveat: The minimizing surfaces that form might not always be nice and smooth, like the disk or hourglass. Instead, they might fold, pinch or intersect themselves in places, forming what are known as singularities. When minimizing surfaces have singularities, it becomes much harder to understand and work with them.
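In standard notation (not spelled out in the article), the Plateau problem is a minimization over surfaces with a fixed boundary:

```latex
\text{Given a closed curve } \Gamma \subset \mathbb{R}^3,
\text{ find a surface } \Sigma \text{ with } \partial\Sigma = \Gamma
\text{ minimizing the area functional } \quad
A(\Sigma) = \int_{\Sigma} dA .
```

The dimension-seven threshold refers to the ambient space: area-minimizing hypersurfaces in $\mathbb{R}^n$ are smooth for $n \le 7$, while from $n = 8$ onward singular minimizers such as the Simons cone can occur.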

Software optimizes brain simulations, enabling them to complete complex cognitive tasks

New software enables brain simulations that both imitate processes in the brain in detail and solve challenging cognitive tasks. The program was developed by a research team at the Cluster of Excellence “Machine Learning: New Perspectives for Science” at the University of Tübingen. The software thus forms the basis for a new generation of brain simulations that allow deeper insights into the functioning and performance of the brain. The Tübingen researchers’ paper has been published in the journal Nature Methods.

For decades, researchers have been trying to create computer models of the brain in order to increase understanding of the organ and the processes that take place there. Using mathematical models, they have simulated the behavior and interaction of nerve cells and their connections.

However, previous models had significant weaknesses: They were either based on oversimplified neuron models and therefore strayed significantly from biological reality, or they depicted the biophysical processes within cells in detail but were incapable of carrying out tasks similar to those the brain performs.
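For context, the kind of simplified neuron model that earlier task-solving simulations relied on fits in a few lines; this leaky integrate-and-fire sketch is illustrative, not the paper's code, and it deliberately ignores the cellular biophysics that the detailed models capture:

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates its input, and emits a spike on crossing a threshold."""
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak plus input integration
        if v >= v_thresh:             # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset
    return spike_times

# Constant drive above threshold produces regular spiking.
spikes = simulate_lif([60.0] * 1000)
```

A network of such units can be trained on tasks, but every biophysical detail (ion channels, dendritic structure) has been abstracted away, which is exactly the trade-off the Tübingen software aims to remove.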

Cracking the code of complexity in computer science’s P vs. NP problem

New research from the University of Waterloo is making inroads on one of the biggest problems in theoretical computer science. But the way to do it, according to Cameron Seth, a Ph.D. researcher working in the field of algorithmic approximation, is by breaking the problem down into smaller pieces.

“Everyone working in computer science and mathematics knows about the ‘P vs. NP’ problem,” Seth says. “It’s one of the notorious Millennium Prize Problems: so famous and so difficult that solving one will earn you a million dollars.”

To understand the crux of the “P vs. NP” problem, imagine an enormous jigsaw puzzle or a Sudoku puzzle. It would be a “P” problem if it could be solved relatively quickly by a computer; it would be an “NP” problem if it were extremely difficult to solve, but a provided solution could be quickly verified.
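That asymmetry can be made concrete with the Sudoku example: checking a filled-in grid takes time polynomial in the grid size, even though finding a solution may require exploring exponentially many candidates. A minimal verifier (our illustration, not from the Waterloo research):

```python
def verify_sudoku(grid):
    """Check a completed 9x9 Sudoku in polynomial time: every row, column,
    and 3x3 box must contain the digits 1..9 exactly once."""
    target = set(range(1, 10))
    rows = [list(row) for row in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(set(unit) == target for unit in rows + cols + boxes)

# A valid grid built from a standard shifting pattern checks out instantly.
demo = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(verify_sudoku(demo))  # True
```

Verification here is a few hundred set comparisons; no known polynomial-time procedure is guaranteed to *solve* an arbitrary n x n generalization, and whether one exists is precisely the P vs. NP question.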

AI math genius delivers 100% accurate results

At the 2024 International Mathematical Olympiad (IMO), one competitor did so well that it would have been awarded a silver medal, except for one thing: it was an AI system. This was the first time AI had achieved a medal-level performance in the competition’s history. In a paper published in the journal Nature, researchers detail the technology behind this remarkable achievement.

The AI is AlphaProof, a sophisticated program developed by Google DeepMind that learns to solve complex mathematical problems. The achievement at the IMO was impressive enough, but what really makes AlphaProof special is its ability to find and correct errors. While large language models (LLMs) can solve many mathematical problems, they often can’t guarantee the accuracy of their solutions. There may be hidden flaws in their reasoning.

AlphaProof is different because its answers are guaranteed to be correct: it works within a specialized software environment called Lean (originally developed by Microsoft Research) that acts like a strict teacher verifying every logical step. Because the computer itself checks each answer, its conclusions are trustworthy.
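For a flavor of what machine-checked proof looks like, here is a toy Lean 4 statement of our own, not one of AlphaProof's IMO proofs (exact tactic syntax may vary slightly between Lean versions); Lean refuses to accept the theorem unless every step type-checks:

```lean
-- Commutativity of addition on the natural numbers, proved by induction.
-- If any rewrite were wrong, Lean would reject the proof outright.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => simp [Nat.add_succ, Nat.succ_add, ih]
```

There is no "mostly right" in this setting: a proof either compiles, and is therefore correct, or it does not exist.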

How sound and light act alike—and not—at the smallest scale

A world-famous light experiment from 1801 has now been carried out with sound for the first time. Research by physicists in Leiden has produced new insights that could be applied in 5G devices and the emerging field of quantum acoustics. The study is published in the journal Optics Letters.

Ph.D. student Thomas Steenbergen says, “We saw that sound waves in materials behave in the same way as light, but also slightly differently. With a mathematical model, we can now explain and predict this behavior.”
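The 1801 experiment is Thomas Young's double slit. In the ideal far-field case its interference pattern follows a simple formula, sketched below (our illustration; the Leiden result concerns where acoustic waves in materials deviate from this ideal):

```python
import numpy as np

def two_slit_intensity(theta, wavelength, slit_separation):
    """Ideal two-slit interference: I(theta) = cos^2(pi * d * sin(theta) / wl).
    Bright fringes occur where the path difference d*sin(theta) is a whole
    number of wavelengths -- the same condition for light or for sound."""
    phase = np.pi * slit_separation * np.sin(theta) / wavelength
    return np.cos(phase) ** 2

# Green light through slits 50 micrometers apart: a narrow fringe pattern.
angles = np.linspace(-0.1, 0.1, 2001)  # viewing angle in radians
pattern = two_slit_intensity(angles, wavelength=500e-9, slit_separation=50e-6)
```

Swapping in an acoustic wavelength changes only the fringe spacing; the interference condition itself is wave physics, not optics.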

Dr. Julia Moore Vogel — Scripps Research — Visionary, Patient-Centric Health Research For All

Dr. Julia Moore Vogel, PhD, MBA, is Assistant Professor and Senior Program Director at The Scripps Research Institute (https://www.scripps.edu/science-and-me… where she is responsible for managing a broad portfolio of patient-centric health research studies, including The Long COVID Treatment Trial (https://longcovid.scripps.edu/locitt-t/), a fully remote, randomized, placebo-controlled clinical trial testing whether the drug tirzepatide can reduce or alleviate symptoms of long COVID.

Prior to this role, Dr. Vogel managed The Participant Center (TPC) for the NIH All of Us Research Program (https://www.scripps.edu/science-and-me… which was charged with recruiting and retaining 350,000 individuals who represent the diversity of the United States. TPC aims to make it possible for interested individuals anywhere in the US to become active participants, for example by collaborating with numerous outreach partners to raise awareness, collecting biosamples nationwide, returning participants’ results, and developing self-guided workflows that enable participants to join whenever it is convenient for them.

Prior to joining the Scripps Research Translational Institute, Dr. Vogel created, proposed, fundraised for, and implemented research and clinical genomics initiatives at the New York Genome Center and The Rockefeller University. She oversaw the proposal and execution of grants, including a $44M NIH Center for Common Disease Genomics in collaboration with over 20 scientific contributors across seven institutions. She also managed corporate partnerships, including one with IBM that assessed the relative value of several genomic assays for cancer patients.

Dr. Vogel has a BS in Mathematics from Rensselaer Polytechnic Institute, and a PhD in Computational Biology and Medicine and an MBA, both from Cornell.

A Mathematician’s Model Brings Science Fiction’s Wormholes Closer to Reality

Could a tunnel through space and time—long a dream of science fiction—ever exist in theory? According to Arya Dutta, a Ph.D. student in Mathematics at the Katz School, the answer might be yes, at least on paper.

Accepted for publication in the International Journal of Geometric Methods in Modern Physics, Dutta’s study, “Thin-shell Wormhole with a Background Kalb–Ramond Field,” explored a mathematical model of a wormhole—a hypothetical shortcut through spacetime that could, in theory, connect two distant regions of the universe. “A wormhole allows faster-than-light travel or even time travel,” said Dutta. “It hasn’t been observed yet, but theoretical research has advanced a lot.”
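For reference, the usual starting point for such models (standard in the literature, not specific to Dutta's paper) is the static, spherically symmetric Morris-Thorne wormhole metric:

```latex
ds^2 = -e^{2\Phi(r)}\,dt^2 + \frac{dr^2}{1 - b(r)/r}
       + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right),
```

where $\Phi(r)$ is the redshift function and $b(r)$ the shape function whose minimum sets the throat radius. A thin-shell wormhole of the kind in the title is built by cutting two copies of a spacetime at some radius $r = a$ and gluing them along that shell, confining any exotic matter to the shell itself.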
