
Computers reconstruct 3D environments from 2D photos in a fraction of the time

Imagine trying to make an accurate three-dimensional model of a building using only pictures taken from different angles—but you’re not sure where or how far away all the cameras were. Our big human brains can fill in a lot of those details, but computers have a much harder time doing so.

This scenario is a well-known problem in computer vision and robot navigation systems. Robots, for instance, must take in lots of 2D information and build 3D point clouds—collections of data points in 3D space—in order to interpret a scene. But the mathematics involved in this process is challenging and error-prone, with many ways for the computer to incorrectly estimate distances. It’s also slow, because it forces the computer to create its 3D point cloud bit by bit.

Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) think they have a better method: a breakthrough algorithm that lets computers reconstruct high-quality 3D scenes from 2D images much more quickly than existing methods.
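The article doesn't describe the SEAS algorithm's internals, but the core geometric subproblem such methods speed up, recovering a 3D point from its 2D projections, can be sketched with classic linear triangulation (the DLT method). This is a generic textbook baseline, assuming known camera matrices, not the new algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Estimate a 3D point from two 2D observations.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    Each view contributes two linear constraints on the homogeneous
    3D point X; we solve the stacked system A X = 0 by SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = last right singular vector
    return X[:3] / X[3]             # back to inhomogeneous coordinates

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))  # recovers the original point (0.5, 0.2, 4.0)
```

Incremental reconstruction pipelines run thousands of such triangulations while also estimating the camera poses themselves, which is where the error-proneness and slowness described above come from.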

Vibration energy harvesting by ferrofluids in external magnetic fields

The development of wearable electronics and the current era of big data require a sustainable power supply for numerous distributed sensors. In this paper, we designed and experimentally studied an energy harvester based on ferrofluid sloshing. The harvester contains a horizontally positioned cylindrical vial, half-filled with a ferrofluid and exposed to a magnetic field. The vial is excited by a laboratory shaker, and the voltage induced in a nearby coil is measured under increasing and decreasing shaking rates. Five ferrofluid samples are involved in the study, yielding the dependence of the electromotive force on the ferrofluid's saturation magnetization. Energy harvesting by ferrofluid sloshing is investigated in various magnetic field configurations. The most effective configuration for energy harvesting is found to be one in which the field intensity is perpendicular both to the axis of the vial's motion and to gravity. The harvested electric power increases linearly with the ferrofluid's saturation magnetization. The electromotive force generated by each ferrofluid is found to be identical for measurements in acceleration and deceleration mode. A significant reduction in the induced voltage is observed in a stronger magnetic field; the magneto-viscous effect and partial immobilization of the ferrofluid in the stronger field are considered as explanations, and the magneto-viscous effect is documented by a supplementary experiment. The results extend knowledge of energy harvesting by ferrofluid sloshing and may pave the way to applications of ferrofluid energy harvesters for mechanical excitations whose direction changes relative to the magnetic field induction.


Rajnak, M., Kurimsky, J., Paulovicova, K. et al. Vibration energy harvesting by ferrofluids in external magnetic fields. Sci Rep 15, 26701 (2025). https://doi.org/10.1038/s41598-025-12490-w.
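The induction mechanism behind the harvester is ordinary Faraday's law: the sloshing ferrofluid changes the magnetic flux through the pickup coil. A minimal numerical sketch with invented parameters (the peak flux swing phi0 is a stand-in for the magnetization-dependent quantity; none of these values are from the paper):

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper):
N = 1000        # coil turns
f = 10.0        # shaking frequency, Hz
phi0 = 2e-5     # peak flux swing through the coil, Wb (grows with the
                # ferrofluid's saturation magnetization)

t = np.linspace(0.0, 0.5, 5000)
phi = phi0 * np.sin(2 * np.pi * f * t)   # flux oscillates as the fluid sloshes
emf = -N * np.gradient(phi, t)           # Faraday's law: emf = -N dphi/dt

peak_emf = np.max(np.abs(emf))
print(f"peak EMF = {peak_emf:.3f} V")    # analytic peak: N*2*pi*f*phi0
```

Because the induced EMF is proportional to the flux swing, a ferrofluid with higher saturation magnetization (larger phi0) yields proportionally more voltage, consistent with the linear dependence reported in the abstract.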


Quantum framework offers new approach to analyzing complex network data

Whenever we mull over what film to watch on Netflix, or deliberate between different products on an e-commerce platform, the gears of recommendation algorithms spin under the hood. These systems sort through sprawling datasets to deliver personalized suggestions. However, as data becomes richer and more interconnected, today’s algorithms struggle to keep pace with capturing relationships that span more than just pairs, such as group ratings, cross-category tags, or interactions shaped by time and context.

A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap into this complexity by developing a new quantum framework for analyzing higher-order network data.

Their work centers on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network.
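To make the TSP idea concrete, here is a small classical (non-quantum) sketch: a signal living on the edges of a simplicial complex, analyzed with the Hodge 1-Laplacian built from boundary matrices. The complex and signal are invented for illustration; the quantum framework itself is not reproduced here.

```python
import numpy as np

# Simplicial complex: 4 nodes, 5 edges, 1 filled triangle {0,1,2}.
# Edges (oriented low->high): e0=(0,1), e1=(0,2), e2=(0,3), e3=(1,2), e4=(2,3)
B1 = np.array([        # node-to-edge incidence (rows: nodes, cols: edges)
    [-1, -1, -1,  0,  0],
    [ 1,  0,  0, -1,  0],
    [ 0,  1,  0,  1, -1],
    [ 0,  0,  1,  0,  1],
])
# Boundary of triangle {0,1,2}: +e3 - e1 + e0 (rows: edges, cols: triangles)
B2 = np.array([[1], [-1], [0], [1], [0]])

# Hodge 1-Laplacian acts on edge signals (flows):
L1 = B1.T @ B1 + B2 @ B2.T

s = np.array([1.0, 0.5, -0.2, 0.3, 0.7])   # one flow value per edge

# The harmonic part of the signal is the component invisible to both
# node-level (divergence) and triangle-level (curl) structure; its
# dimension counts the 1-dimensional holes in the complex.
eigvals, eigvecs = np.linalg.eigh(L1)
harmonic = eigvecs[:, np.isclose(eigvals, 0.0)]
h = harmonic @ (harmonic.T @ s)             # project s onto harmonic space
print("dim of harmonic space:", harmonic.shape[1])
```

The unfilled loop 0-2-3 gives a one-dimensional harmonic space; filling that triangle too would shrink it to zero. This node/edge/triangle hierarchy is exactly the "pairs, triplets, and beyond" structure the quantum framework targets.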

Optimized cycle system recovers waste heat from fusion reactor

A research team led by Prof. Guo Bin from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has designed and optimized an organic Rankine cycle (ORC) system specifically for recovering low-grade waste heat from the steady-state Chinese Fusion Engineering Testing Reactor (CFETR) based on organic fluid R245fa, achieving enhanced thermal efficiency and reduced heat loss.

CFETR, a steady-state magnetic reactor, is a crucial step toward realizing commercial fusion energy. However, managing the large amount of low-grade waste heat produced by components such as the divertor and blanket remains a key challenge.

To solve the thermodynamic and heat integration issues, the researchers developed advanced simulation models using Engineering Equation Solver for cycle analysis and MATLAB-based LAMP modeling for dynamic system configuration. These tools enabled a comprehensive investigation and optimization of the ORC configuration, leading to significantly improved thermal performance.
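The team's models are far more detailed, but the basic ORC figure of merit is simple first-law bookkeeping. The sketch below uses made-up enthalpy values loosely representative of a low-grade R245fa cycle; a real analysis would pull properties from an equation of state, as the Engineering Equation Solver models do:

```python
# Illustrative first-law bookkeeping for an organic Rankine cycle.
# State enthalpies in kJ/kg are invented for illustration only.
h1 = 232.0   # saturated liquid leaving the condenser
h2 = 233.5   # after the pump (small compression work)
h3 = 465.0   # superheated vapor leaving the evaporator
h4 = 438.0   # after the expander

w_pump = h2 - h1          # pump work input
q_in = h3 - h2            # heat absorbed from the waste-heat source
w_exp = h3 - h4           # expander (turbine) work output
w_net = w_exp - w_pump    # net electrical-equivalent output

eta = w_net / q_in        # cycle thermal efficiency
print(f"thermal efficiency = {eta:.1%}")
```

Efficiencies around 10% are typical for low-grade heat sources; the optimization work described above is about squeezing more net work out of a fixed waste-heat stream while minimizing losses.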

New algorithms enable efficient machine learning with symmetric data

MIT researchers designed a computationally efficient algorithm for machine learning with symmetric data that also requires fewer data for training than conventional approaches. Their work could inform the design of faster, more accurate machine-learning models for tasks like discovering new drugs or identifying astronomical phenomena.
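As a baseline illustration of what "machine learning with symmetric data" means, the sketch below makes an arbitrary model invariant to a finite symmetry group (the four rotations of a square) by simple group averaging. This naive approach is the kind of thing the MIT algorithms improve upon computationally; it is not their method:

```python
import numpy as np

def rot90_orbit(x):
    """All four 90-degree rotations of a square 2D array."""
    return [np.rot90(x, k) for k in range(4)]

def invariant_predict(model, x):
    """Make any model rotation-invariant by averaging its outputs
    over the whole symmetry group."""
    return float(np.mean([model(g) for g in rot90_orbit(x)]))

# Toy 'model': sums the top row, which is NOT rotation-invariant on its own.
model = lambda x: float(x[0].sum())

x = np.arange(9.0).reshape(3, 3)
y1 = invariant_predict(model, x)
y2 = invariant_predict(model, np.rot90(x))
print(y1, y2)  # identical: the averaged predictor respects the symmetry
```

Group averaging multiplies the cost by the group size (and becomes intractable for large or continuous groups), which is precisely why computationally efficient symmetry-aware algorithms matter.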

Improved slime mold algorithm boosts efficiency in e-commerce cloud data migration

As e-commerce platforms grow ever more reliant on cloud computing, efficiency and sustainability have come to the fore as urgent pressures on development. A study published in the International Journal of Reasoning-based Intelligent Systems has introduced an innovative approach to the problem based on a slime mold algorithm (SMA). The work could improve both performance and energy efficiency for e-commerce systems.

At the core of the work is the development of BOSMA—the Balanced Optimization Slime Mold Algorithm. The SMA is a heuristic optimization technique inspired by the natural behavior of slime molds.

Slime molds are useful models for algorithms because they excel at finding efficient paths through complex environments and adapting to changing conditions. Moreover, they do so without any central control system. They explore their surroundings by sending out multiple tendrils, or pseudopodia, in different directions, adjusting their shape and connections in response to feedback such as nutrient availability or obstacles.
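Decentralized exploration of this kind translates naturally into population-based optimization. Below is a deliberately simplified, slime-mold-flavored optimizer: agents explore randomly at first and increasingly reinforce the best-found position over time. It is a sketch of the general SMA idea, not the BOSMA algorithm, whose update rules are not given in the article:

```python
import random

def sma_minimize(f, dim, bounds, n_agents=30, iters=200, seed=0):
    """Simplified slime-mold-style minimizer (illustrative, not BOSMA).

    Agents start as random points; each iteration they blend decaying
    random exploration with movement toward the best position found,
    like tendrils reinforcing paths that lead to food."""
    rng = random.Random(seed)
    lo, hi = bounds
    agents = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=f)
    for t in range(iters):
        w = 1.0 - t / iters                       # exploration weight decays
        for i, a in enumerate(agents):
            cand = [c + w * rng.uniform(-1, 1) * (hi - lo) * 0.1
                    + (1 - w) * rng.uniform(0, 1) * (g - c)
                    for c, g in zip(a, best)]
            cand = [min(max(c, lo), hi) for c in cand]  # stay in bounds
            if f(cand) < f(a):
                agents[i] = cand                  # keep only improvements
        best = min(agents + [best], key=f)
    return best, f(best)

# Toy objective: sphere function, minimum value 0 at the origin.
sphere = lambda x: sum(v * v for v in x)
best, val = sma_minimize(sphere, dim=3, bounds=(-5, 5))
print(round(val, 4))
```

In a cloud-migration setting the objective would score a candidate data-placement plan by latency, load balance, and energy use instead of a toy function.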

When space becomes time: A new look inside the BTZ black hole

Exploring the BTZ black hole in (2+1)-dimensional gravity took me down a fascinating rabbit hole, connecting ideas I never expected—like black holes and topological phases in quantum matter! When I swapped the roles of space and time in the equations (it felt like turning my map upside down when I was lost in a new city), I discovered an interior version of the solution existing alongside the familiar exterior, each with its own thermofield double state.
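For readers who want to see where the swap happens: in the standard non-rotating BTZ metric (textbook form, with AdS radius \(\ell\) and mass parameter \(M\); the specific interior construction described above is not reproduced here), the exchange of space and time is visible in a single sign.

```latex
\[
  ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\phi^2,
  \qquad f(r) = \frac{r^2}{\ell^2} - M .
\]
% Outside the horizon, r > \ell\sqrt{M}, we have f > 0: t is timelike
% and r is spacelike. Inside, r < \ell\sqrt{M}, f changes sign and the
% roles exchange: r becomes the time direction. This sign flip is the
% "space becomes time" swap described above.
```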

A thermodynamic approach to machine learning: How optimal transport theory can improve generative models

Joint research led by Sosuke Ito of the University of Tokyo has shown that nonequilibrium thermodynamics, a branch of physics that deals with constantly changing systems, explains why optimal transport theory, a mathematical framework for the optimal change of distribution to reduce cost, makes generative models optimal. As nonequilibrium thermodynamics has yet to be fully leveraged in designing generative models, the discovery offers a novel thermodynamic approach to machine learning research. The findings were published in the journal Physical Review X.

Image generation has been improving in leaps and bounds over recent years: a video of a celebrity eating a bowl of spaghetti that represented the state of the art a couple of years ago would not even qualify as good today. The algorithms that power image generation are called diffusion models, and they contain randomness called “noise.”

During the training process, noise is introduced to the original data through diffusion dynamics. During the generation process, the model must eliminate the noise to generate new content from the noisy data. This is achieved by considering the time-reversed dynamics, as if playing the video in reverse. One piece of the art and science of building a model that produces high-quality content is specifying when and how much noise is added to the data.
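The forward (noising) half of a diffusion model can be written in closed form, which makes the "when and how much noise" question concrete: it is encoded in a variance schedule. A minimal sketch with a standard linear schedule, illustrative rather than the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule: beta_t controls how much noise step t adds.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t from the forward (noising) process in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(4)            # toy "image": four pixels of value 1
print(q_sample(x0, 10))    # early step: still close to the data
print(q_sample(x0, 999))   # final step: essentially pure noise
```

Training teaches the model to undo these steps; the optimal-transport result described above concerns how to choose such dynamics so the reverse (generation) process is as cheap and accurate as possible.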

Memories Go Where?

How does your brain decide where to store a brand-new piece of information—like a new face, word, or concept? In this video, we’ll explore a working neural circuit that demonstrates how cortical columns could be allocated dynamically and efficiently—using real spikes, real timing, and biologically realistic learning rules. Instead of vague theories or abstract algorithms, we’ll show a testable mechanism that selects the first available cortical column in just 5 milliseconds, highlighting the incredible speed and parallelism of the brain. This is a crucial first step in building intelligence from the ground up—one circuit at a time.
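As a toy illustration of the allocation idea (not the video's actual spiking circuit), the sketch below races all unallocated columns to threshold. With an assumed 1 ms axon delay plus 4 ms of integration, the first free column claims the memory at the 5 ms mark mentioned above:

```python
import heapq

def allocate_column(columns, input_time_ms=0.0, axon_delay_ms=1.0,
                    integrate_ms=4.0):
    """Pick the first available cortical column for a new memory.

    columns: list of dicts with 'id' and 'allocated' flags.
    Allocated columns are inhibited and sit the race out; every free
    column integrates the broadcast input and spikes, and the earliest
    spike (ties broken by lowest id) wins the allocation."""
    events = []
    for col in columns:
        if col["allocated"]:
            continue                         # inhibited: already in use
        spike_t = input_time_ms + axon_delay_ms + integrate_ms
        heapq.heappush(events, (spike_t, col["id"]))
    t, winner = heapq.heappop(events)
    return winner, t

cols = [{"id": i, "allocated": i < 3} for i in range(8)]  # columns 0-2 taken
winner, t = allocate_column(cols)
print(winner, t)  # column 3 claims the memory at t = 5.0 ms
```

All free columns race in parallel, so the selection time stays constant no matter how many columns exist, which is the speed-and-parallelism point the video makes.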

Useful links:
The Future AI Society: https://futureaisociety.org
The Brain Simulator III (UKS) project: https://github.com/FutureAIGuru/BrainSimIII
The Brain Simulator II (Neural Simulator) project: https://github.com/FutureAIGuru/BrainSimII
Overview Video: https://youtu.be/W2uauk2bFjs
More Details Video: https://youtu.be/6po1rMFZkik
How the UKS Learns Video: https://youtu.be/Rv0lrem3lVs

A new open-source program for quantum physics helps researchers obtain results in record time

Scientists at the Institute for Photonic Quantum Systems (PhoQS) and the Paderborn Center for Parallel Computing (PC2) at Paderborn University have developed a powerful open-source software tool that allows them to simulate light behavior in quantum systems.

The unique feature of this tool, named “Phoenix,” is that researchers can use it to very quickly investigate complex effects to a level of detail that was previously unknown, and all without needing knowledge of high-performance computing. The results have now been published in Computer Physics Communications.

Phoenix solves equations that describe how light interacts with matter at the quantum level, which is essential both for understanding such systems and for designing future technologies such as quantum computers and advanced photonic devices.
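Phoenix's internals aren't described here, but the kind of light-matter equation such tools solve can be illustrated with the simplest case: a resonant two-level emitter driven by a classical field, showing Rabi oscillations. This is a generic textbook sketch, not Phoenix's model or API:

```python
import numpy as np

# Rotating-frame Hamiltonian of a resonantly driven two-level emitter:
# H = (Omega/2) * sigma_x, where Omega is the Rabi frequency.
Omega = 2.0 * np.pi
H = 0.5 * Omega * np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(psi0, t):
    """Schrodinger evolution |psi(t)> = exp(-i H t)|psi(0)> via
    eigendecomposition of the (Hermitian) Hamiltonian."""
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    return U @ psi0

psi0 = np.array([1.0, 0.0], dtype=complex)      # start in the ground state
for t in [0.0, 0.25, 0.5, 1.0]:
    p_excited = abs(evolve(psi0, t)[1]) ** 2    # P = sin^2(Omega*t/2)
    print(f"t={t:.2f}  P(excited)={p_excited:.3f}")
```

Production tools like Phoenix solve much larger coupled light-matter systems on spatial grids, where GPU-friendly numerics (rather than the analytics above) make the difference in speed.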
