
In world first, antimatter taken on test drive at CERN

CERN scientists on Tuesday pulled off the unprecedented feat of transporting antiprotons by road, successfully test-driving the world’s first antimatter delivery system, with an eye to one day supplying research labs across Europe.

“The particles returned… so this was a success,” CERN physicist Stefan Ulmer told reporters after the large truck came back from a 10-kilometer drive around the campus of Europe’s main physics laboratory.

While that might not sound like a big distance, Ulmer, a spokesman for CERN’s BASE experiment probing the asymmetry between matter and antimatter in the universe, said it marked the “starting point to a new era.”

‘Gray-box’ AI reveals why catalysts work while speeding discovery

Self-driving laboratories (SDLs) powered by artificial intelligence (AI) are rapidly accelerating materials discovery, but can they also explain their results? Researchers from the Theory Department of the Fritz Haber Institute, working with BASF and BasCat, the UniCat BASF JointLab, show that they can.

Their new AI-driven strategy works hand-in-hand with SDLs to identify better catalysts while revealing the chemistry behind their performance. The approach was validated on the industrially crucial conversion of propane into propylene.

An SDL couples an AI that plans the experiments with laboratory automation and robotics. In the race to develop better materials, AI and SDLs are often celebrated for one main reason: speed.

CERN hails delicate test on transporting antimatter as a scientific success

Scientists in Geneva took some antiprotons out for a spin—a very delicate one—in a truck, in a never-tried-before test drive that has been deemed a success.

If this so-called antimatter had come into contact with actual matter, even for a fraction of an instant, it would have been annihilated in a quick flash of energy. So experts at the European Organization for Nuclear Research, known as CERN, had to be extra careful when they took 92 antiprotons on the road for a short ride on Tuesday.

The antiprotons were suspended in a vacuum inside a specially designed box and held in place by supercooled magnets.

Mussel-inspired glue from recycled plastics can be detached and reused

Researchers at the Department of Energy’s Oak Ridge National Laboratory have invented a reusable adhesive from waste polymers that is tougher than commercial glues, works underwater as well as in dry environments, and bonds a variety of materials, including wood, glass, metal, paper and polymers.

Inspired by the way mussels stick stubbornly to surfaces, the innovative adhesive contains reversible chemical crosslinkers that allow the hardened glue to soften, detach and be reused, unlike current glues, which set permanently after one use.

Today’s projects typically require different glues for different material surfaces—white glue for grade-school art projects, polyvinyl acetates for bookbinding, polyurethanes for shoemaking, silicones for sealing windows and affixing electronic parts, and industrial epoxies for joining aircraft and automobile components.

Topology helps build more robust photonic networks

Penn-led researchers have shown for the first time that multiple, information-carrying light signals can be safely guided through chip-based, reconfigurable networks using topology, the esoteric branch of mathematics that says donuts and mugs are identical. Because topological properties remain stable even when objects are deformed—hence the field equating mugs and donuts, since both have one opening—the advance could help make light-based technologies for computing and communications more powerful and reliable.

“We already knew how to guide light using topology,” says Liang Feng, Professor in Materials Science and Engineering (MSE) with a secondary appointment in Electrical and Systems Engineering (ESE) within Penn Engineering and senior author of a study in Nature Physics describing the result. “But we had never been able to guide multiple, concurrent signals before.”

That opens the door to building networks of chips that communicate using light while taking advantage of the robustness topology provides. “Signals guided by these principles can be extremely reliable,” says Feng. “It’s like building a highway for light where even large potholes have no effect on traffic—it’s as if the defects simply aren’t there.”

Pareto optimality reveals an atlas of cellular archetypes

This pattern is the signature of Pareto optimality, a mathematical concept describing how competing objectives create a “frontier” of optimal solutions. Just as you can’t make a car both maximally fast and maximally fuel-efficient without compromise, cells can’t simultaneously optimize all biological functions. A cell might specialize in energy production, defense, or growth—but rarely all three equally.
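The car analogy can be made concrete: given candidate phenotypes scored on competing objectives, the Pareto front is simply the set of candidates that no other candidate matches or beats on every objective at once. A minimal, illustrative sketch with made-up scores (not the paper's actual pipeline):

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    A point q dominates p when q is at least as good (here: as large)
    in every objective and is not identical to p.
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] >= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy "phenotypes" scored on two competing tasks,
# e.g. (energy production, defense); higher is better on both.
cells = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.4, 0.4), (0.2, 0.3)]
print(pareto_front(cells))  # → [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
```

The last two cells are strictly worse than (0.5, 0.5) on both tasks, so any pruning of non-optimal phenotypes would remove them; the three survivors each embody a different compromise on the trade-off.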

We hypothesized that the phenotypic variation within cell types is explained by multiobjective optimization and used Tabula Sapiens to test this hypothesis. The Tabula Sapiens Atlas v1 is a single-cell RNA sequencing dataset containing 456,101 high-quality single-cell transcriptomes processed via droplet microfluidic emulsion, covering 58,870 genes across 174 cell types, 25 tissues, and 15 donors (16). We applied quality-control filters to remove cells that were outliers on several metrics, yielding 309,193 cells across 173 cell types, 24 tissues, and 14 donors (SI Appendix, Fig. S1 and Table S1). Cell-type abundance filters left 110 cell types across the same numbers of tissues and donors, yielding 440 distinct donor-tissue-cell type strata for analysis (15, 17).

The only assumption we make in this analysis is that fitness is an increasing function of performance (14). Then, if there is a trade-off in performing multiple tasks, optimal phenotypes (i.e., those that maximize fitness) must lie in a region described by convex combinations of points that each maximize a single task’s performance (14). This region is called the Pareto front. Any pruning mechanism that removes nonoptimal phenotypes would restrict observed phenotypes to the Pareto front; pruning is a pervasive strategy across biology, and there could be a host of pruning mechanisms in multicellular organisms.
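The geometric claim is easy to check directly in low dimensions: a phenotype is consistent with the Pareto front exactly when it can be written as a convex combination of the archetypes (the single-task optima), i.e. when it lies inside their hull. A toy sketch for three hypothetical archetypes in a 2-D trait space, using barycentric coordinates (illustrative only; the paper's analysis works in high-dimensional expression space):

```python
def barycentric(p, a, b, c):
    """Express a 2-D point p in barycentric coordinates over triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return (w1, w2, 1.0 - w1 - w2)

def in_archetype_hull(p, archetypes, tol=1e-9):
    """A phenotype is a convex combination of the three archetypes
    iff all of its barycentric weights are non-negative."""
    return all(w >= -tol for w in barycentric(p, *archetypes))

# Hypothetical archetypes: a specialist in task 1, a specialist in task 2,
# and a low-investment state that spends on neither.
arch = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
print(in_archetype_hull((0.3, 0.3), arch))  # True: a feasible generalist
print(in_archetype_hull((0.8, 0.8), arch))  # False: would beat the trade-off
```

The second point is infeasible precisely because it would outperform the trade-off the archetypes define, which is what the Pareto-front restriction rules out.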

This approach does not require any assumptions about underlying regulatory dynamics or interactions among units. The Pareto front simply describes the region of optimal phenotypes, and its vertices are phenotypes each optimal at some task. Etiology and underlying regulatory dynamics can shape the Pareto front, but do not contradict that optimal phenotypes must lie on it (18). The elegance and power of Pareto optimality are that no specific selection mechanism or regulatory dynamics are required to arrive at its conclusions.

How an acid found in grapes could help recycle battery metals

Cobalt and nickel are vital components for batteries, superalloys and catalysts, used in technologies ranging from smartphones to jet engines. But when it comes to recycling, they are notoriously difficult to separate because they are chemically nearly identical. To solve this, a team led by scientists at Johns Hopkins University in the United States has developed a cleaner and cheaper way to extract these elements. And it is thanks in part to grapes.

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.

Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.

