
Machine learning algorithm fully reconstructs LHC particle collisions

The CMS Collaboration has shown, for the first time, that machine learning can be used to fully reconstruct particle collisions at the LHC. This new approach can reconstruct collisions more quickly and precisely than traditional methods, helping physicists better understand LHC data. The paper has been submitted to the European Physical Journal C and is currently available on the arXiv preprint server.

Each proton–proton collision at the LHC sprays out a complex pattern of particles that must be carefully reconstructed to allow physicists to study what really happened. For more than a decade, CMS has used a particle-flow (PF) algorithm, which combines information from the experiment’s different detectors, to identify each particle produced in a collision. Although this method works remarkably well, it relies on a long chain of hand-crafted rules designed by physicists.

The new CMS machine-learning-based particle-flow (MLPF) algorithm takes a fundamentally different approach, replacing much of the rigid hand-crafted logic with a single model trained directly on simulated collisions. Instead of being told how to reconstruct particles, the algorithm learns what particles look like in the detectors, much as humans learn to recognize faces without memorizing explicit rules.
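
As a rough illustration of what "trained directly on simulated collisions" means, here is a minimal supervised-learning sketch in PyTorch. The feature count, particle classes, network size, and toy data are all our assumptions; the actual MLPF model is a far larger graph/transformer-style network trained on full detector simulation.

```python
# Minimal sketch of the supervised-learning idea behind MLPF (illustrative
# only). Each detector element is mapped to a particle hypothesis: a class
# label plus regressed momentum components.
import torch
import torch.nn as nn

N_FEATURES = 8    # per detector element: energy deposits, position, detector type, ...
N_CLASSES = 6     # "no particle" + 5 particle types (toy choice)

class ToyMLPF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_FEATURES, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classify = nn.Linear(hidden, N_CLASSES)   # particle type
        self.regress = nn.Linear(hidden, 3)            # pT, eta, phi

    def forward(self, x):
        h = self.encoder(x)
        return self.classify(h), self.regress(h)

model = ToyMLPF()
# Toy "simulated event": 100 detector elements with known generator-level truth.
x = torch.randn(100, N_FEATURES)
true_type = torch.randint(0, N_CLASSES, (100,))
true_mom = torch.randn(100, 3)

logits, momenta = model(x)
loss = nn.functional.cross_entropy(logits, true_type) \
     + nn.functional.mse_loss(momenta, true_mom)
loss.backward()  # training would loop this over many simulated events
```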

The Truth About Merging With AI

Will humans one day merge with artificial intelligence? Futurist Ray Kurzweil predicts a coming “singularity” where humans upload their minds into digital systems, expanding intelligence and potentially achieving immortality. But critics argue that consciousness, creativity, love, and spiritual awareness cannot be reduced to algorithms. This discussion explores brain-computer interfaces, quantum mechanics and the mind, the Ship of Theseus identity paradox, and whether a digital copy of your brain would actually be you. Is AI-driven immortality possible—or does it misunderstand what it means to be human?

Every year the Center sponsors COSM, an exclusive national summit on the converging technologies remaking the world as we know it. Visit COSM.TECH (https://cosm.tech/) for information on COSM 2025, to be held November 19–21 at the beautiful Hilton Scottsdale Resort and Spa in Scottsdale, AZ. Registration will launch mid-July.

The mission of the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute is to explore the benefits as well as the challenges raised by artificial intelligence (AI) in light of the enduring truth of human exceptionalism. People know at a fundamental level that they are not machines. But faulty thinking can cause people to assent to views that in their heart of hearts they know to be untrue. The Bradley Center seeks to help individuals—and our society at large—to realize that we are not machines while at the same time helping to put machines (especially computers and AI) in proper perspective.

Comparative single-cell lineage bias in human and murine hematopoietic stem cells

A comparative single-cell analysis reveals similarities and differences in lineage bias between human and murine hematopoietic stem cells. This work deepens our understanding of how lineage commitment is regulated across species and provides a valuable framework for translating insights from mouse models to human hematopoiesis.


The commitment of hematopoietic stem cells (HSC) to myeloid, erythroid, and lymphoid lineages is influenced by microenvironmental cues and governed by cell-intrinsic and epigenetic characteristics that are unique to the HSC population. To investigate the nature of lineage commitment bias in human HSC, mitochondrial single-cell assay for transposase-accessible chromatin (ATAC) sequencing was used to identify somatic mutations in mitochondrial DNA that act as natural genetic barcodes for tracking the ex vivo differentiation of HSC into mature cells. Clonal lineages of human CD34+ cells and their mature progeny were normally distributed across the hematopoietic lineage tree, without evidence of significant skewing. To investigate commitment bias in vivo, mice were transplanted with limited numbers of long-term HSC (LT-HSC). Variation in the ratio of myeloid to lymphoid cells between donors was suggestive of skewed output but was not altered by increasing the number of LT-HSC. These data suggest that the variation in myeloid and lymphoid engraftment is a stochastic process dominated by the irradiated recipient niche, with minor contributions from cell-intrinsic lineage biases of LT-HSC.
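
For illustration only, the clonal-output analysis can be sketched as a grouping computation: cells that share a mitochondrial variant form a clone, and each clone's mature progeny are tallied by lineage. The table layout, column names, and data below are our assumptions, not the study's pipeline.

```python
# Hedged sketch of barcode-based clonal lineage tracking: group cells by a
# shared mitochondrial-DNA variant ("natural barcode") and compute each
# clone's lineage composition. All data here are illustrative.
import pandas as pd

cells = pd.DataFrame({
    "mito_variant": ["m1", "m1", "m1", "m2", "m2", "m3", "m3", "m3"],
    "lineage":      ["myeloid", "myeloid", "erythroid",
                     "lymphoid", "myeloid",
                     "erythroid", "erythroid", "lymphoid"],
})

# Fraction of each clone's mature progeny falling into each lineage.
clone_output = (
    cells.groupby(["mito_variant", "lineage"]).size()
         .groupby(level=0).transform(lambda s: s / s.sum())
         .unstack(fill_value=0.0)
)
print(clone_output)
# Skewing could then be assessed by comparing the spread of these fractions
# against a null model in which cells are assigned to lineages at random.
```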

Hematopoietic stem cells (HSC) are classically considered to have the capacity for complete regeneration of the hematopoietic compartment. More recent analyses indicate additional complexity and heterogeneity in the HSC compartment, with lineage-restricted or lineage-biased HSC considered a feature of mammalian hematopoiesis [1–13]. A partial differential equation model of the relationships between hematopoietic stem and progenitor cells (HSPC) emphasizes that myeloid bias cannot be accounted for solely by short-term HSC bias during inflammation but rather involves a combination of HSC and progenitor cell biases [14]. Central to the concept of lineage bias is the assumption that the cells used for studying HSC commitment are HSC, and not multipotent progenitors or lineage-committed progenitors. Changes in the differentiation of cells downstream of the long-term HSC (LT-HSC) must also be evaluated when considering the potential lineage bias of an LT-HSC.
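
For readers unfamiliar with such models, a generic maturity-structured equation conveys the flavor; the form below is an illustrative sketch only, not necessarily the model of reference 14.

```latex
% n_k(m,t): density of cells of lineage k at maturity m and time t;
% v_k: maturation speed, p_k: net proliferation rate, d_k: loss rate.
\frac{\partial n_k}{\partial t}
  + \frac{\partial}{\partial m}\bigl(v_k(m)\, n_k\bigr)
  = \bigl(p_k(m) - d_k(m)\bigr)\, n_k
```

In a framework of this kind, a bias imposed at the HSC boundary (m = 0) and a bias in the downstream rates p_k(m) make different predictions for mature-cell output, which is why both must be examined.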

Q-Day: Catastrophic For Businesses Ignoring Quantum-Resistant Encryption

Quantum computing is not merely a frontier of innovation; it is a countdown. Q-Day is the pivotal moment when scalable quantum computers undermine the cryptographic underpinnings of our digital realm, and it is approaching faster than many realize.

For corporations and governmental entities reliant on outdated encryption methods, Q-Day will not herald a smooth transition; it may signify a digital catastrophe.

Understanding Q-Day: The Quantum Reckoning

Q-Day arrives when quantum machines using Shor’s algorithm can dismantle public-key encryption within minutes—a task that classical supercomputers would require billions of years to accomplish.
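
To see why Shor's algorithm is so threatening, it helps to look at the classical scaffolding around it. The sketch below factors a tiny "RSA" modulus by finding the multiplicative order of a random base, which is the step a quantum computer performs exponentially faster; the numbers are illustrative only.

```python
# Toy classical illustration of the order-finding idea at the heart of
# Shor's algorithm. The quantum speedup is in finding the period r;
# everything else is classical post-processing.
from math import gcd

N = 3233            # toy RSA modulus: 61 * 53
a = 7               # base coprime to N

# Find the multiplicative order r of a mod N by brute force (exponential
# classically; polynomial time on a quantum computer via Shor's algorithm).
r, x = 1, a % N
while x != 1:
    x = (x * a) % N
    r += 1

if r % 2 == 0:
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    print(f"order r = {r}, recovered factors: {p} x {q}")  # 53 x 61
```

For a 2048-bit modulus, the brute-force loop above is hopeless; that gap is exactly what Shor's period finding closes.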

Brain-inspired machines are better at math than expected

Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy supercomputers while revealing new secrets about how our brains process information.

Why Time Doesn’t Exist | Leonard Susskind

We experience time as something that flows. Seconds pass. Moments disappear. The future becomes the present and then turns into the past.

But modern physics does not describe time this way.

In this video, we explore why time — as we intuitively understand it — may not exist at the fundamental level of reality.

Drawing on ideas associated with Leonard Susskind, this documentary examines how relativity and quantum physics challenge the idea of a flowing temporal river. Einstein’s theory removes the notion of a universal present. There is no global “now” that sweeps across the universe.

Without a universal present, the idea of time flowing becomes difficult to define physically.

In the relativistic picture, spacetime is a four-dimensional structure. Events are not created moment by moment. They are embedded in geometry. The equations of physics do not contain a moving present. They describe relations between events.
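
The loss of a global "now" can be stated in one formula. Under a Lorentz boost with speed v along x, an event's time coordinate mixes with its position:

```latex
t' = \gamma \left( t - \frac{v x}{c^{2}} \right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Two events with equal t but different x are simultaneous for one observer yet occur at different t' for another, so no single surface of "now" is shared by all observers.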

Silicon metasurfaces boost optical image processing with passive intensity-based filtering

Of the many feats achieved by artificial intelligence (AI), the ability to process images quickly and accurately has had an especially impressive impact on science and technology. Now, researchers in the McKelvey School of Engineering at Washington University in St. Louis have found a way to improve the efficiency and capability of machine vision and AI diagnostics using optical systems instead of traditional digital algorithms.

Mark Lawrence, an assistant professor of electrical and systems engineering, and doctoral student Bo Zhao developed this approach to achieve efficient processing performance without high energy consumption. All-optical image processing is typically constrained by the lack of optical nonlinearity, which usually requires high light intensities or external power to achieve. The new method instead uses nanostructured films called metasurfaces to enhance optical nonlinearity passively, making the approach practical for everyday use.
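
To make "filtering images based on light intensity" concrete, here is a purely numerical toy: a transmission function that passes bright pixels and suppresses dim ones. The sigmoid form, threshold, and sharpness are our assumptions and only stand in for, rather than model, the metasurface physics reported in the paper.

```python
# Numerical toy of intensity-based filtering: transmission depends
# nonlinearly on local light intensity, so bright and dim image features
# are passed differently.
import numpy as np

def intensity_filter(image, threshold=0.5, sharpness=20.0):
    """Soft threshold in intensity: transmit mostly the pixels whose
    intensity exceeds `threshold` (all parameters are illustrative)."""
    transmission = 1.0 / (1.0 + np.exp(-sharpness * (image - threshold)))
    return image * transmission

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # toy "optical field intensity"
print(intensity_filter(img).round(2))
```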

Their work shows the ability to filter images based on light intensity, potentially making all-optical neural networks more powerful without using additional energy. Results of the research were published online in Nano Letters on Jan. 21, 2026.

AI method accelerates liquid simulations by learning fundamental physical relationships

Researchers at the University of Bayreuth have developed a method using artificial intelligence that can significantly speed up the calculation of liquid properties. The AI approach predicts the chemical potential—an indispensable quantity for describing liquids in thermodynamic equilibrium. The researchers present their findings in a new study published in Physical Review Letters.

Many common AI methods are based on the principle of supervised machine learning: a model—for instance, a neural network—is specifically trained to predict a particular target quantity directly. Image recognition illustrates this approach: the system is shown numerous images, each labeled according to whether or not it depicts a cat, and on this basis it learns to identify cats in new, previously unseen images.

“However, such a direct approach is difficult in the case of the chemical potential, because determining it usually requires computationally expensive algorithms,” says Prof. Dr. Matthias Schmidt, Chair of Theoretical Physics II at the University of Bayreuth. He and his research associate Dr. Florian Sammüller address this challenge with their newly developed AI method. It is based on a neural network that incorporates the theoretical structure of liquids—and more generally, of soft matter—allowing it to predict their properties with great accuracy.
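
One way to picture a network that "incorporates the theoretical structure of liquids" is via classical density functional theory, where an exact identity relates the chemical potential to the one-body direct correlation function c1; a network that learns c1 inherits that structure. The sketch below is our illustration of the idea, not the published model; the architecture, window size, and units are assumptions.

```python
# Hedged sketch of building physical structure into the network, in the
# spirit of classical density functional theory: learn the one-body direct
# correlation c1 as a functional of a local density window, then obtain
# the chemical potential from the exact Euler-Lagrange relation
#   mu = kT * ln(rho(x) * Lambda^3) + V_ext(x) - kT * c1(x; [rho]).
import torch
import torch.nn as nn

WINDOW = 21  # density values around position x fed to the network (toy size)

c1_net = nn.Sequential(    # local density profile -> c1(x)
    nn.Linear(WINDOW, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 1),
)

def chemical_potential(rho_window, v_ext, kT=1.0, Lambda=1.0):
    """mu at the window center from the DFT identity above (toy units)."""
    rho_center = rho_window[..., WINDOW // 2]
    c1 = c1_net(rho_window).squeeze(-1)
    return kT * torch.log(rho_center * Lambda**3) + v_ext - kT * c1

rho = torch.rand(5, WINDOW) + 0.1   # toy density windows for 5 positions
v_ext = torch.zeros(5)
print(chemical_potential(rho, v_ext).detach())
```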

JUST RECORDED: Elon Musk Announces MAJOR Company Shakeup

Elon Musk is announcing significant changes and advancements across his companies, primarily focused on developing and integrating artificial intelligence (AI) to drive innovation, productivity, and growth.

Questions to inspire discussion:

Product Development & Market Position

🚀 Q: How fast did xAI achieve market leadership compared to competitors?

A: xAI reached number one in voice, image, video generation, and forecasting with the Grok 4.20 model in just 2.5 years, outpacing competitors who are 5–20 years old with larger teams and more resources.

📱 Q: What scale did xAI’s everything app reach in one year?

A: In one year, xAI went from nothing to 2M Teslas using Grok, deployed a Grok voice agent API, and built an everything app handling legal questions, slide decks, and puzzles.

AI Discovers Geophysical Turbulence Model

One of the biggest challenges in climate science and weather forecasting is predicting the effects of turbulence at spatial scales smaller than the resolution of atmospheric and oceanic models. Simplified sets of equations known as closure models can predict the statistics of this “subgrid” turbulence, but existing closure models are prone to dynamic instabilities or fail to account for rare, high-energy events. Now Karan Jakhar at the University of Chicago and his colleagues have applied an artificial-intelligence (AI) tool to data generated by numerical simulations to uncover an improved closure model [1]. The finding, which the researchers subsequently verified with a mathematical derivation, offers insights into the multiscale dynamics of atmospheric and oceanic turbulence. It also illustrates that AI-generated prediction models need not be “black boxes,” but can be transparent and understandable.

The team trained their AI—a so-called equation-discovery tool—on “ground-truth” data that they generated by performing computationally costly, high-resolution numerical simulations of several 2D turbulent flows. The AI selected the smallest number of mathematical functions (from a library of 930 possibilities) that, in combination, could reproduce the statistical properties of the dataset. Previously, researchers have used this approach to reproduce only the spatial structure of small-scale turbulent flows. The tool used by Jakhar and collaborators filtered for functions that correctly represented not only the structure but also energy transfer between spatial scales.
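
The selection step can be sketched with a SINDy-style sparse regression, the generic form of library-based equation discovery (the team's actual tool and its 930-term library are not reproduced here). A toy relation is recovered from data by repeatedly fitting and discarding small coefficients:

```python
# Sketch of library-based equation discovery: recover y = 1.5*x - 0.5*x^3
# from noisy samples by selecting the fewest library terms that fit.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.5 * x - 0.5 * x**3 + 0.01 * rng.standard_normal(200)

# Candidate library of functions (the real tool used 930 candidates).
library = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x), np.cos(x)])
names = ["1", "x", "x^2", "x^3", "sin x", "cos x"]

# Sequential thresholded least squares: fit, zero small coefficients, refit.
coef = np.linalg.lstsq(library, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    active = ~small
    coef[active] = np.linalg.lstsq(library[:, active], y, rcond=None)[0]

print({n: round(c, 3) for n, c in zip(names, coef) if c != 0.0})
# Expected output close to {"x": 1.5, "x^3": -0.5}
```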

They tested the performance of the resulting closure model by applying it to a computationally practical, low-resolution version of the dataset. The model accurately captured the detailed flow structures and energy transfers that appeared in the high-resolution ground-truth data. It also predicted statistically rare conditions corresponding to extreme-weather events, which have challenged previous models.
