
Nuclei Limit Neural Network Quantum Simulations

For a fixed number of configurations, a neural-network representation of a quantum state becomes less accurate as the state's non-stabilizerness increases. This demonstrates a clear limit to how well restricted Boltzmann machines can compress and represent highly entangled systems. Calculations on the ground states of medium-mass atomic nuclei reveal non-stabilizerness as a key property governing neural-network performance.
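The article does not include the authors' code; purely as a rough illustration of the kind of ansatz involved, the sketch below evaluates a restricted-Boltzmann-machine amplitude for a spin configuration. The parameter names (a, b, W), the real-valued parameters, and the NumPy implementation are assumptions for illustration, not the study's setup.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM amplitude for a spin configuration s in {-1, +1}^N.

    psi(s) = exp(a . s) * prod_j 2*cosh(b_j + sum_i W_ij * s_i)

    The number of hidden units (len(b)) fixes the expressive capacity; the
    article's point is that states with higher non-stabilizerness need
    disproportionately many such parameters to be represented accurately.
    """
    theta = b + W.T @ s                      # effective field on each hidden unit
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Toy usage: 8 visible spins, 16 hidden units, random parameters.
rng = np.random.default_rng(0)
N, M = 8, 16
a = rng.normal(scale=0.1, size=N)
b = rng.normal(scale=0.1, size=M)
W = rng.normal(scale=0.1, size=(N, M))
s = rng.choice([-1.0, 1.0], size=N)
print(rbm_amplitude(s, a, b, W))
```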

Efficacy and Safety of VMAT2 Inhibitors in the Treatment of Huntington Disease: A Meta-Analysis of Randomized Clinical Trials

In patients with Huntington disease, treatment with vesicular monoamine transporter 2 inhibitors (VMAT2is) improved chorea without significant changes in adverse effects or depressive symptoms.



Influence of Decreased Kidney Function on Plasma Biomarkers of Neurodegenerative Disorders in Routine Care: Confirmation of the Interest of Ratios

This study found that impaired kidney function was associated with elevated plasma levels of cerebral amyloidosis biomarkers, but that ratio-based measures maintained stable sensitivity and specificity for detecting cerebral amyloidosis across all eGFR groups.



AI automates quantum dot voltage tuning for scaling up quantum computing

Semiconductor spin qubits are a promising candidate for the building blocks of next-generation quantum computers because of their potential for integration and compatibility with existing semiconductor technologies. Qubits, the quantum counterparts of the 0s and 1s of a conventional computer, serve as the basic unit of information in a quantum computer. However, practical quantum computers will require a massive number of qubits, each of which must be tuned by adjusting gate voltages, making the development of more efficient tuning methods a critical challenge for the field.
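The article does not describe the tuning procedure itself; as a hedged sketch of what automated gate-voltage tuning generally looks like, the loop below sweeps two gate voltages and scores each setting with a placeholder charge-sensor readout. The function measure_sensor and the scoring heuristic are hypothetical stand-ins, not the Tohoku group's method.

```python
import numpy as np

def measure_sensor(v_gate1, v_gate2):
    """Hypothetical stand-in for a charge-sensor measurement at one
    gate-voltage setting; a real experiment would query the device here."""
    # Synthetic response peaked near a charge transition at (-0.4 V, -0.6 V).
    return np.exp(-((v_gate1 + 0.4) ** 2 + (v_gate2 + 0.6) ** 2) / 0.01)

def coarse_tune(v_range=(-1.0, 0.0), steps=41):
    """Grid-sweep two gate voltages and return the setting with the strongest
    sensor response, i.e. the most likely charge-transition point."""
    voltages = np.linspace(*v_range, steps)
    best, best_score = None, -np.inf
    for v1 in voltages:
        for v2 in voltages:
            score = measure_sensor(v1, v2)
            if score > best_score:
                best, best_score = (v1, v2), score
    return best, best_score

print(coarse_tune())
```

An automated pipeline of this kind replaces the manual sweep-and-inspect cycle; the practical gain comes from letting software, rather than an experimenter, decide which voltage settings to probe next.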

A research group including Yui Muto from Tohoku University’s Graduate School of Engineering, Assistant Professor Motoya Shinozaki and Associate Professor Tomohiro Otsuka from the Advanced Institute for Materials Research (WPI-AIMR), and their colleagues have successfully demonstrated a method that may help make this massive number of qubits much more manageable, moving us one step closer toward scaling up quantum computing. The findings are published in Scientific Reports.

AI accelerators deliver accurate models for challenging quantum chemistry calculations

The most demanding calculations in quantum chemistry can now be solved on graphics processing unit (GPU) supercomputers. A recently published study shows that software adapted to GPU hardware can provide not just speed but also the accuracy needed to solve complex chemistry problems. The work tackled two chemical structures often regarded as too complex and expensive to simulate. The advance, published in the Journal of Chemical Theory and Computation, could allow researchers to make meaningful progress in designing new catalysts and to improve predictions of the behavior of magnetic and electronic materials.

Specifically, the research team—led by computational chemists from NVIDIA, Sandbox AQ, the Wigner Research Centre in Hungary, the Institute for Advanced Study of the Technical University of Munich in Germany, and the Department of Energy’s Pacific Northwest National Laboratory—showed that the NVIDIA Blackwell architecture can effectively tackle these complex simulations. To do so, the researchers combined mathematically exact and approximate approaches.
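The study's software stack is not reproduced here; purely to illustrate why GPU hardware matters for these methods, the snippet below runs the same large tensor contraction of the kind that dominates correlated quantum-chemistry methods on the CPU (NumPy) or, if available, on a GPU (CuPy). The array shapes, the mock data, and the CuPy fallback are assumptions for illustration, not the published code.

```python
import time
import numpy as np

try:
    import cupy as xp      # GPU path, if CuPy and a GPU are available
    on_gpu = True
except ImportError:
    xp = np                # CPU fallback
    on_gpu = False

n = 48                                        # illustrative basis-set size
eri = xp.asarray(np.random.rand(n, n, n, n))  # mock two-electron integrals
t2 = xp.asarray(np.random.rand(n, n, n, n))   # mock doubles amplitudes

start = time.perf_counter()
# A contraction of the kind that dominates coupled-cluster-style methods.
result = xp.einsum('abcd,cdij->abij', eri, t2, optimize=True)
if on_gpu:
    xp.cuda.Stream.null.synchronize()         # wait for the GPU kernel to finish
elapsed = time.perf_counter() - start

print(f"{'GPU' if on_gpu else 'CPU'} contraction: {elapsed:.3f} s")
```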

“Our study shows that AI-oriented hardware can do more than provide speed—it can also power chemically accurate, strongly correlated quantum chemistry at the frontier of what is computationally feasible,” said Sotiris Xantheas, a computational chemist at PNNL and study author. Xantheas also serves as the principal investigator of Scalable Predictive methods for Excitations and Correlated phenomena (SPEC), a Department of Energy initiative.

Training compute of frontier AI models grows by 4-5x per year

I’m curious if anyone knows what this translates to in terms of physical infrastructure — i.e., how many m^3 of data center are needed for X FLOPs of compute per day?


Our expanded AI model database shows that training compute grew 4-5x/year from 2010 to 2024, with similar trends in frontier and large language models.
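To put the headline 4-5x/year figure in perspective, the short calculation below compounds it over the 2010-2024 window covered by the database; the exact annual factors are the reported range, used here purely for illustration.

```python
# Compound the reported annual growth in training compute over 2010-2024.
years = 2024 - 2010                  # 14 years covered by the database
for annual_factor in (4.0, 4.5, 5.0):
    total = annual_factor ** years
    print(f"{annual_factor}x/year over {years} years -> ~{total:.2e}x total growth")
```

At 4-5x per year, frontier training compute grows by roughly eight to ten orders of magnitude over the period, which is why the question of physical data-center footprint above is far from academic.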
