
How Tesla’s New Products Will Change Energy Forever

Tesla’s new energy products, such as the Megapack and Megablock, have the potential to revolutionize energy storage and generation, drive decentralization and grid resilience, and support widespread AI adoption, potentially growing Tesla’s energy business to $50 billion in revenue and $10 billion in annual gross margin.

Questions to inspire discussion.

Energy Storage and Grid Management.

🔋 Q: How does Tesla’s Megapack improve energy storage? A: The Megapack offers 20% more energy density and 25% more energy per unit, providing 8 hours of storage and expanding the total addressable market for renewable energy.

⚡ Q: What is the Megablock and how does it enhance efficiency? A: The Megablock is an all-in-one unit that combines transformer and switchgear, simplifying processes and reducing cabling and on-site assembly for a more streamlined, efficient product.

🔌 Q: How do battery storage systems compare to traditional grid power? A: Battery storage can deliver power almost instantly, while conventional grid generation has to spool up and down, making batteries far better at managing the wild swings in data center load profiles.

Data Centers and AI Energy Demands.

VaultGemma: The world’s most capable differentially private LLM

As AI becomes more integrated into our lives, building it with privacy at its core is a critical frontier for the field. Differential privacy (DP) offers a mathematically sound solution by adding calibrated noise to prevent memorization of training data. However, applying DP to LLMs introduces trade-offs that are essential to understand. DP noise alters traditional scaling laws (the rules describing how performance changes with scale): it reduces training stability (the model’s ability to learn consistently without catastrophic events like loss spikes or divergence) and significantly increases batch size (the number of training examples processed together) and computation costs.
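As a rough illustration of the mechanism, a DP-SGD-style update clips each example’s gradient and then adds Gaussian noise scaled to that clip norm. The minimal sketch below is illustrative only; the clip norm, noise multiplier, and toy gradients are assumptions, not VaultGemma’s actual training configuration.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One differentially private update: clip each example's gradient,
    average, then add Gaussian noise scaled to the clip norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound sensitivity
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the averaged gradient shrinks as batch size grows,
    # which is one reason DP training favors very large batches.
    sigma = noise_multiplier * clip_norm / len(clipped)
    noisy_grad = mean_grad + np.random.normal(0.0, sigma, size=mean_grad.shape)
    return -lr * noisy_grad  # parameter update

# Toy usage: four per-example gradients for a three-parameter model
grads = [np.random.randn(3) for _ in range(4)]
print(dp_sgd_step(grads))
```

Because the noise added to the averaged gradient shrinks as the batch grows, DP training pushes toward the very large batches noted above.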

Our new research, “Scaling Laws for Differentially Private Language Models”, conducted in partnership with Google DeepMind, establishes laws that accurately model these intricacies, providing a complete picture of the compute-privacy-utility trade-offs. Guided by this research, we’re excited to introduce VaultGemma, the largest open model (1B parameters) trained from scratch with differential privacy. We are releasing the weights on Hugging Face and Kaggle, alongside a technical report, to advance the development of the next generation of private AI.

Doing The Math On CPU-Native AI Inference

A number of chip companies, importantly Intel and IBM but also the Arm collective and AMD, have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and machine learning (ML) workloads. The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.

Just to rattle off a few of them, consider the impending “Cirrus” Power10 processor from IBM, which is due in a matter of days from Big Blue in its high-end NUMA machines and which has a new matrix math engine aimed at accelerating machine learning. Or IBM’s “Telum” z16 mainframe processor coming next year, which was unveiled at the recent Hot Chips conference and which has a dedicated mixed-precision matrix math core for the CPU cores to share. Intel is adding its Advanced Matrix Extensions (AMX) to its future “Sapphire Rapids” Xeon SP processors, which should have been here by now but which have been pushed out to early next year. Arm Holdings has created future Arm core designs, the “Zeus” V1 core and the “Perseus” N2 core, that will have substantially wider vector engines supporting the mixed-precision math commonly used for machine learning inference, too. Ditto for the vector engines in the “Milan” Epyc 7003 processors from AMD.

All of these chips are designed to keep inference on the CPUs, where in many cases it belongs for reasons of data security, data compliance, and application latency.
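For a concrete sense of what these matrix engines compute, the NumPy sketch below mimics the int8-multiply, int32-accumulate pattern behind mixed-precision inference, with a single float rescale at the end. The shapes and quantization scales are illustrative assumptions, not tied to any particular chip.

```python
import numpy as np

def quantize(x, scale):
    """Map float32 values to int8 with a simple symmetric scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Toy activation and weight matrices in float32
a = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 5).astype(np.float32)
sa, sw = 0.05, 0.03  # illustrative quantization scales

# Mixed precision: int8 inputs, int32 accumulation (roughly what an
# AMX-style tile engine or a wide vector unit does per instruction),
# then one float32 rescale to recover real-valued outputs.
acc = quantize(a, sa).astype(np.int32) @ quantize(w, sw).astype(np.int32)
out = acc.astype(np.float32) * (sa * sw)

print(np.max(np.abs(out - a @ w)))  # quantization error remains modest
```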

An AI model can forecast harmful solar winds days in advance

Scientists at NYU Abu Dhabi (NYUAD) have developed an artificial intelligence (AI) model that can forecast solar wind speeds up to four days in advance, significantly more accurately than current methods. The study is published in The Astrophysical Journal Supplement Series.

Solar wind is a continuous stream of charged particles released by the sun. When these particles speed up, they can cause “space weather” events that disturb Earth’s atmosphere, drag satellites out of orbit, damage their electronics, and interfere with power grids. In 2022, one such event caused SpaceX to lose 40 Starlink satellites, underscoring the urgent need for better forecasting.

The NYUAD team, led by Postdoctoral Associate Dattaraj Dhuri and Co-Principal Investigator at the Center for Space Science (CASS) Shravan Hanasoge, trained their AI model using high-resolution ultraviolet (UV) images from NASA’s Solar Dynamics Observatory, combined with historical records of solar wind.
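The paper’s architecture is not reproduced here, but the basic shape of the task (a UV image of the sun in, a future wind speed out) can be sketched as a small convolutional regressor. Everything below, from the layer sizes to the 128x128 input and the 450 km/s dummy target, is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class SolarWindRegressor(nn.Module):
    """Toy stand-in: map a single-channel UV image to one scalar,
    the predicted solar wind speed some days ahead."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global image summary
        )
        self.head = nn.Linear(32, 1)              # speed in km/s

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SolarWindRegressor()
fake_uv = torch.randn(8, 1, 128, 128)             # synthetic "UV images"
pred = model(fake_uv)                             # shape (8, 1)
loss = nn.functional.mse_loss(pred, torch.full((8, 1), 450.0))
loss.backward()                                   # ordinary supervised step
```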

Machine learning unravels quantum atomic vibrations in materials

Caltech scientists have developed an artificial intelligence (AI)–based method that dramatically speeds up calculations of the quantum interactions that take place in materials. In new work, the group focuses on interactions among atomic vibrations, or phonons—interactions that govern a wide range of material properties, including heat transport, thermal expansion, and phase transitions. The new machine learning approach could be extended to compute all quantum interactions, potentially enabling encyclopedic knowledge about how particles and excitations behave in materials.

Scientists like Marco Bernardi, professor of applied physics, physics, and materials science at Caltech, and his graduate student Yao Luo (MS ’24) have been trying to find ways to speed up the gargantuan calculations required to understand such particle interactions from first principles in real materials, that is, beginning with only a material’s atomic structure and the laws of quantum mechanics.

Last year, Bernardi and Luo developed a data-driven method based on a technique called singular value decomposition (SVD) to simplify the enormous mathematical matrices scientists use to represent the interactions between electrons and phonons in a material.
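The gist of an SVD-based compression, sketched on a random stand-in matrix rather than a real electron-phonon coupling matrix: keep only the largest singular values and work with the resulting low-rank factors. The rank cutoff here is arbitrary; in practice it would be chosen from the spectrum of the actual interaction matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a large interaction matrix with low effective rank
M = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 500))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 40                                   # singular values retained
M_k = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k reconstruction

full = M.size
lowrank = U[:, :k].size + k + Vt[:k].size
print(f"relative error: {np.linalg.norm(M - M_k) / np.linalg.norm(M):.1e}")
print(f"storage: {full / lowrank:.1f}x fewer numbers than the full matrix")
```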

Machine learning and quantum chemistry unite to simulate catalyst dynamics

Catalysts play an indispensable role in modern manufacturing. More than 80% of all manufactured products, from pharmaceuticals to plastics, rely on catalytic processes at some stage of production. Transition metals, in particular, stand out as highly effective catalysts because their partially filled d-orbitals allow them to easily exchange electrons with other molecules. This very property, however, makes them challenging to model accurately, requiring precise descriptions of their electronic structure.

Designing efficient transition-metal catalysts that can perform under realistic conditions requires more than a static snapshot of a reaction. Instead, we need to capture the dynamic picture—how molecules move and interact at different temperatures and pressures, where atomic motion fundamentally shapes catalytic performance.

To meet this challenge, the lab of Prof. Laura Gagliardi at the University of Chicago Pritzker School of Molecular Engineering (UChicago PME) and Chemistry Department has developed a powerful new tool that harnesses electronic structure theories and machine learning to simulate transition metal catalytic dynamics with both accuracy and speed.
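In broad strokes, tools of this kind follow a three-step loop: generate reference energies from an electronic-structure method, fit a fast surrogate model, then let the surrogate drive the dynamics. The sketch below compresses that workflow into a one-dimensional toy with a hand-written velocity Verlet integrator; the double-well “potential” and polynomial surrogate are assumptions for illustration, not the UChicago group’s actual models.

```python
import numpy as np

# Step 1: reference data from an "expensive" method. A toy 1-D
# double well stands in for quantum-chemistry energies here.
def reference_energy(x):
    return (x**2 - 1.0)**2

xs = np.linspace(-2.0, 2.0, 200)
es = reference_energy(xs)

# Step 2: fit a cheap surrogate (a polynomial here; in practice a
# neural network or kernel model trained on energies and forces).
dEdx = np.polyder(np.poly1d(np.polyfit(xs, es, deg=8)))
def surrogate_force(x):
    return -dEdx(x)   # F = -dE/dx

# Step 3: molecular dynamics with the surrogate via velocity Verlet.
x, v, m, dt = 1.2, 0.0, 1.0, 0.01
f = surrogate_force(x)
for _ in range(1000):
    x += v * dt + 0.5 * (f / m) * dt**2
    f_new = surrogate_force(x)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new
print(f"final position: {x:.3f}")  # stays oscillating in one well
```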

New system dramatically speeds the search for polymer materials

MIT researchers developed a fully autonomous platform that can identify, mix, and characterize novel polymer blends until it finds an optimal formulation. This system could streamline the design of new composite materials for sustainable biocatalysis, better batteries, cheaper solar panels, and safer drug-delivery materials.
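A minimal sketch of the closed-loop idea, not MIT’s actual platform or algorithm: propose a blend, “measure” its property with a stand-in function, refit a surrogate model, and repeat. The two-component blend, noise level, and optimistic pick rule are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def measure_blend(frac_a):
    """Hypothetical instrument: property of a two-polymer blend as a
    function of component A's fraction, with measurement noise."""
    return -(frac_a - 0.63) ** 2 + 0.02 * rng.standard_normal()

# Seed with a few random blends, then loop: fit surrogate, test the
# most promising candidate, fold the result back in, repeat.
X = rng.uniform(0.0, 1.0, size=(3, 1))
y = np.array([measure_blend(x[0]) for x in X])
candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-4, normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    pick = candidates[np.argmax(mean + std)]   # optimism under uncertainty
    X = np.vstack([X, pick])
    y = np.append(y, measure_blend(pick[0]))

print(f"best blend found: {X[np.argmax(y), 0]:.2f} (true optimum is 0.63)")
```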
