Improvements in the Articulate Medical Intelligence Explorer, a large language model designed for diagnostic dialogue, enable the model to request, interpret and reason about multimodal medical data.
Eighty years ago, Penn researchers J. Presper Eckert and John Mauchly launched the age of electronic computing by harnessing electrons to solve complex numerical problems with ENIAC, the world’s first general-purpose electronic computer. Today, that same architecture still underlies general computing, but electrons are beginning to show their limits. Because they carry a charge, they lose energy as heat, encounter resistance as they move through materials, and become harder to manage as chips incorporate more transistors and handle larger volumes of data.
With artificial intelligence pushing today’s hardware to process, move, and cool more, Penn physicists led by Bo Zhen in the School of Arts & Sciences are looking to the electron’s massless counterpart, the photon, to shoulder more of the load.
“Because they are charge-neutral and have zero rest mass, photons can carry information quickly over long distances with minimal loss, dominating communications technology,” explains Li He, co-first author of a paper published in Physical Review Letters and a former postdoctoral researcher in the Zhen Lab. “But that neutrality means they barely interact with their environment, making them bad at the sort of signal-switching logic that computers depend on.”
During the second day of Pwn2Own Berlin 2026, competitors collected $385,750 in cash awards after exploiting 15 unique zero-day vulnerabilities in multiple products, including Windows 11, Microsoft Exchange, and Red Hat Enterprise Linux for Workstations.
The Pwn2Own Berlin 2026 hacking competition takes place at the OffensiveCon conference from May 14 to May 16 and focuses on enterprise technologies and artificial intelligence.
Security researchers can earn over $1,000,000 in cash and prizes by hacking fully patched products in the web browser, enterprise applications, cloud-native/container environments, virtualization, local privilege escalation, servers, local inference, and LLM categories.
In the 1940s the Hungarian-born American mathematician John von Neumann wrote about machines that could make exact copies of themselves. He envisaged a kind of robot equipped with a computer brain that could be programmed to reproduce itself from raw materials taken from its surroundings. It wasn’t long before some people suggested that von Neumann machines, in the form of robot spacecraft, would be a great way for us to explore the Galaxy.
Yann LeCun, Turing Award winner and former Chief AI Scientist at Meta, joins Jacob Effron. The conversation centers on Yann’s contrarian thesis that LLMs are a dead-end on the path to human-level intelligence, despite being useful products — because they can’t predict the consequences of their actions, can’t plan, and fundamentally can’t model the messy, high-dimensional real world. He unpacks his alternative architecture, JEPA (Joint Embedding Predictive Architecture), which learns abstract representations rather than generating pixel-level predictions, and explains why this approach is essential for robotics, industrial applications, and any system that needs to operate beyond the substrate of language. Yann also reveals the real story behind his departure from Meta (he had zero technical influence on Llama, contrary to public narrative), the genesis of his Tapestry project for sovereign open-source AI, why he believes LLMs are intrinsically unsafe, where he diverges from his fellow Turing laureates Hinton and Bengio, and why he predicts the industry will recognize the paradigm shift by early 2027. Throughout, he offers candid reflections on the tension between research and product at major labs, and why he intentionally headquartered AMI Labs in Paris with zero Silicon Valley VC money.
0:00 Intro.
01:45 Why LLMs Aren’t the Path to Intelligence.
07:51 AMI and World Models.
12:07 The JEPA Architecture Explained.
15:55 Problems with Robotics Models Today.
20:37 Silicon Valley Herd Behavior.
28:18 Tapestry: Sovereign AI for the Rest of the World.
35:49 OpenAI Is the Next Sun Microsystems.
40:51 Why Yann’s Views Diverged from Hinton & Bengio.
44:32 LLMs Are Intrinsically Unsafe.
58:00 Why Yann Left Meta.
1:00:26 Reflections on FAIR.
1:12:11 Advice for PhD Students.
LeWorldModel Paper: https://arxiv.org/abs/2603.
With your host:
@jacobeffron.
Partner at Redpoint.
Potassium ions (K⁺) are essential for all cells and living organisms. Scientists have long believed that K⁺ merely passes through ion channels and transporters, rather than acting as an extracellular ligand or molecular “switch.” Indeed, there had been no clear evidence that K⁺ functions as a ligand for membrane proteins in animals or plants—until now.
“Unexpectedly, we made this discovery serendipitously while testing the effect of aspartic acid, with K⁺ added as a counter cation, on Alka, an ion channel located in the brain of Drosophila melanogaster,” said the author. “The compound was effective. At first, we thought the effect was due to aspartic acid, but we ultimately realized that it was caused by K⁺, meaning that Alka functions as a membrane receptor that detects extracellular K⁺ as a ligand.”
Ion channel currents in Alka-expressing cells changed significantly in response to K⁺ levels. By combining electrophysiological analysis with AlphaFold3, an AI-based protein-structure prediction tool, the researchers identified the K⁺-binding site in Alka. This site creates a chemical environment favorable for K⁺, similar to that found in aqueous solution or in the well-known selectivity filter of K⁺ channels.
Animals move with a level of precision and adaptability that robots struggle to match. In Carnegie Mellon University’s Department of Mechanical Engineering, researchers are developing a new AI-driven approach to uncover how brains and bodies work together. By turning complex biological systems into models that can be tested and refined, the team seeks to understand and replicate animal performance in robotic systems.
One focus of The Biohybrid and Organic Robotics Lab is neuromechanical models that simulate how neural signals and physical movement continuously inform one another. These models are powerful but difficult to build: with countless parameters, even a small miscalibration can produce large gaps between simulated behavior and what researchers observe in real animals.
“Biological systems are incredibly complex,” said Camila Fernandez, a Ph.D. candidate in the Department of Mechanical Engineering. “We’re trying to model something where everything affects everything, and it’s not always clear which piece we need to adjust when outcomes don’t match predictions.”
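The calibration problem Fernandez describes — nudging model parameters until simulated behavior matches animal data — can be sketched in miniature. The toy model below (a damped oscillator standing in for a limb segment, fit by a naive coordinate search) is purely illustrative; the function names, parameters, and numbers are assumptions for this sketch and do not come from the CMU lab's actual models.

```python
# Hypothetical sketch: calibrating a toy "neuromechanical" model.
# A limb segment is modeled as a damped oscillator driven by constant
# neural drive; we tune stiffness k and damping c so the simulated
# trajectory matches an "observed" one. All names/values are illustrative.

def simulate(k, c, drive=1.0, dt=0.01, steps=300):
    """Euler-integrate x'' = drive - k*x - c*x' from rest; return x(t)."""
    x, v, traj = 0.0, 0.0, []
    for _ in range(steps):
        a = drive - k * x - c * v
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

def loss(params, observed):
    """Sum of squared errors between simulation and observation."""
    return sum((s - o) ** 2 for s, o in zip(simulate(*params), observed))

# Stand-in for measurements from a real animal (true params k=4.0, c=0.8).
observed = simulate(k=4.0, c=0.8)

# Naive coordinate search: nudge one parameter at a time, keep improvements,
# and shrink the step size so the search settles down.
params, step = [1.0, 1.0], 0.5
for _ in range(200):
    for i in range(len(params)):
        for delta in (+step, -step):
            trial = params.copy()
            trial[i] += delta
            if loss(trial, observed) < loss(params, observed):
                params = trial
    step *= 0.97

print(params)  # typically lands near the "true" values [4.0, 0.8]
```

Even this two-parameter toy hints at the lab's challenge: real neuromechanical models have hundreds of coupled parameters, so a blind search like this becomes infeasible and the "everything affects everything" coupling makes it hard to know which parameter to adjust.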