Large language models (LLMs) could help human scientists identify interesting research topics that have not previously been explored, say scientists at Germany’s Karlsruhe Institute of Technology (KIT). By analysing abstracts in materials science publications and mapping connections between different concepts, the model was able to generate predictions for future areas of interest that the KIT team says are more precise than those produced by traditional, rule-based algorithms.
The number of research articles published each year is increasing so quickly that it is impossible for scientists to keep up with everything, observes team leader Pascal Friederich, who heads a KIT research group on artificial intelligence for materials sciences. While experienced scientists know how to find connections between research areas within their field, identifying links between these and other, unfamiliar topics is a different story.
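To make the idea concrete, here is a minimal sketch of one way such concept-level link prediction can work: reduce each abstract to a set of extracted concepts, build a co-occurrence graph, and score concept pairs that are not yet connected. The abstracts, concepts, and scoring rule below are illustrative assumptions, not the KIT team's actual pipeline.

```python
# Minimal sketch of concept-graph link prediction over abstracts.
# Toy data and a simple neighborhood score; NOT the KIT pipeline.
import itertools
import networkx as nx

abstracts = [
    {"perovskite", "solar cell", "defect passivation"},
    {"machine learning", "defect passivation", "band gap"},
    {"perovskite", "band gap", "machine learning"},
]  # each abstract reduced to a set of extracted concepts

# Build a co-occurrence graph: concepts are nodes, shared abstracts create edges.
G = nx.Graph()
for concepts in abstracts:
    for a, b in itertools.combinations(sorted(concepts), 2):
        G.add_edge(a, b)

# Score currently unconnected pairs; a high score suggests two topics that
# share context but have not yet been studied together.
candidates = [
    (a, b) for a, b in itertools.combinations(sorted(G.nodes), 2)
    if not G.has_edge(a, b)
]
for a, b, score in nx.jaccard_coefficient(G, candidates):
    print(f"{a} + {b}: {score:.2f}")
```

A rule-based predictor might stop at a neighborhood score like this; the reported approach instead lets a language model weigh the semantics of the concepts themselves.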
The Academy of Motion Picture Arts and Sciences—probably better known to the world as the Oscars folks—has drawn a firm line in the sand against the use of generative AI, changing its eligibility rules to exclude AI-generated performances and scripts.
The new rules, via The Wrap, state that in the acting categories, only roles “demonstrably performed by humans with their consent” will be eligible, while in the writing categories, only “human-authored” screenplays will qualify.
Sections:
0:00 — Intro
2:28 — The Problem with Deep Learning
4:17 — Intelligence is a Cake
5:15 — The Rise of Generative AI
8:00 — Blurry Images
8:54 — HRT is an awesome place to work
11:16 — But why so Blurry?
13:30 — Do our models need to be generative?
15:16 — Siamese Networks
17:53 — Representation Collapse
19:54 — Yann’s Epiphany & Barlow Twins
27:22 — DINO
28:58 — JEPA & World Models
34:09 — But is JEPA good?
36:19 — Welch Labs Book
Special thanks to: Yann LeCun, Stephane Deny, David Fan, Nicolas Ballas.
Clip of Yann from 1989: • Convolutional Network Demo from 1989
CNN Paper: http://yann.lecun.com/exdb/publis/pdf…
LeNet-5 paper: http://vision.stanford.edu/cs598_spri…
Dashcam video: https://commons.wikimedia.org/wiki/Fi…
Image credits: https://en.wikipedia.org/wiki/File:Do… and several files at https://commons.wikimedia.org/wiki/Fi…
V-JEPA2 Robot Arm Videos: https://ai.meta.com/research/vjepa/
Supporting code: https://github.com/WelchLabs/videos
Created by: Sam Baskin, Pranav Gundu, and Stephen Welch
April 9, 2026

This seminar covers:
• How world models are increasingly moving away from reconstruction and toward prediction in latent space.
• Two recent JEPA-based approaches that illustrate this shift from complementary angles.
Guest Speakers: Hazel Nam & Lucas Maes (Brown University)
Instructors:
• Steven Feng, Stanford Computer Science PhD student and NSERC PGS-D scholar
• Karan P. Singh, Electrical Engineering PhD student and NSF Graduate Research Fellow in the Stanford Translational AI Lab
• Michael C. Frank, Benjamin Scott Crocker Professor of Human Biology and Director of the Symbolic Systems Program
• Christopher Manning, Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science, and Co-Founder and Senior Fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI)
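The shift the seminar describes, from reconstructing inputs to predicting in latent space, can be illustrated in a few lines. Below is a toy sketch of a JEPA-style objective; the architecture sizes, EMA rate, and random data are illustrative assumptions, not either speaker's actual method.

```python
# Toy sketch of a JEPA-style objective: predict the *embedding* of a
# target view rather than reconstructing its pixels.
import torch
import torch.nn as nn

embed_dim = 128
context_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, embed_dim))
target_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, embed_dim))
predictor = nn.Linear(embed_dim, embed_dim)

# The target encoder is an exponential moving average of the context encoder;
# no gradients flow through it, which helps avoid representation collapse.
@torch.no_grad()
def ema_update(tau=0.99):
    for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
        p_t.mul_(tau).add_((1 - tau) * p_c)

def jepa_loss(context_view, target_view):
    z_pred = predictor(context_encoder(context_view))
    with torch.no_grad():
        z_target = target_encoder(target_view)  # no pixel-space reconstruction
    return nn.functional.mse_loss(z_pred, z_target)

x_ctx, x_tgt = torch.randn(32, 784), torch.randn(32, 784)
loss = jepa_loss(x_ctx, x_tgt)
loss.backward()
ema_update()
```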
The FDNY’s 11-year-old robotics unit has shown that mechanical firefighters can be a valuable part of New York’s Bravest — and is now teaching that to others.
I originally created a list of 160+ companies with a detailed description for each one. But updating the list manually takes a lot of time, so I used ChatGPT and Claude to add a new batch of company website links I had collected; the list now stands at 190 entries. Hopefully I can keep expanding it this way. While I don’t learn about the new entries as directly, since I’m not the one adding them, the list will still be useful for keeping up with the fast-paced biotech world. I hope you find it useful as well!
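For readers curious what this workflow can look like, here is a hypothetical sketch using the OpenAI Python client. The model name, prompt, and input file are assumptions, and in practice you would paste page text into the prompt rather than just a URL, since the model cannot browse on its own.

```python
# Hypothetical sketch of drafting one-line descriptions for newly
# collected company links. Model name, prompt, and file are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_company(url: str) -> str:
    # In practice, include the page text in the prompt: the model
    # cannot fetch the URL itself.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": "You write one-sentence descriptions of biotech companies."},
            {"role": "user", "content": f"Describe the biotech company at {url}."},
        ],
    )
    return response.choices[0].message.content.strip()

with open("new_links.txt") as f:  # assumed input: one URL per line
    for url in f:
        print(f"- {url.strip()}: {describe_company(url.strip())}")
```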
Penn Engineers have developed a new way to use AI to solve inverse partial differential equations (PDEs), a particularly challenging class of mathematical problems with broad implications for understanding the natural world.
The advance, which the researchers call “Mollifier Layers,” could benefit fields as varied as genetics and weather forecasting, because inverse PDEs help scientists work backward from observable patterns to infer the hidden dynamics that produced them.
“Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell,” says Vivek Shenoy, Eduardo D. Glandt President’s Distinguished Professor in Materials Science and Engineering (MSE) and senior author of a study published in Transactions on Machine Learning Research (TMLR) that will also be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026). “You can see the effects clearly, but the real challenge is inferring the hidden cause.”
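To see what “working backward” means computationally, here is a generic sketch of an inverse problem solved by differentiating through a forward solver. This is plain autodiff on a finite-difference Poisson solver, not the paper's Mollifier Layers; the grid size, hidden source, and optimizer settings are illustrative.

```python
# Generic gradient-based inverse-PDE sketch (illustrative only).
import torch

n = 64
h = 1.0 / (n + 1)
x = torch.linspace(h, 1 - h, n)

# Forward problem: -u''(x) = s(x), u(0) = u(1) = 0, discretized as A u = h^2 s.
A = 2 * torch.eye(n) - torch.diag(torch.ones(n - 1), 1) - torch.diag(torch.ones(n - 1), -1)

def solve_forward(s):
    return torch.linalg.solve(A, h**2 * s)

# Synthetic "observations": the ripples produced by a hidden source (the pebble).
s_true = torch.exp(-200 * (x - 0.3) ** 2)
u_obs = solve_forward(s_true)

# Inverse problem: start from a blank source and fit it to the observations.
# Real inverse problems are ill-posed and usually need regularization.
s_hat = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([s_hat], lr=0.1)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((solve_forward(s_hat) - u_obs) ** 2)
    loss.backward()
    opt.step()

print("recovery error:", torch.norm(s_hat - s_true).item())
```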
As technology advances, and the demand for faster, higher-bandwidth, and more energy-efficient data processing continues to grow, scientists and engineers search for ways to improve electronic systems. One avenue they have been exploring is optoelectronics—the study and application of electronic devices that interface with light by detecting, emitting, or converting it into electrical signals.
Optoelectronics offers significant advantages over conventional electronics, including faster speed, higher bandwidth, lower power consumption, and improved reliability.
One particularly promising direction in optoelectronics has been the development of the photonic integrated circuit—an optical microchip that uses light (photons) instead of electricity (electrons) to sense, process, and transmit information. These optical chips are already being used in many advanced technologies today, such as high-speed fiber-optic communications, data center interconnects, sensors for autonomous vehicles, and hardware accelerators for machine learning and artificial intelligence.
Does quantum mechanics actually imply that every possible outcome of every decision happens somewhere in an expansive reality? And if so, what does that mean for probability, free will, and our understanding of the universe itself?
Brian Greene sits down with David Deutsch, widely regarded as the father of quantum computing, to examine what many physicists are still reluctant to accept about their own theory. They explore why the many-worlds interpretation isn’t just a philosophical curiosity, what the wave function is really telling us about reality, and how decision theory may rescue probability in a fully deterministic multiverse. Deutsch also introduces constructor theory, his framework for rethinking the foundations of physics entirely, and explains why the questions we’ve been trained not to ask might be the most important ones in all of science.
This program is part of the Rethinking Reality series, supported by the John Templeton Foundation.
Participant: David Deutsch. Moderator: Brian Greene.
Even the best-trained robots struggle when they leave the lab. They face “distribution shifts”—situations they didn’t see in training, like a brand of cereal with a new box design or a human suddenly walking into their personal space. Static datasets (fixed collections of training demonstrations) simply can’t prepare a robot for every “what if” scenario.
To make sense of all this messy real-world data, the researchers introduced two key technical innovations to the robot’s “Vision-Language-Action” (VLA) brain.
Imagine bringing home a single robot to be your all-in-one kitchen assistant—you want it to brew your morning Gongfu tea, make fresh juice in the afternoon, and mix the perfect cocktail at night. While it might have been trained extensively in a lab, in your house, the counter is slightly higher, the fruit is shaped differently, and your cocktail shaker is transparent. Pre-trained Vision-Language-Action (VLA) models provide an incredible starting point, yet real-world deployment is never a fixed test distribution. This leaves a critical, unsolved challenge: how do we take the heterogeneous experience generated across a fleet of robots and use it to post-train a single, generalist model across a wide range of tasks simultaneously?
We present Learning While Deploying (LWD), a fleet-scale offline-to-online RL framework for continual post-training of generalist VLA policies. Instead of treating deployment as the finish line where a policy is merely evaluated, LWD turns it into a training loop through which the policy improves. A pre-trained policy is deployed across a robot fleet, and both autonomous rollouts and human interventions are aggregated into a shared replay buffer for offline and online updates. The updated policy is then redeployed, enabling continuous improvement by leveraging interaction data from the entire fleet.
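In pseudocode, one round of this loop might look like the following. `Policy`, `Robot`, and `ReplayBuffer` are stand-in interfaces for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of one deploy/aggregate/update round in an
# LWD-style fleet loop. All interfaces here are stand-ins.
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.data = deque(maxlen=capacity)

    def add(self, transitions):
        self.data.extend(transitions)

def fleet_round(policy, fleet, buffer, rl_update):
    # 1. Deploy the current generalist policy across the whole fleet.
    for robot in fleet:
        rollout = robot.run(policy)       # autonomous execution
        buffer.add(rollout.transitions)   # successes AND failures are kept
        if rollout.human_intervened:
            # Human corrections become additional supervision.
            buffer.add(rollout.intervention_transitions)
    # 2. Offline/online RL updates on the shared, fleet-wide buffer.
    policy = rl_update(policy, buffer)
    # 3. Redeploy the improved policy for the next round.
    return policy
```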
A Generalist Learns Beyond Demonstrations
Some robot learning systems have explored data flywheels: deploying a policy, collecting new robot data, extracting high-quality behaviors, and training the next policy to imitate them. While this supports scalable improvement, it still treats deployment mainly as a source of expert demonstrations. Prior post-training systems mainly focus on specialist policies, leaving fleet-scale post-training of a single generalist policy across diverse tasks unresolved.
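For contrast, the filter-then-imitate step of such a flywheel can be sketched as below (interfaces again hypothetical). Unlike LWD's RL updates, the discarded failures contribute nothing to the next policy.

```python
# Sketch of the filter-then-imitate step in a conventional data flywheel.
def flywheel_update(policy, rollouts, bc_update):
    # Keep only high-quality behaviors, e.g. successful episodes...
    expert_like = [r for r in rollouts if r.success]
    # ...and train the next policy to imitate them. Failures are thrown
    # away, whereas an RL objective could also learn from them.
    return bc_update(policy, expert_like)
```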