JUST PUBLISHED: jellyfish-inspired ultrafast and versatile magnetic soft robots for biomedical applications
Read the latest free, Open Access article from Cyborg and Bionic Systems.
Tungsten’s superior performance in extreme environments makes it a leading candidate for plasma-facing components (PFCs) in fusion reactors, but the ultra-high heat can damage its microscopic structure and lead to component failure. Scanning electron microscopy (SEM) can capture and quantify these microstructure changes, but assembling a sufficiently large dataset of SEM imagery is expensive and logistically challenging.
To augment this dataset, researchers at Oak Ridge National Laboratory trained a generative machine learning model using 3,200 SEM images of tungsten samples exposed to fusion-relevant conditions. The model can generate novel SEM images with realistic microstructures and surface features, such as cracks and pores, without replicating the original images.
“This work is not about making pretty pictures, it’s about capturing the statistics of real damage on these materials,” said ORNL’s Rinkle Juneja, the project’s principal investigator. “We train our generative workflow to learn tungsten’s microstructure signatures, like crack patterns, so it can generate new, statistically consistent microstructures, laying the groundwork for robust, data-driven assessment of PFC fusion materials.”
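As a rough illustration of what "statistically consistent" can mean here, the sketch below compares a summary statistic between a set of real measurements and two generated sets using a two-sample Kolmogorov-Smirnov statistic. Everything in it is hypothetical — the pore-size distributions, the sample sizes, and the statistic choice are stand-ins, not details of the ORNL workflow.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, evaluated over all observed values."""
    a, b = np.sort(a), np.sort(b)
    combined = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, combined, side="right") / len(a)
    cdf_b = np.searchsorted(b, combined, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
# Hypothetical pore-size samples (arbitrary units) "measured" from real SEM images...
real_pores = rng.lognormal(mean=0.5, sigma=0.4, size=2000)
# ...from a generator that matched the real statistics...
good_fake = rng.lognormal(mean=0.5, sigma=0.4, size=2000)
# ...and from one that did not.
bad_fake = rng.lognormal(mean=1.2, sigma=0.4, size=2000)

print(ks_statistic(real_pores, good_fake))  # small gap: distributions agree
print(ks_statistic(real_pores, bad_fake))   # large gap: generator missed the statistics
```

A check like this accepts generated images only when their damage statistics (here, a stand-in pore-size distribution) are indistinguishable from the real data, which is the property the quote emphasizes over visual realism.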
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolution—this time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonic chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information.
The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Microsoft has awarded $2.3 million to security researchers after receiving nearly 700 submissions during this year’s Zero Day Quest hacking contest.
Tom Gallagher, Vice President of Engineering at Microsoft Security Response Center (MSRC), said that over 80 flaws found during the live event at Microsoft’s Redmond campus were high-impact cloud and AI security vulnerabilities.
“During the 2026 live hacking event, Microsoft partnered with the global security research community, representing more than 20 countries and a wide range of professional backgrounds, from high school students to college professors,” Gallagher said.
Large language models (LLMs) can teach other algorithms unwanted traits, which can persist even when training data has been scrubbed of the original trait, according to new research published in Nature. In one example, a model seems to transmit a preference for owls to other models via hidden signals in data. The findings demonstrate that more thorough safety checks are needed when producing LLMs.
LLMs can generate datasets to train other models through a process called distillation, in which a “student” model is taught to mimic the outputs of a “teacher” model. While this process can be used to produce cheaper versions of an LLM, it is unclear which properties of the teacher model are transferred to the student.
Alex Cloud and colleagues prompted GPT-4.1 to exhibit traits unrelated to a core task (a preference for owls or for certain trees, for instance) and used it as a teacher to generate training data consisting only of numbers, with no references to the trait. When the resulting student model was subsequently prompted, it mentioned the teacher’s favorite animal or tree over 60% of the time, compared with 12% for a student trained by a teacher with no favorite animal or tree. The effect also appeared when the teacher’s output contained code instead of numbers.
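The distillation process described above can be sketched in miniature: a student model is trained to match a teacher's output distribution rather than ground-truth labels. The toy below uses two linear classifiers and a KL-divergence loss; the models, data, and dimensions are all hypothetical stand-ins, not the paper's setup (the paper distilled full LLMs).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    """Mean KL divergence between rows of two probability matrices."""
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

# Toy setup: "teacher" and "student" are linear classifiers over 8 features.
X = rng.normal(size=(256, 8))           # inputs
W_teacher = rng.normal(size=(8, 4))     # fixed teacher weights
teacher_probs = softmax(X @ W_teacher)  # soft targets the student mimics

W_student = np.zeros((8, 4))
lr = 0.5
losses = []
for _ in range(200):
    student_probs = softmax(X @ W_student)
    losses.append(kl(teacher_probs, student_probs))
    # Gradient of the KL loss w.r.t. student logits is (student - teacher).
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

print(losses[0], losses[-1])  # divergence shrinks as the student mimics the teacher
```

The point of the Nature result is that this mimicry transfers more than the task: properties of the teacher that never appear explicitly in the training data can still reach the student through its output distribution.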
We are already gene editing humans. You just haven’t noticed.
George Church, Harvard geneticist and Human Genome Project pioneer, explains why CRISPR wasn’t the real breakthrough, how multiplex gene editing unlocked organ transplants and de-extinction, and why aging will likely require rewriting many genes at once.
Hosted by Mgoes → https://twitter.com/m_goes_distance
Brought to you by SuperHuman Fund → https://superhuman.fund/
0:00 — Gene Editing Mammals → Humans
8:36 — Germline vs Somatic
14:56 — Modified Humans Are Already Here
18:50 — Enhancing Healthy Humans
25:00 — Aging Therapies vs Cognitive Enhancement
30:20 — Embryo Selection
38:10 — Is the US Losing to the UAE?
42:33 — Biotech Failures
49:31 — Next Dire Wolf Moment
54:21 — AI x Science
1:02:07 — Synthesizing Entire Genomes
The Accelerate Bio Podcast explores the future of humanity in the age of Artificial Intelligence. Subscribe for deep-dive conversations with founders, scientists, and investors shaping AI, biotechnology, and human progress.
Topics in this episode: George Church, gene editing, CRISPR, human enhancement, longevity, aging, embryo selection, synthetic biology, multiplex editing, and AI in biotech.
Today, we’re introducing Gemini Robotics-ER 1.6, a significant upgrade to our reasoning-first model that enables robots to understand their environments with unprecedented precision. By enhancing spatial reasoning and multi-view understanding, we are bringing a new level of autonomy to the next generation of physical agents.
This model specializes in reasoning capabilities critical for robotics, including visual and spatial understanding, task planning, and success detection. It acts as a robot’s high-level reasoning model, orchestrating tasks by natively calling tools such as Google Search to find information, vision-language-action (VLA) models, or other third-party user-defined functions.
Gemini Robotics-ER 1.6 shows significant improvement over both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, specifically enhancing spatial and physical reasoning capabilities such as pointing, counting, and success detection. We are also unlocking a new capability: instrument reading, enabling robots to read complex gauges and sight glasses — a use case we discovered through close collaboration with our partner, Boston Dynamics.
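The "high-level reasoner calling tools" architecture described above can be sketched as a simple tool registry plus a plan executor. This is a generic illustration under assumed names — the registry, tool names, and plan format below are invented for the sketch and are not part of the Gemini API.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical registry mapping tool names to user-defined functions.
TOOLS: Dict[str, Callable[..., str]] = {}

def register(name: str):
    """Decorator that adds a function to the tool registry under a name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register("search")
def search(query: str) -> str:
    # Stand-in for an information-finding tool like web search.
    return f"results for {query!r}"

@register("vla.pick")
def pick(obj: str) -> str:
    # Stand-in for handing a subtask to a vision-language-action model.
    return f"picked up {obj}"

def execute_plan(plan: List[Tuple[str, dict]]) -> List[str]:
    """Run a list of (tool_name, kwargs) steps emitted by a high-level planner."""
    return [TOOLS[name](**kwargs) for name, kwargs in plan]

print(execute_plan([("search", {"query": "red mug"}),
                    ("vla.pick", {"obj": "red mug"})]))
```

In this framing, the reasoning model's job is to emit the plan (which tool, with which arguments, in which order) and to check each step's outcome, while the tools themselves do the searching, perceiving, and acting.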
Things are going well with this startup that we are building in stealth mode, and I couldn’t be happier about our progress.
But we are still pre-investment / pre-revenue, and I need to find some income to keep the bills paid while we continue to build.
I have the free time, and I have all the equipment necessary to do almost anything related to sales, marketing, production, and promotion of products and services.
I’m adept with AI and can help you with real solutions for your business or personal life.
I’m not looking for full-time or long-term work; I’m interested in campaigns, projects, and the implementation and development of products and services.
I also have experience in events, trade shows, and conferences if anyone needs additional hands on their next summit or symposium.
I appreciate anything that comes along. At 66, the corporations won’t hire me anymore, so I’m forced to reach out to my friends and neighbors for opportunities and income.
Can AI really be moral — or does it just produce moral-sounding answers? Wendell Wallach, co-author of Moral Machines, joins me to discuss machine ethics, moral motivation, AI governance, and why controlling AI may not be enough.