
‘Learn-to-Steer’ method improves AI’s ability to understand spatial instructions

Researchers from the Department of Computer Science at Bar-Ilan University and from NVIDIA’s AI research center in Israel have developed a new method that significantly improves how artificial intelligence models understand spatial instructions when generating images—without retraining or modifying the models themselves. Image-generation systems often struggle with simple prompts such as “a cat under the table” or “a chair to the right of the table,” frequently placing objects incorrectly or ignoring spatial relationships altogether. The Bar-Ilan research team has introduced a creative solution that allows AI models to follow such instructions more accurately in real time.

The new method, called Learn-to-Steer, works by analyzing the internal attention patterns of an image-generation model, effectively offering insight into how the model organizes objects in space. A lightweight classifier then subtly guides the model’s internal processes during image creation, helping it place objects more precisely according to user instructions. The approach can be applied to any existing trained model, eliminating the need for costly retraining.

The results show substantial performance gains. In the Stable Diffusion SD2.1 model, accuracy in understanding spatial relationships increased from 7% to 54%. In the Flux.1 model, success rates improved from 20% to 61%, with no negative impact on the models’ overall capabilities.
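The article describes the mechanism only at a high level: a frozen, lightweight relation classifier reads the generator's internal attention patterns and its loss gradient nudges the generation toward the requested layout. The sketch below illustrates that general idea only, not the actual Learn-to-Steer implementation: the "attention maps," the classifier weights `W`, and the numerical-gradient loop are all stand-ins (the real method works on diffusion cross-attention and backpropagates through the model).

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_maps(latent):
    # Stand-in for a diffusion model's cross-attention: one 4x4 map per
    # object token, derived from the latent via a softmax (illustration only).
    a = latent.reshape(2, 4, 4)
    e = np.exp(a - a.max())
    return e / e.sum(axis=(1, 2), keepdims=True)

def relation_features(maps):
    # Attention centroid of each object -> relative offset (dx, dy).
    ys, xs = np.mgrid[0:4, 0:4]
    cx = (maps * xs).sum(axis=(1, 2))
    cy = (maps * ys).sum(axis=(1, 2))
    return np.array([cx[1] - cx[0], cy[1] - cy[0]])

# Frozen "lightweight classifier": logits for {B right of A, B left of A}.
W = np.array([[4.0, 0.0], [-4.0, 0.0]])

def relation_loss(latent, target=0):
    # Cross-entropy for the requested spatial relation.
    logits = W @ relation_features(attention_maps(latent))
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[target])

# Test-time steering: descend the classifier's loss without touching the
# (here imaginary) generator weights. Numerical gradient for simplicity.
latent = rng.normal(size=32)
initial_loss = relation_loss(latent)
for _ in range(300):
    g = np.zeros_like(latent)
    for i in range(latent.size):
        d = np.zeros_like(latent)
        d[i] = 1e-4
        g[i] = (relation_loss(latent + d) - relation_loss(latent - d)) / 2e-4
    latent -= 0.02 * g
final_loss = relation_loss(latent)
```

Because only the loss is model-specific, this steering loop can in principle wrap any frozen generator, which is what makes the approach retraining-free.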

In defense of artificial suffering

Perhaps our last line of defense.


Philosophical Studies — The ability to suffer, in the case of artificial entities, is often viewed as a moral turning point—once detected, there is no going back, and the moral landscape is irreversibly altered. The presence of entities capable of suffering imposes moral and legal obligations on humans. It is therefore unsurprising that many have urged caution in pursuing artificial suffering, with some even proposing a moratorium. In this paper, however, I argue that the emergence of artificial suffering need not entail moral disaster. On the contrary, I defend its development and contend that it may be a necessary feature of superintelligent robots. I suggest that artificial suffering could be essential for enabling human-like ethics in machines, bridging the retribution gap, and functioning as a control mechanism to mitigate existential risks. Rather than constraining research in this area, I maintain that work on artificial suffering should be actively intensified.

A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge

We introduce the Layered Self-Supervised Knowledge Distillation (LSSKD) framework for training compact deep learning models. Unlike traditional methods that rely on pre-trained teacher networks, our approach appends auxiliary classifiers to intermediate feature maps, generating diverse self-supervised knowledge and enabling one-to-one transfer across different network stages. Our method achieves an average improvement of 4.54% over the state-of-the-art PS-KD method and a 1.14% gain over SSKD on CIFAR-100, with a 0.32% improvement on ImageNet compared to HASSKD. Experiments on Tiny ImageNet and CIFAR-100 under few-shot learning scenarios also achieve state-of-the-art results. These findings demonstrate the effectiveness of our approach in enhancing model generalization and performance without the need for large over-parameterized teacher networks. Importantly, at the inference stage, all auxiliary classifiers can be removed, yielding no extra computational cost. This makes our models suitable for deployment on affordable low-compute devices. Owing to its lightweight design and adaptability, our framework is particularly suitable for multimodal sensing and cyber-physical environments that require efficient and responsive inference. LSSKD facilitates the development of intelligent agents capable of learning from limited sensory data under weak supervision.
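The abstract's core mechanism (auxiliary classifiers on intermediate features, with each stage distilling the softened prediction of the next deeper stage, and the auxiliary heads discarded at inference) can be sketched in a few lines. Everything below is a toy stand-in, not the paper's architecture: the stages are frozen random linear maps and the temperature value is an assumption, chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, t=1.0):
    # Temperature-softened probabilities, numerically stabilized.
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

# Toy 3-stage backbone: each stage is a frozen random linear map + tanh.
stages = [rng.normal(scale=0.3, size=(8, 8)) for _ in range(3)]
# Auxiliary classifiers appended to each stage's intermediate features.
heads = [rng.normal(scale=0.3, size=(8, 4)) for _ in range(3)]

def forward(x):
    logits = []
    h = x
    for s, w in zip(stages, heads):
        h = np.tanh(h @ s)     # intermediate feature map (flattened)
        logits.append(h @ w)   # auxiliary prediction at this stage
    return logits

def kl(p, q):
    # KL(p || q) between two probability vectors.
    return float((p * (np.log(p) - np.log(q))).sum())

x = rng.normal(size=8)
logits = forward(x)

# One-to-one transfer: each stage's auxiliary head matches the softened
# prediction of the next, deeper stage -- no external pre-trained teacher.
t = 2.0
stage_losses = [kl(softmax(logits[i + 1], t), softmax(logits[i], t))
                for i in range(len(logits) - 1)]

# At inference only the backbone's final output is kept, so the auxiliary
# heads add no deployment cost.
deploy_logits = forward(x)[-1]
```

The "free at inference" property in the abstract follows directly from this structure: the distillation targets come from within the same network, so nothing extra survives into the deployed model.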

Stunning new maps of myelin-making mouse brain cells advance understanding of nervous system disorders

Johns Hopkins scientists say they have used 3D imaging, special microscopes and artificial intelligence (AI) programs to construct new maps of mouse brains showing the precise locations of more than 10 million cells called oligodendrocytes. These cells form myelin, a protective sleeve around nerve cell axons, which speeds transmission of electrical signals and supports brain health.

Published online Feb. 18 in Cell and funded by the National Institutes of Health, the maps not only paint a whole-brain picture of how myelin content varies between brain circuits, but also provide insights into how the loss of such cells impacts human diseases such as multiple sclerosis, Alzheimer’s disease and other disorders that affect learning, memory, sensory ability and movement, say the researchers. Although mouse and human brains are not the same, they share many characteristics and most biological processes.

“Our study identifies not only the location of oligodendrocytes in the brain, but also integrates information about gene expression and the structural features of neurons,” says Dwight Bergles, Ph.D., the Diana Sylvestre and Charles Homcy Professor in the Department of Neuroscience at the Johns Hopkins University School of Medicine. “It’s like mapping the location of all the trees in a forest, but also adding information about soil quality, weather and geology to understand the forest ecosystem.”

AI in Pathology Fails Without Pathologists

🧠 AI in pathology cannot succeed without pathologists. As computational pathology advances, clinical expertise remains the critical link between algorithms and real-world impact.

In this discussion, Diana Montezuma, Pathologist and Head of R&D at IMP Diagnostics, explains why pathologist involvement is essential to building AI tools that are usable, clinically relevant, and truly valuable in practice.


Pathologists play a key role in AI development for pathology – providing the expertise needed to bridge data and clinical application. To discuss this role and its importance in the development of computational pathology tools, we connected with Diana Montezuma, Pathologist and Head of the R&D Unit at IMP Diagnostics.

From your perspective, what is the most important contribution that diagnosticians bring to AI and algorithm development?

Pathologists bring essential clinical expertise and practical insight to any computational pathology project. Without their involvement, such initiatives risk becoming disconnected from real-world practice and ultimately failing to deliver meaningful clinical value.

Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology : Operative Neurosurgery

A new ONS Review in Operative Neurosurgery: “Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology,” by Warman et al., Johns Hopkins Medicine; Congress of Neurological Surgeons (CNS); Isaac Yang.


By limiting damage to surrounding healthy tissue, LiTT offers promising therapeutic outcomes for both newly diagnosed and recurrent tumors. However, challenges such as postprocedural edema and the difficulty of predicting heat diffusion near blood vessels and ventricles in real time underscore the need for improved planning and monitoring. Incorporating artificial intelligence (AI) presents a viable solution to many of these obstacles. AI has already demonstrated effectiveness in optimizing surgical trajectories, predicting seizure-free outcomes in epilepsy cases, and generating heat distribution maps to guide real-time ablation. This technology could be similarly deployed in neurosurgical oncology to identify patients most likely to benefit from LiTT, refine trajectory planning, and predict tissue-specific heat responses.

New Technique for 3D Printing Artificial Muscle Paves the Way for More Freaky Robots

While 2026 has been an objectively terrible year for humans thus far, it’s turning out—for better or worse—to be a banner year for robots. (Robots that are not Tesla’s Optimus thingamajig, anyway.) And it’s worth thinking about exactly how remarkable it is that the new humanoid robots are able to replicate the smooth, fluid, organic movements of humans and other animals, because the majority of robots do not move like this.

Take, for example, the robot arms used in factories and CNC machines: they glide effortlessly from point to point, moving with both speed and exquisite precision, but no one would ever mistake one of these arms for that of a living being. If anything, the movements are too perfect. This is at least partly due to the way these machines are designed and built: they use the same ideas, components, and principles that have characterised everything from the water wheel to the combustion engine.

But that’s not how living creatures work. While the overwhelming majority of macroscopic living beings contain some sort of “hard” parts—bones or exoskeletons—our movements are driven by muscles and ligaments that are relatively soft and elastic.
