
Fruit fly ‘Fox’ neurons show how brains assign value to food

Why do we sometimes keep eating even when we’re full and other times turn down food completely? Why do we crave salty things at certain times, and sweets at other times? The answers, according to new neuroscience research at the University of Delaware, may lie in a tiny brain in an organism you might not expect.

Lisha Shao, assistant professor in the Department of Biological Sciences in the College of Arts and Sciences, has uncovered a neural network in the brains of fruit flies that represents a very early step in how the brain decides—minute by minute—whether a specific food is worth eating. The work was published in the journal Current Biology.

“Our goal is to understand how the brain assigns value—why sometimes eating something is rewarding and other times it’s not,” Shao said.

AI to predict the risk of cancer metastases

Metastasis remains the leading cause of death in most cancers, particularly colon, breast and lung cancer. Currently, the first detectable sign of the metastatic process is the presence of circulating tumor cells in the blood or in the lymphatic system. By then, it is already too late to prevent their spread. Furthermore, while the mutations that lead to the formation of the original tumors are well understood, no single genetic alteration can explain why, in general, some cells migrate and others do not.

“The difficulty lies in being able to determine the complete molecular identity of a cell – an analysis that destroys it – while observing its function, which requires it to remain alive,” explains the senior author. “To this end, we isolated, cloned and cultured tumor cells,” adds a co-first author of the study. “These clones were then evaluated in vitro and in a mouse model to observe their ability to migrate through a real biological filter and generate metastases.”

The analysis of the expression of several hundred genes, carried out on about thirty clones from two primary colon tumors, identified gene expression gradients closely linked to their migratory potential. In this context, accurate assessment of metastatic potential does not depend on the profile of a single cell, but on the sum of interactions between related cancer cells that form a group.

The gene expression signatures obtained were integrated into an artificial intelligence model developed by the team. “The great novelty of our tool, called ‘Mangrove Gene Signatures (MangroveGS)’, is that it exploits dozens, even hundreds, of gene signatures. This makes it particularly resistant to individual variations,” explains another co-first author of the study. After training, the model achieved an accuracy of nearly 80% in predicting the occurrence of metastases and recurrence of colon cancer, a result far superior to existing tools. In addition, signatures derived from colon cancer can also predict the metastatic potential of other cancers, such as stomach, lung and breast cancer.
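To make the general idea concrete (this is not the team's MangroveGS code; every dataset, gene set, and parameter below is invented for illustration), a signature-based predictor can be sketched as: score each sample against many gene signatures, then train an ordinary classifier on those scores rather than on individual genes.

    # Illustrative sketch of a signature-based classifier in the spirit of
    # MangroveGS. All expression data, gene sets, and labels are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_genes, n_signatures = 120, 500, 50

    expression = rng.normal(size=(n_samples, n_genes))   # log-expression matrix
    labels = rng.integers(0, 2, size=n_samples)          # 1 = metastasis / recurrence

    # Each "signature" is a set of gene indices; a sample's score for that
    # signature is the mean expression of its genes.
    signatures = [rng.choice(n_genes, size=20, replace=False) for _ in range(n_signatures)]
    scores = np.column_stack([expression[:, genes].mean(axis=1) for genes in signatures])

    # Classifying on dozens of signature scores, rather than on single genes,
    # is what makes this kind of model robust to gene-level variation.
    model = LogisticRegression(max_iter=1000)
    print(cross_val_score(model, scores, labels, cv=5).mean())  # ~0.5 here, since the data are random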


With MangroveGS, a tumor sample is sufficient: cells can be analysed and their RNA sequenced at the hospital, and the metastatic risk score is then quickly transmitted to oncologists and patients via an encrypted Mangrove portal that analyses the anonymised data.

“This information will prevent the overtreatment of low-risk patients, thereby limiting side effects and unnecessary costs, while intensifying the monitoring and treatment of those at high risk,” adds the senior author. “It also offers the possibility of optimising the selection of participants in clinical trials, reducing the number of volunteers required, increasing the statistical power of studies, and providing therapeutic benefits to the patients who need them most.”

A new flexible AI chip for smart wearables is thinner than a human hair

The promise of smart wearables is often talked up, and while there have been some impressive innovations, we are still not seeing their full potential. Among the things holding them back is that the chips that operate them are stiff, brittle, and power-hungry. To overcome these problems, researchers from Tsinghua University and Peking University in China have developed FLEXI, a new family of flexible chips. They are thinner than a human hair, flexible enough to be folded thousands of times, and incorporate AI.

A flexible solution

In a paper published in the journal Nature, the team details the design of their chip and how it can handle complex AI tasks, such as processing data from body sensors to identify health indicators like irregular heartbeats in real time.
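As a loose illustration of the kind of real-time task described here (this is not the FLEXI chip's algorithm; the window size and variability threshold below are arbitrary assumptions), an irregular rhythm can be flagged from streamed inter-beat intervals with very little computation:

    # Illustrative only: a lightweight irregular-heartbeat check of the sort a
    # wearable AI chip might run on streamed R-R (inter-beat) intervals.
    # Window size and variability threshold are arbitrary assumptions.
    from collections import deque

    def irregular_rhythm(rr_intervals_ms, window=8, cv_threshold=0.15):
        """Flag windows whose inter-beat intervals vary too much (high coefficient of variation)."""
        recent = deque(maxlen=window)
        flags = []
        for rr in rr_intervals_ms:
            recent.append(rr)
            if len(recent) == window:
                mean = sum(recent) / window
                var = sum((x - mean) ** 2 for x in recent) / window
                cv = (var ** 0.5) / mean
                flags.append(cv > cv_threshold)
        return flags

    # Steady ~800 ms beats followed by an erratic stretch.
    stream = [800, 810, 795, 805, 800, 790, 805, 800, 600, 950, 700, 1100, 650, 900]
    print(irregular_rhythm(stream))  # False for the steady windows, True once the rhythm turns erratic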

Tiny silicon structures compute with heat, achieving 99% accurate matrix multiplication

MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation. In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device.

The flow and distribution of heat through a specially designed material form the basis of the calculation. The output is then read out as the power collected at the other end, which acts as a thermostat held at a fixed temperature.

The researchers used these structures to perform matrix-vector multiplication with more than 99% accuracy. Matrix multiplication is the fundamental mathematical operation that machine-learning models such as LLMs use to process information and make predictions.
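A toy numerical sketch of the idea, under the simplifying assumption that heat flow is linear so the power at each collector is a weighted sum of the input temperatures; the conductance matrix below is made up and stands in for the geometry of the real silicon structures.

    # Toy linear model of computing a matrix-vector product with heat flow:
    # power at collector i = sum_j G[i, j] * (T_in[j] - T_ref).
    # G is invented here; in the device it is fixed by the silicon geometry.
    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.uniform(0.1, 1.0, size=(3, 4))         # effective thermal conductances (W/K)
    T_in = np.array([310.0, 305.0, 320.0, 300.0])  # inputs encoded as temperatures (K)
    T_ref = 295.0                                  # fixed-temperature collector

    P_exact = G @ (T_in - T_ref)                   # the matrix-vector product, performed by heat flow
    P_readout = P_exact + rng.normal(scale=0.01 * np.abs(P_exact))  # imperfect analog readout

    rel_err = np.abs(P_readout - P_exact) / np.abs(P_exact)
    print(P_exact)
    print(f"worst relative readout error: {rel_err.max():.3%}")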

New light-emitting artificial neurons could run AI systems more reliably

Over the past decades, computer scientists have developed increasingly advanced artificial intelligence (AI) systems that perform well on various tasks, including the analysis or generation of images, videos, audio recordings and texts. These systems power a wide range of high-performing software, including automated transcription apps, large language model (LLM)-powered conversational agents like ChatGPT, and various other platforms.

A Breakthrough That Cuts Blockchain Delays Nearly in Half

The idea of a fully connected digital world is quickly becoming real through the Internet of Things (IoT). This expanding network includes physical devices such as small sensors, autonomous vehicles, and industrial machines that collect and exchange data online.

Protecting this data from tampering is essential, which has led engineers to explore blockchain as a security solution. Although blockchain is widely known for its role in cryptocurrencies, its core function is as a decentralized digital ledger. Instead of data being controlled by a single organization, information is shared and maintained across many computers.
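As a minimal, generic illustration of what a tamper-evident shared ledger means in code (this is not the consensus protocol the engineers worked on; the block fields and example data are made up), each block commits to the hash of the previous one, so editing an old record breaks every later link:

    # Minimal hash-linked ledger: each block commits to the previous block's hash,
    # so altering any earlier record invalidates every later link. This is a
    # generic illustration, not the specific low-latency scheme in the article.
    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, data):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"index": len(chain), "data": data, "prev_hash": prev})

    def verify(chain):
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

    ledger = []
    for reading in ("sensor A: 21.5 C", "sensor B: 22.1 C", "valve closed"):
        append_block(ledger, reading)

    print(verify(ledger))           # True
    ledger[0]["data"] = "tampered"  # change an early record...
    print(verify(ledger))           # ...and the chain no longer verifies: False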

Hugging Face abused to spread thousands of Android malware variants

A new Android malware campaign is using the Hugging Face platform as a repository for thousands of variations of an APK payload that collects credentials for popular financial and payment services.

Hugging Face is a popular platform that hosts and distributes artificial intelligence (AI), natural language processing (NLP), and machine learning (ML) models, datasets, and applications.

It is considered a trusted platform unlikely to trigger security warnings, but bad actors have abused it in the past to host malicious AI models.

Genie: unlike traditional AI that just generates static images or videos

Genie is a “World Model.” It doesn’t just show you a scene; it simulates the physics, the depth, and the logic of a world you can actually control and navigate in real time.



Researchers Show AI Robots Vulnerable to Text Attacks

“I expect vision-language models to play a major role in future embodied AI systems,” said Dr. Alvaro Cardenas.


How can misleading text negatively affect AI behavior? This is the question a recently submitted study hopes to address, as a team of researchers from the University of California, Santa Cruz and Johns Hopkins University investigated the security risks of embodied AI, that is, AI embedded in a physical body, such as a car or robot, that uses observations of its surroundings to adapt to its environment rather than relying on text and data alone. The study could help scientists, engineers, and the public better understand these risks and the steps needed to mitigate them.

For the study, the researchers introduced CHAI (Command Hijacking against embodied AI), an attack that uses misleading text and imagery to hijack the commands of embodied AI systems. The researchers tested CHAI on a variety of AI-based systems, including drone emergency landing, autonomous driving, aerial object tracking, and robotic vehicles. In the end, they found that CHAI could successfully subvert these systems, underscoring the need for stronger security measures for embodied AI.
