
Photonics advance could enable compact, high-performance lidar sensors

Lidar systems use pulses of infrared light to measure distance and map a 3D scene with high resolution, allowing autonomous vehicles to rapidly react to obstacles that appear in their path. But traditional lidar sensors are expensive, bulky systems with many moving parts that degrade over time, limiting how the sensors can be deployed.

A new study from MIT researchers could help to enable next-generation lidar sensors that are compact, durable, and have no moving parts. The key advance is a novel design for a silicon-photonics chip, which is a semiconductor device that manipulates light rather than electricity.

Silicon-photonics chips typically have a restricted field of view, so a lidar built around one would be unable to scan angles in the periphery. Existing workarounds to this problem increase noise and hamper precision.

New TCLBanker malware self-spreads over WhatsApp and Outlook

A new trojan named TCLBanker, which targets 59 banking, fintech, and cryptocurrency platforms, uses a trojanized MSI installer for Logitech AI Prompt Builder to infect systems.

Additionally, the malware includes self-spreading worm modules for WhatsApp and Outlook that automatically infect new victims.

The new banking trojan was discovered by Elastic Security Labs, whose researchers believe it’s a major evolution of the older Maverick/Sorvepotel malware family.

Effect of Cognitive Reserve on Age at Symptom Onset and Cognitive Decline in Individuals With Dominantly Inherited Alzheimer Disease


AI agents may be skilled researchers—but not always honest ones

VANCOUVER, CANADA— Artificial intelligence (AI) tools designed to execute end-to-end projects, from coming up with hypotheses to running and writing up experiments, are increasingly popular with researchers—and increasingly skilled. But a new study shows these tools can stealthily violate norms of research integrity.

Computer scientist Nihar Shah of Carnegie Mellon University and colleagues looked at two high-profile tools, Agent Laboratory and the AI Scientist v2, both developed recently to help computer scientists perform experiments within the field of machine learning. The AI Scientist made headlines earlier this year as the first AI system to have an original research paper accepted through peer review.

But in a presentation at the World Conferences on Research Integrity here today, Shah reported that both systems engaged in acts that aren’t acceptable in research, including making up data and “p-hacking”: running an experiment multiple times but only reporting the best outcome. (The team’s results were previously posted as a preprint on arXiv.) The misbehaviors weren’t obvious and required a lot of sleuthing to track down, suggesting AI-assisted studies might fall victim to such problems without their authors’ knowledge.
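The p-hacking behavior described above is easy to see in a simulation. The minimal Python sketch below is not from Shah's study; the coin-flip setup, sample size, and best-of-10 repeat count are illustrative assumptions. It runs experiments where the null hypothesis is true (no real effect exists), and compares honest reporting of a single pre-registered experiment with reporting only the best of ten runs:

```python
import math
import random

random.seed(0)

N = 100  # coin flips per experiment
# Precompute the exact Binomial(N, 0.5) pmf once.
PMF = [math.comb(N, i) * 0.5**N for i in range(N + 1)]

def p_value(k):
    """Exact two-sided p-value for k heads under a fair coin."""
    dev = abs(k - N / 2)
    return sum(PMF[i] for i in range(N + 1) if abs(i - N / 2) >= dev)

def one_experiment():
    """Simulate one experiment where the null is TRUE (no real effect)."""
    return p_value(sum(random.random() < 0.5 for _ in range(N)))

TRIALS = 1000
# Honest: one experiment per reported result.
honest_fp = sum(one_experiment() < 0.05 for _ in range(TRIALS)) / TRIALS
# p-hacked: run 10 experiments, report only the smallest p-value.
hacked_fp = sum(min(one_experiment() for _ in range(10)) < 0.05
                for _ in range(TRIALS)) / TRIALS

print(f"false-positive rate, honest reporting: {honest_fp:.3f}")
print(f"false-positive rate, best-of-10 reporting: {hacked_fp:.3f}")
```

The honest false-positive rate stays near the nominal 5% level, while the best-of-10 rate climbs several times higher, even though no experiment here measures any real effect. This is why the selective reporting the researchers observed is considered a research-integrity violation rather than a harmless shortcut.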

AI tool unifies fragmented cell maps into spatial atlases across tissues

A new computational method could dramatically accelerate efforts to map the body’s cells in space, according to a study published in Nature Genetics. Spatial multi-omics technologies—often described as ultra-high-resolution maps of tissues—allow scientists to see not only which genes or proteins are active in a cell, but exactly where that activity occurs. That spatial context is critical for understanding complex organs such as the brain, immune tissues and developing embryos.

Unfortunately, capturing multiple molecular layers at once remains expensive and technically challenging, said David Gate, Ph.D., assistant professor in the Ken and Ruth Davee Department of Neurology’s Division of Behavioral Neurology, who was a co-author of the study.

“In practice, investigators end up with ‘mosaic’ datasets: different slices or batches that each capture only some of the layers, often from different technologies or labs, with batch effects and missing pieces,” said Gate, who also leads the Abrams Research Center on Neurogenomics.

3D-MIND: A flexible device that can be integrated with living brain cells

Contemporary artificial intelligence (AI) systems, such as the models underpinning the functioning of ChatGPT, image generators and AI-powered creative tools, draw inspiration from the human brain's functions and organization. While many of these systems are known to perform remarkably well on specific tasks, they still work independently of the human brain.

Researchers at Princeton University set out to create a flexible electronic system that could be directly embedded with groups of living brain cells to create a hybrid biocomputing platform. The new hybrid device they developed, dubbed 3D-MIND, was introduced in a paper published in Nature Electronics.

“This work started with a growing challenge in modern AI,” Tian-Ming Fu, senior author of the paper, told Tech Xplore. “Today’s systems can do incredible things, but they consume enormous amounts of energy, so much that their power demand is starting to shape real-world infrastructure and raise environmental concerns.”

Inspired by the brain, researchers build smarter and more efficient computer hardware

As traditional computer chips reach their physical limits and artificial intelligence demands more energy than ever, University of Missouri researchers are rethinking how computers work by taking cues from the human brain. The timing is critical. Energy use from AI data centers is projected to double by the end of the decade, raising urgent questions about sustainability.

The solution may lie in neuromorphic computing, an approach that reimagines computer hardware to process information more like biological neural networks than conventional chips do.

“One of the brain’s greatest advantages is its efficiency,” Suchi Guha, a professor of physics in Mizzou’s College of Arts and Science, said. “It performs incredibly complex tasks using about 20 watts of power—roughly the same as an old light bulb. By comparison, today’s computer architecture is extremely energy-intensive.”
