Aptima will lead the commercialization arm of DARPA’s Semantic Forensics (SemaFor) program, building on its prior role as the test and evaluation lead for the initiative. Launched by DARPA’s Information Innovation Office in 2020, SemaFor aims to detect and analyze manipulated media not just at the signal level (alterations in pixel data or compression artifacts, for example) but also at the semantic level.

The new contract marks DARPA’s attempt to push SemaFor’s cutting-edge research beyond the defense and Intelligence Community and into broader commercial and public sector adoption. The program represents a conceptual leap from earlier forensics programs by targeting the intent behind media manipulation and its effects on public understanding and discourse.

“The SemaFor program is developing technologies to defend against multimedia falsification and disinformation campaigns,” DARPA explained in its FY 2025 budget justification document. “Statistical detection techniques have been successful, but media generation and manipulation technologies applicable to imagery, voice, video, text, and other modalities are advancing rapidly. Purely statistical detection methods are now insufficient to detect these manipulations, especially when multiple modalities are involved.”

TAE’s “Norm” development may “[chart] a path for streamlined devices that directly addresses the commercially critical metrics of cost, efficiency, and reliability,” said Michl Binderbauer, CEO of TAE Technologies.

“This milestone significantly accelerates TAE’s path to commercial hydrogen-boron fusion that will deliver a safe, clean, and virtually limitless energy source for generations to come,” Binderbauer added.

“Norm” is set to precede TAE’s next reactor prototype, “Copernicus,” which TAE engineers anticipate will demonstrate fusion as a viable energy source before 2030.

Imagine developing a finer control knob for artificial intelligence (AI) applications like Google’s Gemini and OpenAI’s ChatGPT.

Mikhail Belkin, a professor with UC San Diego’s Halıcıoğlu Data Science Institute (HDSI)—part of the School of Computing, Information and Data Sciences (SCIDS)—has been working with a team that has done just that. Specifically, the researchers have discovered a method that allows for more precise steering and modification of large language models (LLMs)—the powerful AI systems behind tools like Gemini and ChatGPT. Belkin said that this breakthrough could lead to safer, more reliable and more adaptable AI.

The research builds on recent work published in Science and Proceedings of the National Academy of Sciences.

Technology is being pushed to its very limits. Upgrades to the Large Hadron Collider (LHC) at CERN slated for the next few years will increase data transfer rates beyond what the current neutrino detector for the FASER experiment can cope with, requiring its replacement with a new, more powerful kind of detector.

This is a task that physicist Professor Matthias Schott from the University of Bonn will be tackling.

Extremely lightweight, electrically neutral and found almost everywhere, neutrinos are among the universe’s most ubiquitous particles and thus one of its basic building blocks. To researchers, however, these virtually massless elementary particles are still “ghost particles.”

Engineers from Australia and China have invented a sponge-like device that captures water from thin air and then releases it into a cup using the sun’s energy, even in low-humidity conditions where other technologies, such as fog harvesting and radiative cooling, have struggled.

The water-from-air device remained effective across a broad range of humidity levels (30–90%) and temperatures (5–55 degrees Celsius).

Senior researcher Dr. Derek Hao, from RMIT University in Melbourne, said the invention relied on refined balsa wood’s naturally spongy structure, modified to absorb water from the atmosphere and release it on demand.

IN A NUTSHELL
- 🌌 Astronomers discovered Eos, a massive molecular cloud just 300 light-years from Earth, using innovative detection methods.
- 🔍 Eos eluded previous detection due to its low carbon monoxide content, highlighting the need for new observational techniques.
- 🌠 The cloud’s crescent shape is influenced by interactions with the North Polar Spur, offering insights

Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads.

The open-source Apriel model, generally available in the second quarter on HuggingFace, will help create AI agents that can make decisions around IT, human resources and customer-service functions.

“If you look at the foundation models, they’re very big, very slow,” Dorit Zilbershot, ServiceNow’s group vice president of AI experiences and innovation, said in an interview. “This is only a 15-billion-parameter model, and it’s highly trained on reasoning. We expect the reasoning to be very, very important.”