Google faces $314M fine for misusing Android users’ cellular data while devices are idle.

How reliable is artificial intelligence, really? An interdisciplinary research team at TU Wien has developed a method that allows for the exact calculation of how reliably a neural network operates within a defined input domain. In other words: It is now possible to mathematically guarantee that certain types of errors will not occur—a crucial step forward for the safe use of AI in sensitive applications.
From smartphones to self-driving cars, AI systems have become an everyday part of our lives. But in applications where safety is critical, one central question arises: Can we guarantee that an AI system won’t make serious mistakes—even when its input varies slightly?
A team from TU Wien—Dr. Andrey Kofnov, Dr. Daniel Kapla, Prof. Efstathia Bura and Prof. Ezio Bartocci—bringing together experts from mathematics, statistics and computer science, has now found a way to analyze neural networks, the brains of AI systems, in such a way that the possible range of outputs can be exactly determined for a given input range—and specific errors can be ruled out with certainty.
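The article does not describe the TU Wien method itself, but the general idea of bounding a network's outputs over an input region can be illustrated with a minimal sketch: interval bound propagation, a standard (and deliberately simple) verification technique. An input interval is pushed through each layer so that the result is guaranteed to contain every possible output; the network, weights, and input domain below are invented for illustration.

```python
# Interval bound propagation through a tiny ReLU network (illustrative only;
# this is NOT the TU Wien method, whose details are not given in the article).
# For an input box [lo, hi], we compute an interval guaranteed to contain
# every output the network can produce on that box.

def linear_bounds(lo, hi, weights, bias):
    """Propagate per-input interval [lo, hi] through y = W @ x + b."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        lo_sum, hi_sum = b, b
        for w, l, h in zip(row, lo, hi):
            if w >= 0:          # positive weight: min uses lower end, max upper
                lo_sum += w * l
                hi_sum += w * h
            else:               # negative weight: the ends swap roles
                lo_sum += w * h
                hi_sum += w * l
        out_lo.append(lo_sum)
        out_hi.append(hi_sum)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval ends to interval ends."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy 2-2-1 network with hand-picked weights (illustrative values only).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

in_lo, in_hi = [0.0, 0.0], [1.0, 1.0]   # input domain: the unit square
h_lo, h_hi = relu_bounds(*linear_bounds(in_lo, in_hi, W1, b1))
out_lo, out_hi = linear_bounds(h_lo, h_hi, W2, b2)
# out_lo == [0.0], out_hi == [2.0]: every output is provably in [0, 2]
```

The bound is sound but not necessarily tight (here the true maximum over the unit square is 1.5, inside the computed [0, 2]); exact-range methods like the one reported must work considerably harder than this sketch to eliminate that slack.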
From 2013: basically, this is the light-based version of nanotransfection that could eventually be put on a simple smartphone or smartwatch, turning it into an entire hospital in one touch, healing the whole body or just the areas that need it.
Antkowiak, M., Torres-Mapa, M., Witts, E. et al. Sci Rep 3, 3281 (2013). https://doi.org/10.1038/srep03281
Researchers have developed a new approach to detecting vault applications (apps) on smartphones, which could be a game-changer for law enforcement. The paper is published in the journal Future Internet.
The analysis, led by researchers from Edith Cowan University (ECU) and the University of Southern Queensland, demonstrates that machine learning (ML) can effectively identify vault apps.
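The paper's actual features and model are not described in this excerpt, but the basic shape of such a classifier can be sketched with a toy perceptron trained on two invented app-metadata features (whether the app shows a PIN screen, and a hypothetical hidden-to-visible storage ratio); all data below is synthetic.

```python
# Toy binary classifier for "vault app" vs. "normal app" (illustrative only;
# the features, data, and model are invented, not taken from the paper).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single-neuron perceptron with the classic update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic training data: [has_pin_screen, hidden_storage_ratio]
X = [[1, 0.9], [1, 0.7], [0, 0.1], [0, 0.0], [1, 0.8], [0, 0.2]]
y = [1, 1, 0, 0, 1, 0]                          # 1 = vault app, 0 = normal

w, b = train_perceptron(X, y)
verdict = predict(w, b, [1, 0.85])              # classify an unseen app
```

A production system would use far richer features (permissions, package metadata, on-device behavior) and a stronger model, but the pipeline of feature extraction followed by supervised classification is the same.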
Smartphones are an integral part of daily life, used by an estimated 5 billion people around the world.
A new mobile crypto-stealing malware called SparkKitty was found in apps on Google Play and the Apple App Store, targeting Android and iOS devices.
The malware is a possible evolution of SparkCat, which Kaspersky discovered in January. SparkCat used optical character recognition (OCR) to steal cryptocurrency wallet recovery phrases from images saved on infected devices.
When users set up a crypto wallet, the installation process instructs them to write down the wallet’s recovery phrase and store it in a secure, offline location; screenshots of these phrases saved to the device instead are exactly what such malware harvests.
During her uncle’s treatment in 2003, Green experienced what she refers to as a “divine download”—an electrifying idea inspired by her college internships at NASA’s Marshall Space Flight Center and the Institute of Optics. “If a satellite in outer space can tell if a dime on the ground is face up or face down, and if a cell phone can target just one cell phone on the other side of the planet,” she recalls thinking, “surely we should be able to harness the technology of lasers to treat cancer just at the site of the tumor, so we won’t have all of these side effects.”
In the nearly two decades that followed, Dr. Green rerouted her career, earned a physics PhD from the University of Alabama at Birmingham—the second Black woman to do so—and dove into cancer treatment research, with physics as her guide. In 2009, she developed a treatment that uses nanoparticles and lasers in tandem: Specially designed nanoparticles are injected into a solid tumor, and, when the tumor is hit with near-infrared light, the nanoparticles heat up, killing the cancer cells. In a preliminary animal study published in 2014, Green tested the treatment on mice, whose tumors were eliminated with no observable side effects.
When Hadiyah-Nicole Green crossed the stage at her college graduation, she felt sure about what would come next. She’d start a career in optics—a good option for someone with a bachelor’s degree in physics—and that would be that.
Life, though, had other plans. The day after she graduated from Alabama A&M University, she learned that her aunt, Ora Lee Smith, had cancer. Smith and her husband had raised Green since she was four years old, after the death of Green’s mother and then grandparents.
Her aunt “said she’d rather die than experience the side effects of chemo or radiation,” says Green, now a medical physicist and founder and CEO of the Ora Lee Smith Cancer Research Foundation.
You stayed up too late scrolling through your phone, answering emails or watching just one more episode. The next morning, you feel groggy and irritable. That sugary pastry or greasy breakfast sandwich suddenly looks more appealing than your usual yogurt and berries. By the afternoon, chips or candy from the break room call your name. This isn’t just about willpower. Your brain, short on rest, is nudging you toward quick, high-calorie fixes.
There is a reason why this cycle repeats itself so predictably. Research shows that insufficient sleep disrupts hunger signals, weakens self-control, impairs glucose metabolism and increases your risk of weight gain. These changes can occur rapidly, even after a single night of poor sleep, and can become more harmful over time if left unaddressed.
I am a neurologist specializing in sleep science and its impact on health.
As artificial intelligence and smart devices continue to evolve, machine vision is taking an increasingly pivotal role as a key enabler of modern technologies. Unfortunately, despite much progress, machine vision systems still face a major problem: Processing the enormous amounts of visual data generated every second requires substantial power, storage, and computational resources. This limitation makes it difficult to deploy visual recognition capabilities in edge devices, such as smartphones, drones, or autonomous vehicles.
Interestingly, the human visual system offers a compelling alternative model. Unlike conventional machine vision systems that have to capture and process every detail, our eyes and brain selectively filter information, allowing for higher efficiency in visual processing while consuming minimal power.
Neuromorphic computing, which mimics the structure and function of biological neural systems, has thus emerged as a promising approach to overcome existing hurdles in computer vision. However, two major challenges have persisted: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.
As neuro-ophthalmology educators, we have sought ways to improve the teaching of pupil-related disorders, focusing on incorporating their dynamic aspects and active learning. Our solution is an app for smartphone and tablet devices. The app, Pupil Wizard, provides a digital textbook featuring a dynamic presentation of the key pupillary abnormalities. It allows users to interact with a digital patient and explore how each condition responds to direct and indirect light stimuli, near focus, and changes in ambient light (Fig. 1). Moreover, users can test their knowledge in quiz mode, where random pupillary abnormalities must be correctly identified and multiple-choice questions about them answered.