
Brain-computer interfaces are already letting people with paralysis control computers and communicate their needs, and will soon enable them to manipulate prosthetic limbs without moving a muscle.

The year ahead is pivotal for the companies behind this technology.

Fewer than 100 people to date have had brain-computer interfaces permanently implanted. In the next 12 months, that number will more than double, provided the companies with new FDA experimental-use approval meet their goals in clinical trials. Apple this week announced its intention to allow these implants to control iPhones and other products.

Light is all around us, essential for one of our primary senses (sight) as well as life on Earth itself. It underpins many technologies that affect our daily lives, including energy harvesting with solar cells, light-emitting-diode (LED) displays and telecommunications through fiber optic networks.

The smartphone is a great example of the power of light. Inside the box, its electronic functionality works because of quantum mechanics. The front screen is an entirely photonic device: liquid crystals controlling light. The back too: white light-emitting diodes for a flash, and lenses to capture images.

We use the word photonics, and sometimes optics, to capture the harnessing of light for science and technology. Their importance is celebrated every year on 16 May with the International Day of Light.

As demand surges for batteries that store more energy and last longer—powering electric vehicles, drones, and energy storage systems—a team of South Korean researchers has introduced an approach to overcome a major limitation of conventional lithium-ion batteries (LIBs): unstable interfaces between electrodes and electrolytes.

Most of today’s consumer electronics—such as smartphones and laptops—rely on graphite-based batteries. While graphite offers long-term stability, it falls short in storage capacity.

Silicon, by contrast, can store nearly 10 times more lithium ions, making it a promising next-generation anode material. However, silicon’s main drawback is its dramatic volume expansion and contraction during charge and discharge, swelling up to three times its original size.
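The “nearly 10 times” figure can be sanity-checked from theoretical gravimetric capacities. The snippet below is a back-of-envelope estimate, assuming full lithiation to LiC6 for graphite and Li15Si4 for silicon, using Q = n·F / (3.6·M):

```python
# Theoretical gravimetric capacity Q = n * F / (3.6 * M)  in mAh/g
# n: Li atoms stored per host atom, F: Faraday constant (C/mol),
# M: molar mass of the unlithiated host (g/mol)
F = 96485.33  # C/mol

def capacity_mAh_per_g(n_li_per_host: float, molar_mass: float) -> float:
    """Theoretical capacity referenced to the unlithiated host mass."""
    return n_li_per_host * F / (3.6 * molar_mass)

# Graphite: LiC6 -> 1 Li per 6 carbon atoms (M_C = 12.011 g/mol)
q_graphite = capacity_mAh_per_g(1 / 6, 12.011)

# Silicon: Li15Si4 -> 3.75 Li per silicon atom (M_Si = 28.085 g/mol)
q_silicon = capacity_mAh_per_g(3.75, 28.085)

print(f"graphite: {q_graphite:.0f} mAh/g")  # ~372 mAh/g
print(f"silicon:  {q_silicon:.0f} mAh/g")   # ~3579 mAh/g
print(f"ratio:    {q_silicon / q_graphite:.1f}x")
```

The ratio comes out just under 10, consistent with the “nearly 10 times” claim above.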

These AIs won’t just respond to prompts – although they will do that too – they will work in the background, acting on your behalf and pursuing your goals with independence and competence.

Your main interface to the world will, as it is today, be a device: a smartphone or whatever replaces it. It will host your personal AI agent, not a stripped-down assistant with limited capabilities or knowledge, but a sophisticated model more capable than GPT-4 is today. It will run locally and privately, so all your core interactions remain yours and yours alone. It will be a digital chief of staff: an extension of your will, with initiative of its own.

In Apple’s visionary Knowledge Navigator concept video from 1987, we saw an early, eerily prescient depiction of an AI-powered assistant: a personable digital agent helping a university professor manage his day. It converses naturally, juggles scheduling conflicts, surfaces relevant academic research, and even initiates a video call with a colleague — all through a touchscreen interface with a calm, competent virtual presence.

Apple is making progress on a standard for brain implant devices that can help people with disabilities control devices such as iPhones with their thoughts. As reported in The Wall Street Journal, Apple has plans to release that standard to other developers later this year.

The company has partnered with Synchron, which has been working with other companies, including Amazon, on ways to make devices more accessible. Synchron makes an implant called a Stentrode that is placed in a blood vessel atop the brain’s motor cortex. Once in place, the Stentrode can read brain signals and translate them into control of devices including iPhones, iPads and Apple’s Vision Pro headset.
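At a very high level, switch-style BCI control of this kind reduces to turning a continuously decoded motor-intent signal into discrete UI events. The sketch below is purely illustrative — the function name, threshold, and refractory scheme are assumptions for exposition, not Synchron’s or Apple’s actual pipeline:

```python
def intent_to_events(intent_scores, threshold=0.7, refractory=3):
    """Turn a stream of decoded motor-intent scores (0..1) into discrete
    'select' events, with a refractory period so one sustained intent
    doesn't fire repeatedly. Illustrative sketch only."""
    events = []
    cooldown = 0
    for t, score in enumerate(intent_scores):
        if cooldown > 0:
            cooldown -= 1  # still inside the refractory window; ignore
            continue
        if score >= threshold:
            events.append(t)       # user 'clicked' at sample t
            cooldown = refractory  # suppress samples while intent decays
    return events

# A sustained intent burst should produce one event, not many:
scores = [0.1, 0.2, 0.9, 0.95, 0.8, 0.3, 0.1, 0.85, 0.2]
print(intent_to_events(scores))  # -> [2, 7]
```

The refractory window is the key design choice: without it, a single held intention would spam clicks on every sample above the threshold.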

As we saw last year, a patient with ALS testing the Synchron technology was able to navigate menus in the Vision Pro device and use it to experience the Swiss Alps in VR. The technology could become more widely available to people with paralysis. The company has a community portal for those interested in learning about future tests.

A new AI model from Tokyo called the Continuous Thought Machine mimics how the human brain works by thinking in real-time “ticks” instead of layers. Built by Sakana, this brain-inspired AI allows each neuron to decide when it’s done thinking, showing signs of what experts call proximate consciousness. With no fixed depth and a flexible thinking process, it marks a major shift away from traditional Transformer models in artificial intelligence.
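Sakana’s actual Continuous Thought Machine is far more involved, but the core idea — computation unrolled over internal “ticks,” with each unit deciding for itself when it is done, so depth is not fixed in advance — can be shown in a toy form. Everything below (the update rule, the halting test, the names) is an illustrative assumption, not Sakana’s code:

```python
import math

def tick_until_settled(values, max_ticks=50, tol=1e-4):
    """Toy tick-based computation: each unit keeps updating its own state
    until its per-tick change falls below tol, then freezes (halts).
    Different units may use different numbers of ticks."""
    state = list(values)
    done = [False] * len(state)
    ticks_used = [0] * len(state)
    for t in range(max_ticks):
        for i, x in enumerate(state):
            if done[i]:
                continue  # this unit already decided it is finished thinking
            new_x = 0.5 * math.tanh(x)  # illustrative contracting update
            if abs(new_x - x) < tol:
                done[i] = True  # unit-local halting decision
            state[i] = new_x
            ticks_used[i] = t + 1
        if all(done):
            break  # no fixed depth: stop as soon as every unit has settled
    return state, ticks_used

state, ticks = tick_until_settled([2.0, 0.1, -1.5])
print(ticks)  # units starting further from equilibrium take more ticks
```

The contrast with a Transformer-style model is that nothing here has a predetermined number of layers: the “depth” each unit experiences is an emergent property of its own halting rule.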

🔍 What’s Inside:
Sakana’s brain-like AI thinks in real-time ticks instead of fixed layers.
https://shorturl.at/UPSTt
Deep Agent’s new MCP connects AI to over 5,000 real-world tools via Zapier.
http://deepagent.abacus.ai
also visit: http://chatllm.abacus.ai
Alibaba’s ZEROSEARCH fakes Google results and slashes training costs by 88%.
https://www.alizila.com/alibabas-new–…
Honor phones run Google’s new Veo 2 model before Pixel even gets access.
https://shorturl.at/Ki0YP
Tencent drops a deepfake engine with shocking face accuracy.
https://github.com/Tencent/HunyuanCustom
Apple uses on-device AI in iOS 19 to predict and extend battery life.
https://www.theverge.com/news/665249/…
Saudi Arabia launches a $940B AI empire with support from Musk and Altman.
https://techcrunch.com/2025/05/12/sau…

🎥 What You’ll See:

  • How Sakana’s AI mimics human neurons and rewrites how machines process thought
  • Why Abacus’ Deep Agent now acts like a fully autonomous digital worker
  • How Alibaba trains top AI models without using live search engines
  • Why Google let Honor debut its video AI before its own users
  • What Tencent’s face-swapping tech means for the future of video generation
  • How iPhones will soon think ahead to save power
  • Why Saudi Arabia’s GPU superpower plan could shake the entire AI industry

📊 Why It Matters:
AI is breaking out of the lab—thinking like brains, automating your work, and reshaping global power. From self-regulating neurons to trillion-dollar GPU wars, this is where the future starts.

#ai #robotics #consciousai

Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute for Brain Research. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.

Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises—and evidence is accumulating that practicing mindfulness has positive effects on mental health. The open-access study, reported April 8 in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.

“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern investigator and MIT Professor John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab.

AI is a computing tool. It can process and interrogate huge amounts of data, expand human creativity, generate new insights faster and help guide important decisions. It’s trained on human expertise, and in conservation that’s informed by interactions with local communities or governments—people whose needs must be taken into account in the solutions. How do we ensure this happens?

Last year, Reynolds joined 26 other conservation scientists and AI experts in a “Horizon Scan”—an approach pioneered by Professor Bill Sutherland in the Department of Zoology—to think about the ways AI could revolutionize the success of global biodiversity conservation. The international panel agreed on the top 21 ideas, chosen from a longlist of 104, which are published in the journal Trends in Ecology and Evolution.

Some of the ideas extrapolate from AI tools many of us are familiar with, like phone apps that identify plants from photos, or birds from sound recordings. Being able to identify all the species in an ecosystem in real time, over long timescales, would enable a huge advance in understanding ecosystems and species distributions.