Spintronics—a technology that harnesses the electron’s magnetic quantum states to carry information—could pave the way for a new generation of ultra-energy-efficient electronics. Yet a major challenge has been the ability to control these delicate quantum properties with sufficient precision for practical applications. By combining different quantum materials, researchers at Chalmers University of Technology have now taken a decisive step forward, achieving unprecedented control over spin phenomena. The advance opens the door to next-generation low-power data processing and memory technologies.
Data centers, cloud services, AI and connected systems account for a rapidly growing share of global energy consumption. In the search for more energy-efficient technological solutions, spin electronics, or spintronics, has emerged as a promising approach. Instead of relying solely on the movement of electric charge, spintronics uses magnetic states to carry information. More specifically, it exploits a quantum property of electrons known as spin, which makes electrons behave like tiny magnets.
“Just like a compass needle, an electron’s spin can point in one of two directions—up or down. These two directions can be used to represent digital information, in the same way today’s electronics use 0s and 1s,” explains Saroj Dash, Professor of Quantum Device Physics at Chalmers University of Technology.
Hintze, A., Adami, C. Promoting cooperation in the public goods game using artificial intelligent agents. npj Complexity 3, 3 (2026). https://doi.org/10.1038/s44260-025-00065-9
The reef is a home and feeding ground for dozens of species that depend on it the way a woodland creature depends on trees. It has survived ice ages – but whether it will survive increasing pressures from industrial fishing, deep-sea mining and climate change is, in part, a question about data. If we don’t know it exists, how can we protect it?
A new project called Deep Vision could fundamentally transform our understanding of the deep ocean by digging into pictures and videos sitting largely unexamined in research archives around the world. Using AI, the project can analyse thousands of hours of seafloor footage to produce the first comprehensive maps of vulnerable marine ecosystems across the entire Atlantic basin.
Over the past two decades, robotic and autonomous underwater vehicles have collected vast quantities of footage from the deep sea. This represents an extraordinary resource – a record of ecosystems that most humans will never see.
Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!
0:00 Intro
0:37 What is consciousness? Phenomenology, functionalism & panpsychism
1:54 Causal boundaries: the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity
3:20 Minds are not states, they are processes. We don't see causal filtering in tables
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism
9:49 Methodological humility about armchair philosophy of mind
12:41 Putnam-style brain-in-a-vat, and why standard objections to AI minds fall flat
16:37 Is sentience required (or desired) not just for moral competence in AI, but for moral motivation as well?
22:35 Why stepping outside yourself is powerful: seeing
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What's still missing
28:16 AI, hybrid minds, and the limits of human augmentation
32:32 Can minds be extended, in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough
39:41 Why AI is so data-hungry, and why better algorithms must exist
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception
51:05 What comes after copilots: agent teams, multimodality and new AI workflows
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create
1:08:10 What happens when AI starts shaping human relationships
1:11:18 Why feeling in control can matter more than being right
1:12:58 Why intelligence without wisdom is very dangerous
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning
Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem
1:29:47 Can AI become more moral than us (humans)? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan
1:59:36 Will superintelligences converge into a cosmic singleton?
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P… Regards, Adam Ford
Welcome to the Research Symposium on Enabling AI at Nation Scale, hosted by the Ministry of Electronics and Information Technology (MeitY).
This landmark event brings together the world’s leading pioneers in Artificial Intelligence to discuss the future of discovery, engineering, and national infrastructure. Featuring keynote addresses from Turing Award winners and industry visionaries, we explore how AI acts as a catalyst for scientific breakthroughs.
AI chatbots are homogenizing human expression and risk reducing humanity’s collective wisdom, computer scientists and psychologists say. http://spkl.io/6181AI6Jh.
Trends in Cognitive Sciences.
Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet, as large language models (LLMs) become deeply embedded in people’s lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts.
Artificial intelligence, or AI, is not merely a tool in our age of rapid technological advancement; rather, it is the fundamental force behind innovation in all spheres of society. Our world is changing due to AI’s capabilities, which range from real-time decision-making in national security to predictive analytics in healthcare.
The contemporary data center, the digital stronghold that stores, processes and drives the enormous computing demands of AI models, is at the center of this change. However, as AI adoption accelerates, these vital infrastructures confront two existential challenges: an unprecedented increase in power usage and an evolving landscape of increasingly complex security risks. Addressing both is no longer discretionary; it is essential for operational continuity, economic stability and national resilience.
In this JACC: Basic to Translational Science article, Joachimbauer et al. demonstrate that cardiopathogenic CD4+ T cells induce acute yet reversible inflammation-driven myocardial changes, and that the persistence of these cells is a key factor driving functional cardiac remodeling.
Artificial intelligence now plays Go, paints pictures, and even converses like a human. However, there remains a decisive difference: AI requires far more electricity than the human brain to operate. Scientists have long asked the question, “How can the brain learn so intelligently using so little energy?” KAIST researchers have moved one step closer to the answer.
A research team led by Distinguished Professor Sang Wan Lee of the Department of Brain and Cognitive Sciences has developed a new technology that applies the learning principles of the human brain to deep learning, enabling stable training even in deep artificial intelligence models.
Our brain does not passively receive the world. Instead of merely perceiving what is happening in the present, it first predicts what will happen next and, when reality differs from that prediction, adjusts itself to reduce the difference (i.e., prediction error). This is similar to anticipating an opponent’s next move in Go and changing strategy if the prediction turns out to be wrong. This mode of information processing is known as “Predictive Coding.”
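The update loop described above can be sketched in a few lines of code. This is a minimal, illustrative toy (not the KAIST team's method): a single unit holds an internal estimate, predicts the incoming signal, and nudges the estimate by a fraction of the prediction error. The function name and learning rate are made up for illustration.

```python
# Toy predictive-coding update: predict the input, then shrink the
# prediction error (observation minus prediction) step by step.

def predictive_coding_step(estimate, observation, learning_rate=0.1):
    """Move the internal estimate toward the observation by a
    fraction of the prediction error."""
    prediction_error = observation - estimate  # reality minus prediction
    return estimate + learning_rate * prediction_error

# The unit starts with no expectation (0.0) and repeatedly observes 1.0;
# each surprise is smaller than the last as the prediction improves.
estimate = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:
    estimate = predictive_coding_step(estimate, observation)
```

Each pass leaves a smaller error than the one before, which is the essence of the Go analogy in the text: a wrong prediction triggers a strategy adjustment, and repeated adjustments converge on an accurate model.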