Why the AI alignment problem is not merely a technical hurdle, but a civilizational rite of passage in the evolution of intelligence
Artificial intelligence is often discussed as a technological threat, yet the deeper challenge lies not within the machines themselves, but within the values guiding how humanity chooses to use this unprecedented form of power.
Throughout history, every major leap in automation has multiplied productivity while simultaneously concentrating influence in the hands of those who control it. The emergence of artificial intelligence represents the most powerful form of automation ever created, capable of reshaping economies, redefining work, and transforming the nature of human connection itself.
This conversation explores how AI amplifies existing human systems rather than replacing them, why questions of power, wealth, authenticity, and trust are becoming more important than technological capability, and how the future shaped by artificial intelligence will ultimately reflect human intentions rather than machine decisions.
The technology is neutral. The outcome is not.
#AI #ArtificialIntelligence #Future #Technology #MoGawdat
Researchers at Karlsruhe Institute of Technology (KIT) and École Polytechnique Fédérale de Lausanne (EPFL) present a novel component that enables very fast, economical, and reliable data transmission thanks to an advanced manufacturing technology. Their new electro-optical modulator transmits data efficiently through fiber-optic cables and can be manufactured inexpensively in large quantities on standard semiconductor wafers. This is important, as AI applications and growing data traffic are pushing data centers and fiber-optic networks to their performance limits. The researchers present their findings in Nature Communications.
Similar to modern computer chips, the modulator can be manufactured using established semiconductor processes. The researchers combine lithium tantalate, a material that guides light particularly well and serves as the heart of the modulator, with a proven chip-manufacturing technique from microelectronics. To date, these two technologies have never been used together. Now, for the first time, the combination enables reliable mass production.
Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates U.S. AI and data centers used about 415 terawatt hours of electricity in 2024, more than 10% of nationwide generation that year, and consumption is expected to double by 2030.
Seeking to head off this unsustainable path of power consumption, researchers at the School of Engineering have developed a proof-of-concept for efficient AI systems that could use roughly one-hundredth of the energy of current systems while delivering more accurate results on tasks.
The approach developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI, a combination of conventional neural networks and symbolic reasoning similar to the way humans break tasks and concepts down into steps and categories.
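To make the neuro-symbolic idea concrete, here is a minimal, purely illustrative sketch (not the system described above): a "neural" stage maps raw features to symbolic predicates, and a symbolic stage forward-chains over explicit rules. All names, thresholds, and rules are invented for illustration; the learned component is replaced by a stub.

```python
# Minimal neuro-symbolic sketch (illustrative only, not the system above):
# a "neural" stage maps raw features to symbolic predicates, and a
# symbolic stage applies explicit if-then rules to those predicates.

def neural_perception(features):
    """Stand-in for a learned classifier: maps feature values to symbols."""
    # Hypothetical threshold model in place of a trained network.
    label = "cup" if features["round"] > 0.5 else "block"
    return {("is", label), ("graspable", features["width"] < 0.1)}

RULES = [
    # (premises, conclusion): classic symbolic if-then rules.
    ({("is", "cup"), ("graspable", True)}, ("action", "pick_up")),
    ({("is", "block")}, ("action", "push")),
]

def symbolic_reasoner(facts):
    """Forward-chain over RULES until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = neural_perception({"round": 0.9, "width": 0.08})
print(symbolic_reasoner(facts))
```

The division of labor is the point: the expensive learned component stays small, while the reasoning step is cheap, transparent, and auditable, which is one route to the energy savings described above.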
Researchers have developed a new kind of nanoelectronic device that could dramatically cut the energy consumed by artificial intelligence hardware by mimicking the human brain. The researchers, led by the University of Cambridge, developed a form of hafnium oxide that acts as a highly stable, low-energy “memristor”—a component designed to mimic the efficient way neurons are connected in the brain. The results are reported in the journal Science Advances.
Current AI systems rely on conventional computer chips that shuttle data back and forth between memory and processing units. This constant movement consumes large amounts of electricity, and global demand is exploding as AI adoption expands across industries.
Brain-inspired, or neuromorphic, computing is an alternative way to process information that could reduce energy use by as much as 70% by storing and processing information in the same place, and doing so with extremely low power. Such a system would also be far more adaptable, in the same way our own brains are able to learn and adapt.
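The energy argument above rests on avoiding data movement: in a memristor crossbar, the weight matrix is stored as device conductances, and a matrix-vector product happens in place by applying input voltages to the rows and reading summed currents on the columns (Ohm's law per device, Kirchhoff's law per column). A toy numerical sketch of that computation, with arbitrary made-up values and no device physics:

```python
# Illustrative sketch of in-memory (crossbar) matrix-vector multiplication.
# Weights live in the array as conductances G; applying row voltages V
# yields column currents I_j = sum_i V_i * G[i][j] with no data shuttled
# between separate memory and processing units. Sizes and values are
# arbitrary, chosen only for illustration.

def crossbar_matvec(conductances, voltages):
    """Current on column j: I_j = sum_i V_i * G[i][j]."""
    rows, cols = len(conductances), len(conductances[0])
    assert len(voltages) == rows
    return [
        sum(voltages[i] * conductances[i][j] for i in range(rows))
        for j in range(cols)
    ]

# A 3x2 "weight" matrix stored as conductances (arbitrary units)...
G = [[0.1, 0.4],
     [0.2, 0.0],
     [0.3, 0.5]]
# ...and an input vector applied as row voltages.
V = [1.0, 2.0, 1.0]

print(crossbar_matvec(G, V))  # column currents, i.e. the product G^T V
```

In a physical array the whole product is a single analog read-out rather than a loop, which is where the claimed power savings come from.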
Humans excel at transmitting ideas, skills, and knowledge across generations, and at building on those competencies in a cumulative manner. James Rilling, Professor of Psychology at Emory University, explores how the transmission of our cumulative culture is assumed to depend on both language and mental perspective-taking, or theory of mind. If humans have specialized abilities in these domains, we must have neurobiological specializations to support them. His research has used comparative primate neuroimaging to attempt to identify such specializations.

The arcuate fasciculus is a white matter fiber tract that links Wernicke's and Broca's language areas. It is known to be involved in multiple high-level linguistic functions such as lexical semantics, complex syntax, and speech fluency. Using diffusion-weighted imaging and tractography, Rilling and colleagues have demonstrated human specializations in the size and trajectory of the arcuate fasciculus that may partially explain human linguistic abilities.

Theory of mind depends on a set of cortical regions belonging to the default mode network, a neural network that is functionally connected, highly active at rest, and deactivated by attention-demanding cognitive tasks. Functional neuroimaging shows that chimpanzees and other primates appear to have a default mode network similar to that of humans. However, the non-human primate default mode network seems to have weaker connectivity between certain key nodes, suggesting that these connections could play a role in human theory of mind specializations.

Recorded on 02/27/2026. [3/2026] [Show ID: 41329]
Donate to UCTV to support informative & inspiring programming:
https://www.uctv.tv/donate
Learn more about anthropogeny on CARTA’s website:
https://carta.anthropogeny.org/
More videos from: CARTA — The Idea Organ.
(https://www.uctv.tv/carta-idea-organ)
Explore More Science & Technology on UCTV
(https://www.uctv.tv/science)
Science and technology continue to change our lives. University of California scientists are tackling important questions in areas like climate change, evolution, oceanography, neuroscience, and the potential of stem cells.
UCTV is the broadcast and online media platform of the University of California, featuring programming from its ten campuses, three national labs and affiliated research institutions. UCTV explores a broad spectrum of subjects for a general audience, including science, health and medicine, public affairs, humanities, arts and music, business, education, and agriculture. Launched in January 2000, UCTV embraces the University of California's core missions of teaching, research, and public service by providing quality, in-depth television far beyond the campus borders to inquisitive viewers around the world.
Become a Big Think member to unlock expert classes, premium print issues, exclusive events and more: https://bigthink.com/membership/?utm_… "If science aims to describe everything, how can it not describe the simple fact of our existence?" On this episode of Dispatches, Kmele speaks with the scientists, mathematicians, and spiritual leaders trying to do just that.
This video is an episode from @The-Well, our publication about ideas that inspire a life well-lived, created with the @JohnTempletonFoundation
Watch the full podcast now: Dispatches from The Well.
In the newest episode of Dispatches from The Well, we’re diving deep into the “hard problem of consciousness.” Here, Kmele combines the perspectives of five different scientists, philosophers, and spiritual leaders to approach one of humanity’s most pressing questions: what is consciousness?
In the AI age, the question of consciousness is more pressing than ever. Is every single thing in the universe self-aware? What does it actually mean to be conscious? Are our bodies really just a vessel for our thoughts? Kmele asks these questions, and many more, in the most thought-provoking episode yet. This is Dispatches from The Well.
Featuring: Sir Roger Penrose, Christof Koch, Melanie Mitchell, Reid Hoffman, and Swami Sarvapriyananda.
Recent reporting on SpaceX’s proposal to deploy up to one million satellites in low Earth orbit — paired with a vision of AI-enabled, autonomous orbital infrastructure — marks a decisive moment for the space community. Regardless of whether these numbers ultimately materialize, the direction is unmistakable: space is moving toward unprecedented scale, autonomy and strategic importance.
That reality demands a fundamental reassessment of what space awareness really means.
For decades, space situational awareness (SSA) focused on orbital mechanics: where an object is, where it will be and whether it might collide with something else. That model is now insufficient. Satellites are no longer passive nodes governed primarily by physics; they are software-defined, networked systems deeply integrated with terrestrial cyber infrastructure, global supply chains and increasingly AI-driven decision loops.
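The classic SSA question the paragraph describes, where an object is, where it will be, and whether it might collide, reduces at its geometric core to a closest-approach calculation. A toy sketch under a strong simplifying assumption (straight-line motion, no orbital dynamics or uncertainty, made-up numbers):

```python
# Toy conjunction check: given each object's position and velocity
# (straight-line motion assumed, unlike real orbit propagation), find
# the time of closest approach and the miss distance. Real screening
# propagates full orbits with covariance; this is only the geometry.

def closest_approach(p1, v1, p2, v2):
    """Relative motion d(t) = dp + t*dv; minimize |d(t)| over t >= 0."""
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dv2 = sum(x * x for x in dv)
    # If relative velocity is zero, the separation never changes.
    t = 0.0 if dv2 == 0 else max(0.0, -sum(p * v for p, v in zip(dp, dv)) / dv2)
    d = [p + t * v for p, v in zip(dp, dv)]
    return t, sum(x * x for x in d) ** 0.5

# Two objects on near-crossing straight-line tracks (km, km/s; made up):
t_star, miss = closest_approach(
    [0.0, 0.0, 0.0], [7.5, 0.0, 0.0],
    [100.0, 1.0, 0.0], [-7.5, 0.0, 0.0],
)
print(t_star, miss)
```

The article's point is that this physics-only picture, however it is refined, no longer suffices once satellites are software-defined, networked systems.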