
This week, major AI breakthroughs were announced, including Microsoft’s new Copilot agents, Sand AI’s long video generation, and Baidu’s faster, cheaper ERNIE models. Perplexity launched a voice assistant for iPhone, ByteDance introduced screen-controlling AI, and UC San Diego showed GPT-4.5 passing a real Turing Test. DeepMind warned about AI hallucinations caused by rare words, while YouTube started testing AI-generated video clips in search results.

Join our free AI content course here 👉 https://www.skool.com/ai-content-acce…
Get the best AI news without the noise 👉 https://airevolutionx.beehiiv.com/

🔍 What’s Inside:
• Microsoft’s Copilot Wave Two introduces powerful AI agents like Researcher and Analyst.
• Sand AI and Sky Reels revolutionize video generation with long-form and infinite content breakthroughs.
• Baidu’s ERNIE Turbo models offer faster performance at lower costs, challenging OpenAI’s dominance.

🎥 What You’ll See:
• How AI now creates live sports commentary, animates 3D faces, and controls computers from screenshots.
• Why DeepMind warns about hidden risks in AI training and how UC San Diego’s research changes Turing tests.
• How YouTube’s AI-generated video clips and Perplexity’s new iPhone assistant could reshape online content.

📊 Why It Matters:
This wave of AI advancements shows how fast technology is evolving, with smarter agents, endless video creation, cheaper high-end models, and new challenges in AI reliability, content creation, and human-like behavior.

DISCLAIMER:
This video covers major AI updates from Microsoft, Sand AI, Baidu, Perplexity, DeepMind, and others, highlighting the rapid shifts in AI capabilities, risks, and opportunities across real-world applications.

#ai #microsoft #deepmind

Lithium-ion batteries have been a staple in device manufacturing for years, but the liquid electrolytes they rely on to function are quite unstable, leading to fire hazards and safety concerns. Now, researchers at Penn State are pursuing a reliable alternative energy storage solution for use in laptops, phones and electric vehicles: solid-state electrolytes (SSEs).

According to Hongtao Sun, assistant professor of industrial and manufacturing engineering, solid-state batteries—which use SSEs instead of liquid electrolytes—are a leading alternative to traditional lithium-ion batteries. He explained that although there are key differences, the two battery types operate similarly at a fundamental level.

“Rechargeable batteries contain two internal electrodes: an anode on one side and a cathode on the other,” Sun said. “Electrolytes serve as a bridge between these two electrodes, providing fast transport for conductivity. Lithium-ion batteries use liquid electrolytes, while solid-state batteries use SSEs.”
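The electrolyte's role as an ion-transport bridge is why its conductivity matters so much for battery design. A toy calculation (all numbers are illustrative order-of-magnitude assumptions, not from the Penn State work) shows how ionic conductivity sets the electrolyte layer's resistance:

```python
# Toy model of electrolyte ionic resistance, R = L / (sigma * A).
# A lower-conductivity electrolyte layer resists ion transport more,
# which limits the current the cell can comfortably deliver.

def electrolyte_resistance(thickness_cm: float, area_cm2: float,
                           conductivity_S_cm: float) -> float:
    """Ionic resistance (ohms) of an electrolyte layer of given geometry."""
    return thickness_cm / (conductivity_S_cm * area_cm2)

# Assumed, illustrative values: liquid electrolytes ~1e-2 S/cm,
# many solid-state electrolytes closer to ~1e-3 S/cm.
liquid = electrolyte_resistance(0.002, 10.0, 1e-2)
solid = electrolyte_resistance(0.002, 10.0, 1e-3)
print(f"liquid: {liquid:.3f} ohm, solid: {solid:.3f} ohm")
```

Under these assumed numbers the solid layer is ten times more resistive, which is one reason raising SSE conductivity is an active research goal.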

In a new Nature Communications study, researchers have developed an in-memory ferroelectric differentiator capable of performing calculations directly in memory, without requiring a separate processor.

The proposed differentiator promises energy efficiency, especially for edge devices like smartphones, autonomous vehicles, and security cameras.

Traditional approaches to tasks like image processing and motion detection involve a multi-step, energy-intensive process: data is first recorded, then transmitted to a memory unit, which in turn passes it to a microcontroller unit to perform differential operations.
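The differential operation at the heart of this pipeline is simple to state in software. The sketch below (illustrative only, not the paper's ferroelectric hardware) shows the frame-differencing computation a motion detector performs, which the in-memory array executes without shuttling data to a separate processor:

```python
# Frame differencing for motion detection: subtract consecutive frames and
# flag pixels whose intensity changed beyond a threshold. Frame contents and
# the threshold value are illustrative.
import numpy as np

def motion_mask(prev_frame: np.ndarray, next_frame: np.ndarray,
                threshold: int = 30) -> np.ndarray:
    """Boolean mask of pixels that changed by more than `threshold`."""
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two tiny 8-bit grayscale "frames": one pixel changes between them.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[2, 2] = 200  # a moving object appears

mask = motion_mask(f0, f1)
print(int(mask.sum()))  # number of pixels flagged as motion
```

In the conventional pipeline each of those subtractions costs a memory-to-processor round trip; performing them where the data already sits is the source of the claimed energy savings.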

Until now, Google’s Android XR glasses had only appeared in carefully curated teaser videos and limited hands-on previews shared with select publications. These early glimpses hinted at the potential of integrating artificial intelligence into everyday eyewear but left lingering questions about real-world performance. That changed when Shahram Izadi, Google’s Android XR lead, took the TED stage, joined by Nishtha Bhatia, to demonstrate the prototype glasses in action.

The live demo showcased a range of features that distinguish these glasses from previous smart eyewear attempts. At first glance, the device resembles an ordinary pair of glasses. However, it’s packed with advanced technology, including a miniaturized camera, microphones, speakers, and a high-resolution color display embedded directly into the lens.

The glasses are designed to be lightweight and discreet, with support for prescription lenses. They can also connect to a smartphone to leverage its processing power and access a broader range of apps.

Billions of heat exchangers are in use around the world. These devices, whose purpose is to transfer heat between fluids, are ubiquitous across many commonplace applications: they appear in HVAC systems, refrigerators, cars, ships, aircraft, wastewater treatment facilities, cell phones, data centers, and petroleum refining operations, among many other settings.
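The basic sizing relation behind all of these devices is Q = U · A · ΔT_lm, where ΔT_lm is the log-mean temperature difference between the two ends of the exchanger. A minimal worked example, with all coefficient and temperature values assumed for illustration:

```python
# Heat duty of a simple exchanger via the log-mean temperature difference.
import math

def lmtd(dt_in: float, dt_out: float) -> float:
    """Log-mean temperature difference between the exchanger's two ends (K)."""
    if dt_in == dt_out:
        return dt_in  # limit of the formula when both ends match
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

U = 500.0   # overall heat-transfer coefficient, W/(m^2*K) (assumed)
A = 10.0    # heat-transfer area, m^2 (assumed)
dt_lm = lmtd(dt_in=60.0, dt_out=20.0)  # terminal temperature differences, K

Q = U * A * dt_lm  # heat duty in watts
print(f"{Q / 1000:.1f} kW")
```

The log-mean form, rather than a simple average, accounts for the temperature difference shrinking (or growing) along the flow path.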

Researchers at the National University of Singapore (NUS) have shown that a single, standard silicon transistor, the core component of microchips found in computers, smartphones, and nearly all modern electronics, can mimic the functions of both a biological neuron and synapse.

A synapse is a specialized junction between nerve cells that allows for the transfer of electrical or chemical signals, through the release of neurotransmitters by the presynaptic neuron and their binding to receptors on the postsynaptic neuron. It plays a key role in communication between neurons and in various physiological processes including perception, movement, and memory.
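The two behaviours the transistor is said to mimic can be sketched in software. The model below is purely illustrative (it is not the NUS device's physical mechanism): a synapse whose weight scales the signal it relays, feeding a leaky neuron that integrates inputs and spikes past a threshold. All parameter values are assumptions.

```python
# A leaky integrate-and-fire neuron plus a weighted synapse, the two roles
# the article says a single silicon transistor can play. Illustrative only.

class Synapse:
    def __init__(self, weight: float):
        self.weight = weight  # analogous to synaptic strength

    def transmit(self, signal: float) -> float:
        return self.weight * signal

class LeakyNeuron:
    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained each time step

    def step(self, input_current: float) -> bool:
        """Integrate the input; fire (and reset) when the threshold is crossed."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True  # spike
        return False

syn = Synapse(weight=0.4)
neuron = LeakyNeuron(threshold=1.0)
spikes = [neuron.step(syn.transmit(1.0)) for _ in range(5)]
print(spikes)  # → [False, False, True, False, False]
```

The neuron stays silent while its potential charges, fires once the accumulated input crosses threshold, then resets, which is the integrate-and-fire dynamic the hardware work aims to reproduce far more cheaply than dedicated neuromorphic circuits.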

Year 2021


Communication between brain activity and computers, known as a brain-computer interface or BCI, has been used in clinical trials to monitor epilepsy and other brain disorders. BCI has also shown promise as a technology to enable a user to move a prosthesis simply by neural commands. Tapping into the basic BCI concept would make smartphones smarter than ever.

Research has zeroed in on retrofitting wireless earbuds to detect neural signals. The data would then be transmitted to a smartphone via Bluetooth. Software at the smartphone end would translate different brain wave patterns into commands. The emerging technology is called Ear EEG.
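The "translate brain-wave patterns into commands" step typically means classifying band-limited spectral power. The sketch below is a hypothetical illustration of that idea, not the Ear EEG system's actual software: it estimates alpha-band (8–12 Hz) power in a one-second window and maps it to a command. The sampling rate, band edges, threshold, and command names are all assumptions.

```python
# Classify an EEG window by alpha-band power (hypothetical pipeline stage).
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def alpha_power(window: np.ndarray) -> float:
    """Mean spectral power in the 8-12 Hz alpha band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return float(spectrum[band].mean())

def to_command(window: np.ndarray, threshold: float = 1000.0) -> str:
    # Strong alpha activity is associated with a relaxed, eyes-closed state.
    return "relax" if alpha_power(window) > threshold else "focus"

# One second of synthetic data: a strong 10 Hz (alpha) oscillation plus noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
window = 10 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, FS)
print(to_command(window))  # alpha-dominated window
```

A real system would add artifact rejection and a trained classifier, but the pipeline shape — window, transform, band power, decision — is the standard one.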

Rikky Muller, Assistant Professor of Electrical Engineering and Computer Science, has refined the physical comfort of EEG earbuds and has demonstrated their ability to detect and record brain activity. With support from the Bakar Fellowship Program, she is building out several applications to establish Ear EEG as a new platform technology to support consumer and health monitoring apps.

Elon Musk’s Tesla is on the verge of launching a self-driving platform that could revolutionize transportation with millions of affordable robotaxis, positioning the company to outpace competitors like Uber.

Questions to inspire discussion

Tesla’s Autonomous Driving Revolution

🚗 Q: How is Tesla’s unsupervised FSD already at scale? A: Tesla’s unsupervised FSD is currently deployed in 7 million vehicles, with millions of units of Hardware 4 lying dormant in older vehicles, available at a price point of $30,000 or less.

🏭 Q: What makes Tesla’s autonomous driving solution unique? A: Tesla’s solution is vertically integrated with end-to-end ownership of the entire system, including silicon design, software platform, and OEM, allowing them to keep costs low and push down utilization on ride-sharing networks.

Impact on Ride-Sharing Industry

💼 Q: How will Tesla’s autonomous vehicles affect Uber drivers? A: Tesla’s unsupervised self-driving cars will likely replace Uber’s 1.2 million US drivers, being 4x more useful due to no breaks and no human presence, operating at a per-mile cost below 50% of current Uber rates.
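The "4x more useful" and "below 50% of current Uber rates" claims combine into simple back-of-envelope arithmetic. The multipliers come from the Q&A; the $2.00/mile Uber rate and 150 daily human-driven miles are assumed illustrative figures:

```python
# Back-of-envelope robotaxi economics from the claims above.
UBER_RATE = 2.00                   # $/mile, assumed current ride-hail price
ROBOTAXI_RATE = UBER_RATE * 0.5    # "per-mile cost below 50% of current Uber rates"
UTILIZATION_MULTIPLIER = 4         # "4x more useful": no breaks, no human presence

human_miles_per_day = 150          # assumed miles a human driver covers per day
robotaxi_miles_per_day = human_miles_per_day * UTILIZATION_MULTIPLIER

print(f"Robotaxi fare ceiling: ${ROBOTAXI_RATE:.2f}/mile")
print(f"Daily revenue at that rate: ${robotaxi_miles_per_day * ROBOTAXI_RATE:.0f}")
```

Under these assumptions a robotaxi charging half the human-driven rate still grosses twice as much per day as a human driver, which is the economic pressure the Q&A describes.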

💰 Q: What economic pressure will Tesla’s solution put on Uber? A: Tesla’s autonomous driving solution will create tremendous pressure on Uber, with its cost structure acting as a magnet for high utilization, while Tesla’s fundamentally different design keeps its own costs low.

Future Implications

🤝 Q: What potential strategy might Uber adopt to compete with Tesla? A: Uber may need to approach Tesla to pre-buy their first 2 million Cybercabs upfront, including production costs, as potentially the only path to compete with Tesla’s autonomous driving solution.