
Why Hollywood Is Facing a Very Unhappy Ending

Layoffs, consolidation, streaming losses, artificial intelligence and the rise of the creator economy are reshaping Hollywood, raising questions about whether the industry is just hitting a rough patch or in terminal decline.


China’s New DuClaw AI Just Made OpenClaw Instant and Unstoppable

China just released DuClaw, a new platform that lets anyone run OpenClaw AI agents instantly from a web browser without dealing with deployment, servers, or API keys. At the same time, researchers at Stanford introduced OpenJarvis, a framework that allows personal AI assistants to run entirely on your own computer instead of the cloud. Meanwhile, Google is using Gemini to build the largest flash flood dataset ever created, mapping millions of disaster events across the planet. And a new toolkit called gstack is turning AI coding into something far more autonomous, allowing AI systems to plan software, test applications, and review code automatically.


🧠 What You’ll See

Baidu launches DuClaw to run OpenClaw AI agents directly from a browser.
SOURCE: https://pandaily.com/baidu-ai-cloud-l…

Stanford introduces OpenJarvis for fully local AI assistants.
SOURCE: https://www.marktechpost.com/2026/03/…

Google uses Gemini to build the largest flash flood dataset ever created.
SOURCE: https://www.wsj.com/articles/google-t…

gstack toolkit organizes AI into automated software development workflows.
SOURCE: https://www.producthunt.com/products/…

🚨 Why It Matters

These developments show how quickly artificial intelligence is moving toward more autonomous systems. From browser-based AI agents that run instantly, to personal assistants that operate entirely on local machines, the way people interact with AI is changing rapidly. At the same time, large-scale AI systems are being used to analyze global disasters and predict floods, while new developer tools are allowing AI to plan, test, and review software almost like an engineering team.


The Rapid Trajectory Of Artificial Intelligence

Please see my latest Forbes article: The Rapid Trajectory of Artificial Intelligence: From Machine Learning Foundations to Generative Creativity, Agentic Autonomy, Human Augmentation, Neuromorphic Intelligence, and the Cyborg Horizon.

Thanks and have a great weekend!



Artificial intelligence continues to evolve at an accelerating pace, transitioning from narrow, data-driven tools to systems capable of reasoning and autonomous action.

Circulating Markers of Neutrophil Extracellular Traps for Long‐Term Prognosis in Patients With Acute Chest Pain

Whole-brain cell mapping using AI

The researchers developed a highly multiplexed whole-mount staining technique, utilizing the repeated application of fluorescence in situ hybridization.

The technique, called mFISH3D, enables multiplexed mRNA staining in whole mouse organs and human tissue, and can visualize 10 types of mRNA in an intact mouse brain.


Murakami et al. developed mFISH3D for multiplexed mRNA staining in whole-mouse organs and human tissue. Analysis of the stained mouse brains using the AI-driven ZenCell platform reveals unique cell populations activated by pharmacological perturbation. This workflow provides a robust approach to studying selective cell vulnerability in disease.

From guesswork to guidance: How machine learning speeds dopant design for water-splitting photocatalysts

Machine-learned interatomic potential (MLIP) calculations successfully identify suitable dopants for a novel photocatalytic material, report researchers from the Institute of Science Tokyo. As demonstrated in their study, published in the Journal of the American Chemical Society, a materials informatics approach could predict which ions can be stably introduced into orthorhombic Sn3O4, a promising and recently discovered photocatalytic tin oxide.

Their experiments revealed that aluminum-doped samples achieved 16 times greater hydrogen production than the undoped material, paving the way for next-generation clean energy applications.
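The screening idea described above can be sketched as a simple ranking step: predict a substitution (formation) energy for each candidate ion and keep only those below a stability threshold. This is a minimal illustration of the concept, not the authors' workflow; the ion names, energies, and threshold below are invented placeholders.

```python
# Hypothetical dopant-screening sketch: rank candidate ions by a predicted
# substitution formation energy and keep only those deemed stable.
# All numbers are placeholders, not data from the study.

STABILITY_THRESHOLD_EV = 0.5  # max formation energy (eV) to call a dopant stable

# candidate ion -> predicted formation energy in eV (invented values)
predicted_e_form = {
    "Al3+": 0.12,
    "Ga3+": 0.35,
    "Fe3+": 0.61,
    "Ti4+": 0.48,
    "Mg2+": 0.92,
}

def screen_dopants(energies, threshold):
    """Return dopants predicted to incorporate stably, most stable first."""
    stable = [(ion, e) for ion, e in energies.items() if e <= threshold]
    return sorted(stable, key=lambda pair: pair[1])

for ion, e in screen_dopants(predicted_e_form, STABILITY_THRESHOLD_EV):
    print(f"{ion}: {e:.2f} eV")
```

In a real materials-informatics pipeline the placeholder dictionary would be replaced by energies computed with the trained MLIP, which is what makes screening thousands of candidates tractable compared with running full first-principles calculations for each one.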

Building a sustainable hydrogen economy requires clean and efficient ways to produce hydrogen at scale. One particularly attractive approach is photocatalysis—using materials called photocatalysts to split water into hydrogen and oxygen utilizing sunlight.

New chip lets robots see in 4D by tracking distance and speed simultaneously

Current vision systems for robots and drones rely on 3D sensors that, although powerful, do not always keep up with the fast-paced, unpredictable movement of the real world. These systems often struggle to measure speed instantly or are too bulky and expensive for everyday use. Now, in a paper published in the journal Nature, scientists report how they have developed a 4D imaging sensor on a chip that creates 3D maps of an environment while simultaneously tracking the speed of moving objects.

The researchers built a focal plane array (FPA), a physical grid of 61,952 stationary pixels etched onto a single silicon chip. Each one is a tiny sensor that emits laser light toward a scene and detects the reflected signal.

To “see” its surroundings, laser light from an external source is fed into the chip. This light is routed across the chip through a network of optical switches that sequentially direct it to groups of pixels. Each pixel then uses a technique called FMCW LiDAR to measure the returning signal, which is later processed to determine distance and speed. In many LiDAR systems, one set of pixels sends the light, and another receives it, but here, all pixels both send and receive, making the system much more compact.
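The reason FMCW LiDAR yields distance and speed simultaneously can be shown with the standard textbook relations (this is the general principle, not the authors' implementation): with a triangular frequency chirp, the up-ramp and down-ramp beat frequencies mix a range term and a Doppler term, and the two can be separated by a sum and a difference. The carrier, bandwidth, and ramp-duration values below are assumed for illustration.

```python
# Textbook FMCW relations: with a triangular chirp,
#   up-ramp beat:   f_up   = f_range - f_doppler
#   down-ramp beat: f_down = f_range + f_doppler
# so averaging gives range and differencing gives radial velocity.

C = 3.0e8            # speed of light, m/s
F_CARRIER = 1.93e14  # optical carrier (~1550 nm), Hz -- assumed value
BANDWIDTH = 4.0e9    # chirp bandwidth, Hz -- assumed value
T_RAMP = 10e-6       # duration of one ramp, s -- assumed value

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover target range (m) and radial velocity (m/s) from the two beats."""
    f_range = (f_beat_up + f_beat_down) / 2
    f_doppler = (f_beat_down - f_beat_up) / 2
    distance = C * T_RAMP * f_range / (2 * BANDWIDTH)
    velocity = C * f_doppler / (2 * F_CARRIER)  # from f_doppler = 2*v/wavelength
    return distance, velocity

# A target at 15 m approaching at ~2 m/s produces beats around
#   f_range   = 2*R*B/(c*T)   = 4.0e7 Hz
#   f_doppler = 2*v*f_c/c     ≈ 2.57e6 Hz
d, v = range_and_velocity(4.0e7 - 2.5733e6, 4.0e7 + 2.5733e6)
print(d, v)  # ~15 m, ~2 m/s
```

Because each pixel measures both beat frequencies itself, every pixel delivers a distance and a velocity reading, which is what turns the 3D point cloud into the "4D" map described above.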

Comprehensive digital materials ecosystem can perform ‘sanity check’ to guide design

There is a near-infinite number of material candidates out there—and simply not enough time to hunker down in the lab and test them all. Thankfully, researchers have a variety of tools (such as AI) at their disposal to streamline what would otherwise be a time-consuming process of trial-and-error.

To create an efficient materials design workflow, a team of researchers at Tohoku University is suggesting not just one tool—but a whole toolbox that works together as a cohesive kit. The work is published in the journal Chemical Science.

This comprehensive system is called a “digital materials ecosystem” because it integrates multiple processes together instead of treating them as disconnected steps. For example, the ecosystem is capable of not only predicting how certain materials will react, but also orchestrating multi-step scientific workflows including searching for evidence, screening candidates, and deciding what to test next.
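The integration idea can be sketched as a small orchestration loop in which prediction, screening, and next-experiment selection feed into one another rather than running as disconnected steps. This is a hypothetical illustration of the concept only; the candidate names, scores, and cutoff are invented, and a real ecosystem would plug in trained models and laboratory feedback at each stage.

```python
# Hypothetical "digital materials ecosystem" loop: predict -> screen -> decide.
# All candidate names and scores are invented placeholders.

def predict_reactivity(candidate):
    """Stand-in for a property-prediction model (placeholder scores)."""
    scores = {"mat-A": 0.91, "mat-B": 0.42, "mat-C": 0.77, "mat-D": 0.15}
    return scores[candidate]

def screen(candidates, cutoff=0.5):
    """Keep only candidates whose predicted score clears the cutoff."""
    return [c for c in candidates if predict_reactivity(c) >= cutoff]

def decide_next_test(shortlist):
    """Pick the highest-scoring shortlisted candidate for the next experiment."""
    return max(shortlist, key=predict_reactivity)

candidates = ["mat-A", "mat-B", "mat-C", "mat-D"]
shortlist = screen(candidates)
print(shortlist, decide_next_test(shortlist))
```

The point of chaining the steps this way is that the experiment chosen at the end generates new data that can retrain the predictor, closing the loop the article describes.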

Fundamental constraints to the logic of living systems

Excellent review in which Solé et al. explore how physical/mathematical constraints may determine what subset of biological systems could theoretically evolve in the universe. Lots of fascinating ideas applying concepts like Turing machines, cellular automata, McCulloch-Pitts networks, energy minimization, and phase transitions to multiscale biological and evolutionary phenomena!

I found the description of how parasites almost inevitably emerge and drive increased biodiversity in computational models of evolution particularly fascinating. Interestingly, I recall this idea was featured in the Hyperion Cantos novels during an explanation of the history of artificial intelligence in their fictional universe!


Abstract. It has been argued that the historical nature of evolution makes it a highly path-dependent process. Under this view, the outcome of evolutionary dynamics could have resulted in organisms with different forms and functions. At the same time, there is ample evidence that convergence and constraints strongly limit the domain of the potential design principles that evolution can achieve. Are these limitations relevant in shaping the fabric of the possible? Here, we argue that fundamental constraints are associated with the logic of living matter. We illustrate this idea by considering the thermodynamic properties of living systems, the linear nature of molecular information, the cellular nature of the building blocks of life, multicellularity and development, the threshold nature of computations in cognitive systems and the discrete nature of the architecture of ecosystems. In all these examples, we present available evidence and suggest potential avenues towards a well-defined theoretical formulation.

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.


Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.

