
Is Big Tech Teaching Machines To Be Conscious? Google DeepMind’s Move To Hire A Philosopher Raises Eyebrows

The race to build smarter artificial intelligence has taken an unexpected philosophical turn after Google DeepMind quietly hired an in-house philosopher to investigate the potential for machine consciousness…

…DeepMind is now integrating philosophical reasoning directly into its research pipeline rather than treating ethics as an external concern. This move suggests that Big Tech no longer views sentience as a science-fiction trope but as a technical and moral hurdle, marking a transition from building tools to questioning the nature of those tools themselves.

The Google DeepMind philosopher role focuses on the machine sentience debate, aiming to define what it means for a digital system to ‘feel’ or ‘experience’

This internal appointment comes at a time when large language models are becoming increasingly indistinguishable from human interlocutors. While most researchers maintain that these systems are mere statistical predictors, the boundary is thinning. The decision to bring a philosopher into the core development team indicates that Google expects its path toward artificial general intelligence to raise profound questions about awareness and machine rights.


Google DeepMind has hired an in-house philosopher to explore the boundaries of machine consciousness and ethics. This move follows years of controversy surrounding AI sentience and the limits of large language models.

Elon Musk’s xAI sues over Colorado’s AI antidiscrimination law, claiming it’s a threat to Grok’s free speech

Senate Bill 205, passed in 2024, is one of the nation’s first attempts to regulate ‘high-risk’ AI systems and protect consumers from ‘algorithmic discrimination’ — or disparate treatment or impacts on protected classes under Colorado law.

In the complaint, which was filed in federal court in Denver, Musk’s lawyers contend that the law is ‘unconstitutionally vague’ and ‘invites arbitrary enforcement’ because it fails to define some key terms. They also contend that Colorado’s law would cause Musk’s AI chatbot, Grok, to ‘abandon its disinterested pursuit of truth and instead promote the State’s ideological views on various matters, racial justice in particular,’ which they say violates the First Amendment.

‘Unless the implementation and enforcement of SB24-205 is enjoined, it will violate xAI’s constitutional rights and cause irreparable constitutional harm, impose enormous burdens on xAI and the AI industry, and substitute Colorado’s political preferences for the national economic and security imperative of American AI dominance,’ the complaint reads in part…

…State Rep. Briana Titone, D-Arvada, one of Senate Bill 205’s lead sponsors, told The Sun that Musk’s lawsuit seems like a ‘fishing expedition’ that misinterprets the core of the law.

‘This is where the disconnect is. SB 205 is about consequential decisions, not about freedom of speech,’ Titone said. ‘It’s completely detached from it. And they’re trying to use this argument for a law that has nothing to do with what he’s saying. We’re not restricting speech. Our bill does not say that Grok still can’t be a dick.’


The lawsuit was filed at a time when the Trump administration looks to preempt state regulation of AI models through executive fiat.

AI Model Can Help Cut Hospitalizations in Patients on Dialysis

AI models identified patients with end-stage kidney disease (ESKD) receiving hemodialysis who faced an imminent risk for hospital admission due to infections or fluid status abnormalities. When paired with nurse-led case reviews and targeted interventions, this strategy helped avert short-term admissions, demonstrating AI’s potential to guide timely, focused care.


AI-driven interventions reduce the odds of hospitalization within 7 days by 8% in patients with end-stage kidney disease receiving hemodialysis, according to a recent study.

HarmonyGNN boosts graph AI accuracy on four tough benchmarks by up to 9.6%

Researchers have demonstrated a new training technique that significantly improves the accuracy of graph neural networks (GNNs), AI systems used in applications from drug discovery to weather forecasting. GNNs are designed for tasks where the input data takes the form of a graph: a data structure in which data points (called nodes) are connected by lines (called edges). An edge indicates some kind of relationship between the nodes it joins. Edges can connect nodes that are similar (a property called homophily), but they can also connect nodes that are dissimilar (heterophily).

For example, in a graph of a neural system, some edges would connect nodes representing neurons that excite each other, while others would connect nodes representing neurons that suppress each other.

Because graphs can be used to represent everything from social networks to molecular structure, GNNs are able to capture complex relationships better than many other types of AI systems.
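The article does not describe HarmonyGNN’s actual training technique, but the basic GNN building block it alludes to, nodes exchanging information along edges, can be sketched in a few lines. Below is a minimal, hypothetical example of one round of mean-aggregation message passing on a toy four-node graph, using plain NumPy rather than a real GNN library:

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges as (node, node) pairs.
# In a real graph, edges might link similar nodes (homophily)
# or dissimilar ones (heterophily).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# One scalar feature per node (real GNNs use feature vectors).
features = np.array([1.0, 2.0, 3.0, 4.0])

def message_passing_step(features, edges):
    """One round of mean aggregation: each node averages its
    neighbors' features together with its own."""
    n = len(features)
    agg = features.copy()      # start with the node's own feature
    count = np.ones(n)
    for u, v in edges:         # undirected: messages flow both ways
        agg[u] += features[v]
        count[u] += 1
        agg[v] += features[u]
        count[v] += 1
    return agg / count

updated = message_passing_step(features, edges)
print(updated)  # each node now blends information from its neighborhood
```

Stacking several such rounds, with learned weights between them, lets information propagate across multi-hop neighborhoods; this is the general mechanism whose accuracy the reported training technique aims to improve.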

In Active Solids, Connectivity Is as Important as Activity

A robotic metamaterial shows that the odd mechanics of active solids depend on how the active constituents connect across the system.

Active materials, composed of microscopic constituents that continuously inject motional energy into the system, can exhibit odd mechanical responses, such as stretching vertically when sheared horizontally. Such properties can be used to make materials that spontaneously crawl or roll over difficult terrain [1]. One might naively think that these desirable odd responses could be increased by making the components more active. Jack Binysh of the University of Amsterdam and his colleagues now find that this doesn’t always work [2]. The researchers show that in active solids a collective response only emerges when system-spanning connective networks are formed among the individual constituents of the system. Without such networks, the effects of microscopic activity remain confined locally and the macroscopic response disappears.

An active solid is, fundamentally, an elastic lattice made up of self-driving constituents. Examples include robotic lattices composed of motorized units [1, 2], magnetic colloidal crystals [3], and chiral living embryos [4]. The active solids that Binysh and his colleagues examined are examples of nonreciprocal active solids, meaning that the interactions between elements are directional. Interactions may become directional when individual constituents process information about their neighbors. Such nonreciprocal interactions arise in a wide range of settings. In robotic metamaterials, local control loops impose directional responses on adjacent mechanical units [1]. And in living chiral collectives, hydrodynamic flows allow rotating embryos to exchange momentum with the surrounding media [4].

Retinal Vessel Dysfunction in Cerebral Autosomal Dominant Arteriopathy With Subcortical Infarcts and Leukoencephalopathy

An Ultra-Widefield Fluorescein Angiography Study.



Toward a policy for machine-learning tools in kernel development

The first topic of discussion at the 2025 Maintainers Summit has been in the air for a while: what role — if any — should machine-learning-based tools have in the kernel development process? While there has been a fair amount of controversy around these tools, and concerns remain, it seems that the kernel community, or at least its high-level maintainership, is comfortable with these tools becoming a significant part of the development process.

Sasha Levin began the discussion by pointing to a summary he had sent to the mailing lists a few days before. There is some consensus, he said, that human accountability for patches is critical, and that use of a large language model in the creation of a patch does not change that. Purely machine-generated patches, without human involvement, are not welcome. Maintainers must retain the authority to accept or reject machine-generated contributions as they see fit. And, he said, there is agreement that the use of tools should be disclosed in some manner.

But, he asked the group: is there agreement in general that these tools are, in the end, just more tools? Steve Rostedt said that LLM-generated code may bring legal concerns that other tools do not raise, but Greg Kroah-Hartman answered that the current Developer’s Certificate of Origin (“Signed-off-by”) process should cover the legal side of things. Rostedt agreed that the submitter is ultimately on the hook for the code they contribute, but he wondered about the possibility of some court ruling that a given model violates copyright years after the kernel had accepted code it generated. That would create the need for a significant cleanup effort.
