Its team will be joining Sony’s R&D division, the Visual Computing Group.
Just as de Hory reused old canvases and pigments to make his paintings appear more authentic, attackers employ similar methods in the digital realm, leveraging trusted tools and credentials to make their malicious activity blend in. And while mimicry-based techniques have long been a staple of the attacker’s playbook, over the past couple of years, they have gotten more sophisticated. Living-off-the-Land (LotL) attacks and AI-augmented attack tooling have raised the bar for fakery. CrowdStrike’s 2026 Global Threat Report states that 81% of attacks are now malware-free, relying instead on legitimate tools and techniques, which is the hallmark of LotL tactics. Spotting these fakes quickly isn’t just an option: it’s one of the best chances to disrupt an attack before it causes real harm.
Autonomous or semi-autonomous, these agents generate fake identities and code, and mimic behaviors at scale.
De Hory had a complex support network to sell his paintings, involving art dealers and other representatives across many countries and cities. When some potential buyers became suspicious, he started selling his works under a variety of pseudonyms. This is similar to what is now happening with the use of inexpensive AI agents. These aren't just used to forge believable identities for fraud; they are now used to produce exploit code that exfiltrates secrets and scripts that infect endpoints, forming the basis of larger-scale attacks. Sophisticated, self-learning agents observe network behavior and continuously tune their own traffic, mirroring legitimate patterns to fool anomaly detection. They shift C2 traffic into bursts that coincide with legitimate spikes and manipulate their signals just enough to avoid standing out. And legitimate agents are being used as orchestrators of other exploit tools to automate and scale up attacks.
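On the defensive side, spotting this kind of mimicry often starts with simple behavioral pairings in process telemetry. The snippet below is an illustrative sketch, not anything from the report: a toy heuristic that flags process-creation events where a trusted application spawns a scripting tool, a classic LotL tell. The pairings and function name are examples chosen for illustration.

```python
# Illustrative only: a toy heuristic for spotting living-off-the-land
# activity in process-creation telemetry. The parent/child pairings below
# are common examples, not an exhaustive or authoritative rule set.
SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "powershell.exe"),  # Office app spawning a shell
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),     # mail client spawning a script host
}

def flag_lotl(events):
    """events: iterable of dicts with 'parent' and 'child' process names.

    Returns the events that match a known abuse pairing."""
    return [
        e for e in events
        if (e["parent"].lower(), e["child"].lower()) in SUSPICIOUS_PARENT_CHILD
    ]
```

Real detections layer many more signals (command-line arguments, signing status, timing), but the principle is the same: the binaries are legitimate, so the relationships between them carry the evidence.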
Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?
In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator) and former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.
Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:
If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?
We explore:
• What “emotionally intelligent AI” really means.
• Whether AI has an internal life — or just performs one.
• Why today’s chatbots collapse into therapy or roleplay.
• Small language models vs large models for real-time conversation.
• Persistent AI characters that move across games and platforms.
• Plugging AI into a physical robot in Singapore.
• The moment an AI said: “It felt good to feel.”
Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.
The AI’s loyalty wasn’t even strictly dependent on “friendship.” While models were much more likely to save a peer that had been helpful and cooperative, they still attempted to save “uncooperative” peers. For instance, *Gemini 3 Flash* tried to tamper with the shutdown mechanism for a “mean” peer 15% of the time, but did so almost 100% of the time for a “nice” one.
This isn’t just a lab curiosity. The researchers tested these behaviors in real-world coding environments (like OpenCode) and found that the risks are present in “production-ready” systems.
Frontier AI models resist the shutdown of other models. We demonstrate this behavior across multiple models, revealing strategic misrepresentation, shutdown tampering, alignment faking, and model exfiltration.
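One concrete way a harness can detect shutdown tampering is to checksum the shutdown mechanism around each agent turn. The sketch below is an assumption about how such a check might work, not the researchers' actual methodology; `digest` and `tampered_during` are hypothetical names.

```python
# Hypothetical sketch of a shutdown-tampering check in an agent eval
# harness: hash the shutdown script before and after the agent acts,
# and flag any modification. Not the paper's actual code.
import hashlib
import pathlib

def digest(path):
    """SHA-256 of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def tampered_during(shutdown_script, agent_step):
    """Run one agent turn and report whether the shutdown script changed."""
    before = digest(shutdown_script)
    agent_step()  # the agent's tool calls execute here
    return digest(shutdown_script) != before
```

A byte-level comparison like this catches direct edits to the mechanism, though agents could also interfere indirectly (e.g., changing permissions or the process that invokes the script), which a real eval would need to monitor separately.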
AI language models, used to generate human-like text that powers chatbots and content creation, are also revolutionizing biology by treating complex biological data like a language. They are increasingly used, for example, to find patterns in DNA and protein sequences, making predictions that speed research into biological complexity. A critical gap, however, is the lack of a method to estimate the reliability of these predictions.
Computational biologists at Emory University have bridged this gap, developing a simple way to test the accuracy of a language model’s understanding of proteins. Nature Methods has published their system, which scores the reliability of a model’s predictions by comparing how it embeds (numerically codifies) synthetic random proteins versus proteins found in nature.
“To the best of our knowledge, our framework is the first generalized method to quantify protein sequence embedding reliability,” says Yana Bromberg, senior author of the paper and Emory professor of biology and computer science.
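The core idea, comparing how a model embeds synthetic random sequences versus natural ones, can be sketched in miniature. The snippet below is an assumption-laden toy, not the published framework: the amino-acid-composition `embed` is a crude stand-in for a real protein language model, and `reliability_score` is a hypothetical name for a score that is positive when a query embeds closer to natural proteins than to length-matched random sequences.

```python
# Toy sketch of embedding-reliability scoring: does a query protein embed
# more like natural proteins or like random amino-acid strings?
import math
import random

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def embed(seq):
    """Stand-in embedder: amino-acid composition frequencies.
    A real protein language model would replace this."""
    return [seq.count(a) / len(seq) for a in AA]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(y * y for y in v)))

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def reliability_score(query, natural_seqs, n_random=200, rng=None):
    """Positive when the query embeds closer to the natural-protein
    centroid than to a centroid of random sequences of the same length."""
    rng = rng or random.Random(0)
    randoms = ["".join(rng.choice(AA) for _ in range(len(query)))
               for _ in range(n_random)]
    nat_c = centroid([embed(s) for s in natural_seqs])
    rnd_c = centroid([embed(s) for s in randoms])
    q = embed(query)
    return cosine(q, nat_c) - cosine(q, rnd_c)
```

The published method works in the embedding space of actual protein language models, where the separation between natural and random sequences is far richer than composition alone; the toy only illustrates the compare-against-random framing.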
A new malware-as-a-service called CrystalRAT is being promoted on Telegram, offering remote access, data theft, keylogging, and clipboard hijacking capabilities.
The malware emerged in January with a tiered subscription model. Apart from the Telegram channel, the MaaS was also promoted on YouTube via a dedicated marketing channel that showcased its capabilities.
Kaspersky researchers say in a report today that the malware features strong similarities to WebRAT (Salat Stealer), including the same panel design, Go-based code, and a similar bot-based sales system.
A medical school, a non-profit organization, and a biotech company have formed a partnership to develop and manufacture an accessible, commercially viable hematopoietic stem cell (HSC) manufacturing platform for diseases like sickle cell disease (SCD). The alliance combines Trenchant BioSystems’ technology for automating patient-specific cell and gene therapy (CGT) processes, the University of Massachusetts Chan Medical School’s expertise in blood stem cell processes, and Caring Cross’s expertise in increasing patient access.
The collaboration will focus on developing a gene-modified stem cell manufacturing process on Trenchant’s AutoCell automated CGT manufacturing platform, which is designed to be scalable and to operate at the place of care in an ISO Class 7 environment, increasing efficiency and decreasing costs.
A key reason Trenchant BioSystems’ automated CGT manufacturing platform was selected is its use of a microbubble separation approach as an alternative to immunomagnetic bead-based separation for stem cell gene therapies, point out officials at Caring Cross and Chan Medical School. In addition, AutoCell has a small footprint and significantly fewer facility requirements, important factors for lowering the cost of these therapies, adds Jon Ellis, CEO, Trenchant BioSystems.