A combined team of roboticists from CREATE Lab, EPFL and Nestlé Research Lausanne, both in Switzerland, has developed a soft robot that was designed to mimic human infant motor development and the way infants feed.
In their paper published in the journal npj Robotics, the group describes how they used a variety of techniques to give their robot the ability to simulate the way human infants feed, from birth until approximately six months old.
Prior research has shown that it is difficult to develop invasive medical procedures for infants because suitable test subjects are scarce. Methods currently in use, such as simulations, observational instruments and imaging, tend to fall short because they differ in important ways from real human infants. To overcome these problems, the team in Switzerland designed, built and tested a soft robotic infant that can stand in for real babies in such work.
What happens when AI starts improving itself without human input? Self-improving AI agents are evolving faster than anyone predicted—rewriting their own code, learning from mistakes, and inching closer to surpassing giants like OpenAI. This isn’t science fiction; it’s the AI singularity’s opening act, and the stakes couldn’t be higher.
How do self-improving agents work? Unlike static models such as GPT-4, these systems use recursive self-improvement—analyzing their flaws, generating smarter algorithms, and iterating endlessly. Projects like AutoGPT and BabyAGI already demonstrate eerie autonomy, from debugging code to launching micro-businesses. We’ll dissect their architecture and compare them to OpenAI’s human-dependent models. Spoiler: The gap is narrowing fast.
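To make the "analyze, revise, iterate" loop concrete, here is a minimal, purely illustrative Python sketch. Nothing in it corresponds to AutoGPT's or BabyAGI's actual code; the evaluation function and the revision step are hypothetical stand-ins for a task benchmark and an LLM call.

```python
# Toy sketch of recursive self-improvement: analyze -> propose -> evaluate -> keep.
# All functions here are hypothetical placeholders, not any real agent's API.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    prompt: str    # the agent's current "policy", reduced here to a prompt string
    score: float   # task performance as measured by evaluate()

def evaluate(prompt: str) -> float:
    """Placeholder benchmark: higher is better. A real agent would run a task suite."""
    return random.random()

def propose_revision(prompt: str, critique: str) -> str:
    """Stand-in for an LLM call that rewrites the prompt in light of a critique."""
    return prompt + f"\n# revised after: {critique}"

def self_improve(seed_prompt: str, iterations: int = 5) -> Candidate:
    best = Candidate(seed_prompt, evaluate(seed_prompt))
    for step in range(iterations):
        critique = f"iteration {step}: score {best.score:.2f}"   # analyze own flaws
        revised = propose_revision(best.prompt, critique)        # generate a change
        score = evaluate(revised)                                # test the change
        if score > best.score:                                   # keep only improvements
            best = Candidate(revised, score)
    return best

if __name__ == "__main__":
    print(self_improve("You are a coding assistant.").score)
```

The point of the sketch is the shape of the loop: the same system that produces candidate changes also judges and keeps them, which is exactly where the control concerns discussed next come from.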
Why is OpenAI sweating? While OpenAI focuses on safety and scalability, self-improving agents prioritize raw, exponential growth. Imagine an AI that optimizes itself 24/7, mastering quantum computing over a weekend or cracking protein folding in hours. But there’s a dark side: no “off switch,” biased self-modifications, and the risk of uncontrolled superintelligence.
Who will dominate the AI race? We’ll explore leaked research, ethical debates, and the critical question: can OpenAI’s cautious approach outpace agents that learn to outthink their creators? The future of AI is rewriting itself.
A selfie can be used as a tool to help doctors determine a patient’s “biological age” and judge how well they may respond to cancer treatment, a new study suggests.
Because humans age at “different rates,” their physical appearance may offer insight into their so-called “biological age” – how old a person is physiologically, academics said.
The new FaceAge AI tool can estimate a person’s biological age, as opposed to their actual age, by scanning an image of their face, a new study found.
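As a rough illustration of the kind of pipeline such a tool implies (an image in, a single age estimate out), here is a generic sketch using an off-the-shelf CNN backbone. It is not the FaceAge model or its preprocessing; the architecture, the untrained weights, and the file path are placeholders.

```python
# Hypothetical image-to-age regression pipeline; NOT the FaceAge model,
# just a generic CNN regressor to show the overall shape of such a tool.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Generic backbone with a single regression output standing in for "biological age".
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)
backbone.eval()

def estimate_biological_age(image_path: str) -> float:
    """Return an (untrained, meaningless) age estimate; real weights would come from training."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return backbone(batch).item()

# Example (the path is a placeholder):
# print(estimate_biological_age("selfie.jpg"))
```

A clinically useful system would additionally need face detection, calibrated training data, and validation against patient outcomes, none of which is shown here.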
It’s easy to take joint mobility for granted. Without thinking, it’s simple enough to turn the pages of a book or bend to stretch out a sore muscle. Designers don’t have the same luxury. When building a joint, be it for a robot or a wrist brace, designers want customizability across all degrees of freedom, but are often restricted by designs that lack the versatility to adapt to different use contexts.
Researchers at Carnegie Mellon University’s College of Engineering have developed an algorithm to design metastructures that are reconfigurable across six degrees of freedom and allow for stiffness tunability. The algorithm can interpret the kinematic motions that are needed for multiple configurations of a device and assist designers in creating such reconfigurability. This advancement gives designers more precise control over the functionality of joints for various applications.
The team demonstrated the structure’s versatile capabilities via multiple wearable devices tailored for unique movement functions, body areas, and uses.
Brown University researchers have developed an artificial intelligence model that can generate movement in robots and animated figures in much the same way that AI models like ChatGPT generate text.
A paper describing this work is published on the arXiv preprint server.
The model, called MotionGlot, enables users to simply type an action, such as “walk forward a few steps and take a right,” and the model can generate accurate representations of that motion to command a robot or animated avatar.
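The interface is easiest to picture as “text in, motion tokens out.” The toy sketch below is not MotionGlot’s architecture; it only mimics the idea of sampling from a small, made-up vocabulary of discrete motion tokens conditioned on the text command, where a real model would use a trained transformer.

```python
# Toy "text -> motion tokens" interface. The vocabulary, the keyword biasing, and
# the sampling are illustrative stand-ins for a trained autoregressive model.
import random

MOTION_VOCAB = ["STEP_FWD", "STEP_BACK", "TURN_LEFT", "TURN_RIGHT", "STOP"]

def generate_motion(prompt: str, max_tokens: int = 8) -> list[str]:
    """Map a text command to a sequence of discrete motion tokens."""
    # A real model would condition next-token probabilities on the prompt;
    # here we fake that conditioning with simple keyword biases.
    biased = list(MOTION_VOCAB)
    if "forward" in prompt:
        biased += ["STEP_FWD"] * 4
    if "right" in prompt:
        biased += ["TURN_RIGHT"] * 2
    tokens = []
    for _ in range(max_tokens):
        token = random.choice(biased)
        tokens.append(token)
        if token == "STOP":          # end-of-motion token
            break
    return tokens

print(generate_motion("walk forward a few steps and take a right"))
```

Downstream, each token would be decoded into joint trajectories for the robot or avatar, much as a language model’s tokens are decoded into text.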
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.
Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
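One simple way to probe for such a shared representation space, loosely in the spirit of (but not reproducing) the MIT experiments, is to compare intermediate hidden states for a sentence and its translation. The sketch below assumes the Hugging Face `transformers` library and a small multilingual encoder; the model choice and layer index are arbitrary.

```python
# Hedged sketch (not the MIT team's code): compare mid-layer hidden states of an
# English sentence and its Japanese translation to probe for a shared "hub" space.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-multilingual-cased"   # any small multilingual encoder works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def mean_hidden_state(text: str, layer: int = 6) -> torch.Tensor:
    """Average the token representations at one intermediate layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

en = mean_hidden_state("The cat sleeps on the sofa.")
ja = mean_hidden_state("猫はソファーで寝ています。")
print(torch.cosine_similarity(en, ja, dim=0).item())    # compare to unrelated pairs as a baseline
```

A relatively high similarity for translation pairs at middle layers, compared with unrelated sentences, is the kind of signal consistent with a language-agnostic hub; the intervention result described above goes further by showing that steering that shared space with dominant-language text changes outputs across languages.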
MIT Associate Professor Bin Zhang takes a computational approach to studying the 3D structure of the genome: He uses computer simulations and generative AI to understand how a 2-meter-long string of DNA manages to fit inside a cell’s nucleus.
What if, assuming machine superintelligence is possible, the first one to come into existence sends out von Neumann machines that convert solar systems into computers of comparable power and intelligence? Such machines would be factories miles long, and they in turn would do the same, until the entire galaxy became an artificially intelligent entity procreating Matrioshka brains.
Adi Newton’s track from the compilation “The Neuromancers. Music inspired by William Gibson’s universe,” published by Unexplained Sounds Group: https://unexplainedsoundsgroup.bandca… (digital download, CD, book).

Music by: Adi Newton, NYORAI, Oubys (Wannes Kolf), Mario Lino Stancati, Joel Gilardini, Tescon Pol, phoanøgramma, Dead Voices On Air, SIGILLUM S, Richard Bégin, André Uhl.

Stories by: Andrew Coulthard, Chris McAuley, Glynn Owen Barrass, J. Edwin Buja, Michael F. Housel, Paolo L. Bandera, Rusell Smeaton, Scott J. Couturier.

The soundtrack of a future in flux

As the father of cyberpunk, William Gibson imagined a world where technology and society collide, blurring the boundaries between human and machine, individual and system. His novels, particularly Neuromancer, painted a dystopian future where sprawling megacities pulse with neon, corporations rule from the shadows, and cyberspace serves as both playground and battlefield. In his vision, technology is a tool of empowerment and control, a paradox that resonates deeply in our contemporary world.

Gibson’s work has long since transcended literature, becoming a blueprint for how we understand technology’s role in shaping our lives. The term cyberspace, which he coined, feels more real than ever in today’s internet-driven world. We live in a time where virtual spaces are as important as physical ones, where our identities shift between digital avatars and flesh-and-blood selves. The rapid rise of AI, neural interfaces, and virtual reality feels like a prophecy fulfilled, as though we’ve stepped into the pages of a Gibson novel.

A SONIC LANDSCAPE OF THE FUTURE

The influence of cyberpunk on contemporary music is undeniable. The genre’s aesthetic, with its dark, neon-lit streets and synth-driven soundscapes, has found its way into countless genres, from techno and industrial to synthwave and ambient. Electronic music, in particular, feels like the natural soundtrack of the cyberpunk world: synthetic, futuristic, and often eerie, it evokes the idea of a humanity at the edge of a technological abyss.

The cyberpunk universe forces us to confront uncomfortable truths about the way we live today: the increasing corporatization of our world, the erosion of privacy, and the creeping sense that technology is evolving faster than we can control. Though cyberpunk as a literary genre originated in the 1980s, its influence has only grown in the decades since. In music, the cyberpunk ethos is more relevant than ever. Artists today are embracing the tools of technology not just to create new sounds, but to challenge the very definition of music itself.

THE FUTURE OF MUSIC IN A CYBERPUNK WORLD

Much like Gibson’s writing, the music in this compilation embraces technology not only as a tool but as a medium of expression. It’s no coincidence that many of the artists featured here draw from electronic, industrial, and experimental music scenes, genres that have consistently pushed the boundaries of sound and technology. The contributions of Adi Newton, a pioneering figure in cyberpunk music, along with artists such as Dead Voices On Air, Sigillum S, Tescon Pol, Oubys, Joel Gilardini, phoanøgramma, Richard Bégin, Mario Lino Stancati, Nyorai, Wahn, and André Uhl, each capture unique facets of the cyberpunk universe. Their work spans from the gritty, rebellious underworlds of hackers, to the cold, calculated precision of AI, and the vast, sprawling virtual landscapes where anything is possible, and everything is controlled.
These tracks serve as a sonic exploration of Gibson’s vision, translating the technological, dystopian landscapes of his novels into sound. They are both a tribute and a challenge, asking us to reflect on what it means to be human in a world where technology has permeated every corner of our existence. Just as Gibson envisioned a future where humanity and machines converge, the artists in this compilation fuse organic and synthetic sounds, analog and digital techniques, to evoke the tensions of the world he foretold.

Curated and mastered by Raffaele Pezzella (Sonologyst). Layout by Matteo Mariano. Cat. Num. USG105.

Unexplained Sounds Network labels:
https://unexplainedsoundsgroup.bandca…
https://eighthtowerrecords.bandcamp.com
https://sonologyst.bandcamp.com
https://therecognitiontest.bandcamp.com
https://zerok.bandcamp.com
https://reversealignment.bandcamp.com
Tesla is preparing to launch an affordable vehicle and a robotaxi service, highlighted by the upcoming Project Alicorn software update and the new long-range Model Y, aimed at enhancing the user experience and meeting market demand.

Questions to inspire discussion

Tesla’s New Affordable Vehicle
🚗 Q: What are the key features of Tesla’s upcoming affordable vehicle? A: Expected to launch in the first half of 2024, it will be a lower, more compact version of the Model Y, possibly a hatchback, with a starting price of $44,990 in the US.
🏎️ Q: How does the new rear-wheel drive Model Y compare to previous models? A: It offers 20 miles more range, a faster 0–60 time, and all-new features such as improved speakers and sound system, making it a bargain at $44,990.

Robotaxi Functionality
🤖 Q: What is Tesla’s robotaxi project called and what features will it have? A: Called Project Alicorn, it will allow users to confirm a pickup, enter a destination, fasten their seatbelt, request a pullover, cancel the pickup, and access emergency help.
📱 Q: What additional features are coming to the robotaxi app? A: Upcoming features include smart summon without a continuous button press, live activities, a trip summary screen, the ability to close the trunk, rate the ride, and access help outside the service area.
🚕 Q: How might Tesla expand its robotaxi service to non-driverless markets? A: The app includes a “call driver” button, potentially allowing non-driverless markets to join the ride-share network, though this strategy is unclear.

CyberCab Production