Mar 22, 2024
AI Gets Inner Monologue And Becomes Incredibly Smarter
Posted by Chris Smedley in category: robotics/AI
The good news is that a new company has developed an AI inner monologue, the bad news is this makes it significantly smarter.
A groundbreaking nanosurgical tool — about 500 times thinner than a human hair — could be transformative for cancer research, providing insights into treatment resistance that no other technology has been able to offer, according to a new study.
The high-tech double-barrel nanopipette, developed by University of Leeds scientists, and applied to the global medical challenge of cancer, has — for the first time — enabled researchers to see how individual living cancer cells react to treatment and change over time — providing vital understanding that could help doctors develop more effective cancer medication.
The tool has two nanoscopic needles, meaning it can simultaneously inject and extract a sample from the same cell, expanding its potential uses. And the platform’s high level of semi-automation has sped up the process dramatically, enabling scientists to extract data from many more individual cells, with far greater accuracy and efficiency than previously possible, the study shows.
In the last decade, thanks to advances in AI, the internet of things, machine learning and sensor technologies, the fantasy of digital twins has taken off. BMW has created a digital twin of a production plant in Bavaria. Boeing is using digital twins to design airplanes. The World Economic Forum hailed digital twins as a key technology in the “fourth industrial revolution.” Tech giants like IBM, Nvidia, Amazon and Microsoft are just a few of the big players now providing digital twin capabilities to automotive, energy and infrastructure firms.
The inefficiencies of the physical world, so the sales pitch goes, can be ironed out in a virtual one and then reflected back onto reality. Test virtual planes in virtual wind tunnels, virtual tires on virtual roads. “Risk is removed” reads a recent Microsoft advertorial in Wired, and “problems can be solved before they happen.”
All of a sudden, Dirk Helbing and Javier Argota Sánchez-Vaquerizo wrote in a 2022 paper, “it has become an attractive idea to create digital twins of everything.” Cars, trains, ships, buildings, airports, farms, power plants, oil fields and entire supply chains are all being cloned into high-fidelity mirror images made of bits and bytes. Attempts are being undertaken to twin beaches, forests, apple orchards, tomato plants, weapons and war zones. As beaches erode, forests grow and bombs explode, so too will their twins, watched closely by technicians for signals to improve outcomes in the real world.
Modified protein-design tool could make it easier to tackle challenging drug targets — but AI antibodies are still a long way from reaching the clinic.
“They remove some of the magic,” said Dimitris Papailiopoulos, a machine learning researcher at the University of Wisconsin, Madison. “That’s a good thing.”
Training Transformers
Large language models are built around mathematical structures called artificial neural networks. The many “neurons” inside these networks perform simple mathematical operations on long strings of numbers representing individual words, transmuting each word that passes through the network into another. The details of this mathematical alchemy depend on another set of numbers called the network’s parameters, which quantify the strength of the connections between neurons.
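The idea described above — words encoded as strings of numbers, transformed by neurons whose connection strengths are the network's parameters — can be sketched in a few lines. This is a hypothetical toy illustration, not code from any model discussed here; the dimension, weights, and ReLU nonlinearity are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                       # toy size of each word vector
word = rng.normal(size=d)   # a word, encoded as a string of numbers

# The layer's parameters quantify the strength of connections between neurons.
weights = rng.normal(size=(d, d))
bias = np.zeros(d)

# Each neuron computes a weighted sum of its inputs followed by a simple
# nonlinearity (ReLU) -- the "simple mathematical operations" in the text.
transformed = np.maximum(weights @ word + bias, 0.0)

# The word leaves the layer transmuted into another vector of the same size.
print(transformed.shape)
```

Stacking many such layers, each with its own parameters, is what turns these simple operations into the large networks behind language models.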
I think AI agent workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.
CNBC’s Steve Kovach joins ‘Halftime Report’ to discuss the latest news on Microsoft’s new AI PC launch.
Machine-learning system trained on millions of human audio clips shows promise for detecting COVID-19 and tuberculosis.
This robot is equipped with AI-backed deep learning algorithms to autonomously assist users with underlying physiological conditions.
The robot demonstrated seamless functioning that supports users in walking, standing, and climbing stairs or ramps. Scientists call it a “unified control framework.”
Apple quietly submitted a research paper last week related to its work on a multimodal large language model (MLLM) called MM1. Apple doesn’t explain the meaning behind the name, but it may stand for MultiModal 1.
Being multimodal, MM1 is capable of working with both text and images. Overall, its capabilities and design are similar to the likes of Google’s Gemini or Meta’s open-source LLM Llama 2.
An earlier report from Bloomberg said Apple was interested in incorporating Google’s Gemini AI engine into the iPhone. The two companies are reportedly still in talks to let Apple license Gemini to power some of the generative AI features coming to iOS 18.