Archive for the ‘robotics/AI’ category: Page 207

May 11, 2024

Scientists uncover quantum-inspired vulnerabilities in neural networks: the role of conjugate variables in system attacks

Posted by in categories: mathematics, quantum physics, robotics/AI

In a recent study merging the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng have explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics. Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle—a well-established theory in quantum physics that highlights the challenges of measuring certain pairs of properties simultaneously.

The researchers’ quantum-inspired analysis of neural network vulnerabilities suggests that adversarial attacks leverage the trade-off between the precision of input features and their computed gradients. “When considering the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” wrote Dr. Jun-Jie Zhang, whose expertise lies in mathematical physics, in the paper.
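
The “conjugate variable” the authors describe is simply the gradient of the loss with respect to the input. A minimal sketch of how such a gradient drives an adversarial perturbation, using a toy linear model rather than the paper’s networks (all names and values here are illustrative, not from the study):

```python
import numpy as np

# Toy setup: a fixed linear model with squared-error loss. The gradient
# of the loss w.r.t. the input x plays the role of the "conjugate
# variable"; perturbing x along the sign of that gradient is the classic
# FGSM-style attack.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # fixed weights of a toy linear model
x = rng.normal(size=5)        # clean input
y = np.zeros(3)               # target output

def loss(v):
    return 0.5 * np.sum((W @ v - y) ** 2)

# Analytic gradient of the loss w.r.t. the input (the conjugate variable)
grad_x = W.T @ (W @ x - y)

eps = 0.1
x_adv = x + eps * np.sign(grad_x)   # small sign-of-gradient perturbation

print(loss(x), loss(x_adv))
```

For this convex toy model the perturbed input strictly increases the loss, which is the precision trade-off the paper formalizes: the more informative the input gradient, the more effective a small perturbation becomes.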

The researchers hope this work will prompt a reevaluation of the assumed robustness of neural networks and encourage a deeper comprehension of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng observed a trade-off between the model’s accuracy and its resilience.

May 11, 2024

Optimizing Graph Neural Network Training with DiskGNN: A Leap Toward Efficient Large-Scale Learning

Posted by in categories: innovation, robotics/AI

Graph Neural Networks (GNNs) are crucial in processing data from domains such as e-commerce and social networks because they manage complex structures. Traditionally, GNNs operate on data that fits within a system’s main memory. However, with the growing scale of graph data, many networks now require methods to handle datasets that exceed memory limits, introducing the need for out-of-core solutions where data resides on disk.

Despite their necessity, existing out-of-core GNN systems struggle to balance efficient data access with model accuracy. Current systems face a trade-off: either suffer from slow input/output operations due to small, frequent disk reads, or compromise accuracy by handling graph data in disconnected chunks. Previous solutions such as Ginex and MariusGNN, while pioneering, have shown significant drawbacks in training speed or accuracy.

The DiskGNN framework, developed by researchers from Southern University of Science and Technology, Shanghai Jiao Tong University, Centre for Perceptual and Interactive Intelligence, AWS Shanghai AI Lab, and New York University, emerges as a transformative solution specifically designed to optimize the speed and accuracy of GNN training on large datasets. This system utilizes an innovative offline sampling technique that prepares data for quick access during training. By preprocessing and arranging graph data based on expected access patterns, DiskGNN reduces unnecessary disk reads, significantly enhancing training efficiency.
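
The offline read-planning idea can be sketched in a few lines. This is an illustrative toy, not DiskGNN’s actual API (`BLOCK_SIZE` and `plan_reads` are hypothetical names): once sampling happens ahead of training, the node features each mini-batch needs are known in advance, so reads can be grouped by disk block into a few large accesses instead of many small random ones.

```python
from collections import defaultdict

BLOCK_SIZE = 4  # node features per disk block (toy value)

def plan_reads(sampled_node_ids):
    """Group requested node ids by the disk block that holds them,
    so each block is read once, sequentially, instead of per-node."""
    blocks = defaultdict(list)
    for nid in sorted(set(sampled_node_ids)):
        blocks[nid // BLOCK_SIZE].append(nid)
    return dict(blocks)

# One sampled mini-batch touches these nodes (offline sampling output):
batch = [17, 2, 3, 18, 5, 2]
print(plan_reads(batch))   # three block reads instead of five node reads
```

Deduplicating and sorting the requests before touching disk is what turns scattered random I/O into the handful of sequential reads that make out-of-core training competitive.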

May 11, 2024

The Quest for AGI Continues Despite Dire Warnings From Experts

Posted by in category: robotics/AI

Musk, Gates, Hawking, Altman and Putin all fear artificial general intelligence (AGI). But what is AGI, and why might it be an advantage that more people are trying to develop it despite very serious risks?

“We are all so small and weak. Imagine how easy life would be if we had an owl to help us build nests,” said one sparrow to the flock. Others agreed:

“Yes, and we could use it to look after our elderly and our children. And it could give us good advice and keep an eye on the cat.”

May 11, 2024

Nick Bostrom’s ‘Deep Utopia’ On Our AI Future: Can We Have Meaning And Fun?

Posted by in categories: cosmology, robotics/AI

A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book, Superintelligence, helped to wake the world up to the impact of the first Big Bang in AI, the arrival of deep learning. Since then we have had a second Big Bang in AI, with the introduction of transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside.

Deep Utopia is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have to look up words like “modulo” and “simpliciter.” Despite its density and its sometimes grim conclusions, Superintelligence had a sprinkling of playful self-ridicule and snark. There is much more of this in the current offering.

The structure of Deep Utopia is deeply odd. The book’s core is a series of lectures by an older version of the author, which are interrupted a couple of times by conflicting bookings of the auditorium, and once by a fire alarm. The lectures are attended and commented on by three students, Kelvin, Tessius and Firafax. At one point they break the theatrical fourth wall by discussing whether they are fictional characters in a book, a device reminiscent of the 1991 novel Sophie’s World.

May 11, 2024

Dom Heinrich on LinkedIn: Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes…

Posted by in categories: innovation, robotics/AI

Efficient. Fast. Autonomous. And one day it will erase humans: #AI I personally always said there is another perspective to artificial intelligence and the only thing that is super is the outcome for humans. Philosopher Nick Bostrom has a new book, and it’s finally acknowledging the potential of a harmonious human-AI relationship and its problem-solving capabilities. AI = augmented intelligence #design #ai #problemsolving #innovation #creativeai

May 11, 2024

AI Ethics Surpass Human Judgment in New Moral Turing Test

Posted by in categories: ethics, law, robotics/AI, transportation

A recent study revealed that when individuals are given two solutions to a moral dilemma, the majority tend to prefer the answer provided by artificial intelligence (AI) over that given by another human.

The recent study, which was conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs) which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations, and that they’re not necessarily operating in the way we think when we’re interacting with them.”

May 11, 2024

AlphaFold 3 offers even more accurate protein structure prediction

Posted by in categories: biotech/medical, robotics/AI

AlphaFold 3 by DeepMind accurately predicts protein structures and interactions, transforming drug discovery and structural biology.

May 11, 2024

Bumble Founder Says Future of Dating Is Your AI Will Date Other People’s AIs and Hook You Up With the Best Matches

Posted by in categories: futurism, robotics/AI

Whitney Wolfe Herd, founder of the dating app Bumble, believes the future of dating will involve having your personal AI “dating concierge” talk to hundreds of other AIs to find a match.

That unabashed vision may sound familiar: it’s literally the plot of a 2017 episode of “Black Mirror,” as countless people on social media have pointed out.

“You could, in the near future, be talking to your AI dating concierge,” Wolfe Herd, who stepped down as Bumble CEO in 2023 but remains involved in the company, told an audience at the Bloomberg Technology Summit on Thursday. “You could share your insecurities. There is a world where your dating concierge could go and date for you with other dating concierges.”

May 11, 2024

$100b Slaughterbots. Godfather of AI shows how AI will kill us, how to avoid it

Posted by in categories: Elon Musk, existential risks, robotics/AI

New Atlas robot from Boston Dynamics and Figure 1 from OpenAI, leaked $100b OpenAI plan and a new project to avoid our extinction.
Sam Altman, Elon Musk, Geoffrey Hinton, Sora.

To support us and learn more about the project, please visit: / digitalengine.


May 11, 2024

Brain-Inspired Computer Approaches Brain-Like Size

Posted by in categories: robotics/AI, supercomputing

Human Brain as Supercomputer

Brain-emulating computers hold the promise of vastly lower energy computation and better performance on certain tasks. “The human brain is the most advanced supercomputer in the universe, and it consumes only 20 watts to achieve things that artificial intelligence systems today only dream of,” says Hector Gonzalez, cofounder and co-CEO of SpiNNcloud Systems. “We’re basically trying to bridge the gap between brain inspiration and artificial systems.”

Aside from sheer size, a distinguishing feature of the SpiNNaker2 system is its flexibility. Traditionally, most neuromorphic computers emulate the brain’s spiking nature: neurons fire off electrical spikes to communicate with the neurons around them. The actual mechanism of these spikes in the brain is quite complex, and neuromorphic hardware often implements a specific simplified model. SpiNNaker2, however, can implement a broad range of such models, as they are not hardwired into its architecture.
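
For concreteness, here is a toy leaky integrate-and-fire (LIF) neuron, one of the simplified spiking models such hardware commonly supports; on SpiNNaker2 a model like this runs as software on its cores rather than being fixed in silicon. Parameter values are illustrative, not taken from the system:

```python
def lif_run(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential leaks toward zero with time constant tau and
    integrates the input current; crossing v_thresh emits a spike and
    resets the potential. Returns the voltage trace and spike times.
    """
    v, spikes, trace = 0.0, [], []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky integration step
        if v >= v_thresh:             # threshold crossing -> spike
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return trace, spikes

# Constant drive of 0.2 for 50 time steps produces regular spiking.
trace, spikes = lif_run([0.2] * 50)
print(spikes)
```

Swapping in a different update rule (e.g. adaptive thresholds or multi-compartment dynamics) is just a code change, which is the flexibility the article attributes to running models in software rather than hardwiring one of them.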

Page 207 of 2,432