Archive for the ‘information science’ category: Page 5

Oct 10, 2024

Overcoming ‘catastrophic forgetting’: Algorithm inspired by brain allows neural networks to retain knowledge

Posted in categories: biological, information science, robotics/AI, transportation

Neural networks have a remarkable ability to learn specific tasks, such as identifying handwritten digits. However, these models often experience “catastrophic forgetting” when taught additional tasks: They can successfully learn the new assignments, but “forget” how to complete the original. For many artificial neural networks, like those that guide self-driving cars, learning additional tasks thus requires being fully reprogrammed.

Biological brains, on the other hand, are remarkably flexible. Humans and animals can easily learn how to play a new game, for instance, without having to re-learn how to walk and talk.

Inspired by the flexibility of human and animal brains, Caltech researchers have now developed a new type of algorithm that enables neural networks to be continuously updated with new data, learning from it without having to start from scratch. The algorithm, called a functionally invariant path (FIP) algorithm, has wide-ranging applications, from improving recommendations in online stores to fine-tuning self-driving cars.
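
The excerpt does not describe the FIP algorithm's internals, but the continual-learning problem it targets is commonly attacked by penalizing changes to weights that mattered for earlier tasks. A minimal sketch of one standard such technique, elastic weight consolidation (EWC), shown only to illustrate the general idea and not the FIP method itself:

```python
import numpy as np

def ewc_loss(new_task_loss, params, old_params, fisher, lam=100.0):
    """New-task loss plus a quadratic penalty that anchors each weight
    in proportion to how important it was for the old task (its Fisher
    information). Large `fisher` entries resist change; zero entries
    are free to move, so old tasks are not "forgotten" wholesale."""
    penalty = 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)
    return new_task_loss + penalty

# Weights marked important to the old task (fisher=1) are penalized
# for moving; unimportant ones (fisher=0) are not.
old = np.array([1.0, -2.0])
new = np.array([1.5, -2.0])
free = ewc_loss(0.1, new, old, fisher=np.array([0.0, 0.0]))
pinned = ewc_loss(0.1, new, old, fisher=np.array([1.0, 0.0]))
```

Here `free` equals the plain new-task loss, while `pinned` adds 0.5 * 100 * 0.25 = 12.5 for moving an important weight.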

Oct 8, 2024

Quantum state tomography with locally purified density operators and local measurements

Posted in categories: information science, quantum physics

Quantum state tomography plays a fundamental role in characterizing and evaluating the quality of quantum states produced by quantum devices. It serves as a crucial element in the advancement of quantum hardware and software, regardless of the underlying physical implementation and potential applications [1–3]. However, reconstructing the full quantum state becomes prohibitively expensive for large-scale quantum systems that exhibit potential quantum advantages [4, 5], as the number of measurements required increases exponentially with system size.

Recent protocols try to solve this challenge through two main steps: efficient parameterization of quantum states, and utilization of carefully designed measurement schemes and classical data-postprocessing algorithms. For one-dimensional (1D) systems with area-law entanglement, the matrix product state (MPS) [6–12] provides a compressed representation. It requires only a polynomial number of parameters, which can be determined from local or global measurement results. Two iterative algorithms using local measurements, singular value thresholding (SVT) [13] and maximum likelihood (ML) [14], have been demonstrated in trapped-ion quantum simulators with up to 14 qubits [15]. However, SVT is limited to pure states and thus impractical for noisy intermediate-scale quantum (NISQ) systems. Meanwhile, although ML can handle mixed states represented as matrix product operators (MPOs) [16, 17], it suffers from inefficient classical data postprocessing.
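
To illustrate why the MPS parameterization is economical (this sketch is unrelated to the SVT or ML reconstruction procedures themselves, which work from measurement data), a small state vector can be factored into a chain of local tensors by successive SVDs and reassembled exactly:

```python
import numpy as np

def state_to_mps(psi, n_qubits, max_bond=None):
    """Factor a length-2^n state vector into MPS tensors of shape
    (left_bond, physical=2, right_bond) via successive SVDs.
    Truncating to `max_bond` gives the polynomial-parameter
    compression for area-law states."""
    tensors = []
    rest = psi.reshape(1, -1)
    for _ in range(n_qubits - 1):
        bond = rest.shape[0]
        u, s, vh = np.linalg.svd(rest.reshape(bond * 2, -1),
                                 full_matrices=False)
        if max_bond is not None:
            u, s, vh = u[:, :max_bond], s[:max_bond], vh[:max_bond]
        tensors.append(u.reshape(bond, 2, -1))
        rest = np.diag(s) @ vh      # carry the remainder to the next site
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the tensor chain back into a dense state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.einsum('apb,bqc->apqc', out, t)
        out = out.reshape(out.shape[0], -1, t.shape[-1])
    return out.reshape(-1)
```

Without truncation the round trip is exact; for weakly entangled states, small `max_bond` values already reproduce the state well.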

Oct 8, 2024

Exploring the frontiers of neuromorphic engineering: A journey into brain-inspired computing

Posted in categories: information science, nanotechnology, neuroscience, robotics/AI

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.
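
Spiking neurons are the basic building block such hardware emulates. A minimal leaky integrate-and-fire neuron, a standard textbook model rather than any particular chip's implementation, integrates its input current, leaks toward rest, and fires whenever a threshold is crossed:

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage v relaxes toward
    the input drive with time constant tau; a spike is emitted and v
    reset whenever v crosses v_thresh. Returns the spike times."""
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (i_in - v)   # leaky integration toward i_in
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant supra-threshold drive produces regular spiking;
# a sub-threshold drive never fires.
spikes = lif_neuron([1.5] * 200)
silent = lif_neuron([0.5] * 200)
```

Because computation happens only when spikes occur, event-driven hardware built from such units can be far more energy-efficient than clocked dense arithmetic.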

Oct 7, 2024

New algorithm could reduce energy requirements of AI systems by up to 95 percent

Posted in categories: information science, robotics/AI

Researchers have developed an algorithm that could dramatically reduce the energy consumption of artificial intelligence systems.

Scientists at BitEnergy AI created a method called “Linear-complexity multiplication” (L-Mul) that replaces complex floating-point multiplications in AI models with simpler integer additions.
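
The exact L-Mul construction is in the BitEnergy AI paper; the family of tricks it belongs to, replacing a mantissa multiply with integer additions, can be illustrated with a classic Mitchell-style approximation. Integer-adding two IEEE-754 bit patterns sums the exponents exactly and the mantissa fractions approximately, landing near the true product. Everything below is illustrative, not the paper's algorithm:

```python
import struct

def f2b(x):
    """IEEE-754 single-precision bit pattern of x as an unsigned int."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def b2f(b):
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

ONE = 0x3F800000  # bit pattern of 1.0; subtracting it removes the extra bias

def approx_mul(x, y):
    """Approximate x*y for positive normal floats with a single integer
    addition (worst-case relative error roughly 11%)."""
    return b2f(f2b(x) + f2b(y) - ONE)

approx_mul(1.5, 2.0)   # exact when one factor is a power of two
approx_mul(1.5, 1.5)   # approximate otherwise (true value 2.25)
```

The appeal is energy: an integer add costs far less silicon energy than a floating-point multiply, which is the effect the L-Mul work quantifies.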

Oct 6, 2024

Researchers Say Quantum Machine Learning, Quantum Optimization Could Enhance The Design And Efficiency of Clinical Trials

Posted in categories: biotech/medical, information science, quantum physics, robotics/AI

Despite the promising findings, the study acknowledges several limitations of quantum computing. One of the primary challenges is hardware noise, which can reduce the accuracy of quantum computations. Although error correction methods are improving, quantum computing has not yet reached the level of fault tolerance needed for widespread commercial use. Additionally, the study notes that while quantum computing has shown promise in PBPK/PD modeling and site selection, further research is needed to fully realize its potential in these areas.

Looking ahead, the study suggests several future directions for research. One of the key areas for improvement is the integration of quantum algorithms with existing clinical trial infrastructure. This will require collaboration between researchers, pharmaceutical companies and regulators to ensure that quantum computing can be effectively applied in real-world clinical settings. Additionally, the study calls for more work on developing quantum algorithms that can handle the inherent variability in biological data, particularly in genomics and personalized medicine.

The research was conducted by a team from several prominent institutions. Hakan Doga, Aritra Bose, and Laxmi Parida are from IBM Research and IBM Quantum. M. Emre Sahin is affiliated with The Hartree Centre, STFC, while Joao Bettencourt-Silva is based at IBM Research, Dublin, Ireland. Anh Pham, Eunyoung Kim, and Alan Andress are from Deloitte Consulting LLP. Sudhir Saxena and Radwa Soliman are from GNQ Insilico Inc. Jan Lukas Robertus is affiliated with Imperial College London and Royal Brompton and Harefield Hospitals, and Hideaki Kawaguchi is from Keio University. Finally, Daniel Blankenberg is from the Lerner Research Institute, Cleveland Clinic.

Oct 5, 2024

Numerical simulation of deformable droplets in three-dimensional, complex-shaped microchannels

Posted in categories: computing, information science, physics

The physics of drop motion in microchannels is fundamental to providing insights when designing applications of drop-based microfluidics. In this paper, we develop a boundary-integral method to simulate the motion of drops in microchannels of finite depth with flat walls and fixed depth but otherwise arbitrary geometries. To reduce computational time, we use a moving frame that follows the droplet throughout its motion. We provide a full description of the method, including our channel-meshing algorithm, which is a combination of Monte Carlo techniques and Delaunay triangulation, and compare our results to infinite-depth simulations. For regular geometries of uniform cross section, the infinite-depth limit is approached slowly with increasing depth, though we show much faster convergence by scaling with maximum rather than average velocities. For non-regular channel geometries, features such as different branch heights can affect drop partitioning, breaking the symmetric behavior usually observed in regular geometries. Moreover, non-regular geometries also present challenges when comparing the results for deep and infinite-depth channels. To probe inertial effects on drop motion, the full Navier–Stokes equations are first solved for the entire channel, and the tabulated solution is then used as a boundary condition at the moving-frame surface for the Stokes flow inside the moving frame. For moderate Reynolds numbers up to Re = 5, inertial effects on the undisturbed flow are small even for the more complex geometries, suggesting that inertial contributions to drop motion in this range are likely small as well. This work provides an important tool for the design and analysis of three-dimensional droplet-based microfluidic devices.

Oct 4, 2024

AI can reduce a 100,000-equation quantum problem to just 4 equations

Posted in categories: information science, quantum physics, robotics/AI

The Hubbard model is a widely studied model in condensed matter theory and a formidable quantum problem. A team of physicists used deep learning to condense this problem, which previously required 100,000 equations, into just four equations without sacrificing accuracy. The study, titled “Deep Learning the Functional Renormalization Group,” was published on September 21 in Physical Review Letters.

Dominique Di Sante is the lead author of this study. Since 2021, he has held the position of Assistant Professor (tenure track) in the Department of Physics and Astronomy at the University of Bologna. At the same time, he is a Visiting Professor at the Center for Computational Quantum Physics (CCQ) at the Flatiron Institute in New York, as part of a Marie Sklodowska-Curie Actions (MSCA) grant that encourages, among other things, the mobility of researchers.

He and colleagues at the Flatiron Institute and other international researchers conducted the study, which has the potential to revolutionize the way scientists study systems containing many interacting electrons. In addition, if they can adapt the method to other problems, the approach could help design materials with desirable properties, such as superconductivity, or contribute to clean energy production.

Oct 3, 2024

How Big Data is Saving Earth from Asteroids: A Cosmic Shield

Posted in categories: information science, robotics/AI, space

As technology advances, Big Data will play an increasingly important role in protecting Earth from asteroids. By harnessing the power of data analytics, AI, and machine learning, scientists can monitor and predict asteroid movements with greater accuracy than ever before. This enables us to develop early warning systems and potentially deflect asteroids before they can cause harm.

Oct 3, 2024

AI Innovations in Diagnosing Myopic Maculopathy

Posted in categories: biotech/medical, information science, robotics/AI

What methods can be developed to help identify symptoms of myopia and its more serious version, myopic maculopathy? This is what a recent study published in JAMA Ophthalmology hopes to address as an international team of researchers investigated how artificial intelligence (AI) algorithms can be used to identify early signs of myopic maculopathy, as left untreated it can lead to irreversible damage to a person’s eyes. This study holds the potential to help researchers develop more effective options for identifying this worldwide disease, as it is estimated that approximately 50 percent of the global population will suffer from myopia by 2050.

“AI is ushering in a revolution that leverages global knowledge to improve diagnostic accuracy, especially in the earliest stages of the disease,” said Dr. Yalin Wang, a professor in the School of Computing and Augmented Intelligence at Arizona State University and a co-author on the study. “These advancements will reduce medical costs and improve the quality of life for entire societies.”

For the study, the researchers used a novel AI algorithm known as NN-MobileNet to scan retinal images and classify the severity of myopic maculopathy, which currently has five levels of severity in the medical field. The team then used deep neural networks to determine what’s known as the spherical equivalent, which is how eye doctors prescribe glasses and contacts to their patients. Combining these two methods enabled researchers to create a new AI algorithm capable of identifying early signs of myopic maculopathy.
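
NN-MobileNet's internals are not given in the excerpt. Purely as a structural sketch (all layer sizes below are hypothetical), the combination described, one shared image backbone feeding both a five-grade severity classifier and a spherical-equivalent regressor, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes standing in for the real backbone's feature maps.
n_in, n_hidden = 128, 64
W_shared = rng.normal(size=(n_in, n_hidden)) * 0.1
W_severity = rng.normal(size=(n_hidden, 5)) * 0.1   # 5 maculopathy grades
W_se = rng.normal(size=(n_hidden, 1)) * 0.1         # spherical equivalent

def forward(image_features):
    """One shared representation feeds two heads: a softmax over the
    five severity grades and a scalar refraction estimate in diopters."""
    h = np.maximum(image_features @ W_shared, 0.0)   # shared ReLU layer
    logits = h @ W_severity
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    spherical_equivalent = float(h @ W_se)
    return probs, spherical_equivalent

probs, se = forward(rng.normal(size=n_in))
```

Sharing the backbone is the usual design choice here: both tasks depend on the same retinal features, so the classification and regression signals regularize each other.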
