Archive for the ‘information science’ category: Page 153

Dec 14, 2021

Artificial intelligence can create better lightning forecasts

Posted in categories: climatology, information science, robotics/AI

Lightning is one of the most destructive forces of nature, as in 2020 when it sparked the massive California Lightning Complex fires, but it remains hard to predict. A new study led by the University of Washington shows that machine learning—computer algorithms that improve themselves without direct programming by humans—can be used to improve lightning forecasts.

Better lightning forecasts could help to prepare for potential wildfires, improve safety warnings for lightning and create more accurate long-range climate models.

“The best subjects for machine learning are things that we don’t fully understand. And what is something in the atmospheric sciences field that remains poorly understood? Lightning,” said Daehyun Kim, a UW associate professor of atmospheric sciences. “To our knowledge, our work is the first to demonstrate that machine learning algorithms can work for lightning.”
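
The study itself ships no code here, but the basic recipe, training a model on past atmospheric conditions labeled with whether lightning occurred, is easy to sketch. Below is a minimal Python example using scikit-learn; the two predictors, their distributions, and the model choice are illustrative assumptions rather than the UW team's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical predictors: convective available potential energy (J/kg)
# and precipitation rate (mm/hr). Real studies use many more fields.
cape = rng.gamma(shape=2.0, scale=500.0, size=n)
precip = rng.gamma(shape=1.5, scale=2.0, size=n)

# Toy ground truth: lightning becomes likely when both predictors are high.
prob = 1.0 / (1.0 + np.exp(-(0.002 * cape + 0.5 * precip - 4.0)))
lightning = rng.random(n) < prob

X = np.column_stack([cape, precip])
X_tr, X_te, y_tr, y_te = train_test_split(X, lightning, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```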

Dec 12, 2021

Department of Energy Announces $5.7 Million for Research on Artificial Intelligence and Machine Learning (AI/ML) for Nuclear Physics Accelerators and Detectors

Posted in categories: information science, particle physics, robotics/AI

WASHINGTON, D.C. — Today, the U.S. Department of Energy (DOE) announced $5.7 million for six projects that will implement artificial intelligence methods to accelerate scientific discovery in nuclear physics research. The projects aim to optimize the overall performance of complex accelerator and detector systems for nuclear physics using advanced computational methods.

“Artificial intelligence has the potential to shorten the timeline for experimental discovery in nuclear physics,” said Timothy Hallman, DOE Associate Director of Science for Nuclear Physics. “Particle accelerator facilities and nuclear physics instrumentation face a variety of technical challenges in simulations, control, data acquisition, and analysis that artificial intelligence holds promise to address.”

The six projects will be conducted by nuclear physics researchers at five DOE national laboratories and four universities. Projects will include the development of deep learning algorithms to identify a unique signal for a conjectured, very slow nuclear process known as neutrinoless double beta decay. This decay, if observed, would be at least ten thousand times rarer than the rarest known nuclear decay and could demonstrate how our universe became dominated by matter rather than antimatter. Supported efforts also include AI-driven detector design for the Electron-Ion Collider accelerator project under construction at Brookhaven National Laboratory, which will probe the internal structure and forces of the protons and neutrons that compose the atomic nucleus.
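
As a rough picture of what such a deep learning classifier does, the sketch below separates synthetic signal-like events (energy tightly clustered near a decay Q-value, single-site topology) from broader background events. Every feature, number, and network setting here is an invented stand-in for the far richer detector data the funded projects will work with.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 4000

# Hypothetical per-event features: reconstructed energy (MeV) and an
# event-shape variable. Signal peaks sharply at the decay Q-value and is
# single-site; background is broader and tends to be multi-site.
q_value = 2.458  # MeV, illustrative
sig = np.column_stack([rng.normal(q_value, 0.01, n // 2),
                       rng.normal(1.0, 0.1, n // 2)])
bkg = np.column_stack([rng.normal(q_value, 0.15, n // 2),
                       rng.normal(1.5, 0.3, n // 2)])

X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=1)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```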

Dec 12, 2021

3 Areas Where AI Will Boost Your Competitive Advantage

Posted in categories: information science, robotics/AI

Algorithms are now essential for making predictions, boosting efficiency, and optimizing in real time.

Dec 11, 2021

Why AI is the future of fraud detection

Posted in categories: information science, robotics/AI, security

The accelerated growth in ecommerce and online marketplaces has led to a surge in fraudulent behavior online, perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection is needed to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

Can AI find what’s hiding in your data? In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform Trustpilot. Pasabi maintains its AI models based on anonymised sets of data collected from multiple sources.
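
Pasabi’s platform is proprietary, but a common building block in behavioral fraud detection is unsupervised anomaly detection: flag the accounts whose activity patterns look unlike everyone else’s. The sketch below uses scikit-learn’s IsolationForest on invented behavioral features; it illustrates the general technique, not Pasabi’s system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Invented behavioral features per account: actions per day and mean
# seconds between actions. Bots act in rapid, high-volume bursts.
humans = np.column_stack([rng.poisson(2, 950), rng.normal(3600, 600, 950)])
bots = np.column_stack([rng.poisson(40, 50), rng.normal(5.0, 2.0, 50)])
X = np.vstack([humans, bots]).astype(float)

detector = IsolationForest(contamination=0.05, random_state=2)
labels = detector.fit_predict(X)  # -1 = flagged as anomalous, 1 = normal
print("accounts flagged for review:", int((labels == -1).sum()))
```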

Dec 11, 2021

Machine learning speeds up vehicle routing

Posted in categories: information science, mathematics, robotics/AI, transportation

Strategy accelerates the best algorithmic solvers for large sets of cities.

Waiting for a holiday package to be delivered? There’s a tricky math problem that needs to be solved before the delivery truck pulls up to your door, and MIT researchers have a strategy that could speed up the solution.

The approach applies to vehicle routing problems such as last-mile delivery, where the goal is to deliver goods from a central depot to multiple cities while keeping travel costs down. While there are algorithms designed to solve this problem for a few hundred cities, these solutions become too slow when applied to a larger set of cities.
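
To make the problem concrete, here is a minimal nearest-neighbor heuristic for the single-vehicle case, which is essentially a traveling salesman tour. It is fast even for large city sets but usually lands well above the optimal cost, which is why practical solvers add costly improvement phases; the MIT strategy targets making solvers of that heavier kind faster at scale. All coordinates below are random illustrative data.

```python
import math
import random

def nearest_neighbor_route(depot, cities):
    """Greedy construction: always visit the closest unvisited city next.

    O(n^2) and fast even for thousands of cities, but the tours it builds
    are typically well above optimal, which is why serious solvers add
    expensive improvement steps on top."""
    route, current, remaining = [depot], depot, list(cities)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # the vehicle returns to the depot
    return route

random.seed(0)
depot = (0.0, 0.0)
cities = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(200)]
route = nearest_neighbor_route(depot, cities)
print("tour length:", round(sum(math.dist(a, b) for a, b in zip(route, route[1:])), 1))
```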

Dec 11, 2021

The US is worried that hackers are stealing data today so quantum computers can crack it in a decade

Posted in categories: computing, encryption, government, information science, quantum physics

While they wrestle with the immediate danger posed by hackers today, US government officials are preparing for another, longer-term threat: attackers who are collecting sensitive, encrypted data now in the hope that they’ll be able to unlock it at some point in the future.

The threat comes from quantum computers, which work very differently from the classical computers we use today. Instead of the traditional bits made of 1s and 0s, they use quantum bits that can represent different values at the same time. The complexity of quantum computers could make them much faster at certain tasks, allowing them to solve problems that remain practically impossible for modern machines—including breaking many of the encryption algorithms currently used to protect sensitive data such as personal, trade, and state secrets.
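
The canonical worry is RSA, whose security rests on the difficulty of factoring large numbers, exactly the kind of problem a large quantum computer running Shor’s algorithm could solve efficiently. The toy sketch below makes the connection explicit: factor the modulus and the private key falls out. It only works because the modulus is tiny; real 2048-bit moduli are far beyond classical trial division.

```python
def trial_factor(n):
    """Factor n by trial division: instant for a toy modulus,
    hopeless for a real 2048-bit one."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None

# Toy key: real RSA primes are hundreds of digits long.
p, q = 10007, 10009
n, e = p * q, 65537

pf, qf = trial_factor(n)
phi = (pf - 1) * (qf - 1)
d = pow(e, -1, phi)  # once n is factored, the private exponent falls out

msg = 42
assert pow(pow(msg, e, n), d, n) == msg  # encrypt, then decrypt with the cracked key
print("recovered factors:", pf, qf)
```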

While quantum computers are still in their infancy, incredibly expensive and fraught with problems, officials say efforts to protect the country from this long-term danger need to begin right now.

Dec 11, 2021

Robot artist to perform AI generated poetry in response to Dante

Posted in categories: information science, robotics/AI

Dante’s Divine Comedy has inspired countless artists, from William Blake to Franz Liszt, and from Auguste Rodin to CS Lewis. But an exhibition marking the 700th anniversary of the Italian poet’s death will be showcasing the work of a rather more modern devotee: Ai-Da the robot, which will make history by becoming the first robot to publicly perform poetry written by its AI algorithms.

The ultra-realistic Ai-Da, who was devised in Oxford by Aidan Meller and named after computing pioneer Ada Lovelace, was given the whole of Dante’s epic three-part narrative poem, the Divine Comedy, to read in JG Nichols’ English translation. She then used her algorithms, drawing on her data bank of words and speech pattern analysis, to produce her own work in response to Dante’s.

Dec 10, 2021

Neural network analyzes gravitational waves in real time

Posted in categories: cosmology, information science, physics, robotics/AI

Black holes are one of the greatest mysteries of the universe—for example, a black hole with the mass of our sun has a radius of only 3 kilometers. Black holes in orbit around each other emit gravitational radiation—oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually, the black holes merge in a final burst of radiation. These gravitational waves propagate through the universe at the speed of light, and are detected by observatories in the U.S. (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.

An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results are published today in Physical Review Letters.

“Our method can make very accurate statements in a few seconds about how big and massive the two black holes were that generated the gravitational waves when they merged. How fast do the black holes rotate, how far away are they from Earth and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study “Real-Time Gravitational Wave Science with Neural Posterior Estimation” and Ph.D. student in the Empirical Inference Department at MPI-IS.
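
The team’s actual system learns a full posterior distribution with normalizing flows, but the core idea, train a network on simulated signals so it maps an observation to a parameter estimate plus an uncertainty, fits in a few lines. The PyTorch toy below (every setting invented) infers the frequency of a noisy sinusoid and reports an error bar, a drastically simplified analogue of reading black-hole parameters off detector strain.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the real pipeline: learn to infer the frequency of a
# noisy sinusoid. The actual method learns a full posterior with
# normalizing flows; this net outputs a single Gaussian instead.
def simulate(batch):
    f = torch.rand(batch, 1) * 4 + 1                   # "parameter": 1-5 Hz
    t = torch.linspace(0, 1, 128)
    clean = torch.sin(2 * torch.pi * f * t)            # (batch, 128)
    return clean + 0.3 * torch.randn(batch, 128), f

net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                               # train on simulations
    x, f = simulate(256)
    mu, log_var = net(x).chunk(2, dim=1)
    # Gaussian negative log-likelihood: the network learns both an
    # estimate and its own uncertainty about that estimate.
    loss = (0.5 * ((f - mu) ** 2 / log_var.exp() + log_var)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

x, f = simulate(1)                                     # an "observed" signal
mu, log_var = net(x).chunk(2, dim=1)
print(f"true {f.item():.2f} Hz, "
      f"inferred {mu.item():.2f} ± {log_var.exp().sqrt().item():.2f} Hz")
```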

Dec 10, 2021

DeepMind Says Its New AI Has Almost the Reading Comprehension of a High Schooler

Posted in categories: information science, robotics/AI

Alphabet’s AI research company DeepMind has released the next generation of its language model, and it says that it has close to the reading comprehension of a high schooler — a startling claim.

It says the language model, called Gopher, was able to significantly improve its reading comprehension by ingesting massive repositories of texts online.

DeepMind boasts that its algorithm, an “ultra-large language model,” has 280 billion parameters, which are a measure of size and complexity. That means it falls somewhere between OpenAI’s GPT-3 (175 billion parameters) and Microsoft and NVIDIA’s Megatron, which features 530 billion parameters, The Verge points out.
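
For intuition about where such counts come from: the weights of a decoder-only transformer are dominated by roughly 12·d² parameters per layer (attention plus feed-forward blocks). The sketch below applies that standard back-of-the-envelope formula to Gopher’s published shape (80 layers, model width 16384); it lands near, though below, the quoted 280 billion because it omits architectural details, and the vocabulary size here is an assumption.

```python
def approx_transformer_params(n_layers, d_model, vocab_size=32_000):
    """Back-of-the-envelope decoder-only transformer size:
    ~4*d^2 attention + ~8*d^2 feed-forward weights per layer,
    plus the token-embedding matrix. Omits biases, norms, etc."""
    return n_layers * 12 * d_model**2 + vocab_size * d_model

# Gopher's published configuration: 80 layers, model width 16384.
# (vocab_size above is an illustrative assumption.)
print(f"~{approx_transformer_params(80, 16384) / 1e9:.0f}B parameters")
```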

Dec 10, 2021

Crucial leap in error mitigation for quantum computers

Posted in categories: computing, information science, quantum physics

Researchers at Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed (AQT) demonstrated that an experimental method known as randomized compiling (RC) can dramatically reduce error rates in quantum algorithms and lead to more accurate and stable quantum computations. No longer just a theoretical concept for quantum computing, RC has now been validated in practice; the multidisciplinary team’s experimental results are published in Physical Review X.

The experiments at AQT were performed on a four-qubit superconducting quantum processor. The researchers demonstrated that RC can suppress one of the most severe types of errors in quantum computers: coherent errors.

Akel Hashim, an AQT researcher involved in the experimental breakthrough and a graduate student at the University of California, Berkeley, explained: “We can perform quantum computations in this era of noisy intermediate-scale quantum (NISQ) computing, but these are very noisy, prone to errors from many different sources, and don’t last very long due to the decoherence—that is, information loss—of our qubits.”
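
The intuition behind randomized compiling fits in a short sketch: wrapping each noisy gate in random Pauli operators turns a systematic over-rotation, which compounds coherently with circuit depth, into noise that averages out and accumulates far more slowly. The numpy toy below uses a single qubit, the identity as the ideal circuit, and a made-up error size; in the real protocol the random Paulis are merged into neighboring gates so the compiled circuit’s logic is unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

# Coherent error model: every gate over-rotates about X by a small,
# systematic angle eps (made-up size for the demo).
eps = 0.05
noisy_gate = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X

def final_fidelity(n_gates, twirl):
    """Ideal circuit is the identity, so fidelity is just |<0|psi>|^2."""
    state = np.array([1, 0], dtype=complex)
    for _ in range(n_gates):
        gate = noisy_gate
        if twirl:
            P = PAULIS[rng.integers(4)]
            gate = P @ gate @ P  # random Pauli twirl; P @ P = I keeps the logic
        state = gate @ state
    return abs(state[0]) ** 2

print("coherent errors:  ", round(final_fidelity(200, twirl=False), 3))
print("twirled (avg 100):", round(float(np.mean(
    [final_fidelity(200, twirl=True) for _ in range(100)])), 3))
```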