Archive for the ‘information science’ category: Page 216

Feb 23, 2020

RAFT 2035: Roadmap to Abundance, Flourishing, and Transcendence, by 2035, by David Wood

Posted in categories: biotech/medical, drones, information science, nanotechnology, robotics/AI

I’ve been reading an excellent book by David Wood, entitled RAFT 2035, which was recommended by my pal Steele Hawes. I’ve come to an excellent segment of the book, which I’ll quote now.

“One particular challenge that international trustable monitoring needs to address is the risk of ever more powerful weapon systems being placed under autonomous control by AI systems. New weapons systems, such as swarms of miniature drones, increasingly change their configuration at speeds faster than human reactions can follow. This will lead to increased pressures to transfer control of these systems, at critical moments, from human overseers to AI algorithms. Each individual step along the journey from total human oversight to minimal human oversight might be justified, on grounds of a balance of risk and reward. However, that series of individual decisions adds up to an overall change that is highly dangerous, given the potential for unforeseen defects or design flaws in the AI algorithms being used.”


The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities: enormous opportunities and enormous risks.

Feb 23, 2020

AI Just Discovered a New Antibiotic to Kill the World’s Nastiest Bacteria

Posted in categories: biotech/medical, information science, robotics/AI

An AI algorithm found an antibiotic that wipes out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria in the world.

Feb 21, 2020

Solving a Higgs optimization problem with quantum annealing for machine learning

Posted in categories: information science, particle physics, quantum physics, robotics/AI

A machine learning algorithm implemented on a quantum annealer—a D-Wave machine with 1,098 superconducting qubits—is used to identify Higgs-boson decays from background standard-model processes.
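
To give a feel for how a classification task becomes an annealing problem: in annealer-based approaches of this kind, the choice of which weak classifiers to combine into a strong one can be encoded as a QUBO (quadratic unconstrained binary optimization), the problem class a D-Wave machine minimizes. The sketch below is a toy version in that spirit; the synthetic weak classifiers, the sparsity penalty, and the brute-force solver are stand-ins for the real ensemble and the annealing hardware.

```python
# Toy QUBO formulation in the spirit of quantum-annealing ML (QAML):
# the binary choice of which weak classifiers to include in an ensemble
# becomes the variable the annealer optimizes. All names and data here
# are illustrative; a brute-force search stands in for the D-Wave.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_events, n_weak = 200, 6
labels = rng.choice([-1.0, 1.0], size=n_events)            # signal vs. background
# Toy weak classifiers: noisy votes correlated with the true label.
weak_outputs = np.sign(labels[:, None] + rng.normal(0, 1.5, (n_events, n_weak)))

C = weak_outputs.T @ weak_outputs / n_events               # classifier-classifier correlations
h = -2.0 * (weak_outputs.T @ labels) / n_events            # reward for agreeing with labels
lam = 0.1                                                  # sparsity penalty (assumed value)

def energy(s):
    """QUBO energy of a binary inclusion vector s in {0,1}^n_weak."""
    return s @ C @ s + (h + lam) @ s

# An annealer samples low-energy configurations; for 6 variables we can
# simply enumerate all 2^6 of them.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n_weak)),
           key=energy)
print("selected weak classifiers:", np.nonzero(best)[0])
```

On real hardware, the matrix of pairwise coefficients is mapped onto qubit couplings and the annealer samples low-energy selections directly, which is where the 1,098 qubits come in.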

Feb 20, 2020

Mixed-signal hardware security thwarts powerful electromagnetic attacks

Posted in categories: encryption, information science, internet, security

Security of embedded devices is essential in today’s internet-connected world. Security is typically guaranteed mathematically using a small secret key to encrypt the private messages.

When these computationally secure encryption algorithms are implemented on physical hardware, they leak critical side-channel information in the form of power consumption or electromagnetic radiation. Now, Purdue University innovators have developed technology to kill the problem at the source itself—tackling physical-layer vulnerabilities with physical-layer solutions.
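
To make the leakage concrete, here is a toy correlation power analysis, the classic statistical attack on such implementations: when power draw tracks the Hamming weight of an intermediate value, correlating measured traces against predictions for each key guess reveals the key. The 4-bit S-box round, noise level, and trace count below are illustrative assumptions, not details of the Purdue work.

```python
# Toy correlation power analysis (CPA): if measured power tracks the
# Hamming weight of S-box outputs, correlating traces with predictions
# for each key guess recovers the key. 4-bit toy cipher, simulated noise.
import numpy as np

rng = np.random.default_rng(1)
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # PRESENT cipher S-box

def hamming_weight(x):
    return np.vectorize(lambda v: bin(int(v)).count("1"))(x)

secret_key = 0xA
plaintexts = rng.integers(0, 16, size=3000)
# Simulated traces: leakage proportional to HW of the S-box output, plus noise.
traces = hamming_weight(SBOX[plaintexts ^ secret_key]) + rng.normal(0, 1.0, 3000)

# For each key guess, predict the leakage and correlate with the traces;
# the correct guess produces by far the strongest correlation.
correlations = [np.corrcoef(hamming_weight(SBOX[plaintexts ^ guess]), traces)[0, 1]
                for guess in range(16)]
print(f"recovered key nibble: 0x{int(np.argmax(correlations)):X}")  # 0xA
```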

Feb 20, 2020

New artificial intelligence algorithm better predicts corn yield

Posted in categories: food, information science, robotics/AI

With some reports predicting the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need to develop sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to efficiently and accurately process precision ag data.

Feb 17, 2020

Researchers devise approach to reduce biases in computer vision data sets

Posted in categories: information science, robotics/AI

Addressing problems of bias in artificial intelligence, computer scientists from Princeton and Stanford University have developed methods to obtain fairer data sets containing images of people. The researchers propose improvements to ImageNet, a database of more than 14 million images that has played a key role in advancing computer vision over the past decade.

ImageNet, which includes images of objects and landscapes as well as people, serves as a source of training data for researchers creating machine learning algorithms that classify images or recognize elements within them. ImageNet’s unprecedented scale necessitated automated image collection and crowdsourced image annotation. While the database’s person categories have rarely been used by the research community, the ImageNet team has been working to address biases and other concerns about images featuring people that are unintended consequences of ImageNet’s construction.

“Computer vision now works really well, which means it’s being deployed all over the place in all kinds of contexts,” said co-author Olga Russakovsky, an assistant professor of computer science at Princeton. “This means that now is the time for talking about what kind of impact it’s having on the world and thinking about these kinds of fairness issues.”

Feb 15, 2020

How to Make a Consciousness Meter

Posted in categories: information science, neuroscience

Yes, you can detect another person’s consciousness. Christof Koch described a method called ‘zap and zip’: transcranial magnetic stimulation is the ‘zap’, and the evoked brain activity, recorded with an EEG and analyzed with a data-compression algorithm, is the ‘zip’. From the compressed response, the value of the perturbational complexity index (PCI) is calculated: if the PCI is above 0.31 you are conscious; if it is below 0.31 you are unconscious. If the link does not work, go to the library and look at the November 2017 issue of Scientific American; it is the cover story.
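
As a rough illustration of the ‘zip’ step, the sketch below binarizes a toy EEG response and reports a normalized Lempel-Ziv complexity in the spirit of the PCI. The synthetic data, the binarization threshold, and this particular normalization are assumptions; the published PCI pipeline (Casali et al., 2013) is considerably more involved.

```python
# A crude cousin of the PCI: binarize the EEG response, compress it with
# a Lempel-Ziv parsing, and normalize. Data and thresholds are toys.
import numpy as np

def lempel_ziv_complexity(bits):
    """Number of phrases in a simple Lempel-Ziv dictionary parsing."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def pci_like_index(eeg):
    """Normalized LZ complexity of a binarized (channels x samples) response."""
    active = np.abs(eeg) > np.abs(eeg).mean()          # crude binarization
    bits = "".join(active.astype(int).astype(str).ravel())
    n, p = len(bits), bits.count("1") / len(bits)
    entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return lempel_ziv_complexity(bits) * np.log2(n) / (n * entropy)

rng = np.random.default_rng(2)
rich = rng.normal(size=(8, 500))                     # channels vary independently
stereotyped = np.tile(rng.normal(size=500), (8, 1))  # all channels identical
print(f"rich response:        {pci_like_index(rich):.2f}")
print(f"stereotyped response: {pci_like_index(stereotyped):.2f}")
```

A stereotyped, highly repetitive response compresses well and scores low, while a richer and less predictable response scores high; that is the intuition behind the 0.31 threshold.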


Zapping the brain with magnetic pulses while measuring its electrical activity is proving to be a reliable way to detect consciousness.

Feb 14, 2020

Study unveils security vulnerabilities in EEG-based brain-computer interfaces

Posted in categories: information science, robotics/AI, security

Brain-computer interfaces (BCIs) are tools that can connect the human brain with an electronic device, typically using electroencephalography (EEG). In recent years, advances in machine learning (ML) have enabled the development of more advanced BCI spellers, devices that allow people to communicate with computers using their thoughts.

So far, most studies in this area have focused on developing BCI classifiers that are faster and more reliable, rather than investigating their possible vulnerabilities. Recent research, however, suggests that algorithms can sometimes be fooled by attackers, whether they are used in computer vision, speech recognition, or other domains. This is often done using adversarial perturbations: tiny, carefully crafted changes to the input data that are imperceptible to humans.

Researchers at Huazhong University of Science and Technology have recently carried out a study investigating the security of EEG-based BCI spellers, and more specifically, how they are affected by adversarial perturbations. Their paper, pre-published on arXiv, suggests that BCI spellers are fooled by these perturbations and are thus highly vulnerable to adversarial attacks.
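
To see how little perturbation such an attack needs, here is a hedged, FGSM-style sketch against a toy linear EEG classifier; the model weights and synthetic features are stand-ins, not the BCI-speller models from the study.

```python
# FGSM-style attack on a toy linear EEG classifier: nudge every feature
# by the smallest step (against the gradient's sign) that flips the
# decision. Model weights and features are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
w, b = rng.normal(size=64), 0.1     # pretend these came from training
x = rng.normal(size=64)             # one EEG feature vector (e.g., a P300 epoch)

logit = x @ w + b                   # >0: "target letter", <0: "non-target"

# Step size just past the decision boundary; sign(w) is the gradient's
# sign for a linear model, so this is the cheapest direction per feature.
eps = 1.1 * abs(logit) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(logit)

print(f"clean logit: {logit:+.2f} -> adversarial logit: {x_adv @ w + b:+.2f}")
print(f"per-feature change: {eps:.3f} (features are unit-scale)")
```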

Feb 13, 2020

These bionic shorts help turn an epic hike into a leisurely stroll

Posted in categories: cyborgs, information science, robotics/AI, transhumanism, wearables

Forget the Thighmaster. Someday you might add a spring to your step when walking or running using a pair of mechanically powered shorts.

Step up: The lightweight exoskeleton-pants were developed by researchers at Harvard University and the University of Nebraska, Omaha. They are the first device to assist with both walking and running, using an algorithm that adapts to each gait.

Making strides: The super-shorts show how wearable exoskeleton technology might someday help us perform all sorts of tasks. Progress in materials, actuators, and machine learning has led to a new generation of lighter, more powerful, and more adaptive wearable systems. Bulkier and heavier commercial systems are already used to help people with disabilities and workers in some factories and warehouses.
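
The post does not spell out the adaptation algorithm, so the following is purely illustrative: one plausible shape for such a controller is to classify the current gait from simple motion features and switch assistance profiles accordingly. The features, thresholds, and profile names below are all invented for the sketch.

```python
# Illustrative only: switch assistance profiles from two cheap features
# of vertical IMU acceleration. Thresholds (15 m/s^2, 2.5 Hz) are invented.
import numpy as np

def classify_gait(accel_z, fs=100):
    """Label a window of vertical acceleration (m/s^2) as 'walk' or 'run'."""
    peak = np.percentile(np.abs(accel_z), 95)          # impact intensity
    spectrum = np.abs(np.fft.rfft(accel_z - accel_z.mean()))
    dominant = np.fft.rfftfreq(len(accel_z), d=1 / fs)[np.argmax(spectrum)]
    return "run" if peak > 15.0 or dominant > 2.5 else "walk"

ASSIST = {"walk": "walking assistance profile", "run": "running assistance profile"}

t = np.arange(0, 2, 0.01)
gentle = 3 * np.sin(2 * np.pi * 1.8 * t)     # ~1.8 Hz strides, soft peaks
hard = 20 * np.sin(2 * np.pi * 2.8 * t)      # ~2.8 Hz strides, hard impacts
for window in (gentle, hard):
    gait = classify_gait(window)
    print(gait, "->", ASSIST[gait])
```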

Feb 10, 2020

Particle Tracking at CERN with Machine Learning

Posted in categories: information science, nuclear energy, particle physics, robotics/AI

TrackML was a 2018 Kaggle competition with $25,000 in cash prizes, where the challenge was to reconstruct particle tracks from 3D points left in silicon detectors. CERN (the European Organization for Nuclear Research) provided data on particle collision events. Collisions occur at a rate of hundreds of millions per second, generating tens of petabytes of data per year, so there is a clear need to be as efficient as possible when sifting through it all, and this is where machine learning methods may help.

Particles, in this case protons, are boosted to high energies inside the Large Hadron Collider (LHC): each beam can reach 6.5 TeV, giving a total of 13 TeV when the beams collide. Electromagnetic fields accelerate the electrically charged protons around a 27-kilometer loop. When the proton beams collide they produce a diverse set of subatomic byproducts which quickly decay, and which hold valuable information for some of the most fundamental questions in physics.

Detectors are made of layers upon layers of subdetectors, each designed to look for specific particles or properties. There are calorimeters that measure energy, particle-identification detectors to pin down what kind of particle passed through, and tracking devices to calculate its path. [1] We are of course interested in the tracking devices: tiny electrical signals are recorded as particles move through them. What I will discuss are methods to reconstruct tracks from these recorded signals, specifically algorithms involving machine learning.
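
One simple framing, popular among TrackML entries, treats the hits left by one particle as a cluster in a suitable feature space. The sketch below uses toy straight-line tracks through the origin and DBSCAN; real LHC tracks are helices bent by the magnetic field, and competitive pipelines are far more elaborate.

```python
# Tracks-as-clusters sketch: hits from one particle lie on one trajectory,
# so after a suitable transform they bunch together and DBSCAN can group
# them. Straight-line toy tracks stand in for real helical LHC tracks.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
hits, truth = [], []
for track_id in range(5):                       # five particles from the origin
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    for r in np.linspace(1.0, 10.0, 12):        # hits on 12 detector layers
        hits.append(r * direction + rng.normal(0, 0.01, 3))  # measurement smearing
        truth.append(track_id)
hits = np.array(hits)

# Transform: points on a line through the origin share a direction, so
# projecting hits onto the unit sphere collapses each track to one spot.
features = hits / np.linalg.norm(hits, axis=1, keepdims=True)
labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(features)

print("true tracks:", len(set(truth)),
      "| reconstructed candidates:", len(set(labels) - {-1}))
```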