Archive for the ‘information science’ category: Page 239
Apr 13, 2019
Environmentalists are Wrong: Nature Isn’t Sacred and We Should Replace It
Posted by Zoltan Istvan in categories: biotech/medical, ethics, food, information science, life extension, robotics/AI, space, sustainability, transhumanism
Environmentalism and climate change are increasingly being pushed on us everywhere, so I wanted to write the transhumanist and life-extension counterargument: why I prefer new technology over nature and sustainability. Here’s my new article:
On a warming planet bearing scars of significant environmental destruction, you’d think one of the 21st century’s most notable emerging social groups—transhumanists—would be concerned. Many are not. Transhumanists first and foremost want to live indefinitely, and they are outraged that their bodies age and are destined to die. They blame their biological nature, and dream of a day when DNA is replaced with silicon and data.
Their enmity toward biology goes beyond their own bodies. They see Mother Earth as a hostile place where every living creature—be it a tree, insect, mammal, or virus—is out for itself. Everything is part of the food chain and subject to natural law: in most cases, consumption by violent death. Life is vicious. It makes me think of pet dogs and cats, and how it’s reported they sometimes start eating their owners after the owners have died.
Continue reading “Environmentalists are Wrong: Nature Isn’t Sacred and We Should Replace It” »
Apr 11, 2019
A New Treatment for Alzheimer’s? It Starts With Lifestyle
Posted by Genevieve Klien in categories: biotech/medical, information science, neuroscience
Armed with big data, researchers turn to customized lifestyle changes to fight the disease.
Apr 10, 2019
New algorithm optimizes quantum computing problem-solving
Posted by Quinn Sena in categories: business, computing, information science, particle physics, quantum physics
Tohoku University researchers have developed an algorithm that enhances the ability of a Canadian-designed quantum computer to more efficiently find the best solution for complicated problems, according to a study published in the journal Scientific Reports.
Quantum computing takes advantage of the ability of subatomic particles to exist in more than one state at the same time. It is expected to take modern-day computing to the next level by enabling the processing of more information in less time.
The D-Wave quantum annealer, developed by a Canadian company that claims it sells the world’s first commercially available quantum computers, employs the concepts of quantum physics to solve ‘combinatorial optimization problems.’ A typical example of this sort of problem asks the question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the original city?” Businesses and industries face a large range of similarly complex problems in which they want to find the optimal solution among many possible ones using the least amount of resources.
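To make the combinatorial blow-up concrete, here is a minimal classical brute-force solver for the travelling-salesman example quoted above (the city names and distances are invented); a quantum annealer such as D-Wave’s instead encodes the same problem as a QUBO and samples low-energy solutions:

```python
from itertools import permutations

# Invented example data for four cities.
cities = ["A", "B", "C", "D"]
dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 3}

def d(x, y):
    """Distance between two cities, regardless of direction."""
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def tour_length(order):
    route = ["A"] + list(order) + ["A"]          # start and finish at city A
    return sum(d(a, b) for a, b in zip(route, route[1:]))

best = min(permutations(cities[1:]), key=tour_length)
print("shortest tour:", ["A"] + list(best) + ["A"], "length:", tour_length(best))
# With n cities there are (n - 1)!/2 distinct tours, which is why exhaustive
# search stops being feasible long before real-world problem sizes.
```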
Continue reading “New algorithm optimizes quantum computing problem-solving” »
Apr 9, 2019
Scientists build a machine to generate quantum superposition of possible futures
Posted by Genevieve Klien in categories: computing, information science, particle physics, quantum physics
In the 2018 movie Avengers: Infinity War, a scene featured Dr. Strange looking into 14 million possible futures to search for a single timeline in which the heroes would be victorious. Perhaps he would have had an easier time with help from a quantum computer. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and Griffith University in Australia have constructed a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.
“When we think about the future, we are confronted by a vast array of possibilities,” explains Assistant Professor Mile Gu of NTU Singapore, who led development of the quantum algorithm that underpins the prototype. “These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.” What he and his research group realised, however, was that a quantum computer can examine all possible futures by placing them in a quantum superposition – similar to Schrödinger’s famous cat, which is simultaneously alive and dead.
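As a quick sanity check on the arithmetic in that quote (an ordinary classical calculation, nothing quantum):

```python
# Two choices per minute means the number of possible futures doubles each minute.
futures_after_24_min = 2 ** 24
print(f"{futures_after_24_min:,}")       # 16,777,216 -- past 14 million in under half an hour

atoms_in_universe = 10 ** 80             # common order-of-magnitude estimate
futures_after_a_day = 2 ** (24 * 60)     # 1,440 doublings in a day
print(futures_after_a_day > atoms_in_universe)   # True, by an enormous margin
```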
To realise this scheme, they joined forces with the experimental group led by Professor Geoff Pryde at Griffith University. Together, the team implemented a specially devised photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons – quantum particles of light. They then demonstrated that the state of the quantum device was a superposition of multiple potential futures, weighted by their probability of occurrence.
Apr 9, 2019
The EU releases guidelines to encourage ethical AI development
Posted by Derick Lee in categories: information science, policy, robotics/AI
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren’t meant to be — or interfere with — policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and figure out how to best implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, it’s likely more governments will take a stand on the ethical concerns AI brings to the table.
The EU wants AI that’s fair and accountable, respects human autonomy and prevents harm.
Apr 8, 2019
AI systems should be accountable, explainable, and unbiased, says EU
Posted by Caycee Dee Neely in categories: governance, information science, robotics/AI, sustainability
Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision that the software makes.
Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
AI technologies should be accountable, explainable, and unbiased, says EU.
Apr 8, 2019
QC — Cracking RSA with Shor’s Algorithm
Posted by Quinn Sena in categories: cybercrime/malcode, encryption, information science
With new advances in technology, it all comes down to simple factoring. Classical factoring is the bottleneck: problems that would take conventional machines on the order of 80 billion years are claimed to be solvable in about two seconds on new hardware such as the D-Wave 2. Shor’s algorithm shows that, with hardware and code strong enough to run it at scale, this kind of encryption can be broken…
RSA is the standard cryptographic algorithm on the Internet. The method is publicly known but extremely hard to crack. It uses two keys for encryption. The public key is open, and the client uses it to encrypt a random session key. Anyone who intercepts the encrypted key must use the second key, the private key, to decrypt it; otherwise, it is just garbage. Once the session key is decrypted, the server uses it to encrypt and decrypt further messages with a faster algorithm. So, as long as we keep the private key safe, the communication will be secure.
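A toy sketch of that handshake in Python, with deliberately tiny primes so the numbers stay readable (real RSA uses 2048-bit keys and padding schemes such as OAEP, which are omitted here):

```python
from math import gcd

p, q = 61, 53                      # toy primes known only to the key owner
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                             # public exponent, chosen coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

session_key = 42                   # random symmetric key picked by the client
ciphertext = pow(session_key, e, n)   # client encrypts with the public key (e, n)
recovered = pow(ciphertext, d, n)     # server decrypts with the private key d
assert recovered == session_key
# Both sides now share the session key and switch to a faster symmetric cipher.
```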
RSA encryption is based on a simple idea: prime factorization. Multiplying two prime numbers is easy, but factoring the result back into its primes is hard. For example, what are the factors of 507,906,452,803? Answer: 566,557 × 896,479.
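The snippet below verifies that factorization by trial division and then sketches the number-theoretic core of Shor’s algorithm on a tiny modulus: factoring reduces to finding the order r of a base a modulo N. Run classically, as here, this stays slow for large numbers; the quantum speed-up comes from finding r with the quantum Fourier transform.

```python
from math import gcd

# 1. Check the article's example by trial division (fine at this size, hopeless
#    for the 600+ digit moduli used by real RSA keys).
N = 507_906_452_803
f = next(k for k in range(2, int(N ** 0.5) + 1) if N % k == 0)
print(f, N // f)                   # 566557 896479, matching the factors above

# 2. Shor's reduction on a tiny modulus: find the multiplicative order of a mod n,
#    then read the factors off gcd(a**(r/2) - 1, n) and gcd(a**(r/2) + 1, n).
def order(a, n):
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7
r = order(a, n)                    # here r = 4, which is even as required
print(gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n))   # 3 5
```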
Continue reading “QC — Cracking RSA with Shor’s Algorithm” »
Apr 5, 2019
Using AI to Make Better AI
Posted by Quinn Sena in categories: information science, robotics/AI, space travel
Next month, however, a team of MIT researchers will be presenting a so-called “Proxyless neural architecture search” algorithm that can speed up the AI-optimized AI design process by 240 times or more. That would put faster and more accurate AI within practical reach for a broad class of image recognition algorithms and other related applications.
“There are all kinds of tradeoffs between model size, inference latency, accuracy, and model capacity,” says Song Han, assistant professor of electrical engineering and computer science at MIT. Han adds that:
“[These] all add up to a giant design space. Previously people had designed neural networks based on heuristics. Neural architecture search tried to free this labor intensive, human heuristic-based exploration [by turning it] into a learning-based, AI-based design space exploration. Just like AI can [learn to] play a Go game, AI can [learn how to] design a neural network.”
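A minimal sketch of what "learning-based design space exploration" means in practice; this is plain random search over an invented toy search space with a placeholder scoring function, not the MIT group's proxyless method:

```python
import random

# Hypothetical search space: depth, width, and kernel size of a small conv net.
SEARCH_SPACE = {"depth": [2, 4, 6, 8], "width": [16, 32, 64], "kernel": [3, 5, 7]}

def sample_architecture():
    """Draw one candidate architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Placeholder proxy score; a real search trains each candidate (or a
    weight-sharing supernet) and measures accuracy, latency, and model size."""
    accuracy_proxy = 0.5 + 0.04 * arch["depth"] + 0.001 * arch["width"]
    latency_penalty = 0.005 * arch["depth"] * arch["kernel"]
    return accuracy_proxy - latency_penalty

best = max((sample_architecture() for _ in range(200)), key=evaluate)
print("best candidate:", best, "score:", round(evaluate(best), 3))
```

Real neural architecture search replaces the random sampler with a learned controller or a differentiable relaxation of the search space, which is where the "AI designing AI" framing comes from.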
Apr 5, 2019
Agriculture: Machine learning can reveal optimal growing conditions to maximize taste, other features
Posted by Genevieve Klien in categories: biotech/medical, chemistry, food, genetics, information science, robotics/AI
What goes into making plants taste good? For scientists in MIT’s Media Lab, it takes a combination of botany, machine-learning algorithms, and some good old-fashioned chemistry.
Using all of the above, researchers in the Media Lab’s Open Agriculture Initiative report that they have created basil plants that are likely more delicious than any you have ever tasted. No genetic modification is involved: The researchers used computer algorithms to determine the optimal growing conditions to maximize the concentration of flavorful molecules known as volatile compounds.
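A hedged sketch of the kind of loop such a system might run (this is not the OpenAg group's actual code; the variable names, ranges, and synthetic data below are invented for illustration): fit a surrogate model mapping growing conditions to a measured flavor signal, then pick the conditions the model predicts as best.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical past experiments: [hours of light per day, UV dose] mapped to a
# measured concentration of volatile flavor compounds (arbitrary units).
X = rng.uniform([12.0, 0.0], [24.0, 1.0], size=(60, 2))
y = -(X[:, 0] - 22.0) ** 2 - 5.0 * (X[:, 1] - 0.6) ** 2 + rng.normal(0, 0.5, 60)

# Surrogate model standing in for "taste as a function of growing recipe".
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Score a grid of candidate recipes and suggest the best one to grow next.
light = np.linspace(12.0, 24.0, 50)
uv = np.linspace(0.0, 1.0, 50)
grid = np.array([[l, u] for l in light for u in uv])
best = grid[model.predict(grid).argmax()]
print(f"suggested recipe: {best[0]:.1f} h light/day, UV dose {best[1]:.2f}")
```

In an iterative setup, the suggested recipe would be grown, the volatiles measured (for example by gas chromatography), and the new data point fed back into the model before the next round.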
But that is just the beginning for the new field of “cyber agriculture,” says Caleb Harper, a principal research scientist in MIT’s Media Lab and director of the OpenAg group. His group is now working on enhancing the human disease-fighting properties of herbs, and they also hope to help growers adapt to changing climates by studying how crops grow under different conditions.