

Archive for the ‘information science’ category: Page 147

Feb 9, 2022

Top resources to learn quantum machine learning

Posted in categories: business, information science, quantum physics, robotics/AI

Quantum computing and machine learning are two of the most exciting technologies with the potential to transform businesses, and combining them could be more powerful still. Integrating quantum algorithms into programs based on machine learning is called quantum machine learning. This fascinating area has become a major focus for tech firms, which have brought out tools and platforms to deploy such algorithms effectively. Some of these include TensorFlow Quantum from Google, the Quantum Machine Learning (QML) library from Microsoft, QC Ware Forge built on Amazon Braket, etc.

Students skilled in working with quantum machine learning algorithms are likely to be in great demand given the opportunities the field holds. Let us look at a few online courses one can use to learn quantum machine learning.

In this course, students start with the basics of quantum computing and quantum machine learning. The course also covers building QNodes and customized templates, and teaches students to compute gradients with autograd and define loss functions with quantum computing using PennyLane, as well as to develop with the Pennylane.ai API. Students will also learn how to build their own PennyLane plugin and turn quantum nodes into TensorFlow Keras layers.
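The core idea behind a trainable quantum node can be sketched without any quantum library. In this toy model (an illustration, not the course's actual material), a single-qubit RY(θ) rotation on |0⟩ gives the expectation value ⟨Z⟩ = cos(θ), and the parameter is trained by gradient descent using the parameter-shift rule, much as PennyLane's autograd does for real circuits:

```python
import math

def qnode(theta):
    # Toy "quantum node": expectation of Pauli-Z after an RY(theta)
    # rotation on |0> is cos(theta). A real QNode (e.g. in PennyLane)
    # would simulate or execute the circuit; here we use the closed form.
    return math.cos(theta)

def grad_qnode(theta):
    # Parameter-shift rule: an exact gradient for this circuit family.
    return (qnode(theta + math.pi / 2) - qnode(theta - math.pi / 2)) / 2

def train(theta=0.5, lr=0.4, steps=100):
    # Minimize loss = <Z>^2, i.e. rotate the qubit into an equal
    # superposition where <Z> = 0.
    for _ in range(steps):
        grad = 2 * qnode(theta) * grad_qnode(theta)  # chain rule
        theta -= lr * grad
    return theta

theta_opt = train()
print(abs(round(qnode(theta_opt), 6)))  # 0.0
```

The same loop structure carries over to real quantum nodes: only the expectation-value evaluation changes from a closed-form cosine to a circuit execution.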

Feb 8, 2022

A paralyzed man whose spine was severed in a motorbike crash is walking again thanks to electrodes implanted in his spine

Posted in categories: biotech/medical, information science, robotics/AI

Michel Roccati, 30, was one of three paralyzed men to test a prototype of a spinal implant modified to help them move their limbs.



Feb 7, 2022

A New Trick Lets Artificial Intelligence See in 3D

Posted in categories: entertainment, information science, robotics/AI

Some algorithms can now compose a 3D scene from 2D images—creating possibilities in video games, robotics, and autonomous driving.

Feb 7, 2022

Alistair Fulton — Connecting & Enabling A Smarter Planet — VP, Wireless & Sensing Products, Semtech

Posted in categories: computing, information science, internet, satellites

Connecting & Enabling a Smarter Planet — Alistair Fulton, VP, Wireless & Sensing Products, Semtech.


Alistair Fulton (https://www.semtech.com/company/executive-leadership/alistair-fulton) is the Vice President and General Manager of Semtech’s Wireless and Sensing Products Group.


Feb 7, 2022

Astronomers spot a wandering black hole in empty space for the first time

Posted in categories: climatology, cosmology, existential risks, information science, robotics/AI, sustainability

Machine learning can work wonders, but it’s only one tool among many.

Artificial intelligence is among the most poorly understood technologies of the modern era. To many, AI exists as both a tangible but ill-defined reality of the here and now and an unrealized dream of the future, a marvel of human ingenuity, as exciting as it is opaque.

It’s this indistinct picture of both what the technology is and what it can do that might engender a look of uncertainty on someone’s face when asked the question, “Can AI solve climate change?” “Well,” we think, “it must be able to do *something*,” while entirely unsure of just how algorithms are meant to pull us back from the ecological brink.


Feb 6, 2022

AI learns physics to optimize particle accelerator performance

Posted in categories: biotech/medical, finance, information science, robotics/AI

Machine learning, a form of artificial intelligence, vastly speeds up computational tasks and enables new technology in areas as broad as speech and image recognition, self-driving cars, stock market trading and medical diagnosis.

Before going to work on a given task, algorithms typically need to be trained on pre-existing data so they can learn to make fast and accurate predictions about future scenarios on their own. But what if the job is a completely new one, with no data available for training?

Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated that they can use machine learning to optimize the performance of particle accelerators by teaching the algorithms the basic principles behind operations—no prior data needed.
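The flavor of such physics-informed, data-free optimization can be sketched with a toy model (hypothetical numbers, not SLAC's actual method): an idealized lattice model supplies the starting point, and the optimizer refines it with a few live probe measurements by repeatedly fitting a local parabola and jumping to its vertex.

```python
def beam_size(k):
    # Simulated "real machine": the true optimal magnet setting is
    # k = 1.3, which the algorithm does not know in advance.
    return 2.0 + 0.5 * (k - 1.3) ** 2

def physics_prior():
    # The idealized physics model predicts the optimum at k = 1.0,
    # giving the optimizer a warm start instead of training data.
    return 1.0

def optimize(measure, k0, delta=0.2, iters=5):
    k = k0
    for _ in range(iters):
        # Probe three settings and fit a parabola through them.
        y0, y1, y2 = measure(k - delta), measure(k), measure(k + delta)
        denom = y0 - 2 * y1 + y2
        if abs(denom) < 1e-12:
            break
        # Jump to the vertex of the interpolating parabola.
        k = k - delta * (y2 - y0) / (2 * denom)
        delta *= 0.5
    return k

k_opt = optimize(beam_size, physics_prior())
print(round(k_opt, 6))  # 1.3
```

The real SLAC work embeds far richer accelerator physics in the model, but the principle is the same: knowledge of the system's structure replaces a training dataset.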

Feb 4, 2022

Removing water from underwater photography

Posted in category: information science

A new algorithm for underwater photography makes marine life appear as clear as it would on land, and it’s helping scientists understand the ocean better.
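A widely used simplified model of underwater image formation helps explain what such algorithms invert: the direct signal from the scene decays with range while backscattered light fills in. Given estimates of the attenuation coefficient, range, and backscatter (per color channel in practice; the numbers below are illustrative), the scene radiance can be recovered in closed form:

```python
import math

def attenuate(J, beta, z, B):
    # Forward model: observed intensity = attenuated scene radiance
    # plus range-dependent backscatter.
    t = math.exp(-beta * z)
    return J * t + B * (1 - t)

def restore(I, beta, z, B):
    # Inverse model: subtract the backscatter, then undo attenuation.
    t = math.exp(-beta * z)
    return (I - B * (1 - t)) / t

J_true = 0.8  # true scene radiance for one channel (illustrative)
I_obs = attenuate(J_true, beta=0.6, z=3.0, B=0.2)
print(round(restore(I_obs, beta=0.6, z=3.0, B=0.2), 6))  # 0.8
```

The hard part in real systems is estimating beta, z, and B from the imagery itself; once those are known, "removing the water" is this per-pixel inversion.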

Feb 3, 2022

Mimicking the brain to realize ‘human-like’ virtual assistants

Posted in categories: information science, robotics/AI

Speech is more than just a form of communication. A person’s voice conveys emotions and personality and is a unique trait we can recognize. Our use of speech as a primary means of communication is a key reason for the development of voice assistants in smart devices and technology. Typically, virtual assistants analyze speech and respond to queries by converting the received speech signals into a model they can understand and process to generate a valid response. However, they often have difficulty capturing and incorporating the complexities of human speech and end up sounding very unnatural.

Now, in a study published in the journal IEEE Access, Professor Masashi Unoki from Japan Advanced Institute of Science and Technology (JAIST), and Dung Kim Tran, a doctoral course student at JAIST, have developed a system that can capture the information in speech signals in a manner similar to how humans perceive speech.

“In humans, the auditory periphery converts the information contained in input speech signals into neural activity patterns (NAPs) that the brain can identify. To emulate this function, we used a matching pursuit algorithm to obtain sparse representations of speech signals, or signal representations with the minimum possible significant coefficients,” explains Prof. Unoki. “We then used psychoacoustic principles, such as the equivalent rectangular bandwidth scale, gammachirp function, and masking effects to ensure that the auditory sparse representations are similar to that of the NAPs.”
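The matching pursuit algorithm Prof. Unoki mentions can be sketched in a few lines. It greedily picks the dictionary atom most correlated with the current residual and subtracts its projection, yielding a sparse code. The toy dictionary below is orthonormal for clarity; the JAIST system uses perceptually motivated atoms such as gammachirps.

```python
def matching_pursuit(signal, dictionary, n_atoms=3):
    # Greedy sparse coding: at each step, pick the unit-norm atom with
    # the largest inner product with the residual, record its
    # coefficient, and subtract its contribution.
    residual = list(signal)
    code = []
    for _ in range(n_atoms):
        best_i, best_dot = 0, 0.0
        for i, atom in enumerate(dictionary):
            dot = sum(r * a for r, a in zip(residual, atom))
            if abs(dot) > abs(best_dot):
                best_i, best_dot = i, dot
        code.append((best_i, best_dot))
        residual = [r - best_dot * a
                    for r, a in zip(residual, dictionary[best_i])]
    return code, residual

# Toy orthonormal dictionary (real systems use gammachirp-like atoms).
dictionary = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
code, residual = matching_pursuit([2.0, 0.0, -1.0], dictionary, n_atoms=2)
print(code)  # [(0, 2.0), (2, -1.0)]
```

Two atoms suffice here because the signal lies exactly in their span; with redundant, non-orthogonal dictionaries the same loop produces the "minimum possible significant coefficients" the quote describes.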

Feb 3, 2022

Does AI Improve Human Judgment?

Posted in categories: business, information science, robotics/AI

Decision-making has mostly revolved around learning from mistakes and making gradual, steady improvements. Over many generations, this evolutionary experience has served humans well, so it is safe to say that most decisions human beings make are based on trial and error. Humans also rely heavily on data to make key decisions: the larger the amount of high-integrity data available, the more balanced and rational their decisions will be. In the age of big data analytics, however, businesses and governments around the world are reluctant to rely on basic human instinct and know-how alone to make major decisions; statistically, a large percentage of companies globally use big data for the purpose. The application of AI in decision-making is therefore an idea being adopted more widely today than in the past.

However, there are several debatable aspects of using AI in decision-making. Firstly, are *all* the decisions made with inputs from AI algorithms correct? And does the involvement of AI in decision-making cause avoidable problems? Read on to find out. The involvement of AI in decision-making simplifies the process of making strategies for businesses and governments around the world, but AI has had its fair share of missteps on several occasions.

Feb 3, 2022

Mathematicians Prove 30-Year-Old André-Oort Conjecture

Posted in categories: information science, mathematics

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x^3 + y^3 = z^3 have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.
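The contrast between the exponent-2 and exponent-3 cases can be seen with a brute-force search (an illustration only; an empty search over a small range proves nothing, whereas Wiles's proof covers all integers):

```python
def fermat_solutions(n, limit):
    # Brute-force search for positive integer solutions of
    # x^n + y^n = z^n with all variables below `limit`.
    sols = []
    for x in range(1, limit):
        for y in range(1, limit):
            for z in range(1, limit):
                if x ** n + y ** n == z ** n:
                    sols.append((x, y, z))
    return sols

print((3, 4, 5) in fermat_solutions(2, 20))  # True: Pythagorean triples exist
print(fermat_solutions(3, 20))               # []: consistent with Fermat
```

For n = 2 there are infinitely many solutions; for any n > 2, Fermat's Last Theorem says the list stays empty no matter how far the search runs.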

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.