
Archive for the ‘information science’ category: Page 158

Nov 21, 2021

China unveils detailed goals for 5G-aided Industrial Internet of Things development

Posted by in categories: chemistry, information science, internet, robotics/AI

China’s Ministry of Industry and Information Technology (MIIT) on Saturday released its second batch of extended goals for promoting the usage of China’s 5G network and the Industrial Internet of Things (IIoT).

IIoT refers to the interconnection between sensors, instruments and other devices to enhance manufacturing efficiency and industrial processes. With a strong focus on machine-to-machine communication, big data and machine learning, the IIoT has been applied across many industrial sectors and applications.

The MIIT announced that the 5G IIoT will be applied in the petrochemical industry, building materials, ports, textiles and home appliances as the 2021 China 5G + Industrial Internet Conference kicked off Saturday in Wuhan, central China’s Hubei Province.

Nov 19, 2021

‘Deepfaking the mind’ could improve brain-computer interfaces for people with disabilities

Posted by in categories: information science, robotics/AI

Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs)—technology best known for creating deepfake videos and photorealistic human faces—to improve brain-computer interfaces for people with disabilities.

In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically called spike trains, can be fed into machine-learning systems to improve the usability of brain-computer interfaces (BCIs).

BCI systems work by analyzing a person’s brain signals and translating them into commands, allowing the user to control devices like computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those with locked-in syndrome—when a person is fully conscious but unable to move or communicate.
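“Spike trains” are sequences of discrete neural firing events recorded over time. As a rough, self-contained illustration of what such data looks like (a simple Poisson model, not the GAN from the paper; all names here are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_spike_train(rate_hz, duration_s, dt=0.001):
    """Binary spike train: each 1 ms bin fires with probability
    rate_hz * dt (a discretized Poisson process)."""
    n_bins = int(duration_s / dt)
    return (rng.random(n_bins) < rate_hz * dt).astype(int)

train = synthetic_spike_train(rate_hz=20, duration_s=2.0)
print(train.shape[0])  # 2000 one-millisecond bins
print(train.sum())     # roughly 40 spikes (20 Hz over 2 s)
```

A GAN in this setting learns to produce vectors with the statistics of real recordings, rather than drawing them from a fixed rate as this toy model does.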

Nov 19, 2021

Why This Lab Is Slicing Human Brains Into Little Pieces

Posted by in categories: information science, robotics/AI

There’s a multibillion-dollar race going on to build the first complete map of the brain, something scientists are calling the “connectome.” It involves slicing the brain into thousands of pieces, and then digitally stitching them back together using a powerful AI algorithm.


Nov 19, 2021

Researchers Find Human Learning Can be Duplicated in Synthetic Matter

Posted by in categories: information science, robotics/AI

Rutgers researchers and their collaborators have found that learning — a universal feature of intelligence in living beings — can be mimicked in synthetic matter, a discovery that in turn could inspire new algorithms for artificial intelligence (AI).

The study appears in the journal PNAS.

One of the fundamental characteristics of humans is the ability to continuously learn from and adapt to changing environments. But until recently, AI has been narrowly focused on emulating human logic. Now, researchers are looking to mimic human cognition in devices that can learn, remember and make decisions the way a human brain does.

Nov 18, 2021

Understanding Bias in AI: What Is Your Role, and Should You Care?

Posted by in categories: information science, robotics/AI

There are billions of people around the world whose online experience has been shaped by algorithms that utilize artificial intelligence (AI) and machine learning (ML). Some form of AI and ML is employed almost every time people go online, whether they are searching for content, watching a video, or shopping for a product. Not only do these technologies increase the efficiency and accuracy of consumption; service providers in the online ecosystem also innovate upon and monetize behavioral data captured directly from a user’s device or website visits, or gathered by third parties.

Advertisers are increasingly dependent on this data and the algorithms that adtech and martech employ to understand where their ads should be placed, which ads consumers are likely to engage with, which audiences are most likely to convert, and which publisher should get credit for conversions.

Additionally, the collection and better utilization of data helps publishers generate revenue, minimize data risks and costs, and provide relevant consumer-preference-based audiences for brands.

Nov 17, 2021

A computer algorithm that speeds up experiments on plasma

Posted by in categories: biotech/medical, computing, information science, nuclear energy

A team of researchers from Tri Alpha Energy Inc. and Google has developed an algorithm that can be used to speed up experiments conducted with plasma. In their paper published in the journal Scientific Reports, the group describes how they plan to use the algorithm in nuclear fusion research.

As research into harnessing nuclear fusion has progressed, scientists have found that some of its characteristics are too complex to be solved in a reasonable amount of time using current technology. So they have increasingly turned to computers to help. More specifically, they want to adjust certain parameters in a device built to achieve fusion. Such a device, most in the field agree, must involve the creation of a certain type of plasma that is not too hot or too cold, is stable, and has a certain desired density.

Finding the right parameters that meet these conditions has involved an incredible amount of trial and error. In this new effort, the researchers sought to reduce the workload by using a computer algorithm to eliminate some of the needed trials. To that end, they created what they call the “optometrist’s algorithm.” In its most basic sense, it works like an optometrist measuring the visual ability of a patient by showing them images and asking whether each is better or worse than the others. The idea is to combine the number-crunching power of a computer with the intelligence of a human being: the computer generates the options, and the human tells it whether a given option is better or worse.
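A minimal sketch of such a human-in-the-loop search, with a scoring function standing in for the human expert; the parameter names and target values are invented for illustration:

```python
import random

random.seed(1)

def propose(settings, scale=0.5):
    # Computer's role: generate a nearby candidate parameter setting.
    return {k: v + random.gauss(0, scale) for k, v in settings.items()}

def optometrist_search(initial, prefers, n_rounds=200):
    # Optometrist-style loop: present two options, keep whichever the
    # (human) judge says is better -- "one, or two?"
    current = initial
    for _ in range(n_rounds):
        candidate = propose(current)
        if prefers(candidate, current):
            current = candidate
    return current

# Stand-in for the human expert: prefers plasma settings near a target.
target = {"density": 3.0, "temperature": 7.0}

def distance(s):
    return sum((s[k] - target[k]) ** 2 for k in target)

def expert(a, b):
    return distance(a) < distance(b)

start = {"density": 0.0, "temperature": 0.0}
best = optometrist_search(start, expert)
print(distance(best) < distance(start))  # True: search moved toward the target
```

The point of the real scheme is that the judge need not articulate a loss function at all — pairwise “better or worse” answers are enough to steer the search.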

Nov 17, 2021

Do Androids Dream of Electric Sheep? Dr. Ben Goertzel with Philip K. Dick at the Web Summit 2019

Posted by in categories: bitcoin, information science, internet, robotics/AI, singularity

Dr. Ben Goertzel with Philip K. Dick at the Web Summit in Lisbon 2019.

Ben showcases the use of OpenCog within the SingularityNET environment, which is powering the AI of the Philip K. Dick Robot.


Nov 17, 2021

Mathematicians derive the formulas for boundary layer turbulence 100 years after the phenomenon was first formulated

Posted by in categories: information science, mathematics

Turbulence makes many people uneasy or downright queasy. And it’s given researchers a headache, too. Mathematicians have been trying for a century or more to understand the turbulence that arises when a flow interacts with a boundary, but a formulation has proven elusive.

Now an international team of mathematicians, led by UC Santa Barbara professor Björn Birnir and University of Oslo professor Luiza Angheluta, has published a complete description of boundary layer turbulence. The paper appears in Physical Review Research and synthesizes decades of work on the topic. The theory unites empirical observations with the Navier-Stokes equation—the mathematical foundation of fluid dynamics—into a single formulation.

This phenomenon was first described around 1920 by Hungarian physicist Theodore von Kármán and German physicist Ludwig Prandtl, two luminaries in fluid dynamics. “They were homing in on what’s called boundary layer turbulence,” said Birnir, director of the Center for Complex and Nonlinear Science. This is turbulence caused when a flow interacts with a boundary, such as the fluid’s surface, a pipe wall, or the surface of the Earth.
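The classical starting point of Prandtl and von Kármán’s boundary-layer work is the logarithmic “law of the wall” for the mean velocity near a boundary (a textbook result, not the new paper’s formulation):

```latex
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad u^{+} = \frac{\bar{u}}{u_{\tau}},
\qquad y^{+} = \frac{y\,u_{\tau}}{\nu},
```

where κ ≈ 0.41 is the von Kármán constant, B ≈ 5 for smooth walls, u_τ is the friction velocity, and ν is the kinematic viscosity.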

Nov 16, 2021

New algorithms advance the computing power of early-stage quantum computers

Posted by in categories: chemistry, computing, information science, quantum physics

A group of scientists at the U.S. Department of Energy’s Ames Laboratory has developed computational quantum algorithms that are capable of efficient and highly accurate simulations of static and dynamic properties of quantum systems. The algorithms are valuable tools to gain greater insight into the physics and chemistry of complex materials, and they are specifically designed to work on existing and near-future quantum computers.

Scientist Yong-Xin Yao and his research partners at Ames Lab use the power of advanced computers to speed discovery in condensed matter physics, modeling incredibly complex quantum materials and how they change over ultra-fast timescales. Current high-performance computers can model the properties of very simple, small quantum systems, but larger or more complex systems rapidly expand the number of calculations a computer must perform to arrive at an accurate answer, slowing the pace not only of computation, but also of discovery.

“This is a real challenge given the current early-stage of existing quantum computing capabilities,” said Yao, “but it is also a very promising opportunity, since these calculations overwhelm classical computer systems, or take far too long to provide timely answers.”
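A back-of-the-envelope illustration (not from the Ames Lab paper) of why such calculations overwhelm classical machines: simulating an n-qubit system exactly requires storing 2^n complex amplitudes.

```python
def state_vector_bytes(n_qubits):
    # Full quantum state vector: 2**n complex amplitudes,
    # 16 bytes each at double precision.
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n) / 2**30)  # size in GiB
```

At 30 qubits the state vector already needs 16 GiB; at 50 qubits, roughly 16 million GiB — which is why algorithms tailored to run on quantum hardware itself matter.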

Nov 14, 2021

Physicists take the most detailed image of atoms to date

Posted by in categories: information science, mobile phones, particle physics

Physicists just put Apple’s latest iPhone to shame, taking the most detailed image of atoms to date with a device that magnifies images 100 million times. The researchers, who set the record for the highest-resolution microscope in 2018, outdid themselves with a study published last month. They used a method called electron ptychography, in which a beam of electrons is shot at an object and the scattered electrons are recorded as a scan that algorithms use to reverse-engineer an image of the sample. Previously, scientists could only use this method to image objects that were a few atoms thick, but the new study lays out a technique that can image samples 30 to 50 nanometers wide—a more than 10-fold increase in resolution. The breakthrough could help in developing more efficient electronics and batteries, a process that requires visualizing components at the atomic level.