Archive for the ‘information science’ category: Page 21

Jun 7, 2024

Brian Greene — What Was There Before The Big Bang?

Posted by in categories: cosmology, evolution, information science, mathematics, quantum physics, singularity

The American theoretical physicist Brian Greene explains various hypotheses about the cause of the Big Bang. Greene is an excellent science communicator who makes complex cosmological concepts easier to understand.

The Big Bang model describes the evolution of the universe from an initial density and temperature that are currently well beyond humanity's capability to replicate. The most extreme conditions and earliest times of the universe are therefore speculative, and any explanation for what caused the Big Bang should be taken with a grain of salt. Nevertheless, that shouldn't stop us from asking what was there before the Big Bang.


Jun 5, 2024

Flapping frequency of birds, insects, bats and whales predicted with just body mass and wing area

Posted by in categories: information science, mathematics

A single universal equation can closely approximate the frequency of wingbeats and fin strokes made by birds, insects, bats and whales, despite their different body sizes and wing shapes, Jens Højgaard Jensen and colleagues from Roskilde University in Denmark report in a new study published in PLOS ONE on June 5.

The ability to fly has evolved independently in many different animal groups. To minimize the energy required to fly, biologists expect that the frequency at which animals flap their wings should be determined by the natural resonance frequency of the wing. However, finding a universal mathematical description of flapping flight has proved difficult.

Researchers used dimensional analysis to calculate an equation that describes the frequency of wingbeats of flying birds, insects and bats, and the fin strokes of diving animals, including penguins and whales.
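The article does not print the equation itself. One common dimensional-analysis route balances the lift generated by flapping against the animal's weight, which suggests a combination of the form f ∝ √(mg/ρ)/A. The sketch below only verifies that this candidate combination has the dimensions of a frequency; the inclusion of gravity g and the specific exponents are assumptions for illustration, not necessarily the published result.

```python
from fractions import Fraction as F

# Dimensions of each quantity as (M, L, T) exponents.
dims = {
    "m":   (F(1), F(0), F(0)),    # body mass
    "rho": (F(1), F(-3), F(0)),   # density of the surrounding fluid
    "A":   (F(0), F(2), F(0)),    # wing or fin area
    "g":   (F(0), F(1), F(-2)),   # gravitational acceleration
}

# Candidate combination from a lift-balance argument:
#   rho * A * (f * sqrt(A))^2 ~ m * g   =>   f ~ sqrt(m*g/rho) / A
exponents = {"m": F(1, 2), "rho": F(-1, 2), "A": F(-1), "g": F(1, 2)}

# Sum the exponent contributions for each base dimension (M, L, T).
total = [sum(exponents[q] * dims[q][i] for q in dims) for i in range(3)]
print(total)  # M^0 L^0 T^-1, i.e. a frequency
```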

Jun 5, 2024

Google’s Quantum AI Challenges Long-Standing Physics Theories

Posted by in categories: information science, particle physics, quantum physics, robotics/AI

Quantum simulators are now addressing complex physics problems, such as the dynamics of 1D quantum magnets and their potential similarities to classical phenomena like snow accumulation. Recent research confirms some aspects of this theory, but also highlights challenges in fully validating the KPZ universality class in quantum systems. Credit: Google LLC

Quantum simulators are advancing quickly and can now tackle issues previously confined to theoretical physics and numerical simulation. Researchers at Google Quantum AI and their collaborators demonstrated this new potential by exploring dynamics in one-dimensional quantum magnets, specifically focusing on chains of spin-1/2 particles.
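Simulating the quantum spin chain itself is beyond a short script, but the KPZ universality class the study probes is defined by classical growth models, and a textbook member is ballistic deposition. The toy sketch below (all parameters illustrative) shows the characteristic interface roughening that KPZ-class dynamics produce:

```python
import random

def ballistic_deposition(width, n_particles, seed=0):
    """Drop particles at random columns; each sticks at the highest of its
    own column plus one and its two neighbours (a KPZ-class growth rule)."""
    rng = random.Random(seed)
    h = [0] * width
    for _ in range(n_particles):
        i = rng.randrange(width)
        left = h[(i - 1) % width]     # periodic boundaries
        right = h[(i + 1) % width]
        h[i] = max(h[i] + 1, left, right)
    return h

def roughness(h):
    """Standard deviation of the interface height."""
    mean = sum(h) / len(h)
    return (sum((x - mean) ** 2 for x in h) / len(h)) ** 0.5

h_early = ballistic_deposition(200, 2_000)
h_late = ballistic_deposition(200, 20_000)
print(roughness(h_early), roughness(h_late))  # the interface roughens over time
```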


Jun 5, 2024

A Safer Future for AI with Stronger Algorithms

Posted by in categories: cybercrime/malcode, information science, robotics/AI


AI technology is spreading quickly throughout many different industries, and its integration depends on users’ trust and safety concerns. This matter becomes complicated when the algorithms powering AI-based tools are vulnerable to cyberattacks that could have detrimental results.

Dr. David P. Woodruff from Carnegie Mellon University and Dr. Samson Zhou from Texas A&M University are working to strengthen the algorithms used by big data AI models against attacks.
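The article does not specify the techniques involved. A standard building block for hardening big-data estimators against adversarial or corrupted inputs is the median-of-means estimator: a handful of poisoned values can spoil only a few blocks, not the median of the block averages. The sketch below is illustrative only, not the researchers' algorithm (in practice, points are also assigned to blocks randomly):

```python
import statistics

def median_of_means(data, n_blocks=10):
    """Split data into blocks, average each block, return the median
    of the block means: robust to a small number of corrupted values."""
    k = len(data) // n_blocks
    block_means = [sum(data[i * k:(i + 1) * k]) / k for i in range(n_blocks)]
    return statistics.median(block_means)

clean = [1.0] * 95               # true signal: mean 1.0
poisoned = clean + [1000.0] * 5  # attacker injects 5 huge outliers

naive = sum(poisoned) / len(poisoned)   # badly corrupted by the attack
robust = median_of_means(poisoned)      # stays near the true mean of 1.0
print(naive, robust)
```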

Jun 4, 2024

CMSP series of lectures on “Topology and dynamics of higher-order networks”: lecture 3

Posted by in categories: computing, information science, mathematics, quantum physics

ICTP lectures “Topology and dynamics of higher-order networks”

- Network topology, lecture 1: https://youtube.com/watch?v=mbmsv9RS3Pc


Jun 4, 2024

Scientists spot 60 stars appearing to show signs of alien power plants

Posted by in categories: alien life, information science, robotics/AI

I don’t know if this is true, but it definitely could be, since most civilizations are probably more advanced than ours here on Earth.


A survey of five million distant solar systems, aided by ‘neural network’ algorithms, has discovered 60 stars that appear to be surrounded by giant alien power plants.

Seven of the stars — so-called M-dwarf stars that range between 60 percent and 8 percent the size of our sun — were recorded giving off unexpectedly high infrared ‘heat signatures,’ according to the astronomers.
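The survey's actual pipeline is not described in this excerpt. Conceptually, though, the search flags stars whose measured infrared output exceeds what the bare star should emit, as waste heat from a Dyson-sphere-like structure would. A hypothetical threshold test (all names, fluxes, and the 3-sigma cut are invented for illustration):

```python
stars = [
    # (name, predicted_ir_flux, observed_ir_flux, flux_uncertainty)
    ("star_a", 1.00, 1.02, 0.05),
    ("star_b", 0.40, 0.41, 0.02),
    ("star_c", 0.10, 0.31, 0.03),   # strong unexplained infrared excess
]

# Flag stars whose infrared excess is more than 3 sigma above the model.
candidates = [
    name for name, predicted, observed, sigma in stars
    if (observed - predicted) / sigma > 3.0
]
print(candidates)  # ['star_c']
```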


Jun 4, 2024

New model suggests partner anti-universe could explain accelerated expansion without the need for dark energy

Posted by in categories: cosmology, information science, quantum physics

The accelerated expansion of the present universe, believed to be driven by a mysterious dark energy, is one of the greatest puzzles in our understanding of the cosmos. The standard model of cosmology, called Lambda-CDM, explains this expansion as a cosmological constant in Einstein’s field equations. However, the cosmological constant itself lacks a complete theoretical understanding, particularly regarding its very small positive value.

Jun 2, 2024

Memristor-based adaptive neuromorphic perception in unstructured environments

Posted by in categories: information science, robotics/AI, transportation

Differential neuromorphic computing, as a memristor-assisted perception method, holds the potential to enhance subsequent decision-making and control processes. Although the conventional PID control approach and the proposed differential neuromorphic computing share the fundamental principle of smartly adjusting outputs in response to feedback, they diverge significantly in the data manipulation process (Supplementary Discussion 12 and Fig. S26): our method leverages the nonlinear characteristics of the memristor and a dynamic selection scheme to execute more complex data manipulation than the linear coefficient-based error correction in PID. Additionally, the intrinsic memory function of memristors in our system enables real-time adaptation to changing environments, a significant advantage over the static parameter configuration of PID systems.

To perform similar adaptive control functions in tactile experiments, a von Neumann architecture follows a multi-step process involving several data movements:

1. Input data about the piezoresistive film state is transferred to the system memory via an I/O interface.
2. This sensory data is then moved from the memory to the cache.
3. Subsequently, it is forwarded to the Arithmetic Logic Unit (ALU) and waits for processing.
4. Historical tactile information is also transferred from the memory to the cache, unless it is already present.
5. This historical data is forwarded to the ALU.
6. The ALU processes the current sensory and historical data and returns the updated historical data to the cache.

In contrast, our memristor-based approach simplifies this process to three primary steps:

1. The ADC reads data from the piezoresistive film.
2. The ADC reads the current state of the memristor, which represents the historical tactile stimuli.
3. The DAC, controlled by FPGA logic, updates the memristor state based on the inputs.

This process reduces operating costs and enhances data-processing efficiency.
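The three-step memristor loop described above can be caricatured in software. Everything here, from the class name to the saturating update rule and the thresholds, is a hypothetical stand-in for the paper's hardware, meant only to show how a single state variable can double as sensory memory:

```python
class MemristorPerception:
    """Toy sketch: the memristor's conductance state serves as memory of
    past tactile stimuli, so no memory-cache-ALU shuttling is needed."""

    def __init__(self, state=0.0, decay=0.5, threshold=0.5):
        self.state = state          # memristor conductance (history)
        self.decay = decay          # how slowly history fades
        self.threshold = threshold  # event-detection threshold

    def step(self, pressure):
        # 1. "ADC" reads the piezoresistive film (current stimulus).
        current = pressure
        # 2. "ADC" reads the memristor state (historical stimuli).
        history = self.state
        # Differential response: react to the *change* in stimulus.
        response = current - history
        # 3. "DAC/FPGA" writes the updated state back into the memristor
        #    (a saturating update rather than a linear PID correction).
        self.state = self.decay * history + (1 - self.decay) * current
        return abs(response) > self.threshold  # e.g. sudden-force event

sensor = MemristorPerception()
events = [sensor.step(p) for p in [0.0, 0.1, 0.1, 0.9, 0.9, 0.9]]
print(events)  # fires on the sudden change, then adapts to the new level
```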

In real-world settings, robotic tactile systems must process large amounts of tactile data and respond as quickly as possible, within less than 100 ms, similar to human tactile systems58,59. The current state-of-the-art robotic tactile technologies can detect sudden changes in force, such as slip, at millisecond levels (from 500 μs to 50 ms)59,60,61,62, and the response time of our tactile system has also reached this level. For visual processing, suppose a vehicle travels at 40 km per hour in an urban area and needs an effective control decision every 1 m. In that case, the requirement translates to a maximum allowable response time of 90 ms for the entire processing pipeline, which includes sensors, operating systems, middleware, and applications such as object detection, prediction, and vehicle control63,64. When incorporating our proposed memristor-assisted method with conventional camera systems, the additional time delay includes the delay from filter circuits (less than 1 ms) and the switching time of the memristor device, which ranges from nanoseconds (ns) down to picoseconds (ps)21,65,66,67. Compared to the required overall response time of the pipeline, these additions are negligible, demonstrating the potential of applying our method in real-world driving scenarios68. Although our memristor-based perception method meets the response-time requirements of the scenarios described, our approach faces several challenges that need to be addressed for real-world applications. Apart from common issues such as variability in device performance and the nonlinear dynamics of memristive responses, our approach needs to overcome the following challenges:
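The latency budget quoted in the paragraph above is simple arithmetic, and can be checked directly with the stated numbers:

```python
# At 40 km/h, one control decision per metre of travel allows
# at most ~90 ms end-to-end, as the text states.
speed_m_per_s = 40 * 1000 / 3600        # 40 km/h ≈ 11.11 m/s
budget_s = 1.0 / speed_m_per_s          # time to cover 1 m
print(round(budget_s * 1000))           # 90 (ms)

# The memristor add-ons are negligible against this budget.
filter_delay_s = 1e-3                   # filter circuits: < 1 ms
memristor_switch_s = 1e-9               # switching: ns down to ps
print((filter_delay_s + memristor_switch_s) / budget_s)  # ≈ 0.011, about 1%
```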

Currently, the modulation voltage applied to memristors is preset based on the external sensory feature, and the control algorithm is based on hard threshold comparison. This setting lacks the flexibility required for diverse real-world environments where sensory inputs and required responses can vary significantly. Therefore, it is crucial to develop a more automatic memristive modulation method along with a control algorithm that can dynamically adjust based on varying application scenarios.

Jun 2, 2024

A 3D ray traced biological neural network learning model

Posted by in categories: biological, information science, robotics/AI

In artificial neural networks, many models are trained for a narrow task using a specific dataset. They face difficulties in solving problems that include dynamic input/output data types and changing objective functions. Whenever the input/output tensor dimension or the data type is modified, the machine learning models need to be rebuilt and subsequently retrained from scratch. Furthermore, many machine learning algorithms that are trained for a specific objective, such as classification, may perform poorly at other tasks, such as reinforcement learning or quantification.

Even if the input/output dimensions and the objective functions remain constant, the algorithms do not generalize well across different datasets. For example, a neural network trained on classifying cats and dogs does not perform well on classifying humans and horses despite both of the datasets having the exact same image input1. Moreover, neural networks are highly susceptible to adversarial attacks2. A small deviation from the training dataset, such as changing one pixel, could cause the neural network to have significantly worse performance. This problem is known as the generalization problem3, and the field of transfer learning can help to solve it.

Transfer learning4,5,6,7,8,9,10 solves the problems presented above by allowing knowledge transfer from one neural network to another. A common way to use supervised transfer learning is obtaining a large pre-trained neural network and retraining it for a different but closely related problem. This significantly reduces training time and allows the model to be trained on a less powerful computer. Many researchers used pre-trained neural networks such as ResNet-5011 and retrained them to classify malicious software12,13,14,15. Another application of transfer learning is tackling the generalization problem, where the testing dataset is completely different from the training dataset. For example, every human has unique electroencephalography (EEG) signals due to them having distinctive brain structures. Transfer learning solves the generalization problem by pretraining on a general population EEG dataset and retraining the model for a specific patient16,17,18,19,20. As a result, the neural network is dynamically tailored for a specific person and can interpret their specific EEG signals properly. Labeling large datasets by hand is tedious and time-consuming. In semi-supervised transfer learning21,22,23,24, either the source dataset or the target dataset is unlabeled. That way, the neural networks can self-learn which pieces of information to extract and process without many labels.
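As a toy illustration of the warm-start idea (not the ResNet or EEG workflows above), the perceptron below is pretrained on a source task and then reused on fresh data governed by essentially the same rule, as in the EEG example. The transferred weights need far fewer corrections than training from scratch. All data points are invented:

```python
def perceptron(data, w, b, epochs=100):
    """Classic perceptron training; returns weights, bias, update count."""
    updates = 0
    for _ in range(epochs):
        changed = False
        for x, y in data:                      # labels y are -1 or +1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updates += 1
                changed = True
        if not changed:                        # converged: a full clean pass
            break
    return w, b, updates

# Source task: separate points by the sign of x1 + x2.
source = [((2, 1), 1), ((1, 2), 1), ((-1, -2), -1), ((-2, -1), -1)]
# Target task: new samples drawn from essentially the same rule.
target = [((1, 1), 1), ((2, 0), 1), ((-1, 0), -1), ((0, -2), -1)]

w_src, b_src, _ = perceptron(source, [0.0, 0.0], 0.0)
_, _, cold = perceptron(target, [0.0, 0.0], 0.0)   # training from scratch
_, _, warm = perceptron(target, w_src, b_src)      # transferred weights
print(cold, warm)  # warm start needs no corrections at all here
```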

May 31, 2024

New Machine Learning Algorithm Promises Advances in Computing

Posted by in categories: information science, robotics/AI

Digital twin models may enhance future autonomous systems.

Systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products, a new study suggests.

Using machine learning tools to create a digital twin, or virtual copy, of an electronic circuit that exhibits chaotic behavior, researchers found they could accurately predict how it would behave and then use those predictions to control it.
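The study's circuit and model are not reproduced here. As a minimal stand-in, the sketch below builds a "digital twin" of a chaotic logistic map by fitting its update rule from observed data with least squares, then uses the fitted twin to predict the next state. The model form and every parameter are assumptions for illustration:

```python
def logistic(x, r=3.9):
    """The 'real circuit': a logistic map, chaotic for r near 3.9."""
    return r * x * (1 - x)

# Observe a chaotic time series from the real system.
xs = [0.2]
for _ in range(200):
    xs.append(logistic(xs[-1]))

# Digital twin: fit x_{t+1} = a*x_t + b*x_t^2 by least squares
# (normal equations of a 2-parameter linear model; recovers a ≈ r, b ≈ -r).
S11 = sum(x * x for x in xs[:-1])
S12 = sum(x ** 3 for x in xs[:-1])
S22 = sum(x ** 4 for x in xs[:-1])
T1 = sum(x * y for x, y in zip(xs, xs[1:]))
T2 = sum(x * x * y for x, y in zip(xs, xs[1:]))
det = S11 * S22 - S12 * S12
a = (T1 * S22 - T2 * S12) / det
b = (S11 * T2 - S12 * T1) / det
print(a, b)  # close to 3.9 and -3.9

# The twin now predicts the circuit's next state (and could drive control).
x = xs[-1]
twin_next = a * x + b * x * x
print(abs(twin_next - logistic(x)) < 1e-6)
```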
