Archive for the ‘information science’ category: Page 30

Apr 4, 2024

Joscha Bach — Consciousness as a coherence-inducing operator

Posted in categories: biological, computing, information science, law, neuroscience

A theory of consciousness should capture its phenomenology, characterize its ontological status and extent, explain its causal structure and genesis, and describe its function. Here, I advance the notion that consciousness is best understood as an operator, in the sense of a physically implemented transition function that acts on a representational substrate and controls its temporal evolution; as such, it has no identity as an object or thing, but (like software running on a digital computer) it can be characterized as a law. Starting from the observation that biological information processing in multicellular substrates is based on self-organization, I explore the conjecture that the functionality of consciousness represents the simplest algorithm discoverable by such substrates, one that can impose function approximation by increasing representational coherence. I describe some properties of this operator, both to recover the phenomenology of consciousness and to move closer to a specification that would allow recreating it in computational simulations.
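As a toy illustration of what a "coherence-inducing operator" could look like computationally, the sketch below applies a Kuramoto-style update to an array of oscillator phases: the operator is a transition function acting on the substrate's state rather than a thing in the substrate, and repeated application drives the representation toward coherence. This is not Bach's model, only a minimal, hypothetical rendering of the idea.

```python
import numpy as np

def coherence_operator(phases, coupling=0.5):
    # Transition function: each unit shifts toward the population's mean phase.
    mean_field = np.angle(np.mean(np.exp(1j * phases)))
    return phases + coupling * np.sin(mean_field - phases)

def coherence(phases):
    # Order parameter in [0, 1]: 1 means a fully coherent representation.
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
state = rng.uniform(0, 2 * np.pi, 100)   # incoherent initial substrate
for step in range(20):
    state = coherence_operator(state)    # repeatedly apply the operator
print(round(coherence(state), 3))        # approaches 1.0
```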

Apr 4, 2024

Holographic Breakthrough: Scientists Create Full-Color 3D Holographic Displays with Ordinary Smartphone Screen

Posted in categories: computing, holograms, information science, military, mobile phones

In science fiction, holograms are used for everything from basic communications to advanced military weaponry. In the real world, 3D holographic displays have yet to break through to everyday products and devices. That’s because creating realistic, high-fidelity holograms requires laser emitters or other advanced optical equipment, and these complex, expensive components have stymied commercial development.

More recently, researchers managed to create realistic 3D holographic images without lasers by using a white chip-on-board light-emitting diode. Unfortunately, that method required two spatial light modulators to control the wavefronts of the emitted light, adding a prohibitive amount of complexity and cost.

Now, those same scientists say they have developed a simpler, more cost-effective way to produce realistic-looking 3D holographic displays using only one spatial light modulator and new software algorithms. The result is a cheaper method for creating holograms that an everyday device such as a smartphone screen can display.
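The article does not name the authors' algorithms, but a standard way to compute the pattern for a single phase-only spatial light modulator (SLM) is iterative phase retrieval in the Gerchberg-Saxton style. The sketch below is a generic illustration under that assumption: the target image is synthetic, and far-field propagation is modeled as a plain FFT.

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50):
    # Start from a random phase guess in the SLM plane, unit amplitude.
    slm_phase = np.random.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iterations):
        # Propagate SLM plane -> image plane (modeled by an FFT).
        image_field = np.fft.fft2(np.exp(1j * slm_phase))
        # Impose the target amplitude, keep the computed phase.
        image_field = target_amp * np.exp(1j * np.angle(image_field))
        # Propagate back and keep only the phase (phase-only SLM).
        slm_phase = np.angle(np.fft.ifft2(image_field))
    return slm_phase

# Synthetic target: a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase_mask = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * phase_mask))) ** 2
```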

Apr 3, 2024

Neural feedback loops, algorithms and consciousness

Posted in categories: information science, neuroscience

Apr 3, 2024

Classical optical neural network exhibits ‘quantum speedup’

Posted in categories: information science, quantum physics, robotics/AI

In recent years, artificial intelligence technologies, especially machine learning algorithms, have made great strides. They have enabled unprecedented efficiency in tasks such as image recognition, natural language generation and processing, and object detection, but this outstanding functionality rests on substantial computational power.

Mar 30, 2024

Novel quantum algorithm proposed for high-quality solutions to combinatorial optimization problems

Posted in categories: information science, quantum physics, robotics/AI

Combinatorial optimization problems (COPs) arise in many fields, including logistics, supply chain management, machine learning, material design, and drug discovery, wherever an optimal solution must be found among a vast number of candidates. These problems are usually very computationally intensive on classical computers, so solving COPs with quantum computers has attracted significant attention from both academia and industry.
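The excerpt does not describe the proposed algorithm, but quantum optimizers (annealers, QAOA) typically take a COP encoded as a QUBO: minimize x^T Q x over binary vectors x. As a minimal, hypothetical illustration, the sketch below encodes Max-Cut on a toy graph as a QUBO and checks the encoding by brute force; the graph is made up.

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
n = 4

# Max-Cut as a QUBO: maximizing the cut equals minimizing
# sum over edges of (2*x_i*x_j - x_i - x_j), since x_i^2 = x_i.
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, j] += 2
    Q[i, i] -= 1
    Q[j, j] -= 1

# Brute-force check of the encoding (feasible only for tiny n;
# quantum or heuristic solvers take over at realistic sizes).
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("optimal partition:", best)  # e.g. (0, 1, 0, 1): cuts 4 of 5 edges
```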

Mar 27, 2024

Transcerebral information coordination in directional hippocampus-prefrontal cortex network during working memory based on bimodal neural electrical signals

Posted in categories: information science, neuroscience

Working memory (WM) is an advanced cognitive function that requires the participation and cooperation of multiple brain regions. The hippocampus and prefrontal cortex are the brain regions chiefly responsible for WM, and exploring information coordination between them during WM is a frontier problem in cognitive neuroscience. In this paper, an information-theoretic analysis based on bimodal neural electrical signals (local field potentials, LFPs, and spikes) was employed to characterize transcerebral information coordination across the two regions. First, LFPs and spikes were recorded simultaneously from rat hippocampus and prefrontal cortex during a WM task using a multi-channel in vivo recording technique. Then, from the perspective of information theory, directional hippocampus-prefrontal cortex networks were constructed using a transfer entropy algorithm based on spectral coherence between LFPs and spikes. Finally, transcerebral coordination of bimodal information at the brain-network level was investigated during acquisition and performance of the WM task. The results show that transfer entropy in the directional hippocampus-prefrontal cortex networks is related to the acquisition and performance of WM. During acquisition, the information flow, local information transmission ability, and information transmission efficiency of the directional hippocampus-prefrontal networks increase over learning days. During performance, transfer entropy from the hippocampus to the prefrontal cortex plays the leading role in bimodal information coordination across brain regions, and the hippocampus has a driving effect on the prefrontal cortex. Furthermore, bimodal information coordination in the hippocampus → prefrontal cortex network could predict WM during successful performance of the task.

Keywords: Bimodal neural electrical signals; Graph theory; Transcerebral information coordination; Transfer entropy; Working memory.

© The Author(s), under exclusive licence to Springer Nature B.V. 2022.
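For readers unfamiliar with the measure, transfer entropy TE(X → Y) quantifies how much the past of X improves prediction of Y beyond Y's own past. The sketch below computes it on synthetic binary series; the paper's version operates on LFP/spike signals and spectral coherence, so this is only a minimal illustration of the directed statistic.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X->Y) = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1|y_t,x_t) / p(y_t+1|y_t) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (future y, past y, past x)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]         # p(y1 | y0, x0)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1)   # y is x delayed by one step, so X drives Y
y[0] = 0
print(transfer_entropy(x, y))   # ~1 bit: strong X -> Y flow
print(transfer_entropy(y, x))   # ~0 bits: no flow in reverse
```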

Mar 27, 2024

Machines Are on the Verge of Tackling Fermat’s Last Theorem—a Proof That Once Defied Them

Posted in category: information science

Advanced algorithms are now deciphering what was once the domain of pure human intellect.

Mar 27, 2024

AI’s Learning Path: Surprising Uniformity Across Neural Networks

Posted in categories: information science, robotics/AI

Summary: Neural networks, regardless of their complexity or training method, follow a surprisingly uniform path from ignorance to expertise in image classification tasks. Researchers found that neural networks classify images by identifying the same low-dimensional features, such as ears or eyes, debunking the assumption that different learning methods produce vastly different solutions.

This finding could pave the way for developing more efficient AI training algorithms, potentially reducing the significant computational resources currently required. The research, grounded in information geometry, hints at a more streamlined future for AI development, where understanding the common learning path of neural networks could lead to cheaper and faster training methods.
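One hypothetical way to test whether two differently trained networks rely on the same low-dimensional features is to compare the principal subspaces of their internal representations via principal angles. The sketch below does this on synthetic feature matrices that share a common low-rank structure; a real experiment would use penultimate-layer activations on the same images.

```python
import numpy as np

rng = np.random.default_rng(1)
common = rng.normal(size=(1000, 5))   # shared low-dimensional features
feats_a = common @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(1000, 64))
feats_b = common @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(1000, 64))

def top_subspace(feats, k=5):
    # Orthonormal basis for the top-k principal subspace of the features.
    feats = feats - feats.mean(axis=0)
    u, s, vt = np.linalg.svd(feats, full_matrices=False)
    return vt[:k].T   # shape (64, k)

# Singular values of Qa^T Qb are the cosines of the principal angles:
# values near 1 mean the two representations span nearly the same directions.
qa, qb = top_subspace(feats_a), top_subspace(feats_b)
cosines = np.linalg.svd(qa.T @ qb, compute_uv=False)
print(np.round(cosines, 3))   # close to [1, 1, 1, 1, 1]
```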

Mar 24, 2024

God’s Number Revealed: 20 Moves Proven Enough to Solve Any Rubik’s Cube Position

Posted in categories: alien life, computing, information science, mathematics

Year 2010 😗😁

The world has waited with bated breath for three decades, and now finally a group of academics, engineers, and math geeks has discovered the number that explains life, the universe, and everything. That number is 20, and it’s the maximum number of moves it takes to solve a Rubik’s Cube.

Known as God’s Number, the magic number required about 35 CPU-years and a good deal of man-hours to find. Why? Because there are 43,252,003,274,489,856,000 possible positions of the cube, and the computer algorithm that finally cracked God’s Algorithm had to solve them all. (The terms God’s Number/Algorithm are derived from the fact that if God were solving a Cube, he/she/it would do it in the most efficient way possible. The Creator did not endorse this study, and could not be reached for comment.)
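That position count follows from a standard counting argument, which is quick to verify:

```python
from math import factorial

# 8 corner pieces can be permuted, and 7 of their 3-way orientations chosen
# freely; 12 edge pieces can be permuted, and 11 of their flips chosen
# freely; a parity constraint then halves the total.
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(positions)  # 43252003274489856000
```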

Mar 24, 2024

Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing

Posted in categories: information science, particle physics, robotics/AI

Bayesian neural networks (BNNs) combine the generalizability of deep neural networks (DNNs) with a rigorous quantification of predictive uncertainty, which mitigates overfitting and makes them valuable for high-reliability or safety-critical applications. However, the probabilistic nature of BNNs makes them more computationally intensive on digital hardware and so far, less directly amenable to acceleration by analog in-memory computing as compared to DNNs. This work exploits a novel spintronic bit cell that efficiently and compactly implements Gaussian-distributed BNN values. Specifically, the bit cell combines a tunable stochastic magnetic tunnel junction (MTJ) encoding the trained standard deviation and a multi-bit domain-wall MTJ device independently encoding the trained mean. The two devices can be integrated within the same array, enabling highly efficient, fully analog, probabilistic matrix-vector multiplications. We use micromagnetics simulations as the basis of a system-level model of the spintronic BNN accelerator, demonstrating that our design yields accurate, well-calibrated uncertainty estimates for both classification and regression problems and matches software BNN performance. This result paves the way to spintronic in-memory computing systems implementing trusted neural networks at a modest energy budget.

The powerful ability of deep neural networks (DNNs) to generalize has driven their wide proliferation in the last decade to many applications. However, particularly in applications where the cost of a wrong prediction is high, there is a strong desire for algorithms that can reliably quantify the confidence in their predictions (Jiang et al., 2018). Bayesian neural networks (BNNs) can provide the generalizability of DNNs, while also enabling rigorous uncertainty estimates by encoding their parameters as probability distributions learned through Bayes’ theorem such that predictions sample trained distributions (MacKay, 1992). Probabilistic weights can also be viewed as an efficient form of model ensembling, reducing overfitting (Jospin et al., 2022). In spite of this, the probabilistic nature of BNNs makes them slower and more power-intensive to deploy in conventional hardware, due to the large number of random number generation operations required (Cai et al., 2018a).
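Functionally, each bit cell stores a Gaussian weight: a trained mean (the multi-bit domain-wall MTJ) and a trained standard deviation (the tunable stochastic MTJ), and every matrix-vector multiply draws a fresh weight sample. The sketch below mimics that behavior in software; shapes and values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
mean = rng.normal(size=(16, 8))   # trained weight means (domain-wall MTJs)
std = 0.1 * np.ones((16, 8))      # trained weight std devs (stochastic MTJs)
x = rng.normal(size=8)            # input activation vector

def probabilistic_mvm(mean, std, x, rng):
    # One stochastic pass: sample W ~ N(mean, std^2), then compute W @ x.
    w = rng.normal(loc=mean, scale=std)
    return w @ x

# Monte Carlo over passes: the spread of outputs is the uncertainty estimate.
samples = np.stack([probabilistic_mvm(mean, std, x, rng) for _ in range(100)])
print(samples.mean(axis=0))   # predictive mean
print(samples.std(axis=0))    # predictive uncertainty
```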
