Archive for the ‘information science’ category: Page 36

Jan 23, 2024

Astrophysicists offer theoretical proof of traversable wormholes in the expanding universe

Posted in categories: cosmology, evolution, information science, physics

At certain stages of its evolution, the expansion of the universe is well described by the Friedmann model. Derived from general relativity a hundred years ago, it is still considered one of the most important and relevant cosmological models.
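For context, the Friedmann model is governed by the Friedmann equation (standard textbook form, not quoted from the article), which relates the expansion rate to the universe’s energy content, spatial curvature, and cosmological constant:

```latex
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}} \;+\; \frac{\Lambda c^{2}}{3}
```

Here a(t) is the scale factor, ρ the energy density, and k the spatial curvature parameter.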

RUDN University astrophysicists have now proven the theoretical possibility of the existence of traversable wormholes in the Friedmann universe. The research is published in the journal Universe.

“A wormhole is a type of highly curved geometry. It resembles a tunnel either between distant regions of the same universe or between different universes. Such structures were first discussed in the framework of solutions to the gravitational field equations a hundred years ago. But the wormholes considered then turned out to be non-traversable even for photons—they could not move from one ‘end of the tunnel’ to the other, not to mention going back,” said Kirill Bronnikov, doctor of physical and mathematical sciences, professor of RUDN University.

Jan 23, 2024

Robotic Breakthrough Mimics Human Walking Efficiency

Posted in categories: biotech/medical, cyborgs, information science, robotics/AI

The article repeats itself a bit, but there are some good parts about an exoskeleton, an advanced algorithm, bipedal robots, and prosthetics. It will basically apply to those future industries.


We typically don’t think about it whilst doing it, but walking is a complicated task. Controlled by our nervous system, our bones, joints, muscles, tendons, ligaments and other connective tissues (i.e., the musculoskeletal system) must move in coordination and respond to unexpected changes or disturbances at varying speeds in a highly efficient manner. Replicating this in robotic technologies is no small feat.

Now, a research group from Tohoku University Graduate School of Engineering has replicated human-like variable speed walking using a musculoskeletal model – one steered by a reflex control method reflective of the human nervous system. This breakthrough in biomechanics and robotics sets a new benchmark in understanding human movement and paves the way for innovative robotic technologies.

Jan 22, 2024

Can Mamba bite ChatGPT? OpenAI rival ‘outperforms’ AI language models

Posted in categories: information science, robotics/AI

Another landmark invention in the AI industry?

A recent algorithm breakthrough is shaking things up in the machine learning discussion groups.


Mamba, a breakthrough algorithm, challenges the 21st century’s biggest algorithm, Transformer, by achieving superior language modeling, speed, and cost-effectiveness.

Jan 22, 2024

Machine learning models teach each other to identify molecular properties

Posted in categories: biotech/medical, information science, robotics/AI

Biomedical engineers at Duke University have developed a new method to improve the effectiveness of machine learning models. By pairing two machine learning models, one to gather data and one to analyze it, researchers can circumvent limitations of the technology without sacrificing accuracy.

This new technique could make it easier for researchers to use machine learning algorithms to identify and characterize molecules for use in potential new therapeutics or other materials.
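The pairing described above might be sketched, very loosely, like this. The oracle, descriptors, and nearest-neighbor model below are invented for illustration; they are not the Duke team’s actual method:

```python
def expensive_oracle(x):
    # Hypothetical stand-in for a costly simulation or assay that
    # "gathers" a label for a molecular descriptor x (invented here).
    return 1 if x > 0.5 else 0

class NearestNeighborModel:
    """Cheap second model that analyzes the gathered data."""
    def __init__(self):
        self.data = []                      # (descriptor, label) pairs
    def fit(self, x, y):
        self.data.append((x, y))
    def predict(self, x):
        # return the label of the closest gathered point
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

# Model 1 gathers labels for a few descriptors; model 2 analyzes them,
# so new candidates can be screened without calling the oracle again.
analyzer = NearestNeighborModel()
for descriptor in [0.1, 0.45, 0.55, 0.9]:
    analyzer.fit(descriptor, expensive_oracle(descriptor))

print(analyzer.predict(0.2), analyzer.predict(0.8))  # -> 0 1
```

The point of the pairing is that the expensive labeling step runs only a few times, while the cheap model handles all subsequent predictions.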

The research is published in the journal Artificial Intelligence in the Life Sciences.

Jan 22, 2024

Revolutionary Meta-Optical Technology Transforms Thermal Imaging

Posted in categories: biotech/medical, information science, robotics/AI, security

Researchers have created a novel technology utilizing meta-optical devices for thermal imaging. This method offers more detailed information about the objects being imaged, potentially expanding thermal imaging applications in autonomous navigation, security, thermography, medical imaging, and remote sensing.

“Our method overcomes the challenges of traditional spectral thermal imagers, which are often bulky and delicate due to their reliance on large filter wheels or interferometers,” said research team leader Zubin Jacob from Purdue University. “We combined meta-optical devices and cutting-edge computational imaging algorithms to create a system that is both compact and robust while also having a large field of view.”

In Optica, Optica Publishing Group’s journal for high-impact research, the authors describe their new spectro-polarimetric decomposition system, which uses a stack of spinning metasurfaces to break down thermal light into its spectral and polarimetric components. This allows the imaging system to capture the spectral and polarization details of thermal radiation in addition to the intensity information that is acquired with traditional thermal imaging.
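Schematically, the computational-imaging step—recovering spectral components from intensities measured through different metasurface configurations—amounts to inverting a linear measurement model. A toy version (the filter-response matrix and readings are invented, not taken from the paper):

```python
# Each measurement y[i] = sum_j A[i][j] * x[j]: the intensity seen
# through filter configuration i, where x holds the unknown spectral
# band powers. The response matrix A is illustrative only.
A = [[0.9, 0.2],
     [0.3, 0.8]]
y = [1.3, 1.9]          # two measured intensities

# Solve the 2x2 system A x = y in closed form (Cramer's rule).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x0 = (y[0] * A[1][1] - A[0][1] * y[1]) / det
x1 = (A[0][0] * y[1] - y[0] * A[1][0]) / det
print(x0, x1)           # -> 1.0 2.0 (recovered band powers)
```

A real system would use many more measurements than unknowns and a regularized solver, but the inversion idea is the same.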

Jan 21, 2024

Dark energy is one of the biggest puzzles in science and we’re now a step closer to understanding it

Posted in categories: cosmology, information science, mapping, quantum physics, science

Over ten years ago, the Dark Energy Survey (DES) began mapping the universe to find evidence that could help us understand the nature of the mysterious phenomenon known as dark energy. I’m one of more than 100 contributing scientists who have helped produce the final DES measurement, which has just been released at the 243rd American Astronomical Society meeting in New Orleans.

Dark energy is estimated to make up nearly 70% of the universe, yet we still don’t understand what it is. While its nature remains mysterious, the impact of dark energy is felt on grand scales. Its primary effect is to drive the accelerating expansion of the universe.

The announcement in New Orleans may take us closer to a better understanding of this form of energy. Among other things, it gives us the opportunity to test our observations against an idea called the cosmological constant that was introduced by Albert Einstein in 1917 as a way of counteracting the effects of gravity in his equations to achieve a universe that was neither expanding nor contracting. Einstein later removed it from his calculations.
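For reference, the cosmological constant appears as the Λ term in Einstein’s field equations (standard form, not quoted from the article):

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

With Λ > 0 the term acts as a repulsive contribution: originally tuned to balance gravitational attraction in a static universe, it is now the simplest candidate explanation for the accelerating expansion.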

Jan 19, 2024

Calculus on Computational Graphs: Backpropagation

Posted in categories: information science, robotics/AI

Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years.

Beyond its use in deep learning, backpropagation is a powerful computational tool in many other areas, ranging from weather forecasting to analyzing numerical stability—it just goes by different names. In fact, the algorithm has been reinvented dozens of times in different fields (see Griewank (2010)). The general, application-independent name is “reverse-mode differentiation.”

Fundamentally, it’s a technique for calculating derivatives quickly. And it’s an essential trick to have in your bag, not only in deep learning, but in a wide variety of numerical computing situations.
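A toy illustration of reverse-mode differentiation on a computational graph (a deliberately minimal sketch, not any particular library’s API):

```python
class Var:
    """Minimal reverse-mode autodiff node (toy implementation)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # (parent, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then push the gradient
        # backwards through the graph via the chain rule.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)  # -> 4.0 2.0
```

Each node stores only the local derivatives to its parents; `backward()` then applies the chain rule from the output toward the inputs, which is why a single backward pass yields the gradient with respect to every input at once—the source of the speedup described above.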

Jan 19, 2024

The quantum equation suggests that the Big Bang never happened and that the universe has no beginning

Posted in categories: cosmology, information science, quantum physics

The cosmos may have existed forever, according to a model that extends Einstein’s theory of general relativity with quantum correction terms. By taking dark matter and dark energy into account, the model can address several longstanding puzzles at once.

Jan 17, 2024

Dangers of superintelligence | Separating sci-fi from plausible speculation

Posted in categories: employment, governance, information science, robotics/AI

Just after filming this video, Sam Altman, CEO of OpenAI, published a blog post about the governance of superintelligence in which he, along with Greg Brockman and Ilya Sutskever, outlines their thinking about how the world should prepare for a world with superintelligences. And just before filming, Geoffrey Hinton quit his job at Google so that he could speak more openly about his concerns over the imminent arrival of an artificial general intelligence, an AGI that could soon get beyond our control if it became superintelligent. So the basic idea is moving from sci-fi speculation to a plausible scenario—but how powerful will these systems be, and which of the concerns about super-AI are reasonably founded? In this video I explore the ideas around superintelligence with Nick Bostrom’s 2014 book, Superintelligence, as one of our guides and Geoffrey Hinton’s interviews as another, to try to unpick which aspects are plausible and which are more like speculative sci-fi. I explore the dangers, such as Eliezer Yudkowsky’s notion of a rapid ‘foom’ takeover of humanity, and also look briefly at the control problem and the alignment problem. At the end of the video I make a suggestion for how we could delay the arrival of superintelligence: withholding the ability of the algorithms to improve themselves—withholding what you could call meta-level agency.


Jan 16, 2024

Toward Early Fault-tolerant Quantum Computing

Posted in categories: computing, information science, quantum physics

This article introduces new approaches toward early fault-tolerant quantum computing (early-FTQC), such as improving the efficiency of quantum computation on encoded data, new circuit-efficiency techniques for quantum algorithms, and combining error-mitigation techniques with fault-tolerant quantum computation.

Yuuki Tokunaga, NTT Computer and Data Science Laboratories.

Noisy intermediate-scale quantum (NISQ) computers, which do not execute quantum error correction, do not require overhead for encoding. However, because errors inevitably accumulate, there is a limit to computation size. Fault-tolerant quantum computers (FTQCs) carry out computation on encoded qubits, so they incur encoding overhead and require quantum computers of at least a certain size. The gap between NISQ computers and FTQCs due to this overhead is shown in Fig. 1. Is this gap unavoidable? Decades ago, many researchers would have said yes. However, our team has recently demonstrated a new, unprecedented method to overcome this gap. Motivation to overcome it has also led to a research trend that started at around the same time worldwide. These efforts, collectively called early fault-tolerant quantum computing (“early-FTQC”), have become a worldwide research movement.
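One of the error-mitigation techniques alluded to above, zero-noise extrapolation, can be illustrated with a toy model. The linear noise model and the numbers here are invented for illustration; real devices need repeated circuit executions and more careful extrapolation:

```python
# Toy zero-noise extrapolation: evaluate the "circuit" at amplified
# noise levels, then extrapolate the expectation value back to zero.
def noisy_expectation(noise_scale, ideal=1.0, error_rate=0.08):
    # Expectation value decays linearly with noise level (toy model,
    # standing in for measurements on a real noisy device).
    return ideal - error_rate * noise_scale

scales = [1.0, 2.0]                      # noise amplification factors
values = [noisy_expectation(s) for s in scales]

# Richardson (linear) extrapolation to noise_scale = 0
slope = (values[1] - values[0]) / (scales[1] - scales[0])
zero_noise_estimate = values[0] - slope * scales[0]
print(zero_noise_estimate)               # -> 1.0 (recovers the ideal value)
```

Techniques like this cost extra circuit runs instead of extra qubits, which is why combining them with partial error correction is attractive in the early-FTQC regime the article describes.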
