Archive for the ‘information science’ category: Page 2

Nov 12, 2024

New ‘gold-plated’ superconductor could be the foundation for massively scaled-up quantum computers in the future

Posted by in categories: computing, information science, quantum physics

The interface superconductor underwent a transition under a magnetic field and became more robust, the scientists said in the paper. This suggests it has transformed into a “triplet superconductor,” a type of superconductor that is more resistant to magnetic fields than conventional superconductors.

They conducted the research in conjunction with the National Institute of Standards and Technology. In earlier work, they demonstrated that thin films of gold and niobium naturally suppress decoherence — the loss of quantum properties due to external environmental interference.

Given its robust quantum qualities and its ability to suppress decoherence, this new superconducting material promises to be ideal for use in quantum computers, the scientists said. Minimizing decoherence is a key challenge in quantum computing: it requires extreme measures to isolate the machine from external influences, such as shifts in temperature or electromagnetic interference, as well as error-correcting algorithms to keep calculations accurate.

Nov 11, 2024

Autonomous mobile robots for exploratory synthetic chemistry

Posted by in categories: chemistry, information science, robotics/AI

Autonomous laboratories can accelerate discoveries in chemical synthesis, but this requires automated measurements coupled with reliable decision-making.


Much progress has been made towards diversifying automated synthesis platforms4,5,19 and increasing their autonomous capabilities9,14,15,20,21,22. So far, most platforms use bespoke engineering and physically integrated analytical equipment6. The associated cost, complexity and proximal monopolization of analytical equipment means that single, fixed characterization techniques are often favoured in automated workflows, rather than drawing on the wider array of analytical techniques available in most synthetic laboratories. This forces any decision-making algorithms to operate with limited analytical information, unlike more multifaceted manual approaches. Hence, closed-loop autonomous chemical synthesis often bears little resemblance to human experimentation, either in the laboratory infrastructure required or in the decision-making steps.

We showed previously11 that free-roaming mobile robots could be integrated into existing laboratories to perform experiments by emulating the physical operations of human scientists. However, that first workflow was limited to one specific type of chemistry—photochemical hydrogen evolution—and the only measurement available was gas chromatography, which gives a simple scalar output. Subsequent studies involving mobile robots also focused on the optimization of catalyst performance12,13. These benchtop catalysis workflows11,12,13 cannot carry out more general synthetic chemistry, for example, involving organic solvents, nor can they measure and interpret more complex characterization data, such as NMR spectra. The algorithmic decision-making was limited to maximizing catalyst performance11, which is analogous to autonomous synthesis platforms that maximize yield for a reaction using NMR23 or chromatographic10,24 peak areas.
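To make the decision-making step concrete: closed-loop platforms of this kind typically feed a scalar measurement, such as a chromatographic peak area standing in for yield, back into an optimizer that proposes the next set of reaction conditions. The sketch below illustrates that loop with a Gaussian-process surrogate and an expected-improvement rule; the `run_reaction` function, the condition ranges, and the choice of optimizer are illustrative assumptions, not the authors' workflow.

```python
# Minimal closed-loop optimization sketch (illustrative only): a scalar yield
# proxy (e.g. a chromatographic peak area) drives selection of the next
# reaction conditions. `run_reaction` is a hypothetical stand-in for the
# automated synthesis + measurement step.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_reaction(conditions):
    """Hypothetical: dispatch conditions to the robot, return a yield proxy."""
    temp, equiv = conditions
    return float(np.exp(-((temp - 70) / 20) ** 2 - (equiv - 1.5) ** 2)
                 + 0.02 * rng.standard_normal())

# Candidate grid of (temperature in C, reagent equivalents).
grid = np.array([[t, e] for t in np.linspace(25, 120, 20)
                        for e in np.linspace(0.5, 3.0, 20)])

# Seed with a few random experiments, then iterate: fit a surrogate model,
# pick the candidate with the highest expected improvement, run it, repeat.
X = grid[rng.choice(len(grid), 5, replace=False)]
y = np.array([run_reaction(x) for x in X])

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_reaction(x_next))

print("best conditions found:", X[np.argmax(y)], "yield proxy:", y.max())
```

A multi-instrument workflow of the kind described above would replace the single scalar with richer characterization data, such as NMR spectra, and a correspondingly richer decision rule.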


Nov 11, 2024

New Spectral Camera Uses AI to Boost Farm Yields by 20%

Posted by in categories: biotech/medical, food, health, information science, robotics/AI

A team of EU scientists is developing a new advanced camera that uses photonics to reveal what the eye cannot see. This innovative system is being developed to transform various industries, including vertical farming. It will allow farmers growing crops like salads, herbs, and microgreens to detect plant diseases early, monitor crop health with precision, and optimise harvest times — boosting yields by up to 20%.

A European consortium funded under the Photonics Partnership is developing an imaging platform designed to ensure that everything from crops to factory products is of the highest quality by detecting features the human eye simply cannot see.

Called ‘HyperImage’, the project aims to revolutionise quality assurance and operational efficiency across different sectors. The imaging system uses machine-learning algorithms to identify objects, supporting more precise decision-making.
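As a rough illustration of how spectral data can drive such decisions, a standard approach contrasts reflectance in a few informative bands, for example the NDVI index, which compares near-infrared and red reflectance to flag stressed vegetation before damage is visible. The sketch below is a generic example; the band positions, the threshold, and the data layout are assumptions, not the HyperImage design.

```python
# Per-pixel vegetation-stress flag from a hyperspectral cube, a minimal sketch.
# NDVI = (NIR - red) / (NIR + red); low values suggest stressed or diseased plants.
import numpy as np

rng = np.random.default_rng(4)
cube = rng.uniform(0.0, 1.0, size=(100, 100, 200))    # (rows, cols, spectral bands)
wavelengths = np.linspace(400, 1000, 200)              # nm, illustrative band centres

red = cube[:, :, np.argmin(np.abs(wavelengths - 670))]
nir = cube[:, :, np.argmin(np.abs(wavelengths - 800))]
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

stressed = ndvi < 0.3                                  # illustrative threshold
print(f"flagged {stressed.mean():.1%} of pixels for inspection")
```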

Nov 9, 2024

Forget Black Holes—White Holes Would Break Your Puny Brain

Posted by in categories: cosmology, evolution, information science, mathematics, neuroscience, physics

White holes, the theoretical opposites of black holes, could expel matter instead of absorbing it. Unlike black holes, whose event horizon traps everything, white holes would prevent anything from entering. While no white holes have been observed, they remain an intriguing mathematical possibility. Some astrophysicists have speculated that gamma ray bursts could be linked to white holes, and even the Big Bang might be explained by a massive white hole. Although the second law of thermodynamics presents a challenge, studying these singularities could revolutionize our understanding of space-time and cosmic evolution.

One reader, Harry, left a comment on the article that gained more than 724 upvotes: “It amazes me how Einstein’s theory and equations branched off into so many other theoretical phenomena. Legend legacy.”

Black holes may well be the most intriguing enigmas in the Universe. Believed to be the collapsed remnants of dead stars, these objects are renowned for one characteristic in particular – anything that goes in never comes out.

Nov 9, 2024

A physicist and his cat ‘reveal’ the equation of cat motion

Posted by in categories: information science, physics

In the social media age, there is little doubt about who is the star of the animal kingdom. Cats rule the screens just as their cousins, the lions, rule the savanna. Thanks to Erwin Schrödinger, the cat also holds a place of honor in the history of physics. And it was Eme the cat that inspired Anxo Biasi, a researcher at the Instituto Galego de Física de Altas Enerxías (IGFAE), to publish an article in the American Journal of Physics.

Nov 8, 2024

A prosthesis driven by the nervous system helps people with amputation walk naturally

Posted by in categories: biotech/medical, cyborgs, information science, robotics/AI

State-of-the-art prosthetic limbs can help people with amputations achieve a natural walking gait, but they don’t give the user full neural control over the limb. Instead, they rely on robotic sensors and controllers that move the limb using predefined gait algorithms.

Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space.

In a study of seven patients who had this surgery, the MIT team found that the participants were able to walk faster, avoid obstacles, and climb stairs much more naturally than people with a traditional amputation.

Nov 2, 2024

Decomposing causality into its synergistic, unique, and redundant components

Posted by in categories: futurism, information science

Information theory, the science of message communication44, has also served as a framework for model-free causality quantification. The success of information theory relies on the notion of information as a fundamental property of physical systems, closely tied to the restrictions and possibilities of the laws of physics45,46. The grounds for causality as information are rooted in the intimate connection between information and the arrow of time. Time-asymmetries present in the system at a macroscopic level can be leveraged to measure the causality of events using information-theoretic metrics based on the Shannon entropy44. The initial applications of information theory for causality were formally established through the use of conditional entropies, employing what is known as directed information47,48. Among the most recognized contributions is transfer entropy (TE)49, which measures the reduction in entropy about the future state of a variable by knowing the past states of another. Various improvements have been proposed to address the inherent limitations of TE. Among them, we can cite conditional transfer entropy (CTE)50,51,52,53, which stands as the nonlinear, nonparametric extension of conditional GC27. Subsequent advancements of the method include multivariate formulations of CTE45 and momentary information transfer54, which extends TE by examining the transfer of information at each time step. Other information-theoretic methods, derived from dynamical system theory55,56,57,58, quantify causality as the amount of information that flows from one process to another as dictated by the governing equations.
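As an illustration of the transfer-entropy idea, TE from X to Y measures how much the past of X reduces uncertainty about the next value of Y beyond what Y's own past already provides: TE_{X→Y} = H(Y_t | Y_{t-1}) − H(Y_t | Y_{t-1}, X_{t-1}) for a lag of one. The sketch below uses a simple plug-in estimator on binned data; the binning, the single lag, and the toy signals are simplifications, not the estimator used in the paper.

```python
# Plug-in transfer entropy on discretized series, a minimal sketch:
# TE_{X->Y} = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1}), lag fixed at 1.
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, bins=8):
    # Discretize both series into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_t, y_p, x_p = yd[1:], yd[:-1], xd[:-1]          # future Y, past Y, past X

    def joint(*series):
        keys = np.stack(series, axis=1)
        _, counts = np.unique(keys, axis=0, return_counts=True)
        return counts

    # H(Y_t | Y_p) = H(Y_t, Y_p) - H(Y_p)
    h_y_given_yp = entropy(joint(y_t, y_p)) - entropy(joint(y_p))
    # H(Y_t | Y_p, X_p) = H(Y_t, Y_p, X_p) - H(Y_p, X_p)
    h_y_given_ypxp = entropy(joint(y_t, y_p, x_p)) - entropy(joint(y_p, x_p))
    return h_y_given_yp - h_y_given_ypxp

# Toy example: y is driven by the past of x, so TE(x -> y) > TE(y -> x).
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
print("TE x->y:", transfer_entropy(x, y), " TE y->x:", transfer_entropy(y, x))
```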

Another family of methods for causal inference relies on conducting conditional independence tests. This approach was popularized by the Peter-Clark algorithm (PC)59, with subsequent extensions incorporating tests for momentary conditional independence (PCMCI)23,60. PCMCI aims to optimally identify a reduced conditioning set that includes the parents of the target variable61. This method has been shown to be effective in accurately detecting causal relationships while controlling for false positives23. Recently, new PCMCI variants have been developed for identifying contemporaneous links62, latent confounders63, and regime-dependent relationships64.
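The core operation inside PC-style algorithms is a conditional independence test: does the dependence between X and Y vanish once a conditioning set Z is accounted for? A minimal linear (partial-correlation) version of such a test is sketched below on a common-driver example; the actual PC and PCMCI algorithms add careful conditioning-set selection, time lags, and false-positive control, and may use other test statistics.

```python
# Partial-correlation conditional independence test, a minimal sketch of the
# kind of test used inside PC/PCMCI (linear-Gaussian assumption).
import numpy as np
from scipy import stats

def partial_corr_test(x, y, z):
    """Test X _||_ Y | Z by correlating the residuals of X and Y regressed on Z."""
    Z = np.column_stack([np.ones(len(x)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r, _ = stats.pearsonr(rx, ry)
    dof = len(x) - Z.shape[1] - 1
    t = r * np.sqrt(dof / max(1 - r**2, 1e-12))
    p = 2 * stats.t.sf(abs(t), dof)
    return r, p

# Toy chain Z -> X and Z -> Y: X and Y are correlated, but conditionally
# independent given the common driver Z.
rng = np.random.default_rng(2)
z = rng.standard_normal(2000)
x = 0.8 * z + rng.standard_normal(2000)
y = 0.8 * z + rng.standard_normal(2000)
print("marginal corr:", stats.pearsonr(x, y)[0])
print("partial corr | z, p-value:", partial_corr_test(x, y, z.reshape(-1, 1)))
```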

The methods for causal inference discussed above have significantly advanced our understanding of cause-effect interactions in complex systems. Despite the progress, current approaches face limitations in the presence of nonlinear dependencies, stochastic interactions (i.e., noise), self-causation, mediator, confounder, and collider effects, to name a few. Moreover, they are not capable of classifying causal interactions as redundant, unique, and synergistic, which is crucial to identify the fundamental relationships within the system. Another gap in existing methodologies is their inability to quantify causality that remains unaccounted for due to unobserved variables. To address these shortcomings, we propose SURD: Synergistic-Unique-Redundant Decomposition of causality. SURD offers causal quantification in terms of redundant, unique, and synergistic contributions and provides a measure of the causality from hidden variables. The approach can be used to detect causal relationships in systems with multiple variables, dependencies at different time lags, and instantaneous links. We demonstrate the performance of SURD across a large collection of scenarios that have proven challenging for causal inference and compare the results to previous approaches.

Nov 1, 2024

Ultra-low-power neuromorphic hardware shows promise for energy-efficient AI computation

Posted by in categories: information science, internet, robotics/AI

A team including researchers from Seoul National University College of Engineering has developed neuromorphic hardware capable of performing artificial intelligence (AI) computations with ultra-low power consumption. The research, published in the journal Nature Nanotechnology, addresses fundamental issues in existing intelligent semiconductor materials and devices while demonstrating potential for array-level technology.

Currently, vast amounts of power are consumed in parallel computing for processing big data in various fields such as the Internet of Things (IoT), user data analytics, generative AI, large language models (LLMs), and autonomous driving. However, the conventional silicon-based CMOS semiconductor computing used for parallel computation faces problems such as high energy consumption, slower memory and processor speeds, and the physical limitations of high-density processes. This results in energy and carbon emission issues, despite AI’s positive contributions to daily life.

To address these challenges, it is necessary to overcome the limitations of the conventional digital von Neumann computing architecture. As such, the development of next-generation intelligent semiconductor-based neuromorphic hardware that mimics the working principles of the human brain has emerged as a critical task.
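The appeal of in-memory neuromorphic designs is easiest to see numerically: if weights are stored as device conductances in a crossbar array, applying input voltages to the rows produces column currents that are already the matrix-vector product, so no weights travel between memory and processor. The sketch below simulates that idea with a differential conductance pair and simple programming noise; the conductance range, noise level, and mapping are hypothetical values for illustration, not the devices reported in the paper.

```python
# Analog in-memory matrix-vector multiply on a crossbar, a minimal sketch:
# weights are stored as device conductances G, inputs are applied as row
# voltages V, and each column current I_j = sum_i V_i * G_ij is the dot
# product, computed where the data lives (no weight movement).
import numpy as np

rng = np.random.default_rng(3)

weights = rng.uniform(-1, 1, size=(4, 8))   # logical weights (4 outputs x 8 inputs)
g_max = 1e-6                                # hypothetical max device conductance (S)

# Map signed weights onto a differential pair of non-negative conductances.
g_pos = g_max * np.clip(weights, 0, None)
g_neg = g_max * np.clip(-weights, 0, None)

def program_noise(g):
    """Simple non-ideality model: multiplicative programming noise per device."""
    return g * (1 + 0.05 * rng.standard_normal(g.shape))

g_pos, g_neg = program_noise(g_pos), program_noise(g_neg)

v_in = rng.uniform(0, 0.2, size=8)          # input voltages (V)

# Kirchhoff current summation per column pair, then differential read-out.
i_out = g_pos @ v_in - g_neg @ v_in         # amperes
y_analog = i_out / (g_max * 0.2)            # rescale back to logical units

y_digital = weights @ (v_in / 0.2)          # ideal digital reference
print("analog :", np.round(y_analog, 3))
print("digital:", np.round(y_digital, 3))
```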

Oct 29, 2024

Idaho State Researcher Develops Algorithm to Model Brain Activity

Posted by in categories: biotech/medical, information science, robotics/AI

Thanks to an algorithm created by an Idaho State University professor, the way engineers, doctors, and physicists tackle the hard questions in their respective fields could all change.

Emanuele Zappala, an assistant professor of mathematics at ISU, and his colleagues at Yale have developed the Attentional Neural Integral Equations algorithm, or ANIE for short. Their work was recently published in Nature Machine Intelligence and describes how ANIE can model large, complex systems using data alone.

“Natural phenomena, everything from plasma physics to how viruses spread, are all governed by equations which we do not fully understand,” explains Zappala. “One of the main complexities lies in long-distance relations between different data points in the systems over space and time. What ANIE does is it allows us to learn these complex systems using just those known data points.”
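One way to picture the underlying idea (this is a generic sketch, not the published ANIE implementation): treat self-attention over sampled space-time points as a learned integral operator, so that the attention weights play the role of a kernel capturing exactly those long-distance relations, and solve the resulting integral equation by a few fixed-point iterations.

```python
# A minimal sketch (not the published ANIE code) of the core idea: use
# self-attention over sampled points as a learned integral operator and
# solve y = f + Integral[y] by fixed-point iteration.
import torch
import torch.nn as nn

class AttentionIntegralOperator(nn.Module):
    """Self-attention as a data-driven approximation of an integral operator."""
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mix = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, y):
        # Attention weights act like a learned kernel K(t, s); the weighted
        # sum over points approximates the integral over s.
        integral, _ = self.attn(y, y, y)
        return self.mix(integral)

class NeuralIntegralEquation(nn.Module):
    def __init__(self, in_dim=1, dim=32, iters=4):
        super().__init__()
        self.embed = nn.Linear(in_dim + 1, dim)      # value + time coordinate
        self.operator = AttentionIntegralOperator(dim)
        self.readout = nn.Linear(dim, in_dim)
        self.iters = iters

    def forward(self, f, t):
        # f: (batch, points, in_dim) known source term sampled at times t.
        y = self.embed(torch.cat([f, t], dim=-1))
        for _ in range(self.iters):                  # Picard-style iteration
            y = y + self.operator(y)
        return self.readout(y)

# Toy usage: 64 time points, scalar observations.
t = torch.linspace(0, 1, 64).reshape(1, 64, 1)
f = torch.sin(2 * torch.pi * t)
model = NeuralIntegralEquation()
print(model(f, t).shape)                             # torch.Size([1, 64, 1])
```

Training such a model against observed trajectories would then amount to fitting the learned operator from data alone, without writing down the governing equations.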

Oct 29, 2024

Michael Levin: What is Synthbiosis? Diverse Intelligence Beyond AI & The Space of Possible Minds

Posted by in categories: bioengineering, biotech/medical, cyborgs, education, ethics, genetics, information science, robotics/AI

Michael Levin is a Distinguished Professor in the Biology department at Tufts University and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University. @drmichaellevin holds the Vannevar Bush endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Prior to college, Michael Levin worked as a software engineer and independent contractor in the field of scientific computing. He attended Tufts University, where he became interested in artificial intelligence and unconventional computation. To explore the algorithms by which the biological world implements complex adaptive behavior, he earned dual B.S. degrees in computer science and biology, then received a PhD from Harvard University. He did post-doctoral training at Harvard Medical School, where he began to uncover a new bioelectric language by which cells coordinate their activity during embryogenesis. His independent laboratory develops new molecular-genetic and conceptual tools to probe large-scale information processing in regeneration, embryogenesis, and cancer suppression.

TIMESTAMPS:
0:00 — Introduction.
1:41 — Creating High-level General Intelligences.
7:00 — Ethical implications of Diverse Intelligence beyond AI & LLMs.
10:30 — Solving the Fundamental Paradox that faces all Species.
15:00 — Evolution creates Problem Solving Agents & the Self is a Dynamical Construct.
23:00 — Mike on Stephen Grossberg.
26:20 — A Formal Definition of Diverse Intelligence (DI)
30:50 — Intimate relationships with AI? Importance of Cognitive Light Cones.
38:00 — Cyborgs, hybrids, chimeras, & a new concept called “Synthbiosis.”
45:51 — Importance of the symbiotic relationship between Science & Philosophy.
53:00 — The Space of Possible Minds.
58:30 — Is Mike Playing God?
1:02:45 — A path forward: through the ethics filter for civilization.
1:09:00 — Mike on Daniel Dennett (RIP)
1:14:02 — An Ethical Synthbiosis that goes beyond “are you real or faking it.”
1:25:47 — Conclusion.


Page 2 of 322