
Archive for the ‘information science’ category: Page 3

Dec 12, 2024

PICNIC accurately predicts condensate-forming proteins regardless of their structural disorder across organisms

Posted by in categories: information science, robotics/AI

Here the authors report PICNIC (Proteins Involved in CoNdensates In Cells), a machine learning algorithm that predicts that approximately 40–60% of proteins form condensates in various organisms, with no clear relationship to the complexity of the organism or to its content of disordered proteins.
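As a rough illustration of what a condensate-propensity classifier does, the sketch below trains a gradient-boosted model on synthetic per-protein features and asks it for a condensate probability. The feature set, labels and model choice are placeholders for illustration, not PICNIC's actual inputs or architecture.

```python
# Toy sketch of a condensate-propensity classifier in the spirit of PICNIC.
# Features and labels are synthetic placeholders, not the tool's real training data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),        # hypothetical predicted disorder fraction
    rng.uniform(0, 0.5, n),      # hypothetical low-complexity region fraction
    rng.integers(100, 2000, n),  # sequence length
])
# Synthetic labels loosely tied to the first two features, for illustration only
y = (0.6 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.2, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print(f"condensate probability for one protein: {clf.predict_proba(X_te[:1])[0, 1]:.2f}")
```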

Dec 12, 2024

AI tool will be able to trace dolphins by their regional accent

Posted by in categories: information science, robotics/AI

Sea mammal expert Dr Julie Oswald, of the University of St Andrews’ Scottish Oceans Institute, created the tool, known as the Real-time Odontocete Call Classification Algorithm (Rocca), using AI.

It can categorise dolphin calls by species and comes in different versions linked to different geographical areas.

There are around 42 species of dolphin and they use hundreds of different sounds to communicate.
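The article does not detail ROCCA's internals, but the general recipe for call classifiers of this kind is to map measured whistle features to a species label. The sketch below does so with a random forest on made-up data; the feature set, species list and model choice are illustrative assumptions, not the tool's actual design.

```python
# Sketch of species classification from whistle contour features.
# All features, species and labels below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
species = ["common dolphin", "bottlenose dolphin", "striped dolphin"]
n = 300
X = np.column_stack([
    rng.uniform(5, 20, n),     # start frequency (kHz)
    rng.uniform(5, 20, n),     # end frequency (kHz)
    rng.uniform(0.1, 2.0, n),  # duration (s)
    rng.integers(0, 8, n),     # number of inflection points
])
y = rng.integers(0, len(species), n)  # placeholder labels

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
new_whistle = [[12.0, 18.5, 0.8, 3]]
print("predicted species:", species[clf.predict(new_whistle)[0]])
```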

Dec 12, 2024

Researchers develop spintronics platform for energy-efficient generative AI

Posted by in categories: information science, particle physics, quantum physics, robotics/AI

Researchers at Tohoku University and the University of California, Santa Barbara, have developed new computing hardware that utilizes a Gaussian probabilistic bit made from a stochastic spintronics device. This innovation is expected to provide an energy-efficient platform for power-hungry generative AI.

As Moore’s Law slows down, domain-specific hardware architectures—such as probabilistic computing with naturally stochastic building blocks—are gaining prominence for addressing computationally hard problems. Similar to how quantum computers are suited for problems rooted in quantum mechanics, probabilistic computers are designed to handle inherently probabilistic algorithms.

These algorithms have applications in areas like combinatorial optimization and statistical machine learning. Notably, the 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for their groundbreaking work in machine learning.
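To give a flavour of probabilistic computing, the sketch below emulates binary probabilistic bits (p-bits) in software and anneals them toward a ground state of a tiny frustrated Ising problem. This is only a toy software analogue of the general idea: the hardware described above implements Gaussian (continuous-valued) probabilistic bits in spintronic devices rather than the binary ones used here.

```python
# Software emulation of binary p-bits relaxing a small frustrated Ising problem.
import numpy as np

rng = np.random.default_rng(2)
# Antiferromagnetic couplings on a triangle: a tiny frustrated optimization instance
J = np.array([[0., -1., -1.],
              [-1., 0., -1.],
              [-1., -1., 0.]])
m = rng.choice([-1, 1], size=3)  # p-bit states

beta = 0.2
for _ in range(2000):
    beta = min(5.0, beta * 1.003)  # anneal: gradually sharpen the stochastic response
    i = rng.integers(3)
    h_i = J[i] @ m                 # local field seen by p-bit i
    # stochastic update: +1 with probability (1 + tanh(beta * h_i)) / 2
    m[i] = 1 if np.tanh(beta * h_i) > rng.uniform(-1, 1) else -1

energy = -0.5 * m @ J @ m
print("final state:", m, "energy:", energy)  # ground states of this triangle have energy -1.0
```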

Dec 11, 2024

A simulated annealing algorithm for randomizing weighted networks

Posted by in category: information science

While we have established, using rank-based methods, that the simulated annealing algorithm outperforms other randomization techniques in preserving the empirical network’s strength sequence, we have not quantified how well the different models preserve the strength distribution. The level to which the empirical strength distribution is preserved in a null network is crucial, because it ensures an accurate representation of influential graph features, such as hubs, whose importance is intricately tied to characteristics of the distribution.
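For orientation, the kind of strength-preserving randomization referred to here can be pictured as follows: permute the network's edge weights, then repeatedly swap pairs of weights, accepting swaps that shrink the mismatch with the empirical strength sequence (and occasionally uphill moves at a decaying temperature). The code below is a generic sketch of that idea, not the authors' exact objective function, topology-randomization step or annealing schedule.

```python
# Generic sketch of strength-preserving weight randomization via simulated annealing.
import numpy as np

def strengths(W):
    return W.sum(axis=1)

def sa_randomize(W, n_iter=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(W.shape[0], k=1)
    weights = W[iu].copy()
    rng.shuffle(weights)                       # start from a random weight permutation
    target = strengths(W)

    def rebuild(w):
        A = np.zeros_like(W)
        A[iu] = w
        return A + A.T

    cost, t = np.abs(strengths(rebuild(weights)) - target).sum(), t0
    for _ in range(n_iter):
        i, j = rng.integers(len(weights), size=2)
        weights[i], weights[j] = weights[j], weights[i]
        new_cost = np.abs(strengths(rebuild(weights)) - target).sum()
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            cost = new_cost                    # accept the swap
        else:
            weights[i], weights[j] = weights[j], weights[i]  # revert
        t *= cooling
    return rebuild(weights), cost

W = np.random.default_rng(1).random((20, 20))
W = np.triu(W, 1); W = W + W.T                 # toy weighted, undirected network
W_null, final_cost = sa_randomize(W)
print("final strength-sequence mismatch:", round(final_cost, 3))
```

A production implementation would update the cost incrementally (only the two swapped edges change any node's strength) rather than rebuilding the matrix at every step.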

To assess the goodness of fit between the strength distributions of the empirical and the randomized structural networks, we superimpose their cumulative distribution functions (Fig. 2b and Supplementary Fig. 8). Across all datasets, the curves produced via simulated annealing show the best match to the empirical strength cumulative distribution function, with almost perfect superposition. Furthermore, the curves obtained using the Rubinov–Sporns and the Maslov–Sneppen algorithms show considerably more variability across null networks, as shown by their wider spread, recapitulating previously observed patterns of underestimation and overestimation across datasets (see ‘Null model calibration’ section in Supplementary Information).

To confirm these observations quantitatively, we compute Kolmogorov–Smirnov test statistics between the cumulative strength distributions of the empirical and each randomized network, measuring the maximum distance between them (Fig. 2b and Supplementary Fig. 8). Across all datasets, the simulated annealing algorithm outperforms the other two null models, with significantly lower Kolmogorov–Smirnov statistics (P ≈ 0, CLES of 100% for all two-tailed Wilcoxon–Mann–Whitney two-sample rank-sum tests). Furthermore, in the HCP dataset and the higher-resolution Lausanne network, the Rubinov–Sporns algorithm generated cumulative strength distributions with slightly worse correspondence to the empirical distribution than the cumulative strength distributions yielded by the Maslov–Sneppen algorithm (LAU, high resolution: P < 10−176, CLES of 61.58%; HCP: P ≈ 0, CLES of 100% for all empirical networks, two-tailed Wilcoxon–Mann–Whitney two-sample rank-sum test).
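The comparison described above can be reproduced in miniature with SciPy's two-sample Kolmogorov–Smirnov test; the arrays below are synthetic stand-ins for the empirical and null strength distributions, not the connectome data used in the study.

```python
# Two-sample KS statistic between an "empirical" and a "null" strength distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
empirical_strengths = rng.lognormal(mean=1.0, sigma=0.5, size=400)  # placeholder data
null_strengths = rng.lognormal(mean=1.0, sigma=0.6, size=400)       # placeholder data

ks_stat, p_value = ks_2samp(empirical_strengths, null_strengths)
print(f"KS statistic (max CDF distance): {ks_stat:.3f}, p = {p_value:.3g}")
```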

As an illustration, we consider whether the nulls generated by different algorithms recapitulate fundamental characteristics associated with the empirical strength distribution. Namely, we focus on the heavy tailedness of the strength distribution (that is, does the null network also have a heavy-tailed strength distribution, suggesting the presence of hubs?) and the spatial location of high-strength hub nodes. We assess heavy tailedness and identify hubs using the nonparametric procedure outlined in refs. 73,74 (see Methods for more details).

Dec 11, 2024

Quantum computing’s next step: New algorithm boosts multitasking

Posted by in categories: computing, information science, quantum physics

Quantum computers differ fundamentally from classical ones. Instead of using bits (0s and 1s), they employ “qubits,” which can exist in multiple states simultaneously due to quantum phenomena like superposition and entanglement.

For a quantum computer to simulate dynamic processes or process data, among other essential tasks, it must translate complex input data into “quantum data” that it can understand. This process is known as quantum compilation.

Essentially, quantum compilation “programs” the quantum computer by converting a particular goal into an executable sequence. Just as a GPS app converts your desired destination into a sequence of actionable steps you can follow, quantum compilation translates a high-level goal into a precise sequence of quantum operations that the quantum computer can execute.
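As a minimal illustration of what compilation means at the single-qubit level, the sketch below numerically searches for rotation angles a, b, c such that Rz(a)·Ry(b)·Rz(c) reproduces a target gate (here the Hadamard) up to a global phase. Real quantum compilers handle many qubits, hardware-native gate sets and noise; the target gate and the optimizer used here are arbitrary choices for illustration.

```python
# Toy "compiler": find rotation angles whose product matches a target single-qubit gate.
import numpy as np
from scipy.optimize import minimize

def rz(t): return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])
def ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # target: Hadamard gate

def infidelity(angles):
    a, b, c = angles
    U = rz(a) @ ry(b) @ rz(c)
    return 1 - abs(np.trace(U.conj().T @ H)) / 2   # 0 when U equals H up to a global phase

result = minimize(infidelity, x0=[0.1, 0.1, 0.1], method="Nelder-Mead")
print("compiled angles (rad):", np.round(result.x, 3), " infidelity:", round(result.fun, 6))
```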

Dec 11, 2024

Forget Black Holes — White Holes Would Break Your Puny Brain

Posted by in categories: cosmology, evolution, information science, neuroscience, singularity

Black holes have long fascinated scientists, known for their ability to trap anything that crosses their event horizon. But what if there were a counterpart to black holes? Enter the white hole—a theoretical singularity where nothing can enter, but energy and matter are expelled with immense force.

First proposed in the 1970s, white holes are essentially black holes in reverse. They rely on the same equations of general relativity but with time flowing in the opposite direction. While a black hole pulls matter in and lets nothing escape, a white hole would repel matter, releasing high-energy radiation and light.
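For readers who want to see what “the same equations with time reversed” looks like in symbols: the Schwarzschild line element below (the standard black-hole solution outside the horizon) is unchanged when t is replaced by −t, which is why the time reverse of a black hole is an equally valid solution of Einstein's equations.

```latex
% Schwarzschild line element; dOmega^2 is the usual angular part (dtheta^2 + sin^2(theta) dphi^2).
% It is invariant under t -> -t, so reversing time yields another valid solution: the white hole.
\[
  ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2
         + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
         + r^2\, d\Omega^2
\]
```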

Despite their intriguing properties, white holes face significant scientific challenges. The laws of thermodynamics, particularly entropy, make it improbable for matter to move backward in time, as white holes would require. Additionally, introducing a singularity into the Universe without a preceding collapse defies current understanding of cosmic evolution.

Dec 11, 2024

Leaner Large Language Models could enable Efficient Local Use on Phones and Laptops

Posted by in categories: computing, engineering, information science, mobile phones

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server—a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs. Their findings are published on the arXiv preprint server.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.
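The sketch below shows the two ingredients the article points to, applied to one toy weight matrix: removing redundancy with a low-rank (truncated SVD) approximation and reducing precision by quantizing the factors to 8-bit integers. The rank, bit width and random stand-in matrix are arbitrary assumptions here (a random matrix is far less compressible than real trained weights), and the published method is considerably more refined.

```python
# Low-rank approximation + int8 quantization of a toy "weight matrix".
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((512, 512)).astype(np.float32)  # stand-in for an LLM weight matrix

# 1) Trim redundancy: keep only the top-k singular directions
k = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
L, R = U[:, :k] * S[:k], Vt[:k]                          # W is approximated by L @ R

# 2) Reduce precision: quantize each factor to int8 with a per-matrix scale
def quantize(M):
    scale = np.abs(M).max() / 127
    return np.round(M / scale).astype(np.int8), scale

(Lq, ls), (Rq, rs) = quantize(L), quantize(R)
W_hat = (Lq.astype(np.float32) * ls) @ (Rq.astype(np.float32) * rs)

rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
orig_bytes, comp_bytes = W.size * 4, Lq.size + Rq.size
print(f"relative error: {rel_err:.3f}, compression: {orig_bytes / comp_bytes:.1f}x")
```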

Dec 11, 2024

Google DeepMind’s Breakthrough “AlphaQubit” Closing in on the Holy Grail of Quantum Computing

Posted by in categories: information science, quantum physics, robotics/AI

The dream of building a practical, fault-tolerant quantum computer has taken a significant step forward.

In a breakthrough study recently published in Nature, researchers from Google DeepMind and Google Quantum AI said they have developed an AI-based decoder, AlphaQubit, which drastically improves the accuracy of quantum error correction—a critical challenge in quantum computing.

“Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers,” researchers wrote.
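To make the word “decoder” concrete, the sketch below implements the simplest possible one: a fixed lookup rule for the three-qubit repetition code that maps a measured syndrome to a correction, suppressing the logical error rate from p to roughly 3p². AlphaQubit plays the analogous role for the far harder surface code, but learns the syndrome-to-correction mapping from data with a neural network rather than using a hand-designed rule like this one.

```python
# Hand-built decoder for the 3-qubit repetition code (bit-flip errors only).
import numpy as np

def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])  # parity checks on pairs (0,1) and (1,2)

# Most likely single-qubit flip for each syndrome
correction = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

rng = np.random.default_rng(5)
p, trials, failures = 0.05, 10000, 0
for _ in range(trials):
    bits = np.array([0, 0, 0])                 # encoded logical zero
    bits ^= (rng.random(3) < p).astype(int)    # each physical qubit flips with probability p
    fix = correction[syndrome(bits)]
    if fix is not None:
        bits[fix] ^= 1
    failures += int(bits.sum() > 1)            # majority readout no longer zero
print("logical error rate after decoding:", failures / trials)  # about 3*p**2, well below p
```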

Dec 9, 2024

A new way to create realistic 3D shapes using generative AI

Posted by in categories: information science, media & arts, robotics/AI, virtual reality

Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.

While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.
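The sketch below shows the shape of a Score Distillation Sampling update, the mechanism this family of methods builds on: render an image from the current 3D parameters, add noise, ask a 2D diffusion model to predict that noise given the text prompt, and backpropagate the prediction error through the renderer only. The renderer and diffusion model here are trivial stubs standing in for real networks, so the loop runs but produces nothing meaningful.

```python
# Schematic Score Distillation Sampling (SDS) loop with stub renderer and diffusion model.
import torch

def render(theta):                  # stub differentiable "renderer"
    return torch.tanh(theta)        # stands in for rasterizing a 3D scene to an image

def diffusion_eps(x_t, t, prompt):  # stub pretrained 2D diffusion model
    return 0.1 * x_t                # stands in for a text-conditioned noise predictor

theta = torch.randn(3, 64, 64, requires_grad=True)  # toy 3D scene parameters
opt = torch.optim.Adam([theta], lr=1e-2)

for _ in range(100):
    t = torch.randint(1, 1000, ())                     # random diffusion timestep
    alpha = 1 - t / 1000                               # toy noise schedule
    eps = torch.randn_like(theta)
    x = render(theta)
    x_t = alpha.sqrt() * x + (1 - alpha).sqrt() * eps  # noised rendering
    eps_hat = diffusion_eps(x_t, t, "a 3D chair")      # hypothetical prompt
    # SDS: push theta using (eps_hat - eps), backpropagating through the renderer only
    opt.zero_grad()
    x.backward(gradient=(eps_hat - eps).detach())
    opt.step()
print("updated parameter norm:", theta.norm().item())
```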


Dec 9, 2024

Next-Generation Size Selection for Optimized Long-Read Sequencing Workflow

Posted by in categories: biotech/medical, health, information science

All DNA is prone to fragmentation, whether it is derived from a biological matrix or created during gene synthesis; thus, any DNA sample will contain a range of fragment sizes. To fully exploit the benefits of long-read sequencing, it is necessary to remove these shorter fragments, which might otherwise be sequenced preferentially.

DNA size selection can exclude short fragments, maximizing data yields by ensuring that those fragments with the most informational content are not blocked from accessing detection centers (for example, ZMWs) by shorter DNA fragments.

Next-generation size-selection solutions

Starting with clean, appropriate-length fragments for HiFi reads can accelerate research by reducing the computation and data processing time needed post-sequencing. Ranger Technology from Yourgene Health is a patent-protected process for automating electrophoresis-based DNA analysis and size selection. Its fluorescence machine vision system and image analysis algorithms provide real-time interpretation of the DNA separation process.
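Physical size selection happens before sequencing, but the underlying idea is easy to picture in code: keep only fragments above a length cutoff so that short material cannot crowd out the long, information-rich reads. The snippet below is an in-silico analogue with an arbitrary threshold and toy fragments, not a model of the electrophoresis-based process described above.

```python
# In-silico analogue of size selection: drop fragments shorter than a chosen cutoff.
def filter_by_length(fragments, min_len=10000):
    """Keep only fragments at least min_len bases long."""
    return [f for f in fragments if len(f) >= min_len]

# toy fragments (HiFi libraries typically target tens of kilobases)
fragments = ["A" * 2000, "A" * 15000, "A" * 50000]
kept = filter_by_length(fragments)
print(f"kept {len(kept)} of {len(fragments)} fragments")
```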
