Archive for the ‘computing’ category: Page 179

Sep 25, 2023

Canceling Noise: MIT’s Innovative Way To Boost Quantum Devices

Posted in categories: computing, education, engineering, quantum physics

For years, researchers have tried various ways to coax quantum bits — or qubits, the basic building blocks of quantum computers — to remain in their quantum state for ever-longer times, a key step in creating devices like quantum sensors, gyroscopes, and memories.

A team of physicists from MIT


Sep 24, 2023

Controlling Devices with Thought, No Open Brain Surgery Required

Posted in categories: biotech/medical, computing, neuroscience

Synchron has developed a Brain-Computer Interface that uses pre-existing technologies such as the stent and catheter to allow insertion into the brain without the need for open brain surgery.


Sep 24, 2023

Thinner Than the Photon Itself — Scientists Invent Smallest Known Way To Guide Light

Posted in categories: computing, finance, particle physics

Channeling light from one location to another is the backbone of our modern world. Across deep oceans and vast continents, fiber optic cables transport light containing data ranging from YouTube clips to banking transactions—all within fibers as thin as a strand of hair.

University of Chicago Prof. Jiwoong Park, however, wondered what would happen if you made even thinner and flatter strands—in effect, so thin that they’re actually 2D instead of 3D. What would happen to the light?

Through a series of innovative experiments, he and his team found that a sheet of glass crystal just a few atoms thick could trap and carry light. Not only that, but it was surprisingly efficient and could travel relatively long distances—up to a centimeter, which is very far in the world of light-based computing.

Sep 23, 2023

Step forward for massive DNA computer systems

Posted in categories: biotech/medical, computing, information science

The group at Shanghai Jiao Tong University has demonstrated a DNA computer system using DNA integrated circuits (DICs) that can solve quadratic equations with 30 logic gates.

Published in Nature, the system integrates multiple layers of DNA-based programmable gate arrays (DPGAs). By using generic single-stranded oligonucleotides as a uniform transmission signal, it can reliably integrate large-scale DICs with minimal leakage and high fidelity for general-purpose computing.

To control the intrinsically random collision of molecules, the team designed DNA origami registers to provide the directionality for asynchronous execution of cascaded DPGAs. This was used to assemble a DIC that can solve quadratic equations with three layers of cascade DPGAs comprising 30 logic gates with around 500 DNA strands.
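As a software analogy of that cascade (the dataflow only, not the DNA strand-displacement chemistry or the actual gate count), three layers evaluating ax² + bx + c can be sketched like this; the layer functions and wiring are purely illustrative:

```python
# Illustrative dataflow sketch of a three-layer gate cascade evaluating
# y = a*x^2 + b*x + c, mimicking how cascaded gate arrays pass
# intermediate signals forward to the next layer.

def layer1(x):
    # First layer: square the input signal.
    return x * x

def layer2(a, b, x, x_squared):
    # Second layer: scale the squared term and the linear term.
    return a * x_squared, b * x

def layer3(quad_term, lin_term, c):
    # Third layer: sum all contributions.
    return quad_term + lin_term + c

def evaluate_quadratic(a, b, c, x):
    x2 = layer1(x)
    quad, lin = layer2(a, b, x, x2)
    return layer3(quad, lin, c)

print(evaluate_quadratic(1, 2, 3, x=2))  # 1*4 + 2*2 + 3 = 11
```

In the actual DIC, each layer's "wires" are oligonucleotide signals and the origami registers enforce the layer-by-layer execution order that this sequential function call order takes for granted.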

Sep 23, 2023

Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Posted in categories: computing, transportation

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve unseen new tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications due to their sheer size. For instance, serving a single 175-billion-parameter LLM requires at least 350GB of GPU memory using specialized infrastructure, not to mention that today’s state-of-the-art LLMs are composed of over 500 billion parameters. Such computational requirements are out of reach for many research teams, especially for applications that require low-latency performance.
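The 350GB figure follows directly from the parameter count, assuming 16-bit (2 bytes per parameter) weights:

```python
# Back-of-the-envelope serving-memory estimate for a 175B-parameter
# model stored in 16-bit precision (2 bytes per parameter).
params = 175e9          # 175 billion parameters
bytes_per_param = 2     # fp16/bf16 weights
total_gb = params * bytes_per_param / 1e9
print(total_gb)  # 350.0 -- GB for the weights alone, before activations
```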

To circumvent these deployment challenges, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using one of two common paradigms: fine-tuning or distillation. Fine-tuning updates a pre-trained smaller model (e.g., BERT or T5) using downstream manually-annotated data. Distillation trains the same smaller models with labels generated by a larger LLM. Unfortunately, to achieve comparable performance to LLMs, fine-tuning methods require human-generated labels, which are expensive and tedious to obtain, while distillation requires large amounts of unlabeled data, which can also be hard to collect.

In “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes”, presented at ACL 2023, we set out to tackle this trade-off between model size and training data collection cost. We introduce distilling step-by-step, a simple new mechanism that trains smaller task-specific models with much less training data than standard fine-tuning or distillation requires, while outperforming few-shot prompted LLMs. We demonstrate that distilling step-by-step enables a 770M-parameter T5 model to outperform the few-shot prompted 540B-parameter PaLM model using only 80% of the examples in a benchmark dataset, a more than 700x model size reduction achieved with much less training data than standard approaches require.
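The core idea is to have the LLM emit a rationale alongside each label, then train the small model on both as separate tasks. A minimal sketch of that data setup, with invented prompt prefixes and an example rationale (the actual templates and loss weighting follow the paper):

```python
# Minimal sketch of distilling step-by-step's multi-task data setup:
# each training input yields two (input, target) pairs -- one for
# label prediction, one for rationale generation -- distinguished by
# task prefixes. Prefix strings and the rationale text are illustrative.

def make_multitask_examples(question, llm_label, llm_rationale):
    """Return the two (input, target) pairs used for multi-task training."""
    return [
        ("[label] " + question, llm_label),          # label-prediction task
        ("[rationale] " + question, llm_rationale),  # rationale-generation task
    ]

def multitask_loss(label_loss, rationale_loss, lam=0.5):
    # Combined objective: a weighted sum of the two per-task losses.
    return (1 - lam) * label_loss + lam * rationale_loss

examples = make_multitask_examples(
    "Jesse has 21 apples and gives away 9. How many remain?",
    "12",
    "21 - 9 = 12, so 12 apples remain.",
)
print(examples[0][0])            # [label] Jesse has 21 apples ...
print(multitask_loss(2.0, 4.0))  # 3.0
```

At inference time only the label task is used, so the rationale supervision costs nothing at serving time.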

Sep 23, 2023

Unlocking Battery Mysteries: X-Ray “Computer Vision” Reveals Unprecedented Physical and Chemical Details

Posted in categories: biological, chemistry, computing, nanotechnology, physics

It lets researchers extract pixel-by-pixel information at the nanoscale.


Sep 23, 2023

Unique New Material Could Generate More Computing Power and Memory Storage While Using Significantly Less Energy

Posted in categories: biotech/medical, chemistry, computing

For the first time, a team from the University of Minnesota Twin Cities has synthesized a thin film of a unique topological semimetal material that has the potential to generate more computing power and memory storage while using significantly less energy. Additionally, the team’s close examination of the material yielded crucial insights into the physics behind its unique properties.

The study was recently published in the journal Nature Communications.


Sep 22, 2023

Mark Zuckerberg and Priscilla Chan announced they’re building a computing system to help eliminate human disease by 2100, but costs may be hefty

Posted in categories: biotech/medical, computing

“They’re not announcing, ‘We have created a model that does a particular thing.’ Instead, they’re saying, ‘We are planning to create a resource that is going to be available for biologists to create new models,’” Carpenter said.

The Chan Zuckerberg Initiative, the couple’s LLC, told The Register that it plans to have the system running by 2024. The company also declined to tell The Register how much it will have to spend on the project.

It could be a hefty bill, considering that the computer parts it wants to use are in high demand and low supply, The Register reported.

Sep 22, 2023

SLAC fires up the world’s most powerful X-ray laser: LCLS-II ushers in a new era of science

Posted in categories: biological, chemistry, computing, quantum physics, science, sustainability

The newly upgraded Linac Coherent Light Source (LCLS) X-ray free-electron laser (XFEL) at the Department of Energy’s SLAC National Accelerator Laboratory successfully produced its first X-rays, and researchers around the world are already lined up to kick off an ambitious science program.

The upgrade, called LCLS-II, creates unparalleled capabilities that will usher in a new era in research with X-rays.

Scientists will be able to examine the details of quantum materials with unprecedented resolution to drive new forms of computing and communications; reveal unpredictable and fleeting chemical events to teach us how to create more sustainable industries; study how biological molecules carry out life’s functions to develop new types of pharmaceuticals; and study the world on the fastest timescales to open up entirely new fields of scientific investigation.

Sep 22, 2023

Intel says there will be one trillion transistors on chips by 2030

Posted in categories: computing, materials

(Note: this article is from 2022.)


During the IEEE International Electron Devices Meeting (or IEDM), Intel claimed that by 2030, there would be circuits with transistor counts of a trillion, roughly ten times the number of transistors currently available on modern CPUs.

At the meeting, Intel’s Components Research Group laid down its prediction for the future of circuit manufacturing (via Sweclockers) and how new packaging technologies and materials will allow chipmakers to build chips with 10x the transistor density, in keeping with Moore’s Law.
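A quick check of what that prediction implies, taking the article's "roughly ten times" at face value over the 2022 to 2030 window:

```python
import math

# What a 10x transistor-count increase from 2022 to 2030 implies for
# the doubling cadence (Moore's law is classically ~2 years per doubling).
growth_factor = 10                    # "roughly ten times" current counts
years = 2030 - 2022                   # 8-year window
doublings = math.log2(growth_factor)  # ~3.32 doublings needed
doubling_time = years / doublings     # years per doubling
print(round(doubling_time, 2))  # 2.41 -- close to the classic 2-year cadence
```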
