
Making predictions is never easy, but it is widely agreed that the advent of quantum computers will reshape cryptography.

Thirteen, 53, and 433. Those are the sizes of today’s quantum computers, measured in qubits.



The problems used for cryptography are so complex for present-day algorithms and computers that the information exchange remains secure for all practical purposes: solving the underlying problem and then breaking the protocol would take a ridiculous number of years. The most paradigmatic example of this approach is the RSA protocol (named for its inventors Ron Rivest, Adi Shamir, and Leonard Adleman), which today secures much of our information transmission.
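As a concrete illustration, here is a minimal, deliberately insecure sketch of textbook RSA in Python. The toy primes and the absence of padding are for exposition only; real deployments use moduli of 2048 bits or more and padding schemes such as OAEP.

```python
# Textbook RSA with toy numbers -- illustrative only, NOT secure.
from math import gcd

p, q = 61, 53                # toy primes; real keys use ~1024-bit primes
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # Euler's totient of n

e = 17                       # public exponent, must be coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message
```

The security of the scheme rests on how hard it is to recover p and q from n; that factoring problem is exactly what a sufficiently large quantum computer running Shor’s algorithm could solve efficiently.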


Classical machine learning (ML) algorithms have proven to be powerful tools for a wide range of tasks, including image and speech recognition, natural language processing (NLP) and predictive modeling. However, classical algorithms are limited by the constraints of classical computing and can struggle to process large and complex datasets or to achieve high levels of accuracy and precision.

Enter quantum machine learning (QML).

This might be important: it may not be over for metformin just yet, as a mouse study showed that rapamycin and metformin, taken together, offset each other’s side effects.


If you are a non-diabetic who takes metformin for longevity, I highly recommend you stop immediately. Hear me out, and at the end of the video I’ll share what to do instead.


Scientists at the University of Cambridge have successfully trialed an artificial pancreas for use by patients living with type 2 diabetes. The device – powered by an algorithm developed at the University of Cambridge – doubled the amount of time patients were in the target range for glucose compared to standard treatment and halved the time spent experiencing high glucose levels.

Around 415 million people worldwide are estimated to be living with type 2 diabetes, which costs around $760 billion in annual global health expenditure. According to Diabetes UK, more than 4.9 million people have diabetes in the UK alone, of whom 90% have type 2 diabetes, and this is estimated to cost the NHS £10 billion per year.

“Many people with type 2 diabetes struggle to manage their blood sugar levels using the currently available treatments, such as insulin injections. The artificial pancreas can provide a safe and effective approach to help them, and the technology is simple to use and can be implemented safely at home.”

The common idea that our creativity is what makes us uniquely human has shaped society, but strides of progress in generative artificial intelligence call this very notion into question. Generative AI is an emerging field that involves creating original content or data using machine learning algorithms.

As we think about a future where humans and AI partner in iterative creative cycles, we consider how generative AI could impact current businesses and possibly create new ones. Until recently, machines were relegated to analytical and cognitive roles, but today algorithms are getting better at generating original content. These technologies are iterative in principle: each model is built on top of the last, and each new iteration improves the algorithm and expands the potential for discovery.

The technology presents itself as a more refined and mature breed of AI, one that has sent investors into a frenzy, and amid all this a clear market leader has emerged: OpenAI. Its flagship products, ChatGPT and DALL-E, proved to be industry disruptors and brought generative AI tools to the masses. DALL-E allows people to generate and edit photorealistic images simply by describing what they want to see, while ChatGPT does the same through a text medium.
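For a sense of how accessible these tools are, here is a short sketch against OpenAI’s API, assuming the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the prompts are made up for illustration.

```python
# Minimal calls to ChatGPT and DALL-E via the legacy openai-python interface.
import openai  # pip install "openai<1.0"; reads OPENAI_API_KEY from the env

# Text generation with ChatGPT.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Explain quantum computing in one sentence."}],
)
print(chat["choices"][0]["message"]["content"])

# Image generation with DALL-E.
image = openai.Image.create(
    prompt="a photorealistic glass chess set on a beach at sunset",
    n=1,
    size="512x512",
)
print(image["data"][0]["url"])  # URL of the generated image
```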

The early 20th century saw the advent of quantum mechanics to describe the properties of small particles, such as electrons or atoms. Schrödinger’s equation in quantum mechanics can successfully predict the electronic structure of atoms or molecules. However, the “duality” of matter, referring to the dual “particle” and “wave” nature of electrons, remained a controversial issue. Physicists use a complex wavefunction to represent the wave nature of an electron.

“Complex” numbers are those that have both “real” and “imaginary” parts, whose relative proportion determines the “phase.” However, all directly measurable quantities must be “real.” This leads to the following challenge: when the electron hits a detector, the “complex” phase information of the wavefunction disappears, leaving only the square of the amplitude of the wavefunction (a “real” value) to be recorded. This means that electrons are detected only as particles, which makes it difficult to explain their dual properties in atoms.
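In standard textbook notation (not specific to this article), writing the wavefunction in polar form makes explicit what the detector discards:

```latex
\psi = A\, e^{i\varphi}
\qquad\Rightarrow\qquad
P = |\psi|^{2} = \psi^{*}\psi = A^{2}
```

Here A is the real amplitude and φ the phase; the detection probability P depends only on A², so the phase drops out of any single intensity measurement.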

The ensuing century witnessed a new, evolving era of physics, namely, attosecond physics. The attosecond is a very short time scale, a billionth of a billionth of a second. “Attosecond physics opens a way to measure the phase of electrons. Achieving attosecond time resolution, electron dynamics can be observed while freezing [the atomic motion],” explains Professor Hiromichi Niikura from the Department of Applied Physics, Waseda University, Japan, who, along with Professor D. M. Villeneuve—a principal research scientist at the Joint Attosecond Science Laboratory, National Research Council, and adjunct professor at the University of Ottawa—pioneered the field of attosecond physics.

The new algorithm could render mainstream encryption powerless within years.

Chinese researchers claim to have introduced a new code-breaking algorithm that, if successful, could render mainstream encryption powerless within years rather than decades.

The team, led by Professor Long Guilu of Tsinghua University, claimed that a modest quantum computer built with currently available technology could run their algorithm, the South China Morning Post (SCMP) reported on Wednesday.

https://youtu.be/I5Xarr7pBuk

Simon Waslander is the Director of Collaboration at the CureDAO Alliance for the Acceleration of Clinical Research (https://www.curedao.org/), a community-owned platform for the precision health of the future.

CureDAO is creating an open-source platform to discover how millions of factors, like foods, drugs, and supplements affect human health, within a decentralized autonomous organization (DAO), making suffering optional through the creation of a “WordPress of health data”.

Simon is a native of the Dutch Caribbean island of Aruba. He initially chose to study medicine at the University of Groningen, but then transitioned to healthcare innovation studies at the University of Maastricht, where he wrote his master’s thesis on the topic of predictive healthcare algorithms.

(For information on the discussion segment on AGI, please contact www.Norn.AI)

00:00 Trailer.
05:54 Tertiary brain layer.
19:49 Curing paralysis.
23:09 How Neuralink works.
33:34 Showing probes.
44:15 Neuralink will be wayyy better than prior devices.
1:01:20 Communication is lossy.
1:14:27 Hearing Bluetooth, WiFi, Starlink.
1:22:50 Animal testing & brain proxies.
1:29:57 Controlling muscle units w/ Neuralink.

I had the privilege of speaking with James Douma, a self-described deep-learning dork. Experience and technical understanding like his are not easily found. I think you’ll find his words intriguing and insightful. This is one of several conversations James and I plan to have.

We discuss:
1. Elon’s motivations for starting Neuralink.
2. How Neuralinks will be implanted.
3. Things Neuralink will be able to do.
4. Important takeaways from the latest Show and Tell event.

In future episodes, we’ll dive more into:
- Neuralink’s architectural decisions and plans to scale.
- The spike detection, decoding algorithms, and differences among brain regions.
- Robot/ hardware/ manufacturing.
- Neural shunt concept/ future projects.

Hope you enjoy it as much as I did.

Neura Pod is a series covering topics related to Neuralink, Inc. Topics such as brain-machine interfaces, brain injuries, and artificial intelligence will be explored. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews, to make it easy to learn about Neuralink and its future.

Portable, low-field-strength MRI systems have the potential to transform neuroimaging – provided that their low spatial resolution and low signal-to-noise ratio (SNR) can be overcome. Researchers at Harvard Medical School are harnessing artificial intelligence (AI) to achieve this goal. They have developed a machine learning super-resolution algorithm that generates synthetic images with high spatial resolution from lower-resolution brain MRI scans.

The convolutional neural network (CNN) algorithm, known as LF-SynthSR, converts low-field-strength (0.064 T) T1- and T2-weighted brain MRI sequences into isotropic images with 1 mm spatial resolution and the appearance of a T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) acquisition. Describing their proof-of-concept study in Radiology, the researchers report that the synthetic images exhibited high correlation with images acquired by 1.5 T and 3.0 T MRI scanners.
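The published LF-SynthSR network is not reproduced here, but a minimal 3D convolutional sketch of the same general idea, image-to-image regression from a low-field T1/T2 pair to a synthetic MP-RAGE-like volume, might look as follows (hypothetical layer widths; assumes PyTorch and inputs already resampled to a 1 mm isotropic grid):

```python
# Toy 3D CNN for MRI synthesis -- a sketch, NOT the published LF-SynthSR model.
import torch
import torch.nn as nn

class TinySynthSR(nn.Module):
    def __init__(self, in_channels=2, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),  # synthetic MP-RAGE
        )

    def forward(self, x):
        # x: (batch, 2, D, H, W) -- T1- and T2-weighted low-field volumes
        return self.net(x)

model = TinySynthSR()
low_field = torch.randn(1, 2, 64, 64, 64)  # dummy input pair on a 1 mm grid
synthetic = model(low_field)               # -> (1, 1, 64, 64, 64)
print(synthetic.shape)
```

In practice such a network would be trained on paired low-field and high-field scans (or simulated degradations), with the high-field MP-RAGE volumes serving as regression targets.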

Morphometry, the quantitative size and shape analysis of structures in an image, is central to many neuroimaging studies. Unfortunately, most MRI analysis tools are designed for near-isotropic, high-resolution acquisitions and typically require T1-weighted images such as MP-RAGE. Their performance often drops rapidly as voxel size and anisotropy increase. As the vast majority of existing clinical MRI scans are highly anisotropic, they cannot be reliably analysed with existing tools.