
The End of Programming

The end of classical Computer Science is coming, and most of us are dinosaurs waiting for the meteor to hit.

I came of age in the 1980s, programming personal computers like the Commodore VIC-20 and Apple ][e at home. Going on to study Computer Science in college and ultimately getting a PhD at Berkeley, the bulk of my professional training was rooted in what I will call “classical” CS: programming, algorithms, data structures, systems, programming languages. In Classical Computer Science, the ultimate goal is to reduce an idea to a program written by a human — source code in a language like Java or C++ or Python. Every idea in Classical CS, no matter how complex or sophisticated — from a database join algorithm to the mind-bogglingly obtuse Paxos consensus protocol — can be expressed as a human-readable, human-comprehensible program.

When I was in college in the early ’90s, we were still in the depth of the AI Winter, and AI as a field was likewise dominated by classical algorithms. My first research job at Cornell was working with Dan Huttenlocher, a leader in the field of computer vision (and now Dean of the MIT School of Computing). In Dan’s PhD-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks—it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances. Deep learning was in its infancy, not yet considered mainstream AI, let alone mainstream CS.

Achieving greater entanglement: Milestones on the path to useful quantum technologies

Tiny particles can remain interconnected even when they are thousands of kilometers apart—Albert Einstein called this “spooky action at a distance.” Something the laws of classical physics cannot explain is a fundamental part of quantum physics. Entanglement like this can occur between multiple quantum particles, meaning that certain properties of the particles are intimately linked with each other.
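For illustration (not from the article itself): the simplest entangled state is a Bell pair of two qubits, where neither qubit has a definite value on its own, yet measuring one instantly fixes the outcome for the other:

```latex
% Bell state: a maximally entangled two-qubit state.
% Measuring qubit A as 0 (or 1) guarantees the same outcome for
% qubit B, however far apart the two qubits are.
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}
  \left( |0\rangle_{A} |0\rangle_{B} + |1\rangle_{A} |1\rangle_{B} \right)
```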

Entangled systems containing multiple particles offer significant benefits for implementing quantum algorithms, which have potential applications in communications and quantum computing. Researchers from Paderborn University have been working with colleagues from Ulm University to develop the first programmable optical quantum memory. The study was published as an “Editors’ Suggestion” in the journal Physical Review Letters.

Discovering novel algorithms with AlphaTensor

Algorithms have helped mathematicians perform fundamental operations for thousands of years. The ancient Egyptians created an algorithm to multiply two numbers without requiring a multiplication table, and Greek mathematician Euclid described an algorithm to compute the greatest common divisor, which is still in use today.
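Euclid's algorithm is compact enough to state in a few lines. A minimal Python sketch (illustrative, not from the article):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the last nonzero
    value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21, since 252 = 2^2*3^2*7 and 105 = 3*5*7
```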

During the Islamic Golden Age, Persian mathematician Muhammad ibn Musa al-Khwarizmi designed new algorithms to solve linear and quadratic equations. In fact, al-Khwarizmi’s name, translated into Latin as Algoritmi, led to the term algorithm. But despite our familiarity with algorithms today – used throughout society from classroom algebra to cutting-edge scientific research – the process of discovering new algorithms is incredibly difficult, and an example of the amazing reasoning abilities of the human mind.

In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.
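For context (the background is well established, though the code below is an illustrative sketch rather than anything from the paper): the 50-year-old question dates to Volker Strassen's 1969 discovery that two 2×2 matrices can be multiplied with seven scalar multiplications instead of the obvious eight, which is exactly the kind of multiplication-saving scheme AlphaTensor searches for automatically.

```python
def strassen_2x2(A, B):
    """Strassen's 1969 scheme: multiply two 2x2 matrices with 7 scalar
    multiplications instead of the schoolbook 8. Applied recursively to
    matrix blocks, this cuts the asymptotic cost of matrix multiplication
    below O(n^3)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Sanity check against the schoolbook result [[19, 22], [43, 50]]:
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

AlphaTensor's contribution is discovering decompositions like this automatically for larger matrix sizes, including, per the paper, some that improve on the best previously known schemes.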

How Quantum Physics Leads to Decrypting Common Algorithms

The rise of quantum computing and its implications for current encryption standards are well known. But why exactly should quantum computers be especially adept at breaking encryption? The answer is a nifty bit of mathematical juggling called Shor’s algorithm. That still leaves the question: what does this algorithm do that makes quantum computers so much better at cracking encryption? In this video, YouTuber minutephysics explains it in his traditional whiteboard cartoon style.

“Quantum computation has the potential to make it super, super easy to access encrypted data — like having a lightsaber you can use to cut through any lock or barrier, no matter how strong,” minutephysics says. “Shor’s algorithm is that lightsaber.”

According to the video, Shor’s algorithm builds on the fact that for any two numbers that share no common factor, repeatedly multiplying one of them by itself will eventually produce one more than a multiple of the other. So you guess a number, find the power at which that happens, and use half that power to construct two numbers, one more and one less than the guess raised to the half power, that share factors with the number you want to crack. That would unlock the encryption (specifically RSA here, but it works on some other types) because we would then have both factors.
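A classical sketch of that reduction may help (illustrative only; `shor_classical` is a hypothetical helper name, and the brute-force period search below is precisely the step that is infeasible classically and that a quantum computer accelerates):

```python
from math import gcd
from random import randrange

def shor_classical(N: int) -> int:
    """Classical illustration of the factoring reduction behind Shor's
    algorithm, for N an odd composite that is not a prime power.
    Guess g, find the period p with g**p = 1 (mod N), then
    gcd(g**(p//2) - 1, N) yields a nontrivial factor of N."""
    while True:
        g = randrange(2, N)
        d = gcd(g, N)
        if d > 1:
            return d               # lucky guess: g already shares a factor
        # Brute-force period search: smallest p with g**p = 1 (mod N).
        # This loop is the part a quantum computer does exponentially faster.
        p, x = 1, g % N
        while x != 1:
            x = (x * g) % N
            p += 1
        if p % 2 == 1:
            continue               # odd period: try another guess
        y = pow(g, p // 2, N)      # g**(p/2) mod N
        if y == N - 1:
            continue               # y = -1 (mod N): unusable, guess again
        return gcd(y - 1, N)       # (y-1)(y+1) = 0 (mod N), so this is
                                   # a nontrivial factor of N

print(shor_classical(15))  # -> 3 or 5
```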

As ransomware attacks increase, new algorithm may help prevent power blackouts

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

Seeking Stability in a Relativistic Fluid

A fluid dynamics theory that violates causality would always generate paradoxical instabilities—a result that could guide the search for a theory for relativistic fluids.

The theory of fluid dynamics has been successful in many areas of fundamental and applied sciences, describing fluids from dilute gases, such as air, to liquids, such as water. For most nonrelativistic fluids, the theory takes the form of the celebrated Navier-Stokes equation. However, fundamental problems arise when extending these equations to relativistic fluids. Such extensions typically imply paradoxes—for instance, the same thermodynamic state of a system can appear stable to one observer and unstable to another in a different frame of reference. These problems hinder the description of the dynamics of important fluid systems, such as neutron-rich matter in neutron star mergers or the quark-gluon plasma produced in heavy-ion collisions.
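For reference, the nonrelativistic equation the article refers to, here in its incompressible form with velocity field v, pressure p, constant density ρ, viscosity μ, and body force f:

```latex
% Incompressible Navier-Stokes: momentum balance plus the
% incompressibility constraint.
\rho \left( \frac{\partial \mathbf{v}}{\partial t}
          + (\mathbf{v} \cdot \nabla)\,\mathbf{v} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{v} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{v} = 0
```

Naive relativistic generalizations of the viscous terms in this equation permit signals that outrun light, and that causality violation is what the result above ties to the frame-dependent instabilities.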

Uncovering Hidden Patterns: AI Reduces a 100,000-Equation Quantum Physics Problem to Only Four Equations

Scientists trained a machine learning tool to capture the physics of electrons moving on a lattice using far fewer equations than would typically be required, all without sacrificing accuracy. A daunting quantum problem that until now required 100,000 equations has been compressed into a bite-size task of as few as four. The work could revolutionize how scientists investigate systems containing many interacting electrons. Furthermore, if it scales to other problems, the approach could aid in the design of materials with extremely valuable properties such as superconductivity or utility for clean energy generation.

How to choose the right NLP solution


For decades, enterprises have jury-rigged software designed for structured data when trying to solve unstructured, text-based data problems. Although these solutions performed poorly, there was nothing else. Recently, though, machine learning (ML) has improved significantly at understanding natural language.

Unsurprisingly, Silicon Valley is in a mad dash to build market-leading offerings for this new opportunity. Khosla Ventures thinks natural language processing (NLP) is the most important technology trend of the next five years. If the 2000s were about becoming a big data-enabled enterprise, and the 2010s were about becoming a data science-enabled enterprise — then the 2020s are about becoming a natural language-enabled enterprise.
