
In this work, we introduce a classical variational method for simulating the quantum approximate optimization algorithm (QAOA), a hybrid quantum-classical approach for solving combinatorial optimization problems with prospects of quantum speedup on near-term devices. We employ a self-contained approximate simulator based on neural-network quantum state (NQS) methods borrowed from many-body quantum physics, departing from the traditional exact simulations of this class of quantum circuits.

Owing to the method's good performance near the optimal QAOA angles, we successfully explore regions of the QAOA parameter space that were previously unreachable. We also discuss the model's limitations, namely the lower fidelity with which it reproduces quantum states far from that optimum. Because of this distinct area of applicability and its relatively low computational cost, we introduce the method as complementary to established numerical methods for the classical simulation of quantum circuits.

Classical variational simulations of quantum algorithms provide a natural way to both benchmark and understand the limitations of near-future quantum hardware. On the algorithmic side, our approach can help answer a fundamentally open question in the field, namely whether QAOA can outperform classical optimization algorithms or quantum-inspired classical algorithms based on artificial neural networks [48,49,50].
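As a point of reference for what such an approximate simulator is being compared against, a depth-1 QAOA circuit for MaxCut is small enough to simulate exactly with a dense statevector. The sketch below does this in plain NumPy for an illustrative 4-node ring graph; the graph, depth, and angles are assumptions chosen for the example, not values taken from this work.

```python
# Exact statevector simulation of depth-1 QAOA for MaxCut on a 4-node ring.
# Illustrative example only: graph and angles are not taken from the paper.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-node ring graph
n = 4

# Diagonal MaxCut cost: C(z) = number of edges cut by bitstring z.
cost = np.array([
    sum(1 for i, j in edges if (z >> i) & 1 != (z >> j) & 1)
    for z in range(2 ** n)
], dtype=float)

def qaoa_state(gamma, beta):
    """Apply e^{-i beta B} e^{-i gamma C} to the uniform superposition."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
    psi = psi * np.exp(-1j * gamma * cost)      # cost layer (diagonal)
    c, s = np.cos(beta), -1j * np.sin(beta)     # mixer e^{-i beta X} per qubit
    for q in range(n):
        psi = psi.reshape(-1, 2, 2 ** q)
        a, b = psi[:, 0].copy(), psi[:, 1].copy()
        psi[:, 0], psi[:, 1] = c * a + s * b, s * a + c * b
        psi = psi.reshape(-1)
    return psi

def expected_cut(gamma, beta):
    """Expectation value <psi|C|psi> of the cut size."""
    psi = qaoa_state(gamma, beta)
    return float(np.real(np.vdot(psi, cost * psi)))
```

A coarse grid search over the two angles already pushes the expected cut well above the random-guess baseline of 2 edges (half of the ring's 4 edges), which is the landscape a variational simulator has to reproduce.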

3D printed rockets save on up-front tooling, enable rapid iteration, decrease part count, and facilitate radically new designs.

Thanks to Tim Ellis and everyone at Relativity Space for the tour!
https://www.relativityspace.com/
https://youtube.com/c/RelativitySpace

Special thanks to Scott Manley for the interview and advising on aerospace engineering.
Check out his channel: https://www.youtube.com/user/szyzyg




If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.

But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.

Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course.

We combined a machine learning algorithm with knowledge gleaned from hundreds of biological experiments to develop a technique that allows biomedical researchers to figure out the functions of the proteins that turn genes on and off in cells, called transcription factors. This knowledge could make it easier to develop drugs for a wide range of diseases.

Early on during the COVID-19 pandemic, scientists who worked out the genetic code of the RNA molecules of cells in the lungs and intestines found that only a small group of cells in these organs were most vulnerable to being infected by the SARS-CoV-2 virus. That allowed researchers to focus on blocking the virus’s ability to enter these cells. Our technique could make it easier for researchers to find this kind of information.

The biological knowledge we work with comes from this kind of RNA sequencing, which gives researchers a snapshot of the hundreds of thousands of RNA molecules in a cell as they are being translated into proteins. A widely praised machine learning tool, the Seurat analysis platform, has helped researchers all across the world discover new cell populations in healthy and diseased organs. This machine learning tool processes data from single-cell RNA sequencing without any information ahead of time about how these genes function and relate to each other.
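As a rough illustration of the unsupervised workflow described above (and not the Seurat pipeline itself, which is an R package), the sketch below runs a bare-bones version of it on made-up count data: library-size normalization, dimensionality reduction, and clustering with no prior labels. All data and parameters here are invented for the example.

```python
# Minimal sketch of an unsupervised single-cell analysis workflow:
# normalize counts, reduce dimensionality, cluster without labels.
# Synthetic data only; this is not the Seurat implementation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic counts: 100 cells x 50 genes, two made-up cell populations
# that differ in which half of the genes they express highly.
pop_a = rng.poisson(lam=np.r_[np.full(25, 8.0), np.full(25, 1.0)], size=(50, 50))
pop_b = rng.poisson(lam=np.r_[np.full(25, 1.0), np.full(25, 8.0)], size=(50, 50))
counts = np.vstack([pop_a, pop_b]).astype(float)

# 1. Library-size normalization and log transform.
norm = np.log1p(counts / counts.sum(axis=1, keepdims=True) * 1e4)

# 2. PCA via SVD on the centered matrix; keep the top 10 components.
centered = norm - norm.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ vt[:10].T

# 3. Plain k-means (k=2) on the PC coordinates.
def kmeans(x, k, iters=50):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(x[:, None] - centers, axis=2).argmin(axis=1)
        new = []
        for j in range(k):
            pts = x[labels == j]
            new.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new)
    return labels

labels = kmeans(pcs, 2)  # recovered cell-population assignments
```

On this toy data the two planted populations separate cleanly along the first principal component, which is the basic mechanism by which such tools surface previously unknown cell populations.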

Natural language processing continues to find its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails they crafted themselves and others generated by an AI-as-a-service platform to 200 of their colleagues. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones—by a significant margin.

No, it’s not forbidden to innovate, quite the opposite, but it’s always risky to do something different from what people are used to. Risk is the middle name of the bold, the builders of the future. Those who constantly face resistance from skeptics. Those who fail eight times and get up nine.


Fernando Pessoa’s “First you find it strange. Then you can’t get enough of it.” contained intolerable toxicity levels for Salazar’s Estado Novo (Portugal). When the level of difference increases, censorship follows. You can’t censor censorship (or can you?) when, deep down, it’s a matter of fear of difference. Yes, it’s fear! Fear of accepting/facing the unknown. Fear of change.

What do I mean by this? Well, I may seem weird or strange with the ideas and actions I take in life, but within my weirdness, there is a kind of “Eye of Agamotto” (sometimes being a curse for me)… What I see is authentic and vivid. Sooner or later, that future I glimpse passes into this reality.

Transformer-based deep learning models like GPT-3 have been getting much attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience. However, these models can fail to capture more nuanced relationships between query and document terms beyond pure semantics.

The Microsoft team of researchers developed a neural network with 135 billion parameters, the largest “universal” AI model the company has running in production. The parameter count makes this one of the most sophisticated AI models ever publicly detailed; for comparison, OpenAI’s GPT-3 natural language processing model has 175 billion parameters and remains the largest neural network built to date.

Microsoft researchers are calling their latest AI project MEB (Make Every Feature Binary). The 135-billion-parameter model is built to analyze the queries that Bing users enter. It then helps identify the most relevant pages from around the web, working together with a set of other machine learning algorithms rather than performing the task entirely on its own.
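Microsoft has not published MEB’s internals, but the “make every feature binary” idea can be sketched in miniature: each specific (query term, document term) pair becomes a single 0/1 feature in a very large sparse space, and a linear model learns one weight per feature, letting it memorize exact relationships that a purely semantic model might smooth over. The hashing scheme, feature space size, and scoring function below are illustrative assumptions, not Microsoft’s design.

```python
# Hedged sketch of a "make every feature binary" ranking signal:
# hash each (query term, document term) pair into one slot of a
# sparse binary feature vector, then score with learned weights.
# All details here are illustrative assumptions.
import zlib

DIM = 2 ** 20  # size of the binary feature space (assumed)

def binary_features(query: str, doc: str) -> set:
    """Active feature slots: one 0/1 feature per cross-term pair."""
    slots = set()
    for q in query.lower().split():
        for d in doc.lower().split():
            slots.add(zlib.crc32(f"{q}|{d}".encode()) % DIM)
    return slots

def score(weights: dict, query: str, doc: str) -> float:
    """Linear relevance score: sum of weights of the active features."""
    return sum(weights.get(i, 0.0) for i in binary_features(query, doc))
```

Because every feature is binary, scoring reduces to summing the weights of the active slots, which is what makes models with enormous parameter counts cheap to evaluate per query-document pair.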