“You should have a few good years ahead of you but I wouldn’t hold my Bitcoin,” Peronnin said, laughing. “They need to fork [move to a stronger blockchain] by 2030, basically. Quantum computers will be ready to be a threat a bit later than that,” he said.
Quantum computing doesn’t just threaten Bitcoin, of course, but all banking encryption, and in all of these cases companies are likely already developing quantum-resistant tools to upgrade their existing security systems.
Defensive security algorithms are improving, Peronnin said, so it’s not certain when the blockchain will become vulnerable to a quantum attack. But “the threshold for such an event is coming closer to us year by year,” he said.
If you read my last post, you may have had the same reaction as the legendary fintech blogger Chris Skinner. On the post entitled “Fintech’s New Power Couple: AI and Trust,” he politely corrected me with the comment, “AI, trust and DLT sir.”
As soon as I read his input I knew he was right. I had to write a follow-up post to correct my glaring omission. As there are three forces converging here rather than two, I will update the title to make it both more contemporary and more accurate at the same time…
Fintech’s New Power Throuple is the convergence of AI, Trust, and Distributed Ledger Technology (DLT).
If I drew a diagram of the relationships between the three factors, I would put it in the form of a triangle. From my viewpoint, Trust would hold the uppermost position, with Blockchain and Artificial Intelligence occupying the two lower positions. They are, in a sense, the technology layer that makes Trust possible.
Artificial general intelligence (AGI) could be humanity’s greatest invention… or our biggest risk.
In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.
We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language
📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.
A debate/discussion on ASI (artificial superintelligence) between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky. Sharing similar long-term goals, they nevertheless reach opposite conclusions about the best strategy.
“What are the best strategies for addressing risks from artificial superintelligence? In this 4-hour conversation, Eliezer Yudkowsky and Mark Miller discuss their cruxes for disagreement. While Eliezer advocates an international treaty that bans anyone from building it, Mark argues that such a pause would make an ASI singleton more likely – which he sees as the greatest danger.”
They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.
Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.
Cybersecurity researchers have discovered two new malicious packages on the npm registry that use Ethereum smart contracts to carry out malicious actions on compromised systems, another sign that threat actors are constantly on the lookout for new ways to distribute malware and fly under the radar.
“The two npm packages abused smart contracts to conceal malicious commands that installed downloader malware on compromised systems,” ReversingLabs researcher Lucija Valentić said in a report shared with The Hacker News.
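To make the concealment technique concrete, here is a minimal Python sketch (using web3.py) of the general pattern the report describes: rather than hard-coding a payload location, a package reads a string, such as a download URL, from a public getter on an Ethereum contract. The RPC endpoint, contract address, ABI, and function name below are hypothetical placeholders for illustration, not details taken from the actual packages.

```python
# Illustrative only: how a package could fetch a hidden string from an Ethereum
# smart contract. All addresses, endpoints and function names are hypothetical.
from web3 import Web3

RPC_URL = "https://eth.example-rpc.invalid"  # placeholder RPC endpoint
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [{
    "inputs": [],
    "name": "getPayloadUrl",  # hypothetical getter name
    "outputs": [{"internalType": "string", "name": "", "type": "string"}],
    "stateMutability": "view",
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT_ADDRESS), abi=ABI)

# A read-only call like this creates no on-chain transaction and looks like
# ordinary dapp traffic, which is what makes the pattern easy to miss in review.
next_stage_url = contract.functions.getPayloadUrl().call()
print(next_stage_url)
```

In general, because the hidden string lives in contract storage rather than in the package itself, operators of a scheme like this can change it at any time without publishing a new package version.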
Researchers have developed QuantumShield-BC, a blockchain framework designed to resist attacks from quantum computers. It integrates post-quantum cryptography (PQC) using algorithms such as Dilithium and SPHINCS+, quantum key distribution (QKD), and quantum Byzantine fault tolerance (Q-BFT), which leverages quantum random number generation (QRNG) for unbiased leader selection. The framework was tested on a controlled testbed of up to 100 nodes, demonstrating resistance to simulated quantum attacks and achieving fairness through QRNG-based consensus. An ablation study confirmed the contribution of each quantum component to overall security, although the QKD implementation was simulated and scalability to larger networks requires further investigation.
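Two of the ingredients are easy to illustrate with a short, hedged sketch: post-quantum block signatures and QRNG-based leader selection. The code below is not the authors’ implementation, just a minimal approximation of those two ideas. It assumes the liboqs-python bindings (`oqs`) are installed and that the build exposes a `Dilithium2` identifier, and it stands in for a real quantum random number generator with `os.urandom`.

```python
# Minimal sketch of two QuantumShield-BC ideas: post-quantum block signatures
# and QRNG-based leader selection. Not the paper's code; names are illustrative.
import os
import oqs  # liboqs-python bindings (assumed installed)

SIG_ALG = "Dilithium2"  # exact identifier depends on the installed liboqs build

def qrng_bytes(n: int = 32) -> bytes:
    """Stand-in for a quantum random number generator (os.urandom used here)."""
    return os.urandom(n)

def select_leader(validators: list[str]) -> str:
    """Pick the consensus leader for a round from unbiased random output."""
    seed = int.from_bytes(qrng_bytes(), "big")
    return validators[seed % len(validators)]

block_header = b"height=42|prev=00ab...|txroot=9f3c..."
validators = ["node-1", "node-2", "node-3", "node-4"]

# Sign the block header with a post-quantum scheme and verify it, as a peer would.
with oqs.Signature(SIG_ALG) as signer, oqs.Signature(SIG_ALG) as verifier:
    public_key = signer.generate_keypair()
    signature = signer.sign(block_header)
    assert verifier.verify(block_header, signature, public_key)

print("round leader:", select_leader(validators))
```

The fairness claim rests entirely on the randomness source: if the seed is genuinely unpredictable, no validator can bias which node is chosen to lead the round.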
Abacus Market, the largest Western darknet marketplace supporting Bitcoin payments, has shut down its public infrastructure in a move suspected to be an exit scam.
Exit scams occur when the operator of a marketplace decides to vanish with the money they hold in escrow for various transactions between platform users.
Blockchain intelligence firm TRM Labs reports that Abacus’s abrupt shutdown has all the hallmarks of either an exit scam or a covert law enforcement operation dismantling the marketplace.