
Quantum Computing and Blockchain: Aligning Urgency with Actual Threats

Near-term software bug risks far outweigh quantum attacks.
By Justin Thaler
Translation: Baicai Blockchain

Timelines for quantum computers related to cryptography are often exaggerated—leading to calls for urgent, comprehensive migration to post-quantum cryptography.
But these calls often ignore the costs and risks of premature migration, and overlook the very different risk profiles across cryptographic primitives:
Post-quantum encryption, despite its costs, demands immediate deployment: "Harvest-Now-Decrypt-Later" (HNDL) attacks are already underway, as sensitive data encrypted today will still be valuable when quantum computers arrive—even decades from now. The performance overhead and implementation risks of post-quantum encryption are real, but HNDL attacks leave no choice for data requiring long-term confidentiality.
Post-quantum signatures face different considerations. They are not vulnerable to HNDL attacks, and their costs and risks—larger sizes, performance overhead, immature implementations, and bugs—demand careful deliberation rather than immediate migration.
These distinctions are crucial. Misunderstanding them distorts cost-benefit analyses, causing teams to overlook more pressing security risks—such as software bugs.
The real challenge in successfully transitioning to post-quantum cryptography lies in aligning urgency with actual threats. Below, I will clarify common misconceptions about quantum threats to cryptography—covering encryption, signatures, and zero-knowledge proofs—with a particular focus on implications for blockchains.
How far along are we?
Despite high-profile claims, the likelihood of a cryptographically relevant quantum computer (CRQC) emerging in the 2020s is extremely low.
By "cryptographically relevant quantum computer," I mean a fault-tolerant, error-corrected quantum computer capable of running Shor's algorithm at sufficient scale to break secp256k1 or RSA-2048—elliptic curve cryptography or RSA—in a reasonable timeframe (e.g., within a month of continuous computation).
According to any reasonable interpretation of publicly known milestones and resource estimates, we remain far from such a machine. Companies sometimes claim CRQCs could appear before 2030 or even by 2035, but known progress does not support these assertions.
For context, across all current architectures—trapped ions, superconducting qubits, and neutral atom systems—today's quantum computing platforms are nowhere near the hundreds of thousands to millions of physical qubits required to run Shor's algorithm against RSA-2048 or secp256k1 (depending on error rates and error correction schemes).
The limiting factors go beyond mere qubit count—they include gate fidelity, qubit connectivity, and the depth of sustained error-correction circuits needed for deep quantum algorithms. While some systems now exceed 1,000 physical qubits, raw qubit numbers alone are misleading: these systems lack the qubit connectivity and gate fidelity required for cryptographically relevant computations.
Recent systems approach physical error rates where quantum error correction begins to function, but no one has demonstrated more than a few logical qubits with sustained error-correction circuit depth… let alone the thousands of high-fidelity, deep-circuit, fault-tolerant logical qubits needed to run Shor’s algorithm. The gap between proving quantum error correction feasible in principle and achieving the scale required for cryptanalysis remains vast.
In short: unless both qubit count and fidelity improve by several orders of magnitude, cryptographically relevant quantum computers remain out of reach.
Yet corporate press releases and media coverage easily create confusion. Here are some common sources of misunderstanding:
Demonstrations claiming "quantum advantage" currently target contrived tasks. These are chosen not for utility, but because they can run on existing hardware while appearing to show large quantum speedups—a nuance often obscured in announcements.
Claims of "thousands of physical qubits" refer to quantum annealers, not the gate-model machines required to run Shor’s algorithm against public-key cryptography.
Liberal use of the term "logical qubit." Physical qubits are noisy. As noted, quantum algorithms require logical qubits; Shor’s algorithm needs thousands. Using quantum error correction, one logical qubit can be implemented using many physical ones—typically hundreds to thousands, depending on error rates. But some companies have stretched this term beyond recognition. For example, a recent announcement claimed a logical qubit using a distance-2 code with just two physical qubits. This is absurd: a distance-2 code can only detect errors, not correct them. Truly fault-tolerant logical qubits suitable for cryptanalysis require hundreds to thousands of physical qubits per logical one—not two.
More generally, many quantum computing roadmaps use "logical qubit" to refer to qubits supporting only Clifford operations. These can be efficiently simulated classically and are insufficient for Shor’s algorithm, which requires thousands of error-corrected T-gates (or more generally, non-Clifford gates).
Even if a roadmap targets "thousands of logical qubits by year X," this does not imply the company expects to run Shor’s algorithm breaking classical cryptography in that same year X.
These practices severely distort public perception—even among seasoned observers—of how close we are to cryptographically relevant quantum computers.
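To make the "detect vs. correct" point about code distance concrete, here is a classical repetition-code analogy. Quantum error-correcting codes are more subtle, but the distance logic is the same: a distance-2 code can only detect a single flip, while a distance-3 code can correct one.

```python
def encode(bit, copies):
    """Repetition code: store `copies` identical copies of one bit."""
    return [bit] * copies

def detect(codeword):
    """Distance-2 behavior: we can tell an error occurred (copies disagree)."""
    return len(set(codeword)) > 1

def correct(codeword):
    """Distance-3 behavior: majority vote recovers the original bit."""
    return max(set(codeword), key=codeword.count)

# Distance 2: one flip is detectable but NOT correctable.
cw2 = encode(1, 2)
cw2[0] ^= 1                 # flip one copy
assert detect(cw2)          # disagreement observed...
# ...but [0, 1] is equally consistent with an original 0 or an original 1.

# Distance 3: one flip is correctable by majority vote.
cw3 = encode(1, 3)
cw3[0] ^= 1
assert correct(cw3) == 1
```

This is why calling a two-physical-qubit, distance-2 construction a "logical qubit" overstates matters: it can flag that something went wrong, but it cannot recover the encoded state.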
That said, some experts are genuinely excited about progress. For instance, Scott Aaronson recently wrote that, given the “current astonishing pace of hardware development,”
"I now consider it a realistic possibility that we'll have a fault-tolerant quantum computer running Shor's algorithm before the next U.S. presidential election."
But Aaronson later clarified his statement did not imply a cryptographically relevant quantum computer: he considered even a fully fault-tolerant implementation of Shor’s algorithm factoring 15 = 3 × 5 as sufficient—something faster to do with pencil and paper. The bar remains small-scale execution of Shor’s algorithm, not cryptographically relevant scale, since prior experiments factoring 15 used simplified circuits, not full, fault-tolerant Shor. And there's a reason these experiments always target factoring 15: modular arithmetic mod 15 is easy, while factoring slightly larger numbers like 21 is much harder. Thus, claims of quantum experiments factoring 21 often rely on additional hints or shortcuts.
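The structure of Shor's algorithm, and why factoring 15 is a low bar, can be sketched classically. The only quantum step is finding the multiplicative order of a random base; the sketch below simply brute-forces it, which is trivial for 15 but exponentially hard for RSA-sized moduli. Everything else is classical number theory.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    This is the step a quantum computer would do with period finding;
    classically it takes exponential time for cryptographic-size n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm for a given base a.
    Returns a nontrivial factor of n, or None if this base is unlucky."""
    if gcd(a, n) != 1:
        return gcd(a, n)        # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None             # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None             # trivial square root: retry
    return gcd(y - 1, n)

# Factoring 15 with base a = 7: order(7, 15) = 4, 7^2 = 49 ≡ 4 (mod 15),
# and gcd(4 - 1, 15) = 3.
print(shor_classical(15, 7))    # → 3
```

Mod-15 arithmetic is so simple that hand-optimized quantum circuits can shortcut most of the work, which is exactly why experimental demonstrations keep choosing it.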
In short, expectations of a CRQC capable of breaking RSA-2048 or secp256k1 within the next five years—what actually matters for practical cryptography—are unsupported by any known public progress.
Even ten years remains ambitious. Given how far we are from CRQCs, excitement about progress aligns well with timelines exceeding a decade.
What about the U.S. government setting 2035 as the deadline for full post-quantum (PQ) migration of government systems? I believe this is a reasonable timeline for completing such a massive transition. However, it is not a prediction that CRQCs will exist by then.
Where HNDL Applies (and Where It Doesn’t)
Harvest-Now-Decrypt-Later (HNDL) attacks refer to adversaries storing encrypted traffic today and decrypting it once a CRQC exists. Nation-state actors are almost certainly already archiving encrypted communications from U.S. government entities at scale, intending to decrypt them years from now, once CRQCs emerge.
This is why encryption must transition immediately—at least for any data requiring 10–50+ years of confidentiality.
But digital signatures—which all blockchains rely on—differ from encryption: there is no retrospective confidentiality to attack.
In other words, signature forgery becomes possible only once a CRQC arrives, but past signatures do not "hide" secrets like encrypted messages do. As long as you know a digital signature was generated before CRQCs existed, it cannot be forged.
This makes the transition to post-quantum digital signatures less urgent than the post-quantum transition for encryption.
Major platforms are acting accordingly: Chrome and Cloudflare have rolled out hybrid X25519+ML-KEM for Transport Layer Security (TLS) encryption.
For readability, I use "encryption scheme" here, though strictly speaking, secure communication protocols like TLS use key exchange or key encapsulation mechanisms, not public-key encryption.
Here, "hybrid" means using both a post-quantum secure scheme (i.e., ML-KEM) and the existing one (X25519), combining their security guarantees. This way, deployments can (hopefully) resist HNDL attacks via ML-KEM while retaining classical security via X25519 in case ML-KEM is later found to be insecure even against today's computers.
Apple’s iMessage has also deployed such hybrid post-quantum encryption via its PQ3 protocol, and Signal via its PQXDH and SPQR protocols.
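The combine-then-KDF idea behind these hybrid schemes can be sketched as follows. This is not the actual TLS, PQ3, or PQXDH key schedule; the shared secrets below are random stand-ins and the label strings are invented. The structure is the point: the session key is derived from both shared secrets, so it remains secure as long as either underlying scheme holds.

```python
import hashlib
import hmac
import os

def hkdf(salt, ikm, info, length=32):
    """Minimal HKDF (RFC 5869 extract-then-expand) over SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# In a real hybrid handshake these would come from ML-KEM decapsulation and
# an X25519 Diffie-Hellman exchange; here they are random placeholders.
ss_mlkem  = os.urandom(32)   # stand-in for the ML-KEM shared secret
ss_x25519 = os.urandom(32)   # stand-in for the X25519 shared secret

# Concatenate both secrets, then derive the session key from the pair.
session_key = hkdf(salt=b"hybrid-kem-demo",
                   ikm=ss_mlkem + ss_x25519,
                   info=b"demo-session-label")
print(len(session_key))  # → 32
```

An attacker must recover both input secrets to learn the session key, which is what makes the hybrid a hedge in both directions: against future quantum attacks on X25519 and against present-day classical attacks on ML-KEM.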
In contrast, rollout of post-quantum digital signatures to critical network infrastructure is being delayed until CRQCs are truly imminent, as current post-quantum signature schemes impose performance penalties (detailed further below).
zkSNARKs—zero-knowledge succinct non-interactive arguments of knowledge, key to blockchain scalability and privacy—occupy a similar position to signatures. Even for non-post-quantum-secure zkSNARKs (which use elliptic curve cryptography like today's non-PQ encryption and signature schemes), their zero-knowledge property is itself post-quantum secure.
The zero-knowledge property ensures no information about the secret witness is leaked in the proof—even to quantum adversaries—so there is no confidential data to “harvest now” for later decryption.
Thus, zkSNARKs are not vulnerable to Harvest-Now-Decrypt-Later attacks. Just as non-PQ signatures generated today are secure, any zkSNARK proof generated before a CRQC appears is trustworthy (i.e., the proven statement really is true)—even if the zkSNARK uses elliptic curve cryptography. Only after a CRQC emerges could an attacker produce convincing proofs of false statements.
What This Means for Blockchains
Most blockchains are not exposed to HNDL attacks:
Most non-private chains, like today’s Bitcoin and Ethereum, primarily use non-post-quantum cryptography for transaction authorization—that is, digital signatures, not encryption.
Likewise, these signatures are not subject to HNDL risk: "Harvest-Now-Decrypt-Later" applies to encrypted data. For example, Bitcoin’s blockchain is public; the quantum threat is signature forgery (deriving private keys to steal funds), not decrypting already public transaction data. This removes the immediate cryptographic urgency driven by HNDL.
Unfortunately, even analyses from credible sources like the Federal Reserve incorrectly claim Bitcoin is vulnerable to HNDL attacks—an error that exaggerates the urgency of transitioning to post-quantum cryptography.
That said, reduced urgency does not mean Bitcoin can wait: it faces different timeline pressures due to the enormous social coordination required to change its protocol.
The exception today is privacy chains, many of which encrypt or otherwise hide recipients and amounts. Once quantum computers can break elliptic curve cryptography, this confidentiality can be harvested and retrospectively deanonymized.
The severity of this threat varies by blockchain design. For example, in Monero, curve-based ring signatures and key images (linkable tags per output to prevent double spends) allow the public ledger itself to be used to reconstruct spending graphs retrospectively. In other chains, damage is more limited—see discussion by Zcash cryptographer and researcher Sean Bowe.
If it is important that users’ transactions not be exposed by a CRQC, privacy chains should transition to post-quantum primitives (or hybrid schemes) as soon as feasible. Alternatively, they should adopt architectures that avoid placing decryptable secrets on-chain.
Bitcoin’s Unique Challenge: Governance + Abandoned Coins
For Bitcoin specifically, two practical concerns drive urgency toward adopting post-quantum digital signatures—neither related to quantum technology itself.
One concern is governance speed: Bitcoin changes slowly. If the community cannot agree on a solution, any contentious issue could trigger a disruptive hard fork.
The other concern is that Bitcoin’s shift to post-quantum signatures cannot be passive: owners must actively migrate their coins. This means abandoned, quantum-vulnerable coins cannot be protected. Some estimates place the number of vulnerable and potentially abandoned BTC at several million coins—worth tens of billions of dollars at current prices (as of December 2025).
However, the quantum threat to Bitcoin will not be a sudden, overnight catastrophe… but rather a selective, gradual targeting process. Quantum computers will not break every key at once—Shor's algorithm must target one public key at a time. Early quantum attacks will be extremely expensive and slow. Therefore, once quantum computers can crack individual Bitcoin signature keys, attackers will selectively target high-value wallets.
Moreover, users who avoid address reuse and do not use Taproot addresses—which expose public keys directly on-chain—are largely protected even without protocol changes: their public keys remain hidden behind hash functions until their coins are spent. When they finally broadcast a spending transaction, the public key becomes visible, triggering a brief real-time race between honest spenders needing confirmation and quantum-equipped attackers trying to derive the private key and spend the coins before the legitimate transaction finalizes. Thus, truly vulnerable coins are those with exposed public keys: early pay-to-public-key (P2PK) outputs, reused addresses, and Taproot holdings.
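The "hash shields the key" structure can be sketched in a few lines. Note the caveat in the comments: real P2PKH uses HASH160 (RIPEMD-160 of SHA-256), which is not available in every hashlib build, so this sketch substitutes truncated SHA-256 purely for illustration, and the public key below is a stand-in, not a real secp256k1 point.

```python
import hashlib

# Real P2PKH commits to HASH160(pubkey) = RIPEMD-160(SHA-256(pubkey)).
# RIPEMD-160 is missing from some hashlib/OpenSSL builds, so this sketch
# truncates SHA-256 to 20 bytes just to illustrate the structure.
def hash160_stand_in(pubkey: bytes) -> bytes:
    return hashlib.sha256(pubkey).digest()[:20]

# A stand-in 33-byte "compressed public key" (not a real curve point).
pubkey = bytes.fromhex("02" + "11" * 32)

# Funding: the chain stores only the 20-byte hash. Shor's algorithm needs
# the public key itself, so a quantum attacker has nothing to target yet.
onchain_commitment = hash160_stand_in(pubkey)

# Spending: the owner reveals the pubkey; nodes check it against the hash.
# Only from this moment until confirmation could a CRQC race to derive
# the private key.
assert hash160_stand_in(pubkey) == onchain_commitment
print(len(onchain_commitment))  # → 20
```

Hash functions are not broken by Shor's algorithm, so a key hidden behind its hash stays out of reach until the spend reveals it, which is exactly the brief race window described above.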
There is no simple solution for abandoned vulnerable coins. Options include:
The Bitcoin community agrees on a "flag day" after which any unmigrated coins are declared destroyed.
Abandoned quantum-vulnerable coins become claimable by anyone possessing a CRQC.
The second option raises serious legal and security issues. Taking possession of coins via quantum computer without a private key—even with claims of legitimate ownership or good intentions—could trigger severe problems under theft and computer fraud laws in many jurisdictions.
Additionally, "abandoned" is based on inactivity assumptions. But no one truly knows whether these coins lack living owners with valid keys. Evidence of past ownership may not suffice for legal authorization to break cryptographic protections and reclaim them. This legal ambiguity increases the likelihood that abandoned quantum-vulnerable coins fall into the hands of malicious actors willing to ignore legal constraints.
A final Bitcoin-specific issue is its low transaction throughput. Even once a migration plan is finalized, moving all quantum-vulnerable funds to post-quantum secure addresses would take months at Bitcoin’s current transaction rate.
These challenges make it critical for Bitcoin to begin planning its post-quantum transition now—not because CRQCs may arrive before 2030, but because the governance, coordination, and technical logistics required to migrate billions of dollars worth of coins will take years to resolve.
The quantum threat to Bitcoin is real, but the timeline pressure comes from Bitcoin’s own constraints, not an imminent quantum computer. Other blockchains face their own challenges with quantum-vulnerable funds, but Bitcoin faces unique exposure: its earliest transactions used pay-to-public-key (P2PK) outputs, placing public keys directly on-chain, making a significant portion of BTC especially vulnerable to CRQCs. This technical difference—combined with Bitcoin’s age, value concentration, low throughput, and rigid governance—makes the problem particularly severe.
Note that the vulnerability I described above applies to the cryptographic security of Bitcoin’s digital signatures—but not to the economic security of the Bitcoin blockchain. This economic security stems from the Proof-of-Work (PoW) consensus mechanism, which is not easily threatened by quantum computers for three reasons:
PoW relies on hashing, so it suffers only quadratic quantum speedup from Grover’s search algorithm, not exponential speedup from Shor’s algorithm.
The practical overhead of implementing Grover’s search makes it extremely unlikely any quantum computer could achieve even modest real-world acceleration on Bitcoin’s PoW.
Even if significant acceleration were achieved, it would merely give large quantum miners an advantage over smaller ones, without fundamentally undermining Bitcoin’s economic security model.
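The arithmetic behind the first two points can be made explicit. The difficulty and overhead figures below are illustrative assumptions, not measurements:

```python
# A d-bit PoW search costs ~2^d classical hash evaluations, but an ideal
# Grover search needs only ~2^(d/2) *sequential* quantum iterations.
d = 80                                      # hypothetical difficulty, in bits
ideal_speedup = 2 ** d // 2 ** (d // 2)     # quadratic speedup: 2^40
assert ideal_speedup == 2 ** (d // 2)

# Each Grover iteration, however, must evaluate the hash inside a
# fault-tolerant quantum circuit. With an illustrative (purely assumed)
# per-iteration cost penalty versus one ASIC hash, the net advantage
# shrinks dramatically -- and Grover iterations cannot be parallelized
# the way racks of ASICs can.
overhead = 10 ** 9
net_advantage = ideal_speedup / overhead
print(f"ideal speedup 2^{d // 2}; net advantage after overhead: {net_advantage:.0f}x")
```

Even this back-of-the-envelope version flatters the quantum side: it ignores error-correction slowdowns per gate, which push the crossover point far beyond anything on current roadmaps.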
Costs and Risks of Post-Quantum Signatures
To understand why blockchains should not rush to deploy post-quantum signatures, we must examine performance costs and our evolving confidence in post-quantum security.
Most post-quantum cryptography is based on one of five approaches:

- Hashing
- Codes
- Lattices
- Multivariate quadratic systems (MQ)
- Isogenies
Why five different methods? The security of any post-quantum primitive rests on the assumption that quantum computers cannot efficiently solve certain mathematical problems. The more "structured" the problem, the more efficient the resulting cryptographic protocols can be.
But this cuts both ways: extra structure also creates more attack surface. This creates a fundamental trade-off—stronger assumptions enable better performance, at the cost of greater potential vulnerabilities (i.e., higher chance the assumption proves false).
In general, hash-based approaches are the most conservative in terms of security—we are most confident quantum computers cannot efficiently attack these. But they are also the least performant. For example, NIST-standardized hash-based signature schemes are 7–8 KB even at minimal parameter settings. By comparison, today’s elliptic curve-based digital signatures are just 64 bytes—about 100 times smaller.
Lattice-based schemes are the primary focus today, and they dominate NIST's selections: the sole encryption standard (ML-KEM) and two of the three signature algorithms are lattice-based. One lattice scheme (ML-DSA, formerly Dilithium) produces signatures ranging from 2.4 KB (at 128-bit security) to 4.6 KB (at 256-bit security)—roughly 40–70 times larger than today's EC-based signatures. Another lattice scheme, Falcon, offers considerably smaller signatures (666 bytes for Falcon-512, 1.3 KB for Falcon-1024), but involves complex floating-point operations that NIST itself flags as a special implementation challenge. Falcon's creator Thomas Pornin has called it "the most complex cryptographic algorithm I've ever implemented."
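Putting these sizes side by side makes the trade-off vivid. The figures below are the approximate ones cited in the surrounding text (SLH-DSA is NIST's standardized hash-based scheme at its smallest parameter set):

```python
# Approximate signature sizes in bytes, per the figures cited in the text.
sizes = {
    "ECDSA/secp256k1 (today)":   64,
    "Falcon-512":               666,
    "Falcon-1024":             1330,
    "ML-DSA, 128-bit level":   2420,
    "ML-DSA, 256-bit level":   4627,
    "SLH-DSA (hash-based)":    7856,
}

baseline = sizes["ECDSA/secp256k1 (today)"]
for name, size in sizes.items():
    # Show each scheme's size and its blow-up relative to today's EC sigs.
    print(f"{name:26s} {size:5d} B  ({size / baseline:6.1f}x EC baseline)")
```

For a blockchain that stores every signature forever, a 10x to 120x size increase translates directly into bandwidth, storage, and fee pressure, which is a large part of why migration timing matters.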
Implementing lattice-based digital signatures securely is also more challenging than EC-based schemes: ML-DSA has more sensitive intermediate values and non-trivial rejection sampling logic requiring side-channel and fault protection. Falcon adds constant-time floating-point issues; indeed, several side-channel attacks on Falcon implementations have already recovered secret keys.
These pose immediate risks, unlike the distant threat of CRQCs.
There is strong reason to be cautious when deploying higher-performance post-quantum methods. Historically, leading candidates like Rainbow (an MQ-based signature) and SIKE/SIDH (an isogeny-based encryption scheme) were broken classically—that is, using today’s computers, not quantum ones.
Both breaks came late in NIST's standardization process. That is healthy science at work, but it shows that premature standardization and deployment can backfire.
As noted earlier, internet infrastructure is taking a deliberate approach to signature migration. This is especially notable given how long cryptographic transitions take once initiated. The shift away from the MD5 and SHA-1 hash functions—deprecated years ago by standards bodies—took many years to implement across infrastructure and remains incomplete in places. And that was for schemes that were fully broken, not merely potentially vulnerable to future technology.
Unique Challenges for Blockchains vs. Internet Infrastructure
Luckily, blockchains like Ethereum or Solana, actively maintained by open-source developer communities, can upgrade faster than traditional internet infrastructure. On the other hand, traditional infrastructure benefits from frequent key rotation, so its attack surface moves faster than early quantum machines could target—a luxury blockchains lack, since coins and their associated keys can remain exposed indefinitely.
Still, blockchains should generally follow the internet’s thoughtful approach to signature migration. Signatures in both settings are not exposed to HNDL attacks, and regardless of key longevity, the costs and risks of premature migration to immature post-quantum schemes remain substantial.
There are blockchain-specific challenges making premature migration especially risky and complex: for example, blockchains have unique requirements for signature schemes, especially the ability to quickly aggregate many signatures. Today, BLS signatures are commonly used for their fast aggregation, but they are not post-quantum secure. Researchers are exploring SNARK-based post-quantum signature aggregation. This work is promising but still in early stages.
For SNARKs, the community currently focuses on hash-based constructions as the leading post-quantum option. But a major shift is coming: I believe lattice-based options will soon become attractive alternatives. These will offer better performance across the board compared to hash-based SNARKs, including shorter proofs—similar to how lattice-based signatures are shorter than hash-based ones.
Bigger Problems Today: Implementation Security
In the coming years, implementation vulnerabilities will pose a greater security risk than cryptographically relevant quantum computers. For SNARKs, the main concern is software bugs.
Bugs are already a challenge for digital signatures and encryption schemes, but SNARKs are vastly more complex. Indeed, a digital signature scheme can be viewed as a very simple zkSNARK for the statement "I know the private key corresponding to my public key, and I authorize this message."
For post-quantum signatures, immediate risks also include implementation attacks like side-channel and fault-injection attacks. These are well-documented and can extract secret keys from deployed systems. They represent a more pressing threat than distant quantum computers.
The community will spend years identifying and fixing bugs in SNARKs and hardening post-quantum signature implementations against side-channel and fault-injection attacks. Since the landscape for post-quantum SNARKs and signature aggregation remains unsettled, blockchains that transition too early risk locking into suboptimal solutions. They may need to migrate again when better options emerge or when implementation flaws are discovered.
What Should We Do? 7 Recommendations
Given the realities outlined above, I conclude with recommendations for various stakeholders—from builders to policymakers. Core principle: take quantum threats seriously, but do not act on the assumption that CRQCs will arrive before 2030. This assumption is not supported by current progress. Still, there are things we can and should do now:
We should immediately deploy hybrid encryption.
At least where long-term confidentiality matters and the costs are acceptable.
Many browsers, CDNs, and messaging apps (like iMessage and Signal) have already adopted hybrid approaches. Hybrid—post-quantum plus classical—defends against HNDL attacks while hedging against potential weaknesses in post-quantum schemes.
Immediately use hash-based signatures when size is tolerable.
Software/firmware updates—and other low-frequency, size-insensitive scenarios—should immediately adopt hybrid hash-based signatures. (The hybrid is a hedge against implementation bugs in new schemes, not against the hash-based security assumptions themselves.)
This is conservative and provides society with a clear "lifeboat" in the unlikely event of an unexpectedly early CRQC. Without pre-deployed post-quantum signature software updates, once a CRQC appears, we face a bootstrapping problem: we won’t be able to securely distribute the post-quantum cryptographic fixes needed to defend against it.
Blockchains need not rush to deploy post-quantum signatures—but should begin planning immediately.
Blockchain developers should follow the lead of the network PKI community in taking a thoughtful approach to post-quantum signature deployment. This allows post-quantum signature schemes time to mature in performance and in our understanding of their security. It also gives developers time to re-architect systems to handle larger signatures and develop better aggregation techniques.
For Bitcoin and other L1s: Communities need to define migration paths and policies regarding abandoned quantum-vulnerable funds. Passive migration is impossible, so planning is essential. And because Bitcoin faces unique non-technical challenges—slow governance and a large number of high-value potentially abandoned quantum-vulnerable addresses—it is especially critical that the Bitcoin community begin planning now.
Meanwhile, we must allow research on post-quantum SNARKs and aggregatable signatures to mature (likely requiring several more years). Again, premature migration risks locking into suboptimal schemes or requiring a second migration to fix implementation bugs.
A note on Ethereum's account model: Ethereum supports two account types, with differing implications for post-quantum migration—Externally Owned Accounts (EOAs), traditional accounts controlled by secp256k1 private keys; and smart contract wallets with programmable authorization logic.
In non-emergency scenarios, if Ethereum adds post-quantum signature support, upgradable smart contract wallets can switch to post-quantum verification via contract upgrades—while EOAs may need to move funds to new post-quantum secure addresses (though Ethereum will likely provide dedicated migration mechanisms for EOAs).
In a quantum emergency, Ethereum researchers have proposed a hard fork plan to freeze vulnerable accounts and allow users to recover funds by proving knowledge of their mnemonic phrases using post-quantum secure SNARKs. This recovery mechanism would apply to both EOAs and any un-upgraded smart contract wallets.
Practical impact for users: Well-audited, upgradable smart contract wallets may offer a slightly smoother migration path—but the difference is minor and comes with trade-offs in trust toward wallet providers and upgrade governance. More importantly, the Ethereum community must continue its work on post-quantum primitives and emergency response planning.
A broader design lesson for builders: Many blockchains today tightly couple account identity with specific cryptographic primitives—Bitcoin and Ethereum with ECDSA signatures on secp256k1, others with EdDSA. The challenges of post-quantum migration highlight the value of decoupling account identity from any single signature scheme. Ethereum's move toward smart accounts and similar account abstraction efforts on other chains reflect this trend: allowing accounts to upgrade their authentication logic without losing on-chain history and state. This decoupling doesn't make post-quantum migration trivial, but it does offer more flexibility than hardcoding accounts to a single signature scheme. (It also enables unrelated features like sponsored transactions, social recovery, and multisig.)
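The decoupling idea can be sketched as a registry of pluggable verifiers. All names and the toy "verification" below are invented for illustration; the point is that the account's identity survives a change of signature scheme.

```python
# Account abstraction in miniature: the account stores a scheme name plus
# key material rather than hardcoding one curve. Migrating to a post-quantum
# scheme is then a state update, not a change of identity.
class Account:
    VERIFIERS = {}                  # registry: scheme name -> verify function

    def __init__(self, address, scheme, pubkey):
        self.address = address      # identity: survives scheme changes
        self.scheme = scheme
        self.pubkey = pubkey

    def authorize(self, message, proof):
        """Dispatch to whichever verifier the account currently uses."""
        return Account.VERIFIERS[self.scheme](self.pubkey, message, proof)

    def rotate(self, new_scheme, new_pubkey):
        """Upgrade authentication logic without losing on-chain identity."""
        self.scheme, self.pubkey = new_scheme, new_pubkey

# Toy stand-ins for real ECDSA / ML-DSA verification (NOT real crypto).
Account.VERIFIERS["toy-ecdsa"] = lambda pk, m, p: p == ("ec", pk, m)
Account.VERIFIERS["toy-mldsa"] = lambda pk, m, p: p == ("pq", pk, m)

acct = Account("0xabc", "toy-ecdsa", "ec-pub")
assert acct.authorize("tx1", ("ec", "ec-pub", "tx1"))

acct.rotate("toy-mldsa", "pq-pub")          # the "post-quantum migration"
assert acct.address == "0xabc"              # same identity, same history
assert acct.authorize("tx2", ("pq", "pq-pub", "tx2"))
print("migrated schemes without changing the account's address")
```

In a chain that instead derives the address from one fixed signature algorithm, the equivalent migration forces users to move funds to entirely new addresses, which is exactly the coordination burden described above.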
For privacy chains that encrypt or hide transaction details, earlier transition should be prioritized if performance permits.
User confidentiality on these chains is currently exposed to HNDL attacks, though severity varies by design. Chains where the public ledger alone enables full retrospective de-anonymization face the most urgent risk.
Consider hybrid schemes (post-quantum + classical) to guard against the possibility that ostensibly post-quantum schemes are later proven classically insecure, or implement architectural changes that avoid placing decryptable secrets on-chain.
In the near term, prioritize implementation security over quantum threat mitigation.
Especially for complex primitives like SNARKs and post-quantum signatures, bugs and implementation attacks (side-channel, fault injection) will be far greater security risks in the coming years than CRQCs.
Invest immediately in audits, fuzzing, formal verification, and defense-in-depth/layered security approaches—don’t let quantum worries overshadow the more immediate threat of software bugs!
Fund quantum computing development.
A key national security implication of all the above is that we need sustained funding and talent development in quantum computing.
If a major adversary achieves cryptographically relevant quantum computing capability before the U.S., it would pose a severe national security risk to us and the rest of the world.
Maintain perspective on quantum computing announcements.
As quantum hardware matures, many milestones will be announced in the coming years. Paradoxically, the sheer frequency of these announcements is itself a sign of how far we remain from CRQCs: each milestone is just one of many bridges to cross, each generating its own headlines and wave of excitement.
Treat press releases as progress reports requiring critical evaluation—not triggers for sudden action.
Of course, surprising breakthroughs or innovations could accelerate timelines, just as severe scaling bottlenecks could extend them.
I won’t argue that a CRQC emerging within five years is absolutely impossible—only that it is extremely unlikely. The above recommendations are robust to this uncertainty and following them avoids more immediate, probable risks: implementation bugs, rushed deployments, and the ordinary ways cryptographic transitions go wrong.