Chapter I: Counting the Nines - Where Quantum Actually Stands

1.1 The Hardware Landscape: An Honest Taxonomy

Quantum computing is not one technology. It is at least six competing technologies, each encoding quantum information in fundamentally different physical systems. Unlike artificial intelligence, where the Transformer architecture emerged as the dominant paradigm by the late 2010s, quantum computing has not converged on a single winner. This is both a risk (fragmentation of R&D investment, incompatible software ecosystems) and an opportunity (diversified bets, potential for surprise breakthroughs from unexpected directions).

Imagine several groups all trying to build the first airplane, but one uses wood-and-fabric biplanes, another attempts jet engines, a third builds helicopters. They're all "flying machines," but the engineering is totally different. Nobody knows which approach will win, and more than one may succeed for different purposes.

The six primary qubit modalities, as of early 2026, are superconducting qubits, trapped ions, neutral atoms, photonic qubits, topological qubits, and silicon spin qubits.

Each occupies a different position on the axes that matter: gate fidelity, coherence time, qubit count, connectivity, and scalability. No modality leads on all axes simultaneously.

Superconducting Qubits

Superconducting qubits are the most mature modality and the one with the largest installed base. Google's Willow chip (105 qubits, December 2024) and IBM's Nighthawk processor (120 qubits, November 2025) represent the current state of the art. Their key advantage is speed: gate times of tens to hundreds of nanoseconds, orders of magnitude faster than trapped ions. They are also fabricated using processes adapted from semiconductor manufacturing, which in principle allows them to leverage decades of industrial fab expertise.

Superconducting qubits — called transmons[15] — are tiny aluminum circuits on silicon chips, cooled to near absolute zero. Their biggest strength: they're blindingly fast. Their biggest weakness: they must be kept colder than deep space, and each qubit can only talk to its nearest neighbors — like students who can only pass notes to the person sitting next to them.

Google Willow's headline numbers: 105 qubits, average T1 (relaxation time) of approximately 68 microseconds, and the first below-threshold surface code error correction[3], with a distance-7 code achieving 0.143% error per cycle. IBM's third-revision Heron chips have achieved median two-qubit CZ gate fidelities of approximately 99.75%, with best individual pairs exceeding 99.9%, and Nighthawk's 120-qubit square lattice[4] with 218 tunable couplers enables circuits of up to 5,000 two-qubit gates.

The binding limitation of superconducting qubits is twofold: they require millikelvin operating temperatures (dilution refrigerators), and they have limited connectivity (typically nearest-neighbor on a 2D grid). This limited connectivity makes error correction expensive-the standard surface code requires roughly 1,000 physical qubits per logical qubit at current fidelities. IBM is actively working to address this with "c-couplers" that enable longer-range qubit connections, demonstrated on the experimental Loon processor in November 2025.

A critical materials-science breakthrough emerged from Princeton in November 2025: by substituting tantalum for traditional niobium and replacing the sapphire substrate with high-quality silicon, a transmon qubit achieved a coherence time (T1 relaxation) exceeding one millisecond[1], three times the previous record and fifteen times the industry standard. (Note: this T1 figure is not directly comparable to Willow's ~68 µs average T1, as the Princeton result was a single-device lab demonstration, not a multi-qubit processor metric.) See Chapter II, Feedback Loop 1 for the broader implications of materials-driven qubit improvements.
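A rough sense of what these coherence numbers buy: divide the coherence time by the gate time to estimate how many operations fit inside one coherence window. The sketch below assumes an illustrative 50-nanosecond two-qubit gate (the text gives only a "tens to hundreds of nanoseconds" range), so treat the outputs as order-of-magnitude estimates, not benchmarks.

```python
# Back-of-the-envelope: how many gate times fit inside one coherence window?
# The 50 ns gate time is an assumed mid-range value, not a measured spec.
gate_time_s = 50e-9

coherence_windows = {
    "Willow average T1 (~68 microseconds)":   68e-6,
    "Princeton tantalum transmon (~1 ms)":    1e-3,
}

for label, t1 in coherence_windows.items():
    print(f"{label}: ~{t1 / gate_time_s:,.0f} gate times per T1")
# -> roughly 1,400 for Willow's average T1, roughly 20,000 for the 1 ms device
```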

Interior of a dilution refrigerator used to cool superconducting qubits to ~15 millikelvin — colder than outer space. The gold-plated wiring carries microwave signals to the quantum processor at the bottom. Credit: Wikimedia Commons (CC-BY-SA 4.0)

Trapped Ions

Trapped-ion quantum computers, led by Quantinuum (majority-owned by Honeywell) and IonQ, use individual atomic ions suspended in electromagnetic traps as qubits. Their defining advantage is quality: Quantinuum's Helios system achieves 99.921% two-qubit gate fidelity and 99.9975% single-qubit gate fidelity, the highest of any commercial system. Ions also provide all-to-all connectivity (any qubit can interact with any other qubit), which dramatically reduces the overhead required for error correction.

Instead of tiny circuits, this approach uses actual atoms floating in a vacuum, manipulated by lasers. The key advantage: any atom can "talk" to any other, not just its neighbors-like a classroom where every student can whisper to everyone, instead of passing notes through a chain. This full connectivity makes error correction far more efficient.

Helios represents a genuine architectural leap over its predecessor, the H2 system. Where H2 used a linear "racetrack" design with 56 qubits, Helios introduces a ring-storage architecture[2] with a first-of-its-kind commercial ion junction, enabling parallel sorting, cooling, and gating operations. The system produced 94 error-detected logical qubits globally entangled in one of the largest Greenberger-Horne-Zeilinger states ever recorded, using the [[4,2,2]] Iceberg code-a distance-2 code that detects errors via post-selection (discarding faulty results) rather than actively correcting them.

Through code concatenation with symplectic double codes, the system also demonstrated 48 logical qubits at the landmark 2:1 encoding ratio. This is a significant efficiency achievement, but it is important to note that post-selection-based error detection is a less powerful capability than the active error correction (requiring distance-3+ codes) needed for fault-tolerant computation at scale.

The trapped-ion approach's primary limitation is speed: gate times are in the microsecond range, roughly 100 to 1,000 times slower than superconducting gates. Scaling is also an open question-Quantinuum's QCCD (Quantum Charge-Coupled Device) architecture plans to scale via junctions arranged in a "city street grid," but this has not yet been demonstrated beyond Helios's single junction. The company's roadmap calls for Sol (192 physical qubits) by 2027 and Apollo (thousands of physical qubits, hundreds of logical qubits, full fault tolerance) by 2029.

IonQ, meanwhile, has pursued an aggressive acquisition strategy in 2025, purchasing Oxford Ionics, ID Quantique, Lightsynq, Capella Space, and Vector Atomic in deals exceeding one billion dollars, positioning itself as a vertically integrated quantum company. Its Forte Enterprise system operates with 36 qubits, and in March 2025, a collaboration with Ansys reported a 12% speedup[5] over classical high-performance computing on a single medical device simulation instance. This result has not been independently replicated, the quality of the classical baseline has not been independently verified, and a 12% margin on a single problem instance does not yet constitute "practical quantum advantage" by rigorous standards-particularly given that the quantum computation likely cost orders of magnitude more per compute hour than the classical alternative.

A quantum computer reportedly edged out a traditional supercomputer on a real engineering problem-modest and unverified, but one of the first times anyone could even make that claim. Think of it as a prototype car completing one lap slightly faster than the reigning champion: noteworthy, but far from winning the race.

A trapped-ion quantum computing apparatus at NIST. Ions are confined using electromagnetic fields above a chip and manipulated with precision lasers. Credit: NIST/Wikimedia Commons (CC-BY 2.0)

Neutral Atoms

Neutral-atom quantum computers, pioneered by QuEra, Pasqal, and Atom Computing, trap individual atoms (typically rubidium or ytterbium) in arrays of focused laser beams called optical tweezers. Their key advantage is natural scalability: it is relatively straightforward to create large arrays of hundreds or thousands of atomic sites. Atom Computing has demonstrated 1,180 atomic sites[6], and QuEra has operated with 256 qubits.

Imagine using tiny laser "tweezers" to pick up individual atoms and arrange them like pieces on a chessboard, rearranging patterns during computation. This approach is younger than the others, but it's scaling fast.

The architecture offers reconfigurable geometry-atoms can be rearranged in arbitrary patterns during computation-which enables native support for certain error correction codes and algorithms that would require costly SWAP operations on fixed-grid architectures. Ground-state coherence times are on the order of seconds, vastly exceeding superconducting systems.

The limitation is gate maturity: two-qubit gate fidelities are improving rapidly but remain below those of trapped ions and the best superconducting systems, currently around 99.5%. Mid-circuit measurement-essential for real-time error correction-is still under development. However, QuEra's 2025 publications on algorithmic fault-tolerance[7] techniques claiming up to 100× reduction in error correction overhead have generated significant excitement and could be transformative if validated at scale.

Photonic Qubits

Photonic quantum computers, pursued by PsiQuantum and Xanadu, encode quantum information in photons-particles of light. Their theoretical advantages are compelling: photons do not decohere in transit (eliminating the coherence-time problem entirely), they operate at room temperature (no dilution refrigerators), and they naturally integrate with telecommunications infrastructure for distributed quantum computing.

Photonic quantum computers use particles of light instead of atoms or circuits. The huge upside: light doesn't "forget" its quantum state, and you don't need giant freezers. The huge downside: photons are extremely hard to make interact with each other, and interaction is exactly what computation requires.

PsiQuantum, the world's most funded quantum startup[8] ($1 billion raised in September 2025 at a $7 billion valuation, with additional government backing from Australia), unveiled a photonic processor in February 2025. The company's strategy bypasses near-term noisy intermediate-scale quantum (NISQ) applications entirely, aiming directly at a large-scale fault-tolerant machine fabricated in conventional semiconductor fabs. Microsoft and PsiQuantum are the two companies[9] that have advanced to the final stage of DARPA's US2QC program (the precursor to the Quantum Benchmarking Initiative).

The fundamental challenge for photonic quantum computing is deterministic two-qubit gates. Photons do not naturally interact with each other, making the entangling operations central to quantum computation extremely difficult. Photonic approaches instead rely on measurement-based quantum computing and fusion operations, which are probabilistic. Overcoming the resulting photon loss and inefficiency remains the core engineering challenge.

Topological Qubits

Microsoft's approach is the most ambitious and the most contested. Topological qubits would encode quantum information in Majorana quasiparticles-exotic collective states of electrons predicted to exist in certain superconducting nanowires. If they work as theorized, topological qubits would be intrinsically protected from local noise, offering built-in error correction and potentially requiring far fewer physical qubits per logical qubit than any other architecture.

Most approaches accept that qubits will be noisy and try to correct errors after the fact. This approach is different: build qubits where information is inherently immune to noise. It's the difference between constantly repairing a sandcastle versus building with concrete that doesn't wash away-a beautiful idea, but proving it works has been extremely difficult.

Microsoft announced Majorana 1 in February 2025[10], claiming the creation of the first "topoconductor" and an eight-qubit chip with a "Topological Core" architecture designed to scale to one million qubits. The accompanying Nature paper described interferometric parity measurement in InAs-Al hybrid devices-a prerequisite for topological qubit operation.

However, the Nature editorial team itself noted that the peer-reviewed results "do not represent evidence for the presence of Majorana zero modes[11]." At the APS Global Physics Summit in March 2025, Microsoft's Chetan Nayak presented additional data to a packed and largely skeptical audience. A preprint by Henry Legg of the University of St Andrews[12] argued that Microsoft's Topological Gap Protocol-the test used to identify Majoranas-is flawed and susceptible to false positives. A subsequent paper from the University of New South Wales suggested[13] that the decoherence time of Majorana qubits may be too short to support computation. Microsoft vigorously disputes both critiques and says it has made significant additional progress since the Nature paper was submitted in March 2024.

The honest assessment: topological quantum computing remains the highest-risk, highest-reward approach. If Microsoft is right, it could leapfrog all other architectures. If the skeptics are right, the company has spent two decades pursuing a physical phenomenon that may not be practically exploitable. Multiple physicists have noted that even if topological qubits work, the approach is "probably 20-30 years behind[14] the other platforms" (Winfried Hensinger, University of Sussex). DARPA's advancement of Microsoft to the final stage of US2QC, however, suggests that government evaluators believe the approach has at least a plausible path forward.

Silicon Spin Qubits

Silicon spin qubits, pursued by Intel (Tunnel Falls, 12 qubits), Diraq, and Silicon Quantum Computing, encode quantum information in the spin states of individual electrons or nuclei in silicon. Their potential advantage is enormous: full compatibility with existing CMOS semiconductor fabrication, which could enable mass production using the same factories that make conventional computer chips.

Silicon spin qubits are the "what if we could just use regular chip factories?" approach. If it works, quantum processors could be mass-manufactured on the same lines that make phone chips-an unbeatable cost advantage. The catch: it is the youngest approach, with the smallest demonstrated systems. Even so, DARPA thinks it is worth watching.

The approach is the earliest-stage among commercially pursued modalities, with the smallest demonstrated qubit counts and fidelities that are still climbing toward competitive levels (approximately 99%+). However, three silicon-focused companies (Diraq, Quantum Motion, Silicon Quantum Computing) advanced to Stage B of DARPA's QBI, suggesting that evaluators see a credible scaling path. If silicon qubits can close the fidelity gap, their CMOS compatibility could make them the long-term winner for mass-manufactured quantum processors.

Figure 8: Qubit Modality Comparison: No Single Winner
Each approach leads on different axes. Scores are qualitative assessments (1=weakest, 5=strongest). Note: topological qubit scores for coherence time and qubit count reflect theoretical promise rather than demonstrated capability—no confirmed topological qubits have been independently validated as of February 2026.

1.2 The Error Correction Revolution

Error correction is the most important technical story in quantum computing today. It deserves extended treatment because it is the key that unlocks everything else.

The core problem is straightforward: individual qubits are noisy. Even the best physical qubits in the world-Quantinuum's barium ions at 99.921% two-qubit gate fidelity-make errors roughly once every 1,200 operations. For commercially useful quantum algorithms like simulating a drug molecule or factoring a large number, you need billions or trillions of operations to complete reliably. At current physical error rates, the computation would be corrupted long before it finished.

Even the best qubit makes mistakes. Solving real-world problems requires billions of steps, so without a proofreading system the answer would be gibberish. Error correction is that proofreading system-it catches and fixes mistakes as they happen.
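To make the numbers concrete, here is the back-of-the-envelope arithmetic behind "roughly once every 1,200 operations" and behind why billions of uncorrected operations are hopeless. It assumes independent errors at a fixed per-gate rate of 1 − fidelity, which is a simplification of real error processes.

```python
# Error budget for uncorrected physical qubits, assuming independent gate errors.
fidelity = 0.99921                 # Helios two-qubit gate fidelity (from the text)
error_rate = 1 - fidelity          # ~7.9e-4 errors per gate
print(f"average gates between errors: ~{1 / error_rate:,.0f}")   # ~1,266

# Probability that an N-gate circuit finishes with zero errors: fidelity ** N
for n_gates in (1_000, 100_000, 1_000_000_000):
    print(f"{n_gates:>13,} gates -> success probability {fidelity ** n_gates:.3g}")
# ~0.45 at a thousand gates; effectively zero at a billion gates
```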

Quantum error correction (QEC) solves this by encoding one "logical" qubit across many "physical" qubits, using redundancy and continuous error detection to suppress the logical error rate exponentially. The catch: this only works if the physical error rate is below a critical threshold. If physical qubits are too noisy, adding more of them makes things worse, not better.
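A toy sketch of that threshold behavior, assuming the textbook scaling for a distance-d code, p_logical ≈ (p_physical / p_threshold)^((d+1)/2), with an illustrative threshold of about 1%. Real codes have prefactors and correlated noise, so this captures only the qualitative effect: below threshold, adding distance helps; above it, adding distance hurts.

```python
# Toy model of the error-correction threshold, assuming the textbook scaling
# p_logical ~ (p_physical / p_threshold) ** ((d + 1) / 2) for a distance-d code.
P_THRESHOLD = 0.01                 # illustrative ~1% threshold, not a measured value

def logical_error_rate(p_physical: float, distance: int) -> float:
    return (p_physical / P_THRESHOLD) ** ((distance + 1) / 2)

for p_physical in (0.002, 0.02):   # one qubit quality below threshold, one above
    rates = [logical_error_rate(p_physical, d) for d in (3, 5, 7)]
    trend = "improves" if rates[2] < rates[0] else "gets worse"
    formatted = ", ".join(f"d={d}: {r:.1e}" for d, r in zip((3, 5, 7), rates))
    print(f"p_physical={p_physical}: {formatted}  -> adding qubits {trend}")
```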

The history of QEC is a story of a thirty-year quest to cross that threshold:

1995: Peter Shor proposes quantum error correction. The theoretical possibility is established, but the required physical qubit quality seems impossibly far away.

1995-2022: Steady progress in qubit quality, but no system demonstrates below-threshold error correction. Scaling up always degrades performance. Skeptics argue that practical QEC may be physically impossible at achievable noise levels.

2023: Microsoft and Quantinuum demonstrate 12 logical qubits on 56 physical qubits using the H2 trapped-ion system. First tangible evidence that practical QEC is within reach.

December 2024: Google Willow achieves below-threshold error correction (announced December 9, 2024; published in Nature in February 2025). This is the moment the threshold is crossed for the first time. The logical error rate halves with each increase in code distance (3×3 → 5×5 → 7×7). The logical qubit lifetime exceeds the best physical qubit by 2.4×. The error suppression factor is Λ = 2.14 ± 0.02 per code distance step.

When physicists say a code has "distance 7," they mean it can catch and correct up to 3 errors before the computation is ruined (the formula: a distance-d code corrects up to ⌊(d−1)/2⌋ errors)[16]. Higher distance = more errors caught = more reliable computation. But higher distance also requires more physical qubits per logical qubit. Distance is the dial that trades hardware cost for reliability.
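Two quick calculations make the Willow numbers concrete: the ⌊(d−1)/2⌋ rule for how many errors each distance can correct, and a naive extrapolation of the reported Λ ≈ 2.14 suppression factor from the distance-7 error rate of 0.143% per cycle. The extrapolation assumes Λ stays constant at larger distances, which has not been demonstrated.

```python
# Errors correctable by a distance-d code: floor((d - 1) / 2).
for d in (3, 5, 7, 9):
    print(f"distance {d}: corrects up to {(d - 1) // 2} error(s)")

# Naive extrapolation of Willow's suppression factor (assumes Lambda stays constant).
LAMBDA = 2.14          # reported error-suppression factor per code-distance step
p_cycle = 1.43e-3      # 0.143% error per cycle at distance 7 (from the text)
for step, d in enumerate((7, 9, 11, 13)):
    print(f"distance {d:2d}: ~{p_cycle / LAMBDA ** step:.2e} error per cycle")
```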

Figure 11: The 30-Year Quest: Quantum Error Correction Timeline
From theoretical proposal (1995) to practical demonstration (2024) to explosion (2025). Helios demonstrated 94 error-detected qubits (48 via concatenation).

2025: An explosion of progress. In the first ten months of the year, over 120 peer-reviewed papers on quantum error correction are published-described by industry observers as a "tsunami" of activity. Quantinuum's Helios achieves 94 error-detected logical qubits (using [[4,2,2]] detection code) and 48 logical qubits at a 2:1 encoding ratio via code concatenation. IBM demonstrates all hardware elements of fault-tolerant computing with its experimental Loon processor and achieves real-time decoding of qLDPC codes in under 480 nanoseconds-a 10× speedup over previous leading approaches, delivered a full year ahead of schedule. QuEra publishes algorithmic fault-tolerance techniques claiming up to 100× overhead reduction.

DARPA QBI: The U.S. government's Quantum Benchmarking Initiative[9], launched in July 2024, aims to determine whether utility-scale quantum computing is achievable by 2033. In April 2025, approximately eighteen companies entered Stage A. By November 2025, eleven companies across five modalities advanced to Stage B for year-long R&D evaluation. Microsoft and PsiQuantum advanced to the final stage of the related US2QC program.

Scientists proposed a way to make quantum computers reliable decades ago, but it took until recently for anyone to prove it actually works in a real machine. Since that proof, progress has exploded. The government is now actively evaluating whether a practical quantum computer can be built within a decade.

The Path Forward: From 48 to 10,000 Logical Qubits

The key question is: what is the path from today's roughly 48-100 logical qubits to the 1,000-10,000+ logical qubits at error rates of 10⁻⁶ to 10⁻¹² needed for commercially transformative algorithms?

Today's best quantum computers have a modest number of reliable qubits. For problems like drug design, we need orders of magnitude more. The gap is large, but multiple approaches to closing it are being pursued simultaneously, each showing promising early results.

The answer depends critically on which error correction codes are used and how efficiently they can be implemented:

Surface codes (the standard workhorse, used by Google on Willow): Relatively simple to implement on nearest-neighbor grids, but very expensive in overhead. At current fidelities, roughly 1,000 physical qubits are required per high-fidelity logical qubit. Google's Nature paper suggests that a distance-27 surface code (approximately 1,457 physical qubits) could achieve a 10⁻⁶ logical error rate. A 1,000-logical-qubit machine would then require roughly 1.5 million physical qubits, an enormous engineering challenge; the arithmetic is sketched below, after the list of approaches.

qLDPC codes (quantum Low-Density Parity Check, IBM's bet): These newer codes promise dramatically better encoding rates (more logical qubits per physical qubit) but require higher connectivity than nearest-neighbor grids, which is why IBM developed c-couplers for the Loon processor. IBM's "gross code" has attracted over 200 citations in its first year. If qLDPC codes work at scale, the path to thousands of logical qubits becomes much more feasible.

Code concatenation (Quantinuum's approach): By layering codes-combining the [[4,2,2]] Iceberg code with symplectic double codes-Quantinuum achieved the 2:1 encoding ratio on Helios. This approach leverages all-to-all connectivity inherent in trapped-ion architectures. An important caveat: the [[4,2,2]] code is a distance-2 detection code-it identifies errors through post-selection rather than correcting them. Scaling to fault-tolerant computation will require higher-distance codes that have not yet been demonstrated at this encoding efficiency. If the concatenation approach scales to higher-distance codes, it could offer the most qubit-efficient path to fault tolerance.

Genon codes and SWAP-transversal gates (also Quantinuum): Recent work from Quantinuum QEC researchers introduced genon codes that exploit the QCCD architecture's qubit-movement capabilities, performing logical gates by physically relabeling qubits-essentially "for free" in a system where ions can move.

Surface codes, qLDPC codes, genon codes, symplectic double codes — these are all different strategies for the same goal, like different methods of proofreading a book[17]. Surface codes are the simplest and most proven, but they're wasteful (many proofreaders per page). The newer codes try to proofread more efficiently — fewer people, same accuracy — but they're harder to implement with real hardware. The race is to find a code that's both efficient and buildable.
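One compact way to compare these approaches is the standard [[n, k, d]] notation: n physical qubits encode k logical qubits at distance d. The sketch below applies it to the codes named above and reproduces the overhead arithmetic from the surface-code paragraph; the surface-code rows assume the rotated layout of 2d² − 1 physical qubits per logical qubit, and a distance-2 code corrects zero errors, which is why the Iceberg code detects rather than corrects.

```python
# [[n, k, d]] bookkeeping: n physical qubits encode k logical qubits at distance d.
# Surface-code entries assume the rotated layout of 2*d^2 - 1 physical qubits.
def rotated_surface_code_n(distance: int) -> int:
    return 2 * distance ** 2 - 1

codes = {
    "[[4,2,2]] Iceberg (detection only)": (4, 2, 2),
    "distance-7 rotated surface code":    (rotated_surface_code_n(7), 1, 7),    # 97
    "distance-27 rotated surface code":   (rotated_surface_code_n(27), 1, 27),  # 1,457
}

for name, (n, k, d) in codes.items():
    overhead = n / k              # physical qubits per logical qubit
    correctable = (d - 1) // 2    # 0 means the code can only detect errors
    print(f"{name:36s} {overhead:7,.0f}:1 overhead, corrects up to {correctable} error(s)")

# The 1.5-million-qubit estimate: 1,000 logical qubits at distance 27.
print(f"1,000 logical qubits at distance 27: "
      f"~{1_000 * rotated_surface_code_n(27):,} physical qubits")
```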

Figure 2: Physical-to-Logical Qubit Encoding Ratio by Approach
From 1,000:1 to 2:1 - the overhead wall is cracking. Quantinuum's 2:1 uses error detection (post-selection), not full correction.

Three teams are attacking the efficiency problem from three angles: Google is refining surface codes (the brute-force method), IBM is developing qLDPC codes (a fundamentally different approach that needs less hardware), and Quantinuum is layering code concatenation to squeeze maximum efficiency. Any one of these could break the overhead problem wide open.

1.3 The Trendlines

Five trendlines define the trajectory of quantum computing. Each tells a different part of the story.

Trendline 1: Gate Fidelity Over Time

This is the most important trendline in quantum computing. Each additional "nine" of gate fidelity (99.9% → 99.99%) exponentially reduces the error correction overhead required, which is why counting the nines is the central metric of this report. Best reported two-qubit gate fidelities, by year and platform:

2019: ~99.5% (Google Sycamore, superconducting, 53 qubits)

2022: ~99.7% (various platforms, incremental improvement)

2023: ~99.8% (Quantinuum H2, trapped ion, 56 qubits)

2024: ~99.8% (Google Willow, superconducting, 105 qubits); ~99.9% (Quantinuum H2 upgraded)

2025: ~99.921% (Quantinuum Helios, trapped ion, 98 qubits); >99.9% (IBM Heron Rev 3, superconducting, 57+ coupler pairs)

Industry roadmap targets: 99.99% ("four nines") by ~2027-2028; 99.999% ("five nines") by ~2030

The historical rate of improvement, based on the data above, has been approximately 0.6 additional nines over six years (from ~99.5% in 2019 to ~99.921% in 2025 for the leading platform)-slower than one nine per 3-4 years. Industry roadmaps target four nines (99.99%) by ~2027-2028, but these are aspirational goals, not extrapolations from demonstrated rates.

Moreover, the jump from three nines to four nines faces qualitatively different error sources (correlated errors, leakage, cosmic ray events) that may not yield to the same techniques that achieved the first three nines. Each additional nine, if achieved, has outsized impact due to the exponential relationship between fidelity and error correction overhead: going from three nines (99.9%) to four nines (99.99%) could reduce the physical-to-logical qubit ratio by an order of magnitude for surface codes.

Accuracy has been improving, but each additional step gets harder because new types of errors emerge at higher levels of precision. If the next milestone is reached, the math means it could cut the hardware needed by a factor of ten-dramatically changing the economics. But reaching it is not guaranteed on any particular timeline.
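For readers who want the metric made explicit: "nines" can be computed as −log₁₀(1 − fidelity), one common convention. Each additional nine is a tenfold cut in the per-gate error rate, which is what drives the overhead reductions discussed above.

```python
import math

# "Counting the nines": nines = -log10(1 - fidelity), one common convention.
def nines(fidelity: float) -> float:
    return -math.log10(1 - fidelity)

for label, f in [("Helios two-qubit (99.921%)", 0.99921),
                 ("four-nines target (99.99%)", 0.9999),
                 ("five-nines target (99.999%)", 0.99999)]:
    print(f"{label:30s}: {nines(f):.2f} nines  (error rate {1 - f:.1e})")
# Each extra nine cuts the per-gate error rate by 10x, which in turn shrinks the
# code distance (and physical qubit count) needed for a target logical error rate.
```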

Figure 1: Two-Qubit Gate Fidelity: Counting the Nines
Each additional 'nine' exponentially reduces error correction overhead. Aspirational roadmap targets shown. Historical rate ~0.6 nines per 6 years.

Trendline 2: Logical Qubit Count Over Time

This is the newest trendline and the one most directly predictive of when quantum computers become commercially useful:

2023: First demonstrations, 3-12 logical qubits (Microsoft/Quantinuum on H2)

2024: Google Willow distance-7 surface code (1 high-quality logical qubit); multiple demonstrations across platforms

2025: Quantinuum Helios: 94 error-detected logical qubits (using [[4,2,2]] detection code with post-selection); 48 logical qubits at 2:1 encoding ratio via code concatenation (note: detection, not full error correction). IBM Loon: demonstrated all hardware elements for fault-tolerant computing.

Roadmap targets are shown as hollow markers in Figure 3.

Figure 3: Logical Qubit Count Over Time
94 error-detected (48 via concatenation) demonstrated in 2025. Hollow markers represent roadmap targets.

The trajectory from zero to ninety-four error-detected logical qubits in roughly two years is striking, though it is important to distinguish between error detection (Helios's current capability) and the full error correction needed for fault-tolerant computation. If the rate of progress holds-a significant if-hundreds of logical qubits by 2028-2029 is plausible, and thousands by the early 2030s.
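To make "if the rate of progress holds" concrete, here is a deliberately naive extrapolation that assumes logical qubit counts double every year from the 2025 figure. The doubling rate is an illustrative assumption, not a forecast, and it glosses over the detection-versus-correction distinction just noted.

```python
# Illustrative only: constant annual doubling from 2025's error-detected count.
logical_qubits = 94          # Helios, 2025 (error-detected, not fully corrected)
for year in range(2026, 2033):
    logical_qubits *= 2
    print(f"{year}: ~{logical_qubits:,} logical qubits (if doubling holds)")
# Under this assumption: hundreds by 2028, thousands around 2029-2030.
# A modestly slower rate pushes those crossover dates out by several years.
```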

Trendline 3: Investment

Capital flows into quantum computing have inflected sharply upward:

2023: ~$1.3-1.6 billion in venture capital and private equity globally

2024: ~$2.0-2.6 billion (50-58% increase year-over-year)

2025 (first 9 months): $3.77 billion in total equity funding, already well above the 2024 full-year total

Government investment has also surged. By April 2025, global public quantum commitments exceeded $10 billion, driven by Japan's $7.4 billion announcement and Spain's €808 million investment. The United States leads in private-sector diversity and VC funding. China is estimated to have invested approximately $15 billion (exact figures are not officially confirmed). The EU's Quantum Flagship has committed over €1 billion.

Notable 2025 funding rounds are summarized in Figure 5.

Figure 4: Quantum Computing Equity Investment
2025 funding in the first 9 months exceeded 2024's full year.
Figure 5: Largest Quantum Funding Rounds in 2025
Four companies raised $2.39B - led by photonic and trapped-ion approaches.

NVIDIA emerged as a major quantum investor in September 2025, backing Quantinuum, PsiQuantum, and QuEra within a single week.

The money is real and accelerating—recent investment has more than doubled year over year (on an annualized run-rate basis), and governments worldwide are committing billions. When both Wall Street and the military are this interested simultaneously, it usually means the technology has crossed from "science fiction" to "serious engineering project."

Public quantum company stock performance has been extraordinary if volatile: at their October 2025 all-time highs, D-Wave was up approximately 4,700%, Rigetti approximately 7,900%, and IonQ approximately 830%, measured on a trailing twelve-month basis. All three companies entered public markets via SPAC mergers in 2021–2022 (IonQ October 2021, Rigetti March 2022, D-Wave August 2022), a period of peak SPAC exuberance. All three stocks peaked within days of each other in mid-October 2025 and have since corrected sharply: as of Q1 2026, D-Wave trades at $18.05 (down ~61% from its October high), Rigetti at $15.98 (down ~73%), and IonQ at $30.81 (down ~64%).

IonQ trades at a price-to-sales ratio of approximately 134× on trailing twelve-month revenue of ~$80 million (through Q3 2025), against a market capitalization of ~$10.7 billion. For comparison, even the most optimistically valued AI companies trade at 30-60× sales. These valuations remain largely based on future potential rather than current revenue.

A strong caution for investors: these stock prices reflect what the market hopes quantum computing will become, not what it earns today. It's like the early internet era-some companies became giants, but most didn't survive even though the technology was real. The path will be volatile, and not every quantum company will make it.

Figure 14: Public Quantum Company Stock Performance
Peak returns (at October 2025 highs) reflect future potential, not current revenue. All three entered via SPAC mergers (2021-2022) and have since corrected 61–73% from October 2025 highs. IonQ P/S ~134× as of February 2026.

Where Public-Market Exposure Actually Exists

Public-market quantum exposure is more limited and more indirect than most investors realize. It falls into roughly five categories—pure-play quantum companies, a private leader accessible through its parent, big tech divisions, supply chain infrastructure, and a classical-quantum convergence play—and each comes with its own dilution problem. Understanding the map matters more than any individual name on it.

Pure-play companies. IonQ, Rigetti, and D-Wave are the most prominent publicly traded companies whose primary business is quantum computing. All three entered public markets via SPAC mergers in 2021–2022 and trade at valuations reflecting future potential rather than current revenue, as the stock data above illustrates. They offer the most direct exposure—and the most direct risk.

The Quantinuum route. The company this report most frequently identifies as the technical leader, Quantinuum, is private—valued at roughly $10 billion as of its September 2025 funding round. Its majority owner, Honeywell, is publicly traded, making it the most direct public-market route to Quantinuum's trapped-ion program. But Honeywell is a $150 billion industrial conglomerate; quantum is a fraction of its business.

Big tech divisions. IBM, Alphabet, and Microsoft all run significant quantum programs. IBM's 127-qubit Eagle and 1,121-qubit Condor processors are genuine technical achievements; Google's Willow chip demonstrated below-threshold error correction. But quantum revenue is a rounding error in these companies' financials. You are buying exposure to everything else they do, with quantum along for the ride.

Supply chain infrastructure. A less obvious angle—and one that often represents a lower-risk exposure vector tied to industry growth regardless of which qubit modality wins. The binding infrastructure constraints identified in Chapter III (cryogenic equipment, precision photonics, specialized semiconductor fabrication, Helium-3 supply) create a constellation of companies whose products every serious quantum effort requires. The catch: most of the critical supply chain players are private.

In cryogenics, Bluefors and Oxford Instruments together account for over 70% of the dilution refrigerator market. Bluefors, the market leader, is privately held. Oxford Instruments, publicly traded in London, sold its NanoScience cryogenics unit to Quantum Design International in late 2025 for £60 million. New entrants like Maybell Quantum (which raised a $40 million Series B) and Zero Point Cryogenics are private. The market is highly concentrated and a genuine scaling bottleneck.

The Helium-3 supply chain is scarce and geopolitically fraught. Western supply depends primarily on U.S. Department of Energy nuclear weapons byproducts from the Savannah River Site and Canadian reactor extraction via Laurentis Energy Partners and Air Liquide. Prices range from roughly $2,000 to $15,000 per liter depending on purity and quantity. As a signal of how constrained supply is: Interlune, a startup pursuing lunar Helium-3 extraction, has reportedly signed a purchase agreement with Bluefors alone valued at an estimated $300 million, with additional commitments from Maybell Quantum and the U.S. Department of Energy. Whether lunar extraction proves viable or not, the fact that serious buyers are signing contracts for it tells you something about terrestrial supply.

In control electronics, Quantum Machines (private, valued at roughly $700 million) builds the control layer integrated into NVIDIA's DGX Quantum platform. Keysight Technologies, publicly traded, installed what it called the world's largest commercial quantum control system in 2025. Zurich Instruments, now a subsidiary of Rohde & Schwarz, is private. In semiconductor fabrication, GlobalFoundries fabricates PsiQuantum's photonic chips on 300mm wafers at its facility in Malta, New York. IBM's superconducting processors are fabricated at Albany NanoTech, discussed further in Chapter III.

In precision photonics, NKT Photonics (acquired by Hamamatsu Photonics, publicly traded in Tokyo) partnered with IonQ for next-generation laser systems. TOPTICA Photonics, privately held, supplies clock lasers across trapped-ion and neutral-atom platforms. The pattern repeats: the publicly traded companies in the quantum supply chain—Keysight, GlobalFoundries, Hamamatsu—have quantum as a small fraction of their total revenue. The pure-play supply chain companies are almost all private.

NVIDIA as convergence play. NVIDIA's investments in Quantinuum, PsiQuantum, and QuEra within a single week in September 2025 were striking, but its longer-term strategic move may matter more. In October 2025, NVIDIA announced NVQLink, an open architecture for coupling GPUs to quantum processors, with 17 QPU builder partners, 5 control system partners, and 9 national laboratories. Combined with its CUDA-Q software platform, the strategy mirrors NVIDIA's approach to AI: become the indispensable classical co-processor regardless of which quantum architecture wins. If the hybrid classical-quantum computing model described in Chapter II proves correct, NVIDIA is positioning itself at the bottleneck.

This is a map of where exposure exists, not a set of recommendations. The same caution from the stock analysis above applies throughout: these are early-stage dynamics with early-stage risk, and the history of transformative technologies is littered with companies that were genuinely important but still failed to reward their investors.

Trendline 4: Coherence Time

Coherence time-how long a qubit retains its quantum information before environmental noise destroys it-has improved steadily across all modalities:

Figure 6: Qubit Coherence Times by Modality
Trapped ions lead by orders of magnitude; superconductors are closing the gap.

Trendline 5: Physical Qubit Count

Raw qubit count is the least informative metric by itself-what matters is usable qubits with sufficient quality. Nonetheless, the trajectory, shown in Figure 7, is worth noting.

Headlines love to report raw qubit counts, but raw count alone is like judging a car by its number of wheels. What matters is the combination of quantity, quality, and how well the qubits can communicate with each other.

Holistic Benchmarks: Quantum Volume and CLOPS

Individual metrics-qubit count, gate fidelity, coherence time-each tell only part of the story. Two holistic benchmarks attempt to capture system-level performance.

Quantum Volume (QV) measures the largest random circuit a quantum computer can execute reliably, capturing the interplay of qubit count, connectivity, gate fidelity, and crosstalk in a single number. Quantinuum's H2 system holds the current record at QV 2^25 (approximately 33.6 million, achieved September 2025)-several orders of magnitude above competing platforms. IBM's Eagle-class processors have achieved QV 2^7 (128). QV provides a useful cross-platform comparison but has limitations: it measures a specific random circuit structure, not necessarily performance on practical algorithms.

Circuit Layer Operations Per Second (CLOPS) measures how many quantum circuit layers a system can execute per second, capturing not just gate speed but also the classical control overhead, compilation time, and data transfer latency that determine real-world throughput. IBM's systems lead on CLOPS due to fast superconducting gate times and optimized classical control infrastructure.

Together, QV and CLOPS provide a more complete picture than any single metric. Their omission from most quantum computing discussions-including, until this section, this report-reflects the field's tendency to emphasize whichever individual metric a given platform leads on.

Think of it this way: individual specs are like a car's horsepower, steering precision, and fuel tank size. Holistic benchmarks are like lap times-they measure how the whole car performs on the track, not just one component. No single number tells you everything, which is why serious evaluation requires looking at multiple metrics together.
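For reference, the QV figures quoted above expand as follows. QV is reported as 2^n, where n is the size of the largest square random circuit (n qubits by n layers) that passes the benchmark's statistical test.

```python
# Quantum Volume is quoted as 2**n for the largest passing n-qubit, n-layer circuit.
quantinuum_h2_qv = 2 ** 25    # record cited in the text (September 2025)
ibm_eagle_qv = 2 ** 7

print(f"Quantinuum H2: QV {quantinuum_h2_qv:,}")                  # 33,554,432 (~33.6M)
print(f"IBM Eagle    : QV {ibm_eagle_qv:,}")                      # 128
print(f"gap          : {quantinuum_h2_qv // ibm_eagle_qv:,}x")    # 262,144x (= 2**18)
```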

Growth rates vary by modality and are not following a simple exponential law. Qubit scaling is qualitatively harder than transistor scaling-each added qubit must maintain coherence, fidelity, and connectivity with all others, which creates engineering challenges that grow non-linearly.

Figure 7: Physical Qubit Count by Modality Over Time
Scaling trajectories diverge - neutral atoms lead in raw count, trapped ions in quality. Dashed lines represent roadmap targets.

These trendlines paint an encouraging picture, but quantum computing faces several binding technical constraints—walls that could stall progress regardless of funding or strategy. We examine these in detail in Chapter III.

Notes

  1. Princeton University, transmon qubit coherence time exceeding 1 ms via tantalum-on-silicon fabrication (November 2025).
  2. Quantinuum, 'Introducing Helios: The Most Accurate Quantum Computer in the World,' quantinuum.com (November 2025).
  3. Acharya, R. et al., 'Quantum error correction below the surface code threshold,' Nature 638, 920-926 (2025). [link]
  4. IBM, Quantum Developer Conference announcements: Nighthawk, Loon, and real-time qLDPC decoding (November 2025).
  5. IonQ/Ansys, reported 12% speedup on medical device simulation using 36-qubit Forte Enterprise system (March 2025).
  6. Atom Computing, demonstration of 1,180 neutral-atom qubit sites.
  7. QuEra Computing, algorithmic fault-tolerance techniques for up to 100× error correction overhead reduction (2025).
  8. PsiQuantum, $1 billion funding round at $7 billion valuation (September 2025).
  9. DARPA, Quantum Benchmarking Initiative (QBI) Stage A/B announcements (April-November 2025).
  10. Aghaee, M. et al., 'Interferometric single-shot parity measurement in InAs-Al hybrid devices,' Nature 638, 651-655 (2025). [link]
  11. APS Physics, 'Microsoft's Claim of a Topological Qubit Faces Tough Questions,' APS Global Physics Summit (March 2025).
  12. Legg, H. et al., University of St Andrews, critique of Microsoft's Topological Gap Protocol (2025).
  13. University of New South Wales, analysis of Majorana qubit decoherence times (2025).
  14. Hensinger, W., University of Sussex, quoted on topological qubit timeline (2025).
  15. Koch, J. et al., 'Charge-insensitive qubit design derived from the Cooper pair box,' Physical Review A 76, 042319 (2007). The transmon is a superconducting charge qubit shunted by a large capacitance, used by Google (Sycamore, Willow) and IBM (Eagle, Heron).
  16. The error-correcting capability of a distance-d code is ⌊(d−1)/2⌋. See Nielsen, M. A. & Chuang, I. L., Quantum Computation and Quantum Information, Cambridge University Press (2000), Chapter 10. Also consistent with the Google Willow paper (Nature 638, 920–926, 2025), which demonstrates error suppression scaling with code distance.
  17. Surface codes: Kitaev, A. (1997); Bravyi, S. & Kitaev, A. (1998) — require O(d²) physical qubits per logical qubit. qLDPC codes: Breuckmann, N. & Eberhardt, J., 'Quantum Low-Density Parity-Check Codes,' PRX Quantum 2, 040101 (2021); Panteleev, P. & Kalachev, G. (2022) — achieve better encoding rates but require non-local connectivity. The tradeoff between encoding efficiency and hardware implementability is the central tension in QEC code design.