Reproducible, Verifiable Performance
Operational metrics with full methodology disclosure. Click any metric for detailed assumptions.
Operational Benchmarks
Enterprise metrics: cost governance, audit efficiency, evidence integrity
Autopilot Performance
Optimization engine efficiency and circuit quality improvement
Portability (ACOS)
Cross-backend execution consistency and vendor independence
Backend Comparison
QCOS-certified backends with standardized benchmarks. Costs are indicative ranges.
| Backend | Provider | Type | QV | CLOPS | 1Q Fidelity | 2Q Fidelity | Cost Range | Notes |
|---|---|---|---|---|---|---|---|---|
| DevLUMI GPU Sim | EuroHPC | Simulation | 2^30 (simulated) | 50,000+ | 100% (ideal) | 100% (ideal) | €0.08–€0.15/min | Ideal for development; scales to 40+ qubits |
| IBM Heron | IBM Quantum | Hardware | 128 | 2,847 | 99.92% | 99.41% | €80–€150/min | Pricing varies by plan and queue priority |
| IonQ Aria | AWS Braket | Hardware | 25 | 850 | 99.97% | 99.5% | €0.30–€0.40/task | Per-task pricing; high fidelity for small circuits |
| Rigetti Ankaa-3 | AWS Braket | Hardware | 16 | 1,200 | 99.7% | 98.5% | €0.30–€0.40/task | Fast iteration; best for variational workloads |
* Costs vary by provider plan, queue priority, and contract terms. GPU simulation recommended for development and iteration. Contact sales for enterprise pricing.
Benchmark Methodology
Reproducibility
All benchmarks include cryptographically signed evidence bundles enabling third-party verification.
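To illustrate the idea behind verifiable evidence bundles, here is a minimal sketch of digest-based integrity checking. All names and values are hypothetical; a production bundle would use an asymmetric signature scheme (e.g. Ed25519) rather than a bare hash, but the recompute-and-compare step a third party performs looks the same.

```python
import hashlib
import json

def bundle_digest(manifest: dict) -> str:
    """SHA-256 digest over a canonical (sorted-key) JSON serialization."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_bundle(manifest: dict, published_digest: str) -> bool:
    """Recompute the digest locally and compare it to the published one."""
    return bundle_digest(manifest) == published_digest

# Hypothetical evidence bundle for one benchmark run.
manifest = {
    "backend": "IBM Heron",
    "metric": "CLOPS",
    "value": 2847,
    "timestamp": "2025-01-15T12:00:00Z",
}
expected = bundle_digest(manifest)  # published alongside the bundle

print(verify_bundle(manifest, expected))                      # True: untampered
print(verify_bundle({**manifest, "value": 9999}, expected))   # False: tampered
```

Because the digest is computed over a canonical serialization, any change to any field, however small, fails verification.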
Standardization
Uses industry-standard metrics: Quantum Volume, CLOPS, Randomized Benchmarking, and Cross-Entropy Benchmarking.
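As a concrete example of one of these metrics: a Quantum Volume test at width n passes when the fraction of measured shots landing in the "heavy" output set (bitstrings whose ideal probability exceeds the median) is above 2/3 with sufficient confidence. The sketch below uses a toy 2-qubit distribution with made-up numbers purely to show the criterion.

```python
from collections import Counter

def heavy_outputs(ideal_probs: dict) -> set:
    """Bitstrings whose ideal probability exceeds the median ('heavy' set)."""
    probs = sorted(ideal_probs.values())
    n = len(probs)
    median = (probs[n // 2] + probs[(n - 1) // 2]) / 2
    return {s for s, p in ideal_probs.items() if p > median}

def heavy_output_fraction(counts: Counter, heavy: set) -> float:
    """Fraction of measured shots that fall in the heavy set."""
    total = sum(counts.values())
    return sum(c for s, c in counts.items() if s in heavy) / total

# Toy example: ideal distribution from simulation, counts from hardware.
ideal = {"00": 0.45, "01": 0.05, "10": 0.10, "11": 0.40}
heavy = heavy_outputs(ideal)  # {'00', '11'}
measured = Counter({"00": 420, "01": 60, "10": 90, "11": 430})

hof = heavy_output_fraction(measured, heavy)  # 0.85
print(hof > 2 / 3)  # True: passes the heavy-output threshold
```

A full Quantum Volume run repeats this over many random square circuits and requires the threshold to hold at a stated confidence level; this sketch shows only the per-circuit check.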
Transparency
Full methodology disclosure for every metric. Click any metric to view assumptions and confidence intervals.
Versioning
Benchmarks are versioned and dated. Historical data retained for trend analysis.
Technical Details
Run Your Own Benchmarks
All benchmark scripts are open source. Run them on your own hardware and compare results with our certified baselines.
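The comparison step can be sketched as follows. The metric names, values, and tolerance here are illustrative, not the actual certified baselines; real runs would load both dictionaries from the published benchmark artifacts.

```python
# Hypothetical baseline and local-run results (illustrative values only).
baseline = {"clops": 2847, "fidelity_2q": 0.9941}
local = {"clops": 2790, "fidelity_2q": 0.9938}

def within_tolerance(local: dict, baseline: dict, rel_tol: float = 0.05) -> dict:
    """Flag, per metric, whether the local result is within rel_tol of baseline."""
    report = {}
    for key, ref in baseline.items():
        got = local.get(key)
        report[key] = got is not None and abs(got - ref) / abs(ref) <= rel_tol
    return report

print(within_tolerance(local, baseline))
# {'clops': True, 'fidelity_2q': True}
```

A symmetric relative tolerance is a deliberate simplification; in practice you might use one-sided bounds, since a local CLOPS result far above baseline is not a failure.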