Accelerating quantum computer developments

Abstract

Product development

Given the recent breakthroughs in quantum technology development in R&D labs all over the world, the perspective of high-tech companies has changed: product development has been initiated alongside the existing research and technology development activities.

Quantum computer product roadmap

Considering the quantum computer as a product requires standardization and integration of all its building blocks and a mature supply chain that can provide high-quality components and can ensure security of supply. The product development approach puts the focus on functionality and performance requirements of the product and uses state-of-the-art technology to build the product. Based on the expected requirements of future products, it is possible to outline a product development roadmap.

It is expected that a fully functional quantum computer will be available within a decade from now, and will be used by the High Performance Computing (HPC) market, where it will replace (part of) the supercomputers that are currently used for complex calculations and data management. In the short term, a partly functional quantum computer will be available and of interest to the R&D market, which has a need for such a product to expedite their quantum technology developments.

ImpaQT project

In this paper, we present the product development approach and roadmap for quantum computers, based on superconducting circuits as an example. A group of companies in the Dutch quantum ecosystem (Quantum Delta) have joined forces and have started the ImpaQT project. The companies of the ImpaQT consortium form a local supply chain for key components of quantum computers.

This paper shows that the quantum community has reached the next level of maturity and that the quantum computer as a commercial product looks set to become a reality.

1 Introduction

With the recent breakthrough of quantum supremacy [1, 2], which is a result of the steady improvement in performance [3–5] and a first step towards showing the potential use of quantum computers [6], the quantum computer as a commercial product seems likely to become a reality in the coming years. This article discusses a number of aspects related to this development.

The outline of this paper is depicted in Fig. 1, which gives an exemplary overview of what is required to efficiently build a useful and commercially viable quantum computing system. In Sect. 2, we advocate starting product development activities, following a systems engineering approach and deriving the requirements of the quantum computer as a line of products that is of commercial interest. In Sect. 3, we put forward a roadmap of quantum computer products which we foresee to be essential for the upcoming decade, while Sect. 4 introduces ImpaQT, a project executed by supply chain partners, to highlight how one can get started with the implementation of this product roadmap. Throughout this article, and as already shown in Fig. 1, we use the superconducting Transmon-based full-stack platform as a typical example to illustrate aspects of commercial product development and a way of working. We expect that these insights are, to a large extent, transferable to other quantum computing platforms as well.

Figure 1

A full-stack quantum computer consists of many different components that need to work seamlessly together. A simplified version of such a stack, for a quantum computing system based on superconducting devices, is illustrated in this figure on the left. In Sect. 2 we review the status of the technology with a focus on those superconducting-based systems and use this, together with the customer needs, to derive product requirements and specifications. On the right, we show the different products in the product roadmap and what parts of the stack these products relate to. In Sect. 3, we describe the product roadmap with the different products that need to be developed to meet the requirements and specifications

2 Quantum computer product development

Given the recent breakthroughs in quantum technology development in R&D labs all over the world, the perspective of high-tech companies has changed from research and technology development to product development. These product development activities have started alongside the existing research activities. While research focuses purely on developing the technology needed for building quantum computers, product development focuses on performance and functionality. When developing a product, the performance and functionality required by users determine the design decisions that have to be made to build the product. Obviously, the price that users are willing to pay for the product has an influence as well. Considering the quantum computer as a product requires standardization of interfaces and integration of all its building blocks (as outlined in Fig. 1), as well as integration of the quantum computer itself in a broader ICT system architecture. A quantum computer consists of a stack of components that have to work together harmoniously in order to exploit quantum-mechanical phenomena such as superposition and entanglement. These quantum effects are fragile and hard to control, and it is a complex engineering challenge to realize the desired performance and functionality. Due to this complexity, and in order to develop this product in an efficient way, it is recommended to follow a systems engineering approach.Footnote 1 The first step within this approach is to determine the product requirements in terms of the performance and functionality of the quantum computer. The next step is to derive the specifications of the product that are needed to meet these requirements.

2.1 Product requirements and specifications

From a commercial perspective, a quantum computer must either outperform existing computers or be significantly cheaper. Because classical computers have advanced over several decades, quantum computers will not outperform them on problems that can be efficiently calculated using a classical computer (or will do so only at a higher cost). Therefore, a quantum computer needs to be tailored to commercially interesting problems that are intractable on classical computers. The value of quantum computers is not in solving the same problems faster; it is in solving certain computational problems, such as prime factorization [7], substantially faster than is classically possible, to the point that it enables the solution of problems that were previously unsolvable due to the impractical amount of computational resources required.

Requirement 1

Provide solutions for commercially interesting problems

Since the 1980s [8], research groups of mathematicians and information scientists have been investigating which problems can be solved more efficiently or more accurately by a quantum computer than by classical high-performance computing. At this point in time, only a limited set of problems can be addressed by quantum algorithms.Footnote 2

The requirement that the quantum computer needs to solve commercially interesting problems leads to performance specifications. Until recently [9–11], companies put a lot of emphasis on the number of qubits [12–15] when publicly communicating the performance of their quantum computers. If qubits were approximately error free, this would be a sensible simplification. However, while classical bits can be approximated as ideal, this is an oversimplification for qubits. When considering physical error rates on the order of \(\epsilon< 10^{-3}\), one requires \(10^{3} - 10^{4}\) physical qubits per logical qubit to achieve a (close to ideal) logical error rate of \(\epsilon_{L} \sim 10^{-15}\) [16, 17]. Current estimates suggest that one would require around 20 million physical qubits to factor a number that is too large to tackle using classical algorithms [18]. A 20-million-qubit quantum processor is, at this point in time, inconceivably large.
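
To make the scale of this overhead concrete, the short sketch below reproduces the quoted range using a generic surface-code scaling formula; the threshold, prefactor and qubit-count formula are illustrative assumptions and are not taken from [16–18].

```python
# Illustrative physical-to-logical overhead estimate. The scaling formula
# eps_L ~ 0.1 * (eps/eps_th)**((d+1)/2) with threshold eps_th = 1% and the
# ~2*d**2 qubits per surface-code patch are assumptions for illustration.

def surface_code_overhead(eps_phys, eps_target, eps_th=1e-2, prefactor=0.1):
    """Return the smallest odd code distance d reaching eps_target, and the
    approximate number of physical qubits (~2*d**2) in a surface-code patch."""
    d = 3
    while prefactor * (eps_phys / eps_th) ** ((d + 1) / 2) > eps_target:
        d += 2
    return d, 2 * d ** 2

d, n_phys = surface_code_overhead(eps_phys=1e-3, eps_target=1e-15)
print(f"code distance ~{d}, ~{n_phys} physical qubits per logical qubit")
# With these assumptions: d = 27 and ~1500 physical qubits per logical qubit,
# at the lower end of the 10^3 - 10^4 range quoted above.
```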

The coming decade will see noisy pre-error-corrected devices, known as Noisy Intermediate-Scale Quantum (NISQ) [19] technology, being used to perform useful computations while developments on technology and the value chain continue to push for universal error-corrected machines. It is expected that NISQ computers can help us solve new types of problems efficiently, but will not be useful for all types of problems. Similar to how access to larger (classical) computing power enabled the current explosion of applications of artificial intelligence [20], finding where quantum computers will provide value in practice will require the availability of larger quantum computing power. In other words: the availability of quantum computers will not only enable the implementation of already envisioned practical applications; more importantly, it will allow for the discovery of novel ones.

With current quantity, quality and control of qubits, it is already possible to build a quantum computer that outperforms any classical computer for a specific type of problem, in a specific case, as shown by the Google team [1]. To take full advantage of the capabilities of NISQ computers and work around their limitations, researchers need access to real NISQ computersFootnote 3 to test and develop their applications. At the moment, the success of any quantum algorithm is heavily dependent on the interaction of that algorithm with the specific quantum processor that is used. Quantum processors are in no way standardised yet, and therefore each processor has its specific pros and cons in relation to the algorithm that needs to be employed. The interplay between quantum algorithms and quantum processors therefore needs to be optimised to have any chance of industry-relevant quantum advantage with NISQ devices. This leads to the following requirement for a quantum computer aimed at the research and development community:

Requirement 2

Enable the development and execution of NISQ applications

To specify the power of a quantum computer needed to develop and execute NISQ applications, one has to take into account not only the number of qubits n, but also the number of operations that can be performed, typically expressed using the circuit depth d. A metric that combines these two is the quantum volume [21–23],Footnote 4 \(V_{\mathrm{Q}}=2^{n_{\mathrm{eff}}}\), where \(n_{\mathrm{eff}}= \operatorname{argmax}_{m} \min(m, d(m)) = \log_{2} (V_{\mathrm{Q}})\), and the circuit width \(m=n\) in the case of all-to-all connectivity. This definition loosely coincides with the complexity of classically simulating model circuits and has the appealing property that \(V_{\mathrm{Q}}\) doubles for every effective qubit added. In this definition, the circuit depth d corresponds to the number of circuit layers that can be executed before (on average) a single error occurs. A circuit layer corresponds to a combination of arbitrary two-qubit operations between disjoint pairs of qubits. It is possible to estimate d as \(d\approx 1/\epsilon_{1\mathrm{step}} = 1/(n\epsilon_{\mathrm{eff}})\), where the effective error rate \(\epsilon_{\mathrm{eff}}\) is the average error rate per two-qubit operation. In addition to effects like crosstalk, the introduction of limitations in connectivity, parallelism and gate set requires an overhead in the physical implementation of a circuit layer, so that in general the error rate \(\epsilon_{\mathrm{eff}}\geq\epsilon\), where ϵ is the average error rate for individual physical operations.
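
The following sketch evaluates this simplified model numerically. The error model (an achievable depth of \(1/(m\epsilon_{\mathrm{eff}})\) layers at circuit width m) is the one described above; a measured quantum volume would instead be obtained from model-circuit success rates on hardware.

```python
# Numerical version of the simplified quantum-volume model described above:
# V_Q = 2**n_eff with n_eff = argmax_m min(m, d(m)) and d(m) ~ 1/(m * eps_eff).

def effective_qubits(n_qubits, eps_eff):
    """Maximise min(circuit width m, achievable depth d(m)) over m <= n_qubits."""
    best = 0
    for m in range(1, n_qubits + 1):
        depth = int(1.0 / (m * eps_eff))   # layers before, on average, one error
        best = max(best, min(m, depth))
    return best

for eps_eff in (1e-2, 1e-3, 1e-4):
    n_eff = effective_qubits(n_qubits=1000, eps_eff=eps_eff)
    print(f"eps_eff = {eps_eff:.0e}:  n_eff = {n_eff},  log2(V_Q) = {n_eff}")
# At eps_eff ~ 1e-3 the effective size saturates around n_eff ~ 31, which is
# the number discussed in the next paragraph.
```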

Although quantum volume has found widespread adoption [24–26], no single performance metric can capture the complexity required to describe the potential utility of a quantum computer. Taken at face value, quantum volume seems to imply that, at current error rates of \(\epsilon\sim 10^{-3}\), there is no power in having a device larger than \(n\sim 31\) qubits. As this number of qubits can efficiently be simulated using classical hardware, this would be a major setback for NISQ computing. However, this seems to contradict the results of Arute et al. [1], in which an \(n=53\) qubit device with \(\epsilon\sim 10^{-3}\) was used to perform a computation that could not be simulated in a reasonable amount of time.Footnote 5

Understanding the characteristics and limitations of different performance metrics can give insight into the kinds of applications suited to NISQ computers. This in turn affects the functional requirements of the different subsystems. To define \(V_{\mathrm{Q}}\), two important simplifications have been made to arrive at a single number.Footnote 6 The quantum volume was designed as a binary metric: can a device run an algorithm? For many algorithms, a single error indicates failure; however, for other applications, such as sampling from a distribution (as is done in [1]) or estimating an eigenvalue [29], one can tolerate a limited number of errors simply by averaging. From this follows a functionality requirement for NISQ applications: they will have to be able to tolerate a limited number of errors, either because of the nature of the application or by using error mitigation techniques. Another limitation of the quantum volume metric is that it quantifies the ability to run circuits of equal width and depth. However, the computational power of short-depth circuits is not yet fully understood, and it can be argued that even short-depth circuits lie beyond the reach of classical computing [22, 30, 31]. As such, it is likely that potential NISQ applications will be short-depth to limit the amount of errors that accumulate.
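
As a toy illustration of tolerating errors by averaging and extrapolation, the sketch below applies a generic zero-noise-extrapolation procedure to a synthetic expectation value; the decay model and all numbers are invented for illustration and are not taken from [1] or [29].

```python
# Toy zero-noise extrapolation: measure a synthetic expectation value at several
# noise-amplification factors and extrapolate back to zero noise. The
# exponential-decay noise model and all parameter values are invented.
import numpy as np

def noisy_expectation(ideal, eps, stretch):
    """Expectation value decaying exponentially with the (stretched) error rate."""
    return ideal * np.exp(-stretch * eps)

ideal, eps = 0.80, 0.05
stretches = np.array([1.0, 1.5, 2.0])              # noise-amplification factors
measured = noisy_expectation(ideal, eps, stretches)

fit = np.polyfit(stretches, measured, deg=2)       # Richardson-style quadratic fit
mitigated = np.polyval(fit, 0.0)

print(f"raw: {measured[0]:.3f}  mitigated: {mitigated:.3f}  ideal: {ideal:.3f}")
# The extrapolated estimate (~0.800) is closer to the ideal value than the raw
# measurement (~0.761), at the price of additional circuit repetitions.
```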

Although there are quite a few candidates for NISQ algorithms satisfying these constraints, many in the spirit of Feynman’s original idea [8] of using quantum systems to simulate quantum systems, there is no known useful application for which a NISQ algorithm is guaranteed to significantly outperform the classical alternative. At current error rates for cQED systems of \(\epsilon\sim 10^{-3}\), a 1000 qubit (1kQb) system is right at the point where one can still execute a single layer without a single error occurring. If one considers using a fraction of the qubits as ancillas for error mitigation and uses an algorithm that is somewhat robust to the remaining errors, the kQb processor is the largest-scale NISQ device that is of interest for running algorithms at current error rates, an estimate consistent with IBM’s roadmap [24].

Having set a target for the size of the system (1kQb) and the performance (\(\epsilon< 10^{-3}\)), one has to design and fabricate devices capable of reaching this target. Current qubit coherence of \(\sim 50\ \mu\mathrm{s}\) should be sufficient to reach \(\epsilon< 10^{-3}\) for every operation. It will be a challenge in itself to scale up the design to \(\sim 1\ \mathrm{kQb}\) while maintaining these levels of performance. A quantum computer is therefore not only needed to develop NISQ algorithms; it is also required to develop quantum processors. For this purpose, a quantum computer will be used as a test and development platform for this key component of the quantum computing stack. The resulting functionality requirement is therefore:

Requirement 3

Enable the development of quantum processors

Although many cQED device designs [1, 32–34] are to some extent copy/paste-able, that does not mean they are scalable in practice. There will be fundamental physics problems to overcome, some expected [35, 36] and others unknown. Initial experiments [6, 37] indicate that transmon qubits have limited crosstalk, motivating a simplistic model in which the device yield, defined as all qubits working, is simply the product of the individual qubit yields. Here, we define an individual qubit to be working if the control lines are working, the coherence is larger than a specified target (e.g. \(50\ \mu\mathrm{s}\)) and the relevant parameters (charging and Josephson energies, coupling to coupling bus/tunable couplers, readout resonator parameters etc.) are within a specified tolerance. Even when taking into account recent innovations that improve parameter targeting, such as laser annealing [38, 39], the odds of producing a working kQb device shrink rapidly with every qubit added to the device, even at an exceptional yield of 99% per qubit.
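
Under this simplistic model the device yield can be estimated directly, as in the sketch below; the 99% per-qubit yield is the figure assumed in the text.

```python
# Device yield under the simplistic model above: the device works only if every
# qubit works, so the yield is (per-qubit yield)**n.
per_qubit_yield = 0.99            # the "exceptional" 99% assumed in the text
for n in (50, 100, 500, 1000):
    print(f"{n:5d} qubits: device yield ~ {per_qubit_yield ** n:.1e}")
# Even at 99% per qubit, a monolithic 1000-qubit device works only ~4 times in
# 100,000 attempts.
```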

To tackle this problem, one needs to either become robust against missing qubits at the algorithm level, which falls in the domain of Requirement 1, or find a way to increase device yield for a given qubit yield. A promising concept would be to link together smaller devices within the same cryogenic environment. Although the odds of producing a single monolithic kQb device are vanishingly small, one can increase the probability by combining multiple smaller patches, which have a reasonable yield, and replacing only the patches that do not work. Existing flip-chip architectures [40], in which the readout resonators, Purcell filters and coupling buses are on a different chip than the qubits, can be seen as a prototype of this technique, as they effectively link together different devices. It is only a small step to use the coupling plane to connect qubits on adjacent chips [41]. Note that what is envisioned here is subtly different from the chip-to-chip entanglement discussed in [42], which would be more powerful. This proposal does not require long-distance information transfer (i.e., quantum information transfer between different dilution refrigerators), as it only attempts to create modularity.
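
The sketch below contrasts the expected fabrication effort of a monolithic device with that of a patched device under the same per-qubit yield; the patch sizes are arbitrary choices for illustration.

```python
# Expected fabrication effort for a patched kQb device versus a monolithic one,
# under the same per-qubit yield. Patch sizes are arbitrary illustrative choices.
per_qubit_yield = 0.99
n_qubits = 1000

def expected_patches_fabricated(patch_size):
    patch_yield = per_qubit_yield ** patch_size      # all qubits on the patch work
    patches_needed = n_qubits // patch_size
    return patches_needed / patch_yield              # mean number to fabricate

for patch_size in (25, 50, 100, 1000):
    attempts = expected_patches_fabricated(patch_size)
    print(f"patches of {patch_size:4d} qubits: ~{attempts:,.0f} fabricated on average")
# Forty 25-qubit patches require only about fifty fabricated patches in total,
# whereas a monolithic 1000-qubit device needs ~23,000 attempts for one working chip.
```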

Based on the above-mentioned R&D strategies to create better quantum devices, the specifications of the quantum computer as a test and development platform can be derived. Related to the question of yield is the question of size: does a kQb processor fit in a fridge? Although transmon qubits (\({\sim} 400\ \mu\mathrm{m}^{2}\)) are often seen as large in comparison to, e.g., semiconductor spin qubits or dopant-based qubits, the processor size is not limited by the qubit size, but rather by the size of the I/O [43]. The footprint of a single via is currently \(1\ \mathrm{mm}^{2}\),Footnote 7 and a single transmon (including tunable couplers) requires on average 4.2 control lines,Footnote 8 putting the total footprint at \(\sim 5\ \mathrm{mm}^{2}\) per qubit. Assuming that this footprint can be translated into a square with a side of 2.5 mm, a kQb processor would be about \(64\ \mathrm{cm}^{2}\). As such, 1kQb would fit on a 100 mm wafer (with a surface area of \(78\ \mathrm{cm}^{2}\)).
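
The sketch below spells out this footprint estimate using the numbers quoted above (4.2 lines per qubit, 1 mm² per via, a 2.5 mm qubit pitch), so the same caveats apply.

```python
# The footprint estimate of this paragraph, spelled out. All numbers are the
# ones quoted in the text (4.2 lines/qubit, 1 mm^2 per via, 2.5 mm qubit pitch).
import math

lines_per_qubit = 4.2
via_footprint_mm2 = 1.0
qubit_pitch_mm = 2.5                     # square accommodating ~5 mm^2 of I/O
n_qubits = 1000

io_per_qubit_mm2 = lines_per_qubit * via_footprint_mm2
grid_side = math.ceil(math.sqrt(n_qubits))           # a 32 x 32 grid holds 1000 qubits
processor_side_cm = grid_side * qubit_pitch_mm / 10
wafer_area_cm2 = math.pi * (100 / 2) ** 2 / 100      # 100 mm wafer, in cm^2

print(f"I/O footprint per qubit: ~{io_per_qubit_mm2:.1f} mm^2")
print(f"kQb processor: ~{processor_side_cm:.0f} x {processor_side_cm:.0f} cm "
      f"= {processor_side_cm ** 2:.0f} cm^2, wafer area: {wafer_area_cm2:.0f} cm^2")
```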

Although this back-of-the-envelope calculation indicates that a kQb processor would be about the size of a single 100 mm wafer, it also highlights the importance of the interconnect size. While it is possible to reduce the on-chip footprint to about \(1\ \mathrm{mm}^{2}\) for each interconnect, regular SMP connectors have a diameter of 4 mm, resulting in a footprint of about \(0.65\ \mathrm{cm}^{2}\) per connector. At about 4.2 lines per qubit, this would mean that a kQb processor would require a solid \(20\times20\ \mathrm{cm}\) block of SMP connectors. As cable dimensions are typically significantly smaller than the connector sizes, a natural solution is to include the cabling in the sample mount. In this way, the signal integrity can be preserved while the fan-out can be taken care of elsewhere.

Not only the footprint of the lines is relevant, but also the heat load. The heat load consists of two contributions: a passive contribution, coming from the fact that there is a conducting line connecting the sample to room temperature, and an active contribution, consisting of power dissipation in the line. Attenuating the power of signals intended for the qubit is required to manage the noise temperature of the signals. For a system of up to 100 qubits, the heat load can be managed by using standard cable technologies and attenuators [44]. To reduce the active contribution, one can consider using directional couplers that transmit only part of the signal, while sending the return signal to a higher-temperature stage where more cooling power is available. The passive contribution to the heat load can be reduced by using specialized cable technologies. A promising approach is to use microwave striplines etched on a flexible substrate to produce cables with lower thermal conductivity and a smaller form factor [45]. Because of the reduced form factor, these cabling technologies are a natural candidate for integration into the sample mount mentioned in the preceding paragraph.
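
As an illustration of the active contribution, the sketch below propagates a drive tone through an assumed attenuation distribution (20 dB at each of three stages, a common but not prescribed choice) and reports the power dissipated per stage for a single line.

```python
# Active heat load of a single drive line: each attenuator dissipates part of
# the incoming signal power at its stage. The 1 uW input power and the
# 20/20/20 dB attenuation distribution are illustrative assumptions only.
input_power_W = 1e-6
attenuation_dB = {"4 K": 20, "100 mK": 20, "20 mK": 20}

power = input_power_W
for stage, att in attenuation_dB.items():
    transmitted = power * 10 ** (-att / 10)
    print(f"{stage:>7s} stage: {(power - transmitted) * 1e9:10.2f} nW dissipated")
    power = transmitted
# Nearly all of the power (~990 nW) is dumped at 4 K, where cooling power is
# plentiful; ~10 nW reaches the 100 mK stage and ~0.1 nW the mixing chamber.
# Multiplied by thousands of lines, even the cold-stage contributions matter.
```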

At this point, it is unclear if better interconnects and cabling technologies will be sufficient to realize a kQb device. There are several techniques that can be used to reduce the number of lines by a constant factor. The concept of dedicated drive lines per qubit can be dropped in favor of a frequency multiplexing scheme in which several qubits (∼5) operated at different frequencies share a drive line. These changes, however, do not address how the number of lines scales (linearly) but only change the prefactor. At some point one has to consider Rent’s rule [46]. To change the scaling of control, one has to find multiplexing schemes for all types of control (microwave, flux, measurement) similar to the VSM scheme [32, 47] for microwave pulses. The constraints imposed by such a scheme will have significant consequences for how it can execute algorithms, and it furthermore requires exquisite control over device fabrication. Therefore it is not expected that such a scheme will be viable in the near future.
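
The sketch below makes the prefactor argument explicit, using the per-qubit line budget of Footnote 8; the five-fold drive multiplexing factor is the example value mentioned above.

```python
# Control-line count with and without drive-line multiplexing, using the per-qubit
# line budget of Footnote 8. Sharing a drive line among ~5 qubits changes only the
# prefactor of the (linear) scaling discussed above.
def control_lines(n_qubits, qubits_per_drive_line=1):
    per_qubit = (1 / qubits_per_drive_line    # microwave drive
                 + 1                          # flux bias
                 + 2                          # coupler biases
                 + 2 / 10)                    # feedline in/out shared by 10 qubits
    return per_qubit * n_qubits

for n in (100, 1000):
    print(f"{n:5d} qubits: {control_lines(n):6.0f} lines (dedicated drive), "
          f"{control_lines(n, qubits_per_drive_line=5):6.0f} lines (5-fold multiplexed)")
# Both columns grow linearly in n; the multiplexing buys roughly a 20% reduction.
```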

Until now, we have glossed over an important aspect of the fabrication problem: coherence. Qubit performance is inherently limited by coherence, and it will be a large challenge in itself to better understand what is limiting coherence and to reliably fabricate high-coherence devices. Achieving high coherence will be especially challenging because significant changes to the design are required, such as the integration of 3D interconnects, tunable couplers and connections between different subpatches. All of these changes have the potential to impact coherence.

The last key functionality requirement for the R&D community relates to the ability to maximize the performance of the quantum device. Due to variations in the fabrication process, all qubits need to be individually characterized and calibrated before the system can be operated as a quantum computer. This task is challenging because system parameters can fluctuate over time, depend on each other and suffer from crosstalk. To address this challenge, novel approaches to calibration [37, 48–51] are required, as well as specialized characterization protocols [52–54] and hybrid control models that support both the pulse- and gate-level abstractions.

Requirement 4

Tune up the performance of quantum devices

To achieve a high yield and coherence, one needs to understand how changes in design and fabrication affect the system. An engineering cycle which can accelerate the development of high-performance quantum devices is depicted in Fig. 2. By connecting automated characterization to a database infrastructure, it is possible for the R&D community to close the loop between design, fabrication, and characterization.

Figure 2

The quantum device engineering cycle consists of several steps. A target application influences the chip design, in which an equivalent circuit and its target parameters are determined. This design serves as the input for a second design step, in which the circuit design and geometry of the device are determined, taking into account constraints of the fabrication process. After fabricating the device, the system is characterized, resulting in knowledge about the system. This knowledge is then used to iterate on the design steps. Figure from [55]
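
A minimal sketch of the automated-characterization part of the engineering cycle of Fig. 2 is given below: calibration routines are ordered by their dependencies and every result is written to a database. The node names, the dependency structure and the placeholder routine are invented for illustration and do not correspond to a specific calibration suite from [37, 48–51].

```python
# Minimal sketch of an automated characterization loop: calibrations run in
# dependency order and every result is written to a (here: in-memory) database
# so that design and fabrication can iterate on it. All names are invented.
from graphlib import TopologicalSorter   # Python >= 3.9

dependencies = {
    "resonator_spectroscopy": [],
    "qubit_spectroscopy": ["resonator_spectroscopy"],
    "rabi": ["qubit_spectroscopy"],
    "T1": ["rabi"],
    "ramsey_T2": ["rabi"],
    "single_qubit_gate_fidelity": ["T1", "ramsey_T2"],
}

def run_calibration(node):
    # Placeholder for the actual measurement and analysis routine of this node.
    return {"status": "pass", "value": None}

database = {}
for node in TopologicalSorter(dependencies).static_order():
    database[node] = run_calibration(node)
    print(f"calibrated {node}: {database[node]['status']}")
```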

The analysis of the product requirements of a quantum computer, as outlined above, emphasizes the need to start product development activities as soon as possible in order to provide the quantum community with the tools it needs to accelerate the R&D activities that will result in a commercially viable quantum computer.

2.2 Supply chain management

Considering the development of a quantum computer as product development requires a mature supply chain that can provide high-quality components and can ensure security of supply. Supply chain management is key for building a product [56]. The current emerging supply chain offers enabling technologies and supporting component solutions with sufficiently high product maturity, ready for scaling far beyond quantum supremacy-level systems. However, some innovation bottlenecks, such as the manufacturing of high-quality quantum devices and overall system integration, still need to be tackled.

State-of-the-art small-scale quantum devices are still being developed overwhelmingly in more-or-less academic environments and shared facilities, with few exceptions. As of now, it is unlikely that large chip manufacturers will establish quantum device development lines at scale any time soon. This situation can in part be attributed to the successful insertion of extreme ultraviolet lithography into high volume CMOS manufacturing. Moore’s law for these players is considered to be alive and well for the decade to come [57]. A proposed solution to this quantum chip development gap, especially in the near-term, could be the implementation of novel technology pilot lines through public-private partnership incentives facilitated by public RTOs and national labs. Those pilot lines should be used to investigate the appropriate approach to quantum device manufacturing, by working out in detail the differences and commonalities with respect to standard CMOS process and technology development in a process that is focused on short development cycle time, rather than high volume. The efforts should then be supplemented with the necessary public-private partnerships for strategic developments to protect the IP.

The integration of all components and subsystems into full-stack systems is considered another bottleneck which needs to be addressed. Not only system complexity and interface definition are a challenge here, but also the considerable price tag. Small-scale demonstrators deployed in the field for education and training purposes already require entry-level startup costs in excess of a few million Euros. It should come as no surprise then that there are currently only very limited full-stack system integration activities to be found in the commercial sector.

Opening up these bottlenecks requires considerable financial strength. Against the backdrop of ensuring future technological sovereignty, a few commercial players in the US and China have been able to allocate the required resources in such a way that they are currently leading the developments. After decades of considerable federal commitment, US tech giants were among the first to adopt this emerging technology, regardless of its uncertain near-term commercial impact. A future quantum computing technology would offer them a natural extension to their current business portfolio. Likewise, China’s nationally funded and highly coordinated programs in this field are starting to bear fruit [1, 2, 58]. These global examples utilize a monolithic approach to the integration of their systems, with key components being developed in-house. While this approach gives full control over the quality and availability of the system and its key components, it limits in turn the ability to pivot to alternative technologies and requires considerable resources, which can only be afforded by large organizations (public or private) or extremely well-funded start-ups.

The financial entry barrier of the monolithic approach, in combination with the current lack of a clear business case and the technical complexity of the future quantum computing system, makes this a challenging field, even more so if one needs to remain competitive in performance and timescale against the international developments in the field. Therefore, instead of trying to bridge the quantum advantage gap alone, in a monolithic manner, part of the answer could be to spread the challenge onto more shoulders. Independent players could ensure focus on individual strengths and mitigate risks. For this approach to work, players in this field need to be open to collaborate and co-develop. Examples of such partnership approaches are manifold in high-tech environments, such as airplane and car manufacturing or the semiconductor industry [59–61]. With an additional long-standing public commitment, such alliances can accelerate innovation, increase the technology readiness, strengthen the value chain and foster standardization and a wider adoption of this technology. For instance, a more professional approach to quantum device manufacturing could be facilitated by RTOs. These organisations are well equipped to coordinate an alliance for developing pilot lines where Small & Medium Enterprises (SMEs), universities and larger industrial companies can benchmark designs and develop new architectures. Complementary to this, open consortia and public-private partnership incentives could be formed across national borders, instead of pursuing technology development in large publicly traded corporations or monolithic start-ups. This co-development approach requires sufficient alternative suppliers to ensure the quality and availability of key components. The required amount of resources is similar to the monolithic approach, but distributed among more players in the value chain.

The analyses discussed in Sect. 2.1 and Sect. 2.2 show that the quantum computer is expected to be able to solve relevant problems within the next decade, probably sooner for specific problems and specific needs of R&D labs. Furthermore, in recent years a supply chain has emerged that will be able to provide key quantum computer components in a reliable and cost-effective manner.

3 Product roadmap

Based on product Requirements 1, 2, 3, and 4 and the derived product specifications, it is possible to outline a product roadmap. A product roadmap describes how a product is likely to evolve in time based on the expected development of the underlying technologies, as well as customer needs. It is expected that technology will improve over time, although it will be hard to predict when each milestone will be reached. Quantum technology development is still in its embryonic phase and sudden step changes in technology are likely to occur, which makes predictions hard. However, the use of the products based on quantum technology is better understood. It is expected that a fully functional quantum computer will be used by the High Performance Computing (HPC) market to do complex calculations and data management. Before that market can be serviced, partly functional quantum computers will already be of interest to players in the R&D market, who have a need for such a product to speed up their quantum technology developments. Combining the expected technology developments and the expected use of the technology leads to a product roadmap. We consider this roadmap largely generic and independent of the underlying quantum technology, although at times we might refer to specifics of a quantum device-based system for clarification:

3.1 Quantum computer demonstration platform

The first archetypal system on the product roadmap is the quantum computer demonstration platform. Such a demonstration platform is already proven technology for some of the currently available qubit technologies, even to the level of cloud accessibilityFootnote 9 [3, 62]. These platforms are used for education, training and testing of algorithms and error models. A slightly more advanced version of this platform consists of a well-defined quantum computer stack that can measure and control a simple quantum device. This is the minimal system that has key quantum computing properties: controlling superposition and entanglement. The interfaces and functionalities of the components of this system should be clear. Provided that is the case, upgrading it would be of primary interest to quantum computer component suppliers. The suppliers can use the quantum computer demonstration platform to validate the performance of the component they are offering to the market and confirm that it works well in concert with other components. The quantum computer demonstration platform can also ensure that a supplier’s component is not limiting the performance of the system with respect to controlling the quantum-mechanical properties. The platform will evolve from a test and validation platform to a development platform for key components of a quantum computer, as outlined in the following subsections.

3.2 Quantum device development platform

A key component of a quantum computer is the quantum device. As outlined in Requirement 3, the development of a quantum device is challenging, and R&D labs of device manufacturers need a suitable development platform to improve the performance of a quantum device in an efficient way. In order to meet Requirement 3, one needs to close the quantum device engineering cycle (Fig. 2). To realize this, the device development platform is optimized for Requirement 4. The ultimate version of the quantum device development platform would also be able to function as a benchmarking and certification product that can compare the performance of quantum devices provided by different suppliers in an objective manner.

3.3 Quantum algorithm development platform

The current state-of-the-art quantum technology is not fully ready yet for the next product in the roadmap: the quantum algorithm development platform. Current algorithm development platforms based on classicalFootnote 10 computing technology outperform quantum computing-basedFootnote 11 algorithm development platforms, although the break-even point seems to be close. This transition from using classical to quantum computers as quantum algorithm development platforms will probably take several years. Currently, classical computers can still simulate more error-free qubits than state-of-the-art quantum computers can provide, a crucial parameter for efficient quantum algorithm development. However, this parameter is less important for the development of NISQ algorithms. Although quantum technology seems not to be ready yet for a quantum computer to be a fully functional algorithm development platform, most of the commercial activities in the quantum community focus on developing this product or derived products and services.Footnote 12 The users of this product will be the software developers of ICT companies that are looking for better platforms to develop and test their software on.
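
A rough sense of where this break-even point lies follows from the memory cost of full state-vector simulation, sketched below (assuming 16 bytes per complex amplitude; other simulation methods trade memory for runtime and shift the numbers).

```python
# Memory cost of full state-vector simulation, which bounds the number of
# error-free qubits a classical algorithm-development platform can emulate.
def statevector_memory_gib(n_qubits, bytes_per_amplitude=16):   # complex128
    return 2 ** n_qubits * bytes_per_amplitude / 2 ** 30

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_memory_gib(n):,.0f} GiB")
# 30 qubits fit in ~16 GiB of RAM, 40 qubits already require ~16 TiB, and a full
# 50-qubit state vector (~16 PiB) is beyond any existing machine.
```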

3.4 Quantum computer

It will likely take at least a decade before quantum technology has matured enough to give sufficient control of quantum-mechanical properties to make the quantum computer suitable for the HPC market. At that point, the real development of the quantum computer as a product will start, and the quantum computer will fulfill the promise of changing the world in a similar fashion to the classical computer before it. The quantum computer will be used by a wide variety of end-users to optimize their own products and services. The preceding quantum computer products, the quantum computer demonstration platform, the quantum device development platform and the quantum algorithm development platform, will have paved the way for a successful insertion of the quantum computer into the HPC market. A supply chain will have formed that ensures security of supply and quality of key components, an ICT workforce will be in place that is acquainted with the quantum computing paradigm, and commercial use cases will be available that prove the added value of the quantum computer.

4 Realizing ImpaQT: building a quantum computer together

One of the key propositions put forward in Sect. 2.2 was to tackle the question of accelerating quantum computing R&D efforts in a collaborative way, by involving independent commercial partners which can leverage their individual strengths, thereby mitigating the risks involved. Following this logic, a four-month-long project called ImpaQT was initiated by companies of the Dutch quantum ecosystemFootnote 13. These companies form a local supply chain for the following off-the-shelf key components: (i) algorithms to solve a specific problem that is likely to be solved efficiently on a quantum computer, (ii) software to characterize, calibrate and run algorithms on the quantum device, (iii) electronics to enable closed-loop control of the quantum device, (iv) cabling and filtering that is scalable to control a multi-qubit quantum device, and (v) multi-qubit Transmon-based quantum processors.

In the following section, we sketch the goal, approach, implementation, and successful completion of this project, giving support to the above proposition. An in-depth discussion of this project, its technical details and accomplishments will be published in a separate white paper. Together with TNO, the Dutch RTOFootnote 14, acting as facilitator of this project, the companies designed and built from scratch a full-stack R&D setup, which allowed the characterization and calibration of superconducting Transmon qubits on an 8-qubit test chip within a time period of 16 weeks (as depicted in the table of Fig. 3).

Figure 3

Part of the hardware back-end in TNO’s QITT lab, Delft, during the validation phase of the ImpaQT v1.0 project. The table shows the two phases of the project execution. The first 12 weeks were devoted to the definition of system requirements, system design, procurement and assembly. The second part (weeks 13 to 16) was dedicated to the validation of the full-stack integration by executing experiments which led to basic single-qubit gate analysis. As an example of coherent qubit control experiments, Rabi oscillations are obtained as a function of microwave drive amplitude and pulse duration. AllXY qubit tune-up exemplifies a subset of basic single-qubit gate experiments

4.1 Quantum computer demonstration platform: Quantum Accelerator v1.0

The supply chain partners used a systems engineering approach to develop the Quantum Accelerator, in which the first step consisted of the definition of system functionality, performance requirements and a system design. It was agreed that the system should be able to execute a set of spectroscopy, coherent qubit control and qubit gate analysis experiments. These functionality requirements were put to the test in the final stage of the project.

From the component perspective, off-the-shelf products from all partners were incorporated into the full stack, interfaces had to be defined, and gaps in the architecture had to be assessed and jointly bridged. Having well-defined interfaces was an important prerequisite and implies that components in the system design can be exchanged for alternative components that provide the same functionality, without the need to redesign the complete system.

Procurement, assembly, and hardware integration followed the system requirements and system design decisions and were accomplished by week 12 of the project. Testing and validating the system by performing experiments started in week 13. The functionality and performance requirements were sequentially tested by following a calibration tree procedure. By week 16, and in addition to the aforementioned required set of experiments, even an automated mixer calibration could be implemented. Two of the several experiments performed are shown in Fig. 3: on the left, Rabi oscillations are plotted as a function of microwave drive amplitude and pulse duration, and on the right, an AllXY tune-up experiment is presented. Both are performed on the same Transmon qubit with an energy relaxation time \(T_{1} \approx 15\ \mu\mathrm{s}\), a dephasing time \(T_{2}^{*} \approx 6\ \mu\mathrm{s}\) and an echo time \(T_{2} \approx 9\ \mu\mathrm{s}\).
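
As an illustration of the kind of analysis behind such experiments, the sketch below fits synthetic Rabi-oscillation data to a cosine model; it is not the ImpaQT analysis code, and all parameter values are invented.

```python
# Sketch of the fit behind a Rabi-oscillation experiment such as the one in
# Fig. 3, applied to synthetic data with invented parameters.
import numpy as np
from scipy.optimize import curve_fit

def rabi_model(amplitude, period, offset, contrast):
    """Excited-state population versus drive amplitude for a fixed pulse duration."""
    return offset - contrast / 2 * np.cos(2 * np.pi * amplitude / period)

amplitudes = np.linspace(0, 1, 51)
data = rabi_model(amplitudes, period=0.4, offset=0.5, contrast=0.9)
data += np.random.default_rng(seed=0).normal(scale=0.02, size=amplitudes.size)

popt, _ = curve_fit(rabi_model, amplitudes, data, p0=[0.5, 0.5, 1.0])
print(f"fitted Rabi period: {popt[0]:.3f} (drive-amplitude units), "
      f"pi-pulse amplitude: {popt[0] / 2:.3f}")
```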

4.2 Quantum Accelerator product roadmap

As outlined above, building this baseline quantum R&D setup, called Quantum Accelerator v1.0, with off-the-shelf components led to a functional multi-qubit full-stack system. This setup can be used by suppliers to validate and test their components while interacting with other components in the integrated system. The performance of this platform can now be incrementally improved simply by improving the performance of the different components. The incremental increase in performance will lead to subsequent versions of the Quantum Accelerator, until the improvement of the components stops. At that point, a redesign of the system will be required to get to the next level of performance. The redesign of the system will be based either on the next generation of the components or on a completely different technology paradigm, as outlined in the product roadmap section. Another reason to redesign the system is an adjustment in functionality requirements. Quantum Accelerator v1.0 was designed to provide suppliers of quantum computing components with a platform to test their components interacting with other components in a minimal quantum computer demonstrator. The next generation of this product will require an extension of functionality, so that it can not only test components, but also help suppliers to improve their components by providing detailed characterisation and benchmarking information. As outlined in Sect. 3, the functionality and performance requirements will become more challenging for every subsequent product in the product line development roadmap, with the quantum algorithm development platform as a third step and ultimately a commercially viable quantum computer for the HPC market.

5 Conclusion

In this paper, we have analyzed the current state of one of the most mature quantum technologies: superconducting circuits. We have outlined how to use this technology to do product development. The product development approach puts the focus on the functionality requirements of the product and uses state-of-the-art technology to build it. In this way, the development of the quantum computer as a commercially viable product can be accelerated. A series of simple quantum computers with specific functionalities is needed to build quantum computers that can outperform classical computers. An outline was given of a quantum computer product line roadmap, and an example of the development of the first product by a local supply chain in the Netherlands was presented. This shows that quantum technology development is no longer an exclusive area for government-funded universities or RTOs, nor is it limited to companies with large R&D budgets, such as large ICT companies and scale-up companies. The change in the R&D landscape shows that the quantum community has reached the next level of maturity, indicating that the quantum computer as a commercial product is becoming a reality today.

Availability of data and materials

All data that is used is public and referred to by footnotes or otherwise in the article.

Notes

  1. See https://en.wikipedia.org/wiki/Systems_engineering.

  2. For a list, see the Quantum Algorithm Zoo: https://quantumalgorithmzoo.org.

  3. As opposed to simulators based on idealized models.

  4. The definition of \(V_{\mathrm{Q}}\) is different in [21, 22] and [23]. Here we use the definition from [23].

  5. 10,000 years according to the authors, and 20 days according to Alibaba [27].

  6. Recent works on volumetric benchmarks have expanded upon the quantum volume [5, 28], addressing both the circuit width-depth trade-off and the binary nature of the quantum volume.

  7. And can be scaled down to \(0.4 \mathrm{mm}^{2}\) [43].

  8. 1 microwave drive, 1 flux bias, 2 coupler bias and a feedline input and output shared by 10 qubits.

  9. Cloud Based Quantum Computing

  10. for instance: ATOS QLM

  11. for instance: IBM QI

  12. Quantum Programming

  13. Quantum Delta NL

  14. TNO

Abbreviations

HPC:

High Performance Computing

ICT:

Information and Communications Technology

NISQ:

Noisy Intermediate Scale Quantum

RTO:

Research & Technology Organisation

SME:

Small & Medium Enterprise

References

  1. Arute F, Arya K, Babbush R, Bacon D, Bardin JC, Barends R, Biswas R, Boixo S, Brandao FGSL, Buell DA, Burkett B, Chen Y, Chen Z, Chiaro B, Collins R, Courtney W, Dunsworth A, Farhi E, Foxen B, Fowler A, Gidney C, Giustina M, Graff R, Guerin K, Habegger S, Harrigan MP, Hartmann MJ, Ho A, Hoffmann M, Huang T, Humble TS, Isakov SV, Jeffrey E, Jiang Z, Kafri D, Kechedzhi K, Kelly J, Klimov PV, Knysh S, Korotkov A, Kostritsa F, Landhuis D, Lindmark M, Lucero E, Lyakh D, Mandrà S, McClean JR, McEwen M, Megrant A, Mi X, Michielsen K, Mohseni M, Mutus J, Naaman O, Neeley M, Neill C, Niu MY, Ostby E, Petukhov A, Platt JC, Quintana C, Rieffel EG, Roushan P, Rubin NC, Sank D, Satzinger KJ, Smelyanskiy V, Sung KJ, Trevithick MD, Vainsencher A, Villalonga B, White T, Yao ZJ, Yeh P, Zalcman A, Neven H, Martinis JM, editors. Quantum supremacy using a programmable superconducting processor. Nature. 2019;574(7779):505–10. https://doi.org/10.1038/s41586-019-1666-5.

  2. Zhong H-S, Wang H, Deng Y-H, Chen M-C, Peng L-C, Luo Y-H, Qin J, Wu D, Ding X, Hu Y, Hu P, Yang X-Y, Zhang W-J, Li H, Li Y, Jiang X, Gan L, Yang G, You L, Wang Z, Li L, Liu N-L, Lu C-Y, Pan J-W. Quantum computational advantage using photons. Science. 2020. https://doi.org/10.1126/science.abe8770. https://science.sciencemag.org/content/early/2020/12/02/science.abe8770.full.pdf.

  3. IBMQ. IBM Quantum Experience. https://quantum-computing.ibm.com/. (2016).

  4. Bruzewicz CD, Chiaverini J, McConnell R, Sage JM. Trapped-ion quantum computing: progress and challenges. Appl Phys Rev. 2019;6(2):021314. https://doi.org/10.1063/1.5088164.

  5. Blume-Kohout R, Young KC. A volumetric framework for quantum computer benchmarks. arXiv:1904.05546. (2019).

  6. Arute F, Arya K, Babbush R, Bacon D, Bardin JC, Barends R, Boixo S, Broughton M, Buckley BB, Buell DA, Burkett B, Bushnell N, Chen Y, Chen Z, Chiaro B, Collins R, Courtney W, Demura S, Dunsworth A, Eppens D, Farhi E, Fowler A, Foxen B, Gidney C, Giustina M, Graff R, Habegger S, Harrigan MP, Ho A, Hong S, Huang T, Ioffe LB, Isakov SV, Jeffrey E, Jiang Z, Jones C, Kafri D, Kechedzhi K, Kelly J, Kim S, Klimov PV, Korotkov AN, Kostritsa F, Landhuis D, Laptev P, Lindmark M, Leib M, Lucero E, Martin O, Martinis JM, McClean JR, McEwen M, Megrant A, Mi X, Mohseni M, Mruczkiewicz W, Mutus J, Naaman O, Neeley M, Neill C, Neukart F, Neven H, Niu MY, O’Brien TE, O’Gorman B, Ostby E, Petukhov A, Putterman H, Quintana C, Roushan P, Rubin NC, Sank D, Satzinger KJ, Skolik A, Smelyanskiy V, Strain D, Streif M, Sung KJ, Szalay M, Vainsencher A, White T, Yao ZJ, Yeh P, Zalcman A, Zhou L, editors. Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. arXiv:2004.04197. (2020).

  7. Shor PW. Scheme for reducing decoherence in quantum computer memory. Phys Rev A. 1995;52:2493. https://doi.org/10.1103/PhysRevA.52.R2493.

  8. Feynman RP. Simulating physics with computers. Int J Theor Phys. 1982;21(6–7):467–88.

  9. IBMQ. IBM Achieves Highest Quantum Volume to Date. Establishes Roadmap for Reaching Quantum Advantage. https://newsroom.ibm.com/2019-03-04-IBM-Achieves-Highest-Quantum-Volume-to-Date-Establishes-Roadmap-for-Reaching-Quantum-Advantage. (2019).

  10. Honeywell. Behind the scenes of a major quantum breakthrough. https://www.honeywell.com/us/en/news/2020/03/behind-the-scenes-of-a-major-quantum-breakthrough (2020).

  11. IonQ. IBM Achieves Highest Quantum Volume to Date. Establishes Roadmap for Reaching Quantum Advantage. 2020. https://ionq.com/posts/december-09-2020-scaling-quantum-computer-roadmap.

  12. Otterbach JS, Manenti R, Alidoust N, Bestwick A, Block M, Bloom B, Caldwell S, Didier N, Schuyler Fried E, Hong S, Karalekas P, Osborn CB, Papageorge A, Peterson EC, Prawiroatmodjo G, Rubin N, Ryan CA, Scarabelli D, Scheer M, Sete EA, Sivarajah P, Smith RS, Staley A, Tezak N, Zeng WJ, Hudson A, Johnson BR, Reagor M, da Silva MP, Rigetti C. Unsupervised Machine Learning on a Hybrid Quantum Computer. 2017. arXiv:1712.05771.

  13. Knight W. IBM Raises the Bar with a 50-Qubit Quantum Computer. MIT Technology Review. https://www.technologyreview.com/s/609451/ibm-raises-the-bar-with-a-50-qubit-quantum-computer/. Accessed 2017-11-10.

  14. Kelly J. A Preview of Bristlecone, Google’s New Quantum Processor. https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html. Accessed 2018-03-05.

  15. Intel. The Future of Quantum Computing is Counted in Qubits. https://newsroom.intel.com/news/future-quantum-computing-counted-qubits/. Accessed 2018-05-02.

  16. Campbell ET, Terhal BM, Vuillot C. Roads towards fault-tolerant universal quantum computation. Nature. 2017;549(7671):172–9. https://doi.org/10.1038/nature23460.

  17. McArdle S, Endo S, Aspuru-Guzik A, Benjamin SC, Yuan X. Quantum computational chemistry. Rev Mod Phys. 2020;92:015003. https://doi.org/10.1103/RevModPhys.92.015003.

  18. Gidney C, Ekerå M. How to factor 2048 bit rsa integers in 8 hours using 20 million noisy qubits. arXiv:1905.09749. (2019).

  19. Preskill J. Quantum Computing in the NISQ era and beyond. Quantum. 2018;2:79. https://doi.org/10.22331/q-2018-08-06-79.

  20. Fetzer JH. Program verification. In: Artificial intelligence: its scope and limits. Berlin: Springer; 1990. https://doi.org/10.1007/978-94-009-1900-6. https://www.springer.com/gp/book/9780792305057.

  21. Bishop L, Bravyi S, Cross AW, Gambetta JM, Smolin JA. Quantum volume. (2017). https://www.semanticscholar.org/paper/Quantum-Volume-Bishop-Bravyi/650c3fa2a231cd77cf3d882e1659ee14175c01d5.

  22. Moll N, Barkoutsos P, Bishop LS, Chow JM, Cross A, Egger DJ, Filipp S, Fuhrer A, Gambetta JM, Ganzhorn M. Quantum optimization using variational algorithms on near-term quantum devices. Quantum Sci Technol. 2018;3(3):030503. https://doi.org/10.1088/2058-9565/aab822.

  23. Cross AW, Bishop LS, Sheldon S, Nation PD, Gambetta JM. Validating quantum computers using randomized model circuits. Phys Rev A. 2019;100:032328. https://doi.org/10.1103/PhysRevA.100.032328.

  24. Gambetta JM. IBM’s Roadmap For Scaling Quantum Technology. https://www.ibm.com/blogs/research/2020/09/ibm-quantum-roadmap/. (2020). Accessed 2020-12-10.

  25. Pino JM, Dreiling JM, Figgatt C, Gaebler JP, Moses SA, Allman MS, Baldwin CH, Foss-Feig M, Hayes D, Mayer K, Ryan-Anderson C, Neyenhuis B. Demonstration of the QCCD trapped-ion quantum computer architecture. arXiv:2003.01293 [quant-ph] (2020). Accessed 2020-12-14.

  26. Chapman P. IonQ. 2020. https://ionq.com/posts/december-09-2020-scaling-quantum-computer-roadmap. Accessed 2020-12-14.

  27. Huang C, Zhang F, Newman M, Cai J, Gao X, Tian Z, Wu J, Xu H, Yu H, Yuan B, Szegedy M, Shi Y, Chen J. Classical Simulation of Quantum Supremacy Circuits. arXiv:2005.06787 [quant-ph]. (2020). Accessed 2020-12-14.

  28. Proctor T, Rudinger K, Young K, Nielsen E, Blume-Kohout R. Measuring the capabilities of quantum computers. arXiv:2008.11294. (2020).

  29. Aleiner I, Arute F, Arya K, Atalaya J, Babbush R, Bardin JC, Barends R, Bengtsson A, Boixo S, Bourassa A, Broughton M, Buckley BB, Buell DA, Burkett B, Bushnell N, Chen Y, Chen Z, Chiaro B, Collins R, Courtney W, Demura S, Derk AR, Dunsworth A, Eppens D, Erickson C, Farhi E, Fowler AG, Foxen B, Gidney C, Giustina M, Gross JA, Harrigan MP, Harrington SD, Hilton J, Ho A, Hong S, Huang T, Huggins WJ, Ioffe LB, Isakov SV, Jeffrey E, Jiang Z, Jones C, Kafri D, Kechedzhi K, Kelly J, Kim S, Klimov PV, Korotkov AN, Kostritsa F, Landhuis D, Laptev P, Lucero E, Martin O, McClean JR, McCourt T, McEwen M, Megrant A, Mi X, Miao KC, Mohseni M, Mruczkiewicz W, Mutus J, Naaman O, Neeley M, Neill C, Neven H, Newman M, Niu MY, O’Brien TE, Opremcak A, Ostby E, Pató B, Petukhov A, Quintana C, Redd N, Roushan P, Rubin NC, Sank D, Satzinger KJ, Shvarts V, Smelyanskiy V, Strain D, Szalay M, Trevithick MD, Villalonga B, White T, Yao ZJ, Yeh P, Zalcman A, editors. Accurately computing electronic properties of materials using eigenenergies. arXiv:2012.00921 [quant-ph]. (2020).

  30. Terhal BM, DiVincenzo DP. Adaptive quantum computation, constant depth quantum circuits and arthur-merlin games. arXiv:quant-ph/0205133. (2002).

  31. Farhi E, Harrow AW. Quantum supremacy through the quantum approximate optimization algorithm. 2016. https://arxiv.org/abs/1602.07674.

  32. Versluis R, Poletto S, Khammassi N, Tarasinski BM, Haider N, Michalak DJ, Bruno A, Bertels K, DiCarlo L. Scalable quantum circuit and control for a superconducting surface code. Phys Rev Appl. 2017;8:034021. https://doi.org/10.1103/PhysRevApplied.8.034021.

  33. Chamberland C, Zhu G, Yoder TJ, Hertzberg JB, Cross AW. Topological and Subsystem Codes on Low-Degree Graphs with Flag Qubits. 2020;19.

  34. Córcoles AD, Kandala A, Javadi-Abhari A, McClure DT, Cross AW, Temme K, Nation PD, Steffen M, Gambetta JM. Challenges and opportunities of near-term quantum computing systems. In: Proceedings of the IEEE 108(8). 2020. p. 1338–52. https://doi.org/10.1109/JPROC.2019.2954005. Conference Name: Proceedings of the IEEE.

  35. Berke C, Varvelis E, Trebst S, Altland A, DiVincenzo DP. Transmon platform for quantum computing challenged by chaotic fluctuations. arXiv:2012.05923 [cond-mat, physics:quant-ph] (2020). Accessed 2020-12-14.

  36. Martinis JM. Saving superconducting quantum processors from qubit decay and correlated errors generated by gamma and cosmic rays. arXiv:2012.06137 [cond-mat, physics:quant-ph] (2020). Accessed 2020-12-14.

  37. Klimov PV, Kelly J, Martinis JM, Neven H. The Snake Optimizer for Learning Quantum Processor Control Parameters. arXiv:2006.04594 [quant-ph] (2020). Accessed 2020-12-15.

  38. Muthusubramanian N, Bruno A, Tarasinski BM, Fogini A, Hagen R, DiCarlo L. APS-APS March Meeting 2019-Event-Local trimming of transmon qubit frequency by laser annealing of Josephson junctions. In: Bulletin of the American Physical Society. American Physical Society. vol. 64. Am. Physical Soc. 2019. https://meetings.aps.org/Meeting/MAR19/Session/B29.15. Accessed 2020-12-15.

  39. Hertzberg JB, Zhang EJ, Rosenblatt S, Magesan E, Smolin JA, Yau J-B, Adiga VP, Sandberg M, Brink M, Chow JM, Orcutt JS. Laser-annealing Josephson junctions for yielding scaled-up superconducting quantum processors. arXiv:2009.00781 [cond-mat, physics:quant-ph]. (2020). Accessed 2020-12-10.

  40. Rosenberg D, Kim D, Das R, Yost D, Gustavsson S, Hover D, Krantz P, Melville A, Racz L, Samach GO, Weber SJ, Yan F, Yoder JL, Kerman AJ, Oliver WD. 3D integrated superconducting qubits. npj Quantum Inf. 2017;3(1):1–4. https://doi.org/10.1038/s41534-017-0044-0.

  41. Arrazola JM, Bergholm V, Brádler K, Bromley TR, Collins MJ, Dhand I, Fumagalli A, Gerrits T, Goussev A, Helt LG, Hundal J, Isacsson T, Israel RB, Izaac J, Jahangiri S, Janik R, Killoran N, Kumar SP, Lavoie J, Lita AE, Mahler DH, Menotti M, Morrison B, Nam SW, Neuhaus L, Qi HY, Quesada N, Repingon A, Sabapathy KK, Schuld M, Su D, Swinarton J, Száva A, Tan K, Tan P, Vaidya VD, Vernon Z, Zabaneh Z, Zhang Y. Quantum circuits with many photons on a programmable nanophotonic chip. arXiv:2103.02109. (2021).

  42. Dickel C. Scalability and modularity for transmon-based quantum processors. PhD Dissertation. https://doi.org/10.4233/uuid:78155c28-3204-4130-a645-a47e89c46bc5. (2018).

  43. Bruno A, Poletto S, Haider N, DiCarlo L. X48.00004: Extensible circuit QED processor architecture with vertical I/O. APS March Meeting. http://meetings.aps.org/Meeting/MAR16/Event/269614. (2016).

  44. Krinner S, Storz S, Kurpiers P, Magnard P, Heinsoo J, Keller R, Lütolf J, Eichler C, Wallraff A. Engineering cryogenic setups for 100-qubit scale superconducting circuit systems. EPJ Quantum Technol. 2019;6(1). https://doi.org/10.1140/epjqt/s40507-019-0072-0.

  45. Bosman S, Kuitenbrouwer D, Bos W, Vermeulen K, Lindeborg K, Sorgedrager R, Thiney V, Kammhuber J. V26.00012: scaling the input/output architecture of quantum processors to kQbit, and beyond, size in the NISQ era. APS March Meeting. 2019. http://meetings.aps.org/Meeting/MAR19/Session/V26.12.

  46. Franke DP, Clarke JS, Vandersypen LMK, Veldhorst M. Rent’s rule and extensibility in quantum computing. Microprocess Microsyst. 2019;67:1–7. https://doi.org/10.1016/j.micpro.2019.02.006.

    Article  Google Scholar 

  47. Asaad S, Dickel C, Poletto S, Bruno A, Langford NK, Rol MA, Deurloo D, DiCarlo L. Independent, extensible control of same-frequency superconducting qubits by selective broadcasting. npj Quantum Inf. 2016;2:16029.

    Article  ADS  Google Scholar 

  48. Kelly J, O’Malley P, Neeley M, Neven H, Martinis JM. Physical qubit calibration on a directed acyclic graph. arXiv:1803.03226. (2018).

  49. Wittler N, Roy F, Pack K, Werninghaus M, Roy AS, Egger DJ, Filipp S, Wilhelm FK, Machnes S. Integrated tool set for control, calibration, and characterization of quantum devices applied to superconducting qubits. Phys Rev Appl. 2021;15:034080. https://doi.org/10.1103/PhysRevApplied.15.034080.

    Article  ADS  Google Scholar 

  50. Kelly J, Barends R, Fowler AG, Megrant A, Jeffrey E, White TC, Sank D, Mutus JY, Campbell B, Chen Y, Chen Z, Chiaro B, Dunsworth A, Lucero E, Neeley M, Neill C, O’Malley PJJ, Quintana C, Roushan P, Vainsencher A, Wenner J, Martinis JM. Scalable in situ qubit calibration during repetitive error detection. Phys Rev A. 2016;94:032321.

    Article  ADS  Google Scholar 

  51. Majumder S, Andreta de Castro L, Brown KR. Real-time calibration with spectator qubits. npj Quantum Inf. 2020;6(1):1–9. https://doi.org/10.1038/s41534-020-0251-y. Number: 1 Publisher: Nature Publishing Group. Accessed 2020-12-15.

    Article  Google Scholar 

  52. Rol MA, Ciorciaro L, Malinowski FK, Tarasinski BM, Sagastizabal RE, Bultink CC, Salathe Y, Haandbaek N, Sedivy J, DiCarlo L. Time-domain characterization and correction of on-chip distortion of control pulses in a quantum processor. Appl Phys Lett. 2020;116(5):054001. https://doi.org/10.1063/1.5133894.

    Article  ADS  Google Scholar 

  53. Klimov PV, Kelly J, Chen Z, Neeley M, Megrant A, Burkett B, Barends R, Arya K, Chiaro B, Chen Y, Dunsworth A, Fowler A, Foxen B, Gidney C, Giustina M, Graff R, Huang T, Jeffrey E, Lucero E, Mutus JY, Naaman O, Neill C, Quintana C, Roushan P, Sank D, Vainsencher A, Wenner J, White TC, Boixo S, Babbush R, Smelyanskiy VN, Neven H, Martinis JM, editors. Fluctuations of energy-relaxation times in superconducting qubits. Phys Rev Lett. 2018;121:090502. https://doi.org/10.1103/PhysRevLett.121.090502.

    Article  ADS  Google Scholar 

  54. Nielsen E, Rudinger K, Proctor T, Russo A, Young K, Blume-Kohout R. Probing quantum processor performance with pyGSTi. In: Quantum science and technology. vol. 5. Bristol: IOP Publishing; 2020. p. 044002. https://doi.org/10.1088/2058-9565/ab8aa4.

    Chapter  Google Scholar 

  55. Rol MA. Control for programmable superconducting quantum systems. PhD Dissertation. https://doi.org/10.4233/uuid:0a2ba212-f6bf-4c64-8f3d-b707f1e44953. (2020).

  56. Ayers JB. Handbook of supply chain management (resource management). 2nd ed. USA: Auerbach Publ.; 2006.

    Google Scholar 

  57. van de Kerkhof M, Jasper H, Levasier L, Peeters R, van Es R, Bosker J-W, Zdravkov A, Lenderink E, Evangelista F, Broman P, Bilski B, Last T. Enabling sub-10nm node lithography: presenting the NXE:3400B EUV scanner. In: Panning EM, editor. Extreme Ultraviolet (EUV) lithography VIII. vol. 10143. Bellingham: SPIE; 2017. p. 34–47. https://doi.org/10.1117/12.2258025. International Society for Optics and Photonics.

    Chapter  Google Scholar 

  58. Stierle M, Decroix G, Siebert T, Radu I, de Greve K, Govoreanu B, Campenhout JV, de Halleux V, Last T, van Zwet E. Quantum technologies: the future is quantum … and the future is now. https://thertoinnovationsummit.eu/sites/ default/files/inline-files/201001_QT%20RTO%20White%20Paper_The% 20Future%20is%20Now_ Summit%20format_final_1.pdf.

  59. Airbus. https://www.airbus.com/be-an-airbus-supplier.html.

  60. Zeiss. https://www.zeiss.de/semiconductor-manufacturing-technology/news-events/pressemitteilungen/2020/zeiss-trumpf-and-fraunhofer-research-team-awarded-the-deutscher-zukunftspreis-.html.

  61. IBM. https://www-03.ibm.com/press/us/en/pressrelease/32003.ws.

  62. Last T, Samkharadze N, Eendebak P, Versluis R, Xue X, Sammak A, Brousse D, Loh K, Polinder H, Scappucci G, Veldhorst M, Vandersypen L, Maturová K, Veltin J, Alberts G. Quantum inspire: QuTech’s platform for co-development and collaboration in quantum computing. In: Novel patterning technologies for semiconductors, MEMS/NEMS and MOEMS 2020. vol. 11324. Bellingham: SPIE; 2020. p. 49–59. https://doi.org/10.1117/12.2551853. International Society for Optics and Photonics.

    Chapter  Google Scholar 


Acknowledgements

The authors would like to thank TNO for granting permission to publish a photo of the Quantum Information Technology Test (QITT) facility.

Authors’ information

Garrelt Alberts obtained his MSc degree in Physics and his BSc in Dutch Law at the University of Utrecht in 1998. He finished his PDEng in Mechanical Engineering on Computational Mechanics at the University of Twente in 2001. In 2007 he became a TNO Project Manager, responsible for several product development projects and research programs. From 2009 onwards he was Department Manager of a group of 60 R&D professionals. In 2017 he started as a QuTech Roadmap Leader, responsible for managing the engineering activities. He is Co-Founder and Managing Director of Orange Quantum Systems.

Adriaan Rol worked on control for superconducting quantum systems during his PhD (cum laude) in the group of Prof. DiCarlo. He has developed new calibration and characterization protocols, as well as a new type of two-qubit gate, and made key contributions to the development of Quantum Infinity, a transmon-based full-stack quantum computer that preceded QuTech’s Quantum Inspire platform. Adriaan has worked closely with experts from all layers of the stack, which was formally recognized through the Zurich Instruments Pioneer Award, and resulted in award-winning papers. Before starting his PhD, Adriaan was on the executive board of a non-profit business consultancy. He is Co-Founder and Director of Research & Development at Orange Quantum Systems.

Thorsten Last received his Diploma (MSc equivalent) in Physics and his PhD in Electrical Engineering from the Ruhr-Universität Bochum, Germany. He has been involved in various projects in academia and the semiconductor industry, ranging from "beyond CMOS" device physics to Silicon Valley tech consultancy and EUV lithography. Since 2017, Thorsten has been responsible, as a Senior Systems Engineer at TNO/QuTech, for organizing the manufacturing effort for Quantum Inspire's SPIN-2 quantum processor and for setting up Quantum Inspire's quantum computing lab. Thorsten is Co-Founder and Director of Development & Engineering at Orange Quantum Systems.

Benno Broer is a quantum physicist from Delft University of Technology in the Netherlands, who graduated in 1998 on quantum-dot research in the quantum-transport group (now QuTech). Benno is also a serial entrepreneur and experienced boardroom consultant, and as CEO of Qu&Co he is dedicated to translating emerging quantum technologies into solutions for high-value, industry-relevant problems. He has over 15 years of business development, B2B commercial and investment experience.

Matthijs Rijlaarsdam received his MSc in Computer Science (with an annotation in quantum technologies) from TU Delft. He did his thesis research at QuTech, where he also led a project performing market research for the quantum internet field. Matthijs was also a project manager at a non-profit strategy consultancy, where he advised technical startups and nonprofits. He is Co-Founder and Managing Director at QuantWare.

Niels Bultink is the Co-Founder and CEO of Qblox, a start-up company based in the Netherlands that focuses on the control electronics required for quantum computing. The company is a spin-out of QuTech (TU Delft and TNO), where Niels conducted his PhD research in the group of Prof. DiCarlo. His interests include experimental quantum information processing and implementations of fault-tolerant quantum computing. His work focuses particularly on controlling multi-qubit processors to build quantum computers with an ever-increasing number of qubits. His publications and work have been recognized through multiple awards, among them a recent CES Innovation Award honoree distinction.

Funding

The research was funded by the companies that employ the authors, as listed in the Authors' affiliations.

Author information


Contributions

GA: main author of Sect. 1, Sect. 3 and Sect. 4. AR: main author of Sect. 2.1. TL: main author of Sect. 2.2. MR, AH, NB and BB reviewed the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Garrelt J. N. Alberts.

Ethics declarations

Competing interests

All authors are employed by companies active in the Quantum Computing supply chain.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Alberts, G.J.N., Rol, M.A., Last, T. et al. Accelerating quantum computer developments. EPJ Quantum Technol. 8, 18 (2021). https://doi.org/10.1140/epjqt/s40507-021-00107-w


Keywords