
Bermudan option pricing by quantum amplitude estimation and Chebyshev interpolation


Pricing of financial derivatives, in particular early-exercisable options such as Bermudan options, is an important but computationally heavy task in financial institutions, and its speed-up would provide a large business impact. Recently, applications of quantum computing to financial problems have begun to be investigated. In this paper, we propose a quantum algorithm for Bermudan option pricing. This method approximates the continuation value, whose computation is a crucial part of Bermudan option pricing, by Chebyshev interpolation, using the values at the interpolation nodes estimated by quantum amplitude estimation. In this method, the number of calls to the oracle that generates underlying asset price paths scales as \(\widetilde{O}(\epsilon ^{-1})\), where ϵ is the error tolerance of the option price. This means a quadratic speed-up compared with classical Monte Carlo-based methods such as least-squares Monte Carlo, in which the number of oracle calls is \(\widetilde{O}(\epsilon ^{-2})\).


Following the recent advances of quantum computing technologies,Footnote 1 much research has been done on applications to practical problems in various industries. One of the promising targets is finance (see [2–4] for reviews). Financial firms have many heavy computational tasks in their daily business,Footnote 2 and therefore the speed-up of such tasks by quantum computers is expected to provide a large impact. For example, previous papers studied option pricing [7–18], risk measurement [19–22], portfolio optimization [23–25], and so on.

In this paper, we focus on Bermudan option pricing and consider how to speed it up by quantum algorithms. Let us briefly describe the problem. An option is a financial contract between two parties, the option buyer and the option seller, which conveys to the option buyer the right to buy some underlying assets, such as stocks and bonds, from the option seller, or to sell them to the option seller, at some specified price on some date. More generally, it can be regarded as a contract in which the option buyer receives from the option seller some amount of money (the payoff) determined by reference to the underlying asset price. Options are classified by the timing of exercise of the right. In a European option, the option buyer can exercise the right only at one predetermined date, called the maturity. On the other hand, there are early-exercisable options, in which the option buyer can choose the exercise date. In an American option, the option buyer can exercise the right at any time before the final maturity T. In a Bermudan option, exercise is possible on any of finitely many predetermined dates including T. We hereafter call such dates exercise dates.

Major financial firms hold large portfolios of a wide variety of options, and therefore pricing them is an important task for their business. However, it is also a difficult task. Basically, the option price is the expected value of the payoff under some stochastic model describing the random time evolution of the underlying assets. Although European options can sometimes be priced easily, for example by analytic formulas, pricing Bermudan and American options typically involves heavy numerical calculation. The difficulty partly stems from the nature of the problem as dynamic programming: pricing early-exercisable options requires determining the optimal exercise time, which is a crucial part of the computation. Although several kinds of methods aim to reflect early exercise in the option price, each has pros and cons.

One major category of pricing methods is the Monte Carlo-based method,Footnote 3 in which we generate many sample paths of the evolution of the underlying asset prices and estimate the expected payoff as the average payoff over the paths. This approach has an advantage in the case of multiple underlying assets: the estimation error on the option price decays as \(\widetilde{O}(N^{-1/2})\)Footnote 4 as the sample number N increases, regardless of the number d of underlying assets. In other words, it suffices to take \(\widetilde{O}(\epsilon ^{-2})\) samples in order to achieve the error tolerance ϵ on the option price. This contrasts with other methods, for example approaches based on solving partial differential equations [27, 28], whose complexity is \(\widetilde{O}((1/\epsilon )^{{\mathrm{poly}}(d)})\). On the other hand, in Monte Carlo-based methods, it is difficult to determine the optimal exercise time precisely, and we have to approximate it in some way. In many cases, this is done through approximation of the continuation value, which is the option price at each exercise date in the case that the option buyer forgoes exercise. The option should be exercised if the payoff is larger than the continuation value, and should not be exercised otherwise.

In this category, the least-squares Monte Carlo (LSM) [29] is widely used. LSM estimates the continuation value at each exercise date by linear regression using the generated sample paths as training data, and then, going backward from the final maturity to the present, finds the present option price.

Note that this method can also price American options approximately, replacing exercisability at any point in the continuous time period with that at discrete dates with sufficiently small intervals.

In this paper, we propose a new method for Bermudan option pricing, combining Chebyshev interpolation and the quantum algorithm for Monte Carlo integration [30–32], which is based on quantum amplitude estimation (QAE) [31, 33–42]. As far as the author knows, this is the first proposal of a quantum method for Bermudan option pricing. Chebyshev interpolation is a widely used method for function approximation,Footnote 5 and has already been used in some (classical) methods for Bermudan option pricing [44–50]. In the proposed method, given access to the quantum circuit (or oracle) for the time evolution of the underlying asset prices, we calculate the continuation values at the interpolation nodes by the quantum algorithm, and construct the Chebyshev interpolation using these values. Importantly, this method outputs an estimate of the option price with error at most ϵ, calling the oracle only \(\widetilde{O}(\epsilon ^{-1})\) times. Thus, as is commonly observed in applications of QAE to various kinds of problems, we obtain a quadratic speed-up compared with the classical Monte Carlo-based methods such as LSM and the Chebyshev interpolation-based methods.

The rest of this paper is organized as follows. In Sect. 2, we briefly explain Chebyshev polynomials and function approximation by them. We present how to calculate the coefficients of the Chebyshev expansion in general and the upper bound for the approximation error. In Sect. 3, we present the general formulation of Bermudan option pricing and explain LSM as a typical classical solution. In Sect. 4, we explain QAE and QAE-based Monte Carlo integration. Then, in Sect. 5, we present the new algorithm for Bermudan option pricing based on Chebyshev interpolation and QAE. We also present an upper bound on the price error of the method, and one on the complexity sufficient to achieve a given error tolerance. In Sect. 6, we make some remarks: comparison with existing methods, comments on exponential factors with respect to the number of exercise dates in the error bound, and the possibility of quantizing LSM. Section 7 summarizes this paper. All proofs are presented in the Appendix.


We here explain the notations used in this paper.

\(\mathbb{N}\) denotes the set of all positive integers, and \(\mathbb{N}_{0} := \{0\}\cup \mathbb{N}\). We define \([n]:=\{i\in \mathbb{N} \mid i\le n\}\) for any \(n\in \mathbb{N}\), and \([n]_{0}:=\{i\in \mathbb{N}_{0} \mid i\le n\}\) for any \(n\in \mathbb{N}_{0}\). We also define \(\mathbb{N}_{\ge n}:=\{i\in \mathbb{N} \mid i\ge n\}\) for \(n\in \mathbb{N}\). Similarly, we define \(\mathbb{R}_{>a}:=\{x\in \mathbb{R} \mid x>a\}\) and \(\mathbb{R}_{\ge a}:=\{x\in \mathbb{R} \mid x\ge a\}\) for \(a\in \mathbb{R}\). \(\mathbb{R}_{+}\) denotes the set of all positive real numbers, that is, \(\mathbb{R}_{>0}\).

For \(a,b\in \mathbb{N}_{0}\), \(\delta _{a,b}\) denotes the Kronecker delta, which is 1 if \(a=b\) and 0 otherwise. For \(d\in \mathbb{N}\) and \(\vec{l}_{1},\vec{l}_{2}\in \mathbb{N}_{0}^{d}\), we also define \(\delta _{\vec{l}_{1},\vec{l}_{2}}\), which is 1 if \(\vec{l}_{1}=\vec{l}_{2}\) and 0 otherwise.

For a measure space \((\Omega ,\mathcal{F},\mu )\) and \(p\in \mathbb{R}_{\ge 1}\), \(L^{p}(\Omega ,\mu )\) denotes the \(L^{p}\) space on it.

The indicator function \(1_{C}\) takes 1 if the condition C is satisfied, and 0 otherwise.

In this paper, we consider quantum states of systems consisting of some quantum registers with some qubits. For \(x\in \mathbb{R}\), \(| x \rangle \) denotes one of the computational basis states on some register, whose bit string corresponds to the binary representation of x with truncation at some digit. For \(d\in \mathbb{N}\) and \(\vec{x}=(x_{1},\ldots,x_{d})^{T}\in \mathbb{R}^{d}\), \(| \vec{x} \rangle :=| x_{1} \rangle \cdots | x_{d} \rangle \) is the state on the d-register system.

Approximation of functions by Chebyshev interpolation

For \(l\in \mathbb{N}_{0}\), the l-th Chebyshev polynomial (of the first kind) is defined as

$$ T_{l}(x):=\cos \bigl(l\cdot \arccos (x)\bigr), $$

where \(x\in [-1,1]\). One of its important properties is the discrete orthogonality: for any \(m\in \mathbb{N}_{0}\) and \(l_{1},l_{2}\in [m]_{0}\),

$$ \sum_{j=0}^{m} T_{l_{1}}(x_{m,j})T_{l_{2}}(x_{m,j})= \textstyle\begin{cases} 0 ;& \text{if } l_{1}\ne l_{2}, \\ m+1 ;& \text{if } l_{1}=l_{2}=0, \\ \frac{m+1}{2} ;& \text{if } l_{1}=l_{2}>0. \end{cases} $$

Here, \(x_{m,j}\) is the Chebyshev node defined as

$$ x_{m,j} = \cos \biggl(\frac{j+\frac{1}{2}}{m+1}\pi \biggr) $$

for \(j\in [m]_{0}\). \(x_{m,0}, \ldots, x_{m,m}\) are the zeros of \(T_{m+1}(x)\).

We also define the tensorized Chebyshev polynomials on a general hyperrectangle in \(\mathbb{R}^{d}\), where \(d\in \mathbb{N}\). That is, given

$$ \mathcal{D}=[L_{1},U_{1}]\times \cdots\times [L_{d},U_{d}], $$

with \(L_{1},\ldots,L_{d},U_{1},\ldots,U_{d}\in \mathbb{R}\) satisfying \(L_{1}< U_{1},\ldots , L_{d}< U_{d}\), we define

$$ \widetilde{T}_{\mathcal{D},\vec{l}} \ (\vec{S}) := \prod _{i=1}^{d} T_{l_{i}} \bigl(\theta _{\mathcal{D},i}(S_{i}) \bigr) $$

for every \(\vec{l}=(l_{1},\ldots,l_{d})^{T}\in \mathbb{N}_{0}^{d}\) and \(\vec{S}=(S_{1},\ldots,S_{d})^{T}\in \mathcal{D}\), where \(\theta _{\mathcal{D},i}(S_{i})\) is the i-th component of

$$ \theta _{\mathcal{D}}(\vec{S}) := \biggl( \frac{2S_{1}-U_{1}-L_{1}}{U_{1}-L_{1}},\ldots, \frac{2S_{d}-U_{d}-L_{d}}{U_{d}-L_{d}} \biggr)^{T}. $$

For the above polynomials, the orthogonality relation is now

$$ \sum_{\vec{S}\in \mathcal{G}^{d,m}_{\mathcal{D}}} \widetilde{T}_{ \mathcal{D},\vec{l}_{1}}(\vec{S}) \widetilde{T}_{\mathcal{D},\vec{l}_{2}}( \vec{S})= \textstyle\begin{cases} \frac{(m+1)^{d}}{2^{\aleph (\vec{l}_{1} )}} ;& \text{if } \vec{l}_{1}= \vec{l}_{2}, \\ 0 ;& \text{if } \vec{l}_{1}\ne \vec{l}_{2} \end{cases} $$

for every \(m\in \mathbb{N}\) and \(\vec{l}_{1},\vec{l}_{2}\in [m]_{0}^{d}\), where \(\aleph (\vec{l} ) := \# \{i\in [d] \mid l_{i}>0 \}\) for \(\vec{l}=(l_{1},\ldots,l_{d})^{T}\in \mathbb{N}_{0}^{d}\), and \(\mathcal{G}^{d,m}_{\mathcal{D}}\) is the set of points \(\vec{S}^{\mathcal{D},m}_{\vec{j}}\in \mathcal{D}\) written in the form of

$$ \vec{S}^{\mathcal{D},m}_{\vec{j}} := \biggl(\frac{U_{1}-L_{1}}{2}x_{m,j_{1}}+ \frac{U_{1}+L_{1}}{2},\ldots,\frac{U_{d}-L_{d}}{2}x_{m,j_{d}}+ \frac{U_{d}+L_{d}}{2} \biggr)^{T} $$

with \(\vec{j}=(j_{1},\ldots,j_{d})^{T}\in [m]_{0}^{d}\).

We can use the above polynomials for function approximation. Given \(\mathcal{D}\) as in (4) and \(m\in \mathbb{N}\), we define the Chebyshev interpolation of a function \(f:\mathcal{D}\rightarrow \mathbb{R}\) as

$$ \Pi _{\mathcal{D},m}[f](\vec{S}) := \sum_{\vec{l}\in [m]_{0}^{d}} a_{f, \vec{l}} \widetilde{T}_{\mathcal{D},\vec{l}} \ (\vec{S}) $$

for every \(\vec{S}\in \mathcal{D}\), where the coefficient \(a_{f,\vec{l}}\) is calculated by

$$ a_{f,\vec{l}} := \frac{2^{\aleph (\vec{l} )}}{(m+1)^{d}} \sum_{\vec{S}\in \mathcal{G}^{d,m}_{\mathcal{D}}} f(\vec{S}) \widetilde{T}_{\mathcal{D},\vec{l}} \ (\vec{S}) $$

for every \(\vec{l}\in [m]_{0}^{d}\). This is in fact an interpolation, since \(\Pi _{\mathcal{D},m}[f](\vec{S})=f(\vec{S})\) for every node \(\vec{S}\in \mathcal{G}^{d,m}_{\mathcal{D}}\).
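For d = 1, the coefficient formula and the interpolation property can be checked directly; f = exp and the interval and degree below are illustrative choices:

```python
import numpy as np

# interpolate f on D = [L, U] with degree m (illustrative choices, d = 1)
L, U, m = 0.5, 2.0, 10
f = np.exp  # a smooth test function

j = np.arange(m + 1)
x_nodes = np.cos((j + 0.5) * np.pi / (m + 1))      # nodes on [-1, 1]
S_nodes = 0.5 * (U - L) * x_nodes + 0.5 * (U + L)  # mapped to [L, U]

# coefficients a_{f,l} = 2^{aleph(l)} / (m+1) * sum_j f(S_j) T_l(x_j);
# for d = 1, aleph(l) is 0 if l = 0 and 1 if l > 0
fvals = f(S_nodes)
a = np.array([(2.0 if l > 0 else 1.0) / (m + 1)
              * np.sum(fvals * np.cos(l * np.arccos(x_nodes)))
              for l in range(m + 1)])

def interp(S):
    """Chebyshev interpolation Pi_{D,m}[f](S)."""
    x = (2 * S - U - L) / (U - L)                  # theta_D(S)
    return sum(a[l] * np.cos(l * np.arccos(x)) for l in range(m + 1))

# Pi[f] reproduces f exactly at the nodes, and very closely in between
assert np.allclose([interp(S) for S in S_nodes], fvals)
assert abs(interp(1.3) - f(1.3)) < 1e-9
```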

The error of the above approximation has been investigated in [47, 51], which gave error bounds under an assumption on the analyticity of the interpolated function f. We here present the theorem on this error in the case where we are given the values of f at the Chebyshev nodes with some errors [47]. First, let us make some definitions. For \(\rho \in \mathbb{R}_{>1}\), the Bernstein ellipse \(\mathcal{B}_{\rho }\) is defined as the open region in the complex plane bounded by the ellipse \(\{ \frac{1}{2} (u+u^{-1} ) \mid u\in \mathbb{C}, |u|=\rho \} \). We also define the generalized Bernstein ellipse as \(\mathcal{B}_{\mathcal{D},\rho } := (\eta _{1}\circ \mathcal{B}_{ \rho } )\times \cdots \times (\eta _{d}\circ \mathcal{B}_{ \rho } )\), where, for every \(i\in [d]\), \(\eta _{i}\) is the map from \(\mathbb{C}\) to \(\mathbb{C}\) defined as \(\eta _{i}(z):=\frac{U_{i}-L_{i}}{2}z+\frac{U_{i}+L_{i}}{2}\). Furthermore, we define the multivariate version of the Lebesgue constant of the Chebyshev nodes: for every \(m\in \mathbb{N}\),

$$ \Lambda _{d,m}:=\max_{(x_{1},\ldots,x_{d})^{T}\in [-1,1]^{d}} \sum _{(j_{1},\ldots,j_{d})^{T} \in [m]_{0}^{d}}\prod_{i=1}^{d} \bigl\vert \ell ^{m}_{j_{i}}(x_{i}) \bigr\vert , $$

where

$$ \ell ^{m}_{j}(x):=\prod_{k\in [m]_{0}\setminus \{j\}} \frac{x-x_{m,k}}{x_{m,j}-x_{m,k}} $$

for every \(j\in [m]_{0}\). As [47] showed,

$$ \Lambda _{d,m} \le \prod_{i=1}^{d} \biggl(\frac{2}{\pi }\log (m+1)+1 \biggr) $$

holds, which is derived from the well-known upper bound \(\Lambda _{1,m} \le \frac{2}{\pi }\log (m+1)+1\) [43]. Then, the theorem is as follows.Footnote 6

Theorem 1

Let d and m be positive integers. Let \(\mathcal{D}\) be a hyper-rectangle like (4). Let \(f:\mathcal{D}\rightarrow \mathbb{R}\) be a function that has an analytic extension to \(\mathcal{B}_{\mathcal{D},\rho }\) for some \(\rho \in \mathbb{R}_{>1}\). Besides, assume that \(\sup_{\vec{s}\in \mathcal{B}_{\mathcal{D},\rho }}|f(\vec{s})|\le B\) for some \(B\in \mathbb{R}\). Moreover, suppose that we are given a real number \(\hat{f}_{\vec{j}}\) for every \(\vec{j}\in [m]_{0}^{d}\), and that there exists \(\epsilon \in \mathbb{R}\) such that

$$ \bigl\vert f \bigl(\vec{S}^{\mathcal{D},m}_{\vec{j}} \bigr)- \hat{f}_{ \vec{j}} \bigr\vert \le \epsilon $$

holds for every \(\vec{j}\in [m]_{0}^{d}\). Then,

$$ \max_{\vec{S}\in \mathcal{D}} \bigl\vert f(\vec{S})-\tilde{f}(\vec{S}) \bigr\vert \le \epsilon ^{\mathrm{int}}(\rho ,d,m,B)+\Lambda _{d,m} \epsilon $$

holds. Here, for every \(\vec{S}\in \mathcal{D}\), \(\tilde{f}(\vec{S})\) is defined as

$$ \tilde{f}(\vec{S}) := \sum_{\vec{l}\in [m]_{0}^{d}} \tilde{a}_{ \vec{l}} \widetilde{T}_{\mathcal{D},\vec{l}} \ (\vec{S}), $$

with the coefficients \(\tilde{a}_{\vec{l}}\) calculated by

$$ \tilde{a}_{\vec{l}} := \frac{2^{\aleph (\vec{l} )}}{(m+1)^{d}}\sum _{\vec{j}\in [m]_{0}^{d}} \hat{f}_{\vec{j}}\widetilde{T}_{\mathcal{D},\vec{l}} \ \bigl(\vec{S}^{ \mathcal{D},m}_{\vec{j}} \bigr) $$

for every \(\vec{l}\in [m]_{0}^{d}\), and

$$ \epsilon ^{\mathrm{int}}(\rho ,d,m,B):=2^{\frac{d}{2}+1}\sqrt{d}B\rho ^{-m} \bigl(1-\rho ^{-2} \bigr)^{-\frac{d}{2}}. $$
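The dominant term \(\epsilon ^{\mathrm{int}}\propto \rho ^{-m}\) predicts geometric decay of the interpolation error in m for analytic f. The following sketch observes this decay for f = exp on \([-1,1]\) (an illustrative choice):

```python
import numpy as np

# interpolate f = exp on [-1, 1] and record the max error as m grows
f = np.exp
x_test = np.linspace(-1.0, 1.0, 2001)

def cheb_interp_error(m):
    """Max |f - Pi_m[f]| on a fine grid for degree-m Chebyshev interpolation."""
    j = np.arange(m + 1)
    nodes = np.cos((j + 0.5) * np.pi / (m + 1))
    fvals = f(nodes)
    a = np.array([(2.0 if l > 0 else 1.0) / (m + 1)
                  * np.sum(fvals * np.cos(l * np.arccos(nodes)))
                  for l in range(m + 1)])
    approx = sum(a[l] * np.cos(l * np.arccos(x_test)) for l in range(m + 1))
    return np.max(np.abs(approx - f(x_test)))

errs = [cheb_interp_error(m) for m in (2, 4, 6, 8)]
# each degree increase shrinks the error by a large constant factor
assert all(errs[i + 1] < errs[i] / 10 for i in range(len(errs) - 1))
```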

Bermudan option pricing

General formulation

In this paper, we consider pricing a Bermudan option with \(d\in \mathbb{N}\) underlying assets and \(K\in \mathbb{N}\) exercise dates \(t_{1}, \ldots, t_{K}\), which satisfy \(t_{0}< t_{1}<\cdots<t_{K}\) with \(t_{0}:=0\) being the present and \(t_{K}:=T\in \mathbb{R}_{+}\) being the final maturity. This is formulated as follows. Under some filtered probability space \((\Omega ,\mathcal{F},(\mathcal{F}_{t})_{t\ge 0},\mathbb{P})\), consider the \(\mathcal{S}\)-valued Markov process \(\vec{S}(t):=(S_{1}(t),\ldots,S_{d}(t))^{T}\), where \(\mathcal{S}\) is a subset of \(\mathbb{R}^{d}\) equipped with the Borel σ-algebra inherited from \(\mathbb{R}^{d}\), and \(\vec{S}_{0}:=\vec{S}(0)\) is deterministic. \(\vec{S}(t)\) corresponds to the values of the underlying asset prices at time t, or transformations of them by some function (for example, the logarithms of the asset prices). We are mainly interested in its values at \(t_{1},\ldots, t_{K}\), that is, the discrete-time process \(\vec{S}_{k}=(S_{1,k},\ldots,S_{d,k})^{T}:=\vec{S}(t_{k})\), \(k\in [K]_{0}\). We hereafter denote an instance of this process, which is a \((K+1)\)-tuple of elements of \(\mathcal{S}\), as \(\mathbf{S}=(\vec{S}_{0},\vec{S}_{1},\ldots,\vec{S}_{K})\). Besides, suppose that we are given a function \(f^{\mathrm{pay}}_{k}\in L^{2}(\mathcal{S},\rho _{k})\) for every \(k\in [K]\), where \(\rho _{k}\) is the image probability measure on \(\mathcal{S}\) induced by \(\vec{S}_{k}\). This corresponds to the payoff arising from exercise at \(t_{k}\). Although we assume for simplicity that the risk-free rate is 0, we can consider \(f^{\mathrm{pay}}_{k}\) to be the discounted payoff, that is, the product of the payoff and the discount factor. Then, the price of the Bermudan option at \(t_{k}\) with \(\vec{S}_{k}=\vec{s}\in \mathcal{S}\) is given as

$$ V_{k}(\vec{s}):=\sup_{\tau \in \mathcal{T}_{k}} \mathbb{E} \bigl[f^{\mathrm{pay}}_{\tau }(\vec{S}_{\tau })\mid \vec{S}_{k}=\vec{s}\bigr], $$

where \(\mathbb{E}[\cdot ]\) denotes the (conditional or unconditional) expected value with respect to \(\mathbb{P}\), and \(\mathcal{T}_{k},k\in [K]\) is the set of all \(\{k,\ldots,K\}\)-valued stopping times. In particular, the present option price is

$$ V_{0} :=\sup_{\tau \in \mathcal{T}} \mathbb{E} \bigl[f^{\mathrm{pay}}_{\tau }(\vec{S}_{\tau })\bigr], $$

where \(\mathcal{T}=\mathcal{T}_{1}\).

The problem to find \(V_{0}\) can be written as a kind of dynamic programming, that is,

$$ V_{k}(\vec{s})= \textstyle\begin{cases} f^{\mathrm{pay}}_{K}(\vec{s}) ;& k=K, \\ \max \{f^{\mathrm{pay}}_{k}(\vec{s}),Q_{k}(\vec{s})\} ;& k=1,\ldots,K-1 \end{cases} $$

for every \(\vec{s}\in \mathcal{S}\), and

$$ V_{0} =\mathbb{E}\bigl[V_{1}(\vec{S}_{1}) \bigr]. $$

Here, for every \(k\in [K-1]\) and \(\vec{s}\in \mathcal{S}\),

$$ Q_{k}(\vec{s}):=\mathbb{E}\bigl[V_{k+1}( \vec{S}_{k+1})\mid \vec{S}_{k}=\vec{s}\bigr] $$

is called the continuation value. This corresponds to the option price at \(t_{k}\) in the case that the option has not been exercised at that time and that \(\vec{S}_{k}=\vec{s}\).

Note that this problem can be seen as finding the optimal exercise date \(\tau _{\mathrm{op}}\in \mathcal{T}\), which maximizes (20). This can be recursively determined as

$$ \begin{aligned} &\tau _{K} = K, \\ &\tau _{k} = k1_{f^{\mathrm{pay}}_{k}(\vec{S}_{k})\ge Q_{k}(\vec{S}_{k})} + \tau _{k+1}1_{f^{\mathrm{pay}}_{k}(\vec{S}_{k})< Q_{k}(\vec{S}_{k})}, \quad k \in [K-1] \end{aligned} $$

and \(\tau _{\mathrm{op}}=\tau _{1}\). Also note that

$$ Q_{k}(\vec{s})=\mathbb{E}\bigl[f^{\mathrm{pay}}_{\tau _{k+1}}( \vec{S}_{\tau _{k+1}})\mid \vec{S}_{k}=\vec{s}\bigr], $$

for every \(k\in [K-1]\), which means that \(Q_{k}\) is the expected value of the payoff under the exercise strategy \(\tau _{k+1}\).
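The backward recursion above can be run exactly on any model with finitely many states. The following sketch does so on a toy recombining binomial model with hypothetical parameters (this discrete model is purely an illustrative stand-in for the general formulation):

```python
import numpy as np

# hypothetical up/down factors and risk-neutral probability; zero rate
u, d, p = 1.1, 0.9, 0.5
S0, strike, K = 100.0, 100.0, 3   # K exercise dates

def payoff(s):                    # Bermudan put payoff f^pay
    return np.maximum(strike - s, 0.0)

# V_K = f^pay_K at the final maturity; node i has price S0 * u^i * d^(K-i)
V = payoff(S0 * u ** np.arange(K + 1) * d ** (K - np.arange(K + 1)))
for k in range(K - 1, 0, -1):     # backward induction for k = K-1, ..., 1
    S = S0 * u ** np.arange(k + 1) * d ** (k - np.arange(k + 1))
    Q = p * V[1:] + (1 - p) * V[:-1]   # continuation value Q_k
    V = np.maximum(payoff(S), Q)       # V_k = max(f^pay_k, Q_k)
V0 = p * V[1] + (1 - p) * V[0]         # V_0 = E[V_1(S_1)]
assert 0.0 < V0 <= strike
```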

Least squares Monte Carlo

We here explain LSM [29] as an algorithm for Bermudan option pricing. This is one of the methods widely used in practical business, and theoretical error bounds on the price given by the method have been investigated [52–61].

Omitting some technical details, we describe the outline of LSM as follows. As a preparation, for every \(k\in [K-1]\), we determine a set of functions \(\mathcal{H}_{k}\subseteq L^{2}(\mathcal{S},\rho _{k})\) for approximating the continuation value \(Q_{k}\). One common choice is \(\mathcal{H}_{k}=\mathcal{R}_{d,m}\), the set of all real-coefficient polynomials on \(\mathbb{R}^{d}\) of degree at most \(m\in \mathbb{N}\), and we hereafter consider this choice. Next, we generate \(N_{\mathrm{samp}}\in \mathbb{N}_{\ge 2}\) sample paths of the underlying asset prices, denoted by \(\mathbf{S}_{i}=(\vec{S}_{0},\vec{S}^{(i)}_{1},\ldots,\vec{S}^{(i)}_{K})\), \(i\in [N_{\mathrm{samp}}]\). Then, we determine a stopping time approximating the optimal one \(\tau _{\mathrm{op}}\) by the following backward procedure. First, we set \(\widehat{\tau }_{K,i}=K\) for every \(i\in [N_{\mathrm{samp}}]\). For \(k\in [K-1]\), given \(\widehat{\tau }_{k+1,i}\) for every \(i\in [N_{\mathrm{samp}}]\), we determine the approximate continuation value \(\widehat{Q}_{k}\) as the element \(g_{k}\) of \(\mathcal{R}_{d,m}\) that minimizes

$$ \frac{1}{N_{\mathrm{samp}}}\sum_{i=1}^{N_{\mathrm{samp}}} \bigl(g_{k}\bigl(\vec{S}_{k}^{(i)} \bigr)-f^{ \mathrm{pay}}_{\widehat{\tau }_{k+1,i}}\bigl(\vec{S}^{(i)}_{\widehat{\tau }_{k+1,i}} \bigr) \bigr)^{2}, $$

or, in other words, best fits the realized payoffs under the stopping times \(\widehat{\tau }_{k+1,i}\) on the sample paths. It is guaranteed by statistical learning theory that fitting to the sample values of the payoff, which are distributed around the continuation value, yields an approximation of the continuation value [55, 57, 59–61]. Then, we set

$$ \widehat{\tau }_{k,i}= \textstyle\begin{cases} k ;& \text{if } \widehat{Q}_{k}(\vec{S}^{(i)}_{k})\le f^{\mathrm{pay}}_{k}( \vec{S}^{(i)}_{k}), \\ \widehat{\tau }_{k+1,i} ;& \text{otherwise} \end{cases} $$

for every \(i\in [N_{\mathrm{samp}}]\). By repeating this until we reach \(k=1\), we get \(\widehat{\tau }_{1,i}\), and finally

$$ \widehat{V}_{0}:=\frac{1}{N_{\mathrm{samp}}}\sum _{i=1}^{N_{\mathrm{samp}}}f^{ \mathrm{pay}}_{\widehat{\tau }_{1,i}} \bigl(\vec{S}^{(i)}_{\widehat{\tau }_{1,i}}\bigr) $$

as an approximation of \(V_{0}\).
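The whole procedure can be sketched compactly. The following implementation prices a single-asset Bermudan put under geometric Brownian motion; the model, parameters, and cubic polynomial basis are illustrative choices, not ones prescribed here:

```python
import numpy as np

# illustrative single-asset GBM setting (all parameters hypothetical)
rng = np.random.default_rng(0)
S0, strike, sigma, T, K, n_paths = 100.0, 100.0, 0.2, 1.0, 4, 20000
dt = T / K

# sample paths S^{(i)}_k at the exercise dates t_k = k * dt (zero risk-free rate)
z = rng.standard_normal((n_paths, K))
logS = np.log(S0) + np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z,
                              axis=1)
S = np.exp(logS)                                # shape (n_paths, K)

payoff = lambda s: np.maximum(strike - s, 0.0)  # Bermudan put payoff

# backward induction: cash[i] is the payoff on path i under tau-hat_{k+1}
cash = payoff(S[:, -1])                         # tau-hat_K = K on every path
for k in range(K - 2, -1, -1):
    X = np.vander(S[:, k], 4)                   # cubic polynomial basis (m = 3)
    coef, *_ = np.linalg.lstsq(X, cash, rcond=None)
    Qhat = X @ coef                             # approximate continuation value
    # exercise where the payoff is positive and not smaller than Q-hat
    ex = (payoff(S[:, k]) > 0) & (payoff(S[:, k]) >= Qhat)
    cash = np.where(ex, payoff(S[:, k]), cash)
V0 = cash.mean()                                # estimate of V_0
assert 0.0 < V0 < strike
```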

Let us make some comments on the procedure. First, note that we assumed that we can generate the sample paths \(\mathbf{S}_{i}\). In the usual situation, where the stochastic differential equation (SDE) for \(\vec{S}(t)\) is given, we can use a numerical method for simulating SDEs, such as the Euler–Maruyama method. Second, we mention how to find the \(g_{k}\) minimizing (26). Note that this is just least-squares linear regression, since \(\mathcal{R}_{d,m}\) is a vector space. Therefore, we can solve it by various methods, for example by solving the normal equation of linear regression, by numerical optimization, and so on.

Finally, let us mention the relationship between the error and the sample number in LSM. According to [61], under some technical assumptions and taking an appropriately large m, we obtain an error bound on the option price which scales as

$$ \mathbb{E}_{\mathrm{samp}}\bigl[ \vert \widehat{V}_{0}-V_{0} \vert \bigr] =\widetilde{O} \bigl( (N_{\mathrm{samp}} )^{-\frac{n(p-2)}{2n(p+2)+d(p-2)}} \bigr). $$

Here, \(\mathbb{E}_{\mathrm{samp}}[\cdot ]\) denotes the expectation with respect to the randomness of the samples, and n and p are quantities characterizing the smoothness of \(Q_{1}(\vec{S}),\ldots, Q_{K-1}(\vec{S})\) and the boundedness of the norms of \(f^{\mathrm{pay}}_{1}(\vec{S}),\ldots, f^{\mathrm{pay}}_{K}(\vec{S})\), respectively (see [61] for more details). For larger p and n, the RHS of (29) decreases faster as \(N_{\mathrm{samp}}\) increases. In the limit \(n,p\rightarrow \infty \), which means that the \(Q_{k}\)'s are highly smooth and the norms of the \(f^{\mathrm{pay}}_{k}\)'s are well bounded, the RHS of (29) becomes \(\widetilde{O}(N_{\mathrm{samp}}^{-1/2})\), which coincides with the well-known error decay rate of Monte Carlo integration. Equivalently, in this limit, it is sufficient to take \(\widetilde{O}(\epsilon ^{-2})\) samples in order to achieve the error tolerance ϵ.

Quantum amplitude estimation and quantum algorithm for Monte Carlo integration

Quantum amplitude estimation (QAE)

We here briefly review QAE. Consider the system consisting of a quantum register \(R_{1}\) and a qubit \(R_{2}\). Suppose that we are given the oracle A, which transforms \(| 0 \rangle | 0 \rangle \), the state in which all qubits in \(R_{1}\) and \(R_{2}\) are set to \(| 0 \rangle \), into

$$ A| 0 \rangle | 0 \rangle =\sqrt{a} | \psi _{1} \rangle | 1 \rangle +\sqrt{1-a} | \psi _{0} \rangle | 0 \rangle =:| \Psi \rangle $$

with some \(a\in (0,1)\). Here, the first and second kets correspond to \(R_{1}\) and \(R_{2}\), respectively, and \(| \psi _{0} \rangle \), \(| \psi _{1} \rangle \) are some quantum states. Then, our goal is to estimate a, which is the probability of obtaining 1 in \(R_{2}\) when we measure \(| \Psi \rangle \), with error tolerance ϵ. There exist algorithms which output such an estimate with \(O(\epsilon ^{-1})\) calls to A and its inverse \(A^{\dagger }\) in total [31, 33–36]. Although these QAE algorithms output a number close to a not with certainty but with some probability, we can enhance the success probability by running QAE many times and taking the median of the outputs [30, 62]. Let us define the \((N_{\mathrm{QAE}},N_{\mathrm{rep}})\)-QAE as the method for estimating a which runs \(N_{\mathrm{rep}}\) rounds of QAE and makes \(N_{\mathrm{QAE}}\) calls to A and \(A^{\dagger }\) in total in each round. Obviously, in the \((N_{\mathrm{QAE}},N_{\mathrm{rep}})\)-QAE, A and \(A^{\dagger }\) are called \(N_{\mathrm{QAE}}N_{\mathrm{rep}}\) times in total. Then, combining Theorem 12 in [33] and Lemma 6.1 in [62], we obtain the following theorem.

Theorem 2

Suppose that we are given access to A in (30) and its inverse \(A^{\dagger }\). Then, for any \(\gamma \in (0,1)\) and \(\epsilon \in (0,0.1)\), an \((N_{\mathrm{QAE}},N_{\mathrm{rep}})\)-QAE, where the positive integers \(N_{\mathrm{QAE}}\) and \(N_{\mathrm{rep}}\) satisfy

$$ N_{\mathrm{QAE}} \ge \frac{7}{\epsilon } $$

and

$$ N_{\mathrm{rep}} \ge 12\bigl\lceil \log \bigl(\gamma ^{-1}\bigr) \bigr\rceil + 1 $$

respectively, outputs \(\tilde{a}\in \mathbb{R}\) such that

$$ \vert \tilde{a}-a \vert \le \frac{7}{N_{\mathrm{QAE}}}\le \epsilon , $$

where a is defined as (30), with probability higher than \(1-\gamma \).

Here, (31) is derived from the inequality in Theorem 12 in [33] with \(k=1\), that is,

$$ \vert \tilde{a}-a \vert \le \frac{2\pi \sqrt{a(1-a)}}{M} + \frac{\pi ^{2}}{M^{2}}, $$

where \(M=N_{\mathrm{QAE}}/2\) under the current definition. Using \(\sqrt{a(1-a)}\le \frac{1}{2}\), we see that (34) implies \(|\tilde{a}-a|\le 7/N_{\mathrm{QAE}}\) for \(N_{\mathrm{QAE}}\ge 70\), which follows from (31) and \(0<\epsilon <0.1\). In summary, if \(N_{\mathrm{QAE}}\) satisfies (31) for \(\epsilon \in (0,0.1)\), the error of QAE is suppressed to at most ϵ with high probability. Hereafter, we say that an \((N_{\mathrm{QAE}},N_{\mathrm{rep}})\)-QAE succeeded if it outputs ã such that \(|\tilde{a}-a|\le 7/N_{\mathrm{QAE}}\).
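The role of \(N_{\mathrm{rep}}\) can be sanity-checked classically. Assuming each round independently lands within \(7/N_{\mathrm{QAE}}\) of a with probability at least \(8/\pi ^{2}\approx 0.81\) (the success probability of amplitude estimation with \(k=1\) in [33]), the median fails only if at most half of the rounds succeed, which is a binomial tail event:

```python
from math import ceil, comb, log

# each round succeeds (lands within 7 / N_QAE of a) independently with
# probability at least q = 8 / pi^2 ~ 0.81; the median of the rounds fails
# only if at most half of them succeed, so the binomial tail bounds the failure
def median_failure_bound(n_rep, q=0.81):
    return sum(comb(n_rep, s) * q**s * (1 - q)**(n_rep - s)
               for s in range(n_rep // 2 + 1))

gamma = 1e-3
n_rep = 12 * ceil(log(1 / gamma)) + 1   # N_rep from the bound in Theorem 2
assert median_failure_bound(n_rep) < gamma
```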

Quantum algorithm for Monte Carlo integration

One application of QAE is Monte Carlo integration, that is, the calculation of expected values. Suppose that we want to calculate \(\mathbb{E}[F(\vec{X})]\), the expected value of \(F(\vec{X})\), where X⃗ is some real vector-valued random variable and F is a real-valued function acting on X⃗. We also assume that the range of F is contained in \([0,1]\); if not, we make this hold by adding and/or multiplying some constants to F. Furthermore, suppose that we can use the following oracles \(O_{\vec{X}}\) and \(O_{F}\). \(O_{\vec{X}}\) is the oracle that generates the state in which the distribution of X⃗ is encoded. That is, \(O_{\vec{X}}\) operates on a quantum register and transforms the state with all qubits set to \(| 0 \rangle \) into

$$ O_{\vec{X}}| 0 \rangle =\sum_{i=1}^{N_{\vec{X}}} \sqrt{p_{i}} | \vec{x}_{i} \rangle , $$

where \(\vec{x}_{1},\ldots, \vec{x}_{N_{\vec{X}}}\) are the \(N_{\vec{X}}\in \mathbb{N}\) possible values of X⃗ and \(p_{i},i\in [N_{\vec{X}}]\) is the probability that \(\vec{X}=\vec{x}_{i}\). Here, we assume that the set of all values that X⃗ can take is finite; if X⃗ is continuous, we need some discretization. How to create the states corresponding to widely used distributions, such as the normal distribution, has been investigated [13, 63]. The second oracle \(O_{F}\) operates on a two-register system and, using the first register as the input x⃗, outputs \(F(\vec{x})\) into the second register. That is, for any x⃗ in the domain of F,

$$ O_{F}| \vec{x} \rangle | 0 \rangle =| \vec{x} \rangle | F(\vec{x}) \rangle . $$

By these oracles, the following computation is possible. Preparing two registers \(R_{1}\), \(R_{2}\) and a qubit \(R_{3}\), and initializing all of them to \(| 0 \rangle \), we perform

$$\begin{aligned}& | 0 \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow \sum_{i=1}^{N_{\vec{X}}} \sqrt{p_{i}} | \vec{x}_{i} \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow \sum_{i=1}^{N_{\vec{X}}} \sqrt{p_{i}} | \vec{x}_{i} \rangle \bigl| F( \vec{x}_{i}) \bigr\rangle | 0 \rangle \\& \quad \rightarrow \sum_{i=1}^{N_{\vec{X}}} \sqrt{p_{i}} | \vec{x}_{i} \rangle \bigl| F( \vec{x}_{i}) \bigr\rangle \bigl(\sqrt{F(\vec{x}_{i})} | 1 \rangle + \sqrt{1-F(\vec{x}_{i})}| 0 \rangle \bigr), \end{aligned}$$

where the first, second, and third kets correspond to \(R_{1}\), \(R_{2}\), and \(R_{3}\), respectively. We apply \(O_{\vec{X}}\) and \(O_{F}\) at the first and second arrows, respectively. The transformation at the third arrow is done by arithmetic circuits [64] and controlled rotation gates. Note that the probability of obtaining 1 in \(R_{3}\) when we measure the final state in (37) is \(\sum_{i=1}^{N_{\vec{X}}}p_{i}F(\vec{x}_{i})\), that is, \(\mathbb{E}[F(\vec{X})]\). Therefore, using the whole operation in (37) as the oracle A, we can estimate \(\mathbb{E}[F(\vec{X})]\) by QAE.
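The key identity behind this construction can be emulated classically on a toy distribution (the distribution and the function F below are illustrative): the squared amplitudes on the \(| 1 \rangle \) branch of \(R_{3}\) sum exactly to \(\mathbb{E}[F(\vec{X})]\).

```python
import numpy as np

# toy distribution and payoff-like function with range in [0, 1]
x = np.array([0.0, 1.0, 2.0, 3.0])      # possible values of X
p = np.array([0.1, 0.2, 0.3, 0.4])      # P(X = x_i), sums to 1
F = lambda v: (v / 3.0) ** 2            # F with range contained in [0, 1]

# the amplitude of |x_i>|F(x_i)>|1> in the final state is sqrt(p_i * F(x_i))
amp1 = np.sqrt(p) * np.sqrt(F(x))
prob_one = np.sum(amp1 ** 2)            # probability of measuring 1 on R3

assert np.isclose(prob_one, np.sum(p * F(x)))   # equals E[F(X)]
```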

Bermudan option pricing by Chebyshev interpolation and QAE

Now, let us present the method for Bermudan option pricing by Chebyshev interpolation and QAE.


We begin by making some assumptions necessary to execute the proposed method. The first one is as follows.

Assumption 1

We are given the access to the oracle \(O_{{\mathrm{step}},k}\), which generates the state corresponding to the probability distribution of \(\vec{S}_{k+1}\) conditional on \(\vec{S}_{k}\). That is, for every \(k\in [K-1]_{0}\) and \(\vec{S}\in \mathcal{S}\),

$$ O_{{\mathrm{step}},k}:| \vec{S} \rangle | 0 \rangle \mapsto \sum _{\vec{s}\in \widetilde{\mathcal{S}}_{k+1}(\vec{S})}\sqrt{p_{k+1}(\vec{s};\vec{S})} | \vec{S} \rangle | \vec{s} \rangle , $$

where \(\widetilde{\mathcal{S}}_{k+1}(\vec{S})\) is the set of possible values of \(\vec{S}_{k+1}\) under the condition that \(\vec{S}_{k}=\vec{S}\), and

$$ p_{k+1}(\vec{s};\vec{S}):=\mathbb{P} (\vec{S}_{k+1}= \vec{s} \mid \vec{S}_{k}=\vec{S} ). $$

We here make some comments on how to implement \(O_{{\mathrm{step}},k}\). As mentioned in Sect. 3, usually, following some SDE and some numerical method such as Euler–Maruyama, we can generate random sample values of \(\vec{S}_{k+1}\) with the given value of \(\vec{S}_{k}\) as the initial condition. Implementations of such a calculation on quantum circuits have been discussed in previous papers [7, 9, 13]. That is, we can prepare the states corresponding to some (discretely approximated) random variables (e.g. standard normal) on other registers, and, using them at discretized time steps, generate the path of \(\vec{S}(t)\) from \(t_{k}\) to \(t_{k+1}\). This yields a state like (38). We should also note that Assumption 1 assumes that \(\vec{S}_{k+1}\) can take only a finite number of values for a fixed \(\vec{S}_{k}\). This is not the case in most models of \(\vec{S}(t)\), in which it takes continuous values. However, under the aforementioned implementations of the time evolution of \(\vec{S}(t)\), in which both time and the random variables are discretely approximated, the number of possible values of \(\vec{S}_{k+1}\) necessarily becomes finite.
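Classically, the computation that \(O_{{\mathrm{step}},k}\) would reproduce coherently looks as follows; a driftless geometric Brownian motion with hypothetical parameters is used purely for illustration:

```python
import numpy as np

# Euler-Maruyama evolution from t_k to t_{k+1} with n_steps sub-steps,
# mirroring the discretized time evolution described in the text
# (driftless GBM dS = sigma * S dW; all parameters hypothetical)
rng = np.random.default_rng(1)
sigma, t_k, t_k1, n_steps = 0.2, 0.0, 0.25, 8
dt = (t_k1 - t_k) / n_steps

def evolve(s_k, n_paths=100000):
    """Sample values of S_{k+1} conditional on S_k = s_k."""
    s = np.full(n_paths, s_k)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)   # the per-step normal variates
        s = s + sigma * s * np.sqrt(dt) * z
    return s

samples = evolve(100.0)
# zero drift: the conditional mean of S_{k+1} stays at S_k
assert abs(samples.mean() - 100.0) < 0.5
```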

Hereafter, we are mainly interested in the number of calls to \(O_{{\mathrm{step}},k}\) in calculating the option price as a measure of complexity, since calculation for time evolution of underlying asset prices is typically the most time-consuming part in option pricing.

The second assumption is as follows. Here, \(\mathcal{I}_{\mathcal{A}}\) denotes the set of all real-valued functions on a given subset \(\mathcal{A}\subseteq \mathbb{R}^{d}\).

Assumption 2

For every \(k\in [K-1]\), we are given the following

  • the hyper-rectangle \(\mathcal{D}_{k}:=[L_{1,k},U_{1,k}]\times \cdots\times [L_{d,k},U_{d,k}] \subseteq \mathcal{S}\), with \(L_{1,k},\ldots,L_{d,k},U_{1,k},\ldots,U_{d,k}\in \mathbb{R}\) satisfying \(L_{1,k}< U_{1,k},\ldots , L_{d,k}< U_{d,k}\),

  • \(V^{\mathrm{OB}}_{k}\in \mathcal{I}_{\mathcal{S}\setminus \mathcal{D}_{k}}\)

such that the following (i) and (ii) are satisfied.

  (i)

    There exists \(\epsilon ^{\mathrm{OB}}_{k}\in \mathbb{R}_{+}\) such that either

    $$ \bigl\vert V^{\mathrm{OB}}_{k}(\vec{s})-V_{k}( \vec{s}) \bigr\vert < \epsilon ^{\mathrm{OB}}_{k} $$

    or

    $$ \bigl\vert \mathbf{F}_{k}[V_{k}]( \vec{s})-V_{k}(\vec{s}) \bigr\vert < \epsilon ^{\mathrm{OB}}_{k} $$

    is satisfied for any \(\vec{s}\in \mathcal{S}\setminus \mathcal{D}_{k}\). Here, \(\mathbf{F}_{k}[\cdot ]\) is the ‘flat extrapolation operator’ defined as

    $$\begin{aligned}& \mathbf{F}_{k}[F](\vec{s}) := F\bigl(b_{k}( \vec{s})\bigr), \end{aligned}$$
    $$\begin{aligned}& b_{k}(\vec{s}) := \bigl(\min \bigl\{ U_{1,k},\max \{L_{1,k},s_{1}\}\bigr\} ,\ldots, \min \bigl\{ U_{d,k},\max \{L_{d,k},s_{d}\}\bigr\} \bigr)^{T} \end{aligned}$$

    for any \(F\in \mathcal{I}_{\mathcal{D}_{k}}\) and \(\vec{s}=(s_{1},\ldots,s_{d})^{T}\in \mathcal{S}\).

  (ii)

    If, for some \(G\in \mathcal{I}_{\mathcal{D}_{k}}\), we have access to the oracle \(O_{G}\) such that

    $$ O_{G}| \vec{s} \rangle | 0 \rangle =| \vec{s} \rangle \bigl| G(\vec{s}) \bigr\rangle $$

    for any \(\vec{s}\in \mathcal{D}_{k}\), then we also have access to the oracle \(\widetilde{O}_{G}\), which acts as

    $$ \widetilde{O}_{G}| \vec{s} \rangle | 0 \rangle =| \vec{s} \rangle \bigl| \mathbf{G}_{k}[G](\vec{s}) \bigr\rangle . $$

    Here, \(\mathbf{G}_{k}[\cdot ]\) is defined as

    $$ \mathbf{G}_{k}[H](\vec{s}) := \textstyle\begin{cases} V^{\mathrm{OB}}_{k}(\vec{s}) ;& \text{if } \vec{s}\in \mathcal{A}_{k}, \\ \mathbf{F}_{k}[H](\vec{s}) ;& \text{otherwise} \end{cases} $$

    for any \(H\in \mathcal{I}_{\mathcal{D}_{k}}\) and \(\vec{s}\in \mathcal{S}\), where \(\mathcal{A}_{k}\) is a subset of \(\mathcal{S}\setminus \mathcal{D}_{k}\) such that (40) and (41) hold for any \(\vec{s}\in \mathcal{A}_{k}\) and any \(\vec{s}\in (\mathcal{S}\setminus \mathcal{D}_{k})\setminus \mathcal{A}_{k}\), respectively.

We also define \(\mathbf{G}_{K}[H](\vec{s}) := H(\vec{s})\) for any \(H\in \mathcal{I}_{\mathcal{S}}\) and \(\vec{s}\in \mathcal{S}\).

Roughly speaking, this assumption means that, when some of the underlying asset prices are extremely large or small, we can approximate the option value \(V_{k}\) by some known and easily computable function \(V^{\mathrm{OB}}_{k}\), or by the flat extrapolation of \(V_{k}\) from moderate underlying asset prices. Postponing the explanation of why this assumption is necessary to Sect. 5.2, we here see that it is actually satisfied in some typical option-pricing settings. For example, consider a basket put option, whose payoff function is \(f^{\mathrm{pay}}_{k}((s_{1},\ldots,s_{d})^{T})=\max \{\kappa -s_{1}-\cdots-s_{d},0 \}\) with some \(\kappa \in \mathbb{R}\) for every \(k\in [K]\), under some model in which \(S_{1}(t),\ldots, S_{d}(t)\) are unbounded from above but bounded from below, say, by 0, such as the Black-Scholes model. Then, in each of the following situations, (40) or (41) holds.

  • If some of \(S_{1,k},\ldots, S_{d,k}\) are extremely large, the option is far out of the money, and therefore its price is almost 0.

  • If some of \(S_{1,k},\ldots, S_{d,k}\) are smaller than the sufficiently small thresholds \(L_{1,k},\ldots,L_{d,k}\in \mathbb{R}_{+}\) respectively, but the others are not, setting the former to the thresholds hardly affects the option price.

  • If all of \(S_{1,k},\ldots, S_{d,k}\) are sufficiently close to 0, the option is exercised at \(t_{k}\), and therefore \(V_{k}(\vec{S}_{k})=f^{\mathrm{pay}}_{k}(\vec{S}_{k})\).
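For concreteness, the operators \(b_{k}\), \(\mathbf{F}_{k}\), and \(\mathbf{G}_{k}\) can be sketched for the two-asset basket put above as follows; the domain \(\mathcal{D}_{k}\), the region \(\mathcal{A}_{k}\) (here, the event that some price exceeds its upper bound), and the out-of-bounds value \(V^{\mathrm{OB}}_{k}=0\) are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative 2-asset basket put: payoff max(kappa - s1 - s2, 0).
# The bounds of D_k, the threshold region A_k, and V_k^OB = 0 below are
# hypothetical choices for illustration, not prescribed by the method.
L, U, kappa = np.array([1.0, 1.0]), np.array([200.0, 200.0]), 100.0

def b_k(s):
    """Project s onto the hyper-rectangle D_k = [L1,U1] x [L2,U2]."""
    return np.minimum(U, np.maximum(L, s))

def flat_extrapolation(F, s):
    """F_k[F](s) := F(b_k(s)): evaluate F at the projected point."""
    return F(b_k(s))

def G_k(F, s):
    """G_k[F]: use the known value V_k^OB (here 0, deep out of the money)
    on the region A_k, otherwise flat-extrapolate F from D_k."""
    if np.any(s > U):          # hypothetical choice of the region A_k
        return 0.0             # V_k^OB: deep out-of-the-money put is worth ~0
    return flat_extrapolation(F, s)

payoff = lambda s: max(kappa - s.sum(), 0.0)
```

For instance, for very small prices \(b_{k}\) lifts them to the lower bounds, so \(\mathbf{G}_{k}[f^{\mathrm{pay}}_{k}]\) hardly changes the payoff, in line with the second and third situations above.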

Thirdly, we make the following assumption, which is necessary for bounding the interpolation error in the proposed method.

Assumption 3

For every \(k\in [K-1]\), \(Q_{k}(\vec{S})\) has an analytic extension to \(\mathcal{B}_{\mathcal{D}_{k},\rho _{k}}\), where \(\mathcal{D}_{k}\) is given in Assumption 2 and \(\rho _{k}\) is some real number greater than 1, and

$$ \sup_{\vec{s}\in \mathcal{B}_{\mathcal{D}_{k},\rho _{k}}} \bigl\vert Q_{k}( \vec{s}) \bigr\vert \le B_{k} $$

holds, where \(B_{k}\) is some positive real number.

The proposed method

Under these assumptions, we can construct a procedure for Bermudan option pricing based on QAE and Chebyshev interpolation. This is a backward calculation similar to LSM; we sequentially calculate the approximate continuation value \(\widetilde{Q}_{k}\) and option price \(\widetilde{V}_{k}\) at \(t_{k}\), going from the final maturity to the present. Roughly, the outline is as follows. As preparation, for every \(k\in [K-1]\), we set \(m_{k}\in \mathbb{N}\), the degree of the Chebyshev polynomials used for the approximation, and the hyper-rectangle \(\mathcal{D}_{k}=[L_{1,k},U_{1,k}]\times \cdots \times [L_{d,k},U_{d,k}] \subseteq \mathcal{S}\). We begin the iterative calculation by setting \(\widetilde{V}_{K}(\vec{S}):=f^{\mathrm{pay}}_{K}(\vec{S})\) for every \(\vec{S}\in \mathcal{S}\). Then, for \(k\in [K-1]\), given \(\widetilde{V}_{k+1}\), we estimate by QAE the expected value of \(\widetilde{V}_{k+1}(\vec{S}_{k+1})\) under the condition that \(\vec{S}_{k}=\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}}\) for every Chebyshev node \(\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}}\), and denote the estimate by \(\widehat{Q}_{k,\vec{j}}^{\mathrm{QAE}}\). Using these, we construct \(\widetilde{Q}_{k}\), the Chebyshev interpolant approximating the continuation value, and set \(\widetilde{V}_{k}(\vec{S})=\max \{\widetilde{Q}_{k}(\vec{S}),f^{\mathrm{pay}}_{k}( \vec{S})\}\) for every \(\vec{S}\in \mathcal{S}\). We repeat these steps until we reach \(k=1\). Finally, we estimate the expected value of \(\widetilde{V}_{1}(\vec{S}_{1})\) by QAE again, and take the result as an approximation of \(V_{0}\).
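The outline above can be sketched classically as follows, with plain Monte Carlo averages standing in for the QAE estimates \(\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}}\) (so this sketch has no quantum speed-up); the one-dimensional Black-Scholes setting, the interpolation domain, and all numerical parameters are illustrative assumptions, and the per-step discounting is folded into the continuation value for simplicity.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)

# Hypothetical 1-dim Black-Scholes Bermudan put for illustration; classical
# Monte Carlo plays the role of QAE when estimating nodal continuation values.
S0, r, sigma, strike = 100.0, 0.02, 0.3, 100.0
K_ex, T, m = 4, 1.0, 8                  # exercise dates, horizon, Chebyshev degree
dt = T / K_ex
lo, hi = 40.0, 220.0                    # interpolation domain D_k (illustrative)
payoff = lambda s: np.maximum(strike - s, 0.0)

def step(s, n):
    """One exact Black-Scholes step of size dt from price s, n samples each."""
    z = rng.standard_normal((n,) + np.shape(s))
    return s * np.exp((r - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z)

# Chebyshev (Lobatto-type) nodes mapped to [lo, hi]
nodes = (hi + lo) / 2 + (hi - lo) / 2 * np.cos(np.pi * np.arange(m + 1) / m)

V_next = payoff                         # \tilde V_K = payoff at maturity
for k in range(K_ex - 1, 0, -1):
    # nodal continuation values; MC here stands in for the QAE estimates
    q_nodes = np.exp(-r * dt) * V_next(step(nodes, 20000)).mean(axis=0)
    Q = C.Chebyshev.fit(nodes, q_nodes, m, domain=[lo, hi])
    # \tilde V_k = max(Q, payoff); clipping realizes the flat extrapolation
    V_next = lambda s, Q=Q: np.maximum(Q(np.clip(s, lo, hi)), payoff(s))

price = np.exp(-r * dt) * V_next(step(S0, 200000)).mean()
```

With \(m+1\) nodes and degree \(m\), the fit is an exact interpolation of the nodal estimates; the resulting price is close to the at-the-money European put value (around 11 in this setting), slightly increased by the early-exercise right.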

The fully detailed procedure is shown in Algorithm 1.

Algorithm 1

The method for Bermudan option pricing based on Chebyshev interpolation and QAE

Some additional explanations are in order. The first concerns \(| \Psi _{k,\vec{j}} \rangle \) in Step 4. For every \(k\in [K-1]\) and \(\vec{j}\in \mathcal{J}_{k}\), given the approximation \(\widetilde{V}_{k+1}\in \mathcal{I}_{\mathcal{D}_{k+1}}\) of \(V_{k+1}\), we generate the state \(| \Psi _{k,\vec{j}} \rangle \) on an appropriate multi-register system, whose last register is a single qubit, by the following operation:

$$\begin{aligned}& | 0 \rangle | 0 \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow \bigl| \vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr\rangle | 0 \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow \bigl| \vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr\rangle \sum _{ \vec{s}\in \widetilde{\mathcal{S}}_{k+1} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} )} \sqrt{p_{k+1} \bigl(\vec{s}; \vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} \bigr)}| \vec{s} \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow \bigl| \vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr\rangle \sum _{ \vec{s}\in \widetilde{\mathcal{S}}_{k+1} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} )} \sqrt{p_{k+1} \bigl(\vec{s}; \vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} \bigr)}| \vec{s} \rangle \bigl| \mathbf{G}_{k+1}[\widetilde{V}_{k+1}](\vec{s}) \bigr\rangle | 0 \rangle \\& \quad \rightarrow \bigl| \vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr\rangle \sum _{ \vec{s}\in \widetilde{\mathcal{S}}_{k+1} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} )} \sqrt{p_{k+1} \bigl(\vec{s}; \vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} \bigr)}| \vec{s} \rangle \bigl| \mathbf{G}_{k+1}[\widetilde{V}_{k+1}](\vec{s}) \bigr\rangle \\& \qquad {} \otimes \biggl(\sqrt{\frac{1}{2}+ \frac{\mathbf{G}_{k+1}[\widetilde{V}_{k+1}](\vec{s})}{2\widetilde{V}_{k+1}^{\mathrm{max}}}} | 1 \rangle + \sqrt{\frac{1}{2}- \frac{\mathbf{G}_{k+1}[\widetilde{V}_{k+1}](\vec{s})}{2\widetilde{V}_{k+1}^{\mathrm{max}}}} | 0 \rangle \biggr) \\& \quad =: | \Psi _{k,\vec{j}} \rangle , \end{aligned}$$

where \(O_{{\mathrm{step}},k}\) in Assumption 1 and \(\widetilde{O}_{\widetilde{V}_{k+1}}\) in Assumption 2 are used at the second and third arrows, respectively. Note that the probability of obtaining 1 on the last qubit when measuring \(| \Psi _{k,\vec{j}} \rangle \) is

$$ P_{k,\vec{j}}=\frac{1}{2}+ \frac{\widehat{Q}_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )}{2\widetilde{V}_{k+1}^{\mathrm{max}}}, $$

where

$$\begin{aligned} \widehat{Q}_{k} (\vec{S} ) :=&\mathbb{E} \bigl[ \mathbf{G}_{k+1}[ \widetilde{V}_{k+1}]( \vec{S}_{k+1}) \mid \vec{S}_{k}=\vec{S} \bigr] \\ =&\sum_{\vec{s}\in \widetilde{\mathcal{S}}_{k+1} (\vec{S} )} p_{k+1} (\vec{s}; \vec{S} )\mathbf{G}_{k+1}[ \widetilde{V}_{k+1}](\vec{s}). \end{aligned}$$

Therefore, as long as \(\mathbf{G}_{k+1}[\widetilde{V}_{k+1}]\) is close to \(V_{k+1}\), \((2P_{k,\vec{j}}-1)\widetilde{V}^{\mathrm{max}}_{k+1}=\widehat{Q}_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\) is close to \(Q_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\). This is why we can obtain approximations of the continuation values at the Chebyshev nodes in Step 4, provided that the QAE errors are also small.
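The relation \((2P_{k,\vec{j}}-1)\widetilde{V}^{\mathrm{max}}_{k+1}=\widehat{Q}_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\) can be checked numerically on a toy discrete distribution; the probabilities and values below are arbitrary illustrative stand-ins for \(p_{k+1}(\vec{s};\vec{S})\) and \(\mathbf{G}_{k+1}[\widetilde{V}_{k+1}](\vec{s})\).

```python
import numpy as np

# Toy stand-ins for p_{k+1}(s; S) and G_{k+1}[V~_{k+1}](s); values illustrative.
p = np.array([0.2, 0.5, 0.3])            # transition probabilities
g = np.array([0.0, 4.0, 10.0])           # option values at the next date
v_max = 12.0                             # bound satisfying |g| <= v_max

# amplitude of |1> on the last qubit for branch s is sqrt(1/2 + g(s)/(2 v_max)),
# so measuring 1 has total probability sum_s p(s) * (1/2 + g(s)/(2 v_max))
P1 = np.dot(p, 0.5 + g / (2 * v_max))
q_hat = (2 * P1 - 1) * v_max             # recovered conditional expectation

assert np.isclose(q_hat, np.dot(p, g))   # equals E[g] = 5.0 here
```

QAE estimates \(P_{1}\) to additive accuracy \(O(1/N^{\mathrm{QAE}})\), and the affine map back to \(\widehat{Q}_{k}\) only rescales that error by \(2\widetilde{V}^{\mathrm{max}}_{k+1}\).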

Second, let us explain the state \(| \Psi _{0} \rangle \) in Step 9. Given \(\widetilde{V}_{1}\), we can generate \(| \Psi _{0} \rangle \) similarly to \(| \Psi _{k,\vec{j}} \rangle \) as

$$\begin{aligned}& | 0 \rangle | 0 \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow | \vec{S}_{0} \rangle | 0 \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow | \vec{S}_{0} \rangle \sum_{\vec{s}\in \widetilde{\mathcal{S}}_{1}(\vec{S}_{0})} \sqrt{p_{1}(\vec{s};\vec{S}_{0})} | \vec{s} \rangle | 0 \rangle | 0 \rangle \\& \quad \rightarrow | \vec{S}_{0} \rangle \sum_{\vec{s}\in \widetilde{\mathcal{S}}_{1}(\vec{S}_{0})} \sqrt{p_{1}(\vec{s};\vec{S}_{0})} | \vec{s} \rangle \bigl| \mathbf{G}_{1}[\widetilde{V}_{1}](\vec{s}) \bigr\rangle | 0 \rangle \\& \quad \rightarrow | \vec{S}_{0} \rangle \sum_{\vec{s}\in \widetilde{\mathcal{S}}_{1}(\vec{S}_{0})} \sqrt{p_{1}(\vec{s};\vec{S}_{0})} | \vec{s} \rangle \bigl| \mathbf{G}_{1}[\widetilde{V}_{1}](\vec{s}) \bigr\rangle \\& \qquad {} \otimes \biggl(\sqrt{\frac{1}{2}+ \frac{\mathbf{G}_{1}[\widetilde{V}_{1}](\vec{s})}{2\widetilde{V}_{1}^{\mathrm{max}}}} | 1 \rangle + \sqrt{\frac{1}{2}- \frac{\mathbf{G}_{1}[\widetilde{V}_{1}](\vec{s})}{2\widetilde{V}_{1}^{\mathrm{max}}}} | 0 \rangle \biggr) \\& \quad =: | \Psi _{0} \rangle , \end{aligned}$$

where the last ket corresponds to a single-qubit register. Since the probability \(P_{0}\) of obtaining 1 on the last qubit when measuring \(| \Psi _{0} \rangle \) satisfies

$$ (2P_{0}-1)\widetilde{V}^{\mathrm{max}}_{1}= \widehat{V}_{0}, $$

where

$$ \widehat{V}_{0}:=\mathbb{E} \bigl[\mathbf{G}_{1}[ \widetilde{V}_{1}]( \vec{S}_{1}) \bigr]=\sum _{\vec{s}\in \widetilde{\mathcal{S}}_{1}( \vec{S}_{0})} p_{1}(\vec{s};\vec{S}_{0}) \mathbf{G}_{1}[\widetilde{V}_{1}]( \vec{s}), $$

we can obtain an approximation of \(V_{0}\) by Step 9, as long as \(\mathbf{G}_{1}[\widetilde{V}_{1}]\) is close to \(V_{1}\) and the QAE error is small.

Lastly, let us comment on why Assumption 2 is necessary. The reason is that we have to handle underlying asset prices outside \(\mathcal{D}_{k+1}\) in Steps 4 and 9, or, more specifically, in generating \(| \Psi _{k,\vec{j}} \rangle \) and \(| \Psi _{0} \rangle \). In fact, when we generate \(| \Psi _{k,\vec{j}} \rangle \), \(\vec{S}_{k+1}\) can fall outside \(\mathcal{D}_{k+1}\) with some probability. In particular, when \(| \Psi _{k,\vec{j}} \rangle \) corresponds to a Chebyshev node \(\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}}\) close to the boundary of \(\mathcal{D}_{k}\), that is, when \(\vec{S}_{k}\) is conditioned to be close to the boundary of \(\mathcal{D}_{k}\), this probability becomes non-negligible.

Evaluation of the error

Let us now consider the error in the present option price given by the proposed method. First, we have the following theorem.

Theorem 3

Under Assumptions 1 to 3, consider Algorithm 1. Suppose that, for every \(k\in [K-1]\) and \(\vec{j}\in \mathcal{J}_{k}\),

$$ \bigl\vert \widehat{Q}_{k} \bigl(\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr)-\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}} \bigr\vert \le \epsilon ^{\mathrm{QAE}}_{k} $$

is satisfied, where \(\epsilon ^{\mathrm{QAE}}_{k}\) is some positive real number. Moreover, suppose that

$$ \vert \widehat{V}_{0}-\widetilde{V}_{0} \vert \le \epsilon ^{\mathrm{QAE}}_{0} $$

is satisfied for some \(\epsilon ^{\mathrm{QAE}}_{0}\in \mathbb{R}_{+}\). Then,

$$ \vert V_{0}-\widetilde{V}_{0} \vert \le \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{int}}_{k} + \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{OB}}_{k}+ \sum _{k=0}^{K-1} \widetilde{\Lambda }_{1,k} \epsilon ^{\mathrm{QAE}}_{k} $$

holds, where, for \(k\in [K-1]\) and \(k^{\prime }\in [K-1]_{0}\),

$$ \epsilon ^{\mathrm{int}}_{k} := \epsilon _{\mathrm{int}}(\rho _{k},d,m_{k},B_{k}) $$

and

$$\begin{aligned}& \widetilde{\Lambda }_{k,k^{\prime }}:= \textstyle\begin{cases} \prod_{i=k}^{k^{\prime }}\Lambda _{i} ;& \textit{if } k\le k^{\prime }\\ 1 ;& \textit{otherwise} \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned}& \Lambda _{k} := \biggl(\frac{2}{\pi }\log (m_{k}+1)+1 \biggr)^{d}. \end{aligned}$$

The proof is presented in Appendix A.1.


Based on Theorem 3, we can evaluate the complexity, that is, the number of calls to \(O_{{\mathrm{step}},k}\) sufficient to achieve the desired level of the error on the present option price.

Corollary 1

Let ϵ be a real number in \((0,0.1)\). Under Assumptions 1 to 3, consider Algorithm 1 with the following parameters:

  (i)

    \(m_{k},k\in [K-1]\) satisfying \(m_{k}\ge m^{\mathrm{th}}_{k}\) with

    $$ \begin{aligned} &m^{\mathrm{th}}_{1} = \biggl\lceil \frac{1}{\log \rho _{1}}\log \biggl( \frac{2^{d/2+2}\sqrt{d}(K-1)(1-\rho _{1}^{-2})^{-d/2}B_{1}}{\epsilon \widetilde{V}^{\mathrm{max}}_{1}} \biggr) \biggr\rceil \\ &m^{\mathrm{th}}_{k} = \biggl\lceil \frac{1}{\log \rho _{k}}\log \biggl( \frac{2^{d/2+2}\sqrt{d}(K-1)(1-\rho _{k}^{-2})^{-d/2}\widetilde{\Lambda }^{\mathrm{th}}_{1,k-1}B_{k}}{\epsilon \widetilde{V}^{\mathrm{max}}_{1}} \biggr) \biggr\rceil \\ & \quad \textit{for } k=2,\ldots,K-1, \end{aligned} $$

    where \(\widetilde{\Lambda }^{\mathrm{th}}_{1,k-1}\) is determined as \(\widetilde{\Lambda }_{1,k-1}\) in (60) with \(m_{1}=m^{\mathrm{th}}_{1},\ldots , m_{k-1}=m^{\mathrm{th}}_{k-1}\).

  (ii)

    \(N^{\mathrm{QAE}}_{k},k\in [K-1]_{0}\) set as

    $$ N^{\mathrm{QAE}}_{k} = \biggl\lceil \frac{7}{\bar{\epsilon }_{k}} \biggr\rceil . $$

    Here, \(\bar{\epsilon }_{0},\ldots, \bar{\epsilon }_{K-1}\) are given by

    $$ \begin{aligned} &\bar{\epsilon }_{0} = \frac{1}{1+\sum_{k^{\prime }=1}^{K-1}\sqrt{(m_{k^{\prime }}+1)^{d}\widetilde{\Lambda }_{1,k^{\prime }}}} \cdot \frac{\epsilon }{4} \\ &\bar{\epsilon }_{k} = \frac{\sqrt{(m_{k}+1)^{d} /\widetilde{\Lambda }_{1,k}}}{1+\sum_{k^{\prime }=1}^{K-1}\sqrt{(m_{k^{\prime }}+1)^{d}\widetilde{\Lambda }_{1,k^{\prime }}}} \cdot \frac{\widetilde{V}^{\mathrm{max}}_{1} \epsilon }{4\widetilde{V}^{\mathrm{max}}_{k}} \quad \textit{for } k=1,\ldots,K-1, \end{aligned} $$

    where \(m_{0},\ldots, m_{K-1}\) are set as (i) and \(\widetilde{\Lambda }_{1,1},\ldots, \widetilde{\Lambda }_{1,K-1}\) are given as (60) with such \(m_{0},\ldots, m_{K-1}\).

  (iii)

    \(N^{\mathrm{rep}}_{k}\) set to

    $$ N_{\mathrm{rep}} := 12 \biggl\lceil \log \biggl(\frac{N_{\mathrm{est}}}{0.01} \biggr) \biggr\rceil + 1, $$

    for every \(k\in [K-1]_{0}\). Here, \(N_{\mathrm{est}}:=1+\sum_{k^{\prime }=1}^{K-1}(m_{k^{\prime }}+1)^{d}\) with \(\{m_{k}\}\) set as (i).

Moreover, suppose that \(\epsilon ^{\mathrm{OB}}_{1},\ldots, \epsilon ^{\mathrm{OB}}_{K-1}\) are 0. Then, Algorithm 1 outputs \(\widetilde{V}_{0}\) satisfying \(|V_{0}-\widetilde{V}_{0}|\le \epsilon \widetilde{V}^{\mathrm{max}}_{1}\) with probability higher than 0.99.

The proof is presented in Appendix A.2.

We here explain why the parameters are set as above. As shown in the proof in Appendix A.2, \(m_{1},\ldots, m_{K-1}\) satisfying (62) make the first term on the RHS of (58) smaller than \(\epsilon \widetilde{V}^{\mathrm{max}}_{1}/2\). Then, for such \(\{m_{k}\}_{k=1,\ldots,K-1}\), \(\{N^{\mathrm{QAE}}_{k}\}_{k=0,\ldots,K-1}\) are determined as (63) so that

$$ \frac{N_{\mathrm{tot}}}{N_{\mathrm{rep}}}=N^{\mathrm{QAE}}_{0}+\sum _{k=1}^{K-1}(m_{k}+1)^{d}N^{ \mathrm{QAE}}_{k}, $$

that is, the total number \(N_{\mathrm{tot}}\) of calls to \(\{O_{{\mathrm{step}},k}\}_{k=0,\ldots,K-1}\) divided by the QAE repetition number \(N_{\mathrm{rep}}\), is minimized under the constraint that, if all the QAEs in Algorithm 1 succeed, the third term on the RHS of (58) is smaller than \(\epsilon \widetilde{V}^{\mathrm{max}}_{1}/2\). Finally, \(\{N^{\mathrm{rep}}_{k}\}_{k=0,\ldots,K-1}\) are determined so that the probability that all these QAEs succeed is higher than \(0.99=1-0.01\). In total, Algorithm 1 with the setting in Corollary 1 gives an approximation of \(V_{0}\) with an error of at most \(\epsilon \widetilde{V}^{\mathrm{max}}_{1}\) with probability higher than 0.99.

Note that, in reality, it is difficult to set \(m_{k}\) to \(m^{\mathrm{th}}_{k}\), since \(\rho _{k}\) and \(B_{k}\) are usually unknown. In practice, we might set them to conservatively large values, based on, for example, the calculation results of some benchmark pricing problems for various \(\{m_{k}\}_{k=1,\ldots,K-1}\). Besides, note that, in the above setting, half of the error tolerance ϵ is assigned to the interpolation error and the other half to the QAE error. Although we can of course change this assignment ratio, doing so affects the complexity only logarithmically, since the sufficient levels of \(\{m_{k}\}_{k=1,\ldots,K-1}\) depend on the ratio only logarithmically, and so do the \(\{N^{\mathrm{rep}}_{k}\}\) compensating for the change of \(\{m_{k}\}\).
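Under the (usually unrealistic) assumption that common values \(\rho _{k}=\rho \) and \(B_{k}=B\) are known for all dates, the degree setting in (i) can be sketched as follows; the parameter values are illustrative.

```python
import numpy as np

def lam(m, d):
    """Lambda_k = ((2/pi) log(m+1) + 1)^d, the Lebesgue-constant-type factor."""
    return ((2 / np.pi) * np.log(m + 1) + 1) ** d

def chebyshev_degrees(eps, d, K, rho, B, v1_max):
    """Sketch of setting m_k >= m_k^th per Corollary 1 (i), with common rho and
    B for all dates; in practice rho_k and B_k are usually unknown and must be
    replaced by conservative guesses."""
    ms, lam_prod = [], 1.0           # lam_prod tracks \tilde Lambda_{1,k-1}
    for k in range(1, K):
        m = int(np.ceil(np.log(
            2 ** (d / 2 + 2) * np.sqrt(d) * (K - 1)
            * (1 - rho ** -2) ** (-d / 2) * lam_prod * B
            / (eps * v1_max)) / np.log(rho)))
        ms.append(m)
        lam_prod *= lam(m, d)        # update \tilde Lambda_{1,k} for next date
    return ms

# illustrative parameters, not taken from the paper
ms = chebyshev_degrees(eps=1e-3, d=1, K=4, rho=2.0, B=10.0, v1_max=10.0)
```

Since \(\widetilde{\Lambda }^{\mathrm{th}}_{1,k-1}\) enters the threshold only inside a logarithm, the degrees grow mildly (here, non-decreasingly) with the date index k.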

Let us consider the dependency of the total complexity on the error tolerance ϵ. We see that

$$\begin{aligned}& m_{k} = O \bigl(\log \bigl(\epsilon ^{-1}\bigr)\operatorname{polyloglog}\bigl(\epsilon ^{-1}\bigr) \bigr), \end{aligned}$$
$$\begin{aligned}& N^{\mathrm{QAE}}_{k} = O \bigl(\epsilon ^{-1}\operatorname{polyloglog}\bigl( \epsilon ^{-1}\bigr) \bigr) \end{aligned}$$

for every \(k\in [K-1]\), and that

$$ N^{\mathrm{QAE}}_{0}= O \bigl(\epsilon ^{-1}\log ^{d/2}\bigl(\epsilon ^{-1}\bigr)\operatorname{polyloglog} \bigl(\epsilon ^{-1}\bigr) \bigr), $$

where \(\operatorname{polyloglog}(\cdot )\) means \(\operatorname{polylog} (\log (\cdot ) )\). Combining these with (66), we obtain

$$ N_{\mathrm{tot}} = O \bigl(\epsilon ^{-1}\log ^{d} \bigl(\epsilon ^{-1}\bigr)\operatorname{polyloglog}\bigl( \epsilon ^{-1}\bigr) \bigr), $$

which eventually beats LSM’s complexity \(\widetilde{O}(\epsilon ^{-2})\) for small ϵ.
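The following toy comparison illustrates this scaling, dropping the polyloglog factor and all constants; the dimension and tolerance values are arbitrary.

```python
import numpy as np

# Oracle-call scalings up to constants and polyloglog factors:
# proposed method N_tot ~ eps^{-1} log^d(eps^{-1}), LSM-type Monte Carlo ~ eps^{-2}.
eps = np.array([1e-2, 1e-3, 1e-4])          # illustrative tolerances
d = 2                                       # illustrative dimension
quantum = eps ** -1 * np.log(eps ** -1) ** d
classical = eps ** -2
ratio = classical / quantum                 # advantage grows as eps shrinks
```

Even with the \(\log ^{d}\) overhead, the ratio grows roughly like \(\epsilon ^{-1}/\log ^{d}(\epsilon ^{-1})\), so the quadratic speed-up dominates for small ϵ.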


Comparison with existing Chebyshev interpolation-based methods

In fact, the idea of approximating the continuation value by Chebyshev interpolation is not novel: there are classical methods for Bermudan option pricing based on Chebyshev interpolation [44–50]. However, besides whether QAE or a classical method is used to calculate the nodal continuation values, there are the following differences between the proposed method and the existing ones.

First, we note that we do not have to use Monte Carlo for calculating the continuation value, and [44, 45] actually used other methods. These papers considered the situation where the transition probability of the underlying asset prices can be easily calculated, and computed the continuation value by numerical integration of the product of the transition probability and the option value at the next exercise date. Note that this approach is possible only for some simple models of underlying asset evolution, such as the Black-Scholes model. On the other hand, in more complicated settings, for example pricing a multi-asset option under a stochastic local volatility model, Monte Carlo can be the only viable approach, and such a time-consuming situation is a meaningful target for quantum speed-up. Nevertheless, combining Chebyshev interpolation with various other methods might be an interesting possibility in the quantum setup too, and is worth investigating as future work.

Let us also mention the differences from [48]. The major difference is that, in the method of [48], the continuation value is the target of neither Monte Carlo integration nor Chebyshev interpolation. Instead, the method calculates the conditional expectations of Chebyshev polynomials by Monte Carlo or other methods, and finds the Chebyshev interpolation not of the continuation value but of the option price at each exercise date. This approach saves computational time when we price many options under the same model, since the conditional expectations can be reused for interpolation in pricing different options. Considering the quantum version of this approach might also be interesting.

Exponential factor with respect to the number of exercise dates in the error bound

Now, let us make a comment on the factor \(\widetilde{\Lambda }_{1,K-1}\) in (58), which is reflected in the polyloglog factors in (70). This factor depends exponentially on the number of exercise dates K. Therefore, it seems that the error on the option price grows exponentially as K increases, and so does the complexity sufficient to achieve a given error tolerance. Similar situations arose in the error analyses for LSM [53, 55, 57, 59–61] and for classical Chebyshev interpolation-based methods [48].
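A quick numerical illustration of this growth, with a common illustrative degree \(m_{k}=10\) and dimension \(d=2\):

```python
import numpy as np

# \tilde Lambda_{1,K-1} = prod_{k=1}^{K-1} Lambda_k with
# Lambda_k = ((2/pi) log(m_k+1) + 1)^d grows exponentially in K.
# m and d below are illustrative choices, not values from the paper.
m, d = 10, 2
Lam = ((2 / np.pi) * np.log(m + 1) + 1) ** d
factors = [Lam ** (K - 1) for K in (2, 12, 52)]   # e.g. quarterly vs weekly dates
```

Since each \(\Lambda _{k}>1\), the product exceeds \(10^{6}\) already for a dozen exercise dates in this setting, which is why the bound (58) looks pessimistic for frequently exercisable options.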

However, we should note that (58) is an upper bound on the error, and that the actual error does not necessarily grow exponentially with K. In fact, in the numerical experiment in [48], where American options were approximately priced as Bermudan options with a small exercise date interval, the error remained suppressed even when hundreds or thousands of exercise dates were set.

Let us now consider why the factor depending exponentially on K appears in (58). In the derivation of (58) described in Appendix A.1, we make Assumption 3 on the analyticity and boundedness of the continuation values \(Q_{k}\), and apply Theorem 1 to the Chebyshev interpolation of \(Q_{k}\) in Algorithm 1. Since, in the interpolation, we use not \(Q_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\), the values of \(Q_{k}\) at the Chebyshev nodes, but \(\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}}\), estimates of \(Q_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\) with some errors, a term like the second term on the RHS of (1) arises in the upper bound on the difference between \(Q_{k}\) and the interpolant \(\widetilde{Q}_{k}\), and this term is amplified at every later interpolation.

On the other hand, we can consider that the actual target of the Chebyshev interpolation is not \(Q_{k}\) but \(\widehat{Q}_{k}\) in (52). That is, we can regard \(\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}}\) as an estimate not of \(Q_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\) but of \(\widehat{Q}_{k} (\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} )\). Then, we can make an assumption not on the analyticity and boundedness of \(Q_{k}\) but on those of \(\widehat{Q}_{k}\). This leads to an error bound different from that in Theorem 3. Actually, if we make Assumption 4 instead of Assumption 3, we obtain an error bound with no exponential factor, as shown in Theorem 4.

Assumption 4

For every \(k\in [K-1]\), \(\widehat{Q}_{k}(\vec{S})\) has an analytic extension to \(\mathcal{B}_{\mathcal{D}_{k},\widehat{\rho }_{k}}\), where \(\mathcal{D}_{k}\) is given in Assumption 2 and \(\widehat{\rho }_{k}\) is some real number greater than 1, and

$$ \sup_{\vec{s}\in \mathcal{B}_{\mathcal{D}_{k},\widehat{\rho }_{k}}} \bigl\vert \widehat{Q}_{k}( \vec{s}) \bigr\vert \le \widehat{B}_{k} $$

holds, where \(\widehat{B}_{k}\) is some positive real number.

Theorem 4

Under Assumptions 1, 2, and 4, consider Algorithm 1. Suppose that, for every \(k\in [K-1]\) and \(\vec{j}\in \mathcal{J}_{k}\), (56) is satisfied for some \(\epsilon ^{\mathrm{QAE}}_{k}\in \mathbb{R}_{+}\), and that (57) is satisfied for some \(\epsilon ^{\mathrm{QAE}}_{0}\in \mathbb{R}_{+}\). Then,

$$ \vert V_{0}-\widetilde{V}_{0} \vert \le \sum _{k=1}^{K-1}\epsilon ^{\mathrm{OB}}_{k}+ \sum_{k=1}^{K-1}\widetilde{\epsilon }^{\mathrm{int}}_{k} + \sum_{k=0}^{K-1} \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{k} $$

holds, where, for every \(k\in [K-1]\),

$$ \widetilde{\epsilon }^{\mathrm{int}}_{k} := \epsilon _{\mathrm{int}}( \widehat{\rho }_{k},d,m_{k}, \widehat{B}_{k}), $$

and \(\Lambda _{k}\) is defined as (61).

The proof is presented in Appendix A.3. Note that a similar point has been made for LSM in [59] (Theorem 3.1).

Of course, \(\widehat{Q}_{k}\) is defined by (52) in terms of \(\widetilde{V}_{k+1}\), which is an intermediate output of Algorithm 1, and making assumptions on such a quantity does not lead to a self-contained discussion. It is more desirable to derive the error bound under assumptions on \(Q_{k}\) and/or other quantities determined independently of the pricing algorithm. We leave it as future work to consider whether an error bound similar to Theorem 4 can be obtained under such assumptions.

Quantization of LSM

Lastly, we make a comment on whether we can construct a quantum algorithm for LSM. Since there are some quantum algorithms for linear regression [65–71], it is natural to wonder whether we can apply these to LSM and obtain a speed-up. However, this is not so straightforward, since most of these algorithms output the regression result as a quantum state, in which the regression coefficients are amplitude-encoded. Fortunately, some algorithms [67, 71] output the regression coefficients as classical data. In particular, the algorithm in [71] has complexity \(\widetilde{O}(D^{7/2}\kappa ^{4}/\epsilon )\) with error tolerance ϵ, number of explanatory variables D, and condition number κ of the design matrix, from which we expect a quadratic speed-up of LSM with respect to ϵ. Nevertheless, applying this algorithm to LSM is not immediate either, because of some points to be considered. For example, the complexity depends strongly on D and κ, which might make the algorithm disadvantageous. Therefore, it will be crucial to evaluate these quantities, especially κ, and to find a basis function set that makes κ as small as possible. We will consider this direction in future work.


In this paper, we considered the application of quantum algorithms to Bermudan option pricing. Since there are QAE-based algorithms for Monte Carlo integration, which provide quadratic speed-up over their classical counterparts, and their applications to some option pricing problems have been investigated, it is natural to apply them to Bermudan option pricing as well. One crucial issue in this problem is how to approximate the continuation value \(Q_{k}\), which determines the optimal exercise date. To cope with this, we considered the combination of QAE and Chebyshev interpolation. That is, the proposed method estimates the values of \(Q_{k}\) at the interpolation nodes by QAE and, using these estimates, finds a Chebyshev interpolant as an approximation of \(Q_{k}\). We presented the calculation procedure in detail, along with the error bound and the complexity, i.e., the number of calls to the oracle for underlying asset price evolution sufficient to achieve the desired error tolerance ϵ. As expected, the complexity of this method depends on ϵ as \(\widetilde{O}(\epsilon ^{-1})\), which means a quadratic speed-up over LSM, the typical classical algorithm for Bermudan option pricing.

As future work, it is interesting to consider the quantum version of LSM, as mentioned in Sect. 6.3. Besides, it is also meaningful to extend the proposed method to other types of dynamic programming, which are ubiquitous in many fields of science and industry.

Availability of data and materials

Not applicable.


  1. As a standard textbook for quantum computing, we refer to [1].

  2. As standard textbooks for financial engineering, especially option pricing, we refer to [5, 6].

  3. We refer to [26] as a textbook on Monte Carlo simulation and its application to finance.

  4. \(\widetilde{O}(\cdot )\) hides logarithmic factors in the ordinary big O notation \(O(\cdot )\).

  5. See [43] as a textbook on this topic.

  6. [4750] considered the more general case, where the values of ρ and m are different for different directions in \(\mathbb{R}^{d}\). In this paper, we take common values of ρ and m for every direction, for simplicity.



LSM: least-squares Monte Carlo

QAE: quantum amplitude estimation

SDE: stochastic differential equation

RHS: right hand side


  1. Nielsen MA, Chuang IL. Quantum computation and quantum information. Cambridge: Cambridge University Press; 2010.


  2. Orus R et al.. Quantum computing for finance: overview and prospects. Rev Phys. 2019;4:100028.


  3. Egger DJ et al.. Quantum computing for finance: state of the art and future prospects. IEEE Trans Quantum Eng. 2020;1:3101724.


  4. Bouland A, et al. Prospects and challenges of quantum finance. arXiv:2011.06492.

  5. Hull JC. Options, futures, and other derivatives. New York: Prentice Hall; 2012.


  6. Shreve S. Stochastic calculus for finance I & II. Berlin: Springer; 2004.


  7. Rebentrost P et al.. Quantum computational finance: Monte Carlo pricing of financial derivatives. Phys Rev A. 2018;98:022321.


  8. Martin A et al.. Towards pricing financial derivatives with an IBM quantum computer. Phys Rev Res. 2021;3:013167.


  9. Stamatopoulos N et al.. Option pricing using quantum computers. Quantum. 2020;4:291.


  10. Ramos-Calderer S et al.. Quantum unary approach to option pricing. Phys Rev A. 2021;103:032414.


  11. Fontanela F, et al. A quantum algorithm for linear PDEs arising in finance. arXiv:1912.02753.

  12. Vazquez AC, Woerner S. Efficient state preparation for quantum amplitude estimation. Phys Rev Appl. 2021;15:034027.


  13. Kaneko K, et al. Quantum pricing with a smile: implementation of local volatility model on quantum computer. arXiv:2007.01467.

  14. Tang H, et al. Quantum computation for pricing the collateralized debt obligations. arXiv:2008.04110.

  15. Chakrabarti S et al.. A threshold for quantum advantage in derivative pricing. Quantum. 2021;5:463.


  16. An D et al.. Quantum-accelerated multilevel Monte Carlo methods for stochastic differential equations in mathematical finance. Quantum. 2021;5:481.


  17. Gonzalez-Conde J, et al. Pricing financial derivatives with exponential quantum speedup. arXiv:2101.04023.

  18. Radha SK. Quantum option pricing using Wick rotated imaginary time evolution. arXiv:2101.04280.

  19. Woerner S, Egger DJ. Quantum risk analysis. npj Quantum Inf. 2019;5(1):1.


  20. Egger DJ et al.. Credit risk analysis using quantum computers. IEEE Trans Comput. 2021;70:2136.

    MathSciNet  Google Scholar 

  21. Miyamoto K, Shiohara K. Reduction of qubits in a quantum algorithm for Monte Carlo simulation by a pseudo-random-number generator. Phys Rev A. 2020;102:022424.

    ADS  Google Scholar 

  22. Kaneko K et al.. Quantum speedup of Monte Carlo integration with respect to the number of dimensions and its application to finance. Quantum Inf Process. 2021;20:185.

    ADS  MathSciNet  Google Scholar 

  23. Rebentrost P, Lloyd S. Quantum computational finance: quantum algorithm for portfolio optimization. arXiv:1811.03975.

  24. Kerenidis I et al.. Quantum algorithms for portfolio optimization. In: Proceedings of the 1st ACM conference on advances in financial technologies. 2019. p. 147.

    Google Scholar 

  25. Hodson M, et al. Portfolio rebalancing experiments using the quantum alternating operator ansatz. arXiv:1911.05296.

  26. Glasserman P. Monte Carlo methods in financial engineering. Berlin: Springer; 2003.

    MATH  Google Scholar 

  27. Tavella D, Randall C. Pricing financial instruments: the finite difference method. New York: Wiley; 2000.

    Google Scholar 

  28. Duffy DJ. Finite difference methods in financial engineering: a partial differential equation approach. New York: Wiley; 2006.

    MATH  Google Scholar 

  29. Longstaff FA, Schwartz ES. Valuing American options by simulation: a simple least-squares approach. Rev Financ Stud. 2001;14:113.

    MATH  Google Scholar 

  30. Montanaro A. Quantum speedup of Monte Carlo methods. Proc R Soc Ser A. 2015;471:2181.

    MathSciNet  MATH  Google Scholar 

  31. Suzuki Y et al.. Amplitude estimation without phase estimation. Quantum Inf Process. 2020;19:75.

    ADS  MathSciNet  Google Scholar 

  32. Herbert S. Quantum Monte-Carlo integration: the full advantage in minimal circuit depth. arXiv:2105.09100.

  33. Brassard G et al.. Quantum amplitude amplification and estimation. Contemp Math. 2002;305:53.

    MathSciNet  MATH  Google Scholar 

  34. Aaronson S, Rall P. Quantum approximate counting, simplified. In: Symposium on simplicity in algorithms. Philadelphia: SIAM; 2020. p. 24–32.

    Google Scholar 

  35. Grinko D et al.. Iterative quantum amplitude estimation. npj Quantum Inf. 2021;7:52.

    ADS  Google Scholar 

  36. Nakaji K. Faster amplitude estimation. Quantum Inf Comput. 2020;20:1109.

    MathSciNet  Google Scholar 

  37. Brown EG, et al. Quantum amplitude estimation in the presence of noise. arXiv:2006.14145.

  38. Tanaka T, et al. Amplitude estimation via maximum likelihood on noisy quantum computer. arXiv:2006.16223.

  39. Kerenidis I, Prakash A. A method for amplitude estimation with noisy intermediate-scale quantum computers. U.S. Patent Application No. 16/892,229. 2020.

  40. Uno S, et al. Modified Grover operator for amplitude estimation. arXiv:2010.11656.

  41. Giurgica-Tiron T, et al. Low depth algorithms for quantum amplitude estimation. arXiv:2012.03348.

  42. Wang G et al.. Bayesian inference with engineered likelihood functions for robust amplitude estimation. PRX Quantum. 2021;2:010346.

    Google Scholar 

  43. Trefethen LN. Approximation theory and approximation practice. Philadelphia: SIAM; 2013.

    MATH  Google Scholar 

  44. Sullivan MA. Valuing American put options using Gaussian quadrature. Rev Financ Stud. 2000;13:75.

    Google Scholar 

  45. Lim H et al.. Efficient pricing of Bermudan options using recombining quadratures. J Comput Appl Math. 2014;271:195.

    MathSciNet  MATH  Google Scholar 

  46. Mahlstedt M. Complexity reduction for option pricing. Ph.D. thesis. Technische Universität München; 2017.

  47. GaßM et al.. Chebyshev interpolation for parametric option pricing. Finance Stoch. 2018;22:701.

    MathSciNet  MATH  Google Scholar 

  48. Glau K et al.. A new approach for American option pricing: the dynamic Chebyshev method. SIAM J Sci Comput. 2019;41(1):B153.

    MathSciNet  MATH  Google Scholar 

  49. Glau K, et al. Fast calculation of credit exposures for barrier and Bermudan options using Chebyshev interpolation. arXiv:1905.00238.

  50. Glau K et al.. Speed-up credit exposure calculations for pricing and risk management. Quant Finance. 2021;21:481.

    MathSciNet  MATH  Google Scholar 

  51. Sauter S, Schwab C. Boundary element methods. Berlin: Springer; 2010.

    MATH  Google Scholar 

  52. Clement E et al.. An analysis of a least squares regression method for American option pricing. Finance Stoch. 2002;6(4):449.

    MathSciNet  MATH  Google Scholar 

  53. Glasserman P, Yu B. Number of paths vs. number of basis functions in American option pricing. Ann Appl Probab. 2004;14(4):2090.

    MathSciNet  MATH  Google Scholar 

  54. Stentoft L. Convergence of the least squares Monte Carlo approach to American option valuation. Manag Sci. 2004;50(9):1193.

    MATH  Google Scholar 

  55. Egloff D. Monte Carlo algorithms for optimal stopping and statistical learning. Ann Appl Probab. 2005;15:1396.

    MathSciNet  MATH  Google Scholar 

  56. Gobet E et al.. A regression-based Monte Carlo method to solve backward stochastic differential equations. Ann Appl Probab. 2005;15(3):2172.

    MathSciNet  MATH  Google Scholar 

  57. Zanger DZ. Convergence of a least-squares Monte Carlo algorithm for bounded approximating sets. Appl Math Finance. 2009;16:123.

    MathSciNet  MATH  Google Scholar 

  58. Gerhold S. The Longstaff–Schwartz algorithm for Levy models: results on fast and slow convergence. Ann Appl Probab. 2011;21(2):589.

    ADS  MathSciNet  MATH  Google Scholar 

  59. Zanger DZ. Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing. Finance Stoch. 2013;17(3):503.

    MathSciNet  MATH  Google Scholar 

  60. Zanger DZ. Convergence of a least-squares Monte Carlo algorithm for American option pricing with dependent sample data. Math Finance. 2018;28(1):447.

    MathSciNet  MATH  Google Scholar 

  61. Zanger DZ. General error estimates for the Longstaff–Schwartz least-squares Monte Carlo algorithm. Math Oper Res. 2020;45(3):923.

    MathSciNet  MATH  Google Scholar 

  62. Jerrum M et al.. Random generation of combinatorial structures from a uniform distribution. Theor Comput Sci. 1986;43:169.

    MathSciNet  MATH  Google Scholar 

  63. Grover L, Rudolph T. Creating superpositions that correspond to efficiently integrable probability distributions. arXiv:quant-ph/0208112.

  64. Haner T, et al. Optimizing quantum circuits for arithmetic. arXiv:1805.12445.

  65. Wiebe N et al.. Quantum data fitting. Phys Rev Lett. 2012;109:050505.

    ADS  Google Scholar 

  66. Schuld M et al.. Prediction by linear regression on a quantum computer. Phys Rev A. 2016;94:022342.

    ADS  Google Scholar 

  67. Wang G. Quantum algorithm for linear regression. Phys Rev A. 2017;96:012335.

    ADS  MathSciNet  Google Scholar 

  68. Yu C-H et al.. Quantum algorithms for ridge regression. IEEE Trans Knowl Data Eng. 2019;29:37491.

    Google Scholar 

  69. Chakraborty S. The power of block-encoded matrix powers: improved regression techniques via faster Hamiltonian simulation. In: Proceedings of the 46th international colloquium on automata, languages, and programming (ICALP). 2019. p. 33:1–33:14.

    Google Scholar 

  70. Kerenidis I, Prakash A. Quantum gradient descent for linear systems and least squares. Phys Rev A. 2020;101:022316.

    ADS  MathSciNet  Google Scholar 

  71. Kaneko K, et al. Linear regression by quantum amplitude estimation and its extension to convex optimization. arXiv:2105.13511.



Acknowledgements

Not applicable.


Funding

This work was supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0120319794.

Author information

Authors and Affiliations



Contributions

KM, as the sole author of the manuscript, conceived, designed, and performed the analysis; he also wrote and reviewed the paper. The author read and approved the final manuscript.

Corresponding author

Correspondence to Koichi Miyamoto.

Ethics declarations

Competing interests

The author declares that he has no competing interests.



Appendix

A.1 Proof of Theorem 3


First, we note that, for every \(k\in [K-1]\),

$$ \epsilon _{k} \le \tilde{\epsilon }_{k} $$

holds, where

$$ \epsilon _{k}:= \max_{\vec{S}\in \mathcal{D}_{k}} \bigl\vert V_{k}(\vec{S})- \widetilde{V}_{k}(\vec{S}) \bigr\vert , $$

and

$$ \tilde{\epsilon }_{k}:= \textstyle\begin{cases} \epsilon ^{\mathrm{int}}_{K-1} + \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{K-1} ;& \text{for } k=K-1, \\ \epsilon ^{\mathrm{int}}_{k} + \Lambda _{k} (\epsilon _{k+1} + \epsilon ^{\mathrm{OB}}_{k+1} + \epsilon ^{\mathrm{QAE}}_{k} ) ;& \text{for } k = 1,\ldots,K-2. \end{cases} $$

The proof of this is as follows. We see that, for any \(k\in [K-1]\),

$$ \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{G}_{k}[\widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert = \bigl\vert V_{k}( \vec{S}_{k}) - \widetilde{V}_{k}(\vec{S}_{k}) \bigr\vert \le \epsilon _{k} $$

holds if \(\vec{S}_{k}\in \mathcal{D}_{k}\), and, under Assumption 2, either

$$ \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{G}_{k}[\widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert = \bigl\vert V_{k}( \vec{S}_{k}) - V^{\mathrm{OB}}_{k}( \vec{S}_{k}) \bigr\vert \le \epsilon ^{\mathrm{OB}}_{k} $$

or

$$\begin{aligned}& \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{G}_{k}[\widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert \\& \quad = \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{F}_{k}[\widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert \\& \quad \le \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{F}_{k}[V_{k}](\vec{S}_{k}) \bigr\vert + \bigl\vert \mathbf{F}_{k}[V_{k}]( \vec{S}_{k}) - \mathbf{F}_{k}[ \widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert \\& \quad = \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{F}_{k}[V_{k}](\vec{S}_{k}) \bigr\vert + \bigl\vert V_{k}\bigl(b_{k}( \vec{S}_{k})\bigr) - \widetilde{V}_{k} \bigl(b_{k}( \vec{S}_{k})\bigr) \bigr\vert \\& \quad \le \epsilon ^{\mathrm{OB}}_{k} + \epsilon _{k} \end{aligned}$$

holds if \(\vec{S}_{k}\in \mathcal{S}\setminus \mathcal{D}_{k}\). Combining these, we obtain

$$ \bigl\vert V_{k}(\vec{S}_{k}) - \mathbf{G}_{k}[\widetilde{V}_{k}]( \vec{S}_{k}) \bigr\vert \le \epsilon _{k} + \epsilon ^{\mathrm{OB}}_{k} $$

for any \(\vec{S}_{k}\in \mathcal{S}\). This leads to

$$\begin{aligned}& \bigl\vert Q_{k}(\vec{S})-\widehat{Q}_{k}( \vec{S}) \bigr\vert \\& \quad = \bigl\vert \mathbb{E} \bigl[V_{k+1}(\vec{S}_{k+1}) - \mathbf{G}_{k+1}[ \widetilde{V}_{k+1}]( \vec{S}_{k+1}) \mid \vec{S}_{k}=\vec{S} \bigr] \bigr\vert \\& \quad \le \mathbb{E} \bigl[ \bigl\vert V_{k+1}(\vec{S}_{k+1}) - \mathbf{G}_{k+1}[ \widetilde{V}_{k+1}]( \vec{S}_{k+1}) \bigr\vert \mid \vec{S}_{k}= \vec{S} \bigr] \\& \quad \le \epsilon _{k+1} + \epsilon ^{\mathrm{OB}}_{k+1} \end{aligned}$$

for any \(k\in [K-2]\) and \(\vec{S}\in \mathcal{D}_{k}\). Thus, with (56), we obtain

$$\begin{aligned}& \bigl\vert Q_{k} \bigl(\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr)- \widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}} \bigr\vert \\& \quad \le \bigl\vert Q_{k} \bigl(\vec{S}^{\mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr)-\widehat{Q}_{k} \bigl(\vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} \bigr) \bigr\vert + \bigl\vert \widehat{Q}_{k} \bigl( \vec{S}^{ \mathcal{D}_{k},m_{k}}_{\vec{j}} \bigr)-\widehat{Q}^{\mathrm{QAE}}_{k, \vec{j}} \bigr\vert \\& \quad \le \epsilon _{k+1} + \epsilon ^{\mathrm{OB}}_{k+1} + \epsilon ^{\mathrm{QAE}}_{k} \end{aligned}$$

for every \(k\in [K-2]\) and \(\vec{j}\in \mathcal{J}_{k}\). On the other hand, for \(k=K-1\), noting that \(Q_{K-1}(\vec{S})=\widehat{Q}_{K-1}(\vec{S})\) for any \(\vec{S}\in \mathcal{S}\) by definition, we see that

$$ \bigl\vert Q_{K-1} \bigl(\vec{S}^{\mathcal{D}_{K-1},m_{K-1}}_{\vec{j}} \bigr)-\widehat{Q}^{\mathrm{QAE}}_{K-1,\vec{j}} \bigr\vert \le \epsilon ^{ \mathrm{QAE}}_{K-1} $$

for any \(\vec{j}\in \mathcal{J}_{K-1}\). Then, under Assumption 3, invoking Theorem 1, we obtain

$$ \bigl\vert Q_{k}(\vec{S})-\widetilde{Q}_{k}( \vec{S}) \bigr\vert \le \tilde{\epsilon }_{k} $$

for any \(k\in [K-1]\) and \(\vec{S}\in \mathcal{D}_{k}\), which immediately leads to (74) as

$$\begin{aligned}& \bigl\vert V_{k}(\vec{S})-\widetilde{V}_{k}( \vec{S}) \bigr\vert \\& \quad = \bigl\vert \max \bigl\{ f^{\mathrm{pay}}_{k}( \vec{S}),Q_{k}(\vec{S})\bigr\} - \max \bigl\{ f^{ \mathrm{pay}}_{k}( \vec{S}),\widetilde{Q}_{k}(\vec{S})\bigr\} \bigr\vert \\& \quad \le \bigl\vert Q_{k}(\vec{S})-\widetilde{Q}_{k}( \vec{S}) \bigr\vert \\& \quad \le \tilde{\epsilon }_{k}. \end{aligned}$$

Here, we used \(|\max \{a,b\}-\max \{a,c\}|\le |b-c|\), which holds for any \(a,b,c\in \mathbb{R}\).
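This elementary inequality follows from the identity \(\max \{a,b\}=(a+b+ \vert a-b \vert )/2\) together with the triangle inequality and the reverse triangle inequality:

```latex
% |max{a,b} - max{a,c}| <= |b - c| for all a, b, c in R.
% Using max{a,b} = (a + b + |a - b|)/2:
\begin{aligned}
\bigl\vert \max \{a,b\}-\max \{a,c\} \bigr\vert
&= \biggl\vert \frac{b-c}{2}
   + \frac{ \vert a-b \vert - \vert a-c \vert }{2} \biggr\vert \\
&\le \frac{ \vert b-c \vert }{2}
   + \frac{ \vert (a-b)-(a-c) \vert }{2}
 = \vert b-c \vert .
\end{aligned}
```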

Next, let us note that, for \(k\in [K-2]\),

$$ \epsilon _{k} \le \sum_{k^{\prime }=k}^{K-1} \widetilde{\Lambda }_{k,k^{\prime }-1} \epsilon ^{\mathrm{int}}_{k^{\prime }} + \sum_{k^{\prime }=k+1}^{K-1} \widetilde{\Lambda }_{k,k^{\prime }-1} \epsilon ^{\mathrm{OB}}_{k^{\prime }}+ \sum _{k^{\prime }=k}^{K-1} \widetilde{\Lambda }_{k,k^{\prime }} \epsilon ^{ \mathrm{QAE}}_{k^{\prime }}, $$

holds. We prove this by induction. For \(k=K-2\), (74) implies that

$$\begin{aligned}& \epsilon _{K-2} \\& \quad \le \epsilon ^{\mathrm{int}}_{K-2} + \Lambda _{K-2} \bigl(\epsilon _{K-1}+ \epsilon ^{\mathrm{OB}}_{K-1}+ \epsilon ^{\mathrm{QAE}}_{K-2} \bigr) \\& \quad \le \epsilon ^{\mathrm{int}}_{K-2} + \Lambda _{K-2} \bigl(\epsilon ^{ \mathrm{int}}_{K-1} + \Lambda _{K-1} \epsilon ^{\mathrm{QAE}}_{K-1}+\epsilon ^{ \mathrm{OB}}_{K-1}+ \epsilon ^{\mathrm{QAE}}_{K-2} \bigr) \\& \quad = \sum_{k^{\prime }=K-2}^{K-1} \widetilde{\Lambda }_{K-2,k^{\prime }-1} \epsilon ^{\mathrm{int}}_{k^{\prime }} + \sum _{k^{\prime }=K-1}^{K-1} \widetilde{\Lambda }_{K-2,k^{\prime }-1} \epsilon ^{\mathrm{OB}}_{k^{\prime }}+ \sum _{k^{\prime }=K-2}^{K-1} \widetilde{\Lambda }_{K-2,k^{\prime }} \epsilon ^{\mathrm{QAE}}_{k^{\prime }}. \end{aligned}$$

Similarly, if (86) holds for \(k\in \{2,\ldots,K-2\}\), (74) implies that

$$\begin{aligned}& \epsilon _{k-1} \\& \quad \le \epsilon ^{\mathrm{int}}_{k-1} + \Lambda _{k-1} \bigl(\epsilon _{k}+ \epsilon ^{\mathrm{OB}}_{k}+ \epsilon ^{\mathrm{QAE}}_{k-1} \bigr) \\& \quad \le \epsilon ^{\mathrm{int}}_{k-1} + \Lambda _{k-1} \Biggl(\sum_{k^{\prime }=k}^{K-1} \widetilde{ \Lambda }_{k,k^{\prime }-1} \epsilon ^{\mathrm{int}}_{k^{\prime }} + \sum _{k^{\prime }=k+1}^{K-1} \widetilde{\Lambda }_{k,k^{\prime }-1} \epsilon ^{\mathrm{OB}}_{k^{\prime }}+ \sum _{k^{\prime }=k}^{K-1} \widetilde{\Lambda }_{k,k^{\prime }} \epsilon ^{\mathrm{QAE}}_{k^{\prime }} + \epsilon ^{\mathrm{OB}}_{k}+\epsilon ^{\mathrm{QAE}}_{k-1} \Biggr) \\& \quad = \sum_{k^{\prime }=k-1}^{K-1} \widetilde{\Lambda }_{k-1,k^{\prime }-1} \epsilon ^{\mathrm{int}}_{k^{\prime }} + \sum _{k^{\prime }=k}^{K-1} \widetilde{\Lambda }_{k-1,k^{\prime }-1} \epsilon ^{\mathrm{OB}}_{k^{\prime }}+ \sum _{k^{\prime }=k-1}^{K-1} \widetilde{\Lambda }_{k-1,k^{\prime }} \epsilon ^{\mathrm{QAE}}_{k^{\prime }}. \end{aligned}$$

Therefore, (86) is proved for every \(k\in [K-2]\).
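As a sanity check, the recursion and its unrolled form (86) agree exactly when the recursion is taken with equality. The sketch below assumes \(\widetilde{\Lambda }_{k,k^{\prime }}=\prod_{j=k}^{k^{\prime }}\Lambda _{j}\) (with the empty product equal to 1), the convention consistent with the displays above; all numerical values are illustrative:

```python
import random

random.seed(0)
K = 6
Lam   = [random.uniform(1.0, 2.0) for _ in range(K)]   # Lambda_k
e_int = [random.random() * 1e-3 for _ in range(K)]     # epsilon^int_k
e_ob  = [random.random() * 1e-3 for _ in range(K)]     # epsilon^OB_k
e_qae = [random.random() * 1e-3 for _ in range(K)]     # epsilon^QAE_k

def lam_tilde(k, kp):
    """Lambda-tilde_{k,k'} = prod_{j=k}^{k'} Lambda_j (empty product = 1)."""
    p = 1.0
    for j in range(k, kp + 1):
        p *= Lam[j]
    return p

# Recursion (74), taken with equality (the worst case):
eps = [0.0] * K
eps[K - 1] = e_int[K - 1] + Lam[K - 1] * e_qae[K - 1]
for k in range(K - 2, 0, -1):
    eps[k] = e_int[k] + Lam[k] * (eps[k + 1] + e_ob[k + 1] + e_qae[k])

# Closed form (86); it matches the recursion exactly for every k.
for k in range(1, K - 1):
    closed = (sum(lam_tilde(k, kp - 1) * e_int[kp] for kp in range(k, K))
              + sum(lam_tilde(k, kp - 1) * e_ob[kp] for kp in range(k + 1, K))
              + sum(lam_tilde(k, kp) * e_qae[kp] for kp in range(k, K)))
    assert abs(eps[k] - closed) < 1e-12, (k, eps[k], closed)
```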

Finally, the claim is proved as follows. We see that

$$\begin{aligned}& \vert V_{0}-\widehat{V}_{0} \vert \\& \quad = \bigl\vert \mathbb{E}\bigl[V_{1}(\vec{S}_{1}) \bigr]-\mathbb{E}\bigl[\mathbf{G}_{1}[ \tilde{V}_{1}]( \vec{S}_{1})\bigr] \bigr\vert \\& \quad \le \mathbb{E} \bigl[ \bigl\vert V_{1}(\vec{S}_{1})- \mathbf{G}_{1}[ \tilde{V}_{1}](\vec{S}_{1}) \bigr\vert \bigr] \\& \quad \le \epsilon _{1}+\epsilon ^{\mathrm{OB}}_{1} \\& \quad \le \sum_{k=1}^{K-1} \widetilde{ \Lambda }_{1,k-1} \epsilon ^{\mathrm{int}}_{k} + \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{OB}}_{k}+ \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k} \epsilon ^{\mathrm{QAE}}_{k}, \end{aligned}$$

where we used (80) and (86) at the second and last inequalities, respectively. Combining this and (57), we obtain

$$\begin{aligned}& \vert V_{0}-\widetilde{V}_{0} \vert \\& \quad \le \vert V_{0}-\widehat{V}_{0} \vert + \vert \widehat{V}_{0}-\widetilde{V}_{0} \vert \\& \quad \le \sum_{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{int}}_{k} + \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{OB}}_{k}+ \sum _{k=0}^{K-1} \widetilde{\Lambda }_{1,k} \epsilon ^{\mathrm{QAE}}_{k}. \end{aligned}$$


A.2 Proof of Corollary 1


By simple algebra, we see that \(m_{1},\ldots, m_{K-1}\) satisfying (62) lead to

$$ \widetilde{\Lambda }_{1,k-1}\epsilon ^{\mathrm{int}}_{k} \le \frac{\epsilon \widetilde{V}^{\mathrm{max}}_{1}}{2(K-1)} $$

for every \(k\in [K-1]\).

On the other hand, Theorem 2 implies that, for every \(k\in [K-1]\) and \(\vec{j}\in \mathcal{J}_{k}\), using \(N^{\mathrm{QAE}}_{k}\) set as in (63), Step 4 in Algorithm 1 gives us \(\widetilde{P}_{k,\vec{j}}\), an estimate of \(P_{k,\vec{j}}\) in (51), satisfying

$$ \vert P_{k,\vec{j}}-\widetilde{P}_{k,\vec{j}} \vert \le \bar{ \epsilon }_{k}, $$

and then \(\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}}\) satisfying

$$\begin{aligned}& \bigl\vert \widehat{Q}_{k} \bigl(\vec{S}^{\mathcal{D}_{k},m_{k}}_{ \vec{j}} \bigr)-\widehat{Q}^{\mathrm{QAE}}_{k,\vec{j}} \bigr\vert \\& \quad = \bigl\vert (2P_{k,\vec{j}}-1)\widetilde{V}^{\mathrm{max}}_{k}-(2 \widetilde{P}_{k,\vec{j}}-1)\widetilde{V}^{\mathrm{max}}_{k} \bigr\vert \\& \quad \le 2\widetilde{V}^{\mathrm{max}}_{k} \bar{\epsilon }_{k} \\& \quad \le \frac{\sqrt{(m_{k}+1)^{d} /\widetilde{\Lambda }_{1,k}}}{1+\sum_{k^{\prime }=1}^{K-1}\sqrt{(m_{k^{\prime }}+1)^{d}\widetilde{\Lambda }_{1,k^{\prime }}}} \cdot \frac{\widetilde{V}^{\mathrm{max}}_{1} \epsilon }{2}=: \widetilde{\epsilon }^{\mathrm{QAE}}_{k}, \end{aligned}$$

with some probability. Similarly, with \(N^{\mathrm{QAE}}_{0}\) set as in (63), Step 9 gives us \(\widetilde{P}_{0}\), an estimate of \(P_{0}\), satisfying

$$ \vert P_{0}-\widetilde{P}_{0} \vert \le \bar{ \epsilon }_{0}, $$

and then \(\widetilde{V}_{0}\) satisfying

$$\begin{aligned}& \vert \widehat{V}_{0}-\widetilde{V}_{0} \vert \\& \quad = \bigl\vert (2P_{0}-1)\widetilde{V}^{\mathrm{max}}_{0}-(2 \widetilde{P}_{0}-1) \widetilde{V}^{\mathrm{max}}_{0} \bigr\vert \\& \quad \le 2\widetilde{V}^{\mathrm{max}}_{0} \bar{\epsilon }_{0} \\& \quad \le \frac{1}{1+\sum_{k^{\prime }=1}^{K-1}\sqrt{(m_{k^{\prime }}+1)^{d}\widetilde{\Lambda }_{1,k^{\prime }}}} \cdot \frac{\widetilde{V}^{\mathrm{max}}_{1} \epsilon }{2}=: \widetilde{\epsilon }^{\mathrm{QAE}}_{0}, \end{aligned}$$

with some probability. Therefore, when all of these succeed,

$$\begin{aligned}& \vert V_{0}-\widetilde{V}_{0} \vert \\& \quad \le \sum_{k=1}^{K-1} \widetilde{ \Lambda }_{1,k-1} \epsilon ^{\mathrm{int}}_{k} + \sum _{k=1}^{K-1} \widetilde{\Lambda }_{1,k-1} \epsilon ^{\mathrm{OB}}_{k}+ \sum _{k=0}^{K-1} \widetilde{\Lambda }_{1,k} \widetilde{\epsilon }^{ \mathrm{QAE}}_{k} \\& \quad \le \frac{\epsilon \widetilde{V}^{\mathrm{max}}_{1}}{2} + 0 + \frac{\epsilon \widetilde{V}^{\mathrm{max}}_{1}}{2} \\& \quad = \epsilon \widetilde{V}^{\mathrm{max}}_{1} \end{aligned}$$

holds, according to Theorem 3. Here, we used (91), (93), (95), and the assumption that \(\epsilon ^{\mathrm{OB}}_{1},\ldots, \epsilon ^{\mathrm{OB}}_{K-1}\) are 0, along with simple algebra.
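The "simple algebra" invoked here can be checked concretely: with the definitions of \(\widetilde{\epsilon }^{\mathrm{QAE}}_{k}\) above and \(\widetilde{\Lambda }_{1,0}=1\), the QAE contributions sum to exactly \(\epsilon \widetilde{V}^{\mathrm{max}}_{1}/2\). A minimal numerical sketch, in which the values of \(K\), \(d\), \(\Lambda _{k}\), and \(m_{k}\) are illustrative assumptions:

```python
import math
import random

random.seed(0)
K, d = 5, 2                    # number of exercise dates and asset dimension (illustrative)
V1max, eps = 1.5, 1e-3         # V-tilde^max_1 and the target accuracy epsilon

# Lambda-tilde_{1,k}: cumulative products of Lambda_j, with Lambda-tilde_{1,0} = 1
lam_t = [1.0]
for _ in range(1, K):
    lam_t.append(lam_t[-1] * random.uniform(1.0, 2.0))
m = [None] + [random.randint(2, 6) for _ in range(1, K)]   # interpolation degrees m_k, k >= 1

denom = 1.0 + sum(math.sqrt((m[k] + 1) ** d * lam_t[k]) for k in range(1, K))
qae = [V1max * eps / (2 * denom)]                          # epsilon-tilde^QAE_0
for k in range(1, K):
    qae.append(math.sqrt((m[k] + 1) ** d / lam_t[k]) / denom * V1max * eps / 2)

# The weighted sum of QAE errors equals epsilon * V1max / 2 exactly.
total = sum(lam_t[k] * qae[k] for k in range(K))
assert abs(total - V1max * eps / 2) < 1e-12
```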

The remaining task is to prove that the probability \(P_{\mathrm{all}}\) that all of the estimations in Steps 4 and 9 succeed is at least 0.99 under the setting of \(N^{\mathrm{rep}}_{k}\) as in (65). Note that, according to Theorem 2, with the setting (65), the probability that Step 4 for a given pair of \(k\in [K-1]\) and \(\vec{j}\in \mathcal{J}_{k}\) outputs \(\widetilde{P}_{k,\vec{j}}\) satisfying (92) is higher than

$$ 1-\frac{0.01}{N_{\mathrm{est}}}. $$

Similarly, the probability that Step 9 outputs \(\widetilde{P}_{0}\) satisfying (94) is also higher than (97). Besides, the total number of these estimations is \(N_{\mathrm{est}}\). Combining these, we obtain a lower bound on \(P_{\mathrm{all}}\) as

$$ P_{\mathrm{all}}\ge \biggl(1-\frac{0.01}{N_{\mathrm{est}}} \biggr)^{N_{\mathrm{est}}} \ge 1-0.01 = 0.99, $$

which completes the proof. □
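The first inequality in the last display is an instance of Bernoulli's inequality:

```latex
% Bernoulli's inequality: (1 + x)^n >= 1 + nx for x >= -1 and integer n >= 0.
% Applied with x = -0.01/N_est and n = N_est:
(1+x)^{n}\ge 1+nx
\quad\Longrightarrow\quad
\biggl(1-\frac{0.01}{N_{\mathrm{est}}}\biggr)^{N_{\mathrm{est}}}
\ge 1-N_{\mathrm{est}}\cdot \frac{0.01}{N_{\mathrm{est}}} = 0.99 .
```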

A.3 Proof of Theorem 4


Because of (56), Theorem 1 implies that

$$ \bigl\vert \widehat{Q}_{k}(\vec{S})-\widetilde{Q}_{k}( \vec{S}) \bigr\vert \le \widetilde{\epsilon }^{\mathrm{int}}_{k} + \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{k} $$

for every \(k\in [K-1]\) and \(\vec{S}\in \mathcal{D}_{k}\). Besides, for every \(k\in [K-2]\) and \(\vec{S}\in \mathcal{D}_{k}\), \(\vert Q_{k}(\vec{S})-\widehat{Q}_{k}(\vec{S}) \vert \le \epsilon _{k+1}+ \epsilon ^{\mathrm{OB}}_{k+1}\) holds as (81), where \(\epsilon _{k}\) is defined as (75) for every \(k\in [K-1]\), whereas \(Q_{K-1}(\vec{S})=\widehat{Q}_{K-1}(\vec{S})\) by definition of \(\mathbf{G}_{K}[\cdot ]\). Combining these, we see that, for every \(k\in [K-1]\) and \(\vec{S}\in \mathcal{D}_{k}\),

$$\begin{aligned} \bigl\vert Q_{k}(\vec{S})-\widetilde{Q}_{k}( \vec{S}) \bigr\vert \le & \bigl\vert Q_{k}(\vec{S})- \widehat{Q}_{k}(\vec{S}) \bigr\vert + \bigl\vert \widehat{Q}_{k}(\vec{S})-\widetilde{Q}_{k}(\vec{S}) \bigr\vert \\ \le & \epsilon _{k+1}+\epsilon ^{\mathrm{OB}}_{k+1}+ \widetilde{\epsilon }^{ \mathrm{int}}_{k} + \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{k} \end{aligned}$$

holds with \(\epsilon _{K}=0\) and \(\epsilon ^{\mathrm{OB}}_{K}=0\), which leads to

$$ \bigl\vert V_{k}(\vec{S})-\widetilde{V}_{k}( \vec{S}) \bigr\vert \le \epsilon _{k+1}+\epsilon ^{\mathrm{OB}}_{k+1}+\widetilde{\epsilon }^{\mathrm{int}}_{k} + \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{k} $$

similarly to (85). Therefore, for every \(k\in [K-1]\),

$$ \epsilon _{k} =\max_{\vec{S}\in \mathcal{D}_{k}} \bigl\vert V_{k}(\vec{S})- \widetilde{V}_{k}(\vec{S}) \bigr\vert \le \epsilon _{k+1}+\epsilon ^{\mathrm{OB}}_{k+1}+ \widetilde{\epsilon }^{\mathrm{int}}_{k} + \Lambda _{k}\epsilon ^{\mathrm{QAE}}_{k}. $$

This implies

$$ \epsilon _{1} \le \sum_{k=2}^{K-1} \epsilon ^{\mathrm{OB}}_{k}+\sum_{k=1}^{K-1} \widetilde{\epsilon }^{\mathrm{int}}_{k} + \sum _{k=1}^{K-1}\Lambda _{k} \epsilon ^{\mathrm{QAE}}_{k}. $$

Finally, combining (103) with (57) and \(|V_{0}-\widehat{V}_{0}|\le \epsilon _{1}+\epsilon ^{\mathrm{OB}}_{1}\), which we can see as (89), we obtain

$$\begin{aligned}& \vert V_{0}-\widetilde{V}_{0} \vert \\& \quad \le \vert V_{0}-\widehat{V}_{0} \vert + \vert \widehat{V}_{0}-\widetilde{V}_{0} \vert \\& \quad \le \epsilon _{1}+\epsilon ^{\mathrm{OB}}_{1} + \epsilon ^{\mathrm{QAE}}_{0} \\& \quad \le \sum_{k=1}^{K-1}\epsilon ^{\mathrm{OB}}_{k}+\sum_{k=1}^{K-1} \widetilde{\epsilon }^{\mathrm{int}}_{k} + \sum _{k=0}^{K-1}\Lambda _{k} \epsilon ^{\mathrm{QAE}}_{k}. \end{aligned}$$


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Miyamoto, K. Bermudan option pricing by quantum amplitude estimation and Chebyshev interpolation. EPJ Quantum Technol. 9, 3 (2022).



Keywords

  • Option pricing
  • Quantum amplitude estimation
  • Chebyshev interpolation