
# Reliability of analog quantum simulation

*EPJ Quantum Technology*
**volume 4**, Article number: 1 (2017)

## Abstract

Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this work we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. To demonstrate the approach, we characterize the robust features of a variety of quantum many-body models.

Quantum simulation is an idea that has been at the center of quantum information science since its inception, beginning with Feynman’s vision of simulating physics using quantum computers [1]. A quantum simulator is a tunable, engineered device that maintains quantum coherence among its degrees of freedom over long enough timescales to extract information that is not efficiently computable using classical computers. The modern view of quantum simulation differentiates between *digital* and *analog* quantum simulations. Specifically, the former performs simulation of a quantum model by using discretized evolutions (*i.e.*, gates) [2–4] whereas the latter uses a physical mimic of the model to infer its properties [5]. A crucial issue is that while quantum error correction can be naturally incorporated into digital quantum simulation, this does not seem to be possible for AQS, which are essentially special-purpose hardware platforms built to model systems of interest. However, digital quantum simulators are extremely challenging to build, whereas AQS are more feasible in the near future, with several experimental candidates already under study [6–10]. Thus a critical question for the quantum simulation field is: as AQS become more sophisticated and begin to model systems that are not classically simulable, can one verify or certify the accuracy of results from systems that are inevitably affected by noise and experimental imperfections [11]?

In response to this challenge, we develop a technique for analyzing the robustness of an AQS to experimental imperfections. We specialize to AQS that prepare ground or thermal states of quantum many-body models since these are the most common types of AQS currently under experimental development.

## Definitions

Define a *quantum simulation model*, notated \((H, O)\), as consisting of a Hamiltonian *H* and an observable of interest *O* (both Hermitian operators). We write a general Hamiltonian in parameterized form as \(H({{\lambda }}) =\sum_{k=1}^{K} \lambda_{k} H_{k}\), where \({{\lambda }}=(\lambda_{1}, \dots, \lambda_{K})^{\mathsf{T}}\) denotes the vector of parameters (\(\hbar=1\) throughout this paper). \(H_{k}\) are the terms in the Hamiltonian that are individually tunable through the parameters \(\lambda_{k}\). In addition, we decompose the observable into orthogonal projectors representing individual measurement outcomes \(O = \sum_{m=1}^{M} \theta_{m} P_{m}\) with \(P_{m} P_{n} = P_{m} \delta_{mn}\).^{Footnote 1}

The goal of an AQS is to produce the probability distribution of a measurement of *O* under a thermal state or ground state of a system governed by \(H({{\lambda }}^{0})\), where \({{\lambda }}^{0}\) denotes the ideal, nominal values of the system parameters. That is, to produce the distribution \(p_{m}({{\lambda }}^{0}) = \operatorname{tr}( P_{m} \varrho({{\lambda }}^{0}) )\), \(m=1, \ldots, M\), where \(\varrho ({{\lambda }}^{0}) = {e^{-\beta H({{\lambda }}^{0})}}/{\operatorname{tr}e^{-\beta H({{\lambda }}^{0})} }\), for some inverse temperature \(\beta= 1/k_{B} T\), if the goal is to predict thermal properties of the model; or \(\varrho({{\lambda }}^{0}) = \vert {\psi _{\mathrm {g}}({{\lambda }}^{0})} \rangle \langle{ \psi _{\mathrm {g}}({{\lambda }}^{0})}\vert \) with \(\vert {\psi _{\mathrm {g}}({{\lambda }}^{0})} \rangle \) being the ground state of \(H({{\lambda }}^{0})\), if the goal is to predict ground state properties. However, due to inevitable environmental interactions, miscalibration, or control errors, the parameters \(\lambda_{k}\) can deviate from their nominal values, which can potentially corrupt AQS predictions. We quantify the reliability of an AQS by the robustness of this probability distribution with respect to the deviations of *λ* from its ideal value \({{\lambda }}^{0}\).

In general, there is no reason to expect that the prepared state \(\varrho({{\lambda }})\) will be robust to perturbations of *λ*. In fact, we know that for Hamiltonians that possess a quantum critical point, thermal and ground states can be extremely sensitive to *λ* around that point [12–14]. However, reliable AQS does not require robustness of \(\varrho({{\lambda }})\) around \({{\lambda }}^{0}\), but only robustness of the probability distribution of observable outcomes, \(\{p_{m}\}_{m=1}^{M}\). The fact that this is a less demanding requirement is the fundamental reason to expect that some models may be reliably simulated using AQS.

## Quantifying AQS robustness

To quantify the reliability or robustness of an AQS, we begin by utilizing the Kullback-Leibler (KL) divergence to measure the difference between the measurement probability distributions \(p({{\lambda }})\) and \(p({{\lambda }}^{0})\) [15]: \(D_{\mathrm{KL}}(p({{\lambda }})||p({{\lambda }}^{0})) =\sum_{m} p_{m}({{\lambda }}) \log\frac{p_{m}({{\lambda }})}{p_{m}({{\lambda }}^{0})}\). Assuming that the deviation in parameters from the ideal, \(\Delta {{\lambda }}={{\lambda }}-{{\lambda }}^{0}\), is small, we expand the KL divergence to second order to obtain

\[D_{\mathrm{KL}}\bigl(p({{\lambda }})||p\bigl({{\lambda }}^{0}\bigr)\bigr) \approx \frac{1}{2}\, \Delta{{\lambda }}^{\mathsf{T}} F\bigl({{\lambda }}^{0}\bigr)\, \Delta{{\lambda }}. \quad (1)\]
The positive semidefinite matrix *F* is the Fisher information matrix (FIM) for the model, whose elements are given by [15]:

\[F_{kl}\bigl({{\lambda }}^{0}\bigr) = \sum_{m=1}^{M} \frac{1}{p_{m}({{\lambda }}^{0})}\, \frac{\partial p_{m}({{\lambda }})}{\partial\lambda_{k}} \bigg\vert_{{{\lambda }}={{\lambda }}^{0}}\, \frac{\partial p_{m}({{\lambda }})}{\partial\lambda_{l}} \bigg\vert_{{{\lambda }}={{\lambda }}^{0}}. \quad (2)\]
In Appendices 1 and 2 we describe how to compute the FIM for a quantum simulation model in closed-form, without using numerical approximations to derivatives. Note that even though we adopt the KL divergence to motivate the FIM, Čencov’s theorem states that the FIM is the unique Riemannian metric for the space of probability distributions under some mild conditions [16], and is therefore a general measure of the sensitivity of the parameterized outcome distribution around \({{\lambda }}^{0}\).
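As a sanity check of this quadratic expansion, consider a single-parameter Bernoulli toy model (our own illustration, not a model from the paper), for which the Fisher information is known in closed form:

```python
import numpy as np

def kl(p, q):
    """KL divergence D_KL(p || q) between discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def bernoulli(lam):
    """Two-outcome distribution p(lam) = (lam, 1 - lam)."""
    return np.array([lam, 1.0 - lam])

lam0, dlam = 0.3, 1e-3

# Closed-form Fisher information for the Bernoulli parameter:
# F = sum_m (dp_m/dlam)^2 / p_m = 1/lam0 + 1/(1 - lam0).
F = 1.0 / lam0 + 1.0 / (1.0 - lam0)

exact = kl(bernoulli(lam0 + dlam), bernoulli(lam0))
quadratic = 0.5 * F * dlam**2
print(exact, quadratic)  # agree to O(dlam^3)
```

The two numbers differ only at third order in the deviation, which is the content of Eq. (1).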

We first note that if the parameter deviations, Δ*λ*, are Gaussian distributed with zero mean then the expected KL divergence is, to second order, proportional to the trace of the FIM. This follows from Eq. (1), and the fact that \(\frac{1}{M}\sum_{i=1}^{M} z_{i}^{\mathsf{T}} A z_{i}\) is an estimate of the trace of *A* when the elements of \(z_{i}\) are independent, standard normal variables [17]. However, we are interested in not only obtaining such an average measure of AQS robustness, but also in understanding the factors that determine robustness, or lack thereof, of a particular model. For this purpose we turn to a spectral analysis of the FIM associated with a quantum simulation model. Consider the set of eigenvalues \(\zeta_{k}\) and eigenvectors \(v_{k}\) of *F*, with *k* indexing the eigenvalues in descending order. Since *F* is a symmetric matrix, we have \(F=\sum_{k=1}^{K} \zeta_{k} v_{k} v_{k}^{\dagger}\). Then the simulation error caused by the deviated parameter *λ* can be approximated to second order by \(\sum_{k=1}^{K} \frac{\zeta_{k}}{2} \vert v_{k}^{\dagger}\Delta\lambda \vert^{2}\). This error is influenced by two quantities: the magnitude of the eigenvalues, and the overlap of the eigenvectors with the parameter deviation. We can use this structure to quantify the robustness of AQS outputs to the system parameter deviations around the ideal \({{\lambda }}^{0}\).
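The trace-estimation fact invoked here is Hutchinson's estimator; a brief sketch, with a random positive semidefinite matrix standing in for the FIM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive semidefinite matrix standing in for the FIM.
K = 8
A = rng.standard_normal((K, K))
F = A @ A.T

# Hutchinson estimator: E[z^T F z] = tr(F) for z with i.i.d. standard
# normal entries, so the sample mean of z^T F z estimates the trace.
M = 200_000
Z = rng.standard_normal((M, K))
estimate = np.mean(np.sum((Z @ F) * Z, axis=1))

print(estimate, np.trace(F))  # close for large M
```

The estimator's variance scales with the squared Frobenius norm of *F*, so the sample mean converges quickly for well-conditioned matrices.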

A quantum simulation model is *trivially robust* to parameter deviations if all \(\zeta_{k} \approx0\); *i.e.*, \(F\approx0\). In the high temperature limit, \(\beta\rightarrow0\), we can show that \(F({{\lambda }}^{0}) \rightarrow0\) at the rate of \(\beta^{2}\) generically and so all models become trivially robust, see Appendix 5. This is expected since the equilibrium state becomes dominated by thermal fluctuations at high temperatures, and observables become insensitive to underlying Hamiltonian parameters.

A more interesting way a model can be robust is if the FIM possesses only a small number of dominant eigenvalues that are separated by orders of magnitude from other eigenvalues. In this case, only parameter deviations in the directions given by the eigenvectors of dominant eigenvalues affect the simulation results. For instance, if \(\zeta_{1}\) is the dominant eigenvalue, then the *composite parameter deviation* (CPD) \(v_{1}^{\dagger}\Delta\lambda\) has the major influence on simulation errors. We refer to AQS models that have FIMs with a few dominant eigenvalues separated by orders of magnitude from the rest as *sloppy models*. This terminology is adopted from statistical physics, where it has been recently established that a wide variety of physical models possess properties that are extremely insensitive to a majority of underlying model parameters, a phenomenon termed *parameter space compression* (PSC) [18, 19].

Model sloppiness is a prerequisite for non-trivial AQS robustness, since without this property an AQS can only be robust if most or all Hamiltonian parameters can be precisely controlled, an impractical task as quantum simulation models scale in size. In contrast, given a sloppy quantum simulation model, one only has to control and stabilize a few (≪*K*) influential CPDs. However, model sloppiness alone is not sufficient for AQS robustness since the practicality of controlling these influential CPDs has to be evaluated within the context of the particular AQS experiment at hand, including its control limitations and error model. In this work we aim for a general analysis and do not focus on any particular AQS implementation. Instead, we demonstrate that many quantum simulation models exhibit model sloppiness, the prerequisite for robustness, and how this can help to identify the parameters that must be controlled in order to produce reliable AQS predictions.

## Analyzing the FIM

A low rank FIM immediately indicates a sloppy model, and since the rank is an analytically accessible quantity, we can use the FIM rank to study model sloppiness beyond numerical simulations. In particular, in this section we discuss two useful methods for bounding the rank of the FIM for a quantum simulation model.

We begin by rewriting the FIM in a compact form. Define a matrix \(V\in \mathbb{R}^{K \times M}\), whose *km*-th entry is \(\frac{\partial p_{m}({{\lambda }})}{ \partial\lambda_{k}}\), and \(\Lambda= \operatorname {diag}\{p_{1}({{\lambda }}), p_{2}({{\lambda }}), \ldots, p_{M}({{\lambda }})\}\). Then the FIM can be written as \(F = V \Lambda^{-1} V^{\dagger}\). Here we assume that all \(p_{m}({{\lambda }})\) are non-zero. If some \(p_{m}({{\lambda }})\) equal zero, these diagonal elements of Λ and the corresponding columns of *V* should be removed.

This factorized form of the FIM immediately provides a useful bound on its rank. Notice that each row of *V* sums to zero (since the outcome probabilities sum to one), therefore the rank of *V* is at most \(M-1\), which is an upper bound on the rank of *F*. In many physical situations, it is common that the number of distinct measurement outcomes is much less than the number of model parameters, *i.e.*, \(M\ll K\). In this case, the rank bound of \(M-1\) immediately signals a sloppy model. An example that we shall encounter later is a spin-spin correlation function observable, for which \(M=2\) while *K* typically scales with *n*, the number of spins in the model.
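The factorization and the rank bound can be illustrated with a generic parameterized distribution (a softmax toy model of ours, purely for illustration, with the derivative matrix *V* formed by finite differences):

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 6, 3  # K parameters, M measurement outcomes (M << K is common)

W = rng.standard_normal((M, K))

def probs(lam):
    """Toy parameterized distribution over M outcomes (softmax)."""
    z = W @ lam
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lam0 = 0.3 * rng.standard_normal(K)

# Finite-difference matrix V with V[k, m] = dp_m / dlam_k.
h = 1e-5
V = np.array([(probs(lam0 + h * e) - probs(lam0 - h * e)) / (2 * h)
              for e in np.eye(K)])

# Rows of V sum to zero because the p_m sum to one, so rank(V) <= M - 1,
# and the same bound holds for F = V diag(1/p) V^T.
F = V @ np.diag(1.0 / probs(lam0)) @ V.T
rank = np.linalg.matrix_rank(F, tol=1e-6)
print(rank)  # at most M - 1 = 2
```

Even with \(K=6\) parameters, the FIM here has rank at most 2, mirroring the \(M \ll K\) situation described above.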

Next we will show that fundamental symmetries of the quantum simulation model can reduce the rank of the FIM, and further, that symmetries can be used to deduce the structure of the FIM eigenvectors and characterize the influential CPDs. To do this, we define the symmetry group of a quantum simulation model, *G*, as the largest set of symmetries shared by the Hamiltonian and the observable in the model - *i.e.*, the maximal group of space transformations that leave the Hamiltonian and the observable invariant. Let \(\{U_{g}\}_{g\in G}\) be a faithful unitary representation of this symmetry group for the quantum simulation model,^{Footnote 2} and suppose \(U_{g} H_{k} U_{g}^{\dagger}= H_{j}\) for some *k*, *j*, *g*. Then in Appendix 3 we show that \(\frac{\partial p_{m}({{\lambda }})}{\partial\lambda_{k}} = \frac {\partial p_{m}({{\lambda }})}{\partial\lambda_{j}}\) for all *m*, under ground or thermal states. Therefore, the spatial symmetry of the model leads to identical rows in *V*, and we see an immediate connection between model symmetry and model sloppiness: a high degree of symmetry yields a significant redundancy in the FIM and only a few non-zero eigenvalues.

This observation suggests a constructive procedure to formulate an upper bound on the rank of FIM based on model symmetries. Specifically, compute the orbit of \(H_{k}\) under the symmetry group for the quantum simulation model; *i.e.*, \(\{ U_{g} H_{k} U_{g}^{\dagger}\vert g \in G\}\), for all \(1 \leq k \leq K\). The number of orbits will be the maximum number of distinct rows in the matrix *V*, and therefore provides an upper bound to the rank of the FIM.
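This orbit computation is mechanical; a sketch for the uniform 1D Ising ring under lattice translations (the term labels and helper functions are ours, standing in for the operators \(\sigma_{z}^{i}\) and \(\sigma_{x}^{i}\sigma_{x}^{i+1}\)):

```python
n = 10

# Hamiltonian terms labeled abstractly: ('B', i) for the field term on
# site i, ('J', i) for the coupling between sites i and i+1 (periodic).
terms = [('B', i) for i in range(n)] + [('J', i) for i in range(n)]

def translate(term, t):
    """Action of the lattice translation by t sites on a term label."""
    kind, i = term
    return (kind, (i + t) % n)

def orbits(terms, group_actions):
    """Partition terms into orbits under a list of group actions."""
    remaining, out = set(terms), []
    while remaining:
        seed = next(iter(remaining))
        orbit = {g(seed) for g in group_actions}
        remaining -= orbit
        out.append(orbit)
    return out

actions = [lambda term, t=t: translate(term, t) for t in range(n)]
orbs = orbits(terms, actions)
print(len(orbs))  # 2 orbits: all field terms, and all coupling terms
```

Two orbits for 2*n* parameters immediately bounds the FIM rank by two, independent of *n*.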

The repeated rows in *V* resulting from model symmetries also inform us about the structure of the eigenvectors of the FIM, and as a result, the structure of the influential CPDs. Explicitly, the CPD takes the form (see Appendix 4):

\[v_{k}^{\dagger}\Delta{{\lambda }} = \sum_{s} \mu^{k}_{s}\bigl({{\lambda }}^{0}, \beta\bigr) \sum_{j \in \mathcal{O}_{s}} \Delta\lambda_{j}, \quad (3)\]
where *s* indexes the unique orbits, and \(\mu^{k}_{s}\) is a scalar dependent on the orbit, nominal parameter values and temperature. Although the forms of the CPDs are always determined by the eigenvectors of *F* and therefore by the symmetries of the model, *i.e.*, Eq. (3), the coefficients \(\mu^{k}_{s}({{\lambda }}^{0}, \beta)\) are temperature-dependent and the structure of the CPD can simplify further if these coefficients become alike or approach zero as temperature changes. We will encounter instances of this in the next section.

## Applications

In this section we use the rank bounds derived above and numerical simulations to understand the sloppiness and robustness of several quantum simulation models. In addition to the applications presented here, we analyze several other quantum simulation models in Appendix 7.

### 1D transverse-field Ising model

The well-known transverse-field Ising model in one dimension (1D-TFIM) is described by the Hamiltonian:

\[H_{1} = \sum_{i=1}^{n} B_{i}\, \sigma_{z}^{i} + \sum_{i=1}^{n} J_{i}\, \sigma_{x}^{i} \sigma_{x}^{i+1},\]
where \(\sigma_{\alpha}^{i}\) is a Pauli operator acting on spin *i* with \(\alpha=x\), *y*, or *z*, normalized such that \(\{\sigma_{\alpha}, \sigma_{\beta}\} = \delta_{\alpha\beta} \frac{I}{2}\). We are interested in the uniform version of this model with \(B^{0}_{i}=B^{0}\) and \(J^{0}_{i}=J^{0}\) for all *i*; however, when this model is simulated by an AQS, the actual values of \(B_{i}\) and \(J_{i}\) may fluctuate around these nominal values. The boundary conditions for this model can be either periodic, *i.e.*, \(\sigma_{x}^{n+1} \equiv \sigma_{x}^{1}\), in which case the Hamiltonian will be denoted as \(H_{1}^{\mathrm{per}}\); or open, *i.e.*, \(J_{n}=0\), in which case the Hamiltonian will be denoted as \(H_{1}^{\mathrm{open}}\). Although this model is efficiently solvable [20–22], its role as a paradigmatic quantum many-body model with a non-trivial phase diagram makes it a useful benchmark for quantum simulation. Moreover, it exhibits many generic phenomena related to robust AQS, as we will show below.

Two observables of interest in this model are the net transverse magnetization \(S_{z} = \sum_{i=1}^{n} \sigma_{z}^{i}\) and two-point correlation functions \(C_{z}(i,j) = \sigma_{z}^{i}\sigma_{z}^{j}\). It is feasible to measure these observables experimentally, and importantly, they probe the magnetic order in the system. For example, both of these observables can be used to characterize a quantum phase transition that occurs in the ground state of the uniform 1D-TFIM when swept past its quantum critical point at \(J^{0}/2B^{0}=1\) [23].

First we consider the quantum model \(\{H_{1}^{\mathrm{per}}, S_{z}\}\) with fixed \(J^{0}\), and sweep the parameter \(B^{0}\) to explore the behavior of the model across its phase diagram. This quantum simulation model has full translational invariance. The orbit of any \(\sigma_{z}^{i}\) under the (lattice) translation group contains all \(\sigma_{z}^{j}\), \(1\leq j \leq n\), and the orbit of any \(\sigma_{x}^{i}\sigma_{x}^{i+1}\) contains all \(\sigma_{x}^{j}\sigma_{x}^{j+1}, 1 \leq j \leq n\). Consequently, we can prove that

\[\frac{\partial p_{m}({{\lambda }})}{\partial B_{i}} = \frac{\partial p_{m}({{\lambda }})}{\partial B_{j}}, \qquad \frac{\partial p_{m}({{\lambda }})}{\partial J_{i}} = \frac{\partial p_{m}({{\lambda }})}{\partial J_{j}},\]
for all *m* and \(1 \leq i, j \leq n\); that is, all the rows in *V* corresponding to *B* and *J* are identical, respectively. Hence, an upper bound on the rank for the FIM of this model is 2, for all possible \(J^{0}\), \(B^{0}\), *β*, and *n*. This is a very sloppy model, especially for large *n*.
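This bound can be checked numerically at small scale. The sketch below is our own illustration: it uses full Pauli matrices and an assumed sign convention (which does not affect the rank argument), builds a 4-spin periodic TFIM, computes the outcome distribution of \(S_z\) under a thermal state, and forms the FIM by finite differences:

```python
import numpy as np

n = 4          # spins, periodic boundary
beta = 2.0     # inverse temperature

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site):
    """Embed a single-spin operator at `site` in the n-spin Hilbert space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

Zs = [op(sz, i) for i in range(n)]
XXs = [op(sx, i) @ op(sx, (i + 1) % n) for i in range(n)]

def outcome_probs(lam):
    """Distribution of S_z = sum_i sigma_z^i under the thermal state of
    H(lam) = sum_i lam[i] Z_i + sum_i lam[n+i] X_i X_{i+1}."""
    H = sum(lam[i] * Zs[i] + lam[n + i] * XXs[i] for i in range(n))
    w, U = np.linalg.eigh(H)
    e = np.exp(-beta * (w - w.min()))
    rho_diag = np.diag((U * (e / e.sum())) @ U.T)   # thermal populations
    sz_tot = np.rint(np.diag(sum(Zs)))              # S_z is diagonal here
    return np.array([rho_diag[sz_tot == m].sum() for m in np.unique(sz_tot)])

lam0 = np.array([1.0] * n + [0.6] * n)   # uniform B^0 = 1, J^0 = 0.6
h = 1e-5
V = np.array([(outcome_probs(lam0 + h * e) - outcome_probs(lam0 - h * e))
              / (2 * h) for e in np.eye(2 * n)])
F = V @ np.diag(1.0 / outcome_probs(lam0)) @ V.T

evals = np.sort(np.linalg.eigvalsh(F))[::-1]
print(evals[:3])   # rank(F) <= 2 up to finite-difference noise
```

At the uniform nominal point, translation symmetry makes all *B*-rows and all *J*-rows of *V* identical, so only two eigenvalues of the 8×8 FIM are significant.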

To illustrate this general result, in Figure 1 we show the eigenvalues of the FIM for a 10-spin 1D-TFIM with \(J^{0}=1\), as \(B^{0}\) is swept. The rank bound derived above is evident in this figure - there are two dominant eigenvalues - and the negligible eigenvalues shown in Figure 1 (gray lines) are actually numerical artifacts. In fact, the largest eigenvalue is also orders of magnitude above the second largest, except in the region of the quantum critical point, where the second eigenvalue approaches it (although still many orders of magnitude smaller).

The eigenvectors associated with the two dominant eigenvalues prescribe the parameter deviations that the model is most sensitive to, and due to the full translational invariance of the model we find that they exhibit a particularly simple structure (regardless of *β*). Namely, the two dominant eigenvectors take the form \([\mu, \ldots, \mu, \eta, \ldots, \eta]^{\mathsf{T}}\) and \([-\eta, \ldots, -\eta, \mu, \ldots, \mu ]^{\mathsf{T}}\), where *μ* and *η* are two scalars depending on the value of \(B^{0}\). This implies that across all phases, the model is sensitive only to the CPDs \(\sum_{i}\Delta B_{i}\) and \(\sum_{i} \Delta J_{i}\). Hence, this quantum simulation model will be robust to parameter deviations as long as these two sums are maintained at zero; *i.e.*, local fluctuations of the microscopic parameters that (spatially) average to zero are inconsequential.

Next we examine the AQS model \(\{H_{1}^{\mathrm{per}}, C_{z}(i,j)\}\) - *i.e.*, the 1D-TFIM with periodic boundary and a correlation function observable. Noticing that the observable has only two outcomes immediately indicates that the rank of *F* is at most one, and hence this model is also very sloppy, especially for large *n*. To illustrate this in Figure 2(a) we show eigenvalues of the FIM for a 10-spin example, with the observable being the correlation function \(C_{z}(2,6)\), for zero and intermediate temperature. As expected, only one eigenvalue is significant and all the others are zero up to numerical precision across the whole phase diagram (values of \(J^{0}/2B^{0}\)).

The structure of the dominant eigenvector is more complex in this case, since although the Hamiltonian is translationally invariant, the observable is not. The eigenvector structure can be extracted from symmetry considerations, but for simplicity we plot its components for the \(n=10\) case in Figure 2(b), (c), for \(\beta =\infty\), \(\beta=1\), respectively. Focusing on the zero temperature case first (Figure 2(b)), we see that the CPD takes the form \(\sum_{i=1}^{n} \mu_{i}(B^{0}) \Delta B_{i} + \sum_{i=1}^{n} \eta_{i}(B^{0})\Delta J_{i}\), where \(\mu_{i}(B^{0})\) and \(\eta_{i}(B^{0})\) are dependent on \(B^{0}\). Unlike for the previous quantum simulation model \(\{H_{1}^{\mathrm{per}}, S_{z}\}\), the linear combination of underlying model parameters that the AQS is sensitive to depends on \(B^{0}\), and this dependence differs across the 20 parameters. Another interesting aspect of Figure 2(b) is that away from the quantum critical point, the composite parameter is mostly composed of model parameter variations near the spins whose correlation is being evaluated. More specifically, the AQS model is most sensitive to \((\Delta B_{2}+\Delta B_{6})+(\Delta B_{1}+\Delta B_{3}+\Delta B_{5}+\Delta B_{7})/2\) and \((\Delta J_{1}+\Delta J_{2}+\Delta J_{5}+\Delta J_{6})\) (*i.e.*, the parameters local to spins involved in the correlation function \(C_{z}(2,6)\)). However, near the quantum critical point, all underlying parameter changes enter into the definition of the influential CPD. This is a novel manifestation of collective phenomena in quantum many-body systems: whereas local correlations are typically influenced only by local parameters, near a critical point they are influenced by all the parameters in the system.

The complexity of the influential CPD for this model is most evident when the system is in its ground state,^{Footnote 3} but these features persist for small finite temperatures also. However, as shown in Figure 2(c), the structure of the CPD simplifies with increased simulation temperature. The sensitivity to all parameter variations in the model around the region near the quantum critical point disappears at intermediate temperature, as expected, since thermal fluctuations overwhelm signatures of quantum criticality as the temperature increases [24]. Moreover, the influential CPD becomes composed of only the parameter changes at the spins involved in the correlation function (\(\Delta B_{2}+\Delta B_{6}\) and \(\Delta J_{1}+\Delta J_{2}+\Delta J_{5}+\Delta J_{6}\)) across the whole phase diagram.

We pause to reflect on the differences between the two models examined so far. Whereas \(\{H_{1}^{\mathrm{per}}, S_{z}\}\) and \(\{H_{1}^{\mathrm{per}}, C_{z}(i,j)\}\) are both sloppy quantum simulation models, the influential CPD for the former is much simpler in form: it remains invariant across the phase diagram and with varying temperature. An immediate consequence is that if the goal of a quantum simulation of the 1D-TFIM is to characterize the phase diagram and the phase transition, one should utilize the transverse magnetization as an experimental observable as opposed to correlation functions, since the former is more robust to independent local parameter fluctuations. Another option is to probe the site-averaged correlation function (\(\bar{C}_{z}(j) = \frac{1}{n}\sum_{i} \sigma_{z}^{i} \sigma_{z}^{i+j}\)), in which case the translational invariance - and consequently the robustness to independent local parameter fluctuations - of the quantum simulation model is restored.

To study a model with a lower degree of symmetry, we now turn to the 1D-TFIM with open boundary conditions, with the observable of interest again being the transverse magnetization; *i.e.*, the quantum simulation model \(\{H_{1}^{\mathrm{open}}, S_{z}\}\). This model is no longer translationally invariant, but has reflection symmetry about the center spin (for odd *n*) or center coupling (for even *n*). Under this symmetry, each orbit contains at most two elements - *e.g.*, the orbit of \(\sigma_{z}^{j}\) contains itself and \(\sigma_{z}^{n+1-j}\) - and hence an upper bound on the rank of the \((2n-1) \times(2n-1)\) matrix *F* is *n*. In this case symmetry considerations do not completely reveal the sloppiness of the model; that is, the FIM rank bound is weak, since *n* is not much smaller than \(2n-1\). We explicitly calculate the FIM for this model with \(n=10\) at low temperature, and Figure 3(a) shows its eigenvalues as a function of \(B^{0}\). As expected from the symmetry rank bound, the model has at most \(n=10\) eigenvalues that are nonzero (within numerical precision). Furthermore, the first eigenvalue is several orders of magnitude larger than the others across all phases, although there is a pronounced aggregation of eigenvalues around the quantum critical point. Hence the model is sloppy, although not to the same degree as the previous two models examined. The influential CPDs for this model take the form:

\[\sum_{i} \mu_{i} \bigl(\Delta B_{i} + \Delta B_{n+1-i}\bigr) + \sum_{i} \eta_{i} \bigl(\Delta J_{i} + \Delta J_{n-i}\bigr),\]
where \(\mu_{i}\) and \(\eta_{i}\) are \(B^{0}\)-dependent real numbers. Therefore this model is robust to parameter fluctuations that are negatively correlated across its center spin (or coupling for even *n*). As a result of the complexity of these CPDs and the overall lower degree of sloppiness, we conclude that an AQS implementation of this model will be less robust to parameter fluctuations than the previous two 1D-TFIM models considered.

### 2D transverse-field Ising model

Now we study the uniform 2D-TFIM on an \(n\times n\) square lattice:

\[H_{2} = \sum_{i} B_{i}\, \sigma_{z}^{i} + \sum_{\langle i,j \rangle} J_{ij}\, \sigma_{x}^{i} \sigma_{x}^{j},\]
with net magnetization \(S_{z}\) as the observable of interest. Here \(\langle i,j \rangle\) indicates coupling between neighboring spins on a square lattice. We consider open boundary conditions and the uniform nominal operating point \(B_{i}=B^{0}\) and \(J_{ij}=J^{0}\). In this case the model has two types of planar symmetries: rotational symmetry about the center of the lattice and mirror reflection symmetry about four reflection lines. The net magnetization observable is invariant under the above symmetries. This is not an exactly solvable model as in the 1D-TFIM case and is therefore of more fundamental interest for AQS.

Several local terms (\(\sigma_{z}^{i}\)) and coupling terms (\(\sigma_{x}^{i} \sigma_{x}^{j}\)) in the Hamiltonian are mapped to the same orbit under the action of the symmetry transformations for \(\{H_{2}, S_{z}\}\). For example, Figure 4 shows the lattice sites and couplings that lie in the same orbit for a \(3\times3\) lattice. There are a total of five distinct orbits in this case, and thus the rank of the \(21\times21\) FIM is upper bounded by five. Also, according to Eq. (3), fluctuations of the local magnetic fields or spin-spin couplings that act on identically colored sites or edges in Figure 4 will be grouped together in the influential CPD. Explicit computations of eigenvalues and CPDs for this model are included in Appendix 7.1.
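The orbit count for the \(3\times3\) lattice can be verified mechanically; a sketch (coordinates and helper functions are ours) enumerating site and edge orbits under the symmetry group of the square:

```python
from itertools import product

n = 3  # 3x3 lattice, open boundary conditions

def rot(site):
    """Rotate the lattice 90 degrees about its center."""
    x, y = site
    return (y, n - 1 - x)

def refl(site):
    """Reflect the lattice about a vertical axis through the center."""
    x, y = site
    return (n - 1 - x, y)

def make_element(r, m):
    """Group element: r quarter-turns, optionally followed by a reflection."""
    def g(site):
        for _ in range(r):
            site = rot(site)
        return refl(site) if m else site
    return g

G = [make_element(r, m) for r in range(4) for m in (0, 1)]  # 8 elements of D4

sites = list(product(range(n), repeat=2))
edges = [frozenset({(x, y), (x + dx, y + dy)})
         for (x, y) in sites for dx, dy in ((1, 0), (0, 1))
         if x + dx < n and y + dy < n]

def orbit_count(items, apply_g):
    """Number of distinct orbits of `items` under the group G."""
    seen, count = set(), 0
    for it in items:
        if it not in seen:
            count += 1
            seen |= {apply_g(g, it) for g in G}
    return count

site_orbits = orbit_count(sites, lambda g, s: g(s))
edge_orbits = orbit_count(edges, lambda g, e: frozenset(g(s) for s in e))
print(site_orbits, edge_orbits)  # 3 site orbits + 2 edge orbits = 5
```

The three site orbits are the center, the corners, and the edge-centers; the two edge orbits separate couplings touching a corner from couplings touching the center.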

### Fermi-Hubbard model

The Fermi-Hubbard Hamiltonian, a minimal model of interacting electrons in materials, is of significant interest to the AQS community since it is thought that understanding emergent properties of this model could explain some high-\(T_{c}\) superconducting materials [25]. The Hamiltonian takes the form:

\[H_{3} = -\sum_{\langle i,j \rangle, \sigma} t_{ij} \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \bigr) + \sum_{i} U_{i}\, n_{i\uparrow} n_{i\downarrow},\]
where \(c^{\dagger}_{i\sigma} (c_{i\sigma})\) creates (annihilates) an electron with spin \(\sigma\in\{ \uparrow, \downarrow\}\) on site *i*, and \(n_{i\sigma} = c^{\dagger}_{i\sigma}c_{i\sigma}\) is the electron number operator for spin *σ* on site *i*. We consider this Hamiltonian defined over a two-dimensional lattice, and the \(\langle i,j \rangle\) indicates that the first sum runs over nearest-neighbor sites. Moreover, \(t_{ij}\) represents the coupling energy between sites that induces hopping of electrons, and \(U_{i}>0\) represents the repulsive energy between two electrons on the same site. We are interested in the uniform version of this Hamiltonian with nominal parameters \(U_{i}=U^{0}\), for all *i* and \(t_{ij} = t^{0}\), for all *i*, *j*. The observable of interest is the double occupancy fraction, \(D = \frac{2}{n}\sum_{i} n_{i\uparrow }n_{i\downarrow}\), where *n* is the total number of sites, which for example can be used to probe metal-to-insulator transitions in this model.

In Figure 5 we show FIM properties for this AQS on a \(2\times 3\) lattice with periodic boundary conditions. We show results from simulations of the Hubbard model at half-filling (\(\sum_{i} n_{i\uparrow} = \sum_{i} n_{i\downarrow} = 3\)), but the results are qualitatively the same for the slightly doped cases as well. Figure 5(a) shows sites and coupling energies that lie within the same orbit under symmetry transformations for this model, which are lattice translations in the *x* and *y* directions. All Hamiltonian terms that act locally are mapped between each other, as are all vertical couplings and all horizontal couplings, respectively; thus there are three distinct orbits for this model, implying an upper bound of 3 on the rank of the FIM. Figure 5(b) shows eigenvalues of the model with \(t^{0}=1\), as a function of \(U^{0}\). As expected, there are always at most three non-zero eigenvalues (to numerical precision) and the model is extremely sloppy. In contrast to the models examined so far, the low temperature version of this model is sloppier than the intermediate temperature version. Finally, Figure 5(c) confirms that the influential CPDs take the form expected from the symmetry analysis, with the model showing sensitivity only to the sum of local fluctuations \(\sum_{i} \Delta U_{i}\), the sum of vertical coupling fluctuations, and the sum of horizontal coupling fluctuations.

## Scaling to large systems

Quantum simulation is most compelling for large-scale quantum models since difficulty of classical simulation typically increases exponentially with the model scale.^{Footnote 4} Obviously, evaluation of model robustness through classical computation of the FIM is not possible for large-scale models. However, we will show how analysis of small-scale systems can be bootstrapped by various techniques to draw useful conclusions about their large-scale versions.

First, we note that the bounds on the rank of the FIM that we derived earlier can be useful for models of any scale. For example, the rank bound derived from symmetry considerations allows us to determine the sloppiness of the quantum simulation model \(\{H_{1}^{\mathrm{per}}, S_{z}\}\) at *any* scale (*i.e.*, for any number of spins); and further, symmetry considerations yield the form of the CPD that the model is sensitive to. More generally, we observe that the FIM for any quantum simulation model is greatly simplified by translational invariance, and this can be used to determine sloppiness of the model at any scale. Consider a general (finite-dimensional) translationally invariant Hamiltonian \(H_{g} = \sum_{\alpha=1}^{A} \sum_{\mathcal{N}} \lambda^{\alpha}_{\mathcal{N}} H^{\alpha}_{\mathcal{N}}\), where \(H^{\alpha}_{\mathcal{N}}\) is an operator acting on degrees of freedom in the spatial neighborhood \(\mathcal{N}\), and of type *α*. As an example, consider the following general spin-\(1/2\) Hamiltonian on a 3D lattice with nearest-neighbor interactions and periodic boundary conditions in all directions:

\[H_{4} = \sum_{\alpha\in\{x,y,z\}} \sum_{i} \lambda^{\alpha}_{i}\, \sigma_{\alpha}^{i} + \sum_{\alpha\in\{x,y,z\}} \sum_{\langle i,j \rangle} \lambda^{\alpha\alpha}_{ij}\, \sigma_{\alpha}^{i} \sigma_{\alpha}^{j},\]
where \(\langle i,j \rangle\) indicates the sum runs over nearest neighbors in all three directions. Here \(\alpha\in\{x,y,z, xx, yy, zz\}\) and the neighborhoods are local sites or edges of the 3D lattice. Translational invariance implies that under the action of the translation symmetry group for these models, all Hamiltonian terms of a given type *α* lie in the same orbit. Therefore, the number of orbits is the same as the number of types of interaction, and assuming that the observable of interest is also translationally invariant, *A* is an upper bound on the rank of the FIM for such models at *any* scale. Thus such models are guaranteed to be sloppy, except at very small scales (where the number of parameters is comparable to *A*). Furthermore, the AQS will be most susceptible to the CPDs \(\sum\Delta \lambda^{\alpha}_{\mathcal{N}}\) for each *α*. For example, for the spin-\(1/2\) Hamiltonian \(H_{4}\) above, if the observable is also translationally invariant, *e.g.*, \(S_{x}\), \(S_{y}\) or \(S_{z}\), then the FIM for this quantum simulation model will have rank at most 6, for *any* number of spins. Note that this example covers a wide range of models including tilted and transverse field Ising models and a variety of Heisenberg models.

The rank bound obtained by counting the number of observable outcomes is also useful in determining sloppiness at any scale. For example, the spin-\(1/2\) correlation \(C_{\alpha}(i,j) =\sigma_{\alpha}^{i} \sigma _{\alpha}^{j}\) has only two possible outcomes ±1, thus the FIM rank is always one, regardless of the Hamiltonian and number of spins. Unfortunately, this bound does not also inform us about the structure of the CPD that the model is sensitive to.
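The rank-one claim follows directly from normalization. Writing the FIM as the classical Fisher information of the two-outcome distribution with probabilities \(p\) and \(1-p\), we have \(\partial_{\lambda_{k}}(1-p) = -\partial_{\lambda_{k}} p\), so

$$F = \sum_{m=1}^{2} \frac{\nabla_{{\lambda}} p_{m} \, \nabla_{{\lambda}} p_{m}^{\mathsf{T}}}{p_{m}} = \biggl( \frac{1}{p} + \frac{1}{1-p} \biggr) \nabla_{{\lambda}} p \, \nabla_{{\lambda}} p^{\mathsf{T}} = \frac{\nabla_{{\lambda}} p \, \nabla_{{\lambda}} p^{\mathsf{T}}}{p(1-p)},$$

an outer product and hence a matrix of rank at most one. More generally, the constraint \(\sum_{m} p_{m}=1\) bounds the FIM rank by \(M-1\) for an observable with *M* outcomes. The single influential direction \(\nabla_{{\lambda}} p\) depends on the details of the model, which is why this counting bound, unlike the symmetry analysis, does not reveal the structure of the CPD.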

Second, even in cases where a complete symmetry analysis is not possible, an analysis of the small-scale model can be informative about the robustness of the corresponding large-scale model. In particular, since the form of the CPDs is determined by symmetries of the model, one can extrapolate the form of the CPDs from small-scale models to their large-scale versions. For example, for the model \(\{H_{1}^{\mathrm{per}}, C_{z}(i,j)\}\) studied above, we can examine large-scale behavior by using the well-known exact solution to the 1D-TFIM [20, 21] (see Appendix 6 for details), and confirm that the form of the influential CPD remains the same at large *n* as for the small-scale version. In Figure 6 we plot entries of the dominant eigenvector for the model \(\{H_{1}^{\mathrm{per}}, C_{z}(2,10)\}\) for \(n=70\) spins in the ground state. The influential CPD is mostly composed of parameters around the spins whose correlation function is being evaluated, except near the quantum critical point, where other parameters also contribute. These trends agree with results for the small-scale version of the model shown in Figure 2(b).

Third, we note that in some cases we can approximate a quantum simulation model with one of higher symmetry in order to gain more information from the FIM. An example of such an approximation is the common practice of imposing periodic boundary conditions on finite lattices in order to make calculations tractable. This approximation can also be useful for assessing robustness of large-scale models using our approach. To illustrate this, we turn to the exact solution of the 1D-TFIM again, and confirm that the model \(\{H_{1}^{\mathrm{open}}, S_{z}\}\) can be approximated by \(\{H_{1}^{\mathrm{per}}, S_{z}\}\) as the number of spins increases. Our numerical investigations show that when *n* is large, *e.g.*, \(n>50\), the largest eigenvalues of the FIMs for these two models become almost identical, and the forms of the influential CPDs for the two models approach each other. Hence for some large-scale models one can infer sloppiness and robustness from analysis of approximations with a higher degree of symmetry. Of course such approximations are not always possible and one should be aware of their accuracy across parameter regimes.

Finally, we pose a conjecture regarding the behavior of sloppiness with scale: if a small-scale AQS model with a lattice quantum many-body Hamiltonian is sloppy, then its large-scale version will also be sloppy. Although we currently lack a proof of this statement, it is well supported by numerical evidence. For example, consider the model \(\{H_{1}^{\mathrm{open}}, S_{z}\}\) that was shown to be sloppy at small scales earlier. By utilizing the exact solution to the 1D-TFIM, we can analytically calculate the FIM for a large number of spins. We choose \(B^{0}=0.45\), \(J^{0}=1\), and \(\beta=1\), and in Figure 7 plot the largest 10 eigenvalues of the FIM for this model as a function of the number of spins, *n*. The model remains sloppy across all scales that were simulated.

## Discussion

We have developed and applied a formalism for analyzing the robustness of analog quantum simulators. Many quantum many-body models are potentially robust for AQS, especially if they possess a high degree of symmetry, which we have shown leads to model sloppiness, a necessary condition for robustness. In addition, our techniques allow one to determine which underlying parameter(s) impact simulation results the most, which could help to focus experimental effort when designing AQS platforms. In a sense, our work can be thought of as providing a formal justification of the commonly encountered intuition that *bulk properties should be immune to microscopic fluctuations*, and elucidating the connection between this intuition and system symmetries.

For brevity we have only presented results from applying our approach to uniform models above. However, we have analyzed a large variety of more general models, including ones with random parameters and long-range couplings, and some of the results from these studies are presented in Appendix 7. Application of our approach to these more complex cases with less symmetry illustrates how *any* symmetries in the underlying ideal model can be exploited to understand sloppiness and robustness. While nearly all the quantum simulation models we studied were sloppy (the exception being models with complete disorder, *i.e.*, random parameters), in some cases the influential CPD is complex, and engineering robust AQS for these models could be challenging. This finding is mirrored by the ubiquity of sloppiness in the classical models studied by Sethna *et al.* [18, 19].

The intent of this work is to introduce the notion of sloppy models to AQS, demonstrate its relation to robust simulation and illustrate that certain quantum simulation models can be robust to uncertainties in parameters. There are many promising directions to extend this work. For example, while we have focused on AQS that prepare ground or thermal states of quantum many-body models, the approach can be extended to analyze quantum simulations that predict dynamic properties of quantum models by considering probability distributions for the dynamical variables of interest. Finally, we have restricted ourselves in this work to investigating the robustness of analog simulation of Hamiltonian models with calibration uncertainties because these uncertainties can in fact dominate the behavior of existing cold-atom analog quantum simulation platforms, *e.g.*, [7–10], where decoherence due to environmental coupling is very small. However, for a complete picture of robustness, it is desirable to extend this analysis to diagnose robustness of quantum simulation models with decoherence.

## Notes

- 1.
This decomposition of an observable into a set of operators that represent measurement outcomes (or more formally, POVM elements [30]), is not unique. However, there will be an experimentally relevant decomposition dictated by the experimental apparatus used to probe the AQS. Our results are not dependent on the particular decomposition chosen and for concreteness we work with the decomposition given here.

- 2.
Explicit unitary representations of symmetry groups for several quantum simulation models are presented in Appendix 8.

- 3.
This is the reason we present results for the system at zero temperature for this example (instead of \(\beta=10\) which is our low temperature case in the other examples).

- 4.
We assume there is some natural notion of scaling of a model that maintains its symmetries, *e.g.*, increasing the number of spins in a spin lattice model while maintaining the coupling configurations.

- 5.
Conventionally the parameters in this model are \(J_{1}\) (instead of \(J^{0}\)) and \(J_{2}\) (instead of \(K^{0}\)), and hence the name for the model. However, to simplify notation, we use the above parameter names.

## References

- 1.
Feynman RP. Int J Theor Phys. 1982;21:467.

- 2.
Lanyon BP, Hempel C, Nigg D, Müller M, Gerritsma R, Zähringer F, Schindler P, Barreiro JT, Rambach M, Kirchmair G, et al. Science. 2011;334:57.

- 3.
Barends R, Lamata L, Kelly J, García-Álvarez L, Fowler AG, Megrant A, Jeffrey E, White TC, Sank D, Mutus JY, et al. Nat Commun. 2015;6:7654.

- 4.
Barends R, Shabani A, Lamata L, Kelly J, Mezzacapo A, Las Heras U, Babbush R, Fowler AG, Campbell B, Chen Y, et al. Nature. 2016;534:222.

- 5.
Johnson TH, Clark S, Jaksch D. EPJ Quantum Technol. 2014;1:10.

- 6.
Kim K, Chang MS, Korenblit S, Islam R, Edwards EE, Freericks JK, Lin G-D, Duan L-M, Monroe C. Nature. 2010;465:590.

- 7.
Simon J, Bakr WS, Ma R, Tai ME, Preiss PM, Greiner M. Nature. 2011;472:307.

- 8.
Trotzky S, Chen YA, Flesch A, McCulloch IP, Schollwöck U, Eisert J, Bloch I. Nat Phys. 2012;8:325.

- 9.
Greif D, Uehlinger T, Jotzu G, Tarruell L, Esslinger T. Science. 2013;340:1307.

- 10.
Hart RA, Duarte PM, Yang T-L, Liu X, Paiva T, Khatami E, Scalettar RT, Trivedi N, Huse DA, Hulet RG. Nature. 2015;519:211.

- 11.
Hauke P, Cucchietti FM, Tagliacozzo L, Deutsch I, Lewenstein M. Rep Prog Phys. 2012;75:082401.

- 12.
Zanardi P, Campos Venuti L, Giorda P. Phys Rev A. 2007;76:062318.

- 13.
Zanardi P, Paris M, Campos Venuti L. Phys Rev A. 2008;78:042105.

- 14.
Invernizzi C, Paris MGA. J Mod Opt. 2010;57:198.

- 15.
Cover TM, Thomas JA. Elements of information theory. New York: Wiley; 1991.

- 16.
Campbell LL. Inf Sci. 1985;35:199.

- 17.
Avron H, Toledo S. J ACM. 2011;58(8):1.

- 18.
Machta BB, Chachra R, Transtrum MK, Sethna JP. Science. 2013;342:604.

- 19.
Transtrum MK, Machta BB, Brown KS, Daniels BC, Myers CR, Sethna JP. J Chem Phys. 2015;143:010901.

- 20.
Pfeuty P. Ann Phys. 1970;57:79.

- 21.
Lieb EH, Schultz T, Mattis D. Ann Phys. 1961;16:407.

- 22.
Katsura S. Phys Rev. 1962;127:1508.

- 23.
Chakrabarti BK, Dutta A, Sen P. Quantum Ising phases and transitions in transverse Ising models. Lecture notes in physics monographs. vol. 41. Berlin: Springer; 1996.

- 24.
Cuccoli A, Taiti A, Vaia R, Verrucchi P. Phys Rev B. 2007;76:064405.

- 25.
LeBlanc JPF, Antipov AE, Becca F, Bulik IW, Chan GK-L, Chung C-M, Deng Y, Ferrero M, Henderson TM, Jiménez-Hoyos CA, et al. Phys Rev X. 2015;5:041041.

- 26.
Najfeld I, Havel TF. Adv Appl Math. 1995;16:321.

- 27.
Magnus JR. Econom Theory. 1985;1:179.

- 28.
Horn RA, Johnson CR. Matrix analysis. Cambridge: Cambridge University Press; 1985.

- 29.
Morita S, Kaneko R, Imada M. J Phys Soc Jpn. 2015;84:024720.

- 30.
Nielsen MA, Chuang IL. Quantum computation and quantum information. Cambridge: Cambridge University Press; 2001.

## Acknowledgements

MS would like to acknowledge helpful conversations on the topic of robust analog quantum simulation with Jonathan Moussa, Ivan Deutsch, Robin Blume-Kohout, and Kevin Young. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy’s National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. JZ thanks the financial support from NSFC under Grant No. 61673264 and 61533012, and State Key Laboratory of Precision Spectroscopy, ECNU, China.

## Additional information

### Competing interests

All authors declare that they have no competing interests.

### Authors’ contributions

MS conceived the project and approach. JZ developed the analytical expressions for the FIM, the structure of its eigenvectors, and all computational aspects of the 1D-TFIM. MS and JZ developed the symmetry analysis of the FIM rank. All authors contributed to the numerical simulations. MS and JZ jointly wrote and edited the manuscript.

Mohan Sarovar and Jun Zhang contributed equally to this work.

## Appendices

### Appendix 1: Calculation of FIM for thermal states

We can analytically simplify the partial derivatives required to compute the FIM when the system is in a thermal state \(\varrho({\lambda})={e^{-\beta H({{\lambda }})}}/{\mathcal{Z}}\), where \(\mathcal{Z}=\operatorname {tr}e^{-\beta H({{\lambda }})}\). Now we have

In order to calculate \(\frac{\partial e^{-\beta H({{\lambda }})}}{\partial \lambda_{k}}\), we utilize Eq. (78) in Ref. [26] to obtain:

Note that we drop the *λ*-dependence when it is clear from the context. Now we diagonalize the Hamiltonian as

where *T* is a unitary matrix of eigenvectors and \(\Gamma= \textrm{diag}\{\gamma_{1}, \gamma_{2}, \ldots\}\) is a diagonal matrix of eigenvalues. Substituting this decomposition into Eq. (10), we get

where ⊙ denotes the Hadamard product, *i.e.*, element-wise product, and \({\Theta}_{pq}(\tau)= e^{(\gamma_{q}-\gamma_{p})\tau}\) is the *pq*-th element of Θ. The *τ* dependence is entirely in this matrix, and therefore we can evaluate this integral to yield:

where Φ is a matrix with elements:

Consequently,

Inserting these expressions into Eq. (9) allows us to evaluate the derivatives required to calculate the FIM for thermal states in a manner that is numerically stable.
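As a sanity check, the Hadamard-product construction can be validated against finite differences on a random Hermitian family \(H(\lambda) = H_{0} + \lambda H_{k}\). The convention used below, \(\Phi_{pq} = (e^{-\beta\gamma_{p}} - e^{-\beta\gamma_{q}})/(\gamma_{p} - \gamma_{q})\) with diagonal limit \(-\beta e^{-\beta\gamma_{p}}\), is one consistent choice derived from the integral representation above; the paper's indexing may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 6, 0.7

def herm():
    """Random d-by-d Hermitian matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H0, Hk = herm(), herm()                      # H(lambda) = H0 + lambda * Hk

def mexp(H):
    """e^{-beta H} via eigendecomposition."""
    g, T = np.linalg.eigh(H)
    return (T * np.exp(-beta * g)) @ T.conj().T

def d_mexp(lam):
    """d/dlambda e^{-beta H(lambda)} via the eigenbasis Hadamard-product formula."""
    g, T = np.linalg.eigh(H0 + lam * Hk)     # H = T Gamma T^dagger
    e = np.exp(-beta * g)
    dg = g[:, None] - g[None, :]
    # Phi_pq = (e^{-b g_p} - e^{-b g_q})/(g_p - g_q); diagonal limit -beta e^{-b g_p}
    Phi = np.divide(e[:, None] - e[None, :], dg,
                    out=np.repeat(-beta * e[:, None], d, axis=1),
                    where=~np.isclose(dg, 0.0))
    return T @ (Phi * (T.conj().T @ Hk @ T)) @ T.conj().T

lam, eps = 0.3, 1e-6
fd = (mexp(H0 + (lam + eps) * Hk) - mexp(H0 + (lam - eps) * Hk)) / (2 * eps)
err = np.abs(d_mexp(lam) - fd).max()
print(err < 1e-6)   # True
```

The analytic expression is exact for nondegenerate spectra, while the central difference carries \(O(\epsilon^{2})\) truncation error, so the two agree to high precision.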

### Appendix 2: Calculation of FIM for ground states

The FIM when the system is in its ground state, \(\vert {\psi _{\mathrm {g}}} \rangle \), can also be obtained in an analytical manner. We must calculate

where \(\varrho_{\mathrm{gs}} \equiv \vert {\psi _{\mathrm {g}}} \rangle \langle{ \psi _{\mathrm {g}}}\vert \). For a Hamiltonian with a simple (non-degenerate) minimum eigenvalue, the minimum eigenvalue and the associated eigenvector are infinitely differentiable in a neighborhood of *H*, and their differentials at \(H({{\lambda }})\) are [27]

and

where ^{+} denotes the Moore-Penrose (MP) pseudoinverse. We then obtain

and therefore,

The matrix of partial derivatives, *V*, can then be written in compact matrix form as:

These analytical expressions for the derivatives for thermal and ground states are faster and more numerically stable to evaluate than finite-difference approximations.
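A minimal numerical check of the pseudoinverse formula: we compare the analytic derivative of the ground-state projector \(\varrho_{\mathrm{gs}}\) with a central finite difference, on a random nondegenerate test Hamiltonian of our own choosing (not a model from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
A = rng.normal(size=(d, d))
H0 = np.diag(np.arange(d, dtype=float))      # fixed, well-gapped spectrum
Hk = 0.1 * (A + A.T) / 2                     # small symmetric perturbation direction

def gs_proj(lam):
    """Ground-state projector |psi><psi| of H(lam) = H0 + lam*Hk."""
    g, T = np.linalg.eigh(H0 + lam * Hk)
    v = T[:, 0]
    return np.outer(v, v)

def d_gs_proj(lam):
    """Analytic derivative via the Moore-Penrose pseudoinverse:
    d|psi>/dlam = (E0 I - H)^+ (dH/dlam) |psi>."""
    H = H0 + lam * Hk
    g, T = np.linalg.eigh(H)
    v, E0 = T[:, 0], g[0]
    dv = np.linalg.pinv(E0 * np.eye(d) - H) @ (Hk @ v)
    return np.outer(dv, v) + np.outer(v, dv)

lam, eps = 0.5, 1e-6
fd = (gs_proj(lam + eps) - gs_proj(lam - eps)) / (2 * eps)
err = np.abs(d_gs_proj(lam) - fd).max()
print(err < 1e-6)   # True
```

Comparing projectors rather than state vectors sidesteps the arbitrary phase (sign) returned by the eigensolver; the pseudoinverse automatically removes the component along \(\vert\psi_{\mathrm{g}}\rangle\), which is the standard gauge choice of first-order perturbation theory.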

### Appendix 3: FIM and model symmetries

In the main text, we stated that if a quantum simulation model has a symmetry transformation that relates \(H_{k}\) and \(H_{j}\), then

This has consequences for the rank of the FIM for the model.

To prove the above, we start with the explicit expressions for the partial derivatives under thermal states, given in Eq. (9). The two *k* dependent quantities in this expression can be written, using Eq. (10) as:

Then suppose the quantum simulation possesses a symmetry with unitary representation (we assume the symmetry group is compact) \(\{U_{g}\}_{g}\), in which case \([U_{g},H({{\lambda }})] = [U_{g},O] = 0\) for all *g*. Furthermore, given the decomposition of the observable, \([U_{g}, P_{m}] = 0, \forall g,m\). Now, suppose the symmetry maps \(H_{j}\) to \(H_{k}\), meaning \(H_{k} = U_{g} H_{j} U_{g}^{\dagger}\), then using the commutation properties stated above,

Also,

Therefore, all *k*-dependent terms in Eq. (9) are the same if we exchange *k* with *j*, and hence we arrive at Eq. (16) for thermal states.

To prove the same property when the system is in its ground state, we turn to the expression for the partial derivatives given in Eq. (15):

Since \([U_{g},H({{\lambda }})]=0\), and both of these operators are normal, they share an eigenbasis, implying \([U_{g}, \varrho_{\mathrm{gs}}]=0\). Therefore,

Using \([U_{g},H({{\lambda }})]=0\), it is easy to verify that \(U_{g}(E_{0}-H({{\lambda }}))^{+}U_{g}^{\dagger} \) is also the MP pseudoinverse of \(E_{0} I-H({{\lambda }})\), and from the uniqueness of MP pseudoinverse, we have that

From this equality and Eq. (18), Eq. (16) follows for ground states as well.

### Appendix 4: Structure of the eigenvectors of *F*

As discussed in the main text, spatial symmetries of a quantum simulation model render some rows of the matrix *V* equal. Here we show that this induces a certain structure on the Fisher information matrix (FIM), namely that the corresponding entries of each eigenvector of *F* are equal.

Without loss of generality, we assume that *V* can be written as

where \(\mathbf{1}_{k}\) is a column vector with dimension \(n_{k}\) and all entries being 1, and \(v_{k}^{\mathsf{T}}\) are pairwise distinct row vectors. As a result,

Let

and let \(p^{\mathsf{T}}= [ p_{1} \ \cdots\ p_{s} ] \) be an eigenvector of *MD* with eigenvalue *α*. Then

Therefore, \([ p_{1}\mathbf{1}_{1}^{\mathsf{T}} \ p_{2}\mathbf{1}_{2}^{\mathsf{T}}\ \cdots\ p_{s}\mathbf{1}_{s}^{\mathsf{T}} ]^{\mathsf{T}}\) is an eigenvector of *F*. From Eq. (20), we know that the rank of *V* is *s*, and thus the ranks of *M* and *F* are both *s*. Hence, all the eigenvectors of *F* can be written in the form \([ p_{1}\mathbf{1}_{1}^{\mathsf{T}} \ p_{2}\mathbf{1}_{2}^{\mathsf{T}} \ \cdots\ p_{s}\mathbf{1}_{s}^{\mathsf{T}}] ^{\mathsf{T}}\), that is, they have the same structure of repeated entries as *V* in Eq. (20).
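The essential step can be summarized in one line. Assuming the FIM takes the form \(F = VDV^{\mathsf{T}}\) with *D* diagonal and positive (*e.g.*, \(D = \operatorname{diag}(1/p_{m})\) for outcome probabilities \(p_{m}\)), equal rows of *V* produce equal rows and columns of *F*, so for any eigenvector with nonzero eigenvalue the corresponding entries must coincide:

$$V_{i\cdot} = V_{j\cdot} \;\Rightarrow\; F_{i\cdot} = F_{j\cdot}, \qquad Fp = \alpha p,\ \alpha\neq 0 \;\Rightarrow\; \alpha p_{i} = (Fp)_{i} = (Fp)_{j} = \alpha p_{j} \;\Rightarrow\; p_{i} = p_{j}.$$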

### Appendix 5: Robustness at high temperature

We will show that in the limit of high temperature, the FIM approaches 0 at the rate of \(\beta^{2}\). For simplicity, we consider an *n*-qubit system. From Appendix 1, we know that when the system is in a thermal state \(\varrho({\lambda})={e^{-\beta H({{\lambda }})}}/{\mathcal{Z}}\), we have

where \(\mathcal{Z}=\operatorname{tr}e^{-\beta H({{\lambda }})}\). In the high-temperature limit, \(\beta\rightarrow0\), we expand to first order

to obtain

and

Further, using this approximation and ignoring higher order terms in *β*, we get

and

Combining these two equations, we have

where

Define a matrix *U* whose *km*-th element is \(u_{km}\). Then \(F=\beta^{2} U\Lambda^{-1}U^{\dagger}\). Hence, as \(\beta\rightarrow0\), the FIM approaches the zero matrix as \(\beta^{2}\) and thus the quantum simulation is robust. Furthermore, \(U\Lambda^{-1}U^{\dagger}\) is a constant matrix that is independent of the system parameters, which indicates that at high temperature the quantum simulation is completely insensitive to the nominal values of the underlying parameters.
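The \(\beta^{2}\) scaling is easy to observe numerically. The sketch below uses a single-qubit toy model \(H = B\sigma_{x} + J\sigma_{z}\) with observable \(\sigma_{z}\) (our own choice for illustration, not a model from the paper) and checks that doubling *β* quadruples the largest FIM eigenvalue in the high-temperature regime.

```python
import numpy as np

X, Z = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def probs(lam, beta):
    """sigma_z outcome probabilities in the thermal state of H = B*X + J*Z."""
    B, J = lam
    g, T = np.linalg.eigh(B * X + J * Z)
    rho = (T * np.exp(-beta * g)) @ T.T
    return np.diag(rho) / np.trace(rho)

def fim(lam, beta, eps=1e-6):
    """Finite-difference FIM of the two-outcome distribution."""
    p0 = probs(lam, beta)
    V = np.array([(probs(lam + eps * ek, beta) - probs(lam - eps * ek, beta)) / (2 * eps)
                  for ek in np.eye(2)])
    return (V / p0) @ V.T

lam0 = np.array([0.45, 1.0])
f1 = np.linalg.eigvalsh(fim(lam0, 1e-3)).max()
f2 = np.linalg.eigvalsh(fim(lam0, 2e-3)).max()
print(f2 / f1)   # ~4: doubling beta quadruples the FIM in the high-T limit
```

Since \(F \approx \beta^{2} U\Lambda^{-1}U^{\dagger}\), halving the inverse temperature should reduce every FIM eigenvalue by a factor of four, which is what the ratio above confirms.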

### Appendix 6: Computational aspects for the 1D transverse field Ising model

The 1D transverse field Ising model (1D-TFIM) has a well-known mapping to a free-fermion system [20, 21], and thus is efficiently solvable. We use these efficient solutions in order to present results for large *n* versions of this model. In this section we explicitly demonstrate how the free fermion mapping can be used to calculate the probability distribution of the observables examined in the main text for this model. In the following we present calculations for the open boundary condition case for this model, but similar results hold for the periodic boundary condition also.

### 6.1 Net magnetization distribution for the 1D-TFIM

Recall that the Hamiltonian for the 1D-TFIM is given by

Consider the observable \(S_{z}=\sum_{j=1}^{n} \sigma_{z}^{j} = \sum_{m} \theta _{m} P_{m}\), where in the second equality we have decomposed the observable as a sum of projectors. We wish to compute \(p_{m}=\operatorname {tr}(P_{m}\varrho)\), and we use a two-step procedure to calculate this quantity. First, we express each \(P_{m}\) as a linear combination of \(\{S_{1}, \ldots, S_{n}\}\):

where

Second, we calculate the expectation values of \(S_{j}\), *i.e.*, \(\langle S_{j}\rangle=\operatorname {tr}(S_{j} \varrho)\). Combining these two steps, we have

We now elaborate on the details of these two steps. First, we express \(P_{m}\) in terms of \(S_{j}\). The projector \(P_{m}\) can be written as

where \(|\kappa_{j}\rangle\) is a state with \(m-1\) spins in the ground state \(|0\rangle\) and \(n-m+1\) spins in the excited state \(|1\rangle\), and \(N_{m}={n \choose m-1}\). For simplicity, we use the case \(m=2\) to illustrate the approach. In this case, we have

Since \(|0\rangle\langle0|={I/2}+\sigma_{z}\) and \(|1\rangle\langle 1|={I/2}-\sigma _{z}\), we have

Eq. (38) can be rewritten as

To find the coefficients \(\xi_{mj}\), we replace \({I^{\otimes n}/2}\) by \(\frac{1}{2}\) and \(\sigma_{z}^{j}\) by a scalar variable \(x_{j}\) in Eq. (39) and obtain the following polynomial:

The polynomial \(p_{2}\) is symmetric and thus can be represented by elementary symmetric polynomials \(s_{j}\):

The coefficients to represent \(P_{2}\) in terms of \(S_{j}\) are identical to those that represent \(p_{2}\) in terms of \(s_{j}\), that is,

In fact, to obtain \(\xi_{mj}\), we can choose all the variables \(x_{j}\) to be the same *x*. Then, we have

Equating the coefficients on both sides of Eq. (43), we obtain \(\xi_{mj}\).
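The existence of this expansion can be verified numerically: since the \(\sigma_{z}^{i}\) commute and are diagonal, every magnetization-sector projector is an exact linear combination of the elementary symmetric polynomials \(\{I, S_{1}, \ldots, S_{n}\}\). The sketch below uses our own indexing (sectors labeled by the magnetization eigenvalue, Pauli eigenvalues ±1 rather than the paper's spin-\(1/2\) convention), solves for the coefficients \(\xi_{mj}\) by least squares, and checks that the residual vanishes.

```python
import itertools
import numpy as np

n = 3
# Diagonal of sigma_z^i over the 2^n computational basis states (eigenvalues +-1)
sz = np.array([[1 - 2 * ((b >> i) & 1) for b in range(2 ** n)] for i in range(n)])

# S_j = j-th elementary symmetric polynomial in the commuting sigma_z^i (S_0 = I)
S = [np.ones(2 ** n)]
for j in range(1, n + 1):
    S.append(sum(np.prod(sz[list(c)], axis=0)
                 for c in itertools.combinations(range(n), j)))
S = np.array(S)                                  # shape (n+1) x 2^n

# Each magnetization-sector projector P_m is an exact combination of the S_j
mag, worst = sz.sum(axis=0), 0.0
for m in sorted(set(mag)):
    P = (mag == m).astype(float)                 # diagonal of the sector projector
    xi, *_ = np.linalg.lstsq(S.T, P, rcond=None)
    worst = max(worst, np.abs(S.T @ xi - P).max())
print(worst < 1e-9)   # True: the expansion P_m = sum_j xi_mj S_j is exact
```

The expansion is exact because the symmetric functions on \(\{\pm1\}^{n}\) form an \((n+1)\)-dimensional space that the \(n+1\) operators \(\{I, S_{1}, \ldots, S_{n}\}\) span.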

Next we show how to compute \(\langle S_{j}\rangle\). From Refs. [20, 21], we define two matrices *P* and *Q* as

Let \(\phi_{k}^{T}\) be a normalized row eigenvector of \((P-Q)(P+Q)\), *i.e.*, \(\phi_{k}^{T} (P-Q)(P+Q)=\Lambda_{k}^{2} \phi_{k}^{T}\). Let \(\psi_{k}^{T}=-\Lambda_{k}^{-1}\phi_{k}^{T}(P-Q)\). Stack the row vectors \(\phi_{k}^{T}\) and \(\psi_{k}^{T}\) into two matrices Φ and Ψ. For the calculation of ground state, we define

and for the thermal state, we let

From Wick’s theorem and Ref. [21], we know that \(\langle S_{j}\rangle\) is the sum of all the *j*-by-*j* principal minors of *G*. Moreover, from Ref. [28], we have

Hence we can determine \(\langle S_{j}\rangle\) by calculating the characteristic polynomial of *G*. With these two steps, we can now obtain \(p_{m}\).
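The relation from Ref. [28] used here, that the sums of principal minors are (up to sign) the coefficients of the characteristic polynomial, can be checked directly on a random matrix:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(5, 5))

def principal_minor_sum(G, j):
    """Brute-force sum of all j-by-j principal minors of G."""
    idx = range(G.shape[0])
    return sum(np.linalg.det(G[np.ix_(c, c)]) for c in itertools.combinations(idx, j))

# det(xI - G) = sum_j (-1)^j E_j x^{n-j}, where E_j is the j-th principal-minor sum
coeffs = np.poly(G)
ok = all(np.isclose(principal_minor_sum(G, j), (-1) ** j * coeffs[j])
         for j in range(1, 6))
print(ok)   # True
```

Computing the characteristic polynomial costs \(O(n^{3})\), whereas summing the minors directly is exponential in *j*, which is why the characteristic-polynomial route makes \(\langle S_{j}\rangle\) tractable at large *n*.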

### 6.2 Correlation function distribution for the 1D-TFIM

When the observable is the correlation function \(C_{z}(i,j)=\sigma_{z}^{i} \sigma_{z}^{j}\), we know from Eq. (2.33c) in Ref. [21] that for the ground state,

and for the thermal state,

where \(G^{g}\) and \(G^{t}\) are defined in Eqs. (45) and (46), respectively.

We now analytically calculate the FIM for the ground state. Since \(\sigma_{z}^{i}\sigma_{z}^{j}\) has the two eigenvalues \(\pm\frac{1}{4}\), we obtain that for the ground state,

Then

We now derive \(dG^{g}/d\lambda_{l}\). Since \(G^{g}=\Psi^{T} \Phi\), we have

The matrix \((P-Q)(P+Q)\) is simple, meaning that it has pairwise distinct eigenvalues. Then its eigenvalues and the associated eigenvectors are infinitely differentiable in a neighborhood of \(H({{\lambda }})\), and their differentials are

where ^{+} denotes the Moore-Penrose pseudoinverse. From the definition of *P* and *Q* in Eq. (44), it is straightforward to derive \(dP/d\lambda_{l}\) and \(dQ/d\lambda_{l}\) and thus

Moreover, we have that

where

Combining these equations, we can calculate \(dp_{1}/d\lambda_{l}\) and \(dp_{2}/d\lambda_{l}\) for ground state analytically. For thermal states, we just need to calculate an additional derivative of \(\tanh(\frac{\beta}{2} \Lambda)\) in \(G^{t}\) and can obtain the results similarly.

When the observables are \(\sigma_{x}^{i}\sigma_{x}^{j}\) and \(\sigma_{y}^{i}\sigma_{y}^{j}\), their mean values can be obtained from Eqs. (2.33a) and (2.33b) in Ref. [21]. Following similar procedures as above, we can derive analytical expressions for derivatives of the measurement probabilities.

### Appendix 7: Robustness of more quantum simulation models

In this section we report the behavior of the FIM for some quantum simulation models that were not included in the main text for conciseness.

### 7.1 2D transverse field Ising model

In the main text we demonstrated how symmetry analysis of the 2D-TFIM with open boundary conditions and net magnetization as the observable enables one to determine the rank of the FIM for this model, and showed that it is sloppy. For more details on the symmetry analysis for this model, see Appendix 8. Here in Figure 8, we explicitly present the eigenvalues and eigenvectors of the FIM for a \(3\times3\) square lattice version of this model. It is evident from Figure 8(b) that the FIM eigenvalues agree with the rank bound (\(\operatorname{rank} \leq5\)) derived from symmetry. Furthermore, Figures 8(c) and (d) show that the forms of the influential CPDs respect the symmetry of the model.

### 7.2 1D random Ising model

To examine a model with disorder, consider the 1D transverse field Ising model with random local fields and coupling energies, *i.e.*,

with periodic boundary conditions (\(\sigma_{x}^{n+1}\equiv\sigma _{x}^{1}\)), and \(B_{i}^{0} = B^{0} + \delta B_{i}, J_{i}^{0} = J^{0} + \delta J_{i}\), where \(\delta B_{i}\) and \(\delta J_{i}\) are independent zero-mean Gaussian random variables with standard deviation *σ*. As for the observable of interest, consider the net magnetization \(S_{z}\) again. This quantum simulation model has no symmetries due to the random parameters and so the FIM rank bounds based on symmetry are not informative. The number of measurement outcomes for this observable is \(M=n+1\), and therefore the rank of the FIM is at most *n*. In Figure 9(a) we show the eigenvalues of the FIM for a 10-spin example of this quantum simulation model, with \(J^{0}=1\), disorder standard deviation \(\sigma=0.2\), and \(\beta=10\). This figure shows the FIM eigenvalues for one representative sample of \(\delta B_{i}\) and \(\delta J_{i}\). As evident from this figure, while the dominant eigenvalue is roughly two orders of magnitude above all others, this model cannot be considered sloppy except for small or large values of \(B^{0}\). In Figure 9(b) we also show the form of the first influential CPD (we do not label the points on this plot since we only wish to illustrate the complexity of the behavior of this quantity for this model).

### 7.3 \(J_{1}\)-\(J_{2}\) antiferromagnetic Heisenberg model

Now we turn to a quantum simulation model based on a Hamiltonian that contains non-nearest-neighbor interactions and geometric frustration. The \(J_{1}\)-\(J_{2}\) antiferromagnetic Heisenberg model is defined by the following Hamiltonian governing spin-\(1/2\) systems on a two-dimensional lattice:

where the first sum is over nearest-neighbor spins and the second is over next-nearest-neighbor spins. We are interested in the uniform nominal operating point for this model where \(J_{ij}^{0} = J^{0}\) and \(K_{ij}^{0} = K^{0}\) with \(J^{0}\), \(K^{0}>0\) (see Note 5). Figure 10 shows a single plaquette in the square lattice in the nominal model.

The magnetic order in this system is complex, with different phases of magnetic ordering being driven by competition between the two different kinds of interactions. The magnetic order parameter is different in different \(K^{0}/J^{0}\) regimes. For small values of this ratio (∼0) the magnetization is Néel ordered (the model resembles a conventional Heisenberg antiferromagnet on a square lattice in this regime), and as this ratio approaches unity one has so-called “striped magnetization” [29]. Our observable of interest is the staggered magnetization, which probes the Néel order in the system:

where *n* is the total number of spins in the system.

The quantum simulation model \(\{H_{5}, M_{s}\}\) with open boundary conditions on the lattice has several symmetries despite the complicated form of the observable of interest. For square lattices, this model has rotational symmetry about the center of the lattice and reflection symmetry about four reflection lines. In Figure 11(a) we explicitly show the symmetries in this model for a \(3\times3\) square lattice. Note that since *n* is odd, all these symmetry transformations take odd (even) labeled spins to odd (even) labeled spins, and hence leave the observable of interest invariant. From this symmetry analysis, we obtain a rank bound on the FIM of \(\operatorname {rank}(F)\leq4\). Figure 11(b) shows the eigenvalues of the FIM for this \(3\times3\) example for \(\beta=10\) and \(\beta=1\), and it is clear that the rank bound is respected. Finally, Figure 11(c) shows the primary influential CPD for this model when \(\beta=10\). The first four eigenvectors of the FIM all define influential CPDs since the first four eigenvalues are non-negligible. We only plot the primary influential CPD here for simplicity, but all the others have the same symmetry properties.

### Appendix 8: Examples of model symmetries and representations

Here we explicitly construct representations of symmetry groups for two quantum simulation models analyzed in the main text. These representations acting on the Hilbert space of the model can be constructed from elementary SWAP operations.

First consider the 1D transverse-field Ising model (1D-TFIM) with periodic boundary conditions \(\{H_{1}^{\mathrm{per}}, S_{z}\}\) as discussed in the main text.

This model is translationally invariant and therefore its symmetry group *G* is defined as

where

and \(U_{jk}\) is the SWAP operation between two nodes *j* and *k*, *i.e.*,

It is easy to verify that

where \({{w}}=x\), *y*, or *z*. For \(m< n\), we have

Therefore, we can obtain

and

For any *g*, we have that

where \(j-g\) is computed modulo *n*. Then, since the ideal Hamiltonian for the model has identical nominal parameters (\(B_{i}^{0} = B^{0}, J_{i}^{0} = J^{0}\)), we have that \(U^{g} H (U^{g})^{\dagger}=H\); and furthermore, \(U^{g}O(U^{g})^{\dagger}=O\). From Eq. (60) and the discussion in Appendix 3, we know that

We can thus write *V* as

where \(\mathbf{1}= [ 1\ \cdots\ 1] ^{T}\in \mathbb{R}^{n}\), *a*, \(b\in \mathbb{R}^{n}\), and the FIM can be written as

From this form it is evident that \(\operatorname {rank}F=2\), and the two eigenvectors with nonzero eigenvalues are

and

Hence the influential composite parameter deviations take the form \(\sum_{i}\Delta B_{i}\) and \(\sum_{i}\Delta J_{i}\).

Next consider a 2D-TFIM on a square lattice with open boundary conditions. A more explicit form of the Hamiltonian for this model than the one given in the main text is:

where \((j_{1},j_{2})\) denotes the Cartesian coordinate for a node, *e.g.*,

The quantum simulation model we consider is \(\{H_{3}, S_{z}\}\), and thus the observable has complete translational symmetry. For the nominal values of the parameters for this quantum simulation model, we only require that parameters related by symmetry with respect to the *x*- or *y*-axis be equal, *i.e.*,

In this case the quantum simulation model has reflection symmetry about the *x*- and *y*-axes and 90^{∘} rotation symmetry. The generators of the symmetry group are \(\{I, U_{x}, U_{y}, U_{R}\}\), where

where \(M_{(j_{1}, j_{2})}^{(k_{1}, k_{2})}\) is the SWAP operation between two nodes \((j_{1}, j_{2})\) and \((k_{1}, k_{2})\):

When this operator is applied to local terms, we have

where \({w}=x\), *y*, or *z*. Note that \(U_{x}\) flips \(\sigma_{{w}}^{(m_{1},m_{2})}\) with respect to the *x*-axis, and \(U_{y}\) flips it with respect to the *y*-axis. The operator \(U_{R}\) is the product of three rotations from quadrant I to II, II to III, and III to IV, and then it rotates \(\sigma_{{w}}^{(m_{1},m_{2})}\) by 90^{∘} clockwise. Hence \(U_{x}\), \(U_{y}\), and \(U_{R}\) commute with both \(H({{\lambda }})\) and *O*. From the discussion in Appendix 3, we obtain

and

We know that all the nodes and couplings that are mirror images of each other with respect to the horizontal or vertical axes, or images under 90^{∘}, 180^{∘}, or 270^{∘} rotations, have identical rows in the FIM.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Sarovar, M., Zhang, J. & Zeng, L. Reliability of analog quantum simulation.
*EPJ Quantum Technol.* **4**, 1 (2017). doi:10.1140/epjqt/s40507-016-0054-4

### Keywords

- Transverse Magnetization
- Fisher Information Matrix
- Parameter Deviation
- Quantum Critical Point
- Dominant Eigenvalue