Efficient excitation-transfer across fully connected networks via local-energy optimization

Abstract

We study the excitation transfer across a fully connected quantum network whose site energies can be artificially designed. Starting from a simplified model of a broadly studied physical system, we systematically optimize its local energies to achieve high excitation transfer for various environmental conditions, using an adaptive Gradient-Descent technique and Automatic Differentiation. We show that almost perfect transfer can be achieved with and without local dephasing, provided that the dephasing rates are not too large. We investigate our solutions in terms of resilience against variations in either the network connection strengths or size, as well as coherence losses. We highlight the different features of dephasing-free and dephasing-driven transfer. Our work gives further insight into the interplay between coherence and dephasing effects in excitation-transfer phenomena across fully connected quantum networks. In turn, this will help design optimal transfer in artificial open networks through the simple manipulation of local energies.

1 Introduction

Boosted by the unprecedented interest towards quantum information technologies, the study of the properties of complex networks in the quantum domain has received a great deal of attention, due to its broad range of applicability [1, 2]. Several theoretical studies have indeed shown that modeling complex quantum phenomena in terms of simple quantum systems is not sufficient to capture a plethora of interesting problems belonging to different fields, ranging—just to name a few—from quantum communication [3, 4] and transport phenomena in nanostructures [5, 6] to quantum biology [7, 8].

Despite their apparent differences, such phenomena face similar theoretical challenges. On one hand, they require a deeper understanding of the role played by the geometry and topology in the properties of the network itself, as well as its optimal functionality. On the other hand, while exploring quantum dynamical processes in complex networks, it is crucial to assess whether or not genuine quantum features, such as non-classical correlations [9], or genuine quantum processes, such as decoherence [10], may influence the transport properties of a given complex network. The significance of these theoretical studies is essentially twofold: they constitute an attempt to unveil the potential benefits offered by quantum resources, while paving the way towards a better understanding of the way quantum complex networks can be realised in practice.

In this work, we address the problem of identifying a network configuration compatible with optimal transport performance, including the effect of non-trivial interactions between the complex system and an external environment. These instances call for an extensive use of the open quantum systems formalism, the latter being able to effectively describe and interpret the dynamical evolution of a system undergoing irreversible processes such as dissipation and dephasing, which result from the interaction with the environment [11–13].

However, attacking this multifaceted issue from the most general standpoint would be a formidable task. We therefore focus on a specific model of open quantum network whose features have been extensively studied. The latter has been used to effectively describe the phenomenological dynamics of the Fenna–Matthews–Olson (FMO) protein complex [14, 15]. This complex plays a pivotal role in the light-harvesting process of green-sulphur bacteria: it mediates the highly efficient transfer of excitations from large antenna structures to reaction centres [7, 15, 16]. The dynamics of such a complex has been modeled and thoroughly studied resorting to the open quantum system paradigm in a series of seminal papers [17–19]. In particular, in this work we refer to Ref. [18], where the FMO complex dynamics is described by a network simultaneously undergoing a Hamiltonian dynamics, accounting for the coherent exchange of excitations between the network sites, along with dephasing and dissipative Lindbladian dynamics, leading instead to loss of coherence and excitations. Interestingly, the dynamics exhibits a behaviour which, to some extent, seems counterintuitive: unlike a classical random walk model, whenever one studies the fully Hamiltonian dynamics (i.e., in the absence of dephasing and dissipation), the transport through the network can be inhibited as a consequence of destructive interference between sites [18]. Such destructive interference can be suppressed by either adding local static disorder—eventually leading to perfect excitation transfer in the limit of random local energies—or adding local dephasing noise. The beneficial role of static disorder is in apparent contrast with the celebrated Anderson localization, according to which random disorder is responsible for the inhibition of fully coherent transfer [20, 21]. The effects of local dephasing mechanisms, instead, clearly show that this is a relevant example of environment-assisted transport [17, 22, 23]: contrary to expectations, the effect of dephasing is not necessarily detrimental to the performance of transport, which can instead be enhanced under certain conditions [17, 18].

Our work systematically explores the interplay between optimal transport and different instances of dephasing noise affecting the system’s coherence. More specifically, we focus on the model introduced in Ref. [18] to describe the FMO complex, where the latter is represented by a \(N=7\)-site fully connected network (FCN), i.e., a network where each site is connected to any other one. We find the optimal distribution of local on-site energies resulting in the best population transfer—constrained by the interaction strengths as gathered from experiments [24]—under different dephasing conditions.

Focusing on the local energies without changing the interaction strengths greatly simplifies both the numerical optimization and the practical implementation of the resulting network. Furthermore, in similar systems, there is evidence that the excitation transport efficiency is more susceptible to changes in the site energies than in the connection strengths [25]. We also assess how robust the transfer performance is against changes in the network configuration. In particular, for fixed N, we perform arbitrary changes in the network connectivity, and also change either the initial site, where the excitation is injected, or the final site, from which the excitation is extracted from the network (the so-called sink). In all these cases we find that the presence of even a moderate local dephasing noise makes the transport properties quite robust against arbitrary changes in the network properties.

We then go beyond the prescriptions imposed by experimental data and perform the optimization step by choosing random (albeit fixed) coupling strengths. This is consistent with the aim of this work, which, we stress, is not to ascertain how effective the FCN representation of Ref. [18] is at correctly reproducing the FMO phenomenology. Rather, we would like to address the potential to improve the transport performance of a given network, whose architecture is well justified by experimental evidence, by exploiting its quantum features.

The remainder of the paper is organised as follows. In Sect. 2, we describe our model of an open N-site FCN, whose dynamics is affected by Markovian dissipation and dephasing. In Sect. 3, we give more details about the methodology used throughout the paper. In particular, we discuss the standard vectorization procedure for the Markovian master equation, used both for numerically simulating the system dynamics and for arranging the parameters over which the optimization is performed in a suitable way. Using an adaptive gradient-descent technique, we run the numerical simulations whose results and analysis are given in Sect. 4. Focusing on the case of a network made of \(N=7\) sites, we thoroughly study its performance by optimizing the local energies, while the couplings between the network sites are given. In the same Section, we discuss the resilience of the network against changes in the network configuration. We finally draw our conclusions in Sect. 5, where possible future directions are also discussed.

2 Description of the model

Following Ref. [18], we consider a FCN made of N sites—Cf. Fig. 1. We assume that, together with the Hamiltonian dynamics, the system is affected by two different noise mechanisms: local pure dephasing, which destroys the coherence of any superposition of states, and local spontaneous emission, causing the network to irreversibly transfer excitations from one site to the environment. We further assume that one excitation at a time can be transferred across the network, i.e., we work in the single-excitation subspace. This assumption reduces the complexity of the problem, as we scale down the Hilbert space dimension from \(2^{N}\) to N, while still exhibiting interesting physics. In order to fix the notation, let us introduce the basis \(\{ \vert n \rangle \}\) (\(n=1, \ldots, N\)). In this basis, the unitary dynamics of the FCN is captured by an \(N\times N\) Hamiltonian containing the energies associated with each site, as well as the couplings between them. For purposes that will become clear in the following Sections, the system Hamiltonian can be decomposed as

$$ H=H_{D}+H_{I}, $$
(1)

where \(H_{D}\) is the diagonal part containing all the on-site energies, while \(H_{I}\) contains the coupling between any two sites of the network. The former can be decomposed as

$$ H_{D}=\sum_{n} h_{n} \vert n\rangle \langle n \vert , $$
(2)

where \(h_{n}\) is the energy associated to the n-th site, while the latter is given by

$$ H_{I} = \sum_{m,n} J_{mn} \vert m \rangle \langle n \vert , $$
(3)

which, in the language of graph theory, represents the so-called adjacency matrix [26]. Note that, as we are dealing with a FCN, \(H_{I}\) is not a sparse matrix, meaning that, in general, we have \(J_{mn}\neq 0\) for any \(m \neq n\).

Figure 1

Sketch of the physical situation investigated in this paper. We consider a fully connected network comprising N sites. The excitation is initially injected in one site of the network (darker green), while a different site is attached to a sink into which the excitation is transferred (at a rate \(\Gamma _{s}\)). The generic n-th site of the network is locally affected by dephasing noise (at a rate \(\gamma _{n}\)) and spontaneous emission, the latter causing the excitation to be irreversibly lost at rate \(\Gamma _{n}\). By optimizing the set of local energies \(h_{n}\), we systematically study the efficiency of excitation-transfer to the sink under different dephasing conditions

The whole picture is completed by introducing two auxiliary sites to the network: one collecting the excitations that are irreversibly lost through spontaneous emission, and another to which excitations are transferred, mimicking the reaction centre in photosynthetic complexes such as the FMO complex. The latter site is named, from now on, the sink. Owing to such extra sites, we are actually working with an \((N+2)\)-dimensional Hilbert space; therefore we complete our basis by introducing \(\vert 0 \rangle \) and \(\vert s \rangle \), which identify the aforementioned extra sites, respectively.

Resorting to this notation, the pure dephasing process is formally described by local Lindblad operators of the form

$$ L_{\gamma _{n}}=\sqrt{\gamma _{n}} \vert n \rangle \langle n \vert , $$
(4)

where \(\gamma _{n}\) is the dephasing rate. In contrast, the spontaneous emission processes are modeled through the set of Lindblad operators

$$ L_{\Gamma _{n}}=\sqrt{\Gamma _{n}} \vert 0 \rangle \langle n \vert , $$
(5)

where \(\Gamma _{n}\) is the rate with which the excitation is lost in the local environment. As we said earlier, we introduce a sink, where the excitation travelling through the network is transferred with a rate \(\Gamma _{s}\). Similarly to Equation (5), this process is physically modeled as a spontaneous emission, whose associated Lindblad operator reads

$$ L_{\Gamma _{s}}=\sqrt{\Gamma _{s}} \vert s\rangle \langle m \vert , $$
(6)

\(\vert m \rangle \) being a given site of the network, i.e., \(m \in \{1, \ldots, N\}\). Note that the latter ensures that population is irreversibly transferred to the sink once the target site \(\vert m \rangle \) is reached.

We assume that our system undergoes a fully Markovian irreversible dynamics, therefore the corresponding Lindblad master equation reads

$$\begin{aligned} \frac{d\rho}{dt}= {}& {-}\frac{i}{\hbar}[H, \rho ] \\ &{} + \frac{1}{2} \sum_{ \substack{\mu = \{\gamma _{n}\}, \\ \{\Gamma _{n}\}, \\ \Gamma _{s}}} \bigl(2 L_{\mu}\rho L^{\dagger}_{\mu}-L^{\dagger}_{\mu }L_{\mu } \rho -\rho L^{\dagger}_{\mu }L_{\mu} \bigr), \end{aligned}$$
(7)

where H is suitably defined over the enlarged \((N+2)\)-dimensional Hilbert space, while the sums over \(\gamma _{n}\) and \(\Gamma _{n}\) are meant to run over all the possible values of \(n= 1, \ldots, N\). Note that the case of a uniform network, i.e., when \(h_{n}, J_{mn}, \gamma _{n}, \Gamma _{n}\) are all equal for any value of n, can be solved analytically—Cf. Appendix A in Ref. [18]. In a more general setting, one can include temperature and memory effects by replacing Equation (7) with a more general master equation, as done, e.g., in Ref. [19] using the numerically exact Time Evolving Density with Orthogonal Polynomial Algorithm (TEDOPA) [27–29]. However, including these effects is beyond the scope of this work, so we restrict ourselves to the Markovian master equation in Eq. (7). Finally, it is worth stressing that we have made the underlying assumption of local environmental mechanisms. This adheres well to a scenario where the nodes of the network are spaced by more than any spatial correlation length of the environment. While this allows us to explicitly bypass the possibility of environment-induced effects in the transport of the excitations to the sink, it matches the situation encountered in simulated networks consisting of matter-like information carriers effectively connected by radiation-based quantum buses and addressed by local potentials that tune their respective local energies. Although identifying a specific arrangement is not among the goals of our investigation, we will implicitly have such an architecture in mind in the remainder of our formal analysis.
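
As an illustration of the model just described, the following minimal sketch (Python/NumPy; the basis ordering, placeholder couplings and rate values are assumptions made for illustration, not the authors' implementation) builds the Hamiltonian of Equations (1)-(3) and the Lindblad operators of Equations (4)-(6) on the enlarged \((N+2)\)-dimensional space.

```python
import numpy as np

N = 7                                     # network sites
dim = N + 2                               # enlarged space: loss site |0>, sites |1>..|N>, sink |s>

def ket(i, d=dim):
    v = np.zeros(d)
    v[i] = 1.0
    return v

# On-site energies h_n (the quantities optimized in Sect. 3) and couplings J_mn.
# The couplings below are placeholders; in practice they are fixed, e.g. to Eq. (16).
h = np.zeros(N)
J = np.random.uniform(-100.0, 100.0, (N, N))
J = 0.5 * (J + J.T)                       # symmetrize
np.fill_diagonal(J, 0.0)

H = np.zeros((dim, dim))
H[1:N + 1, 1:N + 1] = np.diag(h) + J      # H = H_D + H_I, embedded in the enlarged space

Gamma = 5e-4 * np.ones(N)                 # spontaneous-emission rates Gamma_n
Gamma_s = 6.283                           # transfer rate into the sink
gamma = np.ones(N)                        # local dephasing rates gamma_n
m_sink = 3                                # site |m> attached to the sink

L_ops = []
for n in range(1, N + 1):
    L_ops.append(np.sqrt(gamma[n - 1]) * np.outer(ket(n), ket(n)))     # dephasing, Eq. (4)
    L_ops.append(np.sqrt(Gamma[n - 1]) * np.outer(ket(0), ket(n)))     # loss, Eq. (5)
L_ops.append(np.sqrt(Gamma_s) * np.outer(ket(dim - 1), ket(m_sink)))   # sink transfer, Eq. (6)
```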

3 Methods

Given the model introduced in Sect. 2, our goal is to optimize the local on-site energies \(\vec{h} \equiv \{h_{1}, \ldots, h_{N} \}\) in order to further improve the excitation transfer under different dephasing conditions.

First, prior to the optimization problem, we need to solve the system dynamics. To this end, one widely used option is to vectorise Equation (7) [30]. Note that, by construction, the reduced density operator is represented by an \((N+2) \times (N+2)\) Hermitian matrix ρ, which also automatically encodes information about the two extra sites introduced in Sect. 2. Through vectorisation, the density matrix is readily transformed into an \((N+2)^{2}\)-dimensional vector (whence the name of the technique)

$$ \rho \rightarrow \vec{r}=(\rho _{00},\rho _{01},\ldots,\rho _{0 N+1}, \rho _{10},\ldots,\rho _{N+1 N+1}). $$
(8)

Analogously, the unitary part can be remapped according to

$$ [H,\rho ] \rightarrow \mathcal{L}_{U}\vec{r} \equiv \bigl(I\otimes H-H^{T} \otimes I\bigr)\vec{r}, $$
(9)

whereas the dissipative part transforms as

$$\begin{aligned} &L_{\mu}\rho L^{\dagger}_{\mu}- \frac{1}{2}\bigl\{ L^{\dagger}_{\mu }L_{\mu}, \rho \bigr\} \rightarrow \mathcal{L}_{\mu}\vec{r} \\ &\quad =\biggl[\bigl(L^{\dagger}_{\mu}\bigr)^{T}\otimes L_{\mu}-\frac{1}{2}\bigl(I\otimes {L}^{ \dagger}_{\mu} {L}_{\mu}+\bigl({L}^{\dagger}_{\mu} {L}_{\mu} \bigr)^{T}\otimes I\bigr) \biggr]\vec{r}. \end{aligned}$$
(10)

By applying this set of rules, one obtains a first-order differential equation of the form

$$\begin{aligned} \dot{\vec{r}} = -\frac{i}{\hbar}\mathcal{L} \bigl[ \vec{r} (t) \bigr], \end{aligned}$$
(11)

where the full vectorised Lindbladian is given by \(\mathcal{L}\equiv \mathcal{L}_{U} + i \hbar \sum_{\mu }\mathcal{L}_{ \mu}\). Owing to this representation, the system state at a generic time t is formally obtained by exponentiation, i.e.,

$$ \vec{r}(t)=e^{-\frac{i}{\hbar} t \mathcal{L}} \vec{r}(0). $$
(12)

Therefore the sink population is easily obtained by considering the \((N+2)^{2}\)-th component of \(\vec{r}(t)\), i.e., \(r_{s}^{t}=\vec{r}(t)\cdot \vec{s}\), where \(\vec{s}=(0,0,\ldots,0,1)\) is a \((N+2)^{2}\)-dimensional vector. Notice that, after applying the transformation given by Equation (9), we are still able to separate an interaction term \(\mathcal{L}^{I}_{U}\) from a term \(\mathcal{L}^{D}_{U}\) which depends solely on the local energies. The latter can be written as

$$ \mathcal{L}^{D}_{U}=\sum _{n} h_{n} \mathcal{H}_{n}, $$
(13)

with

$$ \mathcal{H}_{n}=\bigl(I\otimes \vert n\rangle \langle n \vert - \vert n\rangle \langle n \vert \otimes I \bigr). $$
(14)
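
A compact sketch of this vectorization recipe (NumPy; not the code used for the paper) is given below. We adopt the column-stacking convention \(\mathrm{vec}(AXB)=(B^{T}\otimes A)\,\mathrm{vec}(X)\), which is consistent with the operator forms in Equations (9) and (10), and set \(\hbar =1\).

```python
import numpy as np
from scipy.linalg import expm

def vectorized_lindbladian(H, L_ops):
    """Matrix L of Eq. (11), i.e. L = L_U + i * sum_mu L_mu (hbar = 1)."""
    d = H.shape[0]
    I = np.eye(d)
    L_U = np.kron(I, H) - np.kron(H.T, I)                  # unitary part, Eq. (9)
    L_D = np.zeros((d * d, d * d), dtype=complex)
    for L in L_ops:                                        # dissipative parts, Eq. (10)
        LdL = L.conj().T @ L
        L_D += np.kron(L.conj(), L) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))
    return L_U + 1j * L_D

def propagate(H, L_ops, rho0, t):
    """Formal solution of Eq. (12): vec(rho(t)) = exp(-i t L) vec(rho(0))."""
    r0 = rho0.astype(complex).flatten(order='F')           # column-stacked vec(rho)
    return expm(-1j * t * vectorized_lindbladian(H, L_ops)) @ r0
```

With this convention the sink population \(r_{s}^{t}\) is read off from the component of the vectorized state corresponding to the diagonal entry \(\rho _{ss}\), i.e., its last component.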

Let us now fix the total evolution time T and prepare the system with an excitation in the n-th site. The goal of optimizing the population transfer can be achieved by minimising the cost function

$$ C(\vec{h})=1-r^{T}_{s}(\vec{h}), $$
(15)

where h⃗ represents the set of parameters over which the optimization is performed.

Using Equations (8) to (14), we are able to obtain the sink population \(r^{T}_{s}\) and calculate its gradient with respect to h⃗. The latter can be done efficiently by using Automatic Differentiation techniques [31]. We can hence minimise \(C(\vec{h})\) using gradient-based techniques, eventually finding the optimal on-site energy configuration \(\vec{h}_{\text{opt}}\). In this work, we chose to use the Root Mean Square Propagation (RMSprop) algorithm [32], an adaptive learning-rate optimization algorithm developed to tackle limitations of stochastic gradient descent in training deep neural networks. It adjusts the learning rate for each parameter, dividing the gradients by an exponentially weighted moving average of their squares in the parameter updates. This aids convergence, speed, and stability. While requiring careful hyper-parameter tuning, RMSprop is a valuable tool for training neural networks, particularly useful for non-stationary objectives and recurrent neural networks. Our choice is based solely on the fact that, in our numerical simulations, we generally observed faster convergence to the solution than with other similar techniques.
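
The resulting optimization loop can be sketched as follows. This is a hedged illustration rather than the actual implementation: the vectorization above is re-expressed with jax.numpy so that the gradient of Equation (15) is obtained by automatic differentiation, and the RMSprop update is written by hand with illustrative hyper-parameters.

```python
import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm

def sink_population(h, H_I_full, L_ops, r0, T):
    """Final sink population r_s^T as a function of the on-site energies h (cf. Eq. (13))."""
    d = H_I_full.shape[0]
    H = H_I_full + jnp.diag(jnp.concatenate([jnp.zeros(1), h, jnp.zeros(1)]))
    I = jnp.eye(d)
    L_U = jnp.kron(I, H) - jnp.kron(H.T, I)
    L_D = sum(jnp.kron(L.conj(), L)
              - 0.5 * (jnp.kron(I, L.conj().T @ L) + jnp.kron((L.conj().T @ L).T, I))
              for L in L_ops)
    r_T = expm(-1j * T * (L_U + 1j * L_D)) @ r0            # Eq. (12), hbar = 1
    return jnp.real(r_T[-1])                               # last component = rho_ss

def cost(h, *args):
    return 1.0 - sink_population(h, *args)                 # cost function of Eq. (15)

def rmsprop_optimize(h0, args, lr=1.0, beta=0.9, eps=1e-8, steps=500):
    """Gradient descent on C(h) with RMSprop-style adaptive learning rates."""
    grad_fn = jax.grad(cost)
    h, v = h0, jnp.zeros_like(h0)
    for _ in range(steps):
        g = grad_fn(h, *args)
        v = beta * v + (1.0 - beta) * g ** 2               # running average of squared gradients
        h = h - lr * g / (jnp.sqrt(v) + eps)               # per-parameter update
    return h
```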

4 Analysis and results

As mentioned above, we start our analysis by considering a specific network made of \(N=7\) sites, which, according to the evidence experimentally gathered in [24], reproduces quite accurately the excitation transfer operated by a FMO complex. In order to perform the optimization, we assume that the couplings between the network sites are those given in Ref. [18], which, in turn, are based on the experimental results given in Ref. [24]. Therefore, the non-diagonal part of the system Hamiltonian is given by

$$ H_{I} = \begin{pmatrix} 0 & -104.1 & 5.1 & -4.3 & 4.7 & -15.1 & -7.8 \\ -104.1 & 0 & 32.6 & 7.1 & 5.4 & 8.3 & 0.8 \\ 5.1 & 32.6 & 0 & -46.8 & 1.0 & -8.1 & 5.1 \\ -4.3 & 7.1 & -46.8 & 0 & -70.7 & -14.7 & -61.5 \\ 4.7 & 5.4 & 1.0 & -70.7 & 0 & 89.7 & -2.5 \\ -15.1 & 8.3 & -8.1 & -14.7 & 89.7 & 0 & 32.7 \\ -7.8 & 0.8 & 5.1 & -61.5 & -2.5 & 32.7 & 0 \end{pmatrix}. $$
(16)

Here and in the following energy values are expressed in units of \(1.2414 \cdot 10^{-4} \ \text{eV}\), while times are in ps, as in Refs. [18, 24].

A few comments about \(H_{I}\) are now in order. Equation (16) is representative of a network with a high level of connectivity—all the off-diagonal entries are non-zero, as we would expect for a FCN where every site is coupled to any other site—where the coupling strengths are evidently site-dependent. We further assume that the sink \(\vert s \rangle \) is attached to the third site, represented by \(\vert 3 \rangle \). This assumption is actually physically motivated in FMO complexes: experimental evidence suggests that the third site is the one coupled with the reaction centre [24]. For the sake of completeness, we mention here that more recent experiments revealed the existence of an 8-th site in the FMO complex [33, 34]; however, for our purposes, we mainly consider the case of \(N=7\) sites, as the largest system, with some results concerning networks of reduced size discussed in Sect. 4.2.

Although our first aim is to optimize the network transport properties over the on-site energies, we will benchmark our numerical findings against those contained in Ref. [18], where the on-site energies are given by \(\vec{h}_{\text{ref}}=(215, 220, 0, 125, 450, 330, 280)\), the decay rates associated with spontaneous emission are \(\Gamma _{n}=\Gamma =5\cdot 10^{-4}\), \(\Gamma _{s}=6.283\), the optimal local dephasing rates are \(\vec{\gamma}_{\text{ref}}=(0.157, 9.432, 7.797, 9.432, 7.797, 0.922, 9.433)\), while the total evolution time is \(T=5\) unless otherwise stated.

Using the methodology introduced in Sect. 3, we can, for instance, plot a typical learning curve of h⃗. In Fig. 2, we show the final sink population \(r_{s}^{T}\) as a function of the number of the iterations of the optimization algorithm, given a specific network and environment configuration.

Figure 2

Example of learning curve using the approach presented in Sect. 3. The final sink population \(r_{s}^{T}\) is optimized with respect to h⃗, and we plot it as a function of the number of iterations of RMSprop, with \(T=5\). The numerical values of the \(H_{I}\) entries are those given by Equation (16), the decay rates are given by \(\Gamma _{n}=\Gamma =5\cdot 10^{-4}\), \(\Gamma _{s}=6.283\), while we assume that \(\gamma _{n}=0\) for any value of n, meaning that the network is not subject to any dephasing. We assume that the excitation is initially injected in the first site of the network, i.e., \(\rho (0)=|1\rangle \langle 1|\)

4.1 Optimal solutions

In this Section, we systematically study the network performance under different dephasing conditions, by looking at the population transferred to the sink over a total evolution time T. To this end, we assume that the coupling between the sites is given by \(H_{I}\) in Equation (16), with the numerical values of the decay rates as given above. We then study the effectiveness of the optimization of local energies for different dephasing conditions. We start by considering three relevant cases: in the first case, we consider the local dephasing rates \(\vec{\gamma _{\text{ref}}}\) obtained through the optimization performed in Ref. [18]; in the second case, the network sites are not subject to any dephasing, i.e., \(\gamma _{n} = 0\) for any value of n; in the third case, the dephasing is uniform across all sites, i.e., \(\gamma _{n}=\gamma = 1\). To ease the notation, we will denote the array of local dephasing rates γ⃗ by \(\vec{\gamma _{\text{ref}}}, \vec{0}, \vec{1}\) in the three aforementioned cases, respectively.

For the sake of definiteness, we assume that the excitation is initially injected in the first site of the network, i.e., the initial condition for Equation (7) reads \(\rho (0)=|1\rangle \langle 1|\); furthermore, we assume that the third site, i.e., \(\vert 3 \rangle \), is connected to the sink. It is worth mentioning that our results are not qualitatively affected if we change either the site connected to the sink or the initially excited site.

In order to solve the optimization problem, we first initialise the local energies \(\vec{h}_{0}\) by setting them all equal to zero, i.e., \(\vec{h} \equiv \vec{h}_{0} = (0,\ldots, 0)\). By so doing, we obtain the final population transferred to the sink \(r^{T}_{s}(\vec{h}_{0})\). We then perform the optimization over the local energies in the way described in Sect. 3 for the three different dephasing conditions, and obtain \(r^{T}_{s}(\vec{h}^{\vec{\gamma}}_{\text{opt}})\), with \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}, \vec{0}, \vec{1}\).
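
A possible usage example, re-using the sketches of Sects. 2 and 3 for the setup just described, is given below; build_operators is a hypothetical wrapper around the construction of Sect. 2 (couplings from Equation (16), \(\Gamma _{n}=5\cdot 10^{-4}\), \(\Gamma _{s}=6.283\), sink attached to site 3), and the numerical values of gamma_ref are those quoted at the beginning of this Section.

```python
import numpy as np
import jax.numpy as jnp

N, dim, T = 7, 9, 5.0
gamma_ref = np.array([0.157, 9.432, 7.797, 9.432, 7.797, 0.922, 9.433])

rho0 = np.zeros((dim, dim))
rho0[1, 1] = 1.0                                    # excitation initially injected in |1>
r0 = rho0.astype(complex).flatten(order='F')        # column-stacked vec(rho(0))

h0 = jnp.zeros(N)                                   # all-zero initial on-site energies
for label, gam in [('gamma_ref', gamma_ref), ('zero', np.zeros(N)), ('uniform', np.ones(N))]:
    H_I_full, L_ops = build_operators(gamma=gam)    # hypothetical helper, cf. Sect. 2 sketch
    args = (H_I_full, L_ops, r0, T)
    h_opt = rmsprop_optimize(h0, args)
    print(label, float(sink_population(h0, *args)), float(sink_population(h_opt, *args)))
```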

The improvement achieved by optimizing over h⃗ can be deduced from the data shown in Table 1, whereas the sink population dynamics is shown in Fig. 3. The corresponding optimal Hamiltonians can be found in the Appendix. We observe a marginal improvement of the population transfer when we take \(\vec{\gamma } = \vec{\gamma}_{\text{ref}}\), a slight improvement for uniform dephasing rates (\(\vec{\gamma} = \vec{1}\)) and a larger improvement in the absence of dephasing (\(\vec{\gamma} = \vec{0}\)). In all cases, we are able to achieve high population transfer.

Figure 3

Plot of the sink population \(r_{s}^{t}(\vec{h}^{\vec{\gamma}}_{\text{ opt}})\) as a function of t, where \(\vec{h}^{\vec{\gamma}}_{\text{ opt}}\) are the optimal on-site energies obtained through optimization, with the three sets of dephasing rates \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}, \vec{0}, \vec{1}\). The remaining parameters are the same as in Fig. 2. Notice that we are able to achieve high population transfer both with and without dephasing noise

Table 1 Final sink population \(r_{s}^{T}\) (\(T=5\)) for different dephasing conditions, i.e., \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}, \vec{1}, \vec{0}\). In the right-hand column we consider the case where all the local energies h⃗ are set to zero, i.e., \(\vec{h}_{0} = (0,\ldots, 0)\), while in the left-hand column we show the final sink population for the optimal local energies \(\vec{h}^{\vec{\gamma}}_{\text{opt}}\), as obtained with the optimization method discussed in Sect. 3

We then explore larger uniform dephasing rates by considering \(\gamma _{n} = \gamma \), where γ varies in the range \([0,20]\). As shown in Fig. 4, the optimization allows us to effectively transfer population over a large interval of the chosen range; noticeably, the smaller the dephasing rate, the larger the improvement compared to the non-optimized scenario. Moreover, numerical investigations show that, even setting all the local energies to zero, the system achieves high population transfer as we increase the value of the dephasing rates. One can also observe that there is an intermediate range of γ where the population transfer is high even without optimization. In this range, the optimization of local energies is superfluous for observing high transfer; the process is mostly guided by dephasing, as in the case where \(\vec{\gamma} = \vec{\gamma}_{\text{ref}}\). However, when the dephasing rate becomes too large, it turns out to be detrimental to the transfer; we indeed observe a decrease in the final sink population, both in the optimized and in the non-optimized case. This occurrence can ultimately be justified in terms of the quantum Zeno effect [35–37]: extreme dephasing conditions tend to freeze the system dynamics [23].
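
Under the same assumptions (and with the same hypothetical helper build_operators), the uniform-dephasing sweep just described reduces to a simple loop over γ, comparing the optimized local energies with the all-zero configuration:

```python
import numpy as np
import jax.numpy as jnp

gammas = np.linspace(0.0, 20.0, 41)                 # uniform dephasing rates explored here
sweep = []
for g in gammas:
    H_I_full, L_ops = build_operators(gamma=g * np.ones(N))      # hypothetical helper
    args = (H_I_full, L_ops, r0, T)
    h_opt = rmsprop_optimize(jnp.zeros(N), args)
    sweep.append((g,
                  float(sink_population(jnp.zeros(N), *args)),   # all-zero local energies
                  float(sink_population(h_opt, *args))))         # optimized local energies
```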

Figure 4

Final sink population \(r_{s}^{T}\) (\(T=5\)) for uniform dephasing rate across all network sites \(\gamma _{n}=\gamma \). We compare the case where on-site energies h⃗ are the result of the optimization (dash-dotted line) with the case in which we assume them to be all null (solid line). On the one hand the plot shows that the optimization procedure only leads to minor improvements in the population transfer for moderate to large values of γ compared to the case where all the local energies are set to zero. On the other hand, the optimization procedure results in a significant improvement of the excitation transfer performance for small values of γ

To conclude this part of the study we compare our optimal population transfer with the population transfer achieved when taking \(\vec{h} = \vec{h}_{\text{ref}}\), as given at the beginning of this section. Figure 5 provides evidence of the effectiveness of the optimization: the optimal set of on-site energies \(\vec{h}_{\text{opt}}\) outperforms \(\vec{h}_{\text{ref}}\) for any \(t>0\).

Figure 5

Dynamics of the sink population \(r_{s}^{t}(\vec{h})\) under the dephasing condition \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}\). The solid curve corresponds to \(\vec{h} = \vec{h}_{\text{ref}}\), while the dash-dotted curve corresponds to the optimized site energies \(\vec{h} = \vec{h}_{\text{opt}}\). This plot shows that the optimization method discussed in Sect. 3 is effective at improving the population transfer

4.2 Resilience against different configurations

We now want to test some properties of the optimal on-site energies \(\vec{h}_{\text{opt}}\) for the different γ⃗ discussed in Sect. 4.1. We start by looking at the resilience of the transfer against variations either in the initial or final sites, or in the couplings between the sites. We consider the optimal solutions for a total evolution time \(T=5\), with \(H_{I}\) given by Equation (16). In our analysis the initial and the target sites are always different. The corresponding results can be found in Table 2, where in the left-hand column we show the smallest population transferred while varying the site where the excitation is initially injected, while in the right-hand column we show the smallest population transferred when we vary the site connected to the sink. In both cases, the transfer is more resilient when it is mostly guided by dephasing. Indeed, when \(\vec{\gamma} = \vec{1}\) or \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}\) we still observe high population transfer, whereas these results are in stark contrast with the zero-dephasing case, in which the population transferred to the sink can drop almost to zero.

Table 2 Smallest population transferred \(\min (r_{s}^{T})\) at \(T=5\) for different sets of dephasing noise \(\vec{\gamma} = \vec{\gamma}_{\text{opt}}, \vec{1}, \vec{0}\). In the central column, we show those instances in which we vary the initial state \(\rho (0)\), i.e., the site where the excitation is initially injected, whereas on the rightmost column we change the site m connected to the sink through the operator \(L_{\Gamma _{s}}\) introduced in Equation (6)

Next, we look at the effect of allowing population transfer to the sink from a second node of the network. Under this hypothesis, we observe a minimum population of \(r_{s}^{T}\approx 0.998\) transferred to the sink, when the latter is connected to both the 3-rd and 7-th sites, i.e., \(m=3\) and \(m=7\) in Equation (6). Furthermore, it is worth mentioning that we observe no significant differences between the different dephasing conditions.

We then use the same local energies while considering different couplings between the sites. To do so, we randomly extract the entries of the matrix \(H_{I}\) from a uniform distribution in the range \([-200,200]\). As before, we choose \(\vert 1 \rangle \) as the initially excited site, while \(\vert 3 \rangle \) is the target state. Results are shown in Table 3, where we report the smallest and the largest population transferred to the sink. We can see that the most resilient transfer is achieved for \(\vec{\gamma} = \vec{\gamma}_{\text{ref}}\), while, again, the lowest transfer is observed in the absence of dephasing noise. It is worth noticing that in all cases we have evidence of a configuration yielding an almost perfect population transfer. The corresponding Hamiltonians can be found in the Appendix.

Table 3 Smallest and largest population transferred \(r_{s}^{T}\) at \(T=5\) as obtained with adjacency matrices \(H_{I}\) whose elements are randomly extracted from a uniform distribution defined over the interval \([-200,200]\). We consider \(10^{4}\) realisations of \(H_{I}\), showing that there is at least one matrix \(H_{I}\) leading to almost perfect transfer [cf. right-most column]

To conclude this analysis, we study the population transferred to the sink when the network size is reduced. Starting from a FCN of \(N=7\) sites, where the adjacency matrix \(H_{I}\) is given by Equation (16), we progressively scale down the system size, removing the nodes of the network one by one. To this end, we first discard one node of the network (except for the input and the output nodes, 1 and 3, respectively) and update the adjacency matrix by removing the corresponding row and column; we then optimize over the local energies h⃗. Among all the possible configurations with 6 nodes, we select the one corresponding to the smallest population transferred to the sink after performing the optimization. The rationale behind such a choice is that, by looking at the worst-case scenario, we test the effectiveness of optimizing only the local energies of a smaller network to achieve high excitation transfer.

We iterate this procedure of node removal followed by optimization until we reach the non-trivial case where we are left with only 3 nodes. Results are shown in Fig. 6, where it can be seen that this operation has a significant, detrimental impact on the population transfer, showing that carefully selecting the local energies for a given network configuration may not be sufficient to achieve the desired transfer for smaller networks. Furthermore, we can see that, in contrast to changes in the coupling strengths for the seven-site network, the reduction of the number of nodes seems to be more detrimental in the presence of dephasing noise.
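
A sketch of a single node-removal step is given below (illustrative only: reduced_operators is a hypothetical helper that rebuilds the enlarged Hamiltonian and the Lindblad operators of Sect. 2 for the smaller network, while rmsprop_optimize and sink_population are the routines sketched in Sect. 3).

```python
import numpy as np
import jax.numpy as jnp

def worst_case_single_removal(H_I, protected=(0, 2)):
    """Remove a single node (0-based indices; the input node 0 and the output node 2,
    i.e. sites 1 and 3, are protected), re-optimize the local energies of each reduced
    network, and return the removed node together with the smallest optimized sink
    population. Iterating this step reproduces the node-removal procedure."""
    n_sites = H_I.shape[0]
    worst = (None, 1.0)
    for n in range(n_sites):
        if n in protected:
            continue
        keep = [m for m in range(n_sites) if m != n]
        H_red = H_I[np.ix_(keep, keep)]               # drop the row and column of node n
        args = reduced_operators(H_red)               # hypothetical: rebuild H and L_ops
        h_opt = rmsprop_optimize(jnp.zeros(n_sites - 1), args)
        p = float(sink_population(h_opt, *args))
        if p < worst[1]:
            worst = (n, p)
    return worst
```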

Figure 6

Optimized population transferred to the sink \(r_{s}^{T}\) at \(T=5\) as a function of the number of nodes removed from the original network for different dephasing conditions, i.e., \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}, \vec{1}, \vec{0}\). See details in Sect. 4.2

4.3 Coherence preservation properties

Results from Sect. 4.2 have shown that, when the transport is dephasing-assisted, the population transfer is more resilient to changes of the network configuration. On the other hand, we also expect that such a process would tend to destroy coherence in the system. This is not necessarily true in the absence of dephasing noise, provided that the transfer is fast enough (so that coherence is not destroyed due to excitation losses).

To study how coherence is preserved or lost from the system during the population transfer, we add a new site \(|8\rangle \) to the network, uncoupled from all the other sites. We then prepare the system in the superposition \(\frac{1}{\sqrt{2}}(|1\rangle +|8\rangle )\), and we study the time evolution of coherences while the population transfer from site \(|1\rangle \) to the target is taking place.

The irreversible transfer from site \(\vert 3 \rangle \) to the sink will inevitably lead to coherence loss from the system. However, we would like to separate these artificial losses from the effect of dephasing and spontaneous emission induced by the interaction with the environment. To do so, we instead connect the site \(\vert 3 \rangle \) to a long spin chain via an interaction Hamiltonian \(J_{3 s_{0}}(|3\rangle \langle s_{0}|+|s_{0}\rangle \langle 3|)\), where \(\vert s_{0} \rangle \) is the first site of the chain. Moreover, the chain is described by the following nearest-neighbour interaction Hamiltonian

$$ H_{C}=J\sum_{j=0}^{N_{C}} \bigl( \vert s_{j}\rangle \langle s_{j+1} \vert + \vert s_{j+1} \rangle \langle s_{j} \vert \bigr), $$
(17)

where we assume uniform coupling J across the chain, and \(N_{C}\) is the number of sites of the spin chain.

If the chain is long enough and the evolution time considered is not too long, we do not expect revivals to occur, meaning that most of the population transferred to the chain will not go back to the network. This enables us to picture the whole chain as an effective sink. However, in contrast to the previous scenario, the interaction between the network and the chain enters the unitary part of the dynamics, and therefore does not induce any additional decoherence.

We hence study the coherence dynamics in this new scenario for different dephasing rates γ⃗ and the associated optimal on-site energies presented in Sect. 4.1. The total population \(p_{C}\) transferred to the chain as a function of time can be found in Fig. 8.

In order to study the time evolution of coherence we employ the standard quantifier given by the \(l_{1}\)-norm [38]. In Fig. 7, we show the dynamics of the total coherence of the system computed as

$$ C=\sum_{i\neq j} \vert \rho _{ij} \vert , $$
(18)

as well as the coherence associated to the 8-th site only

$$ C_{8}=\sum_{j} \vert \rho _{8j} \vert - \rho _{88}, $$
(19)

where \(j=1,\ldots, N, s_{0},\ldots, s_{N_{C}}\). In our simulations, we considered a chain of \(N_{C}=80\) spins, \(J_{3 s_{0}}/\hbar =\Gamma _{s}\), and \(J=2J_{3 s_{0}}\).
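
The quantities used in this analysis can be written compactly; the minimal sketch below (the indexing of the extra site is an assumption made for illustration) implements the total coherence of Equation (18), the single-site coherence of Equation (19), and the chain Hamiltonian of Equation (17).

```python
import numpy as np

def total_coherence(rho):
    """Eq. (18): sum of the moduli of all off-diagonal elements of rho."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def site_coherence(rho, i):
    """Eq. (19), with i labelling the extra site |8>: sum over j of |rho_{ij}|, minus rho_ii."""
    return np.abs(rho[i, :]).sum() - np.real(rho[i, i])

def chain_hamiltonian(n_sites, J):
    """Nearest-neighbour chain of Eq. (17) with uniform coupling J, for n_sites chain sites."""
    H_C = np.zeros((n_sites, n_sites))
    for j in range(n_sites - 1):
        H_C[j, j + 1] = H_C[j + 1, j] = J
    return H_C
```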

Figure 7

Time evolution of coherence. In Panel (a), we study the system coherence C as quantified by Equation (18), while in Panel (b), we look at the reduced coherence \(C_{8}\) associated to the 8-th site, computed using Equation (19). We compare the curves obtained for different dephasing rates \(\vec{\gamma} = \vec{\gamma _{\text{ref}}}, \vec{0}, \vec{1}\), while resorting to the corresponding optimal on-site energies. The parameters used for the numerical simulation are the same as in Fig. 8

Figure 8

Population \(p_{C}\) transferred from the network to the spin chain as a function of time. We consider the optimal site energies \(\vec{h}_{\text{opt}}\) (obtained for an evolution time \(T=5\)), under different choices of the dephasing rates \(\vec{\gamma}=\vec{\gamma}_{\text{opt}},\vec{0},\vec{1}\). For the numerical simulations, we chose \(N_{C}=80\) spins in the chain, \(J_{3 s_{0}}/\hbar =\Gamma _{s}\), and \(J=2J_{3 s_{0}}\), while all the remaining values of the physical parameters are given at the beginning of Sect. 4

After a time \(T=10\), we observe a significant increase in C in the absence of dephasing and a small increase when we add dephasing noise. An increase in \(C_{8}\) can also be observed for \(\vec{\gamma}=\vec{0}\) and \(\vec{\gamma}=\vec{1}\), while \(C_{8}(T)< C_{8}(0)\) for \(\vec{\gamma} = \vec{\gamma}_{\text{ref}}\). Finally, we looked at the coherence per number of sites/spins involved, \(c=C/N_{\text{tot}}\). For the initial state \(c=\frac{1}{2}\), while at the end of the transfer \(N_{\text{tot}}=N+N_{C}\). We obtained \(c\approx 0.035\) for \(\vec{\gamma}_{\text{ref}}\), \(c\approx 0.080\) for \(\vec{\gamma}=\vec{1}\), and \(c\approx 0.499\) for \(\vec{\gamma}=\vec{0}\).

These results are in agreement with the expectation that, in the absence of dephasing, coherence is mostly preserved, while losses can occur when the population transfer is driven by dephasing.

5 Conclusions and outlook

In this work, we optimized the on-site energies of a fully connected quantum network to improve population transfer under different environmental conditions. Specifically, we considered a simple model of a FMO complex in the single-excitation subspace, subjected to spontaneous emission and local dephasing. Resorting to a gradient-based technique, we found the optimal site energies for different dephasing rates. We studied the properties of our solutions in terms of resilience against changes in the network initial preparation, couplings, and size, providing a discussion about coherence preservation during the transfer. We showed that high population transfer in a FMO-like network can be achieved by merely optimizing the site energies for a large range of different dephasing rates. However, the optimal solutions for dephasing-driven and zero-dephasing transport are significantly different in terms of resilience to the network configuration and coherence preservation. While in the absence of dephasing we find both a high transport performance and a high degree of coherence preservation, the optimal solutions in the presence of dephasing are shown to be more resilient to changes in the network initial state and in the couplings between sites, while, as expected, they exhibit a higher loss of coherence in the quantum network state. However, in contrast to the transfer resilience against changes in the site interactions, a reduction in the network size seems to be more detrimental in the presence of dephasing.

Our work contributes to a further understanding of the transport properties of fully connected quantum networks by isolating the effect of local energies, dephasing conditions, and network size for a given set of couplings between the sites. Furthermore, our results show that a viable and fruitful approach to designing efficient synthetic devices is to apply adaptive learning approaches to enhance existing natural devices, such as the photosynthetic complex that we have considered in this paper. In this respect, further progress can be made by either studying larger networks, going beyond the one-excitation subspace, or studying different models of interactions. One might also consider more complex environments, e.g. including non-Markovian effects, as displayed by a variety of non-artificial physical systems, and assess whether that could be beneficial for further improving the properties of the transfer.

Data Availability

The data generated as part of this work is available at this link and from the authors

Abbreviations

FMO:

Fenna–Matthews–Olson

FCN:

Fully connected network

TEDOPA:

Time Evolving Density with Orthogonal Polynomial Algorithm

RMSprop:

Root Mean Square Propagation

eV:

electron-Volt

References

  1. Bianconi G. Europhys Lett. 2015;111:56001. https://doi.org/10.1209/0295-5075/111/56001.

  2. Mahler G, Weberruß VA. Quantum networks. Berlin: Springer; 1998. https://doi.org/10.1007/978-3-662-03669-3.

  3. Gisin N, Thew R. Nat Photonics. 2007;1:165. https://doi.org/10.1038/nphoton.2007.22.

  4. Chen J. J Phys Conf Ser. 2021;1865:022008. https://doi.org/10.1088/1742-6596/1865/2/022008.

  5. Lambert CJ. Quantum transport in nanostructures and molecules: an introduction to molecular electronics. Bristol: IOP Publishing; 2021. https://doi.org/10.1088/978-0-7503-3639-0.

  6. Beenakker CWJ, van Houten H. Solid State Phys. 2008;44:1. https://doi.org/10.1016/S0081-1947(08)60091-0.

  7. Lambert N, Chen Y-N, Cheng Y-C, Li C-M, Chen G-Y, Nori F. Nat Phys. 2013;9:10. https://doi.org/10.1038/nphys2474.

  8. Huelga S, Plenio M. Contemp Phys. 2013;54:181. https://doi.org/10.1080/00405000.2013.829687.

  9. Horodecki R, Horodecki P, Horodecki M, Horodecki K. Rev Mod Phys. 2009;81:865. https://doi.org/10.1103/RevModPhys.81.865.

  10. Zurek WH. Rev Mod Phys. 2003;75:715. https://doi.org/10.1103/RevModPhys.75.715.

  11. Breuer H-P, Petruccione F. The theory of open quantum systems. Oxford: Oxford University Press; 2002. https://doi.org/10.1093/acprof:oso/9780199213900.001.0001.

  12. Rivas A, Huelga SF. Open quantum systems: an introduction, SpringerBriefs in physics. Berlin: Springer; 2012. https://doi.org/10.1007/978-3-642-23354-8.

  13. de Vega I, Alonso D. Rev Mod Phys. 2017;89:015001. https://doi.org/10.1103/RevModPhys.89.015001.

  14. Engel GS, Calhoun TR, Read EL, Ahn T-K, Mančal T, Cheng Y-C, Blankenship RE, Fleming GR. Nature. 2007;446:782. https://doi.org/10.1038/nature05678.

  15. Cheng Y-C, Fleming GR. Annu Rev Phys Chem. 2009;60:241. https://doi.org/10.1146/annurev.physchem.040808.090259.

  16. Jang SJ, Mennucci B. Rev Mod Phys. 2018;90:035003. https://doi.org/10.1103/RevModPhys.90.035003.

  17. Plenio MB, Huelga SF. New J Phys. 2008;10:113019. https://doi.org/10.1088/1367-2630/10/11/113019.

  18. Caruso F, Chin AW, Datta A, Huelga SF, Plenio MB. J Chem Phys. 2009;131:105106. https://doi.org/10.1063/1.3223548.

  19. Chin AW, Prior J, Rosenbach R, Caycedo-Soler F, Huelga SF, Plenio MB. Nat Phys. 2013;9:113. https://doi.org/10.1038/nphys2515.

  20. Anderson PW. Phys Rev. 1958;109:1492. https://doi.org/10.1103/PhysRev.109.1492.

  21. Anderson PW. Rev Mod Phys. 1978;50:191. https://doi.org/10.1103/RevModPhys.50.191.

  22. Mohseni M, Rebentrost P, Lloyd S, Aspuru-Guzik A. J Chem Phys. 2008;129:174106. https://doi.org/10.1063/1.3002335.

  23. Rebentrost P, Mohseni M, Kassal I, Lloyd S, Aspuru-Guzik A. New J Phys. 2009;11:033003. https://doi.org/10.1088/1367-2630/11/3/033003.

  24. Adolphs J, Renger T. Biophys J. 2006;91:2778. https://doi.org/10.1529/biophysj.105.079483.

  25. Davidson S, Pollock FA, Gauger E. Phys Rev Res. 2021;3:L032001. https://doi.org/10.1103/PhysRevResearch.3.L032001.

  26. Albert R, Barabási A-L. Rev Mod Phys. 2002;74:47. https://doi.org/10.1103/RevModPhys.74.47.

  27. Chin AW, Rivas A, Huelga SF, Plenio MB. J Math Phys. 2010;51:092109. https://doi.org/10.1063/1.3490188.

  28. Prior J, Chin AW, Huelga SF, Plenio MB. Phys Rev Lett. 2010;105:050404. https://doi.org/10.1103/PhysRevLett.105.050404.

  29. Tamascelli D, Smirne A, Lim J, Huelga SF, Plenio MB. Phys Rev Lett. 2019;123:090402. https://doi.org/10.1103/PhysRevLett.123.090402.

  30. Am-Shallem M, Levy A, Schaefer I, Kosloff R. Three approaches for representing Lindblad dynamics by a matrix-vector notation. 2015. https://doi.org/10.48550/ARXIV.1510.08634.

  31. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X. TensorFlow: large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org

  32. Ruder S. An overview of gradient descent optimization algorithms. 2016. https://doi.org/10.48550/ARXIV.1609.04747.

  33. Tronrud DE, Wen J, Gay L, Blankenship RE. Photosynth Res. 2009;100:79. https://doi.org/10.1007/s11120-009-9430-6.

  34. Schmidt am Busch M, Müh F, El-Amine Madjet M, Renger T. J Phys Chem Lett. 2011;2:93. https://doi.org/10.1021/jz101541b.

  35. Misra B, Sudarshan ECG. J Math Phys. 1977;18:756. https://doi.org/10.1063/1.523304.

  36. Peres A. Am J Phys. 1980;48:931. https://doi.org/10.1119/1.12204.

  37. Facchi P, Pascazio S. Phys Rev Lett. 2002;89:080401. https://doi.org/10.1103/PhysRevLett.89.080401.

  38. Baumgratz T, Cramer M, Plenio MB. Phys Rev Lett. 2014;113:140401. https://doi.org/10.1103/PhysRevLett.113.140401.


Acknowledgements

AI gratefully acknowledges the financial support of The Faculty of Science and Technology at Aarhus University through a Sabbatical scholarship and the hospitality of the Quantum Technology group, the Centre for Quantum Materials and Technologies, and the School of Mathematics and Physics, during his stay at Queen’s University Belfast.

Funding

We acknowledge support by the European Union’s Horizon 2020 FET-Open project TEQ (766900), the Horizon Europe EIC-Pathfinder project QuCoM (101046973), the Leverhulme Trust Research Project Grant UltraQuTe (grant RPG-2018-266), the Royal Society Wolfson Fellowship (RSWF/R3/183013), the UK EPSRC (EP/T028424/1), and the Department for the Economy Northern Ireland under the US-Ireland R&D Partnership Programme. AI acknowledges support from the Novo Nordisk Foundation NERD grant (Grant no. NNF22OC0075986)

Author information

Contributions

S.G. and G.Z. performed the quantitative analysis. A.I. and M.P. conceptualized the problem. All authors contributed to the writing of the manuscript.

Corresponding author

Correspondence to M. Paternostro.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix:  Optimal Hamiltonians

In this Appendix, we report some of the optimal Hamiltonians found during our optimizations and analysis. All energy values are expressed in units of \(1.2414 \times 10^{-4} \ \text{eV}\).

The optimal Hamiltonians, expressed in matrix form, found with the interactions described by Equation (16) for the results presented in Table 1, are

$$ \begin{aligned} \begin{pmatrix} 65.7 & -104.1 & 5.1 & -4.3 & 4.7 & -15.1 & -7.8 \\ -104.1 & -11.1 & 32.6 & 7.1 & 5.4 & 8.3 & 0.8 \\ 5.1 & 32.6 & -56.1 & -46.8 & 1.0 & -8.1 & 5.1 \\ -4.3 & 7.1 & -46.8 & -36.2 & -70.7 & -14.7 & -61.5 \\ 4.7 & 5.4 & 1.0 & -70.7 & -30.6 & 89.7 & -2.5 \\ -15.1 & 8.3 & -8.1 & -14.7 & 89.7 & 55.7 & 32.7 \\ -7.8 & 0.8 & 5.1 & -61.5 & -2.5 & 32.7 & 4.2 \end{pmatrix}, \end{aligned} $$
(A.1)

for \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}\),

$$ \begin{aligned} \begin{pmatrix} 43.5 & -104.1 & 5.1 & -4.3 & 4.7 & -15.1 & -7.8 \\ -104.1 & 13.7 & 32.6 & 7.1 & 5.4 & 8.3 & 0.8 \\ 5.1 & 32.6 & -45.8 & -46.8 & 1.0 & -8.1 & 5.1 \\ -4.3 & 7.1 & -46.8 & -4.3 & -70.7 & -14.7 & -61.5 \\ 4.7 & 5.4 & 1.0 & -70.7 & -19.5 & 89.7 & -2.5 \\ -15.1 & 8.3 & -8.1 & -14.7 & 89.7 & 14.4 & 32.7 \\ -7.8 & 0.8 & 5.1 & -61.5 & -2.5 & 32.7 & -8.6 \end{pmatrix}, \end{aligned} $$
(A.2)

for \(\vec{\gamma}=\vec{1}\), and

$$ \begin{aligned} \begin{pmatrix} -13.2 & -104.1 & 5.1 & -4.3 & 4.7 & -15.1 & -7.8 \\ -104.1 & -1.7 & 32.6 & 7.1 & 5.4 & 8.3 & 0.8 \\ 5.1 & 32.6 & 16.1 & -46.8 & 1.0 & -8.1 & 5.1 \\ -4.3 & 7.1 & -46.8 & -43.3 & -70.7 & -14.7 & -61.5 \\ 4.7 & 5.4 & 1.0 & -70.7 & 424.2 & 89.7 & -2.5 \\ -15.1 & 8.3 & -8.1 & -14.7 & 89.7 & -568.8 & 32.7 \\ -7.8 & 0.8 & 5.1 & -61.5 & -2.5 & 32.7 & 39.5 \end{pmatrix}, \end{aligned} $$
(A.3)

for \(\vec{\gamma}=\vec{0}\).

While a simple intuition of the pattern of optimal matrix entries found in such examples seems to be elusive, one can notice that, in the absence of dephasing, the moduli of two of the optimized local energies are significantly higher than the rest. In the analysis reported in Sect. 4.2 we found fully connected networks that achieved near perfect transfer (see Table 3) for different γ⃗. The corresponding Hamiltonians are

$$ \begin{aligned} \begin{pmatrix} 65.7 & 182.4 & -83.3 & -106.2 & -191.6 & -18.8 & -20.8 \\ 182.4 & -11.1 & -152.6 & 91.9 & -162.3 & 55.6 & 183.7 \\ -83.3 & -152.6 & -56.1 & -132.0 & -190.2 & 177.3 & -101.7 \\ -106.2 & 91.9 & -132.0 & -36.2 & -161.3 & -169.0 & 144.6 \\ -191.6 & -162.3 & -190.2 & -161.3 & -30.6 & -106.3 & -102.8 \\ -18.8 & 55.6 & 177.3 & -169.0 & -106.3 & 55.7 & -111.5 \\ -20.8 & 183.7 & -101.7 & 144.6 & -102.8 & -111.5 & 4.2 \end{pmatrix}, \end{aligned} $$
(A.4)

for \(\vec{\gamma}=\vec{\gamma}_{\text{ref}}\),

$$ \begin{aligned} \begin{pmatrix} 43.5 & 102.9 & 92.8 & 63.8 & 28.7 & -136.6 & -183.1 \\ 102.9 & 13.7 & 0.8 & 75.5 & -118.5 & 177.5 & 110.7 \\ 92.8 & 0.8 & -45.8 & -140.9 & -198.8 & 134.9 & 144.7 \\ 63.8 & 75.5 & -140.9 & -4.3 & -184.6 & -14.5 & -139.5 \\ 28.7 & -118.5 & -198.8 & -184.6 & -19.5 & -153.8 & 5.2 \\ -136.6 & 177.5 & 134.9 & -14.5 & -153.8 & 14.4 & -188.0 \\ -183.1 & 110.7 & 144.7 & -139.5 & 5.2 & -188.0 & -8.6 \end{pmatrix}, \end{aligned} $$
(A.5)

for \(\vec{\gamma}=\vec{1}\), and

$$ \begin{aligned} \begin{pmatrix} -13.2 & 6.5 & 64.0 & -147.1 & -71.3 & -46.6 & -156.7 \\ 6.5 & -1.7 & 13.2 & 20.5 & -102.7 & 68.0 & 56.6 \\ 64.0 & 13.2 & 16.1 & -164.0 & 154.5 & 95.7 & -187.0 \\ -147.1 & 20.5 & -164.0 & -43.3 & -86.8 & -17.3 & 43.4 \\ -71.3 & -102.7 & 154.5 & -86.8 & 424.2 & 72.0 & 70.9 \\ -46.6 & 68.0 & 95.7 & -17.3 & 72.0 & -568.8 & 155.8 \\ -156.7 & 56.6 & -187.0 & 43.4 & 70.9 & 155.8 & 39.5 \end{pmatrix}, \end{aligned} $$
(A.6)

for \(\vec{\gamma}=\vec{0}\). Again, no intuition for the optimality of such configurations is apparent. However, we note that many of the interactions are significantly stronger than in the FMO complex model described by Equation (16).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Sgroi, S., Zicari, G., Imparato, A. et al. Efficient excitation-transfer across fully connected networks via local-energy optimization. EPJ Quantum Technol. 11, 29 (2024). https://doi.org/10.1140/epjqt/s40507-024-00238-w
