Correlation avoidance in single-photon detecting quantum random number generators by dead time overestimation
EPJ Quantum Technology volume 11, Article number: 60 (2024)
Abstract
In the case of quantum random number generators based on single-photon arrivals, the physical properties of single-photon detectors, such as time-tagger clocks and dead time, influence the stochastic properties of the generated random numbers. This can lead to unwanted correlations among consecutive samples.
We present a method based on extending the insensitive periods after photon detections. This method eliminates the unwanted stochastic effects at the cost of reduced generation speed. We calculate performance measures for our presented method and verify its correctness with computer simulations and measurements conducted on an experimental setup. Our algorithm has low complexity, making it convenient to implement in QRNG schemes, where the benefits of having uncorrelated output intervals exceed the disadvantages of the decreased rate.
1 Introduction
Provably secure randomness is an essential resource for many applications like Monte Carlo simulations or the cryptographic protocols of the present [1] and even the quantum cryptographic protocols of the future [2]. Conventional pseudorandom number generators are based on complex but deterministic algorithms, unavoidably leading to some undesirable deterministic features in the long run. In contrast, quantum random number generators (QRNGs) [3, 4] exploit the inherent unpredictability of quantum mechanical phenomena to provide a provably secure entropy source. Optical QRNG schemes make use of the quantum nature of light, leading to many possible architectures, such as generators based on the superposition of single-photon paths [5, 6], photon number counting [7, 8], photon arrival times [9–11], quantum phase fluctuations [12], amplified spontaneous emission [13], or even Raman scattering [14].
Using the arrival time of photons is an attractive choice due to the simplicity of the required hardware. The source of randomness in these generators is the light emission process, whose weak optical signal is detected by a single-photon detector. Bits are then generated from the measured arrival times of the individual photons. Ideally, the measured raw data samples should be independent and come from a well-defined, known distribution. However, in a real-world scenario, there are various imperfections we also have to deal with. The finite precision of time measurement introduces unwanted correlations [15], which can be remedied by restarting the time-tagger clock at each detection [9, 16] at the cost of more complicated hardware. Another major factor is the dead time of photon detectors [17], further changing the measured interval distribution.
In this work, we introduce a method to deal with the effect of non-restartable time-tagger clocks and detector dead time simultaneously, at the cost of reduced bit generation speed. Compared to the standard practice of reducing input rates to limit the unwanted correlations due to these effects, our proposed method also allows generator operation in regimes with higher input rates, thus facilitating improved output performance regarding the bit generation rate. The paper is organized as follows: Sect. 2 describes the basic operation principle of time-of-arrival generators and contains a brief analysis of the measured interval distributions in the non-ideal cases. We introduce our method in Sect. 3 and evaluate its performance in Sect. 4. Measurement data presented in Sect. 5 supports the validity of our method. Finally, Sect. 6 concludes the paper.
2 Principle of QRNG operation
A whole family of QRNGs operates based on the following concept: a single-photon detector (SPD) detects photons emitted by a suitably attenuated continuous-wave (CW) laser, and a time-tagger card (time-to-digital converter, TDC) assigns time stamps to detections based on its continuously running internal clock signal. We assume the photons to arrive according to a homogeneous Poisson point process (PPP) with rate λ, valid for coherent light sources [18]. We refer to λ as the input photon rate of our detection system; it is proportional to the optical power and its value already includes the losses from the \(\eta _{\text{d}}<100\%\) detection efficiency of the SPD. Let \(S_{i}\) denote the ith photon arrival time, and \(T_{i} = S_{i}-S_{i-1}\) the exponentially distributed time elapsed between \(S_{i}\) and \(S_{i-1}\), where \(S_{0}\) is the starting time of the measurement. These times are physically measured by counting the clock signal’s leading edges between \(S_{i}\) and \(S_{i-1}\), yielding integer values. These integers are the discretized time differences (DTDs), discrete random variables denoted by \(D_{i}\). DTDs undergo well-defined mathematical operations based on the applied random bit generation scheme (e.g., [9]), outputting random bits, which form uniformly distributed, uncorrelated sequences in the ideal case. Such generators are commonly referred to as time-of-arrival (ToA) QRNGs. Our method offers a tool for avoiding correlations among the DTDs that can be used with all such devices, independent of the concrete bit generation algorithm.
Let us denote the time-tagger’s resolution—the clock signal’s period—by τ. There is a non-zero \(\gamma _{i}\) time between \(S_{i}\) and the previous leading clock edge, that is, \(\gamma _{i}=S_{i}-\mathopen{\lfloor }S_{i}/\tau \mathclose{\rfloor } \tau \), where \(\mathopen{\lfloor }\cdot \mathclose{\rfloor }\) denotes the floor function, representing the greatest integer less than or equal to its argument. Consequently, \(\gamma _{i} \in [0,\tau )\). We call the random variable \(\gamma _{i}\) the phase of the ith photon detection.
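To make the notation concrete, the following Python sketch computes DTDs and phases from a list of arrival times using the floor-based discretization defined above; it is an illustration only, and the function and variable names are ours rather than the authors'.

```python
import math

def discretize(arrival_times, tau):
    """Compute DTDs D_i and phases gamma_i from photon arrival times S_i.

    D_i counts the leading clock edges between S_{i-1} and S_i, and
    gamma_i = S_i - floor(S_i / tau) * tau is the phase of the ith detection.
    """
    edges = [math.floor(s / tau) for s in arrival_times]   # index of the last edge before each detection
    dtds = [edges[i] - edges[i - 1] for i in range(1, len(edges))]
    phases = [s - math.floor(s / tau) * tau for s in arrival_times]
    return dtds, phases
```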
It has been previously known that non-zero phases introduce correlations between the DTDs and, correspondingly, between the random bits generated [9]. In our previous work [15], we have derived a detailed stochastic model of a particular ToA bit generation method, quantitatively analyzing the effects of these phases. We have shown that by increasing the product of the input photon rate of the SPD and the timing resolution (λτ), the correlation coefficients between bits deviate from zero, while the bit-pair and other bit-tuple probabilities deviate from the uniform values. On the other hand, keeping λτ close to zero severely limits the achievable bit generation rates.
2.1 Distribution and correlation of the observed variables
Bit generation schemes are based on the \(D_{i}\) DTDs since they are the physical observables measured in the setup. According to Ref. [15], focusing only on the first arrival, we can write the following for the distribution of these variables and the corresponding phases, for \({x,y\in \left [0,\tau \right )}\):
and
where \(\chi _{A}\) is the indicator of the set A (Footnote 1). We note that if \(\gamma _{0}=0\) then \(F_{n}(0,\tau )=\Pr(D_{1} = n \mid \gamma _{0} = 0)=\left ( 1- \mathrm {e} ^{-\lambda \tau}\right ) \mathrm {e} ^{-\lambda \tau n}\) results in a geometric distribution [16], retaining the memoryless property of the underlying exponential distribution. This means that successive DTDs, \(D_{i}\) and \(D_{i+1}\), would be uncorrelated after eliminating the effects of non-zero phases.
The conditional and unconditional joint distributions of successive DTDs \(D_{1}, \ldots , D_{N}\), i.e.,
can also be calculated based on (2). The joint distributions indicate that the \(D_{1}, \ldots , D_{N}\) variables are correlated [15]. Thus, using the \(D_{1}, \ldots , D_{N}\) sequence for random bit generation might result in correlated bit sequences.
In Ref. [15], we only focused on the correlations between the random bits generated from the physical process but skipped the numerical analysis of correlations between DTDs. To derive the correlation between successive samples, \(D_{i}\) and \(D_{i+1}\)—which is equivalent to the lag-1 autocorrelation coefficient in DTD sequences—we refer back to our previous work, where we have shown that if the first phase of the process, \(\gamma _{0}\), is uniformly distributed between 0 and τ, then every other \(\gamma _{i}\) has a uniform marginal distribution (Ref. [15], Theorem 1).
Without loss of generality, set \(i=1\) and \(i+1=2\) and compute the correlation \(\rho _{D_{1},D_{2}}\) based on
According to (2), for \(n_{1}>0\) and \(n_{2}>0\), we have
Furthermore, using the uniform distribution of \(\gamma _{0}\), the expectation of the product \(D_{1}D_{2}\) becomes
The DTDs’ expected values \(\operatorname*{\mathbb{E}}(D_{1})=\operatorname*{\mathbb{E}}(D_{2})\) and second moments \({\operatorname*{\mathbb{E}}\left (D_{1}^{2}\right )=\operatorname*{\mathbb{E}}\left (D_{2}^{2}\right )}\) can be calculated using Ref. [15, Eq. (12)], yielding
and
Finally, the correlation between \(D_{1}\) and \(D_{2}\), purely a function of the product λτ, is
The correlation tends to zero as \((\lambda \tau )\to 0\) or \((\lambda \tau )\to \infty \); its value is negative in between (see Fig. 1). It is monotonically decreasing until reaching its minimum of −0.2233 around \(\lambda \tau =3.5749\). Thus, increasing λτ from zero increases the magnitude of correlations between successive DTDs (Footnote 2), and the resulting sequence of random variables will always contain systematic correlations. Although the standard practice of reducing the optical power (limiting λτ) is a valid approach to decrease correlations, it also severely limits the capabilities of the QRNGs. For example, only allowing \(|\rho _{D_{1},D_{2}}| < 10^{-4}\) means that λτ has an upper bound of 0.0346, which can limit certain architectures in terms of bit generation rates [19, Sect. 3.3]. Therefore, finding a different way of eliminating correlations whilst allowing higher λτ values can prove beneficial.
2.2 Dead time
An additional limitation is imposed by the inability of physical devices to observe all successive photon arrivals. Detectors usually have a dead time, an insensitive time interval of length ζ after a detected photon arrival, during which they cannot register any new arrivals. This means that after a photon detection at \(S_{i}\), no photons arriving before \(S_{i} + \zeta \) are recognized. Consequently, \(S_{i}>S_{i-1}+\zeta \) holds for all observed photon arrivals with \(i>0\). Our model assumes that photon arrivals during the dead time interval are undetected, and such arrivals do not reset the dead time.
Similarly to the previous case free of dead time, we can compute the distribution of the DTDs \(D_{1}, \ldots , D_{N}\) as follows. Assume that \(\zeta = k\tau +\delta \) is constant with \(k\in \mathbb{N}\) and \(0 \leq \delta < \tau \), meaning that \(\Pr \left (D_{1}< k\right )=0\). Then, for \(n\geq k\), the conditional distribution is [15]
and for \(n\geq k\), the conditional density is
Along the lines of the dead time free case, we compute the distribution of \(D_{1}\) and the joint distribution of \(D_{1}\) and \(D_{2}\) from (10), utilizing the uniform distribution of \(\gamma _{0}\), as
The distributions allow us to calculate the expected values \(\operatorname*{\mathbb{E}}\left (D_{1}-k\right )\), \(\operatorname*{\mathbb{E}}\left (\left (D_{1}-k\right )^{2}\right )\) and \(\operatorname*{\mathbb{E}}\left (\left (D_{1}-k\right )\left (D_{2}-k\right )\right )\), along with the correlation \(\rho _{D_{1},D_{2}}=\rho _{D_{1}-k,D_{2}-k}\):
where we provided closed-form expressions for the former two and computed the latter two numerically.
Figure 1 depicts the correlation of consecutive DTDs as a function of the photon arrival rate for selected values of the dead time. We note that the correlation is independent of the integer part of the dead time, k, and only its fractional part, δ, affects the values. The figure verifies that the correlation tends to zero as the photon arrival rate decreases to zero, but for higher photon arrival rates the correlation strongly depends on the dead time.
Note that the presence of dead time reduces the measured rate of photon detections. When \(S_{i}>S_{i-1}+\zeta \), the mean time between photon observations is
As a consequence, the average rate at which the \(D_{i}\) samples are obtained is
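The displayed expressions for these two quantities are not reproduced in this copy. As a hedged reconstruction under the non-paralyzable dead-time model stated above (arrivals during the dead time are lost and do not extend it), the memoryless property of the PPP gives \(\operatorname*{\mathbb{E}}(S_{i}-S_{i-1}) = \zeta + 1/\lambda \) and, consequently, \(\lambda _{\text{d}} = 1/\operatorname*{\mathbb{E}}(S_{i}-S_{i-1}) = \lambda /(1+\lambda \zeta )\); this should be read as our reconstruction rather than the article's numbered equation.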
3 Dead time overestimation
To eliminate the correlation between successive \(D_{i}\) values, we introduce an approach called the overestimation of dead time. The approach is based on the following observation. The conditional distribution in (9) is such that for \(n>k+1\) the conditional characteristic function
is independent of x and δ, and satisfies
that is, \(D_{1}\) and \(\gamma _{1}\) are independent when \(D_{1}>k+1\). This also means that \(D_{2}\), which depends on \(\gamma _{1}\), will be independent of \(D_{1}\) as long as \({D_{1}>k+1}\).
Thus, the correlation of the consecutive \(D_{i}\) values comes from the small samples; i.e., when \(D_{i}=k\) or \(D_{i}=k+1\), then \(D_{i}\) and \(D_{i+1}\) are correlated. We can exploit this property in the overestimation algorithm to avoid unwanted correlations.
In the following sections, unless the unit of time is specified explicitly, we assume τ and ζ to have arbitrary, unspecified time units, whilst λ is measured in [counts]/[unit of time].
3.1 Overestimation method
Let us overestimate the dead time with an interval covering m clock cycles, where \(m\in \mathbb{Z}^{+}\) such that \(\zeta = k\tau +\delta \leq m \tau \). We refer to m as the overestimation parameter. After a detection event, we start an mτ long safety interval from the next rising clock edge. If a photon is detected after the dead time is over but before this safety interval has ended, we discard the detection event from any further calculations and extend the safety interval by mτ, counted from the following rising edge.
Suppose the safety interval is eventually over because no early detection extends it further. In this case, we continue using our bit generation method as if the previous detection happened at the end of the safety interval. That is, we count the next time difference between the end of the safety interval and the next detection time, then digitize it. See an example in Fig. 2. This approach can be thought of as an algorithm taking the DTDs as input and outputting the virtual DTDs (vDTDs). The algorithm (described in Algorithm 1) has the added benefit of placing the starting points of measurable intervals right to the beginning of a clock cycle, essentially realizing the ideal case of \(\gamma _{i-1} = 0\), yielding geometrically distributed vDTDs.
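The listing of Algorithm 1 itself is not reproduced in this copy. The following Python sketch is our reading of the procedure described above, under the assumption that a detection whose DTD is at most m occurs before the end of the (possibly extended) safety interval and is discarded, while an accepted detection yields the virtual DTD \(V = D-(m+1)\), matching the assignment rule cited in the proof of Theorem 1 below. The function name is ours.

```python
def overestimate_dead_time(dtds, m):
    """Map measured DTDs to virtual DTDs (vDTDs) by dead time overestimation.

    A detection whose DTD is at most m occurs before the end of the safety
    interval (possibly already extended by earlier discarded detections) and is
    dropped; the safety interval then restarts from the clock edge following
    that detection. A detection with DTD > m is accepted, and its vDTD is
    measured from the end of the safety interval: V = D - (m + 1).
    """
    vdtds = []
    for d in dtds:
        if d > m:
            vdtds.append(d - (m + 1))
        # else: discarded; the extension of the safety interval is already
        # reflected in the next DTD being counted from this very detection
    return vdtds
```

Because every measured DTD is counted from the immediately preceding detection, whether that detection was accepted or discarded, the per-sample rule above reproduces the safety-interval extension without explicit bookkeeping.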
Let \(\mathbb{S}=(S_{1}, S_{2}, \ldots )\) be the observed photon arrival times with dead time ζ (that is, ∀i: \(S_{i}>S_{i-1}+\zeta \)), let \(\mathbb{D}=(D_{1}, D_{2}, \ldots )\) be the sequence of measured DTDs associated with \(\mathbb{S}\), and let \(\mathbb{V}=(V_{1}, V_{2}, \ldots )\) be the virtual DTD sequence generated by Algorithm 1 from \(\mathbb{D}\).
Theorem 1
The virtual DTD sequence generated by Algorithm 1, \(\mathbb{V}\), is composed of i.i.d. elements with geometric distribution: \(\Pr(V_{\ell}=n)=(1- \mathrm {e} ^{-\lambda \tau}) \mathrm {e} ^{-\lambda \tau n}\).
Proof
For the distribution of DTDs \(D_{i}\) greater than m, we can write
where \(\gamma _{i-1}\) is the arrival phase of \(S_{i-1}\). Using the \(V \gets D-(m+1)\) assignment rule in line 4 of Algorithm 1, we have
for the distribution of the \(V_{\ell}\) variable, which is independent of the phase \(\gamma _{i-1}\). □
Note that without dead time, the choice of \(V\leftarrow D-1\) assignment rule in line 4 of Algorithm 1 would be sufficient since it removes the first fractional clock period, which is responsible for the correlation of successive samples in this case. Additionally, removing m full-length clock periods does not affect the discrete distribution of samples [16]. Using this scheme comes at a cost, as the time used to overestimate the dead time cannot be used for bit generation, leading to a decreased bit generation rate.
One could reason that we could have the same effect by simply reducing the optical power intensity (the photon rate λ) to a regime where correlations and distortions in the distributions vanish. We argue that our algorithm is a better choice than power reduction, both from a philosophical and a numerical point of view.
First, it is true that by decreasing the optical power, the probability \(\Pr \left (D_{i}\leq k+1\right )\) decreases, consequently reducing the number of DTDs causing correlations. However, this probability is never exactly zero—unless λ is set to zero, preventing bit generation. Algorithm 1, on the other hand, removes every problematic DTD, yielding a theoretically correlation-free sequence of virtual DTDs.
Second, reducing the input rate also reduces the available number of measurement samples for bit generation per unit time. Consequently, power reduction limits achievable output bit generation speeds (Footnote 3).
3.2 Virtual DTD generation rate
For the performance assessment of Algorithm 1, let us define \(\beta _{\ell}=(D_{i},\ldots ,D_{i+u-1})\), the u-long subsequence of \(\mathbb{D}\) responsible for generating the ℓth vDTD, \(V_{\ell}\). According to the algorithm, \(\beta _{\ell}\) starts with an uninterrupted run of zero or more DTDs smaller than or equal to m, and ends with a single element greater than m (\(D_{i-1}>m\) and \(D_{i+u-1} > m\), but \(D_{t} \leq m\) for all \(t\in \{i,\ldots ,i+u-2\}\)). Note that the set of all such subsequences, \(\{\beta _{\ell}\}\), is a partition of \(\mathbb{D}\), since \(\forall i:\ D_{i}\in \bigcup _{\ell}\beta _{\ell}\) and \(\left (D_{i}\in \beta _{x}\land D_{i}\in \beta _{y}\right )\Rightarrow \left (\beta _{x}=\beta _{y}\right )\).
The number of elapsed clock signal edges between generating \(V_{\ell -1}\) and \(V_{\ell}\) is \(\Theta _{\ell }= \sum _{k=0}^{u-1} D_{i+k}\), where u is the length of \(\beta _{\ell}\) and \(\Theta _{\ell}\) is the sum of \(\beta _{\ell}\)’s elements.
Similar to \(\lambda _{\text{d}}\), we define \(\lambda _{\text{v}}\), the virtual count rate at which the vDTDs are generated, as
Theorem 2
The virtual count rate \(\lambda _{\textit{v}}\) can be expressed as
Proof
Consider the sequence \((Z_{i})\), where for \(i\geq 0\)
The sum \(S_{N} = \sum _{i=0}^{N} Z_{i}\) then gives the number of vDTDs generated by Algorithm 1 from an original N-long DTD sequence. We can then write
and
Consequently, \(Z_{i}\) only depends on \(\gamma _{i-1}\), in the sense that
That is, the \((Z_{i})\) sequence depends on the underlying phase sequence \((\gamma _{i})\). According to (9), the consecutive \(\gamma _{i}\) values form a Markov chain, since \(\Pr(\gamma _{i}< x_{i}\mid \gamma _{i-1}=x_{i-1})= \Pr(\gamma _{i}< x_{i}\mid \gamma _{i-1}=x_{i-1},\ldots ,\gamma _{0}=x_{0})\). The stationary phase distribution satisfies
where \(g(x,y)\) can be obtained from (10) using that the conditional phase density at the first photon arrival after the dead time is
The solution of (27) is \(f(y)=\chi _{\{0\leq y<\tau\}}\frac{1}{\tau}\).
Due to the ergodicity of the \(\gamma _{i}\) Markov chain, as N tends to infinity, the number of samples in the phase sequence which fall into the \((x,x+\Delta )\) interval is proportional to \(f(x)\cdot \Delta \).
Using this, the ratio of DTDs longer than m can be written as
The expected virtual count rate can then be calculated as
where \(\lambda _{\text{d}}\) is the original rate with dead time, as obtained in (18). □
Let \(\Theta =\lim _{\ell \to \infty} \Theta _{\ell}\) be the stationary number of leading clock edges between generating consecutive \(V_{\ell}\) values. Theorem 2 defines its mean as \(\operatorname*{\mathbb{E}}(\Theta )=1/(\lambda _{\text{v}}\tau )\). The expected time for generating a vDTD with Algorithm 1, \(T_{\Theta}\), can then be written as
The vDTD sample generation rate computed according to Theorem 2 is depicted in Fig. 3.
3.3 Computation of further performance indices
Theorem 2 calculates the mean number of non-discarded detections. The analysis approach of this section allows the computation of more detailed performance indices of Algorithm 1.
To compute the distribution of \(\Theta _{1}\) based on (9), we introduce \(\hat{\Theta}(z,x_{0})=\operatorname*{\mathbb{E}}\left (z^{\Theta _{1}}\mid \gamma _{0}=x_{0} \right )\), the z-transform of \(\Theta _{1}\); \(F_{\text{d}}(z,x_{0},x_{1})= \sum _{n=0}^{m} z^{n} f_{n}(x_{0},x_{1})\) describing the discarded arrivals; and \(F_{\text{a}}(z,x_{0},x_{1})= \sum _{n=m+1}^{\infty }z^{n} f_{n}(x_{0},x_{1})\) describing the non-discarded (accepted) arrivals. Based on these functions, \(\hat{\Theta}(z,x_{0})\) can be obtained as
The cumulative distribution function (CDF) of the initial phase distribution after a non-discarded photon arrival is provided in the second term of (20). Its density function (obtained by a derivation according to the function parameter) is
for \(0\leq x < \tau \). The distribution of \(\Theta _{1}\) is obtained in z-transform domain as
We note that the mean “time” between observations, which we computed directly in the previous section, is
Unfortunately, the infinite number of integrals in (32) makes the numerical analysis of \(\hat{\Theta}(z)\) computationally challenging; however, it can be efficiently approximated using the following Erlangization approach.
3.4 Approximation based on an Erlang clock
Following the pattern of Ref. [15, Eq. (50)], we map \(f_{n}(x_{0},x_{1})\), as introduced in (10), into matrices of size \(\hat{N}\times \hat{N}\):
where \(\hat{N}\) is the order of the Erlang clock, \(q=\frac{\lambda \tau}{\lambda \tau + \hat{N}}\), and the discretized version of the dead time is the integer \({L=\mathopen{\lfloor }\hat{N} \zeta /\tau \mathclose{\rfloor }}\). Furthermore, \({J_{i}\in \{1,\ldots ,\hat{N}\}}\) denotes the phase of the grid process at \(S_{i}\), while Ω denotes the number of phase changes.
To compute the number of intervals associated with discarded and non-discarded arrivals, we introduce \({\mathbf{A}_{\text{d}}(z)=\sum _{n=0}^{m}\mathbf{A}_{n} z^{n}}\) and \(\mathbf{A}_{\text{a}}(z)=\sum _{n=m+1}^{\infty}\mathbf{A}_{n} z^{n}\).
The Erlang clock based approximation of \(\hat{\Theta}(z,x_{0})\) is obtained by considering that an accepted photon arrival is preceded by an arbitrary number of dropped photon arrivals, thus
with I denoting an identity matrix of appropriate size. From this, the distribution of Θ can be obtained by inverse z-transform and its kth factorial moment as
where the multiplying column vector is a vector of ones and \(\{\hat{v}\}_{i}=q (1-q)^{i-1}\) is the discretized version of \(f_{\text{init}}\), introduced in (33). E.g., the squared coefficient of variation (SCV) of Θ can be obtained from the factorial moments as
4 Numerical investigations
In this section, we validate the obtained analytical results against simulations for some performance indices.
4.1 Simulations
We created simulation runs, each consisting of 1 million consecutively generated intervals, with a custom-built Python program. For sample interval generation, we utilized Python’s built-in pseudorandom “random” library (Footnote 4) to simulate photon emission times for particular λ and τ parameters. We also simulated the effect of a constant ζ dead time (emissions in the dead time period are not registered as detections). These intervals then served as the input for a Python function implementing Algorithm 1, generating simulated vDTD distributions, from which we calculated various statistics of the simulation results. We obtained every data point by taking the mean of 20 independent simulation runs. In figures, the standard deviation of the statistic is also denoted with a blue error bar based on the 20 samples—although this value is mostly too small for graphical visibility.
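A minimal, self-contained sketch of such a simulation run is given below. It mirrors the procedure described above (exponential inter-arrival times, non-paralyzable dead time, floor-based discretization, and the overestimation step with parameter m), but it is not the authors' original program; all names and the example parameter values are ours.

```python
import math
import random

def simulate_vdtds(lam, tau, zeta, m, n_detections, seed=0):
    """Simulate detections of a rate-lam PPP with non-paralyzable dead time zeta,
    discretize them with clock period tau, and apply overestimation with parameter m."""
    rng = random.Random(seed)
    t, last_detection = 0.0, float("-inf")
    detections = []
    while len(detections) < n_detections:
        t += rng.expovariate(lam)              # next photon emission time
        if t - last_detection > zeta:          # emissions inside the dead time are lost
            detections.append(t)
            last_detection = t
    edges = [math.floor(s / tau) for s in detections]
    dtds = [edges[i] - edges[i - 1] for i in range(1, len(edges))]
    return [d - (m + 1) for d in dtds if d > m]   # overestimation step, cf. Sect. 3.1

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation coefficient of a sequence."""
    n, mean = len(xs), sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / (n - 1)
    return cov / var

vdtds = simulate_vdtds(lam=2.0, tau=1.0, zeta=0.4, m=1, n_detections=100_000)
print(lag1_autocorrelation(vdtds))             # expected to be close to zero
```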
First, we verified the validity of simulations using the lag-1 correlations in (16), as well as the mean value of DTDs in (13). The dead time in the simulation had zero integer part (\(k=0\)) and a fractional part δ varying between 0 and 0.9. The clock resolution was set to \(\tau =1\), and we swept the value of λ between 0 and 10. The results in Fig. 4 show excellent agreement between theory and simulations.
Theoretically obtained and simulated results also align for further performance measures, such as the virtual count rate. Figure 5 shows two cases; the results support the validity of the theoretical model presented in Theorem 2. Using these simulations, we also checked the validity of results when using the approximation method based on an Erlang clock, as presented in Sect. 3.4. We found that this approximation already has a decent accuracy with relative errors (Footnote 5) in the order of \(10^{-2}\) for \({\hat{N}=100}\) and \(10^{-3}\) for \(\hat{N}=1000\) Erlang phase parameters, while allowing for the approximation of arbitrary performance indices. An example of simulated and approximated results for \(C_{\Theta}^{2}\) can be seen in Fig. 6.
4.2 Performance cost
To demonstrate the performance cost of Algorithm 1, we compare the DTD and vDTD generation rates. Comparing \(\lambda _{\text{d}}\) and \(\lambda _{\text{v}}\) indicates that for \(\lambda \tau \ll 1\), the difference in output rates is not substantial, but when \(\lambda \tau \sim 1 \), the performance cost of using Algorithm 1 becomes apparent, as seen in Fig. 7. We can further define the \(\lambda _{\text{v}}/\lambda _{\text{d}}\) ratio to quantify this performance loss:
Equation (39) indicates that the critical defining factor for performance loss is the difference \(m\tau -\zeta \) (which we will call the accuracy of overestimation), corresponding to how much we overestimate ζ with mτ. While mτ needs to be strictly greater than ζ for Algorithm 1 to provide uncorrelated vDTDs, it is beneficial to choose mτ as close to ζ as possible. This effect is illustrated in Fig. 8.
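The closed form of Eq. (39) is not reproduced in this copy. The sketch below therefore uses our own reconstruction of the ratio as the fraction of DTDs exceeding m under the uniform stationary phase of Theorem 2; it reproduces the dependence on λτ and on the accuracy \(m\tau -\zeta \) discussed here, but it should be checked against the published equation before being relied upon.

```python
import math

def rate_ratio(lam, tau, m, zeta):
    """Reconstructed output/input rate ratio lambda_v / lambda_d (requires zeta <= m*tau).

    It is the probability that a DTD exceeds m when the previous phase is uniform
    on [0, tau), i.e. the fraction of non-discarded detections.
    """
    return (1.0 - math.exp(-lam * tau)) / (lam * tau) * math.exp(-lam * (m * tau - zeta))

for gap in (0.1, 0.5, 1.0):   # accuracy of overestimation m*tau - zeta, in units of tau
    print(gap, rate_ratio(lam=0.5, tau=1.0, m=1, zeta=1.0 - gap))
```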
4.3 Maximally achievable virtual count rate
When generating vDTDs with Algorithm 1, increasing the λ input photon rate beyond a certain point decreases the final virtual count rate as the probability of detections corresponding to smaller \(D_{i}\) values rises. Thus, finding the optimal input λ corresponding to the maximally achievable output \(\lambda _{\text{v}}\) is important.
Using Eq. (30), we can find this maximum by solving
for λ. Unfortunately, this equation has no algebraic solution but can still be solved numerically. Solutions for an example parameter set are compared to simulation results in Fig. 9.
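As an illustration of such a numerical search, the sketch below maximizes a reconstructed \(\lambda _{\text{v}}(\lambda )\) (the detected rate \(\lambda /(1+\lambda \zeta )\) multiplied by the acceptance ratio from the previous sketch) with SciPy's bounded scalar minimizer. Since the closed form of Eq. (30) is not reproduced here, this is our approximation of the procedure rather than the authors' exact expression.

```python
import math
from scipy.optimize import minimize_scalar

def lambda_v(lam, tau, m, zeta):
    """Reconstructed virtual count rate: detected rate times acceptance ratio."""
    lambda_d = lam / (1.0 + lam * zeta)        # non-paralyzable dead time
    accept = (1.0 - math.exp(-lam * tau)) / (lam * tau) * math.exp(-lam * (m * tau - zeta))
    return lambda_d * accept

tau, zeta, m = 1.0, 0.4, 1
result = minimize_scalar(lambda lam: -lambda_v(lam, tau, m, zeta),
                         bounds=(1e-6, 20.0), method="bounded")
print("optimal input rate:", result.x, "maximal virtual count rate:", -result.fun)
```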
The accuracy of the overestimation (\(m\tau -\zeta \)) also has a critical effect on maximum achievable rates. This reinforces the importance of choosing mτ close to ζ.
Note that compared to the practice of reducing the \(\lambda _{\text{d}}\) input rate for correlation mitigation, the maximal \(\lambda _{\text{v}}\) virtual count rates provided by our method exceed the typical power limited \(\lambda _{\text{d}}\) input rates (see e.g. the end of Sect. 2.1) as long as \(m\tau -\zeta \) is chosen properly.
4.4 Entropy of the output counts
Due to Algorithm 1, the vDTDs are independent and identically geometrically distributed with \(\Pr(V_{\ell}=n)=p(1-p)^{n}\) probabilities, where \(p = 1- \mathrm {e} ^{-\lambda \tau}\) (cf. Theorem 1). Consequently, the min-entropy of a vDTD is
and its (Shannon) entropy is
The min-entropy of a random variable provides the upper bound of uniform bits that can be extracted from the variable [20] and can never exceed its Shannon entropy, making it a more conservative measure when assessing random number generators. The other main factor determining the achievable raw entropy generation speed is the rate at which measurement samples are obtained. When using Algorithm 1, this rate is the \(\lambda _{\text{v}}\) virtual count rate, as it determines the speed at which Algorithm 1 generates vDTDs. The (min-)entropy rates, defined as the (min-)entropy generated per unit time, are the products of the (min-)entropy per random variable and the rate at which random variables are generated. Their values can be calculated as \(h(V)=\lambda _{\text{v}}\cdot H(V)\) and \(h_{\infty}(V)=\lambda _{\text{v}}\cdot H_{\infty}(V)\), respectively.
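The displayed forms of (42) and (43) are not shown in this copy; the sketch below therefore uses the standard expressions for a geometric variable with parameter \(p=1- \mathrm {e} ^{-\lambda \tau}\): the most probable value is \(V=0\), so the min-entropy is \(-\log _{2} p\), and the Shannon entropy is the binary entropy of p divided by p. The example parameter values, including the \(\lambda _{\text{v}}\) used for the rates, are illustrative only.

```python
import math

def entropies_per_vdtd(lam, tau):
    """Min-entropy and Shannon entropy (bits) of one geometrically distributed vDTD."""
    p = 1.0 - math.exp(-lam * tau)             # Pr(V = 0), the most probable value
    h_min = -math.log2(p)
    h_shannon = (-p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)) / p
    return h_min, h_shannon

lam, tau = 1.0e6, 250e-12                      # illustrative values
lambda_v = 0.8e6                               # hypothetical virtual count rate from Theorem 2
h_min, h_sh = entropies_per_vdtd(lam, tau)
print("min-entropy rate [bit/s]:", lambda_v * h_min)
print("Shannon entropy rate [bit/s]:", lambda_v * h_sh)
```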
4.5 Handling non-constant dead time
The dead time ζ may not be constant in real systems. We also consider the case when ζ is a random variable to model this effect.
4.5.1 Finite support ζ distributions
We first show that the virtual count rate is monotonic in ζ, then provide limits for \(\lambda _{\text{v}}\) assuming finite-support dead time distributions.
Monotonicity of \(\lambda _{\text{v}}\) in ζ
\(\lambda _{\text{v}}\) is monotonic in ζ, since
because \(\lambda > 0\), \(\zeta \geq 0\), and \(\tau > 0\) by definition, which also makes \(\mathrm {e} ^{\lambda \tau}>1\); therefore, Eq. (44) holds true for all valid ζ.
Bounded ζ
For the case of finite-support ζ distributions, we can use the upper bound of the distribution to set m adequately. In contrast, due to the monotonicity in ζ, we can use the lower bound of ζ to calculate the worst-case performance characteristics of Algorithm 1 for the chosen m. More precisely, given an upper bound \(\zeta _{\text{U}}\) and lower bound \(\zeta _{\text{L}}\) for ζ, we can substitute \(\zeta = \zeta _{\text{L}}, m = \mathopen{\lfloor }\zeta _{\text{U}}/\tau \mathclose{\rfloor }+1\) into our previous formulae to get worst-case results in terms of the achievable \(\lambda _{\text{v}}\). Since we set our m overestimation parameter according to \(\zeta _{\text{U}}\), and \(\lambda _{\text{v}}\) is maximal when \(m\tau -\zeta \) is minimal, the constant \(\zeta =\zeta _{\text{U}}\) distribution corresponds to the best case scenario, yielding a maximal \(\lambda _{\text{v}}\) for the given m. Substituting these into Eq. (24), we obtain
This way, even if we do not know the exact value or distribution of ζ, we can still give a lower and upper estimate for the achievable virtual count rates.
4.5.2 Unbounded dead time distributions
For a fixed value of m, a particular sample from an arbitrary ζ distribution can fall into one of two categories: \(A_{1}=\{\zeta \leq m\tau \}\) or \(A_{2}=\{\zeta > m\tau \}\),
where \(A_{1}\) and \(A_{2}\) are mutually exclusive and complete. Due to the law of total probability, the stationary distribution of the vDTDs can be written as
where the first part of the sum corresponds to \(A_{1}\) and the second part to \(A_{2}\). In the case of \(A_{1}\), the corresponding distribution of V is the same as in Sect. 3.1 since \(\zeta \leq m\tau \), and in this case, \(\Pr(V = v \mid {\zeta }\leq m \tau )\) is independent of ζ and equal to (22). In the case of \(A_{2}\), \(\Pr(V = v \mid {\zeta }> m \tau )\) is no longer independent of ζ; therefore, V is no longer ensured to be uncorrelated and may show unwanted correlations. However, the probability of potentially correlated samples is \(\Pr({\zeta }> m\tau )\), and can be adjusted by the choice of m. Larger m values result in a lower sample generation rate, \(\lambda _{v}\), but a lower probability of correlated samples, and the opposite holds for smaller m values. The proper choice of m can set an appropriate trade-off.
5 Measurements and experimental results
We tested Algorithm 1 with the physical setup presented in detail in Ref. [19]. A green semiconductor laser (Thorlabs LP520-SF15) working in CW conditions is the source of photons, with a wavelength of 519.9 nm. After passing through several tunable attenuators to set the desired photon rate, the light is detected by a low-noise photomultiplier (PicoQuant PMA-175 NANO), and its output pulses are time-tagged by a time-to-digital converter (PicoQuant TimeHarp 260). Figure 10 shows the block diagram of the experimental setup.
The maximum photon rate tolerated by the photomultiplier is around 5 Mcps (million counts per second). The highest resolution of the detection system is \(\tau =250\) ps, while the total dead time is reported to be typically around 2 ns. According to our measurement results, while 2 ns can be considered a lower limit for the dead time, there are cases where the system exhibits behaviour corresponding to larger values of ζ. Therefore, we cannot consider ζ to be constant.
At first glance, correlation coefficients predicted by e.g. (8) look negligible for the parameter set we use. However, our previous research showed that even seemingly low correlations between DTDs become noticeable once the samples are used for random bit generation. Earlier, we conducted measurements on the same experimental setup and increased the detection rate to around \(3.72\cdot 10^{6}\) cps. The NIST Statistical Test Suite [21], one of the primary tools of randomness assessment, failed the generated bit sequence on the Runs test at a significance level of 0.01, showing that consecutive bits feature a non-zero correlation [19].
We collected measurement data of \(2 \cdot 10^{9}\) observed photon arrival times with a mean detection rate of \(\lambda _{\text{d}}\approx 1.05 \pm 0.01\) Mcps. Rescaling after accounting for the typical dead time of the system according to (18) results in an input photon rate of \(\lambda = 1.052\) Mcps.
We also created time-binned versions of the original, unbinned measurement data to investigate possible λτ statistics beyond our experimental setup’s range of operational limits. To do so, we used data recorded with the device’s own τ time resolution and created lower resolution versions of the same experiment—as if we used a longer, \({\tau '=K_{\text{b}}\cdot \tau}\) clock period, where \(K_{\text{b}}\) is a positive integer. The binning method is presented in Algorithm 2.
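The listing of Algorithm 2 is not reproduced in this copy. One straightforward way to realize such binning, sketched below under that caveat, is to rebuild the integer time tags from the DTD sequence by cumulative summation and to re-discretize them with the coarser period \(\tau '=K_{\text{b}}\cdot \tau \); the function name is ours.

```python
from itertools import accumulate

def bin_dtds(dtds, k_b):
    """Re-discretize a DTD sequence as if a K_b times longer clock period had been used.

    Cumulative sums of the DTDs rebuild integer time tags (in units of tau); integer
    division by k_b maps them onto the coarser grid (an overall offset of the tags only
    shifts the bin boundaries), and differencing yields the binned DTDs.
    """
    tags = list(accumulate(dtds))
    coarse = [t // k_b for t in tags]
    return [coarse[i] - coarse[i - 1] for i in range(1, len(coarse))]
```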
We obtained additional binned datasets corresponding to \(K_{\text{b}} = 2,\ 5,\ 10,\ 100,\ 1000\). We applied Algorithm 1 to the unbinned and binned raw datasets. We refer to the output of Algorithm 1 as overestimated data.
For the unbinned data (\(K_{\text{b}}=1\)), we set \(m = 1000\) as a safe overestimation parameter (Footnote 6), and \(m' = 500,\ 200,\ 100,\ 10,\ 1\) for the binned data with \(K_{\text{b}} = 2,\ 5,\ 10,\ 100,\ 1000\), respectively, following the rule \(m'=1000/K_{\text{b}}\) (Footnote 7).
We evaluated the raw and overestimated (both unbinned and binned) datasets in the following ways:
1. By calculating the autocorrelation of (v)DTD sequences.
2. By counting single (v)DTD occurrences. As the distribution of values (the histogram) is expected to be geometrically distributed, we fit it to the expected form. We then calculated the goodness of fit and checked the fitting parameters.
3. By counting the relative frequencies of consecutive (v)DTDs’ value pairs. Measured pair statistics are compared to the expected value of the ideal, independent case—calculated as the product of relative frequencies of single (v)DTDs—via hypothesis testing.
The results of the evaluation methods are detailed below.
5.1 Autocorrelation of (v)DTD sequences
First, we calculated the autocorrelation coefficients of every dataset, denoted as \(a_{1}\) and \(a_{1}^{\text{o}}\) for raw and overestimated data, respectively. The unbinned raw dataset shows correlation coefficients in the order of \(10^{-5}\). The half-width of the 95% confidence interval for zero correlation is
for \(2\cdot 10^{9}\) samples, where \(\mathrm{Erf}^{-1}(\cdot )\) is the inverse error function. Obtaining such small correlation coefficients is expected even without overestimation when \(\lambda \tau \ll 1\)—recall that correlations become noticeable as the product increases. Table 1 lists the lag-1 coefficients of raw and overestimated datasets. The only coefficient exceeding \(10^{-4}\) in absolute value is the lag-1 coefficient for the dataset with the largest λτ, using \(K_{\text{b}}=1000\), which shows a significant and sudden increase, leaping above \(10^{-3}\) in magnitude.
After overestimation, lag-1 coefficients remained in the order of \(10^{-5}\), within the 95% confidence interval for zero correlation—even without considering the slight growth of the confidence interval due to the reduced number of samples in the overestimated datasets (Footnote 8). All of the overestimated sequences show lower magnitude autocorrelation coefficients than their unprocessed counterparts. The difference is most notable for the sequence with binning parameter 1000, which was originally heavily correlated. When overestimated, the sequence performs significantly better. Note that sequences have similar values after being passed through the algorithm—this is expected since all of them are discretized from the same realization of the underlying PPP, and all use the same overestimation parameter after adjusting for dead time, \(m'\cdot K_{\text{b}}\).
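For reference, the confidence bound used above can be reproduced with a few lines of code; the sketch below assumes SciPy's erfinv and our own function name, and the lag-1 coefficients themselves can be computed as in the simulation sketch of Sect. 4.1.

```python
from math import sqrt
from scipy.special import erfinv

def zero_correlation_halfwidth(n_samples, confidence=0.95):
    """Half-width of the confidence interval for zero correlation: sqrt(2)*erfinv(c)/sqrt(n)."""
    return sqrt(2.0) * erfinv(confidence) / sqrt(n_samples)

print(zero_correlation_halfwidth(2e9))         # about 4.4e-5, i.e. the 1.96 / sqrt(n) rule
```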
5.2 Frequencies of (v)DTD values
Histograms show an even more noticeable contrast between the raw and overestimated cases. We fit the function \({y = A\cdot \mathrm {e} ^{-Ax} + C}\) to the histogram data using the least squares method (Footnote 9). Ideally, fitting would yield \(A=\lambda \tau '\) and \(C=0\)—note that this is a discretized version of the exponential probability density function \({f_{T}(t)=\chi _{\{t\geq 0\}}\lambda \cdot \mathrm {e} ^{-\lambda t}}\) (Footnote 10). The histograms and results of the fitting are shown in Fig. 11. Histograms show deviations from a geometric distribution for the raw datasets, noticeable even by visual inspection, while overestimated datasets do not. The fitting error statistics of overestimated datasets are at least 3 orders of magnitude better compared to their raw counterparts, both in the case of mean square errors (MSEs) and coefficient of determination parameters (\(R^{2}\); perfect fit is \(R^{2} = 1\)). The resulting A parameters for the overestimated data are also in agreement with the expected \(\lambda \tau '\) values (Footnote 11), although slightly larger. This is most probably because the expected \(\lambda \tau '\) values were calculated with the datasheet dead time value of 2 ns, but in reality, the actual dead-time-like imperfections of the measurement setup caused a larger reduction of the effective rate than what the constant \(\zeta =2\) ns correction accounted for. The fitting results are summarised in Tables 2 and 3.
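Footnote 9 names SciPy's curve_fit; a sketch of the fitting step is given below with illustrative placeholder histogram data, since the measured histograms themselves are not reproduced here, and with our own variable names.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, c):
    """Discretized exponential model y = A * exp(-A * x) + C fitted to the histogram."""
    return a * np.exp(-a * x) + c

# relative frequencies of (v)DTD values 0, 1, ..., 19 (placeholder data for illustration)
values = np.arange(20)
frequencies = 0.2 * np.exp(-0.2 * values) + np.random.default_rng(0).normal(0.0, 1e-4, 20)

popt, _ = curve_fit(model, values, frequencies, p0=[0.2, 0.0], maxfev=100_000)
a_fit, c_fit = popt
residuals = frequencies - model(values, a_fit, c_fit)
mse = float(np.mean(residuals ** 2))
r_squared = 1.0 - float(np.sum(residuals ** 2) / np.sum((frequencies - frequencies.mean()) ** 2))
print(a_fit, c_fit, mse, r_squared)
```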
5.3 Frequencies of successive (v)DTD pair values
If the individual (v)DTDs are independent, then the joint probabilities satisfy
We can use this for hypothesis testing, where our null hypothesis is that the tested data is from an ideal binomial trial with a probability given by (47), and gather evidence trying to refute this (Footnote 12). We applied binomial statistical tests on each of the \(\{D_{i}=k,D_{i+1}=\ell \}\) and \(\{V_{i}=k,V_{i+1}=\ell \}\) pair statistics for \(k,\ell \in \{0,1,\ldots ,19\}\), yielding a p-value for each of the 400 pairs to investigate possible deviations from the expected distribution in the case of consecutive detections.
We set the target of the comprehensive significance level per dataset to 0.01. Since we are looking only at the most extreme p-values, we used the Bonferroni correction (due to the multiple comparisons problem) [22] to get individual significance levels of \(2.5\cdot 10^{-5}\) that we then compare to each of the 400 p-values. If any p-value is lower than the individual significance level, then the whole dataset fails at the comprehensive significance level.
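A sketch of this pair-statistics test is given below; it assumes SciPy's binomtest, uses our own function names, and estimates the per-pair expected probability as the product of the single-value relative frequencies, with the Bonferroni-corrected threshold obtained by dividing the comprehensive level by the number of tested pairs.

```python
from collections import Counter
from scipy.stats import binomtest

def pair_test_pvalues(samples, max_value=19):
    """Binomial-test p-values for all (k, l) pairs of consecutive (v)DTD values."""
    n_pairs = len(samples) - 1
    singles = Counter(samples)
    pairs = Counter(zip(samples[:-1], samples[1:]))
    pvalues = {}
    for k in range(max_value + 1):
        for l in range(max_value + 1):
            expected_p = (singles[k] / len(samples)) * (singles[l] / len(samples))
            pvalues[(k, l)] = binomtest(pairs[(k, l)], n_pairs, expected_p).pvalue
    return pvalues

def passes(pvalues, comprehensive_level=0.01):
    """Bonferroni-corrected pass/fail decision over all tested pairs."""
    return min(pvalues.values()) >= comprehensive_level / len(pvalues)
```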
The results of the statistical tests show a clear contrast between the raw and the overestimated data in favour of the latter. The raw data scored minimum p-values of \(1.6\cdot 10^{-5}\) without binning (\(K_{\text{b}}=1\)), and \({5.9\cdot 10^{-7},}\ {3.4\cdot 10^{-13},}\ 9.2\cdot 10^{-31}\), 0 and 0 for binned sets (\(K_{\text{b}}=2,\ 5,\ 10,\ 100,\ 1000\), respectively), which are orders of magnitude under the individual significance level and, therefore, fail the test. The minima of p-values obtained for overestimated datasets range from \(6 \cdot 10^{-4}\) (\(K_{\text{b}} = 10\)) to 0.01 (\(K_{ \text{b}} = 1000\)), which, unlike results from the raw data, are all above the individual significance level, passing the test.
5.4 Further measurement results
The statistical tests showed that the overestimation algorithm can transform distorted distributions into distributions very close to exponential/geometric. Newly measured datasets with detected photon rates of ∼400, ∼600, and ∼800 kcps were also evaluated with the previously presented methodology, yielding similar results and confirming the benefits of the method.
To stress the potentially disadvantageous effect of correlations in measured DTDs, we also utilized the simple bit generation algorithm presented in Ref. [9] and tested the resulting bit sequences with the NIST STS statistical test suite [21]. The sequences with higher λτ values had failing results for some of the test cases, while bit sequences created from the vDTDs passed all the test cases.
We also calculated the experimental ratio of measured input count rates to the virtual count rates achieved by Algorithm 1. We note that we only have measurement data available corresponding to low values (∼10−4) of λτ, but the experimental results all stay within the bounds given by (45), using \(\zeta _{\text{L}} = 10\tau \) and \(\zeta _{\text{U}} = 999\tau \). The experimental output/input rates of Algorithm 1 range from 0.774 (for ∼1 Mcps input rate) to 0.906 (for ∼400 kcps input rate), which is a tolerable performance loss for eliminating the correlations within the generated DTD series.
6 Conclusion
We have introduced an algorithm to eliminate the dependencies between the bits generated by single-photon detecting QRNG schemes. Compared to reducing the input optical power to limit operation to a regime with low correlations, our approach also allows generator operation in parameter regimes with higher input rates, potentially facilitating improved bit generation rates. The proposed procedure constructs a purely geometric distribution obtained from the discretized measurements of the underlying arrival process by overestimating the insensitive period after registered photon detections. The algorithm avoids correlations between successive time samples by discarding a period used for overestimation, which contains a random component depending on the arrival of photons with respect to the underlying time resolution grid. This virtually realizes the ideal case of no dead time and zero starting phase, yielding geometrically distributed virtual discretized time differences (similarly to a restartable measurement clock without dead time), preserving the memoryless property of the exponentially distributed physical process. Dead time overestimation features a slight compromise by reducing the output rate of detections used for bit generation.
The validity of our analytic results regarding the algorithm’s theoretical soundness and performance metrics is supported by both computer simulations and measurements conducted on an experimental setup. The algorithm has low complexity, making it convenient to implement in random number generators where it is desirable to work with uncorrelated time samples before bit assignment or to harness randomness from an exponential/geometric distribution. Although we evaluated our algorithm’s performance on collected datasets, its low complexity also makes it easy to implement in continuous operation modes. Depending on the focal points of the actual QRNG scheme, the benefits of dead time overestimation can largely exceed the disadvantages of a decreased effective count rate.
Data Availability
No datasets were generated or analysed during the current study.
Notes
Here we have used the fact that the \(T_{i}\) times elapsed between events of the PPP are exponentially distributed, with a cumulative distribution function \({F_{T}(t)=\Pr(T< t)=\chi _{\{t\geq 0\}}(1- \mathrm {e} ^{-\lambda t})}\).
This statement is valid until the global minimum is reached at \(\lambda \tau =3.5749\); however, values of \(\lambda \tau >1\) are impractical. They represent a domain in which, on average, more than one photon arrives within a clock period. This practically means a good-quality SPD with high photon rate tolerance connected to a low-resolution TDC. This domain is irrelevant in the present discussion.
The power reduction approach is disadvantageous even in terms of the achievable min-entropy rate, as the maximum of the min-entropy per unit time often lies in a parameter regime corresponding to a higher λτ product than what the power reduction approach would still allow. See Sect. 4.4 for the discussion about entropy rates.
Although pseudorandom number generators cannot provide truly random numbers, the output they produce is still suitable for initial investigations, as this output is expected to mimic the statistical properties of truly random sequences, without the indeterministic features.
The relative error is defined as the difference in percentage between the approximate and theoretical values when the latter is taken to be 100%.
Examining the measurement data, we conclude that \(\zeta < 1000\tau \) with high enough certainty that this choice of m can be considered safe, faithfully overestimating the dead time.
The binning algorithm rescales the necessary overestimation parameter by \(1/K_{\text{b}}\), as the dead time of the underlying process is unchanged. If \(\zeta < m\tau \), then \({\zeta <(m/K_{\text{b}})\cdot (K_{\text{b}}\cdot \tau )}\) holds trivially. The choice of \(m'=m/K_{\text{b}}\) yields a comparable dataset to the unbinned set overestimated by m; using the original overestimation parameter for the binned sequence would result in a greatly reduced \(\lambda _{\text{v}}\).
E.g., for the shortest dataset (\(K_{\text{b}} = 1000, m'=1\)) with \(1.37 \cdot 10^{9}\) samples, the magnitude of the 95% confidence interval increases to \(\sqrt{2}\cdot \mathrm{Erf}^{-1}(0.95)/\sqrt{1.37 \cdot 10^{9}}=1.96/\sqrt{1.37 \cdot 10^{9}}=5.29 \cdot 10^{-5}\).
We utilized the SciPy Python library’s “curve_fit” method with initial guesses determined by the expected \(\lambda \tau '\) parameter, and \(10^{5}\) maximum evaluations.
As shown in Eq. (41) and Ref. [16], sampling exponentially distributed time intervals with parameter λ—using a restartable clock with resolution τ and no dead time—yields geometrically distributed samples. Thus, an equivalent exponential fit is also a valid substitute for this geometric fit. The additional C parameter is introduced because we only considered data in the histograms corresponding to the first part of the distribution that fits into the predetermined amount of histogram bins.
For the \(K_{\text{b}} = 100\) and \(K_{\text{b}} = 1000\) cases, larger deviations of the fit parameters are expected due to smaller sample sizes (since the number of histogram bins was also scaled with \(K_{\text{b}}\) for comparability of results) and the higher impact of the C fitting parameter.
Successful rejection of the null hypothesis constitutes a test failure.
Abbreviations
- CDF: cumulative distribution function
- cps: count(s) per second
- CW: continuous-wave
- DTD: discretized time difference
- MSE: mean square error
- NIST: National Institute of Standards and Technology
- PC: personal computer
- PMT: photomultiplier tube
- PPP: Poisson point process
- SCV: squared coefficient of variation
- SPD: single-photon detector
- TDC: time-to-digital converter
- ToA: time-of-arrival
- QRNG: quantum random number generator/generation
- vDTD: virtual discretized time difference
- VOA: variable optical attenuator
References
Dodis Y, Ong SJ, Prabhakaran M, Sahai A. On the (im)possibility of cryptography with imperfect randomness. In: 45th annual IEEE symposium on foundations of computer science. New York: IEEE Press; 2004. p. 196–205. https://doi.org/10.1109/FOCS.2004.44.
Gyöngyösi L, Bacsardi L, Imre S. A survey on quantum key distribution. Infocommun J. 2019;11(2):14–21. https://doi.org/10.36244/ICJ.2019.2.2.
Herrero-Collantes M, García-Escartín JC. Quantum random number generators. Rev Mod Phys. 2017;89(1):015004. https://doi.org/10.1103/RevModPhys.89.015004.
Mannalatha V, Mishra S, Pathak A. A comprehensive review of quantum random number generators: concepts, classification and the origin of randomness. Quantum Inf Process. 2023;22(12):439.
Jennewein T, Achleitner U, Weihs G, Weinfurter H, Zeilinger A. A fast and compact quantum random number generator. Rev Sci Instrum. 2000;71(4):1675–80. https://doi.org/10.1063/1.1150518.
Stefanov A, Gisin N, Guinnard O, Guinnard L, Zbinden H. Optical quantum random number generator. J Mod Opt. 2000;47(4):595–8. https://doi.org/10.1080/09500340008233380.
Fürst H, Weier H, Nauerth S, Marangon DG, Kurtsiefer C, Weinfurter H. High speed optical quantum random number generation. Opt Express. 2010;18(12):13029–37. https://doi.org/10.1364/OE.18.013029.
Gras G, Martin A, Choi JW, Bussières F. Quantum entropy model of an integrated quantum-random-number-generator chip. Phys Rev Appl. 2021;15(5):054048. https://doi.org/10.1103/physrevapplied.15.054048.
Stipčević M, Rogina BM. Quantum random number generator based on photonic emission in semiconductors. Rev Sci Instrum. 2007;78(4):045104. https://doi.org/10.1063/1.2720728.
Wahl M, Leifgen M, Berlin M, Röhlicke T, Rahn HJ, Benson O. An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements. Appl Phys Lett. 2011;98(17):171105. https://doi.org/10.1063/1.3578456.
Massari N, Tontini A, Parmesan L, Perenzoni M, Gruijć M, Verbauwhede I, et al.. A monolithic SPAD-based random number generator for cryptographic application. In: ESSCIRC 2022- IEEE 48th European solid state circuits conference (ESSCIRC). New York: IEEE Press; 2022. p. 73–6. https://doi.org/10.1109/ESSCIRC55480.2022.9911498.
Lei W, Xie Z, Li Y, Fang J, Shen W. An 8.4 Gbps real-time quantum random number generator based on quantum phase fluctuation. Quantum Inf Process. 2020;19(11):405. https://doi.org/10.1007/s11128-020-02896-y.
Williams CR, et al.. Fast physical random number generator using amplified spontaneous emission. Opt Express. 2010;18(23):23584–97. https://doi.org/10.1364/OE.18.023584.
Bustard PJ, Moffatt D, Lausten R, Wu G, Walmsley IA, Sussman BJ. Quantum random bit generation using stimulated Raman scattering. Opt Express. 2011;19(25):25173. https://doi.org/10.1364/oe.19.025173.
Schranz Á, Solymos B, Telek M. Stochastic performance analysis of a time-of-arrival quantum random number generator. IET Quantum Commun. 2024;5(2):140–56. https://doi.org/10.1049/qtc2.12080.
Schranz Á, Udvary E. Mathematical analysis of a quantum random number generator based on the time difference between photon detections. Opt Eng. 2020;59(4):044104. https://doi.org/10.1117/1.OE.59.4.044104.
Müller JW. Generalized dead times. Nucl Instrum Methods Phys Res, Sect A, Accel Spectrom Detect Assoc Equip. 1991;301(3):543–51. https://doi.org/10.1016/0168-9002(91)90021-H.
Glauber RJ. Coherent and incoherent states of the radiation field. Phys Rev. 1963;131(6):2766–88. https://doi.org/10.1103/PhysRev.131.2766.
Schranz Á. Optical solutions for quantum key distribution transmitters [Ph.D. dissertation]. Budapest University of Technology and Economics; 2021. http://hdl.handle.net/10890/16991.
Konig R, Renner R, Schaffner C. The operational meaning of min- and max-entropy. IEEE Trans Inf Theory. 2009;55(9):4337–47. https://doi.org/10.1109/tit.2009.2025545.
Rukhin AL, et al.. A statistical test suite for random and pseudorandom number generators for cryptographic applications. Gaithersburg: National Institute of Standards & Technology; 2010. https://doi.org/10.6028/nist.sp.800-22. Spec. Pub. 800-22, Rev. 1a.
Dunn OJ. Multiple comparisons among means. J Am Stat Assoc. 1961;56(293):52–64. https://doi.org/10.1080/01621459.1961.10482090.
Acknowledgements
Not applicable.
Funding
M.T. was supported by the OTKA K-138208 project of the Hungarian Scientific Research Fund. B.S. was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004). Á.S. was supported by the OTKA K-142845 project of the Hungarian Scientific Research Fund and also received funding from the European Union under grant agreement No. 101081247 (QCIHungary project), which has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund.
Author information
Contributions
B.S. provided the original concept of the dead time overestimating algorithm and conducted simulations and measurements. M.T. implemented the scheme and obtained results regarding the performance indices in Mathematica. Á.S. assembled the physical measurement setup. All three authors contributed to developing the theory and writing and proofreading the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Solymos, B., Schranz, Á. & Telek, M. Correlation avoidance in single-photon detecting quantum random number generators by dead time overestimation. EPJ Quantum Technol. 11, 60 (2024). https://doi.org/10.1140/epjqt/s40507-024-00272-8