Figure 8 | EPJ Quantum Technology

From: A coherent perceptron for all-optical learning

The perceptron’s error rate as a function of the difficulty of the classification task and of the parameters that determine the learning rate. In part (a) we compare the unoptimized performance of the perceptron circuit (red diamonds) with the optimal performance bound (solid green line) and with a Gaussian discriminant analysis (GDA) classifier (blue ×’s) trained on the same number of training examples. Both the perceptron and the GDA results are averaged over 100 trials at each cluster separation; the transparent envelopes indicate the sample standard deviation. The black dots show the perceptron’s performance when simulated without shot noise, demonstrating that shot noise has very little effect.

In part (b) we plot the average error rate (over 50 trials) at fixed cluster separation \(\| \mu_{1} - \mu_{0} \|_{2} / \sigma = 2\) for various values of the time interval Δt over which each data sample is presented to the circuit and of the strength α of the training feedback. The total number of feedback photons per sample, \(N_{fb} = |\alpha|^{2} \Delta t\), is constant along the faint dashed lines, with its value indicated on the right. A good choice of parameters combines low feedback power (small \(|\alpha|^{2}\)) with a high input rate (short sample time Δt) while still achieving a low classification error rate. The × marks the parameters used for the results in (a) and the previous figures.
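The trade-off along the dashed lines of constant photon budget can be sketched numerically. This is an illustrative example only (the function and parameter names are not from the article): for a fixed \(N_{fb} = |\alpha|^{2} \Delta t\), halving the sample time Δt requires doubling the feedback power \(|\alpha|^{2}\).

```python
# Illustrative sketch (names are hypothetical, not from the article):
# total feedback photons per training sample, N_fb = |alpha|^2 * dt.
def feedback_photons(alpha_sq: float, dt: float) -> float:
    """Photons delivered per sample at feedback power |alpha|^2 over interval dt."""
    return alpha_sq * dt

# For a fixed photon budget, feedback power and sample time trade off inversely:
# a faster input rate (smaller dt) demands proportionally stronger feedback.
n_fb = 100.0
for dt in (1.0, 0.5, 0.1):
    alpha_sq = n_fb / dt
    assert feedback_photons(alpha_sq, dt) == n_fb
```

Points in part (b) that lie on the same dashed line therefore spend the same number of feedback photons per sample, even though their feedback power and presentation time differ.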
