# Making Machine Learning for Quantum Dots a Reality

### Syed Adil Rab

Quantum Computing Lead, Cogisen

March 20th 2019 – In the last six months I have attended two key events in the quantum computing space: the “2018 Conference on the Theory of Quantum Computation, Communication and Cryptography” in Sydney and “Quantum for Business” in Silicon Valley. It is clear that attendees at both events share my excitement about, and belief in, the potential of quantum computing. Encouragingly, I did not get the sense that delegates were getting carried away by the hype. Drawn from academia and the worlds of venture capital, start-ups and major technology companies, they showed a level of pragmatism about possible timelines for quantum computers to overtake classical computing.

Everyone understood that, to be successful, quantum computing must bring together a complete solution made up of hardware, software and the applications that run on the platform. Clearly, this process is more advanced in some sectors than others; for example, cryptography is often cited as a likely first use case for quantum systems. There was general acceptance, though, that significant challenges lie ahead, none more so than noise and scalability.

One area where there is yet to be consensus, never mind a common standard, is the hardware front. This is particularly relevant to Cogisen and the work we are undertaking at our research centre, the Cognitive Modelling Laboratory. It was very apparent at the “Quantum for Business” conference that a number of solutions are under discussion, including superconducting circuits, NMR, Majorana fermions, fractional quantum Hall (FQH) states, trapped ions and quantum dots. We are particularly interested in the last of these options, because it holds many of the key features needed for practical quantum computing.

For large-scale production of quantum dots, one of the biggest technological limitations is the complexity of controlling large arrays, given the multi-dimensional voltage space describing the system. This is a critical hindrance, because the electrostatic confinement varies considerably from one device to another and over time, due to temperature changes or wear. As a result, quantum dot machines suffer from optimisation and calibration limitations when attempting to scale the number of qubits, because the number of parameters and correlations grows significantly with the size of the system.
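To get a feel for why this voltage space becomes unmanageable, consider a naive grid search over gate voltages. The resolution and gate counts below are purely illustrative, not figures from any real device:

```python
# Naive grid search over gate voltages: the number of settings to probe
# grows exponentially with the number of gate electrodes.
points_per_axis = 100  # illustrative voltage resolution per gate

for n_gates in [2, 4, 8]:
    settings = points_per_axis ** n_gates
    print(f"{n_gates} gates -> {settings:.1e} voltage combinations")
```

Even at a coarse 100 points per gate, eight gates already give 10^16 combinations, which is why exhaustive manual tuning stops being an option well before arrays reach useful sizes.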

It therefore becomes essential to use machine learning to automate the scaling process for quantum dot applications. Machine learning has the potential to become the technique of choice for the autonomous classification of states and the self-tuning of hardware. This is critical because controlling large-scale arrays of quantum dots currently still requires direct human intervention. If quantum dot hardware is to be successful, control and auto-tuning are essential, both for larger-scale fabrication and for the practical use of quantum dot devices. Machine learning will allow real-time control and testing of devices, and do so on feasible time scales.

Until now this has largely been a theoretical study, but at Cogisen we have been working on a novel machine learning algorithm based on data mapping, which we believe will be an effective tool for the classification of quantum dot systems. Our approach is not based on neural networks; rather, it maps raw data into a space where the information is linearised and can be extracted with dramatically fewer optimisation parameters. One reason this approach is successful is that it can extract information from temporal data and can be applied to higher-dimensional representations. This means our algorithm can transform raw data into feature maps better suited to machine learning, and it is agnostic to the type of data analysed.
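Cogisen's actual mapping is not public, but the general idea — a fixed transform into a space where class information becomes linear, followed by a sparse selection and a plain linear model — can be sketched on synthetic data. Everything below (the traces, the FFT-based mapping, the sizes) is an invented stand-in, not the company's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two classes of noisy 1-D traces whose
# distinguishing structure sits at different frequencies.
def make_trace(cls, n=256):
    t = np.linspace(0.0, 1.0, n)
    freq = 5 if cls == 0 else 12          # class-dependent structure
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

# Fixed data mapping: move raw traces into a space (here, the power
# spectrum) where class information is linear, then keep only a sparse
# subset of the transformed values.
def feature_map(trace, k=20):
    power = np.abs(np.fft.rfft(trace)) ** 2
    return power[:k]

X = np.array([feature_map(make_trace(c)) for c in (0, 1) for _ in range(100)])
y = np.array([c for c in (0, 1) for _ in range(100)])

# A plain linear model on the mapped features: least squares, no kernel,
# no hidden layers -- just k + 1 parameters.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
acc = ((A @ w > 0).astype(int) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is the shape of the pipeline: once the transform has linearised the information, a handful of selected coefficients and a least-squares fit are enough, with no iterative deep-learning training loop.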

We are confident in the strength of our technology because we tested it against publicly available data of simulated quantum dots from the National Institute of Standards and Technology (NIST). This dataset consists of 1,000 experiments obtained by simulating a nanowire containing two quantum dots. We trained our algorithm on 200 randomly picked experiments from the data. The resulting model was tested on the remaining 800 experiments and achieved an average accuracy of 97.86%, in some cases reaching up to 99.9%. The training set size and sparsity show that our approach accepts a reduced input space while maintaining comparable average accuracy. Qualitatively, we are confident it works because it extracts the full information from the dataset by selecting and recombining only a sparse subset of the raw data, without any loss of accuracy or generality: only a portion of the power spectra is taken into account and recombined into a linear model.
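The evaluation protocol described above — 200 randomly drawn training experiments out of 1,000, with the other 800 held out — can be sketched generically. The dataset and model below are placeholders, not the NIST simulations or the Cogisen algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder dataset: 1,000 "experiments", each a feature vector with
# a binary label, standing in for the simulated double-dot data.
n_total, n_train, n_feat = 1000, 200, 16
means = rng.standard_normal((2, n_feat))
labels = rng.integers(0, 2, n_total)
data = means[labels] + 0.5 * rng.standard_normal((n_total, n_feat))

# Randomly pick 200 experiments for training; hold out the other 800.
perm = rng.permutation(n_total)
train, test = perm[:n_train], perm[n_train:]

# Simple linear least-squares model in place of the proprietary mapping.
A = np.hstack([data, np.ones((n_total, 1))])
w, *_ = np.linalg.lstsq(A[train], 2.0 * labels[train] - 1.0, rcond=None)

test_acc = ((A[test] @ w > 0).astype(int) == labels[test]).mean()
print(f"held-out accuracy on {len(test)} experiments: {test_acc:.4f}")
```

Evaluating on the 800 unseen experiments, rather than the training set, is what makes the quoted 97.86% figure a meaningful measure of generalisation rather than memorisation.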

A major advantage of this approach also comes from the number of parameters and multiply-add operations (MAdds) needed for training compared to convolutional neural networks (CNNs). Our technology requires 99.98% fewer parameters than an average CNN and 98.77% fewer MAdds. The lightness and speed of our algorithm mean a smaller parameter space to optimise during training and fewer operations in the optimisation compared to a CNN. There is no need for a kernel, as the tuning of the parameters is done directly through the transformation of the data. Training can be adapted to changes in the dataset with much less effort, rendering the Cogisen algorithm data-type agnostic, and there are no hidden layers as in a neural network. Consequently, the models created can be used for efficient real-time auto-tuning of quantum dot states, speeding up research on these devices.
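To make the parameter-count gap concrete, compare a linear model on k selected features with a small CNN. The CNN sizes below are hypothetical examples chosen only to show how a reduction of this order of magnitude arises; they are not the networks behind the 99.98% figure:

```python
# Linear model on k selected features: k weights + 1 bias, nothing else.
k = 20
linear_params = k + 1

# Hypothetical small CNN (sizes are illustrative, not from the article):
conv1 = 3 * 3 * 1 * 16 + 16     # 3x3 conv, 1 -> 16 channels, + biases
conv2 = 3 * 3 * 16 * 32 + 32    # 3x3 conv, 16 -> 32 channels, + biases
dense = (16 * 16 * 32) * 2 + 2  # dense layer from pooled maps to 2 classes
cnn_params = conv1 + conv2 + dense

reduction = 1 - linear_params / cnn_params
print(f"linear: {linear_params}, CNN: {cnn_params}, "
      f"reduction: {reduction:.2%}")
```

Even this toy CNN carries tens of thousands of parameters against the linear model's 21, so a reduction in the 99%+ range follows directly from dropping kernels and hidden layers.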

We believe these results pave the way for practical applications of quantum dots, because they demonstrate a methodology for automating these systems. Extending Cogisen’s algorithm to higher dimensions will not require any fundamental rewriting, so we are now planning to study more complex systems. We aim to test our algorithm on datasets with a much larger number of quantum dots, where control over the large number of parameters has traditionally hindered the scalability of practical quantum dot devices. While we do not believe quantum dots are the only answer to the hardware questions around quantum computing, we are very excited about the breakthrough we have made and plan to conduct much larger tests in the next year. Watch this space for more updates!

*Syed Adil Rab is the Quantum Computing Lead at Cogisen. He holds a Doctorate in Quantum Information and Optics from Sapienza Università di Roma.*
