TY - JOUR
T1 - Theoretical bounds on data requirements for the ray-based classification
JF - SN Comput. Sci.
Y1 - 2022
A1 - Brian J. Weber
A1 - Sandesh S. Kalantre
A1 - Thomas McJunkin
A1 - J. M. Taylor
A1 - Justyna P. Zwolak
AB - The problem of classifying high-dimensional shapes in real-world data grows in complexity as the dimension of the space increases. For the case of identifying convex shapes of different geometries, a new classification framework has recently been proposed in which the intersections of a set of one-dimensional representations, called rays, with the boundaries of the shape are used to identify the specific geometry. This ray-based classification (RBC) has been empirically verified using a synthetic dataset of two- and three-dimensional shapes [1] and, more recently, has also been validated experimentally [2]. Here, we establish a bound on the number of rays necessary for shape classification, defined by key angular metrics, for arbitrary convex shapes. For two dimensions, we derive a lower bound on the number of rays in terms of the shape's length, diameter, and exterior angles. For convex polytopes in R^N, we generalize this result to a similar bound given as a function of the dihedral angle and the geometrical parameters of polygonal faces. This result enables a different approach for estimating high-dimensional shapes using substantially fewer data elements than volumetric or surface-based approaches.

VL - 3
UR - https://arxiv.org/abs/2103.09577
CP - 57
U5 - https://doi.org/10.1007/s42979-021-00921-0
ER -

TY - JOUR
T1 - Toward Robust Autotuning of Noisy Quantum Dot Devices
JF - Physical Review Applied
Y1 - 2022
A1 - Joshua Ziegler
A1 - Thomas McJunkin
A1 - E.S. Joseph
A1 - Sandesh S. Kalantre
A1 - Benjamin Harpt
A1 - D.E. Savage
A1 - M.G. Lagally
A1 - M.A. Eriksson
A1 - Jacob M. Taylor
A1 - Justyna P. Zwolak
AB - The current autotuning approaches for quantum dot (QD) devices, while showing some success, lack an assessment of data reliability. This leads to unexpected failures when noisy or otherwise low-quality data are processed by an autonomous system. In this work, we propose a framework for robust autotuning of QD devices that combines a machine learning (ML) state classifier with a data quality control module. The data quality control module acts as a "gatekeeper" system, ensuring that only reliable data are processed by the state classifier. Lower data quality results in either device recalibration or termination. To train both ML systems, we enhance the QD simulation by incorporating synthetic noise typical of QD experiments. We confirm that the inclusion of synthetic noise in the training of the state classifier significantly improves the performance, resulting in an accuracy of 95.0(9) % when tested on experimental data. We then validate the functionality of the data quality control module by showing that the state classifier performance deteriorates with decreasing data quality, as expected. Our results establish a robust and flexible ML framework for autonomous tuning of noisy QD devices.

VL - 17
UR - https://arxiv.org/abs/2108.00043
U5 - https://doi.org/10.1103/PhysRevApplied.17.024069
ER -

TY - JOUR
T1 - Ray-based framework for state identification in quantum dot devices
JF - PRX Quantum
Y1 - 2021
A1 - Justyna P. Zwolak
A1 - Thomas McJunkin
A1 - Sandesh S. Kalantre
A1 - Samuel F. Neyens
A1 - E. R. MacQuarrie
A1 - Mark A. Eriksson
A1 - J. M. Taylor
AB - Quantum dots (QDs) defined with electrostatic gates are a leading platform for a scalable quantum computing implementation. However, with increasing numbers of qubits, the complexity of the control parameter space also grows. Traditional measurement techniques, relying on complete or near-complete exploration via two-parameter scans (images) of the device response, quickly become impractical with increasing numbers of gates. Here, we propose to circumvent this challenge by introducing a measurement technique relying on one-dimensional projections of the device response in the multi-dimensional parameter space. We use this machine learning (ML) approach, dubbed the ray-based classification (RBC) framework, to implement a classifier for QD states, enabling automated recognition of qubit-relevant parameter regimes. We show that RBC surpasses the 82 % accuracy benchmark from the experimental implementation of image-based classification techniques from prior work while cutting down the number of measurement points needed by up to 70 %. The reduction in measurement cost is a significant gain for time-intensive QD measurements and is a step toward the scalability of these devices. We also discuss how the RBC-based optimizer, which tunes the device to a multi-qubit regime, performs when tuning in the two- and three-dimensional parameter spaces defined by the plunger and barrier gates that control the dots. This work provides experimental validation of both efficient state identification and optimization with ML techniques for non-traditional measurements in quantum systems with high-dimensional parameter spaces and time-intensive measurements.

VL - 2
UR - https://arxiv.org/abs/2102.11784
CP - 020335
U5 - https://doi.org/10.1103/PRXQuantum.2.020335
ER -

TY - JOUR
T1 - Auto-tuning of double dot devices in situ with machine learning
JF - Phys. Rev. Applied
Y1 - 2020
A1 - Justyna P. Zwolak
A1 - Thomas McJunkin
A1 - Sandesh S. Kalantre
A1 - J. P. Dodson
A1 - E. R. MacQuarrie
A1 - D. E. Savage
A1 - M. G. Lagally
A1 - S. N. Coppersmith
A1 - Mark A. Eriksson
A1 - J. M. Taylor
AB - There are myriad quantum computing approaches, each with its own set of challenges for understanding and effectively controlling its operation. Confining electrons in arrays of semiconductor nanostructures, called quantum dots (QDs), is one such approach. The easy access to control parameters, fast measurements, long qubit lifetimes, and the potential for scalability make QDs especially attractive. However, as the size of the QD array grows, so does the number of parameters needed for control and thus the tuning complexity. The current practice of manually tuning the qubits is a relatively time-consuming procedure and is inherently impractical for scaling up and for applications. In this work, we report on the in situ implementation of an auto-tuning protocol proposed by Kalantre et al. [arXiv:1712.04914]. In particular, we discuss how to establish a seamless communication protocol between a machine learning (ML)-based auto-tuner and the experimental apparatus. We then show that an ML algorithm trained exclusively on synthetic data coming from a physical model to quantitatively classify the state of the QD device, combined with an optimization routine, can be used to replace manual tuning of gate voltages in devices. A success rate of over 85 % is determined for tuning to a double quantum dot regime when at least one of the plunger gates is initialized sufficiently close to the desired state. Modifications to the training network, fitness function, and optimizer are discussed as a path toward further improvement in the success rate when starting both near and far detuned from the target double-dot range.

VL - 13
UR - https://arxiv.org/abs/1909.08030
CP - 034075
U5 - https://doi.org/10.1103/PhysRevApplied.13.034075
ER -

TY - JOUR
T1 - Ray-based classification framework for high-dimensional data
JF - Proceedings of the Machine Learning and the Physical Sciences Workshop at NeurIPS 2020, Vancouver, Canada
Y1 - 2020
A1 - Justyna P. Zwolak
A1 - Sandesh S. Kalantre
A1 - Thomas McJunkin
A1 - Brian J. Weber
A1 - J. M. Taylor
AB - While classification of arbitrary structures in high dimensions may require complete quantitative information, for simple geometrical structures, low-dimensional qualitative information about the boundaries defining the structures can suffice. Rather than using dense, multi-dimensional data, we propose a deep neural network (DNN) classification framework that utilizes a minimal collection of one-dimensional representations, called "rays", to construct the "fingerprint" of the structure(s) based on substantially reduced information. We empirically study this framework using a synthetic dataset of double and triple quantum dot devices and apply it to the classification problem of identifying the device state. We show that the performance of the ray-based classifier is already on par with that of traditional 2D images for low-dimensional systems, while significantly cutting down the data acquisition cost.

UR - https://arxiv.org/abs/2010.00500
ER -

TY - JOUR
T1 - QFlow lite dataset: A machine-learning approach to the charge states in quantum dot experiments
JF - PLOS ONE
Y1 - 2018
A1 - Justyna P. Zwolak
A1 - Sandesh S. Kalantre
A1 - Xingyao Wu
A1 - Stephen Ragole
A1 - J. M. Taylor
AB - Over the past decade, machine learning techniques have revolutionized how research is done, from designing new materials and predicting their properties to assisting drug discovery to advancing cybersecurity. Recently, we added to this list by showing how a machine learning algorithm (a so-called learner) combined with an optimization routine can assist experimental efforts in the realm of tuning semiconductor quantum dot (QD) devices. Among other applications, semiconductor QDs are a candidate system for building quantum computers. The present-day heuristic techniques for tuning QD devices into a configuration suitable for quantum computing do not scale with the increasing size of the quantum dot arrays required for even near-term quantum computing demonstrations. Establishing a reliable tuning protocol that does not rely on the gross-scale heuristics developed by experimentalists is thus of great importance. To implement the machine learning-based approach, we constructed a dataset of simulated QD device characteristics, such as the conductance and the charge sensor response versus the applied electrostatic gate voltages. Here, we describe the methodology for generating the dataset, as well as its validation in training convolutional neural networks. We show that the learner's accuracy in recognizing the state of a device is ~96.5 % in both current- and charge-sensor-based training. We also introduce a tool that enables other researchers to use this approach for further research: QFlow lite - a Python-based mini-software suite that uses the dataset to train neural networks to recognize the state of a device and differentiate between states in experimental data. This work gives the definitive reference for the new dataset that will help enable researchers to use it in their experiments or to develop new machine learning approaches and concepts.

VL - 13
U4 - e0205844
UR - https://arxiv.org/abs/1809.10018
CP - 10
U5 - https://doi.org/10.1371/journal.pone.0205844
ER -

TY - JOUR
T1 - Machine learning techniques for state recognition and auto-tuning in quantum dots
Y1 - 2017
A1 - Sandesh S. Kalantre
A1 - Justyna P. Zwolak
A1 - Stephen Ragole
A1 - Xingyao Wu
A1 - Neil M. Zimmerman
A1 - M. D. Stewart
A1 - J. M. Taylor
AB - Recent progress in building large-scale quantum devices for exploring quantum computing and simulation paradigms has relied upon effective tools for achieving and maintaining good experimental parameters, i.e., tuning up devices. In many cases, including in quantum-dot-based architectures, the parameter space grows substantially with the number of qubits and may become a limit to scalability. Fortunately, machine learning techniques for pattern recognition and image classification using so-called deep neural networks have shown surprising successes for computer-aided understanding of complex systems. In this work, we use deep and convolutional neural networks to characterize states and charge configurations of semiconductor quantum dot arrays when one can only measure a current-voltage characteristic of transport (here conductance) through such a device. For simplicity, we model a semiconductor nanowire connected to leads and capacitively coupled to depletion gates using the Thomas-Fermi approximation and Coulomb blockade physics. We then generate labeled training data for the neural networks, and find at least 90 % accuracy for charge and state identification for single and double dots purely from the dependence of the nanowire's conductance upon gate voltages. Using these characterization networks, we can then optimize the parameter space to achieve a desired configuration of the array, a technique we call 'auto-tuning'. Finally, we show how such techniques can be implemented in an experimental setting by applying our approach to an experimental data set, and outline further problems in this domain, from using charge sensing data to extensions to full one- and two-dimensional arrays, that can be tackled with machine learning.

UR - https://arxiv.org/abs/1712.04914
ER -