TY - JOUR
T1 - Provably efficient machine learning for quantum many-body problems
JF - Science
Y1 - 2022
A1 - Hsin-Yuan Huang
A1 - Richard Kueng
A1 - Giacomo Torlai
A1 - Victor V. Albert
A1 - John Preskill
AB -

Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under widely accepted complexity theory assumptions, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
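
As a concrete illustration of the classical-shadow idea this abstract invokes, below is a minimal single-qubit sketch of the underlying randomized-measurement protocol (measure in a uniformly random Pauli basis, then invert the measurement channel). The state, observable, sample count, and all names are illustrative choices for this sketch, not code from the paper.

```python
# Minimal single-qubit classical-shadow sketch: each snapshot measures
# rho in a random Pauli basis and applies the inverted measurement
# channel, 3|v><v| - I, which is an unbiased estimator of rho.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PAULIS = [X, Y, Z]

def snapshot(rho, rng):
    """One classical-shadow snapshot of the state rho."""
    P = PAULIS[rng.integers(3)]
    _, vecs = np.linalg.eigh(P)                  # columns = eigenvectors
    probs = np.clip([np.real(v.conj() @ rho @ v) for v in vecs.T], 0, None)
    b = rng.choice(2, p=probs / probs.sum())     # Born-rule outcome
    v = vecs[:, b]
    return 3 * np.outer(v, v.conj()) - I2        # inverted channel

rng = np.random.default_rng(0)
rho = 0.5 * (I2 + X)                             # the |+> state
shadows = [snapshot(rho, rng) for _ in range(20000)]
est = np.mean([np.real(np.trace(X @ s)) for s in shadows])
print(f"<X> estimate from shadows: {est:.3f} (exact: 1.0)")
```

Averaging Tr(O rho_hat) over snapshots estimates the expectation of many observables O from the same measurement record, which is what makes classical shadows a convenient data format for the learning methods described above.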

VL - 377
UR - https://arxiv.org/abs/2106.12627
U5 - 10.1126/science.abk3333
ER -

TY - JOUR
T1 - Generalization in quantum machine learning from few training data
Y1 - 2021
A1 - Matthias C. Caro
A1 - Hsin-Yuan Huang
A1 - M. Cerezo
A1 - Kunal Sharma
A1 - Andrew Sornborger
A1 - Lukasz Cincio
A1 - Patrick J. Coles
AB -

Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that compiling unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically relies on an exponential-size training data set, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
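
The √(T/N) scaling above lends itself to a quick back-of-envelope estimate of how much training data a model needs. The sketch below only illustrates the stated scaling with constants suppressed; the gate counts and target error are hypothetical numbers, not values from the paper.

```python
# Back-of-envelope use of the sqrt(T/N) generalization bound: choose N
# so that sqrt(T/N) <= eps, i.e. N >= T / eps**2. Constants suppressed;
# all numbers here are hypothetical.
from math import ceil

def n_required(num_gates: int, eps: float) -> int:
    """Smallest N with sqrt(num_gates / N) <= eps."""
    return ceil(num_gates / eps**2)

for T in (50, 200, 1000):
    print(f"T={T:5d} gates -> N >= {n_required(T, 0.1):7d} samples at eps=0.1")
# If only K << T gates change appreciably during training, the same
# estimate applies with T replaced by K, per the improved sqrt(K/N) bound.
```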

UR - https://arxiv.org/abs/2111.05292
ER -