2022, English
Title: Provably efficient machine learning for quantum many-body problems
Date: 9/26/2022
Volume: 377
Abstract:
Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under widely accepted complexity theory assumptions, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
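The classical shadows mentioned in the abstract can be illustrated for a single qubit with random Pauli-basis measurements (a minimal sketch assuming NumPy; the function names are illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices; the eigenvectors of each define a measurement basis.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def single_qubit_shadow(rho, num_snapshots):
    """Collect classical-shadow snapshots of a 1-qubit state rho by
    measuring in a uniformly random Pauli basis each round.
    The snapshot 3|v><v| - I inverts the measurement channel on average."""
    snapshots = []
    for _ in range(num_snapshots):
        pauli = PAULIS[rng.integers(3)]
        _, evecs = np.linalg.eigh(pauli)  # columns are the eigenbasis of pauli
        probs = np.clip(
            [np.real(v.conj() @ rho @ v) for v in evecs.T], 0.0, None)
        outcome = rng.choice(2, p=probs / probs.sum())
        v = evecs[:, outcome]
        snapshots.append(3.0 * np.outer(v, v.conj()) - I2)
    return snapshots

def estimate_expectation(snapshots, obs):
    """Unbiased estimate of Tr(obs @ rho) from a classical shadow."""
    return float(np.mean([np.real(np.trace(obs @ s)) for s in snapshots]))

# Example: rho = |0><0|, so the exact value of <Z> is +1.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
shadow = single_qubit_shadow(rho0, 5000)
print(estimate_expectation(shadow, Z))  # close to 1.0
```

Each snapshot is a succinct classical record of one measurement, and averaging snapshots against an observable gives an unbiased estimate of its expectation value, which is the property-prediction primitive the abstract refers to.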
Authors: Huang, Hsin-Yuan; Kueng, Richard; Torlai, Giacomo; Albert, Victor V.; Preskill, John
URL: https://arxiv.org/abs/2106.12627

2021, English
Title: Generalization in quantum machine learning from few training data
Date: 11/9/2021
Abstract:
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error-correcting codes and quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
Authors: Caro, Matthias C.; Huang, Hsin-Yuan; Cerezo, M.; Sharma, Kunal; Sornborger, Andrew; Cincio, Lukasz; Coles, Patrick J.
URL: https://arxiv.org/abs/2111.05292
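The √(T/N) and √(K/N) scalings quoted in the abstract above can be evaluated numerically (a minimal sketch; the function name is hypothetical, and constants and logarithmic factors in the full bound are omitted):

```python
import math

def qml_generalization_bound(T, N, K=None):
    """Worst-case scaling of the generalization error for a QML model
    with T trainable gates and N training points: sqrt(T/N), improving
    to sqrt(K/N) when only K << T gates change substantially during
    optimization. Constants in the actual bound are omitted."""
    return math.sqrt((K if K is not None else T) / N)

# A circuit with T = 1000 trainable gates:
print(qml_generalization_bound(T=1000, N=100))        # ≈ 3.1623
print(qml_generalization_bound(T=1000, N=100, K=10))  # ≈ 0.3162
```

The comparison shows why the K ≪ T regime matters: with only 10 of 1000 gates changing substantially, the same 100 training points yield a tenfold smaller error scaling.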