We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gily{\'e}n et al. [STOC'19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: $\ell_2$-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.

}, doi = {https://doi.org/10.1145/3357713.3384314}, url = {https://arxiv.org/abs/1910.06151}, author = {Nai-Hui Chia and Andr{\'a}s Gily{\'e}n and Tongyang Li and Han-Hsuan Lin and Ewin Tang and Chunhao Wang} } @article {2722, title = {Sublinear classical and quantum algorithms for general matrix games}, journal = {To appear in the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021)}, year = {2020}, month = {12/11/2020}, abstract = {We investigate sublinear classical and quantum algorithms for matrix games, a fundamental problem in optimization and machine learning, with provable guarantees. Given a matrix $A \in \mathbb{R}^{n \times d}$, sublinear algorithms for the matrix game $\min_{x \in X} \max_{y \in Y} y^\top A x$ were previously known only for two special cases: (1) $Y$ being the $\ell_1$-norm unit ball, and (2) $X$ being either the $\ell_1$- or the $\ell_2$-norm unit ball. We give a sublinear classical algorithm that can interpolate smoothly between these two cases: for any fixed $q \in (1,2]$, we solve the matrix game where $X$ is an $\ell_q$-norm unit ball within additive error $\varepsilon$ in time $\widetilde{O}((n+d)/\varepsilon^2)$. We also provide a corresponding sublinear quantum algorithm that solves the same task in time $\widetilde{O}((\sqrt{n}+\sqrt{d})\,\mathrm{poly}(1/\varepsilon))$ with a quadratic improvement in both $n$ and $d$. Both our classical and quantum algorithms are optimal in the dimension parameters $n$ and $d$ up to poly-logarithmic factors. Finally, we propose sublinear classical and quantum algorithms for the approximate Carath{\'e}odory problem and the $\ell_q$-margin support vector machines as applications.

}, url = {https://arxiv.org/abs/2012.06519}, author = {Tongyang Li and Chunhao Wang and Shouvanik Chakrabarti and Xiaodi Wu} } @article {2448, title = {Quantum algorithm for estimating volumes of convex bodies}, year = {2019}, month = {8/11/2019}, abstract = {Estimating the volume of a convex body is a central problem in convex geometry and can be viewed as a continuous version of counting. We present a quantum algorithm that estimates the volume of an $n$-dimensional convex body within multiplicative error $\varepsilon$ using $\widetilde{O}(n^{3.5}+n^{2.5}/\varepsilon)$ queries to a membership oracle and $\widetilde{O}(n^{5.5}+n^{4.5}/\varepsilon)$ additional arithmetic operations. For comparison, the best known classical algorithm uses $\widetilde{O}(n^{4}+n^{3}/\varepsilon^{2})$ queries and $\widetilde{O}(n^{6}+n^{5}/\varepsilon^{2})$ additional arithmetic operations. To the best of our knowledge, this is the first quantum speedup for volume estimation. Our algorithm is based on a refined framework for speeding up simulated annealing algorithms that might be of independent interest. This framework applies in the setting of ``Chebyshev cooling'', where the solution is expressed as a telescoping product of ratios, each having bounded variance. We develop several novel techniques when implementing our framework, including a theory of continuous-space quantum walks with rigorous bounds on discretization error.

}, url = {https://arxiv.org/abs/1908.03903}, author = {Shouvanik Chakrabarti and Andrew M. Childs and Shih-Han Hung and Tongyang Li and Chunhao Wang and Xiaodi Wu} } @article {2338, title = {Quantum-inspired classical sublinear-time algorithm for solving low-rank semidefinite programming via sampling approaches}, year = {2019}, abstract = {Semidefinite programming (SDP) is a central topic in mathematical optimization with extensive studies on its efficient solvers. Recently, quantum algorithms with superpolynomial speedups for solving SDPs have been proposed assuming access to their constraint matrices in quantum superposition. Mutually inspired by both classical and quantum SDP solvers, in this paper we present a sublinear classical algorithm for solving low-rank SDPs which is asymptotically as good as existing quantum algorithms. Specifically, given an SDP with $m$ constraint matrices, each of dimension $n$ and rank $\mathrm{poly}(\log n)$, our algorithm gives a succinct description and any entry of the solution matrix in time $O(m \cdot \mathrm{poly}(\log n, 1/\varepsilon))$, given access to a sample-based low-overhead data structure for the constraint matrices, where $\varepsilon$ is the precision of the solution. In addition, we apply our algorithm to a quantum state learning task as an application. Technically, our approach aligns with both the SDP solvers based on the matrix multiplicative weight (MMW) framework and the recent studies of quantum-inspired machine learning algorithms. The cost of solving SDPs by MMW mainly comes from the exponentiation of Hermitian matrices, and we propose two new technical ingredients (compared to previous sample-based algorithms) for this task that may be of independent interest: (1) Weighted sampling: assuming sampling access to each individual constraint matrix $A_1, \ldots, A_\tau$, we propose a procedure that gives a good approximation of $A = A_1 + \cdots + A_\tau$. (2) Symmetric approximation: we propose a sampling procedure that gives a low-rank spectral decomposition of a Hermitian matrix $A$. This improves upon previous sampling procedures, which only give low-rank singular value decompositions and thus lose the signs of eigenvalues.

}, url = {https://arxiv.org/abs/1901.03254}, author = {Nai-Hui Chia and Tongyang Li and Han-Hsuan Lin and Chunhao Wang} }
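% The abstracts above repeatedly invoke one primitive: $\ell_2$-norm importance sampling
% approximates a matrix product in time independent of the large dimension. The sketch
% below (a comment for BibTeX purposes, not part of any entry) illustrates the standard
% sampled matrix-product estimator in NumPy on the Gram matrix $A^\top A$. It is an
% assumed, simplified illustration of the technique, not code from any cited paper;
% the function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_matmul(A, B, s):
    """Estimate A.T @ B from s rows sampled with probability
    proportional to the squared l2 norm of each row of A.
    Cost scales with s and the small dimensions, not with the
    number of rows n -- the dimension-independence the abstracts rely on."""
    row_sq = np.einsum("ij,ij->i", A, A)      # squared l2 norm of each row
    p = row_sq / row_sq.sum()                 # sampling distribution
    idx = rng.choice(A.shape[0], size=s, p=p)
    est = np.zeros((A.shape[1], B.shape[1]))
    for i in idx:
        # Each term outer(A_i, B_i) / p_i is an unbiased estimate of A.T @ B.
        est += np.outer(A[i], B[i]) / p[i]
    return est / s

# Demo: approximate the 30x30 Gram matrix of a tall 2000x30 matrix.
A = rng.standard_normal((2000, 30))
exact = A.T @ A
approx = sampled_matmul(A, A, s=4000)
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

% The estimator's variance is controlled by the Frobenius norms of the factors, so a
% modest number of samples gives a small relative error when, as here, the product is
% not too small relative to those norms.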