01429nas a2200157 4500008004100000245005600041210005600097260001300153520097200166100001801138700002301156700001801179700002101197700001601218856003701234 2023 eng d00aRandom Pulse Sequences for Qubit Noise Spectroscopy0 aRandom Pulse Sequences for Qubit Noise Spectroscopy c3/2/20233 a
Qubit noise spectroscopy is an important tool for the experimental investigation of open quantum systems. However, conventional techniques for implementing noise spectroscopy are time-consuming, because they require multiple measurements of the noise spectral density at different frequencies. Here we describe an alternative method for quickly characterizing the spectral density. Our method utilizes random pulse sequences, with carefully controlled correlations among the pulses, to measure arbitrary linear functionals of the noise spectrum. Such measurements allow us to estimate kth-order moments of the noise spectrum, as well as to reconstruct sparse noise spectra via compressed sensing. Our simulations of the performance of the random pulse sequences on a realistic physical system, self-assembled quantum dots, reveal a speedup of an order of magnitude in extracting the noise spectrum compared to conventional dynamical decoupling approaches.
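The linear-functional picture behind this kind of protocol can be sketched in a toy discretization (this is our own illustration, not the paper's pulse-correlation construction; all function names are ours): the decay exponent of a dephasing qubit is an overlap integral of the noise spectrum S(w) with the filter function F(w) of the pulse sequence, so each choice of pulse pattern measures one linear functional of S.

```python
import cmath
import math
import random

def filter_function(signs, dt, omega):
    """F(w) = |sum_j f_j * exp(-i*w*j*dt) * dt|^2 for a piecewise-constant
    sign function f_j in {+1, -1} (each pi pulse flips the sign)."""
    s = sum(f * cmath.exp(-1j * omega * j * dt) for j, f in enumerate(signs))
    return abs(s * dt) ** 2

def decay_exponent(signs, dt, spectrum, omegas):
    """chi = (1/2pi) * integral S(w) F(w) dw, as a Riemann sum: the
    measured decay is a linear functional of the spectrum S."""
    dw = omegas[1] - omegas[0]
    return sum(spectrum(w) * filter_function(signs, dt, w)
               for w in omegas) * dw / (2 * math.pi)

n, dt = 64, 0.1
free = [1] * n                                      # no pulses: F peaks at w = 0
cpmg = [(-1) ** j for j in range(n)]                # alternating pulses: F(0) = 0
rand = [random.choice([-1, 1]) for _ in range(n)]   # one random pulse pattern
```

Correlating the random signs across a sequence reshapes the average filter function, and hence which functional of S(w) the experiment estimates; that is the knob the abstract refers to.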
1 aHuang, Kaixin1 aFarfurnik, Demitry1 aSeif, Alireza1 aHafezi, Mohammad1 aLiu, Yi-Kai uhttps://arxiv.org/abs/2303.0090901629nas a2200109 4500008004100000245013500041210006900176260001400245520120700259100001601466856003701482 2023 eng d00aAn Uncertainty Principle for the Curvelet Transform, and the Infeasibility of Quantum Algorithms for Finding Short Lattice Vectors0 aUncertainty Principle for the Curvelet Transform and the Infeasi c11/7/20233 aThe curvelet transform is a special type of wavelet transform, which is useful for estimating the locations and orientations of waves propagating in Euclidean space. We prove an uncertainty principle that lower-bounds the variance of these estimates, for radial wave functions in n dimensions. As an application of this uncertainty principle, we show the infeasibility of one approach to constructing quantum algorithms for solving lattice problems, such as the approximate shortest vector problem (approximate-SVP), and bounded distance decoding (BDD). This gives insight into the computational intractability of approximate-SVP, which plays an important role in algorithms for integer programming, and in post-quantum cryptosystems.
In this approach to solving lattice problems, one prepares quantum superpositions of Gaussian-like wave functions centered at lattice points. A key step in this procedure requires finding the center of each Gaussian-like wave function, using the quantum curvelet transform. We show that, for any choice of the Gaussian-like wave function, the error in this step will be above the threshold required to solve BDD and approximate-SVP.
Long-range correlated errors can severely impact the performance of NISQ (noisy intermediate-scale quantum) devices, and fault-tolerant quantum computation. Characterizing these errors is important for improving the performance of these devices, via calibration and error correction, and to ensure correct interpretation of the results. We propose a compressed sensing method for detecting two-qubit correlated dephasing errors, assuming only that the correlations are sparse (i.e., at most s pairs of qubits have correlated errors, where s << n(n-1)/2, and n is the total number of qubits). In particular, our method can detect long-range correlations between any two qubits in the system (i.e., the correlations are not restricted to be geometrically local).
Our method is highly scalable: it requires as few as m = O(s log n) measurement settings, and efficient classical postprocessing based on convex optimization. In addition, when m = O(s log^4(n)), our method is highly robust to noise, and has sample complexity O(max(n,s)^2 log^4(n)), which can be compared to conventional methods that have sample complexity O(n^3). Thus, our method is advantageous when the correlations are sufficiently sparse, that is, when s < O(n^(3/2) / log^2(n)). Our method also performs well in numerical simulations on small system sizes, and has some resistance to state-preparation-and-measurement (SPAM) errors. The key ingredient in our method is a new type of compressed sensing measurement, which works by preparing entangled Greenberger-Horne-Zeilinger states (GHZ states) on random subsets of qubits, and measuring their decay rates with high precision.
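The measurement primitive can be illustrated with a deliberately simplified model (our own sketch, not the paper's estimator: noiseless, a single correlated pair, i.e. s = 1, and the GHZ decay rate on a subset taken to be the sum of pairwise rates inside it; function names are ours):

```python
import itertools
import random

def ghz_decay(subset, corr):
    """Toy measurement: the dephasing rate of a GHZ state prepared on
    `subset` is the sum of the pairwise rates c_{jk} for pairs inside it."""
    return sum(c for (j, k), c in corr.items() if j in subset and k in subset)

def consistent_pairs(n, measurements):
    """Noiseless s = 1 recovery: return every pair (j, k) whose
    subset-membership pattern matches the pattern of nonzero decay rates."""
    return [p for p in itertools.combinations(range(n), 2)
            if all((y > 0) == (p[0] in S and p[1] in S) for S, y in measurements)]

n = 8
corr = {(2, 5): 0.7}   # one long-range correlated pair, unknown to the observer
random.seed(7)
measurements = []
for _ in range(20):    # a few random subsets, not all n(n-1)/2 pairs
    S = {q for q in range(n) if random.random() < 0.5}
    measurements.append((S, ghz_decay(S, corr)))

candidates = consistent_pairs(n, measurements)  # the true pair always survives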
The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through a public, competition-like process. The new public-key cryptography standards will specify one or more additional digital signatures, public-key encryption, and key-establishment algorithms to augment Federal Information Processing Standard (FIPS) 186-4, Digital Signature Standard (DSS), as well as NIST Special Publication (SP) 800-56A Revision 3, Recommendation for Pair-Wise Key-Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B Revision 2, Recommendation for Pair-Wise Key Establishment Using Integer Factorization Cryptography. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.
The NIST Post-Quantum Cryptography Standardization Process began in 2017 with 69 candidate algorithms that met both the minimum acceptance criteria and submission requirements. The first round lasted until January 2019, during which candidate algorithms were evaluated based on their security, performance, and other characteristics. NIST selected 26 algorithms to advance to the second round for more analysis. This report describes the evaluation and selection process, based on public feedback and internal review, of the second-round candidates. The report summarizes the 26 second-round candidate algorithms and identifies those selected to move forward to the third round of the competition. The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.
1 aAlagic, Gorjan1 aAlperin-Sheriff, Jacob1 aApon, Daniel1 aCooper, David1 aDang, Quynh1 aKelsey, John1 aLiu, Yi-Kai1 aMiller, Carl1 aMoody, Dustin1 aPeralta, Rene1 aPerlner, Ray1 aRobinson, Angela1 aSmith-Tone, Daniel uhttps://quics.umd.edu/publications/status-report-second-round-nist-post-quantum-cryptography-standardization-process01696nas a2200145 4500008004100000245005800041210005700099260001400156520126100170100002501431700001601456700001701472700002401489856003701513 2019 eng d00aClassifying single-qubit noise using machine learning0 aClassifying singlequbit noise using machine learning c8/30/20193 aQuantum characterization, validation, and verification (QCVV) techniques are used to probe, characterize, diagnose, and detect errors in quantum information processors (QIPs). An important component of any QCVV protocol is a mapping from experimental data to an estimate of a property of a QIP. Machine learning (ML) algorithms can help automate the development of QCVV protocols, creating such maps by learning them from training data. We identify the critical components of "machine-learned" QCVV techniques, and present a rubric for developing them. To demonstrate this approach, we focus on the problem of determining whether noise affecting a single qubit is coherent or stochastic (incoherent) using the data sets originally proposed for gate set tomography. We leverage known ML algorithms to train a classifier distinguishing these two kinds of noise. The accuracy of the classifier depends on how well it can approximate the "natural" geometry of the training data. We find GST data sets generated by a noisy qubit can reliably be separated by linear surfaces, although feature engineering can be necessary. We also show the classifier learned by a support vector machine (SVM) is robust under finite-sample noise.
1 aScholten, Travis, L.1 aLiu, Yi-Kai1 aYoung, Kevin1 aBlume-Kohout, Robin uhttps://arxiv.org/abs/1908.1176202754nas a2200229 4500008004100000245009900041210006900140520205300209100001902262700002402281700001302305700001502318700001302333700001702346700001402363700001602377700001602393700001702409700001902426700001602445856006302461 2019 eng d00aStatus Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process0 aStatus Report on the First Round of the NIST PostQuantum Cryptog3 aThe National Institute of Standards and Technology is in the process of selecting one or more
public-key cryptographic algorithms through a public competition-like process. The new public-key cryptography standards will specify one or more additional digital signature, public-key encryption, and key-establishment algorithms to augment FIPS 186-4, Digital Signature Standard (DSS), as well as special publications SP 800-56A Revision 2, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B, Recommendation for Pair-Wise Key-Establishment Schemes Using Integer Factorization. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.
In November 2017, 82 candidate algorithms were submitted to NIST for consideration. Among these, 69 met both the minimum acceptance criteria and our submission requirements, and were accepted as First-Round Candidates on Dec. 20, 2017, marking the beginning of the First Round of the NIST Post-Quantum Cryptography Standardization Process. This report describes the evaluation criteria and selection process, based on public feedback and internal review of the first-round candidates, and summarizes the 26 candidate algorithms announced on January 30, 2019 for moving forward to the second round of the competition. The 17 Second-Round Candidate public-key encryption and key-establishment algorithms are BIKE, Classic McEliece, CRYSTALS-KYBER, FrodoKEM, HQC, LAC, LEDAcrypt (merger of LEDAkem/LEDApkc), NewHope, NTRU (merger of NTRUEncrypt/NTRU-HRSS-KEM), NTRU Prime, NTS-KEM, ROLLO (merger of LAKE/LOCKER/Ouroboros-R), Round5 (merger of Hila5/Round2), RQC, SABER, SIKE, and Three Bears. The 9 Second-Round Candidates for digital signatures are CRYSTALS-DILITHIUM, FALCON, GeMSS, LUOV, MQDSS, Picnic, qTESLA, Rainbow, and SPHINCS+.
From dice to modern complex circuits, there have been many attempts to build increasingly better devices to generate random numbers. Today, randomness is fundamental to security and cryptographic systems, as well as safeguarding privacy. A key challenge with random number generators is that it is hard to ensure that their outputs are unpredictable. For a random number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model describing the underlying physics is required to assert unpredictability. Such a model must make a number of assumptions that may not be valid, thereby compromising the integrity of the device. However, it is possible to exploit the phenomenon of quantum nonlocality with a loophole-free Bell test to build a random number generator that can produce output that is unpredictable to any adversary limited only by general physical principles. With recent technological developments, it is now possible to carry out such a loophole-free Bell test. Here we present certified randomness obtained from a photonic Bell experiment and extract 1024 random bits uniform to within 10^−12. These random bits could not have been predicted within any physical theory that prohibits superluminal signaling and allows one to make independent measurement choices. To certify and quantify the randomness, we describe a new protocol that is optimized for apparatuses characterized by a low per-trial violation of Bell inequalities. We thus enlisted an experimental result that fundamentally challenges the notion of determinism to build a system that can increase trust in random sources. In the future, random number generators based on loophole-free Bell tests may play a role in increasing the security and trust of our cryptographic systems and infrastructure.
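For concreteness, here is a sketch of one standard extractor of the kind mentioned above: two-universal Toeplitz hashing over GF(2). This is shown purely as an illustration of how raw bits are hashed down to nearly uniform output; it is not necessarily the extractor used in this experiment.

```python
import random

def toeplitz_extract(raw, seed, m):
    """Hash raw (n bits) down to m bits with a Toeplitz matrix over GF(2):
    entry (i, j) of the matrix is seed[i - j + n - 1], so the whole
    (m x n) matrix is specified by only m + n - 1 random seed bits."""
    n = len(raw)
    assert len(seed) == m + n - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):      # GF(2) inner product of row i with raw
            acc ^= seed[i - j + n - 1] & raw[j]
        out.append(acc)
    return out

random.seed(0)
n, m = 16, 4
seed = [random.randint(0, 1) for _ in range(m + n - 1)]
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
out = toeplitz_extract(x, seed, m)
```

Because the hash is linear over GF(2), it forms a two-universal family, which is what the leftover hash lemma needs to turn min-entropy in the raw Bell-test data into nearly uniform bits.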
1 aBierhorst, Peter1 aKnill, Emanuel1 aGlancy, Scott1 aZhang, Yanbao1 aMink, Alan1 aJordan, Stephen1 aRommal, Andrea1 aLiu, Yi-Kai1 aChristensen, Bradley1 aNam, Sae, Woo1 aStevens, Martin, J.1 aShalm, Lynden, K. uhttps://arxiv.org/abs/1803.0621901770nas a2200145 4500008004100000245006300041210006200104260001500166300001200181490000700193520134000200100001901540700001601559856004901575 2018 eng d00aPhase Retrieval Without Small-Ball Probability Assumptions0 aPhase Retrieval Without SmallBall Probability Assumptions c2018/01/01 a485-5000 v643 aIn the context of the phase retrieval problem, it is known that certain natural classes of measurements, such as Fourier measurements and random Bernoulli measurements, do not lead to the unique reconstruction of all possible signals, even in combination with certain practically feasible random masks. To avoid this difficulty, the analysis is often restricted to measurement ensembles (or masks) that satisfy a small-ball probability condition, in order to ensure that the reconstruction is unique. This paper shows a complementary result: for random Bernoulli measurements, there is still a large class of signals that can be reconstructed uniquely, namely, those signals that are non-peaky. In fact, this result is much more general: it holds for random measurements sampled from any subgaussian distribution D, without any small-ball conditions. This is demonstrated in two ways: 1) a proof of stability and uniqueness and 2) a uniform recovery guarantee for the PhaseLift algorithm. In all of these cases, the number of measurements m approaches the information-theoretic lower bound. Finally, for random Bernoulli measurements with erasures, it is shown that PhaseLift achieves uniform recovery of all signals (including peaky ones).
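The Bernoulli ambiguity behind the non-peaky restriction is easy to check directly (a quick self-contained illustration; variable names are ours): the maximally peaky signals e_1 and e_2 produce identical phaseless measurements |<a, x>|^2 = 1 for every ±1 measurement vector a, so no number of Bernoulli measurements can distinguish them.

```python
import random

def phaseless(a, x):
    """One phaseless (phase-retrieval) measurement y = |<a, x>|^2."""
    return abs(sum(ai * xi for ai, xi in zip(a, x))) ** 2

n = 6
random.seed(1)
bernoulli = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(50)]

e1 = [1, 0, 0, 0, 0, 0]   # maximally peaky: all mass on one coordinate
e2 = [0, 1, 0, 0, 0, 0]
# Every +/-1 vector gives |a_1|^2 = |a_2|^2 = 1 for both signals, so
# Bernoulli measurements can never tell e1 from e2: peaky signals are
# the obstruction, which is why recovery is restricted to non-peaky ones.
assert all(phaseless(a, e1) == phaseless(a, e2) == 1 for a in bernoulli)
```

(The global sign ambiguity x versus -x is also visible in this model, but that one is unavoidable for any phaseless measurement scheme.)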
1 aKrahmer, Felix1 aLiu, Yi-Kai uhttp://ieeexplore.ieee.org/document/8052535/01127nas a2200145 4500008004100000245006400041210006200105260001500167490001000182520070300192100001800895700001600913700001500929856003700944 2018 eng d00aPseudorandom States, Non-Cloning Theorems and Quantum Money0 aPseudorandom States NonCloning Theorems and Quantum Money c2017/11/010 v109933 aWe propose the concept of pseudorandom states and study their constructions, properties, and applications. Under the assumption that quantum-secure one-way functions exist, we present concrete and efficient constructions of pseudorandom states. The non-cloning theorem plays a central role in our study—it motivates the proper definition and characterizes one of the important properties of pseudorandom quantum states. Namely, there is no efficient quantum algorithm that can create more copies of the state from a given number of pseudorandom states. As the main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme.
1 aJi, Zhengfeng1 aLiu, Yi-Kai1 aSong, Fang uhttps://arxiv.org/abs/1711.0038500429nas a2200133 4500008004100000245005200041210004900093260001200142300001000154490000700164100002400171700001600195856008400211 2018 eng d00aQuantum Cryptanalysis: Shor, Grover, and Beyond0 aQuantum Cryptanalysis Shor Grover and Beyond c2018/09 a14-210 v161 aJordan, Stephen, P.1 aLiu, Yi-Kai uhttps://quics.umd.edu/publications/quantum-cryptanalysis-shor-grover-and-beyond02434nas a2200205 4500008004100000245006200041210006200103260001500165300001100180490000800191520186900199100001502068700001902083700001902102700001602121700001702137700001702154700002002171856003702191 2018 eng d00aRecovering quantum gates from few average gate fidelities0 aRecovering quantum gates from few average gate fidelities c2018/03/01 a1705020 v1213 aCharacterising quantum processes is a key task in and constitutes a challenge for the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors, and can be used to benchmark Clifford gates. A complementing approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and can not be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. 
Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.
1 aRoth, Ingo1 aKueng, Richard1 aKimmel, Shelby1 aLiu, Yi-Kai1 aGross, David1 aEisert, Jens1 aKliesch, Martin uhttps://arxiv.org/abs/1803.0057201575nas a2200217 4500008004100000245010100041210006900142260001500211520089600226100002101122700001901143700001801162700001501180700002401195700001901219700001601238700002501254700001801279700002201297856003801319 2017 eng d00aExperimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling0 aExperimentally Generated Random Numbers Certified by the Impossi c2017/02/163 aRandom numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment for the generation of randomness that cannot be predicted within any physical theory that allows one to make independent measurement choices and prohibits superluminal signaling. To certify and quantify the randomness, we describe a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtained 256 new random bits, uniform to within 0.001.
1 aBierhorst, Peter1 aKnill, Emanuel1 aGlancy, Scott1 aMink, Alan1 aJordan, Stephen, P.1 aRommal, Andrea1 aLiu, Yi-Kai1 aChristensen, Bradley1 aNam, Sae, Woo1 aShalm, Lynden, K. uhttps://arxiv.org/abs/1702.05178#01775nas a2200133 4500008004100000245007500041210006900116520134900185100001901534700001601553700001601569700001901585856003701604 2017 eng d00aExponential improvements for quantum-accessible reinforcement learning0 aExponential improvements for quantumaccessible reinforcement lea3 aQuantum computers can offer dramatic improvements over classical devices for data analysis tasks such as prediction and classification. However, less is known about the advantages that quantum computers may bring in the setting of reinforcement learning, where learning is achieved via interaction with a task environment. Here, we consider a special case of reinforcement learning, where the task environment allows quantum access. In addition, we impose certain "naturalness" conditions on the task environment, which rule out the kinds of oracle problems that are studied in quantum query complexity (and for which quantum speedups are well-known). Within this framework of quantum-accessible reinforcement learning environments, we demonstrate that quantum agents can achieve exponential improvements in learning efficiency, surpassing previous results that showed only quadratic improvements. A key step in the proof is to construct task environments that encode well-known oracle problems, such as Simon's problem and Recursive Fourier Sampling, while satisfying the above "naturalness" conditions for reinforcement learning. Our results suggest that quantum agents may perform well in certain game-playing scenarios, where the game has recursive structure, and the agent can learn by playing against itself.
1 aDunjko, Vedran1 aLiu, Yi-Kai1 aWu, Xingyao1 aTaylor, J., M. uhttps://arxiv.org/abs/1710.1116001920nas a2200121 4500008004100000245004400041210004300085260001200128520157400140100001901714700001601733856004901749 2017 eng d00aPhase retrieval using unitary 2-designs0 aPhase retrieval using unitary 2designs c2017/073 aWe consider a variant of the phase retrieval problem, where vectors are replaced by unitary matrices, i.e., the unknown signal is a unitary matrix U, and the measurements consist of squared inner products |tr(C†U)|^2 with unitary matrices C that are chosen by the observer. This problem has applications to quantum process tomography, when the unknown process is a unitary operation. We show that PhaseLift, a convex programming algorithm for phase retrieval, can be adapted to this matrix setting, using measurements that are sampled from unitary 4- and 2-designs. In the case of unitary 4-design measurements, we show that PhaseLift can reconstruct all unitary matrices, using a near-optimal number of measurements. This extends previous work on PhaseLift using spherical 4-designs. In the case of unitary 2-design measurements, we show that PhaseLift still works pretty well on average: it recovers almost all signals, up to a constant additive error, using a near-optimal number of measurements. These 2-design measurements are convenient for quantum process tomography, as they can be implemented via randomized benchmarking techniques. This is the first positive result on PhaseLift using 2-designs.
1 aKimmel, Shelby1 aLiu, Yi-Kai uhttp://ieeexplore.ieee.org/document/8024414/01811nas a2200145 4500008004100000245009400041210006900135260001500204520133900219100001901558700001601577700001601593700001901609856003701628 2017 eng d00aSuper-polynomial and exponential improvements for quantum-enhanced reinforcement learning0 aSuperpolynomial and exponential improvements for quantumenhanced c2017/12/123 aRecent work on quantum machine learning has demonstrated that quantum computers can offer dramatic improvements over classical devices for data mining, prediction and classification. However, less is known about the advantages using quantum computers may bring in the more general setting of reinforcement learning, where learning is achieved via interaction with a task environment that provides occasional rewards. Reinforcement learning can incorporate data-analysis-oriented learning settings as special cases, but also includes more complex situations where, e.g., reinforcing feedback is delayed. In a few recent works, Grover-type amplification has been utilized to construct quantum agents that achieve up-to-quadratic improvements in learning efficiency. These encouraging results have left open the key question of whether super-polynomial improvements in learning times are possible for genuine reinforcement learning problems, that is problems that go beyond the other more restricted learning paradigms. In this work, we provide a family of such genuine reinforcement learning tasks. We construct quantum-enhanced learners which learn super-polynomially, and even exponentially faster than any classical reinforcement learning model, and we discuss the potential impact our results may have on future technologies.
1 aDunjko, Vedran1 aLiu, Yi-Kai1 aWu, Xingyao1 aTaylor, J., M. uhttps://arxiv.org/abs/1710.1116001290nas a2200121 4500008004100000245007000041210006900111260001500180520090300195100001701098700001601115856003701131 2017 eng d00aThermodynamic Analysis of Classical and Quantum Search Algorithms0 aThermodynamic Analysis of Classical and Quantum Search Algorithm c2017/09/293 aWe analyze the performance of classical and quantum search algorithms from a thermodynamic perspective, focusing on resources such as time, energy, and memory size. We consider two examples that are relevant to post-quantum cryptography: Grover’s search algorithm, and the quantum algorithm for collision-finding. Using Bennett’s “Brownian” model of low-power reversible computation, we show classical algorithms that have the same asymptotic energy consumption as these quantum algorithms. Thus, the quantum advantage in query complexity does not imply a reduction in these thermodynamic resource costs. In addition, we present realistic estimates of the resource costs of quantum and classical search, for near-future computing technologies. We find that, if memory is cheap, classical exhaustive search can be surprisingly competitive with Grover’s algorithm.
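The query-count side of that comparison is easy to check numerically with a small state-vector simulation of Grover's algorithm (a sketch with our own function names; it models only the query counts the abstract contrasts, not the energy costs): success probability approaches 1 after about (π/4)√N oracle queries, versus roughly N/2 expected queries for classical exhaustive search.

```python
import math

def grover_success_prob(n_items, marked, iters):
    """Simulate Grover's algorithm on n_items with one marked item, using
    real amplitudes: oracle sign flip followed by inversion about the mean."""
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iters):
        amp[marked] = -amp[marked]           # oracle query
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]  # diffusion operator
    return amp[marked] ** 2

N = 64
t = round(math.pi / 4 * math.sqrt(N))        # ~6 quantum queries
p = grover_success_prob(N, 0, t)             # close to 1; classically ~N/2 = 32 queries
```

The paper's point is that this quadratic query advantage need not translate into an energy advantage once the per-step thermodynamic costs of both machines are accounted for.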
1 aPerlner, Ray1 aLiu, Yi-Kai uhttps://arxiv.org/abs/1709.1051001645nas a2200205 4500008004100000245008200041210006900123260001500192300001100207490000700218520102200225100001501247700002401262700002101286700001701307700001601324700002701340700001701367856005501384 2016 eng d00aOptimized tomography of continuous variable systems using excitation counting0 aOptimized tomography of continuous variable systems using excita c2016/11/21 a0523270 v943 aWe propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
1 aShen, Chao1 aHeeres, Reinier, W.1 aReinhold, Philip1 aJiang, Luyao1 aLiu, Yi-Kai1 aSchoelkopf, Robert, J.1 aJiang, Liang uhttp://link.aps.org/doi/10.1103/PhysRevA.94.05232701564nas a2200133 4500008004100000245008900041210006900130260001500199300001200214520109500226100001901321700001601340856007401356 2015 eng d00aPhase Retrieval Without Small-Ball Probability Assumptions: Stability and Uniqueness0 aPhase Retrieval Without SmallBall Probability Assumptions Stabil c2015/05/25 a411-4143 aWe study stability and uniqueness for the phase retrieval problem. That is, we ask when is a signal x ∈ R^n stably and uniquely determined (up to small perturbations), when one performs phaseless measurements of the form y_i = |a_i^T x|^2 (for i = 1, ..., N), where the vectors a_i ∈ R^n are chosen independently at random, with each coordinate a_{ij} ∈ R being chosen independently from a fixed sub-Gaussian distribution D. It is well known that for many common choices of D, certain ambiguities can arise that prevent x from being uniquely determined. In this note we show that for any sub-Gaussian distribution D, with no additional assumptions, most vectors x cannot lead to such ambiguities. More precisely, we show stability and uniqueness for all sets of vectors T ⊂ R^n which are not too peaky, in the sense that at most a constant fraction of their mass is concentrated on any one coordinate. 
The number of measurements needed to recover x ∈ T depends on the complexity of T in a natural way, extending previous results of Eldar and Mendelson [12].1 aKrahmer, Felix1 aLiu, Yi-Kai uhttp://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148923&tag=101216nas a2200121 4500008004100000245004700041210004600088260001500134520087400149100001901023700001601042856003601058 2015 eng d00aQuantum Compressed Sensing Using 2-Designs0 aQuantum Compressed Sensing Using 2Designs c2015/10/293 aWe develop a method for quantum process tomography that combines the efficiency of compressed sensing with the robustness of randomized benchmarking. Our method is robust to state preparation and measurement errors, and it achieves a quadratic speedup over conventional tomography when the unknown process is a generic unitary evolution. Our method is based on PhaseLift, a convex programming technique for phase retrieval. We show that this method achieves approximate recovery of almost all signals, using measurements sampled from spherical or unitary 2-designs. This is the first positive result on PhaseLift using 2-designs. We also show that exact recovery of all signals is possible using unitary 4-designs. Previous positive results for PhaseLift required spherical 4-designs, while PhaseLift was known to fail in certain cases when using spherical 2-designs.1 aKimmel, Shelby1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1510.0888701400nas a2200121 4500008004100000245005500041210005500096260001500151300001200166520104700178100001601225856003701241 2014 eng d00aPrivacy Amplification in the Isolated Qubits Model0 aPrivacy Amplification in the Isolated Qubits Model c2014/10/15 a785-8143 a Isolated qubits are a special class of quantum devices, which can be used to implement tamper-resistant cryptographic hardware such as one-time memories (OTM's). 
Unfortunately, these OTM constructions leak some information, and standard methods for privacy amplification cannot be applied here, because the adversary has advance knowledge of the hash function that the honest parties will use. In this paper we show a stronger form of privacy amplification that solves this problem, using a fixed hash function that is secure against all possible adversaries in the isolated qubits model. This allows us to construct single-bit OTM's which only leak an exponentially small amount of information. We then study a natural generalization of the isolated qubits model, where the adversary is allowed to perform a polynomially-bounded number of entangling gates, in addition to unbounded local operations and classical communication (LOCC). We show that our technique for privacy amplification is also secure in this setting. 1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1410.3918v201619nas a2200133 4500008004100000245007600041210006900117260001500186300001000201490001200211520120900223100001601432856003701448 2014 eng d00aSingle-shot security for one-time memories in the isolated qubits model0 aSingleshot security for onetime memories in the isolated qubits c2014/02/01 a19-360 vPart II3 a One-time memories (OTM's) are simple, tamper-resistant cryptographic devices, which can be used to implement sophisticated functionalities such as one-time programs. Can one construct OTM's whose security follows from some physical principle? This is not possible in a fully-classical world, or in a fully-quantum world, but there is evidence that OTM's can be built using "isolated qubits" -- qubits that cannot be entangled, but can be accessed using adaptive sequences of single-qubit measurements. 
Here we present new constructions for OTM's using isolated qubits, which improve on previous work in several respects: they achieve a stronger "single-shot" security guarantee, which is stated in terms of the (smoothed) min-entropy; they are proven secure against adversaries who can perform arbitrary local operations and classical communication (LOCC); and they are efficiently implementable. These results use Wiesner's idea of conjugate coding, combined with error-correcting codes that approach the capacity of the q-ary symmetric channel, and a high-order entropic uncertainty relation, which was originally developed for cryptography in the bounded quantum storage model. 1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1402.0049v201994nas a2200121 4500008004100000245005200041210005100093260001500144300001200159520164800171100001601819856003701835 2013 eng d00aBuilding one-time memories from isolated qubits0 aBuilding onetime memories from isolated qubits c2013/04/18 a269-2863 a One-time memories (OTM's) are simple tamper-resistant cryptographic devices, which can be used to implement one-time programs, a very general form of software protection and program obfuscation. Here we investigate the possibility of building OTM's using quantum mechanical devices. It is known that OTM's cannot exist in a fully-quantum world or in a fully-classical world. Instead, we propose a new model based on "isolated qubits" -- qubits that can only be accessed using local operations and classical communication (LOCC). This model combines a quantum resource (single-qubit measurements) with a classical restriction (on communication between qubits), and can be implemented using current technologies, such as nitrogen vacancy centers in diamond. In this model, we construct OTM's that are information-theoretically secure against one-pass LOCC adversaries that use 2-outcome measurements. 
Our construction resembles Wiesner's old idea of quantum conjugate coding, implemented using random error-correcting codes; our proof of security uses entropy chaining to bound the supremum of a suitable empirical process. In addition, we conjecture that our random codes can be replaced by some class of efficiently-decodable codes, to get computationally-efficient OTM's that are secure against computationally-bounded LOCC adversaries. Furthermore, we construct data-hiding states, which allow an LOCC sender to encode an (n-O(1))-bit message into n qubits, such that at most half of the message can be extracted by a one-pass LOCC receiver, but the whole message can be extracted by a general quantum receiver. 
1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1304.5007v201523nas a2200181 4500008004100000245009800041210006900139260001500208300001000223520092900233100002101162700002201183700001701205700001601222700002401238700002801262856005101290 2013 eng d00aMultilingual Summarization: Dimensionality Reduction and a Step Towards Optimal Term Coverage0 aMultilingual Summarization Dimensionality Reduction and a Step T c2013/08/09 a55-633 aIn this paper we present three term weighting approaches for multi-lingual document summarization and give results on the DUC 2002 data as well as on the 2013 Multilingual Wikipedia featured articles data set. We introduce a new interval-bounded nonnegative matrix factorization. We use this new method, latent semantic analysis (LSA), and latent Dirichlet allocation (LDA) to give three term-weighting methods for multi-document multi-lingual summarization. Results on DUC and TAC data, as well as on the MultiLing 2013 data, demonstrate that these methods are very promising, since they achieve oracle coverage scores in the range of humans for 6 of the 10 test languages. Finally, we present three term weighting approaches for the MultiLing13 single document summarization task on the Wikipedia featured articles. 
Our submissions significantly outperformed the baseline in 19 out of 41 languages. 
1 aConroy, John, M.1 aDavis, Sashka, T.1 aKubina, Jeff1 aLiu, Yi-Kai1 aO'Leary, Dianne, P.1 aSchlesinger, Judith, D. uhttp://aclweb.org/anthology/W/W13/W13-3108.pdf00882nas a2200157 4500008004100000245004900041210004700090260001400137490000700151520044900158100002200607700002400629700001600653700001800669856003700687 2013 eng d00aTesting quantum expanders is co-QMA-complete0 aTesting quantum expanders is coQMAcomplete c2013/4/150 v873 a A quantum expander is a unital quantum channel that is rapidly mixing, has only a few Kraus operators, and can be implemented efficiently on a quantum computer. We consider the problem of estimating the mixing time (i.e., the spectral gap) of a quantum expander. We show that this problem is co-QMA-complete. This has applications to testing randomized constructions of quantum expanders, and studying thermalization of open quantum systems. 1 aBookatz, Adam, D.1 aJordan, Stephen, P.1 aLiu, Yi-Kai1 aWocjan, Pawel uhttp://arxiv.org/abs/1210.0787v202431nas a2200169 4500008004100000245010800041210006900149260001500218300001100233490000700244520189900251100002402150700001702174700001602191700001702207856003702224 2012 eng d00aQuantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators 0 aQuantum Tomography via Compressed Sensing Error Bounds Sample Co c2012/09/27 a0950220 v143 a Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. 
Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper-bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low rank estimate using direct fidelity estimation and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank. 
1 aFlammia, Steven, T.1 aGross, David1 aLiu, Yi-Kai1 aEisert, Jens uhttp://arxiv.org/abs/1205.2300v201932nas a2200169 4500008004100000245005700041210005500098260001500153300001200168520144400180100002701624700002101651700001601672700002101688700001601709856003701725 2012 eng d00aA Spectral Algorithm for Latent Dirichlet Allocation0 aSpectral Algorithm for Latent Dirichlet Allocation c2012/04/30 a193-2143 a The problem of topic modeling can be seen as a generalization of the clustering problem, in that it posits that observations are generated due to multiple latent factors (e.g., the words in each document are generated as a mixture of several active topics, as opposed to just one). This increased representational power comes at the cost of a more challenging unsupervised learning problem of estimating the topic probability vectors (the distributions over words for each topic), when only the words are observed and the corresponding topics are hidden. We provide a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of mixture models, including the popular latent Dirichlet allocation (LDA) model. For LDA, the procedure correctly recovers both the topic probability vectors and the prior over the topics, using only trigram statistics (i.e., third order moments, which may be estimated with documents containing just three words). The method, termed Excess Correlation Analysis (ECA), is based on a spectral decomposition of low order moments (third and fourth order) via two singular value decompositions (SVDs). Moreover, the algorithm is scalable since the SVD operations are carried out on $k\times k$ matrices, where $k$ is the number of latent factors (e.g. the number of topics), rather than in the $d$-dimensional observed space (typically $d \gg k$). 
1 aAnandkumar, Animashree1 aFoster, Dean, P.1 aHsu, Daniel1 aKakade, Sham, M.1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1204.6703v401526nas a2200157 4500008004100000245005100041210005000092260001500142520108300157100002201240700001901262700001701281700001601298700001701314856003701331 2011 eng d00aContinuous-variable quantum compressed sensing0 aContinuousvariable quantum compressed sensing c2011/11/033 a We significantly extend recently developed methods to faithfully reconstruct unknown quantum states that are approximately low-rank, using only a few measurement settings. Our new method is general enough to allow for measurements from a continuous family, and is also applicable to continuous-variable states. As a technical result, this work generalizes quantum compressed sensing to the situation where the measured observables are taken from a so-called tight frame (rather than an orthonormal basis) --- hence covering most realistic measurement scenarios. As an application, we discuss the reconstruction of quantum states of light from homodyne detection and other types of measurements, and we present simulations that show the advantage of the proposed compressed sensing technique over present methods. Finally, we introduce a method to construct a certificate which guarantees the success of the reconstruction with no assumption on the state, and we show how slightly more measurements give rise to "universal" state reconstruction that is highly robust to noise. 1 aOhliger, Matthias1 aNesme, Vincent1 aGross, David1 aLiu, Yi-Kai1 aEisert, Jens uhttp://arxiv.org/abs/1111.0853v300981nas a2200133 4500008004100000245005900041210005900100260001300159490000800172520059000180100002400770700001600794856003700810 2011 eng d00aDirect Fidelity Estimation from Few Pauli Measurements0 aDirect Fidelity Estimation from Few Pauli Measurements c2011/6/80 v1063 a We describe a simple method for certifying that an experimental device prepares a desired quantum state rho. 
Our method is applicable to any pure state rho, and it provides an estimate of the fidelity between rho and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels. 1 aFlammia, Steven, T.1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1104.4695v301292nas a2200121 4500008004100000245006300041210006200104260001500166300001400181520092200195100001601117856003701133 2011 eng d00aUniversal low-rank matrix recovery from Pauli measurements0 aUniversal lowrank matrix recovery from Pauli measurements c2011/03/14 a1638-16463 a We study the problem of reconstructing an unknown matrix M of rank r and dimension d using O(rd poly log d) Pauli measurements. This has applications in quantum state tomography, and is a non-commutative analogue of a well-known problem in compressed sensing: recovering a sparse vector from a few of its Fourier coefficients. We show that almost all sets of O(rd log^6 d) Pauli measurements satisfy the rank-r restricted isometry property (RIP). This implies that M can be recovered from a fixed ("universal") set of Pauli measurements, using nuclear-norm minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error. A similar result holds for any class of measurements that use an orthonormal operator basis whose elements have small operator norm. Our proof uses Dudley's inequality for Gaussian processes, together with bounds on covering numbers obtained via entropy duality. 
1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1103.2816v201192nas a2200133 4500008004100000245005800041210005800099260001500157520078600172100002900958700001600987700001801003856003701021 2010 eng d00aEfficient Direct Tomography for Matrix Product States0 aEfficient Direct Tomography for Matrix Product States c2010/02/243 a In this note, we describe a method for reconstructing matrix product states from a small number of efficiently-implementable measurements. Our method is exponentially faster than standard tomography, and it can also be used to certify that the unknown state is an MPS. The basic idea is to use local unitary operations to measure in the Schmidt basis, giving direct access to the MPS representation. This compares favorably with recently and independently proposed methods that recover the MPS tensors by performing a variational minimization, which is computationally intractable in certain cases. Our method also has the advantage of recovering any MPS, while other approaches were limited to special classes of states that exclude important examples such as GHZ and W states. 1 aLandon-Cardinal, Olivier1 aLiu, Yi-Kai1 aPoulin, David uhttp://arxiv.org/abs/1002.4632v101775nas a2200229 4500008004100000245003900041210003900080260001500119300000800134490000600142520116900148100001901317700002301336700002401359700001701383700002601400700001901426700002901445700001601474700001801490856003701508 2010 eng d00aEfficient quantum state tomography0 aEfficient quantum state tomography c2010/12/21 a1490 v13 a Quantum state tomography, the ability to deduce the state of a quantum system from measured data, is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of quantum measurements and the amount of computation required to process them grows exponentially in the system size. 
Here we show that we can do exponentially better than direct state tomography for a wide range of quantum states, in particular those that are well approximated by a matrix product state ansatz. We present two schemes for tomography in 1-D quantum systems and touch on generalizations. One scheme requires unitary operations on a constant number of subsystems, while the other requires only local measurements together with more elaborate post-processing. Both schemes rely only on a linear number of experimental operations and classical postprocessing that is polynomial in the system size. A further strength of the methods is that the accuracy of the reconstructed states can be rigorously certified without any a priori assumptions. 1 aCramer, Marcus1 aPlenio, Martin, B.1 aFlammia, Steven, T.1 aGross, David1 aBartlett, Stephen, D.1 aSomma, Rolando1 aLandon-Cardinal, Olivier1 aLiu, Yi-Kai1 aPoulin, David uhttp://arxiv.org/abs/1101.4366v101146nas a2200145 4500008004100000245005500041210005400096260001500150300001200165520072600177100002100903700002300924700001600947856003700963 2010 eng d00aQuantum property testing for bounded-degree graphs0 aQuantum property testing for boundeddegree graphs c2010/12/14 a365-3763 a We study quantum algorithms for testing bipartiteness and expansion of bounded-degree graphs. We give quantum algorithms that solve these problems in time O(N^(1/3)), beating the Omega(sqrt(N)) classical lower bound. For testing expansion, we also prove an Omega(N^(1/4)) quantum query lower bound, thus ruling out the possibility of an exponential quantum speedup. Our quantum algorithms follow from a combination of classical property testing techniques due to Goldreich and Ron, derandomization, and the quantum algorithm for element distinctness. The quantum lower bound is obtained by the polynomial method, using novel algebraic techniques and combinatorial analysis to accommodate the graph structure. 
1 aAmbainis, Andris1 aChilds, Andrew, M.1 aLiu, Yi-Kai uhttp://arxiv.org/abs/1012.3174v301338nas a2200169 4500008004100000245005200041210005200093260001400145490000800159520087000167100001701037700001601054700002401070700002001094700001701114856003701131 2010 eng d00aQuantum state tomography via compressed sensing0 aQuantum state tomography via compressed sensing c2010/10/40 v1053 a We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations. 1 aGross, David1 aLiu, Yi-Kai1 aFlammia, Steven, T.1 aBecker, Stephen1 aEisert, Jens uhttp://arxiv.org/abs/0909.3304v401371nas a2200121 4500008004100000245005200041210005200093260001500145300001200160520102400172100001601196856003701212 2009 eng d00aQuantum Algorithms Using the Curvelet Transform0 aQuantum Algorithms Using the Curvelet Transform c2009/10/28 a391-4003 a The curvelet transform is a directional wavelet transform over R^n, which is used to analyze functions that have singularities along smooth surfaces (Candes and Donoho, 2002). I demonstrate how this can lead to new quantum algorithms. 
I give an efficient implementation of a quantum curvelet transform, together with two applications: a single-shot measurement procedure for approximately finding the center of a ball in R^n, given a quantum-sample over the ball; and, a quantum algorithm for finding the center of a radial function over R^n, given oracle access to the function. I conjecture that these algorithms succeed with constant probability, using one quantum-sample and O(1) oracle queries, respectively, independent of the dimension n -- this can be interpreted as a quantum speed-up. To support this conjecture, I prove rigorous bounds on the distribution of probability mass for the continuous curvelet transform. This shows that the above algorithms work in an idealized "continuous" model. 1 aLiu, Yi-Kai uhttp://arxiv.org/abs/0810.4968v201885nas a2200109 4500008004100000245009000041210006900131260001500200520150700215100001601722856003701738 2007 eng d00aThe Complexity of the Consistency and N-representability Problems for Quantum States0 aComplexity of the Consistency and Nrepresentability Problems for c2007/12/183 a QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most notably the ``Local Hamiltonian'' problem introduced by Kitaev. In this dissertation we show some new QMA-complete problems. The first one is ``Consistency of Local Density Matrices'': given several density matrices describing different (constant-size) subsets of an n-qubit system, decide whether these are consistent with a single global state. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from Local Hamiltonian. This uses algorithms for convex optimization with a membership oracle, due to Yudin and Nemirovskii. Next we show that two problems from quantum chemistry, ``Fermionic Local Hamiltonian'' and ``N-representability,'' are QMA-complete. 
These problems arise in calculating the ground state energies of molecular systems. N-representability is a key component in recently developed numerical methods using the contracted Schrodinger equation. Although these problems have been studied since the 1960s, it is only recently that the theory of quantum computation has allowed us to properly characterize their complexity. Finally, we study some special cases of the Consistency problem, pertaining to 1-dimensional and ``stoquastic'' systems. We also give an alternative proof of a result due to Jaynes: whenever local density matrices are consistent, they are consistent with a Gibbs state. 
1 aLiu, Yi-Kai uhttp://arxiv.org/abs/0712.3041v101015nas a2200109 4500008004100000245007300041210006800114260001500182520065500197100001600852856003700868 2007 eng d00aThe Local Consistency Problem for Stoquastic and 1-D Quantum Systems0 aLocal Consistency Problem for Stoquastic and 1D Quantum Systems c2007/12/103 a The Local Hamiltonian problem (finding the ground state energy of a quantum system) is known to be QMA-complete. The Local Consistency problem (deciding whether descriptions of small pieces of a quantum system are consistent) is also known to be QMA-complete. Here we consider special cases of Local Hamiltonian, for ``stoquastic'' and 1-dimensional systems, that seem to be strictly easier than QMA. We show that there exist analogous special cases of Local Consistency, that have equivalent complexity (up to poly-time oracle reductions). Our main technical tool is a new reduction from Local Consistency to Local Hamiltonian, using SDP duality. 
1 aLiu, Yi-Kai uhttp://arxiv.org/abs/0712.1388v200845nas a2200145 4500008004100000245003900041210003700080260001500117490000700132520042900139100001600568700002500584700001900609856007100628 2007 eng d00aN-representability is QMA-complete0 aNrepresentability is QMAcomplete c2007/03/160 v983 aWe study the computational complexity of the N-representability problem in quantum chemistry. We show that this problem is quantum Merlin-Arthur complete, which is the quantum generalization of nondeterministic polynomial time complete. Our proof uses a simple mapping from spin systems to fermionic systems, as well as a convex optimization technique that reduces the problem of finding ground states to N representability.1 aLiu, Yi-Kai1 aChristandl, Matthias1 aVerstraete, F. uhttp://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.11050301845nas a2200145 4500008004100000245005400041210005100095260001500146300001200161520139800173100001601571700002401587700002401611856006401635 2006 eng d00aOn Bounded Distance Decoding for General Lattices0 aBounded Distance Decoding for General Lattices c2006/01/01 a450-4613 aA central problem in the algorithmic study of lattices is the closest vector problem: given a lattice L represented by some basis, and a target point y⃗ , find the lattice point closest to y⃗ . Bounded Distance Decoding is a variant of this problem in which the target is guaranteed to be close to the lattice, relative to the minimum distance λ1(L) of the lattice. Specifically, in the α-Bounded Distance Decoding problem (α-BDD), we are given a lattice L and a vector y⃗ (within distance α⋅λ1(L) from the lattice), and we are asked to find a lattice point x⃗ ∈L within distance α⋅λ1(L) from the target. In coding theory, the lattice points correspond to codewords, and the target points correspond to lattice points being perturbed by noise vectors. 
Since in coding theory the lattice is usually fixed, we may “pre-process” it before receiving any targets, to make the subsequent decoding faster. This leads us to consider α-BDD with pre-processing. We show how a recent technique of Aharonov and Regev [2] can be used to solve α-BDD with pre-processing in polynomial time for α = O(sqrt((log n)/n)). This improves upon the previously best known algorithm due to Klein [13] which solved the problem for α = O(1/n). We also establish hardness results for α-BDD and α-BDD with pre-processing, as well as generalize our results to other ℓ_p norms.1 aLiu, Yi-Kai1 aLyubashevsky, Vadim1 aMicciancio, Daniele uhttp://link.springer.com/chapter/10.1007/11830924_41#page-101241nas a2200121 4500008004100000245005800041210005700099260001500156300001200171520087600183100001601059856004401075 2006 eng d00aConsistency of Local Density Matrices is QMA-complete0 aConsistency of Local Density Matrices is QMAcomplete c2006/04/21 a438-4493 a Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes a subset C_i of the qubits. We say that the rho_i are ``consistent'' if there exists some global state sigma (on all n qubits) that matches each of the rho_i on the subsets C_i. This generalizes the classical notion of the consistency of marginal probability distributions. We show that deciding the consistency of local density matrices is QMA-complete (where QMA is the quantum analogue of NP). This gives an interesting example of a hard problem in QMA. Our proof is somewhat unusual: we give a Turing reduction from Local Hamiltonian, using a convex optimization algorithm by Bertsimas and Vempala, which is based on random sampling. Unlike in the classical case, simple mapping reductions do not seem to work here. 
1 aLiu, Yi-Kai uhttp://arxiv.org/abs/quant-ph/0604166v301350nas a2200109 4500008004100000245006300041210006300104260001500167520099800182100001601180856004401196 2006 eng d00aGibbs States and the Consistency of Local Density Matrices0 aGibbs States and the Consistency of Local Density Matrices c2006/03/023 a Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes some subset of the qubits. We say that rho_1,...,rho_m are "consistent" if there exists a global state sigma (on all n qubits) whose reduced density matrices match rho_1,...,rho_m. We prove the following result: if rho_1,...,rho_m are consistent with some state sigma > 0, then they are also consistent with a state sigma' of the form sigma' = (1/Z) exp(M_1+...+M_m), where each M_i is a Hermitian matrix acting on the same qubits as rho_i, and Z is a normalizing factor. (This is known as a Gibbs state.) Actually, we show a more general result, on the consistency of a set of expectation values