Thanks everyone for attending the workshop. We have posted slides for contributed talks and invited talks on the website.
Abstracts for accepted contributed presentations have been posted. Visit the Contributed Presentations page for more information.
Workshop registration and abstract submission are now available!
Speaker: Boris Altshuler, Columbia University, New York, NY, USA
Title: Adiabatic quantum optimization and Anderson localization
Abstract: Understanding NP-complete problems is a central topic in computer science. This is why adiabatic quantum optimization has attracted so much attention: it provides a new approach to tackling NP-complete problems using a quantum computer. The efficiency of this approach is limited by small spectral gaps between the ground and excited states of the quantum computer's Hamiltonian. We will discuss the statistics of these gaps using tools borrowed from the theory of quantum disordered systems. It turns out that, due to a phenomenon similar to Anderson localization, exponentially small gaps appear close to the end of the adiabatic algorithm for large random instances of NP-complete problems. We will present a quantitative analysis of the small spectral gaps and discuss possible consequences of this phenomenon for the adiabatic optimization paradigm.
**********
Speaker: Daniel E. Browne, University College London, UK
Title: Correlations in Measurement-Based Quantum Computing and Bell Inequalities
Abstract:
Measurement-based quantum computation (or one-way quantum computation) is a model in which a series of adaptive single-qubit measurements upon an entangled resource state can produce classical output data equivalent to an arbitrary quantum logic circuit. Loosely speaking, the correlations in these measurements can be considered the resource which allows the computation to be performed.
Bell inequalities demonstrate that quantum mechanics permits the outcomes of sets of measurements to be correlated in ways incompatible with the predictions of any local hidden variable theory, including classical physics. In a Bell inequality experiment, and its multi-party generalisations, a set of single-qubit measurements is performed on spatially separated systems. Correlations in these measurements can act as a signature that no local hidden variable description of the experiment is possible.
At first sight, we see superficial similarities between these settings. In both cases, non-classical correlations play a central role. However, there are also some key differences. Most importantly, measurement-based quantum computing requires adaptive measurements, at odds with the spatially separated nature of Bell inequality experiments.
Nevertheless, some striking connections have been reported [1], [2], particularly between GHZ-type paradoxes and deterministic MBQC computations, and between CHSH-type Bell inequalities and non-deterministic MBQC correlations [1], [3]. In my talk I will survey these works and some recent results [3], [4] from my group at UCL, in which a number of connections between multi-party Bell inequalities and MBQC are explored.
[1] J. Anders and D.E. Browne, "The Computational Power of Correlations", Physical Review Letters 102, 050502 (2009)
[2] R. Raussendorf, "Quantum computation, discreteness, and contextuality", arxiv:0907.5449
[3] M. J. Hoban, E.T. Campbell, K. Loukopoulos, D.E. Browne, "Non-adaptive measurement-based quantum computation and multi-party Bell inequalities", in preparation.
[4] M. J. Hoban and D.E. Browne, in preparation.
**********
Speaker: Gemma De las Cuevas, Universität Innsbruck, Innsbruck, Austria
Title: Unifying Classical Spin Models Using A Quantum Formalism
Abstract:
We have recently shown that the partition function of any classical spin model, including all discrete standard statistical models and all Abelian discrete lattice gauge theories, can be expressed as a specific instance of the partition function of a four-dimensional Ising lattice gauge theory. This unifies models with apparently very different features (global vs. local symmetries, many-body interactions, different dimensions, etc.) in a single complete model. The proof of the result uses techniques from quantum information. In this talk we review these mappings and discuss recent developments, including quantum algorithms to approximate the partition function.
**********
Speaker: Steve Flammia, Perimeter Institute, Waterloo, ON, Canada
Title: Adiabatic Quantum Transistors
Authors: Dave Bacon(1,2), Gregory M. Crosswhite(2), and Steven T. Flammia(3)
Institution(s): 1.Department of Computer Science & Engineering, University of Washington; 2. Department of Physics, University of Washington; 3.Perimeter Institute for Theoretical Physics
Abstract: The invention of the transistor was a watershed moment in the history of computing: it provided a logic element that was naturally robust to noise and error. Quantum computers offer the potential to exponentially speed up some computational problems, but have not been built in large part because quantum information is notoriously fragile and quickly becomes classical information in the presence of noise. In theory, the quantum threshold theorem asserts that these difficulties can be circumvented, but in practice the requirements of this theorem are daunting. Here we propose a novel method for building a fault-tolerant quantum computer which much more closely mimics the classical transistor. We show how a suitably engineered material can be made to quantum compute by the simple application of an external field to the sample. This construction opens a new path toward the engineering of a large-scale quantum computer with design and control advantages over the prior state of the art. Just as a transistor works by causing a phase transition between an insulating and conducting phase conditional on an external electric field, the applied field here causes a phase transition at the end of which a quantum gate has been enacted and quantum information propagated across the device.
**********
Speaker: Daniel Lidar, University of Southern California, Los Angeles, CA, USA
Title: Accurate and decoherence-protected adiabatic quantum computation
Abstract: In the closed system setting I will show how to obtain extremely accurate adiabatic QC by proper choice of the interpolation between the initial and final Hamiltonians. Namely, given an analytic interpolation whose first N initial and final time derivatives vanish, the error can be made to be smaller than 1/N^N, with an evolution time which scales as N and the square of the norm of the time-derivative of the Hamiltonian, divided by the cube of the gap (joint work with Ali Rezakhani and Alioscia Hamma). In the open system setting I will describe a method for protecting adiabatic QC by use of a hybrid encoding-dynamical decoupling scheme. This strategy can be used to protect spin-based universal adiabatic QC against arbitrary 1-local noise using only global magnetic fields. By combining error bounds for the closed and open system settings, I will show that in principle the method is scalable to arbitrarily large computations.
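The closed-system scaling claimed in the abstract can be summarised schematically as follows; the symbols (adiabatic error epsilon, total evolution time T, interpolation parameter s = t/T, spectral gap Delta) are illustrative notation, and the precise constants and choice of norms are those of the cited J. Math. Phys. paper.

```latex
% Schematic form of the quoted closed-system bounds: for an analytic
% interpolation H(s) whose first N initial and final time derivatives
% vanish, the adiabatic error and required evolution time behave as
\begin{align}
  \varepsilon \;\lesssim\; \frac{1}{N^{N}},
  \qquad
  T \;\sim\; N \,\frac{\|\partial_s H\|^{2}}{\Delta^{3}} .
\end{align}
```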
References:
Closed system case: J. Math. Phys. 50, 102106 (2009)
Open system case: Phys. Rev. Lett. 100, 160506 (2008)
**********
Speaker: David Poulin, Université de Sherbrooke, Sherbrooke, QC, Canada
Title: Quantum Metropolis Sampling: An algorithm to simulate thermal systems with a quantum computer
Key Words: Quantum algorithm, Quantum many-body physics, Quantum simulation, Markov Chain, Monte Carlo
Authors: K. Temme, T.J. Osborne, K. Vollbrecht, David Poulin, and F. Verstraete
Abstract: The original motivation to build a quantum computer came from Feynman, who envisaged a machine capable of simulating generic quantum mechanical systems, a task that is intractable for classical computers. Such a machine would have tremendous applications in all physical sciences, including condensed matter physics, chemistry, and high energy physics. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time-evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the more fundamental problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly for the simulation of interacting particles. In this talk, I will demonstrate that the corresponding quantum problem can be solved by a quantum Metropolis algorithm. This validates the quantum computer as a universal simulator, and proves that the so-called sign problem occurring in quantum Monte Carlo methods can be resolved with a quantum computer.
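For readers unfamiliar with the classical algorithm the abstract builds on, here is a minimal sketch of the classical Metropolis acceptance rule applied to a toy one-dimensional chain of +/-1 spins. The quantum Metropolis algorithm itself requires a quantum computer; this sketch only illustrates the classical accept/reject step, and all function names are illustrative.

```python
import math
import random

def metropolis_step(energy, state, propose, beta):
    """One classical Metropolis step: propose a move and accept it
    with probability min(1, exp(-beta * dE))."""
    candidate = propose(state)
    d_e = energy(candidate) - energy(state)
    if d_e <= 0 or random.random() < math.exp(-beta * d_e):
        return candidate  # accept the move
    return state          # reject, keep the old state

# Toy example: nearest-neighbour coupling on a chain of +/-1 spins.
def energy(spins):
    return -sum(a * b for a, b in zip(spins, spins[1:]))

def propose(spins):
    # Flip one spin chosen uniformly at random.
    i = random.randrange(len(spins))
    flipped = list(spins)
    flipped[i] = -flipped[i]
    return flipped

random.seed(0)
state = [1, -1, 1, -1, 1, -1]   # start from the highest-energy configuration
for _ in range(2000):
    state = metropolis_step(energy, state, propose, beta=1.0)
```

Run long enough, this Markov chain samples spin configurations from the Gibbs distribution at inverse temperature beta; the quantum algorithm in the talk plays the analogous role for Gibbs states of quantum Hamiltonians.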
**********
Speaker: Rob Spekkens, Perimeter Institute, Waterloo, ON, Canada
Title: Why the quantum? Insights from classical theories with a statistical restriction
Abstract:
A significant part of quantum theory can be obtained by postulating a single conceptual innovation relative to classical theories, namely, that agents face a fundamental restriction on what they can know about the physical state of any system. This talk will consider a particular sort of statistical restriction wherein only classical variables with vanishing Poisson bracket can be known simultaneously. When this principle is applied to a classical statistical theory of three-level systems (trits), the resulting theory is found to be operationally equivalent to the stabilizer formalism for qutrits. Applied to a classical theory of harmonic oscillators, it yields quantum mechanics restricted to quadrature eigenstates and observables. Finally, applied to a classical statistical theory of bits, it yields a theory that is almost equivalent to (but interestingly different from) the stabilizer formalism for qubits. I will discuss the significance of these results for the project of deriving the formalism of quantum theory from physical principles.
**********
Speaker: Maarten van den Nest, Max-Planck-Institut für Quantenoptik, Garching, Germany
Title: Simulating quantum computers with probabilistic methods
Key Words: Quantum computation, classical simulation
Abstract: The study of quantum computations that can be simulated efficiently classically is of interest for numerous reasons. From a fundamental point of view, such an investigation sheds light on the intrinsic computational power harnessed in quantum mechanics as compared to classical physics. More practically, understanding which quantum computations do not offer any speed-up over classical computation provides insights in where (not) to look for novel quantum algorithmic primitives. In this talk we discuss novel classical simulation methods that are centered on probabilistic methods ('weak simulation'). We show how these techniques outperform existing methods that rely on the exact computation of measurement probabilities ('strong simulation'). Using weak simulation methods, several new classes of simulatable quantum circuits are generated. For example, we show that various concatenations of matchgate, Toffoli, Clifford, bounded-depth, Fourier transform and other circuits are classically simulatable. Finally, we focus on famous quantum algorithms and investigate the origin of their computational power, or their lack thereof.
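The weak/strong distinction from the abstract can be illustrated on the smallest possible example, a computational-basis measurement on a single qubit. This toy sketch is an assumption for illustration only; the talk's actual results concern much larger circuit classes.

```python
import random

def strong_simulate(amplitudes):
    """'Strong' simulation: compute the exact outcome probabilities
    of a computational-basis measurement on a pure single-qubit state."""
    return [abs(a) ** 2 for a in amplitudes]

def weak_simulate(amplitudes):
    """'Weak' simulation: merely sample an outcome from the correct
    distribution, without reporting the probabilities themselves."""
    p0 = abs(amplitudes[0]) ** 2
    return 0 if random.random() < p0 else 1

plus = [2 ** -0.5, 2 ** -0.5]          # the |+> state
probs = strong_simulate(plus)           # exact probabilities, ~[0.5, 0.5]

random.seed(1)
samples = [weak_simulate(plus) for _ in range(1000)]
```

For a single qubit both tasks are trivial, but for many-qubit circuits a classical algorithm may be able to sample outcomes efficiently (weak simulation) even when computing any individual outcome probability exactly is hard, which is why the two notions separate.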
**********
Speaker: Pawel Wocjan, University of Central Florida, Orlando, Florida, USA
Title: Quantum Algorithm for Preparing Thermal Gibbs States
Abstract:
We present a quantum algorithm for preparing thermal Gibbs states of interacting quantum systems. This algorithm is based on Grover's technique for quantum state engineering, and its running time is dominated by the factor sqrt(D/Z_beta), where D and Z_beta denote the dimension of the quantum system and its partition function at inverse temperature beta, respectively. We discuss the differences between this algorithm and quantum Metropolis sampling (see the presentation by David Poulin) and outline the analysis of the errors that arise due to imperfect simulation of Hamiltonian time evolutions and limited performance of phase estimation (finite accuracy and nonzero probability of failure).
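To make the sqrt(D/Z_beta) cost factor concrete, here is a small numerical sketch for a hypothetical diagonal Hamiltonian with known energies. The example system and function name are illustrative assumptions, not part of the algorithm itself.

```python
import math

def gibbs_cost_factor(energies, beta):
    """Evaluate sqrt(D / Z_beta) for a toy diagonal Hamiltonian,
    where D is the Hilbert-space dimension and Z_beta the partition
    function at inverse temperature beta."""
    d = len(energies)
    z_beta = sum(math.exp(-beta * e) for e in energies)
    return math.sqrt(d / z_beta)

# Three-level example: at beta = 0 the Gibbs state is maximally mixed,
# so Z_beta = D and the factor equals 1; as beta grows, Z_beta shrinks
# and the factor (and hence the running time) increases.
energies = [0.0, 1.0, 2.0]
print(gibbs_cost_factor(energies, beta=0.0))  # 1.0
print(gibbs_cost_factor(energies, beta=2.0))  # > 1
```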