Quantum Computation and Quantum Information, by Michael A. Nielsen and Isaac L. Chuang (Cambridge University Press).



Lecture Notes: Complexity Theory

Chapter 1: Deterministic Turing Machines and Complexity Classes
Chapter 2: Nondeterministic Complexity Classes
Chapter 3: Completeness
Chapter 4: Oracles and the Polynomial Hierarchy
Chapter 5: Alternating Complexity Classes
Chapter 6: Complexity Theory for Probabilistic Algorithms

Lecture Notes: Quantum Computing

Chapter 1: Introduction
Chapter 2: Universal Quantum Gates
Chapter 3: Quantum Algorithms

Content

Complexity Theory. Complexity theory classifies algorithmic problems according to the amount of resources, such as time, space, and hardware, that they require. In this lecture, we study the most important complexity classes for deterministic, nondeterministic, parallel, and probabilistic computations. Particular attention will be paid to the relationships between different computation models and to complete problems in the most relevant complexity classes.

Quantum Computing. "You have nothing to do but mention the quantum theory, and people will take your voice for the voice of science, and believe anything." (George Bernard Shaw) The development of quantum computers aims at exploiting quantum mechanical effects to build non-classical computing systems. While it is still unclear whether quantum computers with more than just a few qubits are physically realisable, at least in theory quantum phenomena allow for fundamentally new kinds of computation. Indeed, the theory of quantum computation, which has been developed over recent years, yields interesting and surprising results. Quantum Turing machines and quantum gate arrays serve as formal models of quantum computation: probabilistic models of computation that use interference of configurations, with transition rules specified by probability amplitudes instead of probabilities. We will analyse these models and investigate several algorithms for quantum computers.
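The amplitude-based transition rules mentioned above can be made concrete with a minimal single-qubit sketch in plain Python (the two-level simulator below is my own illustrative construction; the Hadamard gate itself is standard):

```python
import math

# A qubit state as a pair of complex amplitudes (alpha, beta) for |0>, |1>.
# The Hadamard gate maps |0> -> (|0>+|1>)/sqrt(2) and |1> -> (|0>-|1>)/sqrt(2).
def hadamard(state):
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

state = (1.0, 0.0)           # start in |0>
state = hadamard(state)      # equal superposition: each outcome has prob ~1/2
probs_mid = [abs(a) ** 2 for a in state]
state = hadamard(state)      # amplitudes interfere: the |1> component cancels
probs_end = [abs(a) ** 2 for a in state]
print(probs_mid)  # ~[0.5, 0.5]
print(probs_end)  # ~[1.0, 0.0]
```

Classically, applying a fair coin flip twice leaves the bit uniformly random; with amplitudes, the second Hadamard cancels the |1> path entirely, which is precisely the interference of configurations referred to above.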

For example, there is the current search for a unified description of the particles and fields of nature [], or attempts to understand the principles underlying pattern formation in physics []. The second aspect of this theme is the discovery and explanation of phenomena in terms of simple frameworks. For example, there is the remarkable Bardeen-Cooper-Schrieffer theory of superconductivity [6], based upon the principles of quantum mechanics, or the current search for gravitational waves [], potentially one of the most useful consequences of the general theory of relativity.

Note that both these themes are somewhat gray.

There are differing degrees of universality, and physics does not concern itself with the reduction of all phenomena to fundamentals. It leaves many phenomena (the human body, climate patterns, computer design) to other disciplines. Here too, universality plays a role: physics is primarily interested in relatively simple phenomena, such as superfluidity, which do not have an especially detailed historical dependence (such as may be found, for example, in the functioning of a cell) and can therefore be reproduced relatively easily, by a variety of means, in many locations.

Traditionally, computer science is based upon a small number of universal models that are each supposed to capture the essence of some aspect of information processing. For example, the majority of work done on algorithm design has been framed within the well-known Turing machine model [] of computation, or one of its equivalents. Shannon's model [] of a communications channel is the foundation for modern work in information theory. Computer science is also concerned with the reduction of phenomena, but in a different way than is often the case in physics.

Reduction in physics often concerns the explanation of phenomena discovered without specific intent, such as superconductivity. In computer science, it is more typical to set a specific information processing goal ("I would like my computer to sort this list of names for me in such and such an amount of time") and then to attempt to meet that goal within an existing model of information processing.

What is the origin of the fundamental models used as the basis for further progress in computer science? Examination of the original papers shows that the founders used systems existing in the real world as inspiration and justification for the models of computation they proposed. For example, Turing analyzed the set of operations which a mathematician could perform with pen and paper, in order to help justify the claim that his model of computation was truly universal.

It is a key insight of the last thirty years that these pseudophysical justifications for the fundamental models of computation may be carried much further. For example, a theory of computation which has its foundations in quantum mechanics has been formulated [63].

Information is physical, as Landauer reminds us []. That is, any real information processing system relies for its implementation upon systems whose behaviour is completely described by the laws of physics.

Remarkable progress has been achieved by acting on this insight, re-examining and reformulating the fundamental models of information based upon physical principles. The hope, which has been fulfilled, is that such a reformulation will reveal information processing capabilities that go beyond what was thought to be possible in the old models of computation. The field of science which studies these fundamental connections between physics and information processing has come to be known as the physics of information.

The connection between physics and information processing is a two way street, with potential benefits for both computer science and physics. Computer science benefits from physics by the introduction of new models of information processing. Any physical theory may be regarded as the basis for a theory of information processing. We may, for example, enquire about the computational power of Einstein's general theory of relativity, or about the computational power of a quantum field theory.

The hope is that these new models of information processing may give rise to capabilities not present in existing models of information processing.

In this Dissertation we will primarily be concerned with the information processing power of quantum mechanics. The other possible implication for computer science is more ominous: there may be unphysical elements in existing theories of information processing which need to be rooted out if those theories are to accurately reflect reality.

Physics benefits in at least four ways from computer science. First, computer science may act as a stimulus for the development of new techniques which assist in the pursuit of the fundamental goals of physics.

For example, inspired by coding theory, error correction methods to protect against noise in quantum mechanics have been developed. One of the chief obstacles to precision measurement is, of course, the presence of noise, so error correcting codes to reduce the effects of that noise are welcome.

They are doubly useful, however, as a diagnostic tool, since error correcting codes can be used to determine what types of noise occur in a system. Computational physics has allowed us to investigate physical theories in regimes that were not previously accessible. Such investigations can lead to interesting new questions about those theories, and yield important insights into the predictions made by our physical theories.
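The twin protective and diagnostic roles of error correction described above can be illustrated with the classical three-bit repetition code (a toy classical stand-in for the quantum codes in question; the function names here are my own):

```python
def encode(bit):
    return [bit, bit, bit]  # three-fold repetition

def decode(block):
    # Majority vote recovers the bit if at most one position was flipped.
    return 1 if sum(block) >= 2 else 0

def syndrome(block):
    # Which positions disagree with the majority? This is the diagnostic
    # role mentioned above: the code reveals *where* the noise acted.
    majority = decode(block)
    return [i for i, b in enumerate(block) if b != majority]

noisy = encode(1)
noisy[2] ^= 1            # noise flips the third bit
print(decode(noisy))     # 1   (the message is recovered)
print(syndrome(noisy))   # [2] (and the error is located)
```

Quantum error-correcting codes achieve an analogous effect for qubits, with the extra subtlety that the syndrome must be extracted without measuring, and thereby disturbing, the encoded state.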

The third way physics benefits from computer science is that computers enable us to perform experiments that would once have been impossible or, at the least, much more difficult and expensive.

Computer-based methods for obtaining, analysing, and presenting data have opened up new experimental realms. For example, computers enormously simplify the analysis of data taken in particle accelerators, in which only a minuscule fraction of the events detected in a given experimental run may be of direct interest. Automated sifting of the data and identification of the relevant events is performed in an instant by powerful computers, rather than in the years or more it would take a human being to achieve the same results.

The fourth way physics benefits from computer science is more difficult to describe or justify. My experience has been that computer science is a great inspiration for fundamental questions about physics, and can sometimes suggest useful approaches to take in the solution of physics problems. This will be apparent several times during the main body of this Dissertation.

I cannot yet say precisely why this should be the case, although as we have seen, both physics and computer science involve the development of tools to reduce phenomena involving complex interacting systems to certain fundamental models, as well as continual questioning and refinement of those models.

Perhaps it is not so surprising that each field should have much to teach the other. This Dissertation is concerned principally with a special subfield of the physics of information, quantum information, in which the fundamental models for information processing are based upon the laws of quantum mechanics.

The earlier formulation of the question investigated by this Dissertation may thus be refined: What is discovered when the laws of quantum mechanics are used as the foundation for investigations of information processing and computation? To better understand the subject of quantum information, it is useful to have a concrete example in hand. This section presents a simple example which illustrates many of the basic themes of quantum information.

The example is also interesting in its own right, as it takes us straight to the edge of what is known, posing a fundamental question about quantum mechanics, inspired by the methods of computer science.

The example concerns the question of what properties of a quantum mechanical system may be measured. In the 1920s, Heisenberg and other researchers formulated the notion of a quantum mechanical observable.

Observables were introduced into quantum mechanics as a means of describing what properties of a quantum system may be measured. For example, a particle's position is regarded as an observable in quantum mechanics.

Mathematically, the concept of an observable is usually formulated as follows. An observable is any Hermitian operator acting on the state space of a physical system, where by state space we shall mean the usual Hilbert space associated with a physical system. Recall from elementary quantum mechanics that the measurement postulate of quantum mechanics as usually formulated has the following consequences: To each measurable quantity of a quantum mechanical system there is associated a mathematical object, an observable, which is a Hermitian operator acting on the state space of the quantum system.
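This formulation can be made concrete with the simplest nontrivial case: the Pauli Z operator, a Hermitian observable on a single qubit. The sketch below (plain Python, purely illustrative) applies the Born rule to an equal superposition:

```python
import math

# Pauli Z: a Hermitian operator with spectrum {+1, -1}, so a measurement
# of this observable can only yield the outcomes +1 or -1.
Z = [[1, 0], [0, -1]]

# State |psi> = (|0> + |1>)/sqrt(2), as a vector of amplitudes for |0>, |1>.
s = 1 / math.sqrt(2)
psi = [s, s]

# Born rule: outcome +1 (eigenvector |0>) occurs with probability |<0|psi>|^2,
# outcome -1 (eigenvector |1>) with probability |<1|psi>|^2.
p_plus = abs(psi[0]) ** 2
p_minus = abs(psi[1]) ** 2

# Expectation value <psi|Z|psi> = (+1)*p_plus + (-1)*p_minus
expectation = p_plus - p_minus
print(p_plus, p_minus, expectation)  # both probabilities ~0.5; expectation ~0
```

The eigenvalues of the operator are exactly the possible measurement outcomes, and the state determines only the probabilities with which they occur.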

The possible outcomes of the measurement are given by the spectrum of the observable. One of the most remarkable discoveries of quantum mechanics is that the theory implies limits to the class of measurements which may be performed on a physical system.

The most famous example of this is the Heisenberg uncertainty principle, which establishes fundamental limits upon our ability to perform simultaneous measurements of position and momentum.

Given the shock caused by Heisenberg's result that there are limits, in principle, to our ability to make observations on a physical system, it is natural to ask for a precise characterization of what properties of a system may be measured. For example, Dirac's influential text [61] makes the following assertion on the subject (page 37): "The question now presents itself: Can every observable be measured? The answer theoretically is yes. In practice it may be very awkward, or perhaps even beyond the ingenuity of the experimenter, to devise an apparatus which could measure some particular observable, but the theory always allows one to imagine that the measurement can be made."

That is, Dirac is asserting that given any observable for a reasonable quantum system, it is possible in principle to build a measuring device that makes the measurement corresponding to that observable.

Dirac leaves his discussion of the subject at that, making no attempt to further justify his claims. Later, Wigner [] investigated the problem, and discovered that conservation laws do, in fact, impose interesting physical constraints upon what properties of a system may be measured.

To my knowledge, there has been remarkably little other work done on the fundamental question of what observables may be measured in quantum mechanics. Not long after Heisenberg, Dirac, and others laid the foundations for the new quantum mechanics, a revolution of similar magnitude was underway in computer science.

The remarkable English mathematician Alan Turing laid out the foundations for modern computer science in a paper written in [].

Turing's work was motivated, in part, by a challenge set down by the great mathematician David Hilbert at the International Congress of Mathematicians held in Bologna. Hilbert's problem, the Entscheidungsproblem, was to find an algorithm by which all mathematical questions could be decided.

Remarkably, Turing was able to show that there is no such procedure. Turing demonstrated this by giving an explicit example of an interesting mathematical question whose answer could not be decided by algorithmic means.

In order to do this, Turing had to formalize our intuitive notion of what it means to perform some task by algorithmic means. To do this, Turing invented what is now known as the universal Turing machine. Essentially, a universal Turing machine behaves like an idealized modern computer, with an infinite memory. Turing's computer was capable of being programmed, in much the same sense as a modern computer may be programmed. Turing's programs computed mathematical functions: the machine would take a number as input, and return a number as output, with the function computed by the machine in this way being determined by the program being run on the machine.

In addition, it was possible that programs would fail to halt, continuing to execute forever, never giving a definite output.
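The idealized programmable machine just described, including the possibility of never halting, can be sketched as a minimal simulator (the transition-table encoding below is an illustrative convention, not Turing's original formalism):

```python
def run(program, tape, max_steps=1000):
    # Transition table: (state, symbol) -> (write, move, next_state).
    cells = dict(enumerate(tape))
    state, head = "start", 0
    for _ in range(max_steps):
        if state == "halt":
            return [cells[i] for i in sorted(cells)]
        symbol = cells.get(head, "_")          # "_" marks blank cells
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None  # no output: the program did not halt within the step budget

# A program that flips each bit of its input, then halts at the first blank.
flip = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, [1, 0, 1]))  # [0, 1, 0, '_']

# A program that marches right forever: it never reaches the halt state.
loop = {
    ("start", 0): (0, "R", "start"),
    ("start", 1): (1, "R", "start"),
    ("start", "_"): ("_", "R", "start"),
}
print(run(loop, []))  # None
```

The step bound in the simulator is an honest admission of the point made above: without it, running a non-halting program would itself fail to halt, and no general procedure exists to tell the two cases apart in advance.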

The most important assertion in Turing's paper has come to be known as the Church-Turing thesis. Roughly speaking, this thesis states that any function which may be computed by what we intuitively regard as an algorithm may be computed by a program running on a universal Turing machine, and vice versa. The reason this thesis is so important is that it asserts the equivalence of the intuitive notion of an algorithm with the rigorous model of the universal Turing machine. (It is worth noting that many other researchers arrived at similar results around the same time, notably Church and Post. However, it is my opinion that it is Turing's grand vision that has ultimately proved to be the deepest and most influential.)

The validity of the Church-Turing thesis has been repeatedly tested and verified inductively since Turing's original paper, and it is this continuing success that ensures that Turing's model of computation, and others equivalent to it, remain the foundation of theoretical work in computer science.

One observation made by Turing was that the programs for his universal machine could be numbered 0, 1, 2, and so on. This led him to pose the halting problem: does program number x halt on input of the value x, or does it continue forever? Turing showed that this apparently innocuous question has no solution by algorithmic means. In fact, it is now known that in some sense most questions admit no algorithmic solution. In Chapter 2 we review the proof that there is no algorithm which can compute the halting function, establishing Turing's great result.
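The proof rests on a diagonal argument that can be sketched directly in code: any claimed halting decider is defeated by a program built to do the opposite of the decider's own prediction about it (the decider below is a deliberately wrong stand-in, since a correct one cannot exist):

```python
def diagonal(halts):
    # Given a claimed decider halts(f) -> bool ("does calling f() terminate?"),
    # construct a program that does the opposite of what halts predicts for it.
    def g():
        if halts(g):
            while True:      # the decider said "halts", so loop forever
                pass
        return "done"        # the decider said "loops", so halt immediately
    return g

# A decider that always answers "loops" is wrong on its diagonal program:
g = diagonal(lambda f: False)
print(g())  # done -> g halts, contradicting the decider's answer

# A decider that answers "halts" is wrong too: its diagonal program loops
# forever, so we do not call it here. No decider can be right about its
# own diagonal program, hence no halting decider exists.
```

The construction mirrors Turing's argument exactly: assuming an algorithm for the halting function lets us build a program whose behaviour contradicts that algorithm's verdict about it.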

For now, we will assume that this remarkable result is correct. Turing's result paves the way for an interesting quantum mechanical construction. Suppose we consider a quantum mechanical system whose state space is spanned by orthonormal states |0⟩, |1⟩, ..., one for each program number, and define the halting observable ĥ ≡ Σ_x h(x)|x⟩⟨x|, where h(x) is the halting function: h(x) = 1 if program number x halts on input x, and h(x) = 0 otherwise.
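As a toy illustration of such an observable, one can truncate the state space to finitely many programs and substitute a decidable stand-in for the (uncomputable) halting function; the diagonal matrix below is Hermitian, with spectrum contained in {0, 1}:

```python
# Toy stand-in for the halting function: pretend program x halts on input x
# exactly when x is even. The real h(x) is uncomputable; this decidable
# substitute only illustrates the *shape* of the halting observable.
def h(x):
    return 1 if x % 2 == 0 else 0

N = 4  # truncate the state space to span{|0>, |1>, |2>, |3>}

# The observable sum_x h(x)|x><x| is diagonal in the |x> basis.
H_obs = [[h(x) if x == y else 0 for y in range(N)] for x in range(N)]

# Diagonal with real entries, hence Hermitian; its eigenvalues are the
# diagonal entries, i.e. the values of h.
eigenvalues = sorted(set(H_obs[x][x] for x in range(N)))
print(eigenvalues)  # [0, 1]
```

The mathematical object is unproblematic; the open question posed in the text is whether a physical measuring device for the true halting observable can exist.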

Notice that it has two eigenvalues, 0 and 1. Is the halting observable a measurable property of the quantum mechanical system? More precisely, is it possible to construct a measuring device which performs a measurement of the halting observable?

There are two possibilities: