First in a 2-part series on quantum computing.
Computers today are fast approaching some fundamental limitations. Perhaps their biggest problem is that they exploit the classical physics that governs the hurly-burly rush of countless billions of electrons through nearly as many transistors. And the chips at the heart of today’s computers are running out of room for classical physics to work.
To make those chips’ transistors switch faster, we’ve primarily relied on making the devices smaller. But when they begin to approach 10 nanometers or so—and it is the goal of the semiconductor industry to get there in the next decade—very odd things will happen. Formerly well-behaved electrons will start revealing their quantum nature—darting across the transistor on the dictates of probability, regardless of whether the device is switched on or off. When transistors reach those infinitesimal dimensions and electrons start showing their true colors, computer makers will have two choices: try to fend off the quantum weirdness with radically new types of semiconductors and transistors, or embrace the weirdness.
We say: surrender to the weirdness. Working with the quantum nature of things instead of against it will open up vast new frontiers for computing. And achievements during the past couple of years at university and government laboratories around the world have made it clear that a large-scale, practical quantum computer could be built, probably in the next 25 to 30 years. These achievements have demonstrated that the semiconductor manufacturing technologies underpinning modern computing, which were developed over nearly half a century, need not be abandoned. On the contrary, they will be instrumental in making quantum computers a practical reality.
These machines will take computing where it’s never been before. Most notably, there are classes of problems for which a conventional computer can do little more than try out all the possible solutions one at a time until it stumbles on the answer. Say, you have a phone number and want to look up the name it’s paired with in a phone book that has 1 million entries. There’s not much you can do but go page by page looking for the match. On average, your classical computer must examine half a million entries before finding a match. Sure, at gigahertz microprocessor speeds even that won’t take long, but there are plenty of much larger needle-in-a-haystack problems scientists face all the time, some of which would take your laptop 100 years to complete.
If you had a computer based on the principles of quantum mechanics, however, you could, in effect, examine all the entries in the telephone book practically at the same time. Such a quantum computer would need just 1000 steps—one five-hundredth of what a classical computer needs—to find the right name in the million-entry phone book. The theoretical ability of quantum computers to perform parallel processes seemed like an odd parlor trick when they were dreamed up in the 1980s, first by Richard Feynman and more concretely by David Deutsch. But in 1994 something happened that put quantum computing squarely in the crosshairs of governments, armed forces, and everyone else with digital secrets to keep.
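To see the two scalings side by side, here is a minimal Python sketch. It does not simulate a quantum computer; it simply tallies the query counts, about N/2 on average for a classical linear search versus roughly √N for a Grover-style quantum search, for the million-entry phone book above.

```python
import math

def classical_steps(n_entries: int) -> int:
    """Average lookups for an unsorted linear search: about half the entries."""
    return n_entries // 2

def grover_steps(n_entries: int) -> int:
    """Rough query count for a Grover-style quantum search, which scales as sqrt(N)."""
    return math.isqrt(n_entries)

n = 1_000_000
print(classical_steps(n))                      # 500000
print(grover_steps(n))                         # 1000
print(classical_steps(n) // grover_steps(n))   # 500: the "one five-hundredth" above
```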
Peter Shor, a theoretical mathematician then at Bell Labs, discovered an algorithm that would let a quantum computer determine the prime factors of a large integer far more efficiently than any known classical method. Factoring is one of those problems that tie conventional computers in knots. Computers are so bad at it, in fact, that most encryption systems today rely on the products of enormous prime numbers, figuring it would take a computer decades to factor them. Shor’s algorithm changed all that, and the idea that so much information could become so vulnerable has sparked a worldwide race to build a machine powerful enough to crack codes.
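The lopsidedness that encryption exploits is easy to demonstrate in a toy Python sketch, using two small primes for illustration (real cryptosystems use numbers hundreds of digits long). Multiplying the primes is a single operation, but recovering them by brute-force trial division already takes about 100 000 steps, and that cost grows with the square root of the number being factored.

```python
def trial_division_factor(n: int) -> int:
    """Find the smallest prime factor by brute force; work grows like sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

p, q = 104729, 104723              # two small primes, purely for illustration
n = p * q                          # multiplying them is instant...
print(trial_division_factor(n))    # ...but undoing it takes ~100 000 trial divisions
```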
The first step in building a quantum computer is to find something to act as a quantum bit, or qubit, something whose quantum state can be read and manipulated. The trouble is that a quantum state is an exceedingly delicate thing, mainly because it can be changed by the most evanescent interactions—a fluctuation in a magnetic field, a wayward photon, and so on. Just a year after Shor’s breakthrough, two physicists then at Austria’s University of Innsbruck, Juan Ignacio Cirac and Peter Zoller, theorized that a string of ions held fast in a vacuum by an electromagnetic field and cooled to within a few thousandths of a degree above absolute zero could act as stable qubits and form the basis of a quantum computer. Scientists at the U.S. National Institute of Standards and Technology (NIST), the nation’s timekeeper, already had plenty of experience trapping and cooling ions from their work with atomic clocks, and they wasted no time in putting the scheme into practice. That same year, David Wineland of NIST and one of us (Monroe) used a trapped beryllium ion as a qubit to perform logic operations that are key to running a quantum computer.
Since then, physicists have come up with at least half a dozen potential ways to do quantum computation—including using the atomic nuclei in dissolved organic compounds as qubits and manipulating electrons within superconducting loops. With few exceptions, though, these schemes will never lead to a quantum computer that can solve a useful problem, because they simply can’t handle more than a dozen or so qubits, and what’s needed are hundreds—if not thousands.
We can’t construct a full-scale ion trap big enough to house that many qubits. So the only way we can see to build a practical quantum computer is to borrow a page from the electronics industry and build the equivalent of quantum integrated circuits. The analogy here is to transistors—traps work pretty much the same way if you shrink them down enough and put many of them on the same piece of semiconductor. That was demonstrated just last year when our research group at the University of Michigan and Wineland’s group at NIST independently produced the first ion-trap microchips built with the same techniques that microprocessor and MEMS makers employ. These chips are far from being useful computers themselves, but they are the first step in a path that could take us beyond the limits of computing as we know it.
The heart of any quantum computer, whether it’s built on a sliver of semiconductor or not, is the qubit. A word about the qubit: it’s odd.
In an ordinary computer, information is stored as bits, usually a minuscule reservoir of charge or the charge’s absence in a memory cell’s capacitor. At any given instant, an ordinary binary digit can be in one and only one of two different states. But the value of a qubit is determined by the quantum states of individual particles. So, like those quantum states, a qubit can have the value 1, or 0, or it can be—in the paradoxical world of the quantum—both values at the same time. This versatility is central to the power of quantum computers. In an ordinary computer you can represent a number between 0 and 31 using five binary digits. But using the same number of qubits you could represent all 32 numbers at once and perform the same calculation on them simultaneously. And that’s not even the end of the weirdness: two or more qubits can be linked together in ways no two transistors could ever be, influencing each other instantaneously—even if they are separated by a distance of light-years.
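The bookkeeping behind that claim can be made concrete. A minimal sketch, assuming nothing beyond Python and NumPy: an n-qubit register is described by 2ⁿ complex amplitudes, so a definite 5-bit value occupies a single slot, while an equal superposition spreads weight across all 32 values at once.

```python
import numpy as np

n_qubits = 5
dim = 2 ** n_qubits                  # 32 basis states for 5 qubits

# A classical 5-bit register holds exactly one of these values at a time...
classical_value = 19                 # binary 10011

# ...while a quantum register is a vector of 2^n complex amplitudes.
state = np.zeros(dim, dtype=complex)
state[classical_value] = 1.0         # a definite value: all weight on one slot

# An equal superposition puts weight on all 32 values simultaneously.
superposition = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
print((abs(superposition) ** 2).sum())   # probabilities still sum to 1.0
```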
The specific quantum state of a particle that is generally exploited to determine a qubit’s value is called spin. In an ion-trap computer as well as several other schemes, the value of a qubit is determined by the direction of a particle’s spin state.
Spin is a measure of a particle’s angular momentum. Angular momentum is easy to understand for large spinning objects like a basketball, but photons, electrons, and other fundamental particles that make good qubits are as close as you can get to being dimensionless points in space. The question is, How can they spin?
They don’t. Like many aspects of quantum mechanics, spin makes no intuitive sense—even to physicists. But it’s real, and it’s something measurable. For a particle, spin is an intrinsic property like charge, not something that comes about because of physical rotation.
Spin has direction—up or down, in quantum computing’s shorthand—and it’s the direction we use to represent the value of the qubit. The qubits used in ion-trap quantum computers rely on the spin state of an ion’s outermost electron and that of its nucleus. If the electron’s spin is aligned with that of the nucleus it orbits, we say the qubit is in the 1 state. If the two spins point in opposite directions, we say the qubit is in the 0 state. And the qubit ion can be put in a combination of 1 and 0 if the electron’s spin is itself a combination of up and down.
This ability of a qubit to have two values simultaneously is called the principle of superposition, and it allows a register of qubits to hold exponentially more information than a register with the same number of classical bits. For two ordinary bits, for example, the possible combinations are 00, 01, 10, or 11. But for qubits in a state of superposition, their values could be all four of those numbers at the same time.
Best of all, you can perform a calculation on all four at once, whereas in a classical computer it would have to be done one at a time. Although that would give you only a fourfold improvement for the 2-qubit example, the gain grows quickly as the number of bits increases. In theory, with n qubits you could be calculating with 2ⁿ numbers at once. With just 300 qubits, you could simultaneously perform a calculation on more numbers than there are atoms in the universe.
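That last claim is easy to verify, taking the commonly quoted rough estimate of 10⁸⁰ atoms in the observable universe as the benchmark:

```python
n_qubits = 300
basis_states = 2 ** n_qubits          # numbers a 300-qubit register spans at once
atoms_in_universe = 10 ** 80          # commonly quoted rough estimate

print(basis_states > atoms_in_universe)   # True
print(len(str(basis_states)))             # 91: 2^300 is a 91-digit number
```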
But quantum information is very delicate. Interactions with the environment—a random fluctuation in an electric field, say—can turn a superposed qubit into just another 1 or 0, or flip a 0 to a 1, or vice versa. Such an occurrence is called decoherence, and it can immediately render all the computation up to that point worthless.
The length of time that a qubit remains in the state or superposition of states that you put it in is called the coherence time. A quantum computer is limited in the number of operations it can perform by the time it takes for decoherence errors to overwhelm the computation. Ion traps are designed to be extremely quiet environments, in which qubits have been known to last 10 seconds or more—long enough to carry out several complex calculations.
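Those two numbers, coherence time and operation time, set the whole budget. A back-of-the-envelope sketch, using the 10-second coherence time above and an assumed gate duration of about 10 microseconds (a typical order of magnitude for trapped-ion operations, not a figure from this article):

```python
coherence_time_s = 10.0    # qubit lifetime in a quiet ion trap, from the text above
gate_time_s = 10e-6        # assumed duration of one operation (an order-of-magnitude
                           # guess, not a figure from this article)

max_ops = coherence_time_s / gate_time_s
print(f"roughly {max_ops:.0e} operations before decoherence overwhelms the qubit")
```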
Another trick is that qubits can be linked together in a way that has no counterpart in the macroworld we experience. This linkage, called entanglement, was one of the main arguments against quantum mechanics when it was introduced in the early 20th century. According to the theory, two particles can be linked so that if you perturb or measure one, the quantum state of the other changes instantly. For instance, you could entangle two ions so that their spin states will always be opposite to each other. If you measure the state of the first as a 1, the second will instantly become a 0. Entanglement is key to executing many quantum computer programs, because it allows operations performed on one qubit to simultaneously be performed on all the others it is entangled with. Rainer Blatt’s group at the University of Innsbruck set the record for most entangled ions, at eight.
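The anticorrelated pair described above can be simulated numerically. A small sketch, assuming NumPy: prepare the entangled state (|01⟩ + |10⟩)/√2, measure the first qubit, and the second is instantly fixed to the opposite value.

```python
import numpy as np

# Two-qubit amplitudes, ordered |00>, |01>, |10>, |11>.
# The entangled state (|01> + |10>)/sqrt(2): the two spins are always opposite.
state = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng()

# Measure the first qubit: it reads 1 with the combined probability of |10> and |11>.
p_first_is_1 = abs(state[2]) ** 2 + abs(state[3]) ** 2
first = int(rng.random() < p_first_is_1)

# Collapse: zero out amplitudes inconsistent with the outcome, then renormalize.
for b in range(4):
    if (b >> 1) != first:
        state[b] = 0
state /= np.linalg.norm(state)

# The second qubit is now fully determined, and always opposite the first.
second = int(np.argmax(np.abs(state))) & 1
print(first, second)   # prints 0 1 or 1 0, never matching values
```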
To understand exactly how entanglement and other quantum elements work in an ion-trap computer, you have to understand how an ion trap operates. A linear Paul trap, the Nobel Prize-winning invention of Wolfgang Paul, is a vacuum chamber that houses four long electrodes arranged so that they form the long edges of a rectangular box. On two of the electrodes, diagonally across from each other, is a voltage that oscillates at a radio frequency. On the other two is a dc voltage. The combination of the electric fields emanating from the electrodes tends to force ions toward the center line equidistant from all four electrodes.
Here’s why. Consider, for a moment, only the RF electrodes. At any one instant, the forces on an ion between them are like the force of gravity on a ball sitting on a saddle-shaped surface. The saddle traps in one dimension—the ball will not roll in either uphill direction. And it antitraps in the other—if moved, the ball will tend to roll down and off the saddle. But because the electric field from the RF electrodes is always oscillating, it’s as if the saddle were rotating beneath the ball. An ion slightly off the centerline would find itself on the uphill slope being pushed back in line for half of the cycle and on the downhill slope falling outward for the other half. However, the RF signal is designed so that the outward force is weaker than the inward force when the ion is close to the center, so on average the force an ion feels is toward the centerline. The trap’s other electrodes, the ones with the dc voltage, keep the ion from wandering along the centerline by pushing on it from both sides.
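The rotating-saddle argument can be checked numerically. A toy sketch, assuming an ideal oscillating quadrupole field so that the ion’s radial motion reduces to the dimensionless Mathieu equation: for a modest drive strength the excursion stays bounded, showing that the time-averaged force really is restoring.

```python
import numpy as np

# Dimensionless Mathieu equation for one radial direction of an ideal Paul trap:
#   x'' = -(a - 2*q*cos(2*t)) * x
# where a reflects any dc field and q the RF drive strength. We take a = 0 and a
# modest q, the regime where the trap is stable.
a, q = 0.0, 0.3
dt, steps = 1e-3, 200_000

x, v = 1.0, 0.0                # ion starts off the centerline, at rest
t, max_excursion = 0.0, 0.0
for _ in range(steps):
    v += -(a - 2 * q * np.cos(2 * t)) * x * dt   # semi-implicit Euler step
    x += v * dt
    t += dt
    max_excursion = max(max_excursion, abs(x))

# The excursion stays bounded near its starting amplitude: averaged over the RF
# cycle, the inward pushes win and the ion is held to the centerline.
print(max_excursion)
```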
The result of the trapping is a string of ions along the centerline of the trap, and because they all have the same charge, they repel each other. Imagine the ions as balls suspended on strings and attached to each other by springs. The ions can be frozen in place by catching them at the intersection of three lasers, or they can slide back and forth along the line or vibrate against each other. The ions’ collective motion is called the vibrational state, and it acts like a data bus. Starting from a standstill, a sequence of specially tuned lasers can map one qubit’s data onto the shared vibrational state, and the vibrational state can then alter a second qubit. That mapping technique is key to carrying out the operations that make up quantum algorithms.
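The balls-and-springs picture can be made quantitative. A sketch of the classical normal modes in dimensionless units (ion mass, charge, and trap frequency all set to 1): two ions in a shared harmonic well settle at the spacing where the trap force balances their Coulomb repulsion, and small oscillations about that point separate into a center-of-mass mode at the trap frequency and a stretch mode at √3 times it.

```python
import numpy as np

# Two ions of equal mass sharing one harmonic well and repelling via the Coulomb
# force. In units where mass, charge, and trap frequency are all 1, the potential
# is (x1^2 + x2^2)/2 + 1/(x2 - x1).
d = 2 ** (1 / 3)           # equilibrium spacing: trap force balances repulsion
k = 2 / d ** 3             # effective Coulomb "spring constant" at that spacing

# Stiffness matrix for small oscillations about equilibrium.
K = np.array([[1 + k, -k],
              [-k, 1 + k]])

freqs = np.sqrt(np.linalg.eigvalsh(K))
print(freqs)   # [1.0, 1.732...]: center-of-mass mode at the trap frequency,
               # stretch mode at sqrt(3) times it
```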
By 2003 ion traps had proven that they could do all the basic functions needed for quantum computing, but could they handle enough qubits to do anything useful? [See sidebar, “5 Things Every Quantum Computer Needs.”] It was time to prove that ion-trap computers could be scaled up. And that meant shrinking them down.
The natural step was to turn to photolithography and other methods for making microchips. But we knew it wouldn’t be easy. Traps have pretty exotic requirements compared with your garden-variety logic chip. For example, the RF voltages, which oscillate at anywhere from 15 to 200 megahertz, are typically about 50 to 300 volts. Compare that with the 1.5-V signals inside a modern microprocessor. Making chips with insulators strong enough to survive such large potential differences is difficult, as is dissipating the heat the RF electrodes generate. We fried our share of chips giving it our best shot.
Trap designs fall into two general categories: symmetric and asymmetric. For symmetric traps, dc and RF electrodes are arranged so the sum of the dc electric field is 0 at the midpoint between the RF electrodes. Our chip is symmetric, and it’s carved from a wafer of gallium arsenide. Using photolithography and several types of standard etching techniques, we built a segmented trap where the ions line up in a narrow rectangular hole about 60 micrometers across and more than 1 millimeter long that goes all the way through to the back of the chip. Each of the trap’s four segments has four electrodes that look like microscopic diving boards, two on each side, arranged one on top of the other, to form the familiar four electrodes of a Paul trap. The segments allow us to push ions from one side of the trap to the other or even push them out of the trap simply by manipulating the dc voltage—applying a negative voltage to draw in a positive ion from a neighboring segment, for instance. Groups at other labs such as Sandia National Laboratories, in Albuquerque, have built microfabricated symmetric traps working on the same principles.
Because symmetric traps are made on thin slivers of already thin semiconductor wafers, the thickness of the insulator between electrodes is limited, which limits the voltage you can apply to them. This restriction may not be fundamental, however; a European ion-trap fabrication effort has a plan for a semiconductor trap with thick insulating layers.
The other type of ion-trap geometry is the asymmetric trap, in which the RF node is not symmetrically located with respect to the electrodes. The ions float above the surface of the chip, out of the plane of the semiconductor, obviating a symmetric trap’s hole and cantilever electrodes. Wineland’s group at NIST has already built such a trap using gold electrodes patterned on the surface of a sapphire substrate, and one constructed at Alcatel-Lucent’s Bell Labs uses aluminum for the conducting electrodes and silicon oxide as the insulator. Figuring out the voltages that will keep the ions still and suspended above the chip is complex compared with what’s needed in symmetric traps. Getting the lasers on the ions is also a bit more difficult. They either have to be shot across the surface of the chip or through holes etched in particular areas of the trap. Bringing a laser beam across the surface can scatter its light—which makes reading the state of the ions more difficult. But without a bunch of trenches cut all the way through the chip, as with symmetric traps, it’s easier to lay out an interconnected array of traps. Asymmetric traps are also attractive because their construction requires less three-dimensional carving, making the process more like traditional chip fabrication, which is largely two-dimensional.
Regardless of the trap type, making traps chip scale makes controlling the motion of the ions, which is critical to many calculations, even harder. Anomalous electric fields appear on the electrodes that make the ions vibrate and heat up—wreaking havoc with our calculations. Noise that overlaps the vibrational frequency of an ion in the trap—about 1 MHz—is the prime culprit. Ions in chip traps experience more noise than we would have expected. The source of this noise, called “patch-potential noise” for the patches of voltage that seem to move around the electrodes, remains a mystery. Researchers are trying to identify its cause and hope to eliminate it. We do know, for example, that patch-potential noise can be suppressed greatly by decreasing the temperature of the electrodes. In one experiment, halving the temperature from room temperature to 150 K cut the noise at the ion by an order of magnitude. Other experiments have shown that the farther from the electrodes the ions are, the less noise they experience, so making bigger traps may help keep things cool. Researchers also would like to identify materials and surface-preparation techniques that limit noise or—better yet—don’t create any at all.
In some ways, a full-scale quantum computer would work like the standard desktop computer, in that it would have a place to store data, a place where a program manipulates the data, and interconnections to move the data from one to the other. In the computer you are using now, bits of data—stored as quantity of charge or its absence—are transferred from memory to a processor in the form of levels of voltage. At the processor the computer’s program determines which logic operations the bits will be subjected to. Once the logic operations are completed, the bits are converted to amounts of charge and stored in memory again.
Similarly, in an ion-trap computer, stored qubits would be called from a storage trap to a logic trap, the kind we’ve been building so far. The two traps would be connected by a long trap that acts like an interconnect or a data bus. In truth, there’s little structural difference between the memory, logic, and interconnect regions; so by building one, we’ve pretty much built them all.
Qubits leaving the storage trap would be moved along the centerline of the interconnect trap by varying the strength of the dc electric field holding the ion in place—strengthening the field ahead of the ion and weakening the field behind it, thereby pulling it along. Once the qubits are in the logic trap, the program—the series of laser pulses that rotate, entangle, and otherwise manipulate the qubits—goes to work. The answer to a calculation could be read out then, or the qubits could be sent down another interconnect and later brought back to the logic region to continue the calculation with other qubits. Because ions cannot pass each other on the interconnect, there would also have to be junctions—points where three trap lines converge—so that ions could move to other interaction regions or storage areas or simply get out of the way of other ions coming down the interconnect.
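A toy sketch of that shuttling scheme, with entirely made-up electrode shapes and voltages: model each dc segment’s contribution along the trap axis as a simple potential well, then ramp the voltage from one segment to the next and watch the combined minimum (and with it, the ion) slide down the line.

```python
import numpy as np

# Toy model of shuttling an ion along the trap axis. Each dc segment contributes
# a well-shaped potential (a hypothetical Gaussian form, made up for illustration);
# ramping voltage from one segment to the next slides the combined minimum.
z = np.linspace(0, 300, 3001)            # axial position, micrometers
centers = [50, 150, 250]                 # three electrode segments (made up)

def segment_well(z, center, width=100.0):
    """Hypothetical axial potential from one dc segment at unit voltage."""
    return -np.exp(-((z - center) / width) ** 2)

for ramp in np.linspace(0, 1, 5):        # strengthen ahead, weaken behind
    voltages = [1 - ramp, ramp, 0.0]
    total = sum(v * segment_well(z, c) for v, c in zip(voltages, centers))
    print(f"ramp {ramp:.2f}: well minimum at z = {z[np.argmin(total)]:.0f} um")
# The minimum moves smoothly from the first segment (z = 50) to the second (z = 150).
```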
It sounds simple, but such a structure would have to be repeated and connected many dozens of times on the same chip to handle the number of ions we’d need. Assume that we require 100 qubits to perform a particular algorithm and that each qubit is encoded with 50 extra ions for error correction. This 5000-ion array would need on the order of 50 000 individually controlled dc electrodes and their attendant wires. Therefore, a quantum computer equivalent of very-large-scale integration would be required to handle the control circuitry just to move the ions around.
Five thousand ions would need many dozens of lasers for cooling, detection, and gate operations. They’d all have to be precisely controlled in coordination with the ions’ motion in the trap—which is in turn determined by the 50 000 dc electrodes. The lasers would have to be aligned on the ion and maintain that alignment over the entire course of the computation, a straightforward task for a small experiment but nearly impossible for an array of 5000 ions without feedback-controlled motorized mirrors.
Such considerations lead to a great irony: you’d need a great deal of infrastructure, including a powerful classical computer, to run a useful quantum computer. But there is hope. The small-scale quantum algorithms that scientists are running today and plan to run in the near future will almost certainly lead to insights that could make full-scale quantum computing, if not easy, at least more tractable.
About the Authors
CHRISTOPHER MONROE leads the trapped-ion quantum computing group at the University of Maryland, College Park, and was involved in many of the field’s milestones, including the first demonstration of a quantum logic gate in an ion trap. JONATHAN D. STERK is a doctoral student in Monroe’s lab, and DANIEL STICK is a former student of Monroe’s now doing postdoctoral research at Sandia National Labs, in Albuquerque.
To Probe Further
The authors described one of the first ion-trap chips in “Ion Trap in a Semiconductor Chip,” by Daniel Stick et al., Nature Physics, January 2006, pp. 36–39.