Quantum computers built by companies like Google and IBM are general-purpose, gate-based machines. They can solve any problem and should show a considerable speedup for specific classes of problems; or at least they will, once the qubit count gets high enough. Right now, these quantum computers are limited to a few dozen qubits and have no error correction. Getting them to the necessary scale presents a series of difficult technical challenges.
The D-Wave machine is not general purpose; it’s technically a quantum annealer, not a quantum computer. It performs computations by finding the low-energy states of different configurations of its quantum devices in hardware. As such, it only works if a computational problem can be translated into an energy-minimization problem over one of the possible chip configurations. This is not as limiting as it might sound, as many forms of optimization can be recast as energy minimization, including things like complicated scheduling issues and protein structures.
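To make that translation concrete, here’s a minimal classical sketch of one such mapping (a toy example, not D-Wave’s API; the list of numbers is invented): splitting a list into two equal-sum halves becomes a search for the spin assignment that minimizes an Ising-style energy function.

```python
# Toy example: number partitioning posed as energy minimization, the
# form a quantum annealer accepts. Splitting the list into two
# equal-sum halves is equivalent to finding spins s_i in {-1, +1}
# that minimize E(s) = (sum_i a_i * s_i)**2.
from itertools import product

numbers = [4, 7, 1, 6, 2]  # hypothetical values to split in two

def energy(spins):
    """Ising-style energy: zero exactly when the two halves sum equally."""
    return sum(a * s for a, s in zip(numbers, spins)) ** 2

# Brute force for clarity; an annealer instead samples low-energy spin
# configurations in hardware rather than enumerating them all.
best = min(product((-1, 1), repeat=len(numbers)), key=energy)
print(best, energy(best))  # e.g. (-1, 1, 1, -1, 1) with energy 0
```

The same pattern covers the scheduling and protein-structure examples: encode each candidate solution as spins, and write an energy function whose minimum is the best solution.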
It’s easiest to think of these configurations as a landscape with a series of peaks and valleys, with problem solving being the equivalent of searching the landscape for the lowest valley. The more quantum devices there are on D-Wave’s chip, the more thoroughly it can sample the landscape. So increasing the number of qubits is essential to the utility of a quantum annealer.
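A classical cousin of that valley search is simulated annealing, sketched below reusing the energy function and numbers list from the example above. The hardware does something loosely analogous in physics rather than software (including quantum tunneling between valleys), so treat this only as an illustration of landscape sampling, not of D-Wave’s mechanism.

```python
# Simulated annealing: a random walk over spin configurations that is
# gradually "cooled" so it settles into a low valley of the landscape.
import math
import random

def anneal(energy, n_spins, steps=5000, seed=1):
    """Single-spin-flip annealing over spin vectors in {-1, +1}^n."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    e = energy(spins)
    for step in range(steps):
        temp = max(0.01, 5.0 * (1 - step / steps))  # cooling schedule
        i = rng.randrange(n_spins)
        spins[i] *= -1                               # propose one spin flip
        e_new = energy(spins)
        # Downhill moves are always accepted; uphill moves are accepted with
        # Boltzmann probability, letting the walk escape shallow valleys.
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
        else:
            spins[i] *= -1                           # reject: undo the flip
    return spins, e

print(anneal(energy, len(numbers)))  # usually lands in a zero-energy valley
```

Note that a run can end in a valley that is low but not the lowest, which previews the fault-tolerance point below.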
This plays to the strengths of D-Wave’s hardware, as it’s much easier to add qubits to a quantum annealer; the company’s current offering has 2,000 of them. There is also the question of fault tolerance. While errors in a gate-based quantum computer typically result in useless output, failures on a D-Wave machine typically mean that the answer it returns is low energy, but not the lowest. And for many problems, a reasonably optimized solution may suffice.
What has been less clear is whether the approach offers clear advantages over algorithms run on conventional computers. For gate-based quantum computers, researchers had already developed the math showing the potential for quantum supremacy. That’s not the case for quantum annealing. Over the past few years, there have been a number of instances where D-Wave’s hardware showed a clear advantage over conventional computers, only for a combination of algorithm and hardware improvements on the classical side to erase the difference.
Through the generations
D-Wave hopes that its new system, which it calls Advantage, will be able to demonstrate a clear difference in performance. Before today, D-Wave offered a 2,000-qubit quantum optimizer. The Advantage system increases that number to 5,000. Just as critically, those qubits are connected in additional ways. As mentioned above, problems are structured as a specific configuration of connections between the machine’s qubits. If a direct connection between two of them isn’t available, some of the qubits must be used to establish the connection and are therefore unavailable for problem solving.
The 2,000-qubit machine had a total of 6,000 possible connections among its qubits, for an average of three per qubit. The new machine increases that total to 35,000, or an average of seven connections per qubit. Obviously, this allows far more problems to be configured without dedicating qubits to making connections. A white paper shared by D-Wave indicates that it works as expected: larger problems fit into the hardware, and fewer qubits need to be used as bridges to connect other qubits.
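For a sense of how connectivity changes that bridging overhead, here’s a rough sketch using D-Wave’s open-source Ocean tools (the networkx, dwave-networkx, and minorminer packages). The stock chimera_graph(16) and pegasus_graph(16) generators stand in for the old and new topologies; real chips have some inactive qubits, so the exact counts here are only indicative.

```python
# Compare how many physical qubits a fully connected 12-variable problem
# consumes on the two (idealized) chip topologies.
import networkx as nx
import dwave_networkx as dnx
from minorminer import find_embedding

problem = nx.complete_graph(12)

for name, target in (("Chimera (2,000-qubit era)", dnx.chimera_graph(16)),
                     ("Pegasus (Advantage era)", dnx.pegasus_graph(16))):
    # find_embedding maps each logical variable to a "chain" of physical
    # qubits; every chain member beyond the first is a bridging qubit.
    embedding = find_embedding(problem.edges, target.edges, random_seed=7)
    used = sum(len(chain) for chain in embedding.values())
    print(f"{name}: {used} physical qubits for {len(problem)} variables")
```

With more native connections per qubit, the chains come out shorter on the newer topology, which is the effect the white paper describes.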
Each qubit on the chip takes the form of a loop of superconducting wire containing Josephson junctions. But there are far more than 5,000 Josephson junctions on the chip. “The lion’s share of these are involved in superconducting control circuits,” D-Wave processor manager Mark Johnson told Ars. “They’re basically digital-to-analog converters with memory that we can use to program a particular problem.”
To achieve the level of control needed, the new chip has over a million Josephson junctions in total. “Let’s put that in perspective,” Johnson said. “My iPhone has a processor that contains billions of transistors. So in that sense, it’s not a lot. But if you’re familiar with superconducting integrated circuit technology, it’s well outside the curve.” Connecting everything also required over 100 meters of superconducting wire, all on a chip that’s about the size of a thumbnail.
While all of this is made using standard silicon-fabrication tools, the silicon is just a convenient substrate; there are no semiconductor devices on the chip. Johnson couldn’t go into details about the manufacturing process, but he was willing to talk more generally about how these chips are made.
It’s not TSMC
One of the big differences between this process and standard chip manufacturing is volume. Most of D-Wave’s chips are hosted in its own facilities and accessed by customers through a cloud service; only a handful are purchased and installed elsewhere. This means the company doesn’t need to manufacture very many chips.
When asked how many that worked out to, Johnson laughed and said, “I’m going to end up like that guy who predicted that there would never be more than five computers in this world,” before saying, “We can achieve our business goals with a dozen or fewer of them.”
If the company were making standard semiconductor devices, that would mean making one wafer and calling it a day. But D-Wave doesn’t consider its process to have advanced to the point where it gets a useful device from every wafer. “We’re constantly pushing past the comfort zone of what you might have at a TSMC or an Intel, where you’re looking at how many 9s can I get in my yield,” Johnson told Ars. “If we’re yielding that high, we probably haven’t pushed hard enough.”
Much of that push came in the years leading up to this new processor. Johnson said the higher level of connectivity required new process technology. “[It’s] the first time we’ve made a significant change in the technology node in about 10 years,” he told Ars. “Our fab cross-section is much more complicated. There are more materials, more layers, more types of devices, and more steps.”
Beyond the complexity of the device’s design itself, the fact that it operates at temperatures in the millikelvin range adds to the design challenges. As Johnson noted, every wire running from the outside world to the chip is a potential conduit for heat, so their number has to be minimized; again, an issue most chipmakers don’t face.