But, by far, the most interesting news we heard at I/O has to be the announcement that Google intends to build a new quantum AI center in Santa Barbara where the company says it will produce a “useful, error-corrected quantum computer” by 2029.

Pause for applause, amirite? It would certainly be amazing, but is it actually feasible?

Quantum computers are ludicrously complex, but they can be explained with relative ease. In order to build one you have to overcome environmental challenges such as keeping the hardware extremely cold, and you have to figure out how to keep qubits – the quantum version of a computer bit – from becoming decoherent and unmeasurable. These are difficult challenges that exist at the very edge of scientific exploration in the realms of engineering and physics.

Currently, the two most popular examples of a “functioning” quantum computer are IBM’s 65-qubit system and Google’s own 72-qubit Bristlecone system. These systems are sort of like the giant mainframes of yesteryear, except they use far too much power to be useful, they’re incredibly prone to errors, and the only things we can really get them to do are experiments designed to show what future usefulness could look like.

Getting from there to something that’s not only functional, but capable of performing truly useful feats better than any existing technology, seems like something that could take decades. Per a blog post from Google, though, the company believes it can get all the way to a million-qubit, error-corrected machine within the decade.

Google’s been known to be a bit hyperbolic when it comes to quantum computing. In 2019, Google and NASA claimed they’d achieved “quantum supremacy” by developing a quantum computing system that could solve a problem in a matter of minutes that would take a classical computer “10,000 years” to solve. It turns out, the supercomputer they were talking about was an IBM device, and Big Blue claims it can actually solve the problem Google is talking about in a couple of days if you use it right.

That’s a bit embarrassing for Google, as 10,000 years and 48 hours are pretty far apart. So it’s worth taking things with a grain of salt when Big G says it’ll create a one-million-qubit computer in the same time-frame it takes Rockstar Games to develop a Grand Theft Auto game.

However, Google isn’t just hoping that if it builds a big lab the qubits will come. It has a rather innovative plan to scale things up that quickly. Per the blog post:

“To get there, we need to show we can encode one logical qubit — with 1,000 physical qubits. Using quantum error-correction, these physical qubits work together to form a long-lived nearly perfect qubit — a forever qubit that maintains coherence until power is removed, ushering in the digital era of quantum computing. Again, we expect years of concerted development to achieve this goal. And to get THERE(!), we need to show that the more physical qubits participate in error correction, the more you can cut down on errors in the first place — this is a crucial step given how error-prone physical qubits are.”

This is the first time we’ve heard the term “forever qubit” here at Neural, and it’s unclear if this is a goalpost-moving way of describing a hybrid quantum computing system that’s functionally a wacky classical system. But, if you take the whole blog post at face value, Google is betting that throwing enough physical qubits at error correction will carry it from today’s noisy machines to a truly useful one by 2029.

There’s still simply no telling whether Google’s 50 years away from a useful quantum system or five. It’ll be exciting to follow along and, hopefully, as the research papers come out, we’ll get a better idea of what these “forever qubits” mean for the field.
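For the curious, here’s a rough back-of-the-envelope sketch of why “more physical qubits per logical qubit” is supposed to mean fewer errors. The numbers below (the physical error rate, the error-correction threshold, the 2d² qubit count, and the rule-of-thumb scaling formula) are illustrative assumptions loosely modeled on common surface-code estimates, not anything Google has published:

```python
# Back-of-the-envelope sketch of error-correction scaling.
# All numbers here are illustrative assumptions, not Google's actual figures:
#   - physical error rate p and threshold p_th are made up (p = 1e-3, p_th = 1e-2)
#   - p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2) is a common rule-of-thumb
#     approximation for surface-code-style error correction, not an exact formula
#   - 2 * d**2 roughly counts data + measurement qubits for a distance-d code

def physical_qubits(d: int) -> int:
    """Rough physical-qubit count for one distance-d logical qubit."""
    return 2 * d * d

def logical_error_rate(d: int, p: float = 1e-3, p_th: float = 1e-2) -> float:
    """Heuristic logical error rate per round for a distance-d code."""
    return 0.1 * (p / p_th) ** ((d + 1) / 2)

if __name__ == "__main__":
    print(f"{'distance':>8} {'physical qubits':>16} {'logical error rate':>20}")
    for d in (3, 7, 11, 15, 19, 23):
        print(f"{d:>8} {physical_qubits(d):>16} {logical_error_rate(d):>20.2e}")
    # With these assumptions, ~1,000 physical qubits (d around 23) pushes the
    # logical error rate down by many orders of magnitude -- the "more physical
    # qubits, fewer errors" claim from Google's post, in miniature.
```

Under those made-up numbers, going from a tiny code (a few dozen physical qubits) to roughly 1,000 physical qubits per logical qubit cuts the estimated error rate by about ten orders of magnitude, which is the basic bet behind the “forever qubit” language.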
Don’t forget to check out the rest of our Google I/O coverage right here.