Quantum Untangled: A quantum path towards AGI?
Or – how we’ll need quantum computers to power the superintelligence we’re all craving.
How long before AGI? This, perhaps, has been the loudest refrain heard in technology journalism this year. After ChatGPT dazzled the world last winter, the mainstream media quickly turned their attention to what would inevitably come next – namely, artificial general intelligence, or AGI. This, they reported, was the logical next step for this formidable technology, the dawn of which could either lead to the uplifting transformation of the global economy or its wrecking underfoot by a malevolent, man-made superintelligence.
The problem is, getting to either of these points will likely require levels of computational power beyond the capabilities of even the hardiest GPU. That isn’t to say that the AI utopians/doomsters don’t have options. IBM, for example, has created an analog AI chip that can handle natural-language tasks more efficiently than a GPU, while companies like AMD are tackling Nvidia's dominance in the AI market through software that improves model efficiency. Even so, these approaches are little more than new ways to push classical computational hardware harder and harder — or, put another way, yet more attempts to squeeze blood out of an increasingly exsanguinated stone.
Then there’s the quantum pathway. For several years now, scientists have dreamed of combining AI with quantum computing to create some kind of souped-up hybrid of the two, a model that might, just might, transcend the puny physical constraints of classical computers and finally create the kind of benign superintelligence everyone is hoping for. Of course, that all depends on the emergence of quantum computers that harness logical qubits in sufficient quantities to support stable AI models, but it’s a prospect that, as readers will note, is probably closer than we think.
Experiments in this area are certainly proceeding apace. Earlier this month, Quantinuum announced plans to work with researchers at Chubu University, Japan, on ways to utilise quantum theory to model language, meaning and psychology in AI algorithms. The new multi-year project will focus on creating new quantum machine learning models geared for cognition, and will be adapted to take advantage of the latest generation of quantum computers – including those with logical qubits that can’t be simulated on classical hardware. The first machine of this type is expected to be released next year.
The project will build on previous work by researchers at Quantinuum and Chubu charting parallels between quantum and cognitive systems. This work suggested that the mathematical structures of quantum theory could be observed in facets of human language and cognition, and could explain some of the currently intractable problems of modern artificial intelligence. These include the role of context in generating meaning in text, and the “question order effect”, whereby an earlier question can influence how respondents process and answer a later one.
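The question order effect has a natural quantum analogue: if two yes/no questions are modelled as projective measurements that do not commute, the probability of a given pair of answers depends on the order in which the questions are asked. The toy sketch below (my own illustration in plain NumPy, not code from the Quantinuum or IonQ research) shows this with two projectors on a single qubit; the angles and the initial "belief state" are arbitrary assumptions chosen to make the effect visible.

```python
import numpy as np

def projector(theta):
    """Projector onto the state cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

state = np.array([1.0, 0.0])       # respondent's initial belief state |0>
P_A = projector(0.0)               # question A: "yes" subspace along |0>
P_B = projector(np.pi / 4)         # question B: "yes" subspace rotated 45 degrees

def p_yes_yes(first, second, psi):
    """Probability of answering 'yes' to both questions, asked in this order."""
    psi1 = first @ psi             # projecting models answering the first question
    return float(np.linalg.norm(second @ psi1) ** 2)

p_ab = p_yes_yes(P_A, P_B, state)  # ask A, then B
p_ba = p_yes_yes(P_B, P_A, state)  # ask B, then A

print(f"P(yes, yes | A then B) = {p_ab:.3f}")   # 0.500
print(f"P(yes, yes | B then A) = {p_ba:.3f}")   # 0.250
```

Because `P_A @ P_B != P_B @ P_A`, the two orderings give different probabilities – exactly the kind of order dependence classical probability cannot produce, and the structural parallel the research described above is exploring.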
Quantinuum isn't the first quantum startup to explore how quantum computers could lead to human-like AI. Earlier this year IonQ's CEO Peter Chapman said quantum artificial intelligence would be as significant as the current crop of large language models, such as those powering OpenAI's ChatGPT. The company published a research paper suggesting that human decision-making could be tested on future quantum computers, and that such machines could capture how the order in which questions are asked influences the answers people give. ORCA, meanwhile, is working with Nvidia to deploy its photonic quantum computers alongside GPUs to speed up and improve the quality of artificial image generation and analysis.
Others are deploying quantum machine learning techniques to create smaller models with similar outcomes to those of classical machine learning. Whether the next evolution of that will result in a greater understanding of the mysteries of human cognition remains to be seen. Certainly, it feels like a stretch. Then again, I hear you ask, who’d have imagined ChatGPT and the ability to generate complex text responses from a simple prompt ten years ago?
Classical computing is starting to creep towards its limits when it comes to ever-larger AI models. It is possible that the next generation of foundation models, namely OpenAI’s GPT-5, Anthropic’s Claude 3 and Google’s Gemini, will be the last that can be trained on classical hardware alone. The models that come after them, likely between five and ten years away, may well require the power of a QPU to be trained or even run. But, at least for now, I take some comfort in the knowledge that the human mind remains the most powerful computational engine on Earth.