The paper’s called “Co-evolutionary hybrid intelligence,” and it’s a work of art that belongs in a museum. But since it’s state-sponsored research from Russia that was uploaded to a preprint server, we’ll just talk about it here. The research is only four pages long, but the team packs a lot into that space. They don’t beat around the bush. You want to know how to solve AGI? Boom! Page one: the “hybridization of artificial and human capabilities and their co-directional evolution.”

That sounds a lot like people and computers getting the urge to merge and making a go of things together. Kind of romantic. But what does it really mean?

When AI researchers talk about “strong” AI, they don’t mean a robot that can carry heavy objects. They mean the opposite of “narrow” AI. All modern AI is narrow: we train a model to do a specific (hence narrow) function and then find ways to apply it to a task. A strong AI would be capable of doing anything a person can do. If such an AI ran up against a task it wasn’t trained for, it could write new algorithms or apply knowledge from a different task to solve the problem at hand.

What the researchers propose is a method by which we’d stop relying on massive quantities of data to brute-force progress in AI. Instead, they say, we should combine our natural intelligence with the machines’ artificial intelligence and become permanently linked in a co-evolutionary paradigm.

As to why this “new frontier” is the only path forward, the researchers’ explanation boils down to this: humans can’t build an AI that’s smarter than a human, because we’re only human. And even if we did, we wouldn’t be smart enough to understand it. As the paper puts it, “a hypothetical model of the strong artificial intelligence can only be hybrid.”

That sounds pretty deep, philosophically speaking, but we use math to describe the unknown in physics all the time.
It’s difficult to place any scientific value on the assertion that we couldn’t define a superintelligent AI if we built one. Let’s just roll with it, though.

According to the researchers, the path toward hybrid strong intelligence involves augmenting data-centric training methods with direct human involvement at every level of learning. That sounds a lot like the way we “train” humans: we send them to school, they get educated, they become experts, they teach, and the cycle begins anew.

We’re all for such a paradigm. If every AI company had to do hands-on training instead of just smashing everything inside a black box and monetizing whatever comes out the other end, we wouldn’t be living in a world where AI scams regularly become billion-dollar industries.

It’s hard to imagine how putting more humans in the loop will directly lead to strong AI, however. But if the researchers are correct in their (somewhat pessimistic and weird) assertion that humans will never create a machine that’s independently smarter than us, a hybrid intellect may be the only way to make people and machines smarter.

You can read the whole paper here.
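For the curious: “direct human involvement at every level of learning” isn’t purely hypothetical. Machine learning already has a miniature version of it called active learning, where the model repeatedly asks a human to label the examples it’s least sure about. Here’s a minimal sketch of that loop, using toy data and a toy one-dimensional “model” — everything below is illustrative, and none of it comes from the paper itself:

```python
import numpy as np

# Toy human-in-the-loop training loop (active learning via uncertainty
# sampling). The "model" is just a 1-D decision threshold, and the "human
# expert" is simulated with ground-truth labels.

rng = np.random.default_rng(0)

# Synthetic task: points above 0.5 are class 1, points below are class 0.
X = rng.random(200)
true_labels = (X > 0.5).astype(int)

labeled_idx = [0, 199]  # start with two "expert-labeled" examples
threshold = 0.5         # model parameter, re-fit each round


def fit_threshold(idx):
    """Fit the decision threshold from the labeled points only."""
    pos = [X[i] for i in idx if true_labels[i] == 1]
    neg = [X[i] for i in idx if true_labels[i] == 0]
    if not pos or not neg:
        return 0.5
    return (min(pos) + max(neg)) / 2


for _ in range(10):
    threshold = fit_threshold(labeled_idx)
    # The model asks a human about the point it's least sure of:
    # the unlabeled point closest to its current decision boundary.
    unlabeled = [i for i in range(len(X)) if i not in labeled_idx]
    query = min(unlabeled, key=lambda i: abs(X[i] - threshold))
    # A human expert answers (simulated here by the ground truth).
    labeled_idx.append(query)

predictions = (X > threshold).astype(int)
accuracy = (predictions == true_labels).mean()
print(f"accuracy after 10 human queries: {accuracy:.2f}")
```

The point of the design is that the human labels only a dozen examples instead of all 200, because the model spends its questions on the cases it finds most confusing — a far cry from strong AI, but a concrete example of human involvement inside the training loop.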