Classics of the genre are the credit-card algorithms accused of granting men bigger credit limits than women, simply because men had received the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.

More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. Meanwhile, autonomous AI-powered weapons systems are already on sale, and may already have been used. Somewhere in the machine, ethics are clearly a good idea.
AI at Oxford
It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. In its first year, we’ve run sessions on everything from AI-driven automated stock-trading systems in Singapore to the limits of facial recognition in US policing.

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes’ worth of Reddit discourse (which must be a pretty depressing read), and a huge number of Creative Commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.

The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:

“AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity. It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings:

“I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”
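For readers curious about the mechanics, the pattern is simple to sketch in code. The Megatron model we used at the Union is not packaged for casual download, so the minimal sketch below stands in GPT-2 – a smaller, publicly available transformer – via the Hugging Face transformers library, and the prompt wording is our own assumption, not the exact setup we used.

```python
# A minimal sketch, assuming GPT-2 as a stand-in for the Megatron Transformer.
# The prompt format is illustrative, not the configuration used at the Union.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

motion = "This house believes that AI will never be ethical."
prompt = f"Debate motion: {motion}\nA speech in favour of the motion:\n"

# Sample a continuation: the model "argues" by predicting likely next words.
result = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

Swapping “in favour of” for “against” in the prompt is, in essence, all it takes to make the same model switch sides – which is exactly what we did next.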
Switching sides
When AI tools like AlphaGo and its successors have been deployed to play chess, the fiendishly complex ancient game of go, and now even more complex strategic live-action multiplayer video games, they have evolved the genre at pace by playing not humans, but themselves.

Freed from the constraints of studying (say) all the mundane human-to-human matches ever recorded, the AIs have innovated and come up with completely new tropes of how to win. Sometimes these are crazy reinventions of the games, like parking all their characters in a corner of a video-game battle and watching everyone else duke it out, or kicking a virtual football millions of times just a millimetre down the pitch, to max out “completed passes” points.

So in the debate, we didn’t just get the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber – or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments – we also asked the Megatron to come up with its own speech against the motion. This is what it now said:

“AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.”

The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing, and was now painting a picture of an attractive late 21st century in which it played a key, but benign, role.
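The self-play idea itself fits in a few lines of code. The toy below is our own illustration, not DeepMind’s method: a “pick the higher number” game replaces go, and a simple weight bump replaces deep reinforcement learning. But it shows the core loop – a policy improving by repeatedly playing a frozen copy of itself, with no human games in sight.

```python
# A toy self-play loop (our illustration, not AlphaGo's actual algorithm).
import copy
import random

MOVES = list(range(10))  # toy game: each side names a number; the higher one wins

def sample(weights):
    """Draw a move in proportion to its current preference weight."""
    return random.choices(MOVES, weights=weights)[0]

weights = [1.0] * len(MOVES)          # the learning policy starts uniform
for generation in range(30):
    frozen = copy.copy(weights)       # the opponent is a snapshot of itself
    for _ in range(500):
        mine, theirs = sample(weights), sample(frozen)
        if mine > theirs:
            weights[mine] += 0.1      # reinforce whatever beat the older self

print("preferred move:", max(MOVES, key=lambda m: weights[m]))  # drifts towards 9
```

Each generation, the learner faces a slightly stronger version of itself, so the bar keeps rising without a single human opponent – the same dynamic, in miniature, that let game-playing AIs abandon human conventions altogether.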
An AI orator unleashed
In fact, just as Samantha, the promiscuous AI lover of Joaquin Phoenix’s character in Spike Jonze’s 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to hop enthusiastically onto either side of the multiple debates about AI that we held at the Union that day.

Asked to propose the motion that “Leaders without technical expertise are a danger to their organisation”, the Megatron offered practical advice: all that busy executives and governments worried about understanding AI had to do was “be willing to give up some control”.

However, when opposing the motion, the Megatron was keen to keep AI knowledge in-house – rather like the British, Chinese and US governments, which have all made that an article of policy faith:

“If you do not have a vision of your organisation’s AI strategy, then you are not prepared for the next wave of technological disruption. … You will need to decide what role your company will play in the next wave of technological disruption: as a leader or a follower.”
The data wars to come?
Worryingly, there was one question where the AI simply couldn’t come up with a counter-argument. When arguing for the motion that “Data will become the most fought-over resource of the 21st century”, the Megatron said:

“The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.”

But when we asked it to oppose the motion – in other words, to argue that data wasn’t going to be the most vital of resources, worth fighting a war over – it simply couldn’t, or wouldn’t, make the case. In fact, it undermined its own position:

“We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.”

You only have to read the 2021 report of the US National Security Commission on AI, chaired by the aforementioned Eric Schmidt and co-written by someone on our course, to glean what its writers see as the fundamental threat of AI in information warfare: unleash individualised blackmail on a million of your adversary’s key people, wreaking distracting havoc on their personal lives the moment you cross the border.

What we can in turn imagine is that AI will not only be the subject of debate for decades to come – it will also be a versatile, articulate, morally agnostic participant in the debate itself.

This article by Dr Alex Connock, Fellow at Saïd Business School, University of Oxford, and Professor Andrew Stephen, L’Oréal Professor of Marketing and Associate Dean of Research, University of Oxford, is republished from The Conversation under a Creative Commons license. Read the original article.