Dialogue on AI must be transdisciplinary
Advances in machine learning have been dramatic over the past five years. This form of artificial intelligence has driven significant progress in areas such as autonomous driving, automated text generation and machine translation (to and from multiple languages).
Google Translate is the most obvious example of the latter. As useful as it is, it still makes some pretty basic mistakes. For example, it correctly translates “the window that I have closed” into French as “la fenêtre que j’ai fermée”, but incorrectly translates “the key that I have found” as “la clé que j’ai trouvé”.
Anyone with A-level French will tell you that, with the verb avoir (“to have”), the past participle must agree with the direct object when that object precedes the verb. “Clé” is feminine, so an extra “e” is needed at the end of “trouvé”. Testing with similar examples gives around 50 percent translation accuracy, which isn’t great.
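The rule itself is simple enough to spell out in a few lines of code. Here is a minimal sketch in Python (the toy lexicon and function are mine, purely for illustration, and plural agreement is ignored): a rule that a first-year student can state explicitly is one that a purely data-driven translator may still fail to learn.

```python
# A toy illustration of the French past-participle agreement rule
# discussed above. The lexicon and function are hypothetical: this is
# not a grammar engine, just the rule written down explicitly.

# Gender of a few French nouns (illustrative toy lexicon).
NOUN_GENDER = {"fenêtre": "f", "clé": "f", "livre": "m"}

def agree_participle(participle: str, direct_object: str) -> str:
    """With avoir, the past participle agrees with a direct object that
    precedes the verb: add "e" for a feminine noun (plurals ignored)."""
    if NOUN_GENDER.get(direct_object) == "f" and not participle.endswith("e"):
        return participle + "e"
    return participle

print(agree_participle("fermé", "fenêtre"))  # fermée, as in "la fenêtre que j'ai fermée"
print(agree_participle("trouvé", "clé"))     # trouvée, the form Google Translate misses
```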
For someone like me, who has worked in machine learning (ML) for 30 years, this is no surprise. The translation is only as good as the data supplied to the ML algorithm during the learning phase. Google Translate has no understanding of French grammar: it learns from the sheer repetition of example sequences. Evidently, there are not enough examples in Google’s training data of phrases with feminine nouns as objects preceding avoir for the correct translation to be generated every time.
I have a book on my shelves that I bought in the late 1980s, when I started experimenting with machine learning (“artificial neural networks” or “connectionism”, as the field was then called). In this book, Thinking Machines, the authors describe the Chinese Room thought experiment proposed by the philosopher John Searle in 1980. Strings of Chinese characters (“input questions”) are passed under the door of a room. By following the instructions of a computer program for manipulating Chinese symbols, Searle, who does not speak Chinese, is able to return the appropriate sequences of Chinese characters under the door (“output responses”), thus convincing observers outside the room that there is a Chinese speaker inside it.
At the end of his thought experiment, Searle asks whether the computer program understands Chinese (“strong AI”) or merely simulates the ability to understand it (“weak AI”). As my experience with Google Translate shows, that question is still relevant, even though today’s data-driven ML algorithms are totally different from the symbol-manipulation programs of the early 1980s.
Within the ML community, the focus is almost entirely on building ever more impressive demonstrators, such as the work of DeepMind’s researchers on game-playing machines. In 2016, their AlphaGo ML algorithm managed to beat the world’s best Go player. AlphaGo Zero and AlphaZero then went beyond AlphaGo by generating their own training datasets, using a combination of deep neural networks, reinforcement learning and game-specific representations to achieve superhuman performance. By playing millions of games against each other, two AlphaZero machines explored a huge space of possibilities and were able to make moves that a human player could not have foreseen. But AlphaZero has no more understanding of Go than Google Translate has of the French language or its grammar.
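To give a flavour of the self-play idea, without any pretence of reproducing DeepMind’s system, here is a toy sketch in Python. It swaps Go for the trivial game of Nim and the deep neural network for a simple value table, and the random exploration step is a crude stand-in for Monte Carlo tree search; what it keeps is the essential loop, a machine generating its own training data by playing against itself.

```python
# A heavily simplified sketch of self-play, using the game of Nim
# (take 1 to 3 stones; whoever takes the last stone wins) in place of Go.
# Everything here is illustrative, not a reconstruction of AlphaZero.
import random
from collections import defaultdict

PILE = 7            # starting pile size
MOVES = (1, 2, 3)

value = defaultdict(float)   # estimated value of a pile for the player to move
value[0] = -1.0              # an empty pile means the player to move has lost
counts = defaultdict(int)

def choose_move(pile, explore=0.2):
    """Pick the move leaving the opponent the worst position, with some
    random exploration (a crude stand-in for tree search)."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < explore:
        return random.choice(legal)
    return min(legal, key=lambda m: value[pile - m])

def self_play_game():
    """Play one game against ourselves and return (pile, result) pairs:
    the self-generated training data."""
    pile, player, history = PILE, 0, []
    while pile > 0:
        history.append((pile, player))
        pile -= choose_move(pile)
        player ^= 1
    winner = player ^ 1      # the player who took the last stone
    return [(p, 1.0 if who == winner else -1.0) for p, who in history]

# Training loop: the machine is both players, so every game it plays
# enlarges its own dataset (a running mean replaces gradient descent).
for _ in range(20_000):
    for pile, result in self_play_game():
        counts[pile] += 1
        value[pile] += (result - value[pile]) / counts[pile]

# Pile sizes divisible by 4 are theoretically lost for the player to
# move, so value[4] should come out clearly negative; nobody told the
# program that rule.
print({p: round(value[p], 2) for p in range(1, PILE + 1)})
```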
The most powerful ML model today, GPT-3, is used in hundreds of text-generating applications, such as chatbots, producing nearly 5 billion words per day. But does GPT-3 understand the text that it generates?
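GPT-3 itself sits behind a commercial API, but its openly released predecessor GPT-2 can be run in a few lines with the Hugging Face transformers library. The sketch below is illustrative, with arbitrary parameter choices, but it shows what prompt-driven text generation looks like in practice.

```python
# Prompt-driven text generation with GPT-2, an openly available
# predecessor of GPT-3 (requires: pip install transformers torch).
# Model and parameters here are illustrative choices, not a recipe.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The key that I have found"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])  # fluent continuation, but is it understood?
```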
There have been extraordinary advances in learning algorithms, computer hardware and the size of training datasets, but are we any closer to building thinking machines than we were 30 years ago (whether we call them strong AI, artificial general intelligence or superintelligence)? What is demonstrated by the ability to learn to translate languages, play intellectually demanding games or generate text automatically in response to prompts? The remarkable success of weak AI? Or the first hint of strong AI?
Such a debate should really take place within higher education, especially as face-to-face seminars and workshops resume. Emily Bender, a linguist at the University of Washington, updated the Chinese Room thought experiment last year with her “octopus test”, to stress the importance of the connection between form and meaning. Two people living alone on isolated islands send each other text messages via an undersea cable. An octopus taps into the cable and listens to the pulses, then cuts one of the islanders off and attempts to impersonate them by sending pulses of its own. What happens when the remaining islander sends a message with instructions on how to build a coconut catapult and asks for suggestions on how to improve the design?
However, dialogue around such deep issues with ML researchers in computer science departments has been minimal: most of them are too busy trying to keep up with the big tech companies while training PhD students, who are soon absorbed into the ever-expanding laboratories of those same companies.
In a world of chatbots and autonomous vehicles, fundamental questions about the limits of AI/ML urgently need to be revisited, with insights from multiple disciplines. ML researchers in academia should engage in a new dialogue with colleagues in philosophy, linguistics and cognitive science. Reuben College, the newest of Oxford’s colleges, intends to play its part in promoting these multidisciplinary exchanges.
Lionel Tarassenko is President of Reuben College, University of Oxford.