People talk about artificial intelligence all the time. Artificial intelligence is the branch of study concerned with building machines that can act intelligently on their own. But how do we know whether they are actually getting intelligent? For humans, we have designed various kinds of IQ tests to measure intelligence. Is there anything similar for machines? I am not talking about the robustness of an algorithm or the accuracy with which a computer finishes a certain task; I am talking about actual intelligence. Is it possible to measure it?
What exactly is the question here?
Back in the early 1950s, the great Alan Turing started thinking about this question and came up with a test called “The Turing Test”. This test was proposed by Turing as a way of dealing with the question of whether machines can think. According to Turing, the question of whether machines can think is meaningless. However, if we consider the more precise question of whether a digital computer can do well in a certain kind of game that he describes, then we do have a question that is worthy of discussion. This game is called the “Imitation Game”. Turing thought that it would not be too long before we had digital computers that could do well in the Imitation Game.
The Imitation Game
Alan Turing, in his 1950 paper “Computing Machinery and Intelligence”, proposed a test called “The Imitation Game” that was supposed to settle the issue of machine intelligence. The first version of the game he described involved no computer intelligence whatsoever. Imagine three rooms, each connected to the others via a computer screen and keyboard. In one room sits a man, in the second a woman, and in the third a person who acts as the “judge”. The judge’s job is to decide which of the two people talking to him/her through the terminal is the man. The computer terminals are used so that physical clues cannot give the answer away. The man attempts to help the judge, offering whatever evidence he can to prove that he is the man. The woman’s job is to trick the judge: she attempts to deceive him/her and counteract her opponent’s claims, in the hope that the judge will erroneously identify her as the man.
What does any of this have to do with machine intelligence?
Well, Turing then proposed a modification of the game. In this new game, instead of a man and a woman as contestants, there is a human at one terminal and a computer at the other. The human can be of either gender. Now the judge’s job is to decide which of the contestants is human. Turing proposed that if a judge were less than 50% accurate under these conditions, then the machine should be counted as intelligent. That is, if a judge is as likely to pick the computer as the human, then the computer must be a passable simulation of a human being, and hence intelligent. The game has since been modified so that there is only one contestant, and the judge’s job is not to choose between two contestants, but simply to decide whether the single contestant is human or machine.
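The 50% criterion above can be written down as a tiny scoring rule. The following is only an illustrative sketch: the function name and the trial counts are invented, not taken from any real experiment.

```python
def machine_passes(correct_identifications: int, total_trials: int) -> bool:
    """Turing's criterion as paraphrased above: the machine 'passes'
    if the judge identifies it correctly less than half the time,
    i.e. no better than chance."""
    accuracy = correct_identifications / total_trials
    return accuracy < 0.5

# A judge who picked out the machine correctly in only 470 of 1000
# conversations is doing worse than a coin flip, so the machine passes.
print(machine_passes(470, 1000))  # True

# A judge who was right 830 times out of 1000 reliably spots the
# machine, so it fails.
print(machine_passes(830, 1000))  # False
```

Note that this captures only the pass/fail rule, not the conversation itself; in practice one would also need to fix the number of trials and conversation length before the accuracy figure means anything.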
The Turing Test is more generally used to refer to behavioral tests of whether or not an entity has intelligence. Interestingly, a famous argument of this behavioral kind against machine intelligence predates Turing by three centuries: Descartes argued that if there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first clue is that the machine could never use words, or put together signs, as we do in order to declare our thoughts to others. We can certainly conceive of a machine constructed so that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence. Every single human can do this with absolute ease! The second clue is that even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others. This reveals that they are not acting from understanding, but rather from a predefined procedure which dictates their actions.