In one of my previous blog posts, we discussed measuring a computer’s intelligence and how the Turing test can be used for it. After Turing proposed that test, many people realized its importance and started working on it. People really wanted to believe that machines are indeed capable of thinking. But we need a way to determine whether a computer is thinking on its own, and the biggest obstacle is that this is a very subjective question. Given the knowledge that something is indeed a machine, how do we ascertain that it is capable of thinking?
How do we know if thinking machines exist?
Well, if the machine can produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, then we can conclude that it is a thinking machine. Out of attempts to pass Turing’s test, a group of programs emerged in the 1960s and 1970s that tried to cross the first human-computer barrier: language. Since text was the first mode of interaction designed for computers, people started with natural language processing (NLP). These programs, often fairly simple in design, employed small databases of language combined with a series of rules for forming intelligent-sounding sentences. While most were very inadequate, some grew tremendously popular. Perhaps the most famous such program was Joseph Weizenbaum’s ELIZA.
What is ELIZA?
Written in 1966, ELIZA was one of the first such programs and remained for quite a while one of the most convincing. ELIZA simulates a Rogerian psychotherapist: empathic but passive, asking leading questions while doing very little talking (like “Tell me more about that” or “How does that make you feel?”). It does so quite convincingly for a while. There is no hint of intelligence in ELIZA’s code; it simply scans the input for keywords like “mother” or “depressed” and then asks suitable questions from its database. Failing that, it generates something generic in an attempt to elicit further conversation. Most programs since have relied on similar principles: keyword matching paired with basic knowledge of sentence structure.
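To make that concrete, here is a minimal sketch of the keyword-matching idea in Python. It is not Weizenbaum’s original implementation (which used pattern decomposition and reassembly rules in MAD-SLIP); the keyword table and fallback phrases below are invented for illustration only.

```python
import random
import re

# Hypothetical keyword table: each keyword maps to a few canned replies.
RESPONSES = {
    "mother": [
        "Tell me more about your mother.",
        "How do you feel about your mother?",
    ],
    "depressed": [
        "I am sorry to hear you are depressed.",
        "Why do you think you feel depressed?",
    ],
}

# Generic prompts used when no keyword is found, to keep the conversation going.
FALLBACKS = [
    "Tell me more about that.",
    "How does that make you feel?",
    "Please go on.",
]

def reply(user_input: str) -> str:
    # Lowercase and split the input into words, then scan for known keywords.
    words = re.findall(r"[a-z']+", user_input.lower())
    for keyword, answers in RESPONSES.items():
        if keyword in words:
            return random.choice(answers)
    # No keyword matched: fall back to a generic, conversation-eliciting line.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I have been feeling depressed lately."))  # keyword hit
    print(reply("The weather is nice today."))             # generic fallback
```

Even this toy version shows why ELIZA felt convincing for a while: a handful of well-chosen prompts can keep a person talking without the program understanding anything at all.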
Why is it difficult?
Humans form many different sentences and react in many different ways to the same situation. We construct new sentences, combine words, use different structures, and so on. We continually learn and update ourselves. How do we introduce this variability into a machine? Even if we write an algorithm to inject randomness, won’t it be deterministic precisely because we wrote the algorithm? No matter how much variety we use to train a machine, it will still be a fixed set. NLP technology has gotten very good over the years; search engines rely heavily on it to improve their results. But everyday human-computer conversation is still far from satisfactory.