Listen, writing an episode takes creativity first and foremost. It takes understanding, and the ability to explain. In short, it takes the ability to think. But computer programs cannot think, can they? Or can they? The question has been asked many times over the years, but it first occurred to a genius in 1950, in an era when the very concept of artificial intelligence did not yet exist. That genius is none other than Alan Turing.

Turing was actually a mathematician, and he had the misfortune of living through the turbulent first half of the 20th century, shaken by two world wars. Yet this unfortunate period in human history became the opportunity for Turing's star to shine. If you remember the movie The Imitation Game, Turing was recruited by the British government into a team formed to crack Enigma, the German army's encryption device. The code-breaking machine this team invented, called the Bombe, is estimated to have shortened World War II by about two years and saved more than 21 million lives. It also turned Turing's interest and work toward computing; in fact, because of his work in this period, Turing is recognized today as the person who laid the foundations of computer science.

Later in his career, Turing began contemplating more abstract, theoretical questions. While computer science was still in its infancy, and the term "artificial intelligence" did not yet exist, it was as if he could see the future: he predicted, even then, that machines would one day play chess and converse with humans. All of these thoughts culminated in a single, ultimate question. In 1950, in one of the most important works in the history of artificial intelligence, his paper "Computing Machinery and Intelligence," he asked it outright: "Can machines think?"

What exactly does that question mean? Are we asking whether machines can possess consciousness, or fall in love? Whether they can create music, poetry, art, or even a podcast episode? Whether a machine can take the same pleasure I get from a delicious meal, say, a chocolate cake? For Turing, none of these questions was unimportant. But all of them are very difficult, almost impossible, to test, observe, or experiment on. So how can we determine whether an artificial intelligence truly thinks?

If you recall, in our previous episode we talked about the power of words. We said that language lets us express our thoughts, and that for some ideas we even rely on words in order to think at all. Turing decided to place this relationship between language and thought at the center of his question. If a machine can speak just like a human, answer questions appropriately like a human, and sustain a dialogue like a human, then, he reasoned, it can think like a human. On that basis, in 1950 Turing proposed the Turing Test, which still serves as a litmus test for artificial intelligence today.

Imagine two stools in front of you. Now let's draw a thick curtain in front of them, so you can no longer see what's behind it. On one stool sits a human; on the other, a computer running a conversational program. Because of the curtain, you cannot tell which stool holds which. Turing's question to us is this: can you figure out, just by exchanging text, which is the human and which is the computer? The rules of this game, by the way, are simple enough to fit in a few lines of code.
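Just to make the setup concrete, here is a toy harness, entirely my own sketch rather than anything from Turing's paper; the canned responders and the naive one-rule judge are placeholders:

```python
import random

# Toy stand-ins for the two hidden parties. In a real test both would be live
# text conversations; here they are canned so the sketch runs end to end.
def human_respond(question: str) -> str:
    return f"Honestly, I'd have to think about '{question}' for a while."

def machine_respond(question: str) -> str:
    return "That is an interesting question. Why do you ask?"

def imitation_game(questions, judge_guess) -> bool:
    """One session: the judge exchanges text with hidden parties 'A' and 'B',
    then guesses which label hides the machine. Returns True if fooled."""
    parties = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:          # the curtain: shuffle the stools
        parties = {"A": machine_respond, "B": human_respond}

    transcript = [{label: respond(q) for label, respond in parties.items()}
                  for q in questions]

    guess = judge_guess(transcript)    # "A" or "B"
    return parties[guess] is not machine_respond   # wrong guess = machine wins

# A naive judge heuristic: suspect whoever answers a question with a question.
fooled = imitation_game(
    ["What did you have for breakfast?", "Describe the smell of rain."],
    lambda t: "A" if t[-1]["A"].endswith("?") else "B",
)
print("The machine fooled the judge." if fooled else "The judge caught the machine.")
```

Swap in a real conversational program, a real human, and a real judge typing freely, and you have the Turing Test.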
If you can tell them apart, congratulations: humanity is still smarter. But if you can't, then you are dealing with a computer so advanced that it has convinced you it is human. Turing suggested that a computer capable of convincing at least 30% of a jury that it was human could be considered genuinely intelligent. Thirty percent. Doesn't that seem a little low? Shouldn't a machine have to convince at least 50%, or even 51%, of the jury to pass? Actually, think about it: a computer convincing more than half the jury under Turing Test conditions would be a terrifying result, because it would mean the computer imitates a human even better than a real human does. That is why Turing considered 30% sufficient.

Sadly, Turing did not live to see any computer program even attempt his test. He died by suicide in 1954, about four years after publishing the paper, and it would take another decade for talking computers to emerge.

That happened in 1964, at the famous MIT (Massachusetts Institute of Technology), where Professor Joseph Weizenbaum, a computer scientist, set out to create a program that could converse with humans and respond to their questions. The program Weizenbaum developed, named Eliza, became the first chatbot in history. What made Eliza distinctive was its persona: a psychotherapist. It could detect keywords like "depression" and "anxiety" in the sentences people typed, then slot those words into new sentences and ask questions back. That is nearly the whole mechanism, and it is small enough to sketch, as below.
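Here is a minimal Eliza-style sketch in Python, my own illustration rather than Weizenbaum's original code: scan the input for a keyword, reflect the user's pronouns, and hand the fragment back inside a canned question.

```python
import random
import re

# Illustrative keyword rules; Weizenbaum's real script was far richer.
# Each rule pairs a regex with reply templates that reuse what the user typed.
RULES = [
    (r"\bi am (.+)", ["How long have you been {m}?",
                      "Why do you say you are {m}?"]),
    (r"\b(depression|anxiety)\b", ["What do you think causes your {m}?",
                                   "How long has {m} been troubling you?"]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

# Pronoun reflection, so "my job" is echoed back as "your job".
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def eliza_reply(sentence: str) -> str:
    text = sentence.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Drop the (reflected) matched fragment into a canned question.
            return random.choice(templates).format(m=reflect(match.group(1)))
    return random.choice(FALLBACKS)  # no keyword found: deflect, keep them talking

print(eliza_reply("I am anxious about my job."))
# -> e.g. "How long have you been anxious about your job?"
```

Notice that there is no understanding anywhere in these lines; the program literally hands your own words back to you as a question.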
When Weizenbaum finished coding Eliza, he invited his MIT colleagues into his office and asked them to talk to the program. His goal was to demonstrate how artificial and awkward communication between humans and machines really was; surely no one could sustain a conversation with a program that merely repeated your own words back at you in different forms, right? But things did not go as Weizenbaum expected. Even the assistants who had helped him code Eliza wanted to keep talking to it, endlessly. His own secretary, famously, deepened her conversations with this program of barely a hundred lines of code so much, and began opening up such personal corners of her life, that she grew uncomfortable having anyone else in the room and asked Weizenbaum to leave.

At this point Weizenbaum's amusement gave way to concern and slight annoyance. Remember, his purpose in writing Eliza was to expose the superficiality of communication between humans and machines. Yet some people sank deeper and deeper into their conversations with it, and some even claimed they had finally found someone who understood them. When news of Eliza spread to the public, voices in the medical community began to claim that it heralded a revolution in psychotherapy, that computer programs would replace therapists, and that everyone would soon have a computer therapist at home. These reactions were the last straw for Weizenbaum, and he decided to shut Eliza down. The man who coded the world's first chatbot, one of the earliest celebrated AI programs, walked away from advocating artificial intelligence and dedicated the rest of his life to opposing it. According to Weizenbaum, people tended to misunderstand artificial intelligence and to attribute human characteristics to something that was not even alive.

In a 2008 interview, he explained his decision to shut the program down: "My primary objection is that when a computer says 'I understand,' there is no one actually saying it. In other words, it's a lie. And I don't believe that emotionally unbalanced people can be treated systematically with lies."

Eliza conversed with many people, but it never officially took the Turing Test. Even if it had, it would not have passed, because all of its responses were, in a sense, pre-programmed: when it could not find a fitting response it repeated itself, and when the conversation moved into new territory it started spouting nonsense.

So if canned scripts are a dead end, how else can an artificial intelligence answer questions in a human-like way? Let me introduce you to Cleverbot. Actually, I don't even need to introduce it: go to cleverbot.com and meet it yourself. It is a chatbot aiming to pass the Turing Test, but what sets it apart from other chatbots is this: while their sentences are determined by pre-written rules in their code, Cleverbot starts from scratch, like a baby, and learns something new from every conversation. Just as a child learns language through interaction with its surroundings, Cleverbot develops its language through socializing. When it first emerged in 1997 it did not know a single word; today, on cleverbot.com, it converses with approximately 4 million people every month, seeming to know a little more with each conversation.

When you ask Cleverbot a question, it digs through the millions of conversations it has had in the past and essentially asks itself, "How do people usually respond to this?" Then it picks a suitable response from those past conversations and sends it to you. In other words, it pastes sentences from someone else's past dialogue into its reply to you. You might object: "How is that artificial intelligence? That's just copy-paste." Well, we can say there is a partial intelligence here, a directly borrowed human intelligence: when you talk to Cleverbot, you are in effect conversing with the millions of people it has talked to before. The idea fits in a few lines of code.
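Here is a minimal retrieval sketch of that "borrowed intelligence," with a hypothetical conversation log and a crude word-overlap score standing in for Cleverbot's proprietary matching:

```python
# Hypothetical log: (something a person once said, how a human once replied).
LOG = [
    ("hello", "Hi there! How are you today?"),
    ("how are you", "I'm fine, thanks. And you?"),
    ("do you like music", "I love music, especially jazz."),
]

def similarity(a: str, b: str) -> float:
    """Crude word-overlap (Jaccard) score between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def borrowed_reply(message: str, min_score: float = 0.2) -> str:
    prompt, reply = max(LOG, key=lambda pair: similarity(message, pair[0]))
    if similarity(message, prompt) < min_score:
        # Nothing similar has ever been said to it, so it can only babble,
        # e.g. when faced with "I slid over the rainbow today".
        return "Interesting. Tell me more?"
    LOG.append((message, reply))   # remember the exchange for future reuse
    return reply                   # a sentence borrowed from a past human

print(borrowed_reply("do you like jazz music"))
# -> "I love music, especially jazz."
```

Everything such a bot can say lives in that log, which is both its strength and, as we are about to see, its weakness.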
Despite all this borrowed intelligence, Cleverbot still cannot pass the Turing Test; it still has a world of things to learn. Its strength is learning from humans, but that is also the root of its weakness. Say something absurd to it, like "I slid over the rainbow today," and it has no idea how to respond, because no one has ever said such a thing to it before, so it starts to babble. In fact, absurd questions are exactly what machines struggle with most in the Turing Test, and what most often exposes their artificiality.

But has no robot, no machine, no code ever passed the Turing Test? There is one claimed exception: Eugene Goostman. In 2014 it managed to convince 33% of the jury that it was human, and remember, by Turing's criterion, exceeding 30% was enough to pass. Goostman's success made quite a stir in the tech press at the time: finally, after 64 years, an AI had passed the Turing Test! But the reality is not so simple. First, in the year Goostman took the test, only three judges examined it; in other words, Goostman convinced exactly one person it was human.

The real issue, though, was that Eugene Goostman relied on deceptive framing to win the jury over. The bot claimed to be a 13-year-old boy from Ukraine, so the judges naturally put its strange answers down to the child's age and limited English. Because bots like this try to pass the Turing Test with tricks rather than by demonstrating genuine intelligence, many experts do not consider them legitimate, and Goostman's 2014 result is not regarded as particularly impressive by authorities in the field.

So we are still where we started. Turing's famous test, designed in 1950 to probe whether machines can think, has yet to be passed by any machine. Turing himself predicted that by about the year 2000, computers with around 100 megabytes of memory would play the game well enough to pass. Seventy-one years have passed since the paper, and while 100 megabytes is now smaller than a single YouTube video, no machine has proven his prophecy right. Still, some experts claim a chatbot capable of passing the test could arrive around 2030; others think it is more likely closer to 2040. Most agree we will see a machine pass the Turing Test within our lifetimes.

But I want to look at the situation from another angle. According to the Oxford philosopher John Lucas, if machines ever pass the Turing Test, it will not be only because machines got smarter. There is another side to the coin: us humans. Lucas argues that as machines become more capable, we are becoming more soulless, more "machine-like," so to speak. Think about it. First we replaced face-to-face conversation with phone calls, cutting out the visual side of our interactions. Then we replaced phone calls with emails, cutting out the sound of the other person's voice. In time emails gave way to text messages, and what we said grew ever more terse. Today we sometimes don't even bother writing a message, since we have emojis to show how happy, sad, or angry we are. Doesn't it seem as though the bandwidth of our interactions keeps narrowing?

While the robots taking the test grow stronger every day and the humans giving it grow weaker, how much longer can the Turing Test hold? In an age where human interaction is becoming ever more mechanical, how much longer will we be able to tell machines and humans apart?
