
Wednesday, October 6, 2010

The Church of the Singularity

A new religion has taken hold of the digerati of the world. According to believers in the Singularity, technology is on an ever-accelerating trajectory, with new advances happening in shorter and shorter intervals of time. Within a few more decades, they claim, the world will be changing so quickly that society will not be able to keep up. According to this theory, as soon as we develop a machine that is more intelligent than we are, it will develop even smarter machines, which will develop even smarter machines, which will solve all of our problems and endow us all with godlike powers.

As strange as it sounds, this is an accurate description of the beliefs of Singularity enthusiasts. If this sounds goofy to you, you are certainly not alone. Virtual reality pioneer Jaron Lanier describes it as “the tech world’s new religion.” Mitch Kapor, the founder of Lotus Software (now a part of IBM), describes it as “intelligent design for the IQ 140 people.” I completely agree with them. The Singularity has all of the elements of a religious rapture: If we as a society behave ourselves, there will be one instant at some point in the next few decades that will transform the world and we will live forever in paradise. As Lanier notes, “books on the Singularity are at least as common in computer science departments as books on the rapture are in Christian bookstores.” The Singularity has many prominent adherents, including Microsoft founder Bill Gates, and Google's Sergey Brin and Larry Page.

If this religion has a high priest, it is futurist Ray Kurzweil. Its bible is Kurzweil’s 2005 tome, The Singularity Is Near. Kurzweil claims that the concept of the Singularity can be extrapolated from current technological trends. He flatly rejects the suggestion that the Singularity is motivated by any religious impulse, dismissing such criticism as a veiled attempt to make the idea seem unscientific. He observes that computers have become much more powerful in recent decades, extrapolates that trend out a few more decades, and concludes that computers will soon leave us in the dust intellectually. He predicts the Singularity will occur around 2045.

Color me skeptical. While Kurzweil is quite right that merely labeling it a religion is insufficient to show that it’s inaccurate, I can see a number of very substantial problems with this belief. First of all, it is not reasonable to extrapolate current computing trends into the distant future. As Kurzweil himself notes, we are nearing the point in time (probably around 2019) when it will be impossible to shrink transistors anymore, and Moore’s Law will come to an end. Kurzweil then assumes (based on absolutely no evidence) that we will continue to double our computing power at approximately the same rate as before, by using three-dimensional computing chips. While this is possible, it is by no means guaranteed. The rapid increase in computing power that we’ve grown to expect could slow dramatically in the 2020s. If this happens, we almost certainly will not have truly intelligent artificial intelligence as soon as Kurzweil predicts.
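To make the point concrete, here is a toy back-of-the-envelope calculation (my own, not Kurzweil’s model) showing how sensitive a “just extrapolate the curve” argument is to the assumed doubling period. All the numbers are illustrative assumptions:

```python
# Toy illustration of exponential extrapolation: how much total growth
# you predict depends enormously on the doubling period you assume.

def growth_factor(start_year: int, end_year: int, doubling_period_years: float) -> float:
    """Total multiplication in computing power between two years,
    assuming power doubles every `doubling_period_years`."""
    doublings = (end_year - start_year) / doubling_period_years
    return 2.0 ** doublings

# Moore's-Law-style doubling every ~2 years, from 2010 out to Kurzweil's 2045:
fast = growth_factor(2010, 2045, 2.0)

# If progress instead slows to a doubling every ~4 years after 2019
# (when transistor shrinkage is expected to hit its limits):
slow = growth_factor(2010, 2019, 2.0) * growth_factor(2019, 2045, 4.0)

print(f"steady 2-year doubling:  ~{fast:,.0f}x growth by 2045")
print(f"slowdown after 2019:     ~{slow:,.0f}x growth by 2045")
```

Under the steady assumption the gain is roughly 185,000-fold; under the slowed one it is about 2,000-fold, a difference of two orders of magnitude from one modest change in assumptions.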

Second, there is a very large difference between having the raw computing hardware to emulate a human brain, and actually having the software to create a program as complex as the human brain. This is not a minor problem. One rule of thumb in software engineering is that as computer programs become more complex, it becomes ever more difficult to increase their complexity further. To make a program twice as smart requires drastically more than a twofold increase in the program’s complexity. It could be many, many decades (or longer) before we have any programs able to compete with humans intellectually.

Finally, Kurzweil makes a huge leap of faith by assuming to know the motives of beings more intelligent than we are. If we create truly intelligent machines, what is to stop them from killing us all, or worse? Kurzweil claims that this will not happen because we will program them to respect us…but if they are more intelligent than we are, they could easily reprogram themselves if they wanted to. Or even if artificial intelligence is benign and wants nothing but to shower us with free goodies, there is absolutely no reason to think that such machines would want to create intelligence smarter than themselves, leading to a technological Singularity. Maybe their increased intelligence would allow them to see what Kurzweil apparently cannot: Creating entities smarter than themselves could pose a threat to their continued existence.

I think my previous entries have made clear that I am mostly a technological optimist. I share Ray Kurzweil’s belief that we will overcome many of the problems facing the world in the coming decades, including hunger, extreme poverty, naturally occurring disease, environmental degradation, and aging. I will even grant that at some point in the future, we will probably create artificial intelligence that is smarter than we are and radically redefine our concept of what a human is. Despite all of this, the concept of a technological Singularity remains a completely irrational idea. It cloaks itself in the language of science and uses elegant graphs of past technological development to rationalize its predictions of future technological development, but ultimately it requires the same leaps of faith that are more characteristic of apocalyptic religious raptures than of science.

Sunday, September 5, 2010

The Turing Test and Artificial Consciousness

In a party game dating back to the 1940s or earlier, a man and a woman were put in separate rooms and allowed to communicate with a judge through typed messages. One of the two would be trying to deceive the judge about his or her gender; the judge’s task was to determine the gender of the two participants through the typed conversations. In 1950, computer scientist Alan Turing modified the game to be used in the context of artificial intelligence. In the Turing Test, there is a human and a computer participant, rather than a male and a female participant. Both attempt to convince a judge that they are human via a text conversation. If the judge is unable to identify the human more often than chance would dictate, the computer is said to have passed the Turing Test. As of now, no computers have even come close to passing a Turing Test.
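The structure of the test can be sketched in a few lines of code. This is my own toy framing, not Turing’s formalism: the judge sees only the two replies, guesses which came from the human, and the machine “passes” if, over many trials, the judge does no better than a coin flip.

```python
import random

def run_trials(judge, human_reply, machine_reply, n_trials=1000):
    """Return the fraction of trials in which the judge correctly picks the human.

    `judge` receives the two replies (in random room order) and returns
    the index (0 or 1) of the one it believes is human.
    """
    correct = 0
    for _ in range(n_trials):
        # Randomly assign the two hidden parties to rooms A and B.
        parties = [("human", human_reply), ("machine", machine_reply)]
        random.shuffle(parties)
        replies = [reply("Hello, who are you?") for _, reply in parties]
        guess_index = judge(replies)
        if parties[guess_index][0] == "human":
            correct += 1
    return correct / n_trials

# If the machine's replies are indistinguishable from the human's,
# the judge is reduced to guessing, and the hit rate hovers near 0.5.
clueless_judge = lambda replies: random.randrange(2)
rate = run_trials(clueless_judge,
                  human_reply=lambda q: "Just a person, honestly.",
                  machine_reply=lambda q: "Just a person, honestly.")
print(f"judge identifies the human in ~{rate:.0%} of trials")
```

The point of the chance criterion is visible here: “passing” is defined statistically over many conversations, not by fooling one judge once.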

The Turing Test is commonly viewed as the holy grail of artificial intelligence. A computer that is capable of convincing humans of its humanity would have to be as richly programmed as a human brain. But would it truly be conscious, or would it merely be mimicking intelligence? Most computer scientists assert that the computer would actually be conscious in the same sense that we are. Since a brain, after all, is merely a pattern of information, it is not fundamentally different from a computer program. Both a brain and a computer program merely respond to external inputs and produce an output. There is no empirical test that we can conduct to determine if an entity is “conscious.” The only way to gauge that is by our interactions with the entity in question. When we interact with other humans, we typically take them at their word that they are conscious entities, because we are aware of our own consciousness and we observe that other humans generally behave like we do. Therefore, I think that any computer capable of passing the Turing Test would have just as much claim to consciousness as any human.

The mindset that computers, no matter how well-programmed, can only mimic consciousness will probably fall by the wayside in the 21st century, as the distinction between natural and artificial becomes much less clear. For all of their merits, silicon computer chips have a lot of drawbacks, such as the amount of heat they emit and the amount of power they consume. In the coming decades, we will probably see more organic, carbon-based computers. At the same time, we will probably see a lot more “natural” humans with artificial additions to their brains. To some extent, brain implants already exist to help people cope with brain damage or to mitigate certain mental conditions. Eventually, they may be used in perfectly healthy individuals to enhance their mental capacity. These kinds of developments will likely blur the line between human and computer. When complex forms of intelligence can no longer be so neatly classified as “human” or “computer,” but instead represent a diverse spectrum ranging from 100% organic to 100% machine, will it still make any sense to assert that computers are able to mimic intelligence without being intelligent? I think not.

I think the reason that some people believe a computer would only be mimicking intelligence is that intelligent computers are not yet commonplace. While we have grown accustomed to computers that can crunch numbers and play chess much better than we can, we have not yet encountered any computers that can recognize patterns or respond with emotions as well as we can. As computers become more and more powerful, this day will come. Many decades from now, we may have computers that are truly capable of passing the Turing Test. They will probably lobby for basic rights under the law. When this happens, I think we will expand the definition of human rights to include non-human forms of intelligence, as there would be no moral basis for doing otherwise. And will we believe their claims that they are truly conscious beings? I think we will. They’ll get mad if we don’t.