Can Machines Think?

Maybe so, as Deep Blue's chess prowess suggests. And that sparks a fresh debate about the nature of mind. Is it just neurons?

By ROBERT WRIGHT

WHEN GARRY KASPAROV FACED OFF AGAINST AN IBM COMPUTER in last month's celebrated chess match, he wasn't just after more fame and money. By his own account, the world chess champion was playing for you, me, the whole human species. He was trying, as he put it shortly before the match, to "help defend our dignity."

Nice of him to offer. But if human dignity has much to do with chess mastery, then most of us are so abject that not even Kasparov can save us. If we must vest the honor of our species in some quintessentially human feat and then defy a machine to perform it, shouldn't it be something the average human can do? Play a mediocre game of Trivial Pursuit, say? (Or lose to Kasparov in chess?)

Apparently not. As Kasparov suspected, his duel with Deep Blue indeed became an icon in musings on the meaning and dignity of human life. While the world monitored his narrow escape from a historic defeat-and at the same time marked the 50th birthday of the first real computer, ENIAC-he seemed to personify some kind of identity crisis that computers have induced in our species.

Maybe such a crisis is in order. It isn't just that as these machines get more powerful they do more jobs once done only by people, from financial analysis to secretarial work to world-class chess playing. It's that, in the process, they seem to underscore the generally dispiriting drift of scientific inquiry. First Copernicus said we're not the center of the universe. Then Darwin said we're just protozoans with a long list of add-ons-mere "survival machines," as modern Darwinians put it. And machines don't have souls, right? Certainly Deep Blue hasn't mentioned having one. The better these seemingly soulless machines get at doing things people do, the more plausible it seems that we could be soulless machines too.

But however logical this downbeat argument may sound, it doesn't appear to be prevailing among scholars who ponder such issues for a living. That isn't to say philosophers are suddenly resurrecting the idea of a distinct, immaterial soul that governs the body for a lifetime and then drifts off to its reward. They're philosophers, not theologians. When talking about some conceivably nonphysical property of human beings, they talk not about "souls" but about "consciousness" and "mind." The point is simply that as the information age advances and computers get brainier, philosophers are taking the ethereal existence of mind, of consciousness, more seriously, not less. And one result is to leave the theologically inclined more room for spiritual speculation.

"The mystery grows more acute," says philosopher David Chalmers, whose book The Conscious Mind will be published next month by Oxford University Press. "The more we think about computers, the more we realize how strange consciousness is."

Though chess has lately been the best-publicized measure of a machine's humanity, it is not the standard gauge. That was invented by the great British computer scientist Alan Turing in a 1950 essay in the journal Mind. Turing set out to address the question "Can machines think?" and proposed what is now called the Turing test. Suppose an interrogator is communicating by keyboard with a series of entities that are concealed from view. Some entities are people, some are computers, and the interrogator has to guess which is which. To the extent that a computer fools interrogators, it can be said to think.
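For readers who want the mechanics rather than the metaphor, here is a bare-bones sketch of that setup in Python. Every name in it, from the class to the scoring rule, is invented for illustration; it reflects one reading of Turing's protocol, not his paper or any real implementation.

```python
# A toy rendering of the blinded protocol: an interrogator trades typed
# questions with concealed entities and guesses which are human.

class Entity:
    def __init__(self, is_human, respond):
        self.is_human = is_human   # hidden ground truth, unseen by the judge
        self.respond = respond     # callable: question string -> typed answer

def interrogate(entity, questions, judge):
    """Collect a transcript, then let the judge guess 'human' or 'machine'."""
    transcript = [(q, entity.respond(q)) for q in questions]
    return judge(transcript)

def run_test(entities, questions, judge):
    """Return the fraction of machines the judge mistook for people."""
    machines = [e for e in entities if not e.is_human]
    fooled = sum(1 for e in machines
                 if interrogate(e, questions, judge) == "human")
    return fooled / len(machines) if machines else 0.0
```

In Turing's terms, a machine "thinks" to the degree that this fraction approaches 1.0; as the article notes below, a lazy judge makes that bar much easier to clear.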

At least that's the way the meaning of the Turing test is usually put. In truth, midway through his famous essay, Turing wrote, "The original question, 'Can machines think?,' I believe to be too meaningless to deserve discussion." His test wasn't supposed to answer this murky question but to replace it. Still, he did add, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Guess again. With the century's end in sight, no machine has consistently passed the Turing test. And on those few occasions when interrogators have been fooled by computers, the transcripts reveal a less-than-penetrating interrogation. (Hence one problem with the Turing test: Is it measuring the thinking power of the machines or of the humans?)

The lesson here-now dogma among researchers in artificial intelligence, or AI-is that the hardest thing for computers is the "simple" stuff. Sure, they can play great chess, a game of mechanical rules and finite options. But making small talk--or, indeed, playing Trivial Pursuit-is another matter. So too with recognizing a face or recognizing a joke. As Marvin Minsky of the Massachusetts Institute of Technology likes to say, the biggest challenge is giving machines common sense. To pass the Turing test, you need some of that.

Besides, judging by the hubbub over the Kasparov match, even if computers could pass the test, debate would still rage over whether they think. No one doubted Deep Blue's chess skills, but many doubted whether it is a thinking machine. It uses "brute force"-zillions of trivial calculations, rather than a few strokes of strategic Big Think. ("You don't invite forklifts to weight-lifting competitions," an organizer of exclusively human chess tournaments said about the idea of man-vs.-machine matches.) On the other hand, there are chess programs that work somewhat like humans. They size up the state of play and reason strategically from there. And though they aren't good enough to beat Kasparov, they're good enough to leave the average Homo sapiens writhing in humiliation.
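To make "brute force" concrete, here is a minimal sketch of the kind of exhaustive game-tree search involved, written as generic minimax with caller-supplied helpers. It illustrates the idea only; Deep Blue's actual search was vastly more elaborate, and the helper functions here are placeholders.

```python
# Brute force in this context: grind through the game tree and score
# positions, rather than reason strategically about a handful of plans.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Score a position by trying every legal move to a fixed depth."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)        # e.g. a crude count of material
    child_scores = [
        minimax(apply_move(position, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    ]
    return max(child_scores) if maximizing else min(child_scores)
```

The power comes from sheer volume, evaluating the "zillions" of positions the article mentions, not from anything resembling the strategic reasoning of the human-style programs.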

Further, much of the progress made lately on the difficult "simple" problems -- like recognizing faces -- has come via parallel computers, which mirror the diffuse data-processing architecture of the brain. Though progress in AI hasn't matched the high hopes of its founders, the field is making computers more like us, not just in what they do but in how they do it-more like us on the inside.

So machines can think? Not so fast. Many people would still say no. When they talk about what's inside a human being, they mean way inside-not just the neuronal data flow corresponding to our thoughts and feelings but the thoughts and feelings themselves. You know: the exhilaration of insight or the dull anxiety of doubt. When Kasparov lost Game 1, he was gloomy. Could Deep Blue ever feel deeply blue? Does a face-recognition program have the experience of recognizing a face? Can computers-even computers whose data flow precisely mimics human data flow-actually have subjective experience? This is the question of consciousness or mind. The lights are on, but is anyone home?

For years AI researchers have tossed around the question of whether computers might be sentient. But since they often did so in casual late-night conversations, and sometimes in an altered state of consciousness, their speculations weren't hailed as major contributions to Western thought. However, as computers keep evolving, more philosophers are taking the issue of computer consciousness seriously. And some of them-such as Chalmers, a professor of philosophy at the University of California at Santa Cruz-are using it to argue that consciousness is a deeper puzzle than many philosophers have realized.

Chalmers' forthcoming book is already making a stir. His argument has been labeled "a major misdirector of attention, an illusion generator," by the well-known philosopher Daniel Dennett of Tufts University. Dennett believes consciousness is no longer a mystery. Sure there are details to work out, but the puzzle has been reduced to "a set of manageable problems."

The roots of the debate between Chalmers and Dennett--the debate over how mysterious mind is or isn't--lie in the work of Dennett's mentor at Oxford University, Gilbert Ryle. In 1949 Ryle published a landmark book called The Concept of Mind. It resoundingly dismissed the idea of a human soul-a "ghost in the machine," as Ryle derisively put it-as a hangover from prescientific thought. Ryle's juiciest target was the sort of soul imagined back in the 17th century by Rene Descartes: an immaterial, somewhat autonomous soul that steers the body through life. But the book subdued enthusiasm for even less supernatural versions of a soul: mind, consciousness, subjective experience.

Some adherents of the "materialist" line that Ryle helped spread insisted that these things don't even exist. Others said they exist but consist simply of the brain. And by this they didn't just mean that consciousness is produced by the brain the way steam is produced by a steam engine. They meant that the mind is the brain-the machine itself, period.

Some laypeople (like me, for example) have trouble seeing the difference between these two views-between saying consciousness doesn't exist and saying it is nothing more than the brain. In any event, both versions of strict materialism put a damper on cosmic speculation. As strict materialism became more mainstream, many philosophers talked as if the mind-body problem was no great problem. Consciousness became almost passe.

Ryle's book was published three years after ENIAC's birth, and at first glance his ideas would seem to draw strength from the computer age. That, at any rate, is the line Dennett takes in defending his teacher's school of thought. Dennett notes that AI is progressing, creating smart machines that process data somewhat the way human beings do. As this trend continues, he believes, it will become clearer that we're all machines, that Ryle's strict materialism was basically on target, that the mind-body problem is in principle solved. The title of Dennett's 1991 book says it all: Consciousness Explained.

Dennett's book got rave reviews and has sold well, 100,000 copies to date. But among philosophers the reaction was mixed. The can-do attitude that was common in the decades after Ryle wrote-the belief that consciousness is readily "explained"-has waned. "Most people in the field now take the problem far more seriously," says Rutgers University philosopher Colin McGinn, author of The Problem of Consciousness. By acting as if consciousness is no great mystery, says McGinn, "Dennett's fighting a rearguard action."

McGinn and Chalmers are among the philosophers who have been called the New Mysterians because they think consciousness is, well, mysterious. McGinn goes so far as to say it will always remain so. For human beings to try to grasp how subjective experience arises from matter, he says, "is like slugs trying to do Freudian psychoanalysis. They just don't have the conceptual equipment."

Actually there have long been a few mysterians insisting that the glory of human experience defies scientific dissection. But the current debate is different. The New Mysterians are fundamentally scientific in outlook. They don't begin by doubting the audacious premises of AI. O.K., they say, maybe it is possible-in principle, at least-to build an electronic machine that can do everything a human brain can do. They just think people like Dennett misunderstand the import of such a prospect: rather than bury old puzzles about consciousness, it resurrects them in clearer form than ever.

Consider, says Chalmers, the robot named Cog, being developed at M.I.T.'s artificial-intelligence lab with input from Dennett (see following story). Cog will someday have "skin"-a synthetic membrane sensitive to contact. Upon touching an object, the skin will send a data packet to the "brain." The brain may then instruct the robot to recoil from the object, depending on whether the object could damage the robot. When human beings recoil from things, they too are under the influence of data packets. If you touch something that's dangerously hot, the appropriate electrical impulses go from hand to brain, which then sends impulses instructing the hand to recoil. In that sense, Cog is a good model of human data processing, just the kind of machine that Dennett believes helps "explain" consciousness.
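A toy version of that reflex loop makes the point concrete. The names, the packet format, and the damage threshold below are all invented for illustration; this is not Cog's actual control code.

```python
# A "skin" reading travels to a "brain" routine, which decides whether to
# pull back. Nothing in this data flow needs to feel hot.

RECOIL_TEMP_C = 60.0   # assumed threshold above which contact counts as damaging

def skin_packet(temp_c, pressure):
    """Package a contact sensation as plain data, as Cog's skin would."""
    return {"temp_c": temp_c, "pressure": pressure}

def brain(packet):
    """Purely physical decision: recoil if the contact looks damaging."""
    return "recoil" if packet["temp_c"] > RECOIL_TEMP_C else "hold"

print(brain(skin_packet(85.0, pressure=0.2)))   # -> recoil
```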

But wait a second. Human beings have, in addition to the physical data flow representing the heat, one other thing: a feeling of heat and pain, subjective experience, consciousness. Why do they? According to Chalmers, studying Cog doesn't answer that question but deepens it. For the moral of Cog's story seems to be that you don't, in principle, need pain to function like a human being. After all, the reflexive withdrawal of Cog's hand is entirely explicable in terms of physical data flow, electrons coercing Cog into recoiling. There's no apparent role for subjective experience. So why do human beings have it?

Of course, it's always possible that Cog does have a kind of consciousness-a consideration that neither Dennett nor Chalmers rules out. But even then the mystery would persist, for you could still account for all the behavior by talking about physical processes, without ever mentioning feelings. And so too with humans. This, says Chalmers, is the mystery of the "extraness" of consciousness. And it is crystallized, not resolved, by advances in artificial intelligence. Because however humanlike machines become-however deftly they someday pass the Turing test, however precisely their data flow mirrors the brain's data flow-everything they do will be explicable in strictly physical terms. And that will suggest with ever greater force that human consciousness is itself somehow "extra."

Chalmers remarks, "It seems God could have created the world physically ex- actly like this one, atom for atom, but with no consciousness at all. And it would have worked just as well. But our universe isn't like that. Our universe has consciousness." For some reason, God chose "to do more work" in order "to put consciousness in."

When Chalmers says "God," he doesn't mean-you know-God. He's speaking as a philosopher, using the term as a proxy for whoever, whatever (if anyone, anything) is responsible for the nature of the universe. Still, though he isn't personally inclined to religious speculation, he can see how people who grasp the extraness of consciousness might carry it in that direction.

After all, consciousness--the existence of pleasure and pain, love and grief--is a fairly central source of life's meaning. For it to have been thrown into the fabric of the universe as a freebie would suggest to some people that the thrower wanted to impart significance.

It's always possible that consciousness isn't extra, that it actually does something in the physical world, like influence behavior. Indeed, as a common sense intuition, this strikes many people as obvious. But as a philosophical doctrine it is radical, for it would seem to carry us back toward Descartes, toward the idea that "soul stuff" helps govern the physical world. And within both philosophy and science, Descartes is dead or, at best, on life support. And the New Mysterians, a pretty hard-nosed group, have no interest in reviving him.

The extraness problem is what Chalmers calls one of the "hard" questions of consciousness. What Dennett does, Chalmers says, is skip the "hard" questions and focus on the "easy" questions-and then title his book Consciousness Explained. There is one other "hard" question that Chalmers emphasizes. It-and Dennett's alleged tendency to avoid such questions-is illustrated by something called pandemonium, an AI model that Dennett favors.

According to the model, our brain subconsciously generates competing theories about the world, and only the "winning" theory becomes part of consciousness. Is that a nearby fly or a distant airplane on the edge of your vision? Is that a baby crying or a cat meowing? By the time we become aware of such images and sounds, these debates have usually been resolved via a winner-take-all struggle. The winning theory-the one that best matches the data-has wrested control of our neurons and thus of our perceptual field.

As a scientific model, pandemonium has virtues. First, it works; you can run the model successfully on a computer. Second, it works best on massively parallel computers, whose structure resembles the brain's structure. So it's a plausible theory of data flow in the human brain, and of the criteria by which the brain admits some data, but not other data, to consciousness.
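A toy rendering shows how little machinery the basic idea needs. The hypotheses and their scoring rules below are invented for illustration, not drawn from Dennett's own models.

```python
# Rival hypotheses ("demons") each score how well they fit the incoming data;
# only the winner reaches the "conscious" report.

def pandemonium(observation, hypotheses):
    """hypotheses maps a label to a function scoring its fit to the data."""
    scores = {label: fit(observation) for label, fit in hypotheses.items()}
    return max(scores, key=scores.get)   # winner-take-all

obs = {"pitch_hz": 450, "rhythmic": True}
hyps = {
    "baby crying": lambda o: 0.9 if o["pitch_hz"] > 400 and o["rhythmic"] else 0.2,
    "cat meowing": lambda o: 0.7 if not o["rhythmic"] else 0.3,
}
print(pandemonium(obs, hyps))   # -> baby crying
```

Chalmers' complaint, taken up below, survives the code: the lookup tells you which hypothesis won, but nothing in it says why the winner should be heard.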

Still, says Chalmers, once we know which kinds of data become part of consciousness, and how they earned that privilege, the question remains, "How do data become part of consciousness?" Suppose that the physical information representing the "baby crying" hypothesis has carried the day and vanquished the information representing the rival "cat meowing" hypothesis. How exactly-by what physical or metaphysical alchemy-is the physical information transformed into the subjective experience of hearing a baby cry? As McGinn puts the question, "How does the brain 'turn the water into wine'?"

McGinn doesn't mean that subjective experience is literally a miracle. He considers himself a materialist, if in a "thin" sense. He presumes there is some physical explanation for subjective experience even though he doubts that the human brain-or mind, or whatever-can ever grasp it. Nevertheless, McGinn doesn't laugh at people who take the water-into-wine metaphor more literally. "I think in a way it's legitimate to take the mystery of consciousness and convert it into a theological system. I don't do that myself, but I think in a sense it's more rational than strict materialism, because it respects the data." That is, it respects the lack of data, the yawning and perhaps eternal gap in scientific understanding.

These two "hard" questions about consciousness-the extraness question and the water-into-wine question-don't depend on artificial intelligence. They could occur (and have occurred) to people who simply take the mind-as-machine idea seriously and ponder its implications. But the actual construction of a robot like Cog, or of a pandemonium machine, makes the hard questions more vivid. Materialist dismissals of the mind-body problem may seem forceful on paper, but, says McCinn, "you start to see the limits of a concept once it gets realized." With AI, the tenets of strict materialism are being realized-and found, by some at least, incapable of explaining certain parts of human experience. Namely, the experience part.

Dennett has answers to these critiques. As for the extraness problem, the question of what function consciousness serves: if you're a strict materialist and believe "the mind is the brain," then consciousness must have a function. After all, the brain has a function, and consciousness is the brain. Similarly, turning the water into wine seems a less acute problem if the wine is water.

To people who don't share Dennett's philosophical intuitions, these arguments may seem unintelligible. (It's one thing to say feelings are generated by the brain, which Chalmers and McGinn believe, but what does it even mean to say feelings are the brain?) Still, that doesn't mean Dennett is wrong. Some people share his intuitions and find the thinking of his critics opaque. Consciousness is one of those questions so deep that frequently people with different views don't just fail to convince one another, they fail even to communicate. The unintelligibility is often mutual.

Chalmers isn't a hard-core mysterian like McGinn. He thinks a solution to the consciousness puzzle is possible. But he thinks it will require recognizing that consciousness is something "over and above the physical" and then building a theory some might call metaphysical. This word has long been out of vogue in philosophy, and even Chalmers uses it only under duress, since it makes people think of crystals and Shirley MacLaine. He prefers "psychophysical."

In The Conscious Mind, Chalmers speculatively sets out a psychophysical theory. Maybe, he says, consciousness is a "nonphysical" property of the universe vaguely comparable to physical properties like mass or space or time. And maybe, by some law of the universe, consciousness accompanies certain configurations of information, such as brains. Maybe information, though composed of ordinary matter, is a special incarnation of matter and has two sides-the physical and the experiential. (Insert Twilight Zone music here.)

In this view, Cog may indeed have consciousness. So might a pandemonium machine. So might a thermostat. Chalmers thinks it quite possible that AI research may someday generate-may now be generating-new spheres of consciousness unsensed by the rest of us. Strange as it may seem, the prospect that we are creating a new species of sentient life is now being taken seriously in philosophy.

Though Turing generally shied away from such metaphysical questions, his 1950 paper did touch briefly on this issue. Some people, he noted, might complain that to create true thinking machines would be to create souls, and thus exercise powers reserved for God. Turing disagreed. "In attempting to construct such machines we should not be irreverently usurping his power of creating souls, any more than we are in the procreation of children," Turing wrote. "Rather we are, in either case, instruments of his will providing mansions for the souls that he creates."