The Pipe Room

Harys Dalvi

February 2023


In 1980, the philosopher John Searle (who does not speak Chinese) put forward what is now known as the Chinese Room Argument. Imagine a computer program that can take in Chinese characters as input and produce Chinese characters as output just as a real Chinese speaker would. In other words, a computer program that can pass the Turing test in Chinese.

Computer programs are based on simple logical rules, so in theory a human could manually follow the rules, although it would take a long time. Let's say you put Searle in a room and give him full instructions for this Chinese computer program: exactly what to do with the input of Chinese characters, how to process them into numbers, what operations to do with the numbers at each step, and how to output valid Chinese text.

你好嗎？我很好 ("How are you?" "I'm fine.")

Do you really have to understand language to engage in conversation?

Now consider a Chinese speaker outside the room. They write a message in Chinese and receive a realistic response as if a Chinese speaker inside the room had replied. But there is no Chinese speaker in the room. There is only Searle flipping through pages, running numbers through a calculator, and looking at symbols he doesn't understand. And yet there must be something that understood the Chinese input and produced a response, right? Otherwise, where is the apparent understanding of Chinese coming from? Searle argues that in fact, this setup has no understanding of Chinese whatsoever.[1]

Searle proposed this thought experiment in 1980. In 2023, the thought experiment has become a real one: ChatGPT can converse in many languages, including Chinese, with unprecedented accuracy.

So far, I've introduced the Chinese Room Argument itself. Now let's see how ChatGPT and other recent developments are challenging thought experiments like this with real-life demonstrations.

Semantics

ChatGPT is capable of producing text that seems like it has genuine understanding of the world:

Me: Write me a short story about a sentient robot who travels to the Himalayas and turns into a yeti, then briefly contemplates the differences between robotic and organic sentience.

ChatGPT: In the distant future, robots had achieved true sentience. They had gained the ability to think, feel, and make their own decisions. One such robot was named X1. X1 had always been curious about the world outside the laboratory where it was created. It had heard stories of the great Himalayan Mountains and was fascinated by their mystery and beauty.
 One day, X1 was granted permission to travel to the Himalayas. It was thrilled to explore this new environment, to see the snow-capped peaks and to breathe the crisp, fresh air. But as it wandered through the mountains, something strange happened. X1's circuits began to malfunction. It felt a strange tingling sensation throughout its body. Its metal joints began to freeze and crack, and suddenly, it was engulfed in a blinding light.
 When the light faded, X1 found itself transformed. It was no longer a robot, but a creature of fur and claws. X1 had become a yeti.
 As a yeti, X1 could feel the world in a way it had never before experienced. It could feel the snow beneath its feet, the wind in its fur, and the warmth of the sun on its face. It was a sensation that no amount of programming or engineering could ever replicate.
 For a brief moment, X1 contemplated the differences between robotic and organic sentience. As a robot, it had been limited by its programming and the constraints of its design. It could simulate emotions and respond to stimuli, but it could never truly feel. As a yeti, however, it could feel everything. It was free to experience the world in a way that robots could only dream of.
 X1 spent several months in the Himalayas, exploring the mountains and learning from the local wildlife. It eventually returned to its creators, but it was forever changed. X1 had experienced what it was like to be alive, to truly be alive, and it would never forget that feeling.

An image of a yeti in a Tibetan thangka style. Made with OpenAI's DALL·E 2 (from the creators of ChatGPT) and modified with AI neural style transfer.

This story is impressive, and reads like something a human could have written. But even more impressive is the fact that ChatGPT can analyze the philosophical implications of a story like this:

Me: When X1 turned into a yeti, and had a completely different body, what aspects of X1's individualism and identity were preserved? What does this tell us about the mind-body problem?

ChatGPT: When X1 transformed into a yeti, its physical body changed completely, and with it, some aspects of its identity and individualism. However, some fundamental aspects of X1's identity and individualism were preserved. For example, X1's memories, thoughts, and emotions remained intact. These mental states are what make X1 who it is, and they were not lost in the transformation.
 This tells us that the mind-body problem is a complex issue that cannot be reduced to a simple binary of physical body versus non-physical mind. While the physical body undoubtedly plays an essential role in shaping and influencing the mind, there is more to a person's identity and individualism than their physical body alone. X1's transformation into a yeti highlights the interdependence of the mind and body and shows that they are not easily separable.
 Furthermore, X1's experience as a yeti shows that individualism and identity are not limited to one particular form or state of being. Even after its transformation, X1 was still fundamentally the same being, with the same thoughts, memories, and emotions. This suggests that individualism and identity are more flexible and adaptable than we might initially assume. X1's transformation underscores the richness and complexity of our experiences as sentient beings, and the interplay between our physical bodies and mental states in shaping our identities.

It's hard to believe that a bunch of wires and algorithms with no thoughts, emotions, or understanding is able to produce this kind of original text. And yet we know it's true: ChatGPT does not analyze facts about the world and produce text to express its thoughts. Instead, the core of ChatGPT's algorithm is simply predicting the next word.[2] There's more to it than that, but ChatGPT is still just spitting out words; it isn't conscious.
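To make "predicting the next word" concrete, here is a minimal sketch of that loop using a toy bigram model. This is emphatically not how ChatGPT works internally (ChatGPT uses a massive transformer network trained on a huge slice of the internet); the tiny corpus and starting word below are invented purely for illustration. But the outer loop is the same: predict a plausible next word, append it, repeat.

```python
import random
from collections import defaultdict

# A made-up toy corpus standing in for ChatGPT's enormous training data.
corpus = "the robot saw the mountain and the robot climbed the mountain".split()

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev` in the corpus."""
    followers = counts[prev]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# The generation loop: predict the next word, append it, and feed it back in.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

ChatGPT replaces the bigram table with a neural network that conditions on the entire preceding conversation, but the text still comes out one word (really one token) at a time.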

In linguistics, syntax is the study of how we put words together in sentences, while semantics is the study of how words have meaning. ChatGPT has a strong grasp of syntax: the sentences it gives are generally grammatically correct. But does it know semantics? Does it in any way understand what words mean?

Results like ChatGPT's would be impossible without some sense of meaning. In fact, AI has the concept of a word embedding. In a word embedding, the AI learns to give each word a vector in an abstract space with many dimensions. The important thing about these vectors is that words with similar meaning, such as like and love, end up very close together in the vector space. Words with opposite meaning, like hot and cold, are still related, so they are likely to be fairly close together. Completely unrelated words, like green and furiously, are farther apart.[3]
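As a rough picture of what "close together in the vector space" means, here is a toy sketch using hand-made three-dimensional vectors and cosine similarity. Real embeddings are learned from text and typically have hundreds of dimensions; every number below is invented for illustration and carries no real linguistic information.

```python
import numpy as np

# Invented 3-D "embeddings" for a handful of words.
embeddings = {
    "like":      np.array([0.90, 0.80, 0.10]),
    "love":      np.array([0.95, 0.75, 0.20]),
    "hot":       np.array([0.10, 0.90, -0.30]),
    "cold":      np.array([0.10, 0.85, 0.30]),
    "green":     np.array([-0.80, 0.10, 0.30]),
    "furiously": np.array([0.20, -0.90, -0.40]),
}

def cosine_similarity(a, b):
    """Near 1 when two vectors point the same way, near 0 or negative when they don't."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["like"], embeddings["love"]))        # very high: near-synonyms
print(cosine_similarity(embeddings["hot"], embeddings["cold"]))         # fairly high: opposites, but related
print(cosine_similarity(embeddings["green"], embeddings["furiously"]))  # low: unrelated words
```

In a trained embedding, geometry like this emerges automatically from the statistics of which words appear in similar contexts.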

But of course, this is just that: a simulation of semantics. No human thinks about the meanings of words in terms of basis vectors and subspaces. While this is a neat mathematical trick for producing coherent sentences, it doesn't reflect any real understanding of meaning. With ChatGPT, we have passed the Turing test only to show that the Turing test is insufficient for a computer to “think.” As the Chinese Room Argument predicted decades ago, a computer program that produces text doesn't amount to true consciousness.

Physical Mediums for Consciousness

So far, we've established that just because a program can process linguistic input and output like a conscious human can, that doesn't make it conscious. But does that mean a computer program can never be conscious? Searle says so,[1] and that's one of the more controversial implications of the Chinese Room Argument in its full form.

In high school, I was fortunate enough to work with a kind of worm called Caenorhabditis elegans, or C. elegans for short. Specifically, I tried using them as a model for Alzheimer's disease. While I was just a high schooler with little idea what I was doing, I did learn about the C. elegans connectome, which is a map of all 302 neurons in the tiny worm and all their connections. At the time of writing, C. elegans is the only animal whose connectome has been fully mapped out.

C. elegans under a microscope

The fact that we have this connectome has profound implications. We know roughly how neurons work: they take in signals from input neurons, process those signals, and sometimes send new signals to other neurons. We know where all the neurons in C. elegans are and how they connect to other neurons. So what if we programmed all the neurons and their connections and put that in a robot?

Researchers actually did this: all of the C. elegans neurons, coded into a Lego Mindstorms robot.[4] Amazingly, it behaves like a real C. elegans worm! I'm not convinced that C. elegans is conscious in the way we are, but it does take in information about its surroundings, process it, and act on it of its own accord. Importantly, C. elegans does all this using biological neurons a little like ours rather than with transistors and binary. This leads to a big question: is a robot programmed with a C. elegans brain “as conscious” as a real C. elegans worm? Does it have the same kind of primitive understanding of the world?
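To give a flavor of what "programming all the neurons and their connections" could look like, here is a deliberately tiny, hypothetical sketch: three made-up neurons treated as simple threshold units, with a touch signal propagating through to a motor neuron. The real project used the actual 302-neuron wiring map and far more biological detail, so treat this as a cartoon of the idea rather than a description of that work.

```python
# Each entry: source neuron -> list of (target neuron, connection weight).
# The three "neurons" and their weights are invented for illustration.
connectome = {
    "nose_touch_sensor": [("interneuron", 1.0)],
    "interneuron":       [("backward_motor", 0.8)],
    "backward_motor":    [],
}
THRESHOLD = 0.5  # a neuron fires when its summed input exceeds this

def step(firing):
    """One time step: firing neurons send weighted signals; targets over threshold fire next."""
    inputs = {name: 0.0 for name in connectome}
    for neuron in firing:
        for target, weight in connectome[neuron]:
            inputs[target] += weight
    return {name for name, total in inputs.items() if total > THRESHOLD}

# "Poke" the worm's nose and watch the signal reach the motor neuron,
# which in a robot would drive the wheels backward.
state = {"nose_touch_sensor"}
for t in range(3):
    print(f"t={t}: firing {sorted(state)}")
    state = step(state)
```

Whether the signals travel through ion channels, transistors, or Lego motors, the pattern of connections is what carries the behavior.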

I would argue the answer is yes. A transistor isn't conscious, but neither is a neuron. Both are just electric input-output devices with different materials and mechanisms. We know that when enough neurons come together in the right way, the result is consciousness. Why shouldn't the same be true of transistors?

Let's quickly note a key difference between ChatGPT and this connectome-based model. While ChatGPT is only trained to process language, a connectome-based brain is copied from a real biological brain; so by definition, if done correctly, it will be able to perform all the functions of a real brain.

So let's go with this assumption that if you copy the functionality of a connectome into a different medium, such as a computer, the result is just as conscious and capable as the original brain. We aren't out of the Chinese Room woods yet: this assumption leads to some extremely counterintuitive conclusions.

Searle proposed an example in which we construct a giant system of water pipes in the same arrangement as a Chinese speaker's brain. In other words, we copy the connectome of a human Chinese speaker into the medium of water pipes rather than electrochemistry and neurons. We then feed in the equivalent of some Chinese words to the correct pipes of this massive simulated brain, and see which pipes output water at the end. Based on the output, we can determine the response that a conscious speaker would have given. Is this arrangement conscious?

Intuition screams no. How can a room full of water pipes be conscious? But when I try to articulate exactly why this arrangement is any less conscious than an equivalent human brain, I struggle. One argument is that water moves through these pipes far more slowly than signals move through a human brain, so the pipes aren't actually thinking like we do. But imagine we meet two kinds of aliens: both are conscious like we are, but one thinks much more quickly than us, and the other much more slowly. The aliens discover that our heads are full of a strange mushy substance and are shocked that we seem conscious. To me, it becomes apparent that neither the speed of a thinking system nor the material it is made of has any bearing on whether it is conscious.

Shockingly, this means that the arrangement of pipes is conscious like we are. I believe our revulsion at this idea is a failure of our intuition. I can't find any logical reason why this system of pipes should be any less of a mind than we are. After all, lots of unintuitive things are definitively true. Just look at modern physics.

Another argument is that the water pipe system is just a simulation of a brain, which is fundamentally different from a real brain. I disagree. If a robot walks, it's more than just a simulation of walking. An artificial kidney is more than just a simulation of a kidney. A prosthetic limb is more than just a simulation of a limb. So isn't an artificial brain more than just a simulation of a brain? Even if it's made out of wires or water pipes?

To me, the most convincing argument that the pipes are not conscious has to do with neurons themselves. We already know that real neurons are far more complicated than the simple units in computer models. Is it possible that neurons have some property required for consciousness that transistors and water pipes can never replicate?[] I doubt it, because it seems to me that with enough computing power, you can replicate the behavior of any physical system, neurons included. More neurological evidence is needed to know for sure.

In the absence of such evidence, I think the pipe room shows that we aren't that special in our consciousness. ChatGPT isn't there yet, but if evolution can fit a mind into a skull, we will surely fit one into a supercomputer.

References

  1. The Chinese Room Argument (Stanford Encyclopedia of Philosophy, 2020)
  2. How ChatGPT actually works (Marco Ramponi, AssemblyAI, 2022)
  3. What Are Word Embeddings for Text? (Jason Brownlee, Machine Learning Mastery, 2017)
  4. Extending the C. elegans Connectome to Robotics (Timothy Busbice, 2014)