The debate on the possibility of artificial intelligence rages on, even though one side's position verges on the supernatural. Here I want to debunk, once and for all, the claim that it will never be possible to produce a sentient machine.
Here are the two sides of the debate:
- Intelligence can be reproduced artificially.
- Intelligence cannot be reproduced artificially.
In between these two positions lie a number of shades of gray. For example, Penrose would be on the "artificial intelligence is possible" side, while adding "but we would need a quantum computer for that."
In order to argue my point I will assume we are material beings. That is, our intelligence and understanding do not come from a soul that resides outside the physical realm, but from the workings of our brain. I think any rational, scientifically-minded person will agree with this.
If we accept that we are material beings and that intelligence is what brains do, then the two sides of the debate are reduced to:
- It is possible to create an artificial brain.
- It is impossible to create an artificial brain.
If by “artificial” we mean “made by people” then there is no debate: artificial brains have been created already. In fact, they are being produced by the scores every day, using a very ancient and pleasant procedure most of us are quite familiar with.
If by “artificial” we mean “made by means other than having babies” then, no, we still haven’t created a brain. Is it possible? Certainly yes. Using stem cells we can produce neurons which we can then connect. Given enough time, we could definitely create a brain: maybe not a human brain (not in a while, anyway) but a brain nonetheless.
However, by "artificial" most people mean "non-biological." In this case, things seem a bit more debatable. But consider this: it is possible to create a machine or a piece of software that reproduces the behavior of a neuron, at least in its relevant parts. This has not only been done, but is the basis for a lot of our current technology. Yes, these virtual neurons are not the same as real ones, but only because we have no need to add self-maintenance or reproduction. We have stripped neurons down to the characteristics that matter for cognition.
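To make this concrete, here is a minimal sketch of what such a stripped-down artificial neuron looks like in software: a unit that sums weighted inputs and passes the result through an activation function. The particular inputs, weights, and bias below are invented purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """A minimal artificial neuron: a weighted sum of inputs
    passed through a sigmoid activation, producing a value in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes the sum

# Hypothetical example: three input signals with arbitrary weights.
activation = neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.3)
print(round(activation, 3))  # prints 0.625
```

Networks of these units, with weights adjusted by a learning rule, are exactly the "basis for a lot of our current technology" mentioned above.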
Given enough neurons and enough information on how to place them, it is fairly obvious we could create a thinking brain. You can think of it this way: I make a machine that mimics the behavior of a neuron and I replace one of your real, live neurons with it. I repeat the procedure millions of times until your whole brain is made up of those artificial neurons. There you go: an artificial brain.
This article wouldn't be complete without mentioning some of the objections to the possibility of artificial intelligence. Most of them rest on a somewhat veiled belief in the soul, as well as a romanticized view of what "knowledge" and "understanding" mean.
I think the issue that prevents people from intuitively agreeing that machines will be (or are) able to think is that they confuse knowledge and understanding with the feeling of knowing or understanding. Our brains are statistical processors: they receive inputs from the exterior and construct statistical models based on the most likely scenario. This allows them to operate with insufficient information and to optimize problem-solving algorithms which would otherwise take too long to process. Certainty is expensive.
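The idea that the brain picks the most likely scenario from incomplete input can be illustrated, in a deliberately toy way, with a Bayesian update: combine a prior expectation with the likelihood of the partial evidence, and act on the best guess rather than waiting for certainty. The hypotheses and probabilities below are entirely made up for the sake of the example.

```python
def most_likely(prior, likelihood, evidence):
    """Toy Bayesian inference: given prior probabilities over hypotheses
    and the likelihood of the observed evidence under each hypothesis,
    return the most probable hypothesis. Normalization is unnecessary
    for a simple argmax."""
    posterior = {h: prior[h] * likelihood[h][evidence] for h in prior}
    return max(posterior, key=posterior.get)

# Invented example: a half-seen animal in the garden. Is it a cat or a raccoon?
prior = {"cat": 0.8, "raccoon": 0.2}  # cats are far more common here
likelihood = {
    "cat":     {"striped_tail": 0.1, "plain_tail": 0.9},
    "raccoon": {"striped_tail": 0.9, "plain_tail": 0.1},
}
print(most_likely(prior, likelihood, "striped_tail"))  # prints raccoon
```

A striped tail is weak evidence on its own, yet it is enough to override the prior; the point is that a useful decision comes out of insufficient information, with no "feeling of certainty" anywhere in the process.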
However, that is not what we feel. When we know something, we can feel it. We know we know. We can almost touch the certainty. We also feel understanding in a way we cannot readily explain, and so we are unable to imagine a machine, a merely mechanical thing, understanding anything. But those feelings are not knowing and understanding: they are just our bodies' way of telling us that a certain model is trustworthy, in the sense that acting according to it has a very small chance of ending in an unfavorable outcome.
I think John Searle's "Chinese room" argument is so appealing precisely because it appeals to our feeling of understanding rather than to its operational definition. In this thought experiment, a person who speaks only English sits in a room and, by following a set of rules written in English, is able to produce strings of Chinese characters that read like those of a native speaker. Searle equates saying that a machine "understands" with saying that the English speaker in the room can speak Chinese. He certainly cannot!
This argument is misguided because it misrepresents how intelligence works. Each of the neurons in my brain acts according to a specific set of rules, and none of them speaks English. What speaks English is the aggregate of neurons that makes up my self: the knowledge resides in the system. Similarly, while the person in the Chinese room does not speak Chinese, the system constituted by him and the set of rules certainly does. The argument fails because it assumes knowledge must reside in a single location, when it is actually distributed. The fact that the same thought experiment can be applied to our own brains to reach exactly the same conclusion should tell us one of two things:
- There is really no intelligence, natural or artificial.
- Intelligence is distributed and is not what Searle thinks it is.
The intuitive argument against artificial intelligence is, however, extremely powerful, since it is grounded in very vivid feelings and strong beliefs. I don't expect to have convinced anyone, but I hope you will at least consider the possibility that knowledge and understanding are not equivalent to the feeling of having them. I would also be greatly pleased if this made you reflect on the nature of intelligence and understanding; even more so if you shared your thoughts below.