Is Artificial Intelligence Possible? Well, Yes

The debate on the possibility of artificial intelligence seems to rage on, despite the fact that one side's position verges on the supernatural. Here I want to debunk, once and for all, the claim that it will never be possible to produce a sentient machine.

Here are the two sides of the debate:

  1. Intelligence can be reproduced artificially.
  2. Intelligence cannot be reproduced artificially.

Between those two positions there are a number of shades of gray. For example, Penrose would be on the "artificial intelligence is possible" side while adding "but we would need a quantum computer for that."

In order to argue my point I will assume we are material beings. That is, our intelligence and understanding do not come from a soul that resides outside the physical realm, but from the workings of our brain. I think any rational, scientifically-minded person will agree with this.

Frog Brains in, umm… sort of vivo? (Photo credit: Mal Cubed)

If we accept that we are material beings and that intelligence is what brains do, then the two sides of the debate are reduced to:

  1. It is possible to create an artificial brain.
  2. It is impossible to create an artificial brain.

If by "artificial" we mean "made by people," then there is no debate: artificial brains have been created already. In fact, they are being produced by the score every day, using a very ancient and pleasant procedure most of us are quite familiar with.

If by "artificial" we mean "made by means other than having babies," then no, we still haven't created a brain. Is it possible? Certainly. Using stem cells we can produce neurons, which we can then connect. Given enough time, we could definitely create a brain: maybe not a human brain (not for a while, anyway), but a brain nonetheless.

However, by "artificial" most people mean "non-biological." In this case, things seem to be a bit more debatable. But consider this: it is possible to create a machine or a piece of software that reproduces the behavior of a neuron, at least in the parts that matter. This has not only been done, but is the basis for a lot of our current technology. True, these virtual neurons are not the same as real ones, but that is because we have no need to give them self-maintenance or reproduction: we have stripped neurons down to the characteristics that are important for cognition.
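
To make this concrete, here is a minimal sketch of such a stripped-down neuron: a weighted sum of inputs passed through an activation function. The weights, bias, and inputs are invented for illustration; real models (and real neurons) are richer.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A neuron reduced to what matters for cognition: weigh the
    incoming signals, add them up, and 'fire' through an activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid firing rate, 0..1

# Three incoming signals with made-up connection strengths:
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=-0.2))
```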

Phrenology (Photo credit: Wikipedia)

Given enough neurons and enough information on how to place them, it is fairly obvious we could create a thinking brain. You can think of it this way: I make a machine that mimics the behavior of a neuron and I replace one of your real, live neurons with it. I repeat the procedure millions of times until your whole brain is made up of those artificial neurons. There you go: an artificial brain.
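
The replacement thought experiment amounts to wiring many such units together. Here is a toy sketch of the idea, with made-up weights, where the aggregate computes something no single unit does:

```python
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_brain(stimulus):
    # First layer: two "sensory" neurons looking at the same stimulus.
    h1 = neuron(stimulus, [0.9, -0.5], bias=0.1)
    h2 = neuron(stimulus, [-0.3, 0.7], bias=-0.2)
    # Second layer: a "decision" neuron reading only the first layer.
    return neuron([h1, h2], [1.2, -0.8], bias=0.0)

print(tiny_brain([0.4, 0.6]))
```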

This article wouldn't be complete without mentioning some of the objections to the possibility of artificial intelligence. Most of them include a somewhat veiled belief in the soul, as well as a romanticized view of what "knowledge" and "understanding" mean.

I think the issue that prevents people from intuitively agreeing that machines will be (or are) able to think is that they confuse knowledge and understanding with the feeling of knowing or understanding. Our brains are statistical processors: they receive inputs from the outside world and construct statistical models of the most likely scenario. This allows them to operate with insufficient information and to shortcut problem-solving that would otherwise take too long. Certainty is expensive.
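
As a toy illustration of what "operating with insufficient information" means, here is a sketch of a brain-as-statistical-processor committing to the most likely scenario before the evidence is conclusive. The prior and likelihoods are invented numbers:

```python
# A rustle in the grass: predator or wind? Update a belief with each
# noisy observation and act once the model is "trustworthy enough",
# instead of waiting for certainty.
p_rustle_if_predator = 0.8  # assumed likelihoods
p_rustle_if_wind = 0.3

belief = 0.5  # prior probability that a predator is present
for rustle in range(3):
    numerator = p_rustle_if_predator * belief
    belief = numerator / (numerator + p_rustle_if_wind * (1 - belief))
    print(f"belief after rustle {rustle + 1}: {belief:.2f}")

print("run!" if belief > 0.9 else "keep grazing")  # certainty is expensive
```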

However, that is not what we feel. When we know something, we can feel it. We know we know. We can almost touch the certainty. We also feel understanding in a way that we cannot readily explain and therefore are unable to imagine a machine, which is a mechanical being, understanding anything. But those feelings are not knowing and understanding: they are just ways our bodies have of telling us a certain model is trustworthy, in the sense that operating according to it has a very small chance of resulting in an unfavorable outcome.

Complete neuron cell diagram (Photo credit: Wikipedia)

I think the "Chinese room" argument by John Searle is so appealing precisely because it appeals to our feeling of understanding and not to its operational definition. In this thought experiment, there is a person in a room who speaks only English but who, by following a set of rules written in English, is able to build strings of Chinese characters that read like native speech. Searle equates saying that a machine "understands" with saying that the English speaker in the room can speak Chinese. He certainly can't!

This argument is misguided because it misrepresents how intelligence works. For example, each of the neurons in my brain acts according to a specific set of rules, and, indeed, none of them speaks English. What speaks English is the aggregate of neurons that makes up my self: the knowledge resides in the system. Similarly, while the person in the Chinese room does not speak Chinese, the expert system constituted by him and the set of rules certainly does. The argument fails because it assumes knowledge has to sit in a single location, whereas it is actually distributed (see the sketch after this list). The fact that the same thought experiment can be applied to our own brains to reach exactly the same conclusion should tell us one of two things:

  1. There is really no intelligence, natural or artificial.
  2. Intelligence is distributed and is not what Searle thinks it is.
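
In code, the distributed-knowledge point looks something like this: a mechanical rule-follower plus a rulebook. Neither component understands anything on its own; the competence, such as it is, belongs to the system. The phrasebook entries are invented stand-ins for Searle's symbol-manipulation rules:

```python
# The rulebook: input strings mapped to replies. It is inert data.
rulebook = {
    "ni hao": "ni hao! ni hao ma?",
    "xie xie": "bu ke qi",
    "zai jian": "zai jian!",
}

def person_in_room(message):
    """Follows the rules mechanically, like Searle's English speaker,
    with no idea what any of the symbols mean."""
    return rulebook.get(message, "ting bu dong")  # "I don't understand"

# The room as a whole converses; neither part does.
print(person_in_room("ni hao"))
```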

The intuitive argument against artificial intelligence is, however, extremely powerful, since it is grounded in very vivid feelings and strong beliefs. I don't expect to have convinced anyone, but at least I hope you will consider the possibility that knowledge and understanding are not equivalent to the feeling of having them. I would also be greatly pleased if this made you reflect on the nature of intelligence and understanding; even more so if you shared your thoughts below.


21 thoughts on "Is Artificial Intelligence Possible? Well, Yes"

  1. Mike Johnson

    It's quite a question, isn't it? How possible (and feasible) is AI?

    My own beliefs run along the following lines:
    We’re far from understanding the brain, but we don’t have to understand all (or even much) about it to copy it in software. We just need good brain scanners and fast computers. Plotting the trends within these two technologies, it seems possible, perhaps plausible, that AI could happen in our lifetimes.

    This is a bit disconcerting, as AI (or ‘AGI’ as some folks call it) would seem to have a rather seismic impact on, well, just about everything. I don’t buy the full-on paranoia that “Creating an AGI without a proof of Friendliness is essentially equivalent to killing all people!” — but I have a healthy respect for an AI that could change its own code, hit ‘recompile’, and get smarter. And while I love Moore’s Law, I do agree it’s probably increasing the chances of someone creating an AI that does Something Really Bad.

    (Sidenote: here’s a fascinating/troubling angle on that…
    http://www.gwern.net/Slowing%20Moore%27s%20Law )

    Personally, I find myself musing over qualia and valence in the context of digital brains. If you copy a brain that's happy, is that copy also happy, in a real sense? If you use software to simulate a brain that's experiencing a pain stimulus, does that digital simulation experience pain, in any visceral, real sense? I don't have a clue. Obviously words fail us: we don't have the proper vocabulary to really ask these questions. But I think they poke at a real hole in our understanding. And mysteries are cool.

    1. David Yerle Post author

      Indeed. An AI that can make itself smarter is pretty scary. I think the only way to keep it under control will be to merge ourselves with it, so that we ourselves are the artificial intelligence that makes itself smarter. Otherwise things could get ugly.

  2. geneticfractals

    In my model of intelligence, the question is silly. Intelligence is often thought of as the complex reasoning of our human brain, and questions of replicating one hundred billion neurons once looked like quite a challenge. My four-thousand-billion-byte back-up drive is discussing the matter with Watson and I'll have to get back to you on that.

    The point is that there are degrees of intelligence. Does a baby have intelligence? Does a dog have intelligence? If we are willing to define intelligence as "the capability to respond meaningfully to external triggers," then we can answer those questions. We can make a more elaborate definition, but if we promise ourselves to leave God and mystical consciousness out of it, we should come up with something rational that can be reduced to something like my simplistic definition here.

    This allows us to trace intelligence back to its evolutionary roots. Intelligence in its most rudimentary form comes down to the ability of things to interact meaningfully with their environment. This means that a rock with a hole in it that happens to lie in a quarry full of rocks with bits sticking out has intelligence: it can match up with another rock.

    Therefore, almost everything we see around us has its own intelligence, and the question of whether we can create intelligence artificially is silly.

    Now, this may seem like a silly response to the question, but the point is that (a) rudimentary intelligence is not well defined and (b) what people want is to replicate humans. Forget intelligence. The question is: can we create androids that can pass a 3D version of the Turing test? I suspect we are getting there, but we shouldn't get misty-eyed about whether that android is intelligent (it is) or whether it will have consciousness (it will). The real question is: can it join us in the bar, and can it comment on this blog?

    (btw, I am an android)

    1. David Yerle Post author

      I think your definition of intelligence is perfect and pretty much sums up the way I see it too. I wanted to write more on what intelligence is but I thought the article was too long already. The whole point was that we mystify intelligence in such a way that only what humans do qualifies as such. Not only that, but we even overstate what our own intelligence does, probably because we’re so full of ourselves.
      The day an android comments on this blog I'll be pretty darn surprised, though I suspect that by the time androids can comment on blogs there will be no blogs left; something else will have replaced them.

  3. SilverSeason

    A frog is intelligent as a frog. He/she interacts with the environment and learns from experience. Whether the frog is conscious of being a frog — and an intelligent one at that — is beside the point. As a human being I would make a very stupid frog. The AI debate often gets mired in issues of what is intelligence and what is consciousness. If intelligence is species specific, then an artificial brain can be quite an intelligent artificial brain, but maybe not an intelligent human being or frog. Oh my.

  4. bloggingisaresponsibility

    I think that artificial intelligence is not only possible in principle, but that human intelligence can be greatly exceeded. Now, whether this produces consciousness is another matter; the two issues are different.

    So is the issue of knowledge. If one can perform a task, then one has knowledge. Searle's Chinese room is an irrelevant appeal to sentiment.

    1. David Yerle Post author

      Exactly. I can’t wait for the day when intelligence stops being something you’re born with and starts being something you can choose to have. That will be a pretty interesting world.

  5. Valeness

    Great article! I wish you had gone into more depth on the fact that our knowledge and intelligence are part of a collective, not singular. On that note, you may want to look into distributed processing and agent-based programs. I don't know a lot about them, but from what I can gather, the agents have rules, but the system they are part of does not. Apparently these systems are not only apt to mutate and grow, but are designed to.

    As you were saying, our brain is nothing but a collective of neurons. So, if you were to scale up our brain a million times, all you'd see is basically a cloud of individuals, or "neurons." Imagine if each agent in one of these programs had the rules of a neuron, but the collective were free to mutate as it wished.

    I believe it may result in something resembling A.I.
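
    Something like this toy sketch is what I have in mind (entirely made up, not code from any real agent-based framework): each agent follows one fixed local rule, and the pattern that emerges is written down nowhere.

    ```python
    import random

    # Each agent's only rule: adopt the majority state of itself and its
    # two neighbours. No agent encodes the clustering that emerges.
    n = 20
    agents = [random.choice([0, 1]) for _ in range(n)]

    for step in range(5):
        agents = [
            1 if agents[i - 1] + agents[i] + agents[(i + 1) % n] >= 2 else 0
            for i in range(n)
        ]
        print("".join(str(a) for a in agents))
    ```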

    I don't know if you're one for fiction, but Prey, by Michael Crichton, is a good book revolving around a sentient nanobot cloud. The main character is a programmer with a background in the behavioral biology of swarms, and he uses that knowledge to write agent-based programs.

    Anyways, I enjoyed the read and wouldn't mind seeing more computer-related things!

    1. David Yerle Post author

      I'll take a look at Crichton's book; it seems just like the kind of stuff I like to read. I wanted to go more in depth on intelligence, but I was already at two pages and, in my experience, for every paragraph over one page I lose a reader, so I decided to leave it for another day. The upside is you can expect to see more computer-related stuff in the near future!

  6. Johannes Nelson

    Since I was 17 and had to write a paper on this topic for the IB Theory of Knowledge course, I have always stood on the side of "of course machines can know." When the paper was graded, the mark was unpleasant because I did not adhere to their accepted definition of knowledge, which was justified true belief. I did not adhere to it because I thought it was ridiculous. Truth is way too complicated a thing to just throw into an already complicated definition.

    Justification is what I think many people associate with the 'feeling,' but this is easy, in my mind, to get around. If we are justified in our beliefs because of a feeling that a certain theory is trustworthy, we are justified because of our experience (conscious and unconscious) that seems to resonate with the claim. Machines are not members of the same sort of reality, but their justification is just as real. They are justified in their "beliefs" insofar as they have been programmed by their makers and birthed into a world where those rules stand true. While people might say that our experience is more worthy because it is raw and unfettered, it really isn't. We navigate a universe of rules, and our experience beneath those rules dictates our intuition and understanding of theories derived from them. A machine's 'experience' with rules is just as external and 'divine.' Machines have been placed into a predetermined system, and within that system they are justified in everything they do. Even if a machine is broken or has been programmed with false mathematics, its justification is still pure, perhaps even purer than ours.

    This won't convince many people, I am sure. I use justification and belief very loosely. But if we take our intuition and our 'feeling' that something is true and reduce it to a calculable outcome of our experience with external reality, then I think the two can be equated. If we believe there is something magical about feeling and intuition, then of course we can't see this. It seems to be only a matter of comprehension. Since we cannot yet map out the neural activity that surrounds something like justified belief, we assume that it can't be mapped out. That is silly. Of course it can! Right?

  7. Johannes Nelson

    I think it is maybe helpful to think of your argument in terms of causation. We perceive machines as operating under very strict limitations, unable to perceive the larger picture that surrounds their 'collapsed reality.' Well, we were programmed by physical processes that do the same to our reality. We cannot perceive what lies outside our dimension, and therefore have trouble intuitively grasping the inadequacy of a cause-and-effect model.

    I am so sorry. I am literally using your page as a place to sort out my own thoughts, and hoping that you can help. Perhaps this would have been better sent through email.

    1. David Yerle Post author

      Hahaha go ahead! It’s interesting and also amusing. It’s funny you mention theory of knowledge, since I’m supposed to teach that next year! I’ll be sure not to grade papers based on my personal opinions, though.
      I think the idea of “justification” is not justified. What I mean is that justification is a very strong statistical correlation, nothing else. So I would also disagree with their definition. And yes, I think machines can have beliefs, as long as you consider those to be an internal state that causes behavior, which seems like a pretty decent definition.

  8. Stathis Fotiadis

    I think there is no debate here. Strong AI is 100% a matter of time. Think about the EU Human Brain Project. We are a few decades of Moore's law away from simulating a complete brain with known neuron models. Even further down the road would be simulation by molecular dynamics of every single atom of a living organism, then ab initio methods, etc. We are just short of processing power.
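
    A back-of-the-envelope version of that claim, with every number assumed rather than sourced: if a full-brain simulation needed, say, a thousand times today's compute, a two-year doubling time gets you there in about twenty years.

    ```python
    import math

    required_speedup = 1_000     # assumed compute gap for a full brain
    doubling_period_years = 2    # classic Moore's-law cadence (assumed)
    doublings = math.log2(required_speedup)  # about 10 doublings
    print(f"~{doublings * doubling_period_years:.0f} years of Moore's law")
    ```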

    1. David Yerle Post author

      Exactly. I didn't mention the Human Brain Project because some people think it's not really a simulation and I wanted to stay uncontroversial, but I think it is, and I can't wait until it's done. As you say, it's just a matter of processing power.

  9. livelysceptic

    Hi David!
    Like many other commenters, I have no problem with the idea of artificial intelligence. Reading the article, I thought of being human as a combination of thinking power and constant input from our senses. The fact that our thinking is disturbed by what we see, hear, feel and what goes on inside our bodies makes our life more interesting and makes unexpected combinations possible. Add interaction with similar beings and there are even more possibilities. I don't expect anything like that to be built soon. But then, what is soon?

    1. David Yerle Post author

      Hi Lively,
      Sorry for the late reply. I've had a hectic couple of days arranging the move. I also don't expect anything like that to be built soon, but you never know. I am hopeful…

  10. elkement

    Thought-provoking post – as usual!!

    Though not an expert, I have always considered AI possible, as I think of intelligence and consciousness as emergent phenomena that would naturally arise from some complex interconnected system (such as an artificial brain built from wired artificial neurons, as in the Blue Brain Project).
    I believe it is plausible that such "artificial" conscious beings would naturally ask the same philosophical questions as we do. Probably I have seen too many clichéd movies, but the robot who does not know he/she/it is an artificial life form does not seem alien to me.
    I realize I have equated consciousness and intelligence, which is probably exactly the opposite of what you stated. But I find it more natural to conjecture that true intelligence will arise as a by-product of creating consciousness, or better: of creating a sufficiently complex thing that will let consciousness emerge.

    (As a disclaimer: I see consciousness as something awe-inspiring, but still "technical"; probably something we can really model and explain in the future. No need for spooky quantum consciousness and the like.)

    1. David Yerle Post author

      Well, I didn’t mention the issue of consciousness, but if we can create an artificial brain I’m assuming it will be conscious, so I agree with you on that. The fact that we have no idea what consciousness is makes it harder to defend that view, but even if we don’t know what it is, we may still be able to build it by mimicking something that has it.

