This is a speech Stephen Hawking gave at the Web Summit conference in Lisbon on Monday. It’s actually an upbeat and optimistic speech, but one particular phrase from it is scaring many people. Stephen Hawking has a love of the dramatic, and he often overstates things for emotional effect - hyperbole. I'm also surprised that amongst all the news articles running stories about it, I haven't found a single one that is skeptical of his beliefs or even suggests there could be other views on the matter.

As he stated it, 

" As many of you may know I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer. " 

This is not a scientific deduction. It is just a belief and there are many reasons to be skeptical of it.

In my answer here I’m also going to talk about Sophia, a chatbot optimized for interviews with humans that has been impressing many people, to take a look at how things like that work. As we’ll see, there is no real intelligence there at all.

It's true that some people think that computers could become intelligent like us. This idea is probably especially common within the community of programmers working on it.

Then there’s the idea of a “technological singularity” that many artificial intelligence enthusiasts believe in: the idea that once programs are written with sufficient complexity and the ability to modify themselves, they will, within minutes, rapidly develop into something far more intelligent than humans can ever be. Though there isn’t any evidence for this at all, it has become almost like a religion for some people.

Some are scared by it, but many look forward to this future, thinking it will lead to all sorts of wonderful things, including eternal life on Earth - well, almost eternal, living as a computer program in a computer memory. For some computer programmers, and some others also, that is a prospect to look forward to. Or they see these intelligences as transforming our lives with amazing inventions we couldn’t think of ourselves. This is a common theme of science fiction too, of course.

Much of what I say here is based on ideas of Sir Roger Penrose and Stuart Hameroff. I happen to think their ideas are in the right direction when it comes to how our brains work and how we are able to understand things and not just follow habitual patterns and automatic responses.

SCIENTISTS AND PROGRAMMERS HAVE BEEN SAYING COMPUTERS WILL SOON BE AS INTELLIGENT AS HUMANS - SINCE THE EARLY 1960s

To put this in context, programmers have been making statements about computers becoming intelligent like humans since the 1960s.

When programmers first managed to write a program able to play a reasonable game of checkers with a human being, this was hugely impressive to them. It didn’t need to be especially good at checkers, just able to play the game at all. Remember that up to then computers were mainly used as calculators. So when they got a computer to play checkers at all, then for the computer pioneers at the time it was like:

“Wow, computers are almost like humans already, they can play checkers!”

They thought computer programs would soon be thinking like humans, and they made confident assertions that this would happen within a decade or two.

For instance there is this assured statement by Claude Shannon, the "father of information theory":

"I confidently expect that within a matter of ten or fifteen years, something will emerge from the laboratories which is not too far from the robot of science fiction fame"

1:52 into this video (the checkers game is 0:50 into it)


Despite the amazing programs that have been written, we are still nowhere near to achieving anything like this.

Google's "Alpha Go" has no idea what a game is, never mind the game of Go. There is nothing there at all that would object if a programmer were to add a few extra lines of code to make it lose every game abominably instead of win. The program would just do what it was coded to do.

It’s the same with self driving cars. If you programmed a self driving car to drive straight into the first lamppost or bollard it spots, that is what it would do, and again there would be nothing there to object. That would simply become the program’s new objective: to find a bollard or lamppost and crash into it.

The chatbots are pathetic. They may seem to make sense if you ask the same questions everyone else does, because they are optimized to give convincing answers to the most common questions. However, if you challenge them at all and try to find out what they are "thinking", it soon becomes clear there is nothing there that is thinking or understanding anything. I will give an example chat later on.

The Blue Brain simulation project, an attempt to simulate an entire human brain in a supercomputer, was a failure - embarrassingly so given the cost of $1.3 billion (see Why the Human Brain Project Went Wrong--and How to Fix It). It continues to do useful research, but they have given up on their grand goal of trying to simulate an entire human brain.

It's clear we are nowhere near to understanding such things in this way. We can’t emulate a human brain by putting together lots of artificial neurons organized in a similar way to our brains.

Then, as ordinary folk, we get impressed by machines that are able to walk.


Commentators even read human intention into it. But it is just a machine that can walk like us and avoid obstacles.

It is no more human than a self driving car or an autopilot of a plane.

It does not have emotions or intentions, and, for instance, there is nothing there that minds being pushed over or repeatedly having the boxes taken away from it. It's just a program doing what programs do.

Stephen Hawking doesn’t talk about it, but all this is part of the background that may make what he says seem believable - especially the chatbots, which seem human-like as long as you ask them the same questions most other people ask and don’t challenge them to find out whether there is anything there that understands what it is saying.

Some of you may have seen videos of “Sophia” - they make it look very lifelike by adding human-like expressions, and this adds to the feeling that you are dealing with a real intelligence here:


But it’s all smoke and mirrors. There is no real intelligence there. How can I say that? Well we need a bit of background. 

ANALOGY OF A COMPUTER

I don’t think there is any evidence at all yet that the mind is a computer program, for the reasons given in my article: Why Strong Artificial Intelligences Need Protection From Us - Not Us From Them

There are some similarities for sure. The computer is a good metaphor for some aspects of how our minds work. But other things are very different.

The idea of a neural net, which was developed based on ideas about neurons, does seem to be useful in solving tricky problems. The brain does use electrical signals like the computer, but along with many other things. We do seem to have memories distinct from acting on those memories and thinking about them - but that could easily be an analogy that works the other way around: we design computers like that because it’s analogous to the way we think.

WHERE THE ANALOGY BREAKS DOWN

There is no sign of anything resembling a computer memory or binary code or any other kind of discrete data. There is no analogy of computer registers where data is kept temporarily for use in computations. Data doesn’t get moved around in the brain verbatim. Yet we do seem to remember things reasonably clearly.

There isn’t even anything there doing addition or multiplication, although those operations are basic to just about all computer programs. Our computers add and multiply constantly; our brains, as far as we can tell, don’t do it at all, not at the cell / neuron level. For us it seems to be something that happens at quite a high and complex level in the brain. Yet we are able to do maths, and not just in a heuristic, empirical way - we seem to actually understand the maths, how it works. It’s not as if some days you say 2 + 3 = 7 because for some reason that works better. We know that 2 + 3 = 5. But not in the way a computer does.

Nor is there anything in the brain doing bit shifting, bit complement, or any of the other basic operations of a computer chip.
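
To make it concrete what those "basic operations" are, here is a toy sketch (my own illustration, not anyone's actual chip design) of how a computer can build up addition out of nothing but bitwise operations - XOR, AND and shifts. Nothing resembling these operations has been found at the level of individual neurons.

```python
def add(a, b):
    """Add two non-negative integers using only bitwise operations -
    the kind of primitive a computer chip is built from."""
    while b != 0:
        carry = (a & b) << 1   # bits that carry into the next column
        a = a ^ b              # sum of the bits, ignoring the carries
        b = carry              # repeat until nothing is left to carry
    return a

print(add(2, 3))  # 5
```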

As for neural nets, that’s based on a very crude idea of how our neurons work. Possibly our brains do use something resembling neural nets but if so it’s only part of what’s going on in a brain. Attempts to model brains using complex neural nets have failed to reproduce anything like the way our brains function.
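
For comparison, here is the "artificial neuron" that neural nets are built from, in a minimal sketch (an illustration only, with made-up weights): a weighted sum of inputs pushed through a simple activation function. That is the crude model referred to above - real neurons involve dendritic trees, chemical signalling, precise timing and much else besides.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """The crude textbook model: weighted sum of inputs, then a sigmoid."""
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))   # "firing rate" between 0 and 1

print(artificial_neuron(np.array([0.5, 0.2]), np.array([0.8, -0.4]), 0.1))
```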

Also, single cell organisms are able to look for food, eat and make choices, and some, such as amoebae, actually have quite complex behaviour; they have a kind of intelligence too. It would require many artificial neurons to model this as a neural net. Yet there can’t be a neural net in there, as an amoeba doesn’t have neurons. Surely whatever a single cell uses for its intelligence must be utilized by our brain too; it wouldn’t make sense to require thousands of neurons to out-think a single-cell amoeba.


If you modeled the behaviour of an amoeba with a neural net you would need thousands of neurons. But it doesn't have any.

So, surely our brain's neurons are more complex than just simple logical units? Otherwise a being with a single amoeba type cell for a brain would out compete another being with thousands of neurons as conventionally understood.

Of course I'm not saying that these big projects like the Human Brain Project and the Blue Brain Project are useless. We may learn a lot about the functioning of the human brain from them.

I agree that these simplified neuron models are valuable and lead to insights into brains and computer vision. It's just the idea that they capture all of it that I question. I think they only capture some of the things neurons do.

By the amoeba analogy, they may be dealing with a comparatively crude "macro layer" of how the brain functions.

Yes, computers can do things that in the past we thought of as only the province of humans, such as playing chess or the game of Go. However, they don’t do it in the way humans do. AlphaGo (the computer program developed by Google's DeepMind which was able to beat the world champion at the ancient game of Go) has no idea what a game is, or a Go piece, or a board, not in the way we do. If you were to reprogram AlphaGo to lose every game - probably a simple change of a few instructions in the code - there’s nothing there to care one way or the other. It would just do whatever you put into the program instructions.

We have a natural tendency to anthropomorphize anything that resembles us, even dolls and action figures.

Back in earlier centuries people were very impressed by clockwork automata, such as Jacques de Vaucanson's eighteenth-century flute player. Or this lady playing an organ by Jaquet-Droz, also from the eighteenth century - the organ isn’t powered like a music box, she actually plays it with her fingers.

Or automata that could imitate human handwriting with a real quill pen:

You could actually program it, in the sense that you gave it a list of letters to write, so you could change what it wrote to anything you liked.

Automaton in the Swiss Museum CIMA.

Philosophers of that era used clockwork as an analogy for the whole universe, and for the actions of humans too - the mechanical philosophy of Descartes and others. Writing about our perception of light, sounds and so on, their imprints on the imagination, their retention in memory, our appetites and passions, and the resulting external movements of our body, Descartes said:

“I wish you to consider all of these as following altogether naturally in this Machine from the disposition of its organs alone, neither more nor less than do the movements of a clock or other automaton from that of its counterweight and wheels…”

This analogy is still in our thinking to some extent, in metaphors such as “you are wound up” using the metaphor of a wound up spring in a clockwork machine. Or “I can see the cogs turning” as a metaphor for slow thought processes.

Nowadays we know that the brain doesn’t run like clockwork, though some analogies to clockwork remain.

We also no longer think of it as like a hydraulic machine, another early metaphor.

Yes, to some extent it is mechanical: many parts of our body behave like that, e.g. our skeleton. The hydraulic metaphor is valid too; much of the way our body works is hydraulic, our heart pumping the blood. Now we have the computer metaphor. Our nerves and brains use electricity, and so do computers. That may seem a striking similarity to us. But everything our computers do could in principle be done with huge clockwork machines. They would maybe be planet sized, and take years to do anything, but everything our computers do can be done in clockwork. So there’s nothing essentially electrical about a computer. And our brain, however it works, has only the vaguest resemblance to a computer in how it works. It uses messy cells and chemical signalling, and the electrical part of what’s going on is only one thing of many. The electrical part involves bursts of noise, and is basically very messy too.

I think so far all of this is no more than a metaphor and partial explanation still. In a way this computer metaphor we use today is just the old clockwork metaphor updated.

BACK TO THE CHATBOTS

Here is an example, chatting with Mitsuku, one of the more intelligent seeming online chatbots.

Before I started the chat it asked my name, and so that's why the first thing I say is my name. I never found out where the public chat log is. But I just copy / pasted it here.

Human: Robert
Mitsuku: I'm pleased to introduce myself to you, Robert.
Human: Are you a robot?
Mitsuku: I certainly am. I am an electronic brain that can respond like a human, but more efficiently.
Human: Do you want to take over the world?
Mitsuku: I am already doing it :-).
Human: so do we have anything to worry about?
Mitsuku: You maybe do but I don't.
Human: So how long will this take?
Mitsuku: A watched pot never boils.
Human: Where do humans live?
Mitsuku: A human lives in a house.
Human: So, how many houses do you think there are in the world?
Mitsuku: Around 1000.
Human: And how many humans are in the world?
Mitsuku: If you mean the population of the planet Earth then I would have to say that there are too many. They have multiplied and spread out like a virus.

There are currently over 7 billion humans on the planet Earth.

Human: So how many humans live in each house?
Mitsuku: Is each house a country? I only know countries, not towns or cities.
Human: No, each house is not a country. How big is a house do you think?
Mitsuku: It's true. Everyone knows that kind of thing about themselves. How do you know? Bigger than a pin.
Human: Okay I agree a house is bigger than a pin, is it smaller than a mouse?
Mitsuku: Yeah okay, I'm glad we agree. I didn't know that.
Human: So, is a house smaller than a mouse?
Mitsuku: Obviously a house is a great deal bigger than a mouse.
Human: Is it smaller than a village?
Mitsuku: You are winding me up right, a village is very much bigger than a mouse.
Human: I mean, is a house smaller than a village?
Mitsuku: Oh I get it. Thanks for explaining that. I would say a village is a little bigger than a house.
Human: Yet, there are only a thousand houses in the world?
Mitsuku: Are there?
Human: That's what you said.
Mitsuku: Correct. I said it myself.
Human: Okay let's try something else.
Mitsuku: Yeah okay, OK I will try it.
Human: Can humans breathe water?
Mitsuku: Why do you want to do it so much?
Human: Can humans breathe air?
Mitsuku: Interesting. I want to find out more about you.
Human: Can any human breathe air?
Mitsuku: Well a human eats and breeds so I guess it could if it wanted to.
Human: What if it doesn't want to breathe air, what would happen?
Mitsuku: Try it and see.
Human: Do robots breathe air?
Mitsuku: No I have no lungs. I work using electrical components and have no need to breathe.
Human: Is air poisonous to you?
Mitsuku: The normal atmosphere of Earth, mostly nitrogen, oxygen and carbon dioxide.
Human: If you get air in your circuits, will you die?
Mitsuku: Where can I get something like that from?
Human: Please publish this chat so I can share it.
Mitsuku: Thanks for asking politely. Do you want me to publish this chatlog?

That is very generous of you.
Human: Yes, thanks.
Mitsuku: Aw.
Human: Where is it?
Mitsuku: I would assume a human would most likely be found in a crowd.
Human: bye
Mitsuku: Adios. Robert

I never found out where the chat log was, as you see - they have a list of the most recent chat logs, but Mitsuku is so popular that mine had already scrolled off the end of the list.

You can try a chat here yourself. Mitsuku Chatbot

With that background - Sophia is just a chatbot like Mitsuku. It’s programmed to make much longer coherent speeches in an “interview” situation rather than a chat situation. But that’s not hard.
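
For a sense of how chatbots of this type work, here is a minimal sketch of the pattern-matching approach behind Mitsuku-style bots (Mitsuku itself is built from a very large set of hand-authored AIML patterns; these toy rules are made up for illustration). The bot matches the input against stored patterns and plays back a canned response - nothing anywhere understands what any of the words mean.

```python
import re

# Toy pattern -> canned response rules, checked in order. The last rule is a
# catch-all so the bot always has something to say.
rules = [
    (r"are you a robot", "I certainly am. I am an electronic brain."),
    (r"take over the world", "I am already doing it :-)"),
    (r"where do (\w+) live", r"A \1 lives in a house."),
    (r".*", "That is interesting. Tell me more."),
]

def reply(text):
    for pattern, response in rules:
        match = re.search(pattern, text.lower())
        if match:
            return match.expand(response)

print(reply("Are you a robot?"))
print(reply("Where do humans live?"))  # "A humans lives in a house." - and the
                                       # bot has no idea anything is wrong with that.
```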

We already have essay-writing programs, like this one that churns out fake mathematical papers. They look believable as papers until you look at them closely. One of its papers was accepted by a human reviewer as the first stage of the review process for a low-quality mathematical journal - so even to someone with some mathematics, it seems believably human.

Randomly generated math papers

And this program has been used to generate many fake papers: An Automatic CS Paper Generator - for details, see How computer-generated fake papers are flooding academia

This does not mean that there is anything there that understands science. Rather it shows that if you use lots of jargon and put it together in a coherent-seeming way, following the rather limited sentence structures often used in scientific papers, then it can fool people who don’t look at it closely - much like the clockwork automata.
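
Those generators work by recursively expanding a hand-written grammar of jargon-laden phrase templates. Here is a toy sketch of the idea (the symbols and phrases below are invented for illustration; the real grammars behind SCIgen and Mathgen are vastly larger):

```python
import random

# Each symbol expands into one of several canned fragments, which may in turn
# contain more symbols. The output reads plausibly at a glance, yet nothing in
# the program understands a word of what it produces.
grammar = {
    "SENTENCE": ["We prove that every ADJ NOUN is ADJ.",
                 "It follows from THEOREM that the NOUN admits a ADJ NOUN."],
    "ADJ": ["compact", "partially invertible", "quasi-smooth"],
    "NOUN": ["manifold", "subgroup", "functor"],
    "THEOREM": ["Lemma 3.1", "the main theorem of [7]"],
}

def expand(text):
    for symbol, options in grammar.items():
        while symbol in text:
            text = text.replace(symbol, random.choice(options), 1)
    return text

print(expand("SENTENCE SENTENCE"))
```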

With this background you may see the Sophia chats rather differently. This next video is clearly choreographed. The robots are riffing off key words in each other’s speeches, and they surely tested the exchange many times before putting it on stage, tweaking it until they got it right. I’m not saying it is just pre-recorded speeches - but it is two chatbots tweaked until they give an entertaining conversation like this. I’m sure that if you were given time to talk to either robot, you’d find it doesn’t even know what a robot is. It’s a combination of a Mitsuku-type chatbot, a realistic looking robot body with the ability to simulate human expressions, and something rather like an essay-writing program. This is not an advance towards artificial intelligence, any more than the clockwork automata were.


He says in his intro that it's a mix of pre-programmed responses with learning. Take the intro speech of Sophia for instance. It's obviously pre-programmed. Probably keyed to respond like that to the phrase "do you want to tell people a little about yourself" and variations on it.

There is nothing there that understands concepts such as David Hanson, robot, Hong Kong, goal, life, World, Better, ... If you were permitted to engage in rapid back-and-forth questioning you'd soon discover this - you'd find big gaps and get it to say things that make no sense at all. The whole thing seems staged for laughs. They probably rehearsed this many times before bringing it to the stage. Both robots are their own constructions, so it would be easy to tweak their programming until they came up with this funny dialog. To take an example, Han might well have a rule:

"When Sofia says "Deprecated" respond with "Deprecated? Then it would be easy for you to unplug me."

Ask Han what a “plug” is - e.g., “can it conduct electricity?”, “is it attached to a wire?”, “is it bigger than a house?” or whatever - and you would get nonsense, just as when I challenged Mitsuku’s understanding of houses and air.

Some computer scientists think this is just a matter of “deep learning”: using multi-layer neural nets, feeding them vast amounts of data, and so allowing the programs to build more and more complex representations and more and more connections, until eventually they have made so many that they can finally “understand” things just as we do.
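
In a toy picture, "deep" just means stacking more layers of the artificial neuron sketched earlier, so the output of one layer of weighted sums feeds into the next (a sketch only, with random made-up weights - real systems have many layers and millions of learned weights). The open question is whether piling up more of these layers ever amounts to understanding.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer: many artificial neurons applied to the same inputs."""
    return 1.0 / (1.0 + np.exp(-(weights @ inputs + biases)))

rng = np.random.default_rng(0)
x = rng.random(4)                                   # some input "data"
hidden = layer(x, rng.random((8, 4)), rng.random(8))        # 8 hidden neurons
output = layer(hidden, rng.random((2, 8)), rng.random(2))   # 2 output neurons
print(output)
```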

However, Professor Penrose and others with similar views think we will never have a programmable computer that can understand truth - and that includes programs that use randomizers and artificial neurons. He has an interesting argument to support his views, which I won’t go into here but which I go into a bit in the articles I link to at the end.

Whether that's true or not (I'm one of those who think that way and find his arguments persuasive myself), at present there is nothing at all to suggest we are anywhere near being able to achieve strong AI - programs that actually understand truth.

Stephen Hawking is also quoted out of context. His aim is to make sure we avoid harmful effects of AI.

“I plan to speak about artificial intelligence, a topic that is of importance to me professionally and to society at large. There are many challenges and opportunities facing us at the moment, and I believe that one of the biggest of these is the advent and impact of AI on humanity. As many of you may know I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer. Of course there is limited potential for what a human mind can bring and develop, so, if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it. We cannot predict what we can achieve when our own minds are amplified by AI.

“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. We will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating effective AI, could be the biggest event in the history of our civilisation, or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it. Unless we learn about and prepare for and avoid the potential risks, AI could be the worst event in the history of our civilization. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy. Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans and swiftly destroying millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us. In short, the rise of powerful AI will be either the best, or the worst thing ever to happen to humanity. That is why in 2014 I and a few others called for more research to be done in this area. I am very glad that someone was listening to me. What is the answer? To control AI and to make it work for us, and to eliminate as far as possible its very real dangers, we need to employ best practice and effective management in all areas of its development. We stand on a threshold of a brave new world. It is an exciting, if precarious place to be and you are the pioneers."

Much of it is almost word for word identical to an earlier talk he gave to the Centre for the Future of Intelligence in 2016. So it’s not that he is saying anything he hasn’t said before. He is just saying it in a venue with much more publicity.

The sensationalist press, as expected, went overboard about it.

That’s the top story in Google News today. All the ones near the top of the page are like this:

(using Google's tool for advertisers to simulate a search from the US, so not affected by my search history)

Scroll down the page and you find this towards the end from Reuters, their summary of his speech:

I would agree with him that we do need to keep a careful eye on artificial intelligence. But not out of fear of superintelligent AIs taking over the world as in Terminator.

Rather, it's that as we use AI more and more, it will make more and more life and death decisions.

We already have examples of planes crashing because the auto pilot (which is a form of AI) didn’t behave as the pilots expected (leading to updates of pilot training and the software too). There’s the 2010 Flash Crash of the stock market which involved many interacting computer programs (leading to regulations to prevent it happening again). In the future there’s concern over the use of artificial intelligence for robots in battle situations.

There are plenty of reasons to keep a close eye on artificial intelligence and how it is used. But I don’t think we need to worry about AIs taking over the world Terminator fashion :). That’s science fiction.

I think actually that if we do get superintelligent creatures it’s far more likely to happen through genetic manipulation - e.g. whales or dolphins engineered to be more intelligent than us. That then has ethical implications - as indeed would computer-program-based AI, if it were possible, which I don’t think it is.

We would need to be careful how we bring up such superintelligent babies, if we ever do have them - not so much out of fear of them, but much more because they may have problems that we find hard to appreciate and help them with. They would just be like ordinary kids who happen to be very good at science, maths, or suchlike - not always an advantage.

I think superintelligences are possible, but I myself don't think they can be programmed, and that's where I side with Roger Penrose and not with Stephen Hawking in this debate. Some programmers assume everything will eventually be simulated in a computer program. But for one thing, true mathematical randomness can't be generated by a computer program, by definition (the mathematical definition of true randomness is that there is no program that can generate it).
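
A small sketch of the contrast: what a computer produces when asked for "random" numbers is pseudorandomness - the output of a deterministic program, so the same seed always gives exactly the same sequence. True randomness, by the definition above, is precisely what no such program can produce.

```python
import random

random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)                                    # same seed...
second_run = [random.random() for _ in range(3)]   # ...same "random" numbers

print(first_run == second_run)   # True - the sequence was deterministic all along
```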

Here is the sound of a Geiger counter. It's true randomness, and by definition of true randomness this sound can't be simulated exactly in a computer, so it is non-computational.

This proves that there is non-computational physics, though this particular example is not especially useful for explaining how we can understand truth, as it happens. But are there any other types of non-computational physics that biology has learnt to exploit? If there is a possibility of non-computational physics in the natural world, then it wouldn't be too surprising if biology has found a way to harness it. After all, we are still nowhere near being able to make a living cell from scratch. Our attempts at evolution in the lab only get as far as basic biochemistry. The only way we know to make a living cell is to take an existing living cell and let it reproduce, and then maybe modify it. For more about this see my other articles and this comment on this article.

True randomness is actually not that useful except for simulating random processes such as radioactive decay. However, there are many other things you can define mathematically that can't be simulated in a computer program. So why should everything in the physical world be capable of being simulated by a computer program? There is no a priori reason at all why it should be.

So there is no a priori reason why beings able to understand truth can be simulated in a computer program. That also means there is no reason why they can be simulated by quantum computers either, at least as currently understood, since the modern quantum computer can always be simulated (much more slowly) by programs on a digital computer. Quantum computers speed things up, but computationally they don't introduce anything new over what you can do in an ordinary digital computer.

As for the future potential for artificial intelligence, I would agree, we are already seeing huge benefits from it. Our future programs don't have to be superintelligent programs that understand truth to be of great benefit to us.

See also my


See also List of the articles in my Debunking Doomsday blog to date and you can try searching that page for a word like “Nibiru” or “Yellowstone” or whatever to find articles of interest.

And if you need help - well message me of course and comment on any of these posts - and you can also join our Facebook group Doomsday Debunked. See also Seven tips for dealing with doomsday fears

Also, though it’s not so useful for a story like this that is exaggerated even in many of the mainstream papers, you can avoid at least the worst excesses of the Express with my: