Robert Walker

I'm a programmer and mathematician, not an AI researcher. But I have been following this field since the 1980s, when I went to some of Roger Penrose's talks in Oxford while studying postgraduate logic there under Robin Gandy.

At the time, Roger Penrose was writing books such as "The Emperor's New Mind" and lecturing about them to the logicians, philosophers and physicists at Oxford University. I know he has had a lot of opposition, but to me much of it seems to miss the point. He gave a decent logical argument suggesting that a programmable computer can never understand truth, along with suggestions for one way that non-computable physics could arise in the brain. Back then he was almost laughed at by many for suggesting that quantum processes such as superposition can happen in a human brain at body temperature. But since 2007, quantum superpositions of states have been confirmed in many different areas of biology.

According to his views, the brain is hugely more complex than AI researchers suggest. He thinks that every single neuron has vast amounts of computational power within it. Again, he was already saying this back in the 1980s, and he identified structures that could do it: the microtubules. These are usually thought of as a kind of scaffolding, a bit like our skeleton. But they are far more dynamic than skeletons, as they get rebuilt continually; a moving amoeba constantly dissolves and rebuilds microtubules along its leading edge. And it turns out that microtubules have surfaces with the potential to behave like cellular automata. Perhaps they are not just the skeletons but also the brains of amoebas?
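To illustrate what "behave like cellular automata" means (this is purely an illustration of the concept, not a model of actual microtubule dynamics), here is a minimal one-dimensional cellular automaton in Python, using Wolfram's Rule 110, which is known to be Turing complete:

```python
# A minimal 1D cellular automaton (Wolfram's Rule 110) -- an illustration
# of the kind of computation a lattice of simple on/off cells can do.
# NOT a model of real microtubule dynamics.

def step(cells, rule=110):
    """One update: each cell's next state depends only on itself and
    its two neighbours (wrapping around at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right  # neighbourhood as 0..7
        out.append((rule >> pattern) & 1)              # look up rule bit
    return out

# Start from a single live cell and print a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Each row of output is one time step; complex, interacting structures emerge from a purely local update rule, which is the sense in which a lattice surface could in principle compute.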

It makes sense to me that this should be the case. After all, amoebas don't have any neurons, yet they have rather complex behaviour and would easily outthink a computer neural net of thousands, maybe millions, of nodes. So how could the first primitive brains with thousands of neurons evolve if a single cell could outthink them? They must make more use of the interior of the cells. How could they not? How could all that potential of the interior of a neuron be used only as a simple node in a neural net? How could our brains not use this complexity of the interior of each cell?

But he takes it one step further: he says that no program can ever understand truth. It will always have truth glitches, a kind of logic bomb that prevents it from understanding certain points. It's an intricate argument.
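To give the flavour of this kind of argument (hedged: this is the standard halting-problem diagonalisation from computability theory, a cousin of Penrose's Gödelian argument, not the argument itself), here is a sketch in Python of how any claimed "halting decider" is defeated by a program built from it:

```python
# Standard halting-problem diagonalisation -- the computability-theory
# cousin of a Godelian "logic bomb". Any total decider can be defeated
# by a program constructed from the decider itself.

def naive_halts(func, arg):
    """A toy 'halting decider'. This one just guesses True for everything;
    the diagonal construction below defeats ANY total decider, not just
    this trivial one."""
    return True

def trouble(func):
    """The diagonal program: do the opposite of whatever the decider
    predicts about running func on itself."""
    if naive_halts(func, func):
        while True:        # decider said "halts" -- so loop forever
            pass
    return "halted"        # decider said "loops" -- so halt immediately

# naive_halts claims trouble(trouble) halts, but by construction that
# prediction makes trouble(trouble) loop forever. Whichever answer a
# decider gives about trouble(trouble), trouble contradicts it.
print(naive_halts(trouble, trouble))
```

Note that we only ask the decider about `trouble`; actually calling `trouble(trouble)` would loop forever, which is exactly the point.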

As a result of this, he thinks that something non-computable is going on, i.e. that strong AI can't be simulated with a computer program.

That then leads into his idea of how we might be doing it: using quantum coherence, not just over one cell but over many, until it builds up to about a Planck mass of matter, roughly the mass of an eyelash hair. At that point the state collapses, and this is where he thinks the non-computable behaviour comes in.
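As a back-of-envelope check of that figure (Penrose's actual objective-reduction criterion is more subtle than a bare mass threshold; this just computes the Planck mass from CODATA constants):

```python
# Planck mass = sqrt(hbar * c / G), computed from CODATA constant values.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2

planck_mass_kg = math.sqrt(hbar * c / G)
print(f"Planck mass ~ {planck_mass_kg:.2e} kg "
      f"= {planck_mass_kg * 1e9:.1f} micrograms")
# ~ 2.18e-08 kg, i.e. roughly 22 micrograms -- tiny by everyday standards
# but enormously macroscopic by quantum standards.
```

So the threshold is around twenty micrograms, which is indeed the rough scale of a small hair or speck of dust.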

What his opponents miss, it seems to me, is that he has an argument to show that strong AI is a problem in non-computable physics. They try to find problems with his argument, and also say they don't think the particular physics he outlines is possible.

But that's somewhat missing the point. More to the point: can they prove that strong AI is computable? It's not enough to find flaws in one particular argument for non-computability. That doesn't establish computability; it just shows that there are flaws in that particular argument (if they are right). Can they establish computability, i.e. that strong AI is a problem in computable physics? I don't think anyone is close to doing that, and, convinced by Roger Penrose's arguments, I don't think they ever will.

For more on this, see: Robert Walker's answer to What constraints to AI and machine learning algorithms are needed to prevent AI from becoming a dystopian threat to humanity? That's a $500 knowledge prize which expires today.

And for many other answers, most of which currently support the idea in "The Artificial Intelligence Revolution: Part 1 - Wait But Why", see the other answers to that question.
That article itself, I know, seems like a lot of "Woohoo!" on a first reading if you haven't followed the subject much. But Elon Musk is not the only one; many people take this really seriously. They think that we will have a singularity, perhaps as soon as 2045 on one projection, where suddenly some computer becomes intelligent as described in that article, then finds a way to create more and more intelligent computers, and everything is changed. A kind of before and after; nothing can be the same again afterwards.

I think that's just sci-fi myself, as likely as us suddenly discovering warp drive or how to teleport people in 2045. Indeed, since I think it's based on a false premise (convinced as I am by Roger Penrose's argument), I'd rate warp drive as more likely, and even matter transmission as slightly more likely (though almost pure fantasy), than this idea.

About the Author

Robert Walker

Writer of articles on Mars and Space issues - Software Developer of Tune Smithy, Bounce Metronome etc.
Studied at Wolfson College, Oxford
Lives in Isle of Mull