Robert Walker

BRAIN CAN'T BE JUST A NEURAL NET MADE OF AS MANY NODES AS WE HAVE NEURONS


First - I think that those who simply add up the number of neurons in the brain and the number of connections between them are hugely underestimating the problem.

INTELLIGENCE AND SENTIENCE OF A SINGLE MICROBE


After all - even single-cell microbes are to some degree sentient - they can search for food, are aware of their surroundings, etc.

They do all that without any brain at all - a single cell obviously can't have neurons. You'd need a pretty complex neural net just to model the behaviour of a single microbe. Then add in its ability to change behaviour depending on the environment, and on the proximity of other microbes.

So - given that a single-cell microbe has a fair bit of intelligence - it seems absurd that in the human brain those neurons would function as simple transistors. If they did, then surely some other organism would develop a brain that squeezes the work of hundreds, or thousands, of those neurons into a single cell. And an animal with a brain would need a complex brain just to outsmart a single microbe. That can't be right.

So - I'm sure that the internal workings of the neurons have a lot to do with the way we think - just by that a priori type of argument that it would be absurd for evolution not to take advantage of the inner workings of the neurons.

And indeed, there is a whole lot more going on in the brain over and above the transmission of electrical signals. And the signals themselves are not simple on / off pulses or binary trains as in digital computers, but complex bursts of activity.

There must be a reason for that too - otherwise the neurons would just send single pulses to each other. Why hasn't evolution hit on binary, or some other simple encoding system, and some simple, robust, error-correcting memory state - if that is all the neurons were doing?

So that suggests that, if it is possible at all, it is probably orders of magnitude harder than you'd expect from just counting neurons and connections.

IT MIGHT NOT BE POSSIBLE AT ALL


Depends what you mean by an artificial intelligence here. And by sentience, for that matter.

  • If you mean a normal type computer with chips and a program that runs on it - even one that involves neural nets or randomness or massively parallel or quantum computers, and is self modifying and "learns" -
  • and if by sentience you include, ability to tell the difference between truth and falsity

then - I think, myself, that that can never succeed.

There I'm persuaded by Roger Penrose's arguments based on Gödel's theorem: that such a program can never fully understand mathematical truth, and so has no intuition of what is true and what is false such as we have - it just acts in certain ways because that's how it is programmed.

There are many views on his argument, so I'm not saying you have to agree with me here :). But it is my own view, based on this argument, and there are, obviously, at least a few other people who think this way, including Roger Penrose himself.

TO BE SENTIENT, I THINK IT HAS TO HAVE SOME NOTION OF TRUTH, HOWEVER VAGUE


Also I think you couldn't really call it sentient if it has no notion, however vague, of truth and falsity. (I don't mean that it never lies - but that it doesn't even have anything that corresponds to awareness of what is a lie, or a mistaken understanding of the world, or of other beings, or of itself.)

WHAT DOES NON COMPUTABLE MEAN HERE


It might help to have an example of some non computable functions. Because it is easy to think that our computers can simulate everything, if we can just make them fast enough. That's not true though.

There are many problems in maths that our computers have no hope at all of solving, because they are non computable problems.

A simple example is the tiling function. This is a function that, given any finite set of square tiles with various indentations around the sides that constrain how they fit together, tells you the largest value k such that the set can tile a k by k square pattern.

No computer will ever be able to evaluate this function. That includes quantum computers, artificial neural nets, and machines that incorporate random processes. The reason is the existence of Wang tiles, which can be used to simulate the behaviour of any Turing machine.


Because the halting problem is non computable, there is no computable function that, given a finite set of Wang tiles, can put any bound on the size of the largest region it can tile.
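The one-sided nature of the halting problem can be illustrated with a toy in Python. This is a sketch, not part of the original argument: `collatz_steps` is a made-up example program (its halting status for all inputs is the famous open Collatz conjecture), and `halts_within` is a hypothetical helper that can only ever confirm "it halted" - it can never confirm "it will run forever", which is exactly the missing half that makes halting non computable.

```python
def collatz_steps(n):
    """A toy 'program' as a generator: one yield per computation step.
    It halts when n reaches 1; whether it halts for every n is unknown."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield n

def halts_within(gen, max_steps):
    """Semi-decision procedure: run the program for at most max_steps steps.
    Returns True if it definitely halted, or None for 'don't know' -
    no budget, however large, lets us answer 'it never halts'."""
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # the program halted within the budget
    return None           # budget exhausted: halting status still unknown
```

For example, `halts_within(collatz_steps(6), 100)` returns `True` (the run reaches 1 in a handful of steps), while `halts_within(collatz_steps(27), 3)` returns `None` - and no general procedure can turn that `None` into a definite "never halts".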

That is to say - to make this clearer - we can of course simulate Wang tiles in a computer. But if some process in nature were able to take any finite set of Wang tiles and output the size of the largest region it can tile, that is something our computers could never simulate.
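To make the distinction concrete, here is a minimal sketch of the computable half: for any *fixed* k we can decide by brute force whether a tile set tiles a k by k square. The tile encoding and the `tiles_square` helper are my own illustrative choices (edges as colours rather than indentations, no rotations, as with standard Wang tiles); the non computable part is that no program can, for every tile set, output the *largest* such k - if the set tiles the whole plane, a search like this simply never finds a failing k.

```python
# A Wang tile is a tuple of edge colours (top, right, bottom, left).
# This toy tile set is made up for the example; it tiles any k-by-k square.
TILES = [
    (0, 1, 0, 1),
    (1, 0, 1, 0),
]

def tiles_square(tiles, k):
    """Return True if some placement of tiles (no rotations) fills a
    k-by-k grid with every pair of adjacent edges matching.
    Simple backtracking search, filling cells left-to-right, top-to-bottom."""
    grid = [[None] * k for _ in range(k)]

    def place(pos):
        if pos == k * k:
            return True                     # every cell filled consistently
        r, c = divmod(pos, k)
        for t in tiles:
            top, right, bottom, left = t
            # Left edge must match the right edge of the left neighbour.
            if c > 0 and grid[r][c - 1][1] != left:
                continue
            # Top edge must match the bottom edge of the neighbour above.
            if r > 0 and grid[r - 1][c][2] != top:
                continue
            grid[r][c] = t
            if place(pos + 1):
                return True
            grid[r][c] = None               # backtrack
        return False

    return place(0)
```

Running `tiles_square(TILES, k)` answers each fixed-size question, but looping k upwards to find "the largest k" is not guaranteed to terminate, and by the reduction from the halting problem there is no computable bound telling us when to stop.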

For more examples, see

Non-computable but easily described arithmetical functions

Roger Penrose thinks that when we understand the notion of mathematical truth (and so presumably also when we understand truth generally) we are making a similar non computable leap that our programmed computers will never be able to achieve.

THIS MAY MEAN IT HAS TO HAVE NON COMPUTABLE BEHAVIOUR


If so - and if you are persuaded by Roger Penrose's arguments - it probably needs to have some kind of "non computable" basis: not only not programmed in the conventional sense, but not based on neural nets, random processes, quantum logic, fuzzy logic or any of those things - as all of them are susceptible to Roger Penrose's argument that programs based on those ideas don't really have a notion of truth as we do.

So what that is - and how it is that we manage to have anything like it ourselves - is a bit mysterious. Roger Penrose has suggestions for how it could happen, but his ideas there are extremely speculative and not yet verified.

Now, his proofs are not widely accepted - I know that. I find them convincing myself, however, and that's why I predict, with some confidence, that we won't have sentient computers based on any of those techniques.

COMPUTERS BASED ON LIFE PROCESSES DIRECTLY - OR ANALOGUES FOR THEM - NOT PROGRAMMED OR NEURAL NETS


Instead they would have to be in some way based on life processes, or be cyborgs, or be based on some understanding of what it is about cells and living organisms that gives complex creatures the ability to have notions of truth and falsity. Not just humans - I think all creatures have some idea of that sort, at some level.

Possibly even microbes do, in a way. A dim "awareness" that this is food, that it is too hot, too cold, too salty, etc. - let's swim away. That in some way they don't just react to their environment, but also - in some, of course, very low-level way - actually understand it, in a way that a computer program never could.

COMPUTERS THAT USE NON COMPUTABLE ELEMENTS THAT WE DON'T REALIZE ARE THERE


Conceivably we might build a computer that we think is working as a conventional quantum computer or hardware-based neural net, but which - through some flaw due to insufficient understanding of the basic physics of the situation - is actually using the same non computable phenomena as life. That might be sentient, I suppose.

I don't know how easy it would be to assess that. It might be a "sci fi" type of explanation for the fictional "positronic robots" in Asimov's fiction, for instance :). Or the self-aware androids in Star Trek. But I have no idea how that would work in practice.

THIS IS AN OPINION


This is an opinion of course, and I'm not suggesting that it is more than that!

But that's my own projection and opinion: that we might have cyborgs or novel forms of life that are sentient, but that - except possibly through some strange accident based on underlying physics that we don't understand or know is there - we won't have computers that are sentient.

UNLESS WE COME TO UNDERSTAND HOW LIVING ORGANISMS DEVELOP SENTIENCE


That is, unless we come to understand how it is that living cells can develop sentience. It must be there at least in multi-cellular creatures like ourselves, and quite possibly whatever it is also works right down to the level of individual microbes (as it would on Professor Penrose's controversial but interesting suggestion of an explanation involving non computable Planck-scale collapse of the wavefunction, locally, in large-scale coherent quantum states in various microscopic structures within some cells and microbes).

But as long as we don't understand how it happens, and given the complexity of living organisms - I would be surprised if it does happen in the next century. 

CYBORGS AND "UPLIFTED" SPECIES


Except of course as cyborgs, or genetically modified organisms, or "uplifted" creatures (David Brin's idea) - e.g. dogs, parrots, octopuses, etc. - that we train and maybe also genetically enhance until they develop the ability to speak to us, as in some sci fi stories. I find those not too implausible, though there is quite an ethical minefield to go through first.

Is it right for us to give parrots, and octopuses, and so forth the ability to reason and think like we do? That is, if we find out enough about life processes to make it possible. I think we might be able to do it some time this century, as it doesn't require us to understand too much about how life works at the level of atoms - only a far cruder understanding of how genes work, etc.

MACHINE SENTIENCE - ONLY IF NON COMPUTABLE


Possibly we could get machine sentience in machines that are designed as quantum computers (but aren't really working as one).

But in that case I think it wouldn't happen just by making the machines faster and more capable (because of Penrose's argument).

It would happen only if, through sheer chance, we hit on something that triggers non computable behaviour in a way we didn't expect.

But I personally think the chance of that happening is remote. Just triggering something non computable wouldn't be enough, hard though that is. It would then have to start acting in a coherent, sentient way. Non computable but essentially random behaviour, for instance, would be of no use at all.

Not impossible, I think - after all, evolution did it. But unlikely to just happen in a laboratory.

I would guess that a beginner who has never trained as an archer, trying to hit a distant target in the dark with a bow, on their first attempt, after being spun around a few dozen times, would be more likely to succeed than we are to accidentally create a sentient machine in the next century.

See also Robert Walker's answer to If money was not a constraint, what could be done to avoid an AI apocalypse?

About the Author

Robert Walker

Writer of articles on Mars and Space issues - Software Developer of Tune Smithy, Bounce Metronome etc.
Studied at Wolfson College, Oxford
Lives in Isle of Mull
4.8m answer views, 110.4k this month
Top Writer 2017, 2016, and 2015
Published Writer: HuffPost, Slate, and 4 more