Robert Walker
Well, computers are tools, created by humans. So first, is this at all possible? Could we deliberately, or inadvertently, make a computer that is self-aware?

WHY I DON'T THINK, MYSELF, THAT WE WILL EVER SEE A PROGRAMMED AI, IN THE ORDINARY SENSE


Personally, I'm highly skeptical that we can do it by programming, or with quantum computers, or with neural nets. These programmable devices are a long way from becoming even moderately intelligent, and I think, myself, that they have no prospect of doing so in the future. There I'm persuaded by Roger Penrose's argument - an adaptation of Gödel's argument - that no such machine can have a proper understanding of the notion of mathematical truth.

That's a personal point of view; many think it is possible, and that we just need more computer power and speed. I think myself that no computer constructed in that way can understand what it means for something to be true or false. It can just act in various ways, but it doesn't really "know" anything in the way we do. And it is programmed, so if it causes problems, we can just pull the plug on it.

ACCIDENTAL AI WITH LAWS OF PHYSICS WE HAVEN'T UNDERSTOOD YET


But - we might create one by accident, if it's using laws of physics we haven't understood yet. E.g. we call it a quantum computer, but it isn't really one - it is doing something else.

I think the chance of doing that with a purely mechanical computer, by accident, must surely be about as likely as accidentally making a brand new biological life form by mixing various chemicals together.

So - exceedingly unlikely. But perhaps it is possible. Penrose's argument doesn't forbid it, if you are one of those who are convinced by his argument.

DIRECT COPIES OF CELLS, OR - SOME WAY OF RAPIDLY EVOLVING NEW BIOLOGY


But perhaps we could make a copy of the way cells work, where we don't understand how they work but somehow copy it anyway. Or perhaps some insight into the way evolution works would allow us to speed up evolution and evolve new biological forms in the laboratory.

This also seems close to fantasy right now. Even copying individual cells is beyond us - and then to go all the way from that to a fully functioning multi-cellular life form...

It depends on how robust complex thought processes are in multi-cellular form. Can just putting neurons together in a brain-like structure lead to beings able to think and understand notions of truth? Or does it have to be part of a fully functional multi-cellular life-form? I suspect it takes more than just putting a bunch of neurons together for them to start to think for themselves.

And trying to duplicate the way the brain works in detail is probably not going to work without the human body it is part of. And if we thought we were close to achieving a functional human brain without a body, I think it would also be unethical to continue - it might be in pain, mad, etc.

I can see neurons that are close analogs of our biology working for things like pattern recognition, expert systems and so forth. But a complete functioning mind... I think that verges on fantasy, and has serious ethical problems.

AUGMENTED BIOLOGY


So - here I think it is plausible. Some program of "uplifting" in David Brin's sense. Making non human animals more intelligent. Making humans more intelligent so we get "super human" abilities of reasoning. Augmenting human brains with machine interfaces etc.

Again - some problems of ethics here.

But if we do develop super-intelligent AIs - I see this as the most likely way it might happen. Either cyborgs, or through genetic manipulation, or hybrids of humans with implanted independently grown biological organs (i.e. like cyborgs but with biological rather than mechanical implants).

WHAT COULD GO WRONG


So there are three main possibilities here, from the Lifeboat Foundation discussion of the problem:

Lifeboat Foundation AIShield

First - the AIs could be created by someone malevolent - e.g. wanting to gain an advantage over the rest of humanity for their own ends.

Then - it could become autonomous and start making its own decisions.

Also - it could be benevolent, but, because of its non-human origins, make decisions that to us are bizarre and unacceptable.

In that article, the author talks about numerous things that could go wrong. And they don't have a solution, a way to prevent it.

SAFEGUARDS


This is mainly for the accidental AI using physics not known to us yet.

So the thing is to build safeguards into the computers themselves, so they can't go rogue.

Asimov's positronic robots might be relevant, they have the three laws:

Three Laws Of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
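
To make the priority structure concrete, here is a minimal sketch (all names hypothetical, not from Asimov) of the laws as a strict lexicographic ordering: each candidate action is scored against the laws in order of priority, and the least-violating action wins.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, flagged with which laws it would violate."""
    name: str
    harms_human: bool = False     # violates the First Law
    disobeys_order: bool = False  # violates the Second Law
    endangers_self: bool = False  # violates the Third Law

def choose_action(actions):
    # Lexicographic priority: the First Law outranks the Second,
    # and the Second outranks the Third (False sorts before True).
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# A robot should sacrifice itself rather than harm a human or disobey:
options = [
    Action("push bystander", harms_human=True),
    Action("refuse order", disobeys_order=True),
    Action("sacrifice self", endangers_self=True),
]
print(choose_action(options).name)  # -> sacrifice self
```

Of course, as the stories below illustrate, the hard part is not the priority ordering but deciding what counts as "harm" or "obedience" in the first place - exactly the judgment calls this sketch takes as given inputs.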

But - his stories are full of things that go wrong when the robots misinterpret the laws, or take an order too literally, etc.

And he was, after all, a science fiction writer. He is the same chap who wrote many stories about "Multivac" - a computer the size of a huge building, in some stories kilometers in size, which was the only computer in the States, and which people traveled for miles to consult - a scaling up of the then large-scale monster computers, which were the only computers they had.

So - science fiction is a product of its time - though it can be astonishingly prescient sometimes - it is not to be relied on as 100% prediction of the future.

GOOD UPBRINGING


More generally though, I think myself that a super intelligent AI is likely to be hard to safeguard. Except maybe by attaching a bomb to it, which you can explode if it goes rogue. But, being super-intelligent, it can outthink any of us. So it might de-activate the bomb in a way we can't even detect.

So if we ever do go this route, the super intelligent AIs will have to keep an eye on each other and work out their own solutions.

Super intelligence need not necessarily be of survival value, actually. If it were, surely humans would have continued to get more and more intelligent, without limit, with larger and larger brains. But we didn't.

So I think the solution - just as parents may sometimes bring up a kid who is brighter than they are, in some special area that is, say better at maths or whatever - is to give them a good upbringing. Give them a clear understanding of morality. Teach them about love and compassion and the need for wisdom in their actions. And so on.

If loving parents can bring up a child who is brilliant at art, music, computing, gardening or whatever it might be - then we as a species can bring up AIs who are brilliant at various things far beyond human in some of their capabilities.

But we need to think very carefully before doing this intentionally.

REAL ISSUE - AND ETHICAL RESPONSIBILITIES


I think it's a real issue, not one that we face imminently, but maybe in a few decades. I don't have a solution to it.

Except that we should step carefully. And if we do find that we have created an artificial intelligence, a robot that understands what truth is - really understands it - and can pass the Turing test - really pass it - then we need to think really carefully at that point.

But we also have an ethical responsibility to such creatures if we create them. If they really were aware - if that is possible - then - they have rights just as we do.

But we could treat them as potentially insane or criminal (in our sense), until it is proven that they are okay. If so, with a machine the obvious thing is to fit an "off switch" - some way that any human can switch them off, encoded in some deeply encrypted way, so that there is no way they can remove it.

But they may not be machines. They might be complex things that are partly life, like cyborgs. Or mixtures of biological neurons with mechanical components. These ideas are being explored by experimenters, whether we like it or not.

Or they could be machines, but self-replicating and evolving machines, so that they continuously change - the problem then being that machines that started off just fine could evolve into something hazardous to us.

If so, it might be that at some point we have to regulate research and say that certain things are not safe to research, except as theories, not in practice. A bit like the way human cloning is outlawed in many societies, and at least some experiments on animals are outlawed.

WHY WE KNOW FOR SURE THAT THERE ARE MANY THINGS OUR COMPUTERS CAN'T SIMULATE


It's natural to suppose that our computers can simulate anything if you make them fast enough.

But that's not true if the thing to be simulated is non computable.

All our physics simulations so far are based on computable rules. So we never have to simulate non computable behaviour. But that is just because we choose easy problems for our simulations.
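
To illustrate what "computable rules" means here, a minimal sketch (parameters and units are my own, for illustration) of a typical physics simulation: the state is advanced by a finite arithmetic update at each step, so an ordinary computer can always carry it out.

```python
def simulate_fall(height, dt=0.01, g=9.8):
    """Time (in seconds) for a ball dropped from `height` metres to reach
    the ground, found by stepping a computable update rule (semi-implicit
    Euler integration)."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt   # each step is a finite, computable arithmetic rule
        y -= v * dt
        t += dt
    return t

print(simulate_fall(5.0))  # close to the analytic answer sqrt(2*5/9.8) ~ 1.01 s
```

Every mainstream simulation - fluid dynamics, orbital mechanics, climate models - has this same shape: a computable rule applied step by step. Nothing in that framework can ever produce a non computable function.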

We have many non computable functions. Even within ordinary physics, even with simple ordinary geometry - nothing complicated, just tiling theory - we have non computable functions.

If any of them occurred in nature, we could not simulate them.

E.g.

  • The Tiling function, which, given any finite set of polygonal tiles, outputs the size k of the largest k×k square that can be tiled by them, or 0 if they tile the entire plane.

    Because of the existence of Wang tiles, which can simulate the behaviour of a Turing machine as a tiling pattern, this is a non computable function - so if there were some process in nature that in some way had a direct connection to the Tiling function, we would not be able to simulate it in a normal computer.


    Wang tiles, which can be used to simulate the behaviour of any Turing machine. Because the halting problem is non computable, there is no computable function that, given a finite set of Wang tiles, can put any bounds on the size of the largest region it can tile.

That includes quantum computers, computers that use randomness, massively parallel computers, and hardware neural nets using the ordinary idea of a neural net as currently described rather than real neurons - none of those could compute it.

That is to say, to make this clearer: we can of course simulate Wang tiles in a computer. But if some process in nature were able to take any finite set of Wang tiles and output the size of the largest region they can tile - that is something our computers can never simulate.
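
To make the distinction concrete: any bounded version of the question is computable by brute force. Here is a minimal sketch (the tuple representation of a Wang tile is my own choice) of a backtracking search that decides whether a given finite tile set can tile an n×n square. It is only the unbounded question - over all sizes at once - that no program can answer.

```python
def tiles_square(tiles, n):
    """Decide whether the Wang tiles `tiles` can tile an n x n square.
    Each tile is a tuple (top, right, bottom, left) of edge colours;
    adjacent edges must match, and tiles may not be rotated."""
    grid = [[None] * n for _ in range(n)]

    def place(pos):
        if pos == n * n:
            return True  # every cell filled consistently
        r, c = divmod(pos, n)  # fill row by row, left to right
        for tile in tiles:
            # The tile above must have a matching bottom edge (index 2)...
            if r > 0 and grid[r - 1][c][2] != tile[0]:
                continue
            # ...and the tile to the left a matching right edge (index 1).
            if c > 0 and grid[r][c - 1][1] != tile[3]:
                continue
            grid[r][c] = tile
            if place(pos + 1):
                return True
            grid[r][c] = None  # backtrack
        return False

    return place(0)

print(tiles_square([(0, 0, 0, 0)], 3))  # -> True (one uniform tile tiles anything)
print(tiles_square([(0, 1, 1, 1)], 2))  # -> False (top never matches bottom)
```

This search terminates for every fixed n, but its running time gives no handle on the general Tiling function: knowing that a set tiles a 1000×1000 square tells you nothing decisive about whether it tiles the whole plane.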

Roger Penrose thinks that when we understand the notion of mathematical truth (and so presumably also when we understand truth generally) we are making a similar non computable leap that our programmed computers will never be able to achieve.

Of course, we don't know of any natural phenomenon that computes the Tiling function.  And I am not suggesting that we have a miraculous ability to solve all non computable problems or anything like that.

The idea is just that there is something non computable going on there, that in the particular area of understanding truth, something is going on that can't be simulated with a normal turing machine. In terms of physics, Roger Penrose has suggested it is something to do with the interface between Quantum Mechanics and General Relativity - due to spontaneous collapse of the wave function at the graviton level - and he has given some arguments in favour of his views that are more plausible than you might expect.

Whether that's it or not, I don't know, but it sounds like a reasonable way you might get something non computable going on, if it does happen somewhere. Or it could be something else we haven't thought of at all.

For a list of many more examples of non computable functions see:

Non-computable but easily described arithmetical functions

So if human behaviour is based on non computable functions, again we can't simulate it in an ordinary computer.

See also Robert Walker's answer to Is the development of sentient AI highly probable within this century?

About the Author

Robert Walker


Writer of articles on Mars and Space issues - Software Developer of Tune Smithy, Bounce Metronome etc.
Studied at Wolfson College, Oxford
Lives in Isle of Mull