Robert Walker
Well, some people think the whole approach of using programming to simulate human understanding is misguided if you want the result to be human-like. It may be that there is no way, using these techniques, to create a machine that understands what truth is.

If it has no idea what is meant by truth, then even though it may simulate our behaviour in more and more ways, it is not really "human-like".

Yet it might still be far better than us at driving cars, at playing chess, and eventually at more and more other things. But if it doesn't understand truth, it doesn't really understand what it is doing. It can simulate some aspects of human behaviour, sometimes in ways that astonish us.

But then so also can the clockwork automata that so amazed people in the eighteenth and nineteenth centuries.

Like this 200-year-old Japanese automaton that could fire an arrow.
At one point it probably seemed that you could do almost anything with clockwork.

Now our computer programs are so advanced in some ways that it may seem that you can do almost anything with computer programs.

But, like clockwork machines, they may have limitations.

For more about this, and why some people think that programmable computers can never achieve true human-like understanding, see Why Computer Programs Can't Understand Truth - And Ethics Of Artificial Intelligence Babies.

About the Author

Robert Walker

Writer of articles on Mars and Space issues - Software Developer of Tune Smithy, Bounce Metronome etc.
Studied at Wolfson College, Oxford
Lives in Isle of Mull