Well, some people think the whole approach of using programming to simulate human understanding is wrong if you want it to be human-like. It might be that there is no way, using these techniques, to create a machine that understands what truth is.
If it has no idea of what is meant by truth, then though it may simulate our behaviour in more and more ways, it's not really "human-like".
Yet it might eventually be far better than us at driving cars, at playing chess, and at more and more other things. But if it doesn't understand truth, it doesn't really understand what it is doing. It can simulate some aspects of human behaviour, sometimes in ways that astonish us.
But then so also can the clockwork automata that so amazed people in the eighteenth and nineteenth centuries.
Like this 200-year-old Japanese automaton that could fire an arrow. At one point it probably seemed that you could do almost anything with clockwork.
Now our computer programs are so advanced in some ways that it may seem that you can do almost anything with computer programs.
But, like clockwork machines, they may have limitations.