No. There are two separate things here: the Turing Test, which Turing did not describe in full experimental detail, but the idea is that the computer can't be distinguished from a human in a reasonably thorough test, and a prediction Turing made in 1950, which is NOT the Turing Test.
The prediction was that by 2000 a computer would be able to fool human judges 30% of the time in a five-minute test.
So, this program did fool the human judges 30% of the time in a five-minute test. But it didn't really fit the spirit of Turing's prediction, because it "cheated" by pretending to be a human with a poor grasp of English, much like an earlier program that "cheated" by pretending to be a schizophrenic patient.
And it wasn't a supercomputer, as some of the stories said. It was a chatbot. There have been many of those, and some were better at fooling humans than this one.
It's basically a media hoax that somehow got picked up by nearly all the major news outlets. It shows that journalists don't have time to check their sources and will print a story as is if the press release comes from an apparently reputable authority, in this case the University of Reading in England. I don't know why the journalists didn't bother to contact another logician for a second opinion on the story; there were plenty they could have contacted.
Turing didn't specify the rules of the Turing Test exactly, but there is an updated version by Ray Kurzweil and Mitchell Kapor, who have a wager on the outcome. It gives an idea of one way you could make his ideas concrete. It is somewhat stronger than Turing's original test, but I think it is in the spirit of it: the idea of a computer that genuinely fools a human judge into thinking it is human under reasonably testing conditions.