Rebane's Ruminations
July 2009
George Rebane

Definitely not.  This question is asked by Dr. Aladdin Ayesh in a brief but serious article bearing the above title, ‘Is it Time for the Turing Test to Retire?’  I presume Dr. Ayesh is a computer scientist, but he seems to be confused about how Alan Turing proposed his test for machine intelligence or, if you wish, AI almost sixty years ago.

Ayesh states that – Back in 1950, Turing proposed that a machine can demonstrate and be proven to possess true intelligence through a simple test of conversation, in which a human user converses blindly with two entities: a machine and another human. If that user could not tell the difference between the two entities, the machine is considered to be ‘intelligent’.

Turing did not offer his test as any definitive method to ‘prove’ that a machine is intelligent.  In fact, Turing did not even want to answer the seminal question, ‘What is intelligence?’  Instead, in his characteristic and deep fashion, Turing offered an operational answer to a difficult but related question that was posed to him – ‘Can a machine be intelligent?’ – by addressing the question ‘Can machines think?’

The short answer that Turing gave in that paper is that if a machine can fool a collection of human interrogators into believing that it was a human, then presumably the machine would be ascribed intelligence, since the human it was competing with was prima facie ascribed as intelligent.  In Turing’s ‘Imitation Game’ no limits were placed on the questions which could be posed to the hidden human and machine.  The interrogation would be conducted and terminated at the pleasure of the interrogator, at which time he would cast his vote as to which was the human and which the machine. And that was it, period.
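The protocol is simple enough to sketch in a few lines of code.  The Python below is illustrative only – the respondent functions are canned stand-ins, and every name in it is my own invention; nothing beyond the protocol itself comes from Turing’s paper:

```python
import random

def hidden_human(question):
    """Stand-in for the hidden human respondent."""
    return "(a human answer to: %s)" % question

def hidden_machine(question):
    """Stand-in for the hidden machine respondent."""
    return "(a machine answer to: %s)" % question

def imitation_game(questions, interrogator_vote):
    """One session of the Imitation Game.

    The interrogator sees answers labeled 'A' and 'B' (the assignment is
    hidden and random), may ask anything at all, terminates at his
    pleasure, and then votes for the label he believes is the human.
    Returns True if the machine fooled him."""
    labels = {"A": hidden_human, "B": hidden_machine}
    if random.random() < 0.5:                       # hide who is who
        labels = {"A": hidden_machine, "B": hidden_human}

    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]
    vote = interrogator_vote(transcript)            # 'A' or 'B'
    truly_human = "A" if labels["A"] is hidden_human else "B"
    return vote != truly_human                      # fooled if the vote is wrong
```

A real test would replace the stand-ins with a live human and the candidate machine, and run many sessions with a cohort of interrogators, scoring the fraction of wrong votes.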

As such, the Turing Test is a perfectly sound test for ascribing intelligence to a machine since during the interrogation it can be asked to do anything that a presumably intelligent human can do – and more.  In its effort to pass itself off as a human, the machine will have to master such things as ascription of cause (Pearl, 2000) and the proper processing of counter-factuals.  If it does this effectively with a cohort of interrogators, who could possibly say that the machine were not at least as intelligent as its competition?

A number of people, including Ayesh, seem to have a problem with this and seek to confound matters.  Ayesh goes on to say –

For an intelligent machine to pass the test it has to be able to pretend to be human. This requires that the machine is conscious of the fact that it is a machine, it is conscious of the fact the test requires it to come across as human, it is conscious of time and visual limitation, and finally it is conscious of what makes a human comes across as human, i.e. human quirkiness. After all, we would be much quicker to accept a robot as intelligent if it could hold a light-hearted conversation about football.

In my opinion, the Turing Test does not test intelligence, or at least not solely so. It tests consciousness, self-awareness and the ability to lie. The last is the most important, because the ability to lie is a distinctively human characteristic associated with our ability to create from imagination.

A conscious, creative machine with imagination is a very interesting machine, but are these pre-requisites of intelligence?

Turing posed no limits as to the additional attributes the machine may possess.  As with pornography, we don’t yet know what intelligence is, only how to ascribe a presentation of it.

Finally, nowhere does Dr. Ayesh offer a definition of intelligence before posing his question.  Without doing that in an acceptable manner, he has not stepped ahead of Turing.  And without such a definition in hand, we cannot presume to start planning a retirement party for the Turing Test, or even determine whether one is needed.


4 responses to “‘Is it Time for the Turing Test to Retire?’”

  1. kad

    George,
    The Turing test is indeed a reasonable operational definition of intelligence. And Dr. Ayesh gets it completely backwards when he suggests that it is a measure of consciousness. However, I think he is on the right track here in his unfortunate attempt to de-emphasize the test.
    The importance of the Turing test has been exaggerated beyond all measure by the Singularity believers, who typically conflate it with machine consciousness and the transcendence of biology by machine. Case in point: Kurzweil (The Singularity Is Near, 297), who leaps from the passing of the Turing test to “[intelligence] non-biological – capable of comprehending and leveraging its own powers.”
    A passer of the Turing test could presumably “leverage” its own powers. And I take it as a given that silicon-based computers will pass the Turing test and surpass human inference in many areas, but that has nothing to do with “comprehending” their own powers. Comprehension in this case is conscious comprehension, and indeed machine consciousness rather than just intelligence is required to say that machines have generally surpassed biology (or else they have not surpassed it in this area). Certainly the meaningful immortality that Kurzweil fervently prays will occur through the downloading of human minds into hardware requires consciousness in hardware (is there any point in your mind living on if it does not consciously experience this?).
    So the interesting question is not the Turing test, but rather machine consciousness, of which the test says little to nothing. Indeed, machine consciousness may well be impossible to test – I do not even know for sure that any human other than me is conscious, but that does not make it less interesting. And Occam’s criterion tells us not to ascribe consciousness to the passer of a test that merely requires intelligence. The vague hope that consciousness will arrive as a systems effect of enough intelligence is about as rigorous as your randomly chosen religion.
    In fact it seems that consciousness and intelligence may be close to orthogonal. Certainly the achievements of current AI suggest that the latter can occur without the former – common sense (though not testable, due to the lack of any test) says that AI has made no, as in not one iota of, progress towards consciousness despite some nice moves towards intelligence. Similarly, deep meditative states can be profoundly alive with consciousness yet absolutely devoid of thought or any externally apparent environmental processing.
    IMHO, consciousness in solid state devices is a pipe dream not dissimilar to those of the cognitive scientists of a century ago who believed that the mind could be modeled with a hydraulic metaphor, as that was their reigning technology.
    This is not to say that humans will not create conscious entities. But I would bet dimes to dollars that this will happen only after our information processing paradigm has shifted towards biological systems at the least, and probably towards quantum effects within such. The singularity of the Kurzweil dream – of the total superiority of created being to human being, of potential immortality – is not even close, and when it comes it will be more like us creating enhanced versions of ourselves rather than us replacing ourselves with solid state machines. Singular beings will look on us like we do on other mammals, not as a totally different form of life.


  2. George Rebane

    Agreed. Great comment Keith.
    On the orthogonality of intelligence and consciousness, specifically as it relates to the TT, do you believe that an intelligent human interrogator can devise a line of questioning that would reveal the existence or absence of the respondent’s consciousness? (Until a better one comes along, I am still using Julian Jaynes’ definition of consciousness. http://en.wikipedia.org/wiki/Julian_Jaynes)
    In short, I’m not struggling with the notion that a critter can be conscious and not be intelligent in any knowledge-domain-specific way (there are already several domains in which machines are more intelligent than any human). But I am not sure that, within the TT-specified process, the successful machine, revealing its intelligence in the guise of appearing human, would not also convince the interrogator of its attendant consciousness (and therefore, presumably, of its being conscious).
    Since we can never directly experience another’s consciousness, we are reduced to assessing the reliability of our test(s) of another’s consciousness – i.e. our belief. This proposition seems to hold until we can find an operational foundation for consciousness such as, perhaps, suggested by Penrose in his description of the quantum events which take place in the brain’s inter-neuronal synapses. This would be a different basis than assuming that the rise of consciousness needs only a sufficiently complex substrate.
    Thoughts?


  3. kad

    George, I tried to post this yesterday, but the system refused to allow it.
    The machine passing the TT would by definition force the interrogator to admit that s/he does not have sufficient data to determine which of the conversation partners is conscious and which is not. The human conclusion, however, bears no necessary relationship to reality.
    My conjecture is that with enough intelligence and data it should be quite possible to simulate all externally verifiable aspects of consciousness without actually possessing such.
    This of course means that the status of consciousness itself is not an empirically testable fact, so yes we are reduced to assessing the imperfect reliability of our tests. Perhaps at some point something along the lines of what Penrose suggests, or something related to observer collapsed wave functions, or some technology (or self training regime) that actually lets us experience the consciousness of another will save the day.
    Some will then conclude that we should therefore accept the presence of consciousness based on the best imperfect test we can devise, or that consciousness is but an epiphenomenon. I am not in that camp because every test I can devise is so far from being conclusive that it cannot begin to overcome the priors. I do not think that, in the end, Kurzweil will be in that camp after he has downloaded his thought patterns into a solid state machine to gain immortality, only to find (though he would not be aware of it) that he had effectively negated himself because his thought patterns do not know they exist (even though perhaps no external observer would know this).


  4. George Rebane

    Keith, given that a means of testing for the existence of another entity’s consciousness does not (yet) exist, why would “the machine passing the TT force the interrogator to admit” anything about the machine being sentient/conscious? Since the TT was designed only as a probabilistic (i.e. unreliable) detector of sapience/intelligence – with the constraint that behind one of the ‘doors’ was a human and behind the other, a machine – the interrogator can only correctly ascribe sentience to within a probability. And this only with the presumption that at least the human is sentient. But in any event, I do agree that “the human conclusion, however, bears no necessary relationship to reality.”
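    To put the ‘probabilistic detector’ point in numbers – and the hit and false-alarm rates below are purely hypothetical, chosen only to make the arithmetic concrete – the interrogator’s vote updates a 50/50 prior by Bayes’ rule:

```python
def posterior_human(prior_human=0.5, hit_rate=0.7, false_alarm=0.3):
    """P(respondent is human | interrogator votes 'human'), by Bayes' rule.

    hit_rate    = P(vote 'human' | respondent is human)   -- hypothetical
    false_alarm = P(vote 'human' | respondent is machine) -- hypothetical"""
    p_vote_human = hit_rate * prior_human + false_alarm * (1.0 - prior_human)
    return hit_rate * prior_human / p_vote_human

# With the TT constraint that one 'door' hides a human and one a machine,
# the prior is 0.5; with the illustrative rates above, the vote raises the
# probability of 'human' only from 0.5 to 0.7 -- never to certainty.
print(posterior_human())
```

    An unreliable detector, in other words, can sharpen the interrogator’s belief but never settle the question.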
    Agree with the remainder of your comments.
    Apropos of your last assertion re Kurzweil, I have for years held the belief that a perfect-fidelity xyzt-space duplicate of a person is not also a duplicate in the sentience sense. Subjectively, you/I will rebel against such a notion when it is considered with the help of my ‘357’ thought experiment, wherein you/I as person A, holding a .357 Magnum, are left alone in a room with the above-described duplicate B. For a number of obvious reasons, only A or B should leave the room alive (assume a reliable colleague will undetectably dispose of the remains). After some, perhaps extended, conversation, A is satisfied that B is a perfect duplicate of himself, and that B would not be detected as a copy by A’s closest relatives and friends should B be the survivor of this experiment.
    I maintain that A would never consider letting B survive their encounter and will make it so. Thereby A will always reject the belief that ‘he’ would survive in the embodiment of B – i.e. that A’s essential sentience has not been transferred in an experiential sense even if B were also sentient. This was presumably confirmed during their private meeting since A could experience the world only through his normal ‘self’.
    Now if A were to be morphed into a duplicate B or another embodiment piece by piece, so that there would occur no predictable discontinuity of sentience, then this problem might be avoided. However, one would never know if one of the small morphing steps would cause A to ‘blink out’. If sapience transfer continued, would we be able to detect that the morphing being no longer possessed the sentience of A?
    (BTW, this discussion thread could continue under the 20jul09 ‘Singularity in Ten Years?’ post, which deals specifically with questions of machine sentience.)

