Alan Turing: The Enigma

Author: Andrew Hodges

That ‘forgetting about them’ would be precisely the element of ‘selecting certain details’ necessary to the scientific method. He conceded that the nervous system was itself continuous, and therefore

 

certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.

But he argued that whatever kinds of continuous or random elements were involved in the system – as long as the brain worked in some definite way, in fact – it could be simulated as closely as one pleased by a discrete machine. This was reasonable since it was only applying the same method of approximation as worked very well in most applied mathematics and in the replacement of analogue by digital devices.

Natural Wonders had begun by proposing the question, ‘What have I in common with other living things, and how do I differ from them?’ Now Alan was asking what he had in common with a computer, and in what ways he differed. Besides the distinction of ‘continuous’ and ‘discrete’, there was also that of ‘controlling’ and ‘active’ to consider. Here he met the question as to whether his senses, muscular activity and bodily chemistry, were irrelevant to ‘thinking’, or at least, whether they could be absorbed into a purely ‘controlling’ model in which the physical effects did not matter. Discussing this problem, he wrote:

 

It will not be possible to apply exactly the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it. It must be given some tuition. We need not be too concerned about the legs, eyes, etc. The example of Miss Helen Keller shows that education can take place provided that communication in both directions between teacher and pupil can take place by some means or other.

He was not dogmatic about this line of argument. At the end of the article he wrote (perhaps so as to be on the safe side):

 

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

But this was not where he placed his own bets. Later he went as far as to say:[39]

 

… I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics, such as the shape of the human body. It appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category.

In the subjects proposed for automation in 1948, he had been careful to choose those which involved no ‘contact with the outside world’. Chess playing, pre-eminently, would involve no relevant fact but the state of the chessboard and the state of the players’ brains. The same could certainly be claimed of mathematics, and indeed of any purely symbolic system, involving anything technical, any matter of technique. He himself had included cryptanalysis in this scope, but hesitated over language translation. The Mind paper, however, boldly extended the range of ‘intelligent machinery’ to general conversation. As such it was vulnerable to his own criticism, that it would require ‘contact with the outside world’ for this to be possible.

He did not meet the problem that to speak seriously is to act, and not only to issue a string of symbols. Speech may be uttered in order to effect changes in the world, changes inextricably connected with the meaning of the words uttered. The word ‘meaning’ led Polanyi into extra-material, religious connotations, but there is nothing at all supernatural about the mundane fact that human brains are connected with the world by devices other than a teleprinter. A ‘controlling machine’ was to have physical effects ‘as small as we please’, but speech, to be audible or legible, has to have a definite physical effect, tied into the structure of the outside world. The Turing model held that this was an irrelevant fact, to be discarded in the selecting of certain details, but the argument for this irrelevance was left weakly supported.

If, as Alan Turing himself suggested, knowledge and intelligence in human beings derive from interaction with the world, then that knowledge must be stored in human brains in some way that depends upon the nature of that interaction. The structure of the brain must connect the words it stores, with the occasions for using those words, and with the fists and tears, blushes and fright associated with them, or for which they substitute. Could the words be stored for ‘intelligent’ use, within a discrete state machine model of the brain, unless that model were also equipped with the brain’s sensory and motor and chemical peripheries? Is there intelligence without life? Is there mind without communication? Is there language without living? Is there thought without experience? These were the questions posed by Alan Turing’s argument – questions close to those that worried Wittgenstein. Is language a game, or must it have a connection with real life? For chess thinking, for mathematical thinking, for technical thinking and any kind of purely symbolic problem-solving, there were arguments of great force behind Alan’s view. But in extending it to the domain of all human communication the questions he raised were not properly faced, let alone resolved.

Indeed, they had been faced more openly in the 1948 report, in choosing activities for a ‘disembodied’ brain. He had narrowed them down to those not requiring ‘senses or locomotion’. But even there, in his choice of cryptanalysis as a suitable field for intelligent machinery, he had played down the difficulties arising from human interaction. To portray cryptanalysis as a purely symbolic activity was very much a Hut 8 view of the war, sheltered from the politics and military activity, and trying to work in a self-contained way without interference from outside. The hero of The Small Back Room had said rather ironically:

 

It’s a great pity when you come to think of it that we can’t abolish the Navy, the Army and the Air Force and just get on with winning the war without them.

But they could not do without the fighting services. There had to be some integration of Intelligence and Operations, in order for Bletchley to have any meaning. Indeed, the difficulty of the authorities was that of trying to draw a line between them, where no line really existed. The intelligence analysts invaded the field of appreciation. Appreciation held consequences for operations, which in turn were necessary for more effective cryptanalysis. But the Operations actually happened, in the war-winning, ship-sinking physical world. It was hard to believe in Hut 8, where the war was like a dream, but they were actually doing something.

To the mathematicians, it might well be tempting to regard the machines and the pieces of paper as purely symbolic. But the fact that they had physical embodiment mattered very much to those for whom knowledge was power. If there was a real secret to Bletchley it lay in the integration of those different kinds of description of its activities: logical, political, economic, social. It was so complex, not just within one system, but in its meshing of many systems, that a Churchillian ‘Spirit of Britain’ was as good an explanation of how it worked as any. But Alan had always leant towards keeping his work self-contained, as a technical puzzle, and was resistant to what he regarded as administrative interference. It was the same problem with his model of the brain, as in his work for the Brain of Britain. There was the same problem again, in the fate of the ACE. Having set down a highly intelligent plan, Alan tended to assume that the political wheels would turn as if by magic to put it into effect. He never allowed for the interaction required to achieve anything in the real world.

This was the objection that lay at the heart of Jefferson’s remarks, confused as they might be. It was not that Alan avoided it entirely, for he went as far as to concede:

 

There are, however, special remarks to be made about many of the disabilities that have been mentioned. The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or black man and black man.

Yet this was not a special, but a very substantial concession, opening up the whole question as to the part played by such human faculties, in the ‘intelligent’ use of language. This question he failed to explore.

In a rather similar way, he did not avoid giving a direct answer to Jefferson’s objection that a machine could not appreciate a sonnet ‘because of emotions genuinely felt’. Jefferson’s ‘sonnets’ had about them the quality of Churchill’s advice to R.V. Jones:[40] ‘Praise the humanities, my boy. That’ll make them think you’re broadminded!’ – and accordingly, Alan fastened on to the phoney culture of this Shakespeare-brandishing, perhaps a little cruelly. He rested his case on the imitation principle. If a machine could argue as apparently genuinely as a human being, then how could it be denied the existence of feelings that would normally be credited to a human respondent? He gave a paradigm conversation to illustrate what he had in mind:

INTERROGATOR: In the first line of your sonnet which reads ‘Shall I compare thee to a summer’s day’, would not ‘a spring day’ do as well or better?
WITNESS: It wouldn’t scan.
INTERROGATOR: How about ‘a winter’s day’. That would scan all right.
WITNESS: Yes, but nobody wants to be compared to a winter’s day.
INTERROGATOR: Would you say that Mr Pickwick reminded you of Christmas?
WITNESS: In a way.
INTERROGATOR: Yet Christmas is a winter’s day, and I do not think Mr Pickwick would mind the comparison.
WITNESS: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

But this answer to the objection would prompt the same questions about the role of interaction with the world in ‘intelligence’. This play with words was the strawberries and cream, and not the meat, of literary criticism. It was a view of sonnets from the back of Ross’s English class! Where lay the ‘genuine feeling’? What Jefferson could well have intended, was something more like intellectual integrity than examination mark-scoring: truthfulness or sincerity pointing to some connection between the words, and experience of the world. But such integrity, a constancy and consistency in word and action, could not be enjoyed by the discrete state machine alone. The issue would be clearer if the machine were confronted with a question such as ‘Are you or have you ever been …’ or ‘What did you do in the war?’ Or, staying with the sexual guessing game, asked to interpret some of the more ambiguous of Shakespeare’s sonnets. If asked to discuss proposed alterations to literature, Dr Bowdler’s preference for

 

Under the greenwood tree
Who loves to work with me

would make a telling topic. Questions involving sex, society, politics or secrets would demonstrate how what it was possible for people to say might be limited not by puzzle-solving intelligence but by the restrictions on what might be done. Such questions, however, played no part in the discussion.

Alan disliked anything with a churchy or pretentious flavour, and employed a light style with homely metaphors in order to make his serious points. It was in the Apostolic tradition, and also shared with Samuel Butler and Bernard Shaw. But rather like those writers, his examples of ‘intelligence’ could be accused of a touch of the blarney, of arguing for the sake of it, of mere cleverness, or of making debating points. He enjoyed the play of ideas – but a logical jousting with God and Gödel, the Lion and Unicorn tussle of free will and determinism, was not enough.

It was not necessary to be either ‘soupy’ or pretentious in order to approach the questions of ‘thinking’ or ‘consciousness’ in another way. It was the year 1949 that saw Nineteen Eighty-Four – a book that Alan read, impressed: it elicited from him an unusually political comment when talking with Robin Gandy: ‘… I find it very depressing. … I suppose absolutely the only hope lies in those proles.’ Orwell’s discussion of the capacity of political structure to determine language, and language to determine thought, was itself highly relevant to Alan Turing’s thesis. And Orwell might have been thinking of the Turing sonnet-writing computer, with his ‘versificators’, machines to turn out popular songs.

But that was not the central issue, for Orwell was not concerned to reserve for human beings the intelligent, indeed intellectually satisfying, work of rewriting history at the Ministry of Truth. His passion was reserved for intellectual integrity: keeping the mind whole, keeping it in contact with external reality. ‘You must get rid of those nineteenth-century ideas about the laws of Nature,’ O’Brien told Winston Smith. ‘We make the laws of nature. … Nothing exists except through human consciousness.’ Here lay Orwell’s fear, and to counter it he seized upon scientific truth as an external reality that political authority could not gainsay: ‘Freedom is the freedom to say two and two make four.’ He added in the unchangeable past, and sexual spontaneity, as things that were so, whatever anyone said. Science and sex! – they had been the two things that allowed Alan Turing to jump out of the social system in which he was trained. But the machine, the pure discrete state machine, could have none of this. Its universe would be a void, but for the word of its teacher. It might as well be told that space was five-dimensional, or even that two and two made five when Big Brother decreed. How could it ‘think for itself’, as Alan Turing asked of it?
