Alan Turing: The Enigma
Author: Andrew Hodges

 

The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well chosen gibberish, whenever any dangerous suggestions were made. I think that a reaction of this kind is a very real danger. This topic naturally leads to the question as to how far it is in principle possible for a computing machine to simulate human activities.

This was a more controversial claim. Hartree, for instance, writing to The Times in November, had repeated his statement in Nature that ‘use of the machine is no substitute for the thought of organising the computations, only for the labour of carrying them out.’ Darwin had written more expansively that

 

In popular language the word ‘brain’ is associated with the higher realms of the intellect, but in fact a very great part of the brain is an unconscious automatic machine producing precise and sometimes very complicated reactions to stimuli. This is the only part of the brain we may aspire to imitate. The new machines will in no way replace thought, but rather they will increase the need for it …

To describe such careful and responsible statements as ‘gibberish’ was not the most tactful policy.

Darwin and Hartree were, in fact, echoing the comment by Ada, Countess of Lovelace, who wrote an account38 of Babbage’s planned Analytical Engine in 1842, and claimed that ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ At one level, this assertion certainly had to be urged against the very naive view that a machine doing long and elaborate sums could be called clever for so doing. As the first writer of programs for a universal machine, Lady Lovelace knew that the cleverness lay in her own head. Alan Turing would not have disputed this point, as far as it went. The manager who took all decisions from the rule book would hardly be ‘intelligent’, or really taking a decision. It would be the writer of the rule book who was determining what happened. But he held that there was no reason in principle why the machine should not take over the work of the ‘master’ who programmed it, to a point where, according to the imitation principle, it could be called intelligent or original.

What he had in mind went much further than the development of languages which would take over the detailed work of the ‘masters’ in compiling instruction tables. He mentioned this future development, which in the ACE report he had already explored a little, quite briefly:

 

Actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machines were given instruction tables which would enable it to interpret that logical system. This should mean that there will be much more practical scope for logical systems than there has been in the past. Some attempts will probably be made to get the machines to do actual manipulations of mathematical formulae. To do so will require the development of a special logical system for the purpose. This system should resemble normal mathematical procedure closely, but at the same time should be as unambiguous as possible.
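The ‘actual manipulations of mathematical formulae’ that Turing anticipates here are what we would now call symbolic computation. A minimal sketch, in present-day Python rather than any notation Turing specified, of a machine rewriting formulae held in an exact, unambiguous language (expressions as nested tuples, one rewriting rule per operator):

```python
# Expressions in a small exact language: numbers, variable names,
# and tuples ('op', left, right). A rule table in code tells the
# machine how to rewrite each form -- here, differentiation.
# (Illustrative sketch only; Turing proposed no such notation.)

def diff(e, x):
    """Differentiate expression e with respect to variable x."""
    if isinstance(e, (int, float)):      # a constant
        return 0
    if isinstance(e, str):               # a variable
        return 1 if e == x else 0
    op, a, b = e
    if op == '+':
        return ('+', diff(a, x), diff(b, x))
    if op == '*':                        # product rule
        return ('+', ('*', diff(a, x), b), ('*', a, diff(b, x)))
    raise ValueError(f'unknown operator {op!r}')

# d/dx (x*x + 3), left unsimplified by the machine
print(diff(('+', ('*', 'x', 'x'), 3), 'x'))
```

Because the language is exact, each formula has exactly one reading, which is the property Turing says the ‘special logical system’ must have.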

Rather, in speaking of a computer ‘simulating human activities’, he had in mind the simulation of learning, in such a way that after a point the machine would not merely be doing ‘whatever we know how to order it to perform,’ as Lady Lovelace had claimed, for no one would know how it was working:

It has been said that computing machines can only carry out the purposes that they are instructed to do. This is certainly true in the sense that if they do something other than what they were instructed then they have just made some mistake. It is also true that the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands in principle what is going on all the time. Up till the present machines have only been used in this way. But is it necessary that they should always be used in such a manner? Let us suppose we have set up a machine with certain initial instruction tables, so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time, the instructions would have altered out of recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations.
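The possibility Turing describes, tables that ‘might on occasion, if good reason arose, modify those tables’, follows directly from holding the instructions in the same rewritable store as the data. A toy sketch in modern Python (the instruction format is invented for illustration, not the ACE’s):

```python
# A toy stored-program machine: the program lives in a mutable
# list that its own instructions may write into, so running code
# can alter instructions not yet executed -- self-modifying tables.

def run(mem, acc=0):
    pc = 0
    while True:
        op, arg = mem[pc]
        pc += 1
        if op == 'add':                  # ordinary arithmetic step
            acc += arg
        elif op == 'poke':               # rewrite an instruction in store
            addr, new_instr = arg
            mem[addr] = new_instr
        elif op == 'halt':
            return acc

program = [
    ('poke', (2, ('add', 100))),   # alters instruction 2 before it runs
    ('add', 1),
    ('add', 1),                    # never executed as written
    ('halt', None),
]
print(run(program))
```

After a few such rewrites the stored tables need bear no resemblance to what was first put in, which is exactly the situation Turing asks the reader to imagine.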

It was in this passage that he drew first attention to the richness inherent in a stored-program universal machine. He was well aware that strictly speaking, exploitation of the ability to modify the instructions could not enlarge the scope of the machine, later writing:39
 

How can the rules of a machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. …The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.

But with that strictly logical reservation, he held the process of changing instructions to be significantly close to that of human learning, and deserving of emphasis. He imagined the progress of the machine altering its own instructions, as like that of a ‘pupil’ learning from a ‘master’. (It was a typically quick shift to the ‘states of mind’ idea of the machine from the ‘instruction note’ view.) A learning machine, he went on to explain:

 

might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence. As soon as one can provide a reasonably large memory capacity it should be possible to begin to experiment on these lines. The memory capacity of the human brain is of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress with a few million digits, especially if one confined one’s investigation to some rather limited field such as the game of chess.

The ACE, as planned, would have at most 200,000 digits in store, so to speak of a ‘few million’ was looking well into the future. He described the storage planned for the ACE as ‘comparable with the memory capacity of a minnow’. But even so, he perceived the development of ‘learning’ programs as something that would be feasible within a short period: not merely a hypothetical possibility, but affecting current research in a practical way. On 20 November 1946 he had replied to an enquiry from W. Ross Ashby, a neurologist eager to make progress with mechanical models of cerebral function, in the following terms:40

 

The ace will be used, as you suggest, in the first instance in an entirely disciplined manner, similar to the action of the lower centres, although the reflexes will be extremely complicated. The disciplined action carries with it the disagreeable feature, which you mentioned, that it will be entirely uncritical when anything goes wrong. It will also be necessarily devoid of anything that could be called originality. There is, however, no reason why the machine should always be used in such a manner: there is nothing in its construction which obliges us to
do so. It would be quite possible for the machine to try out variations of behaviour and accept or reject them in the manner you describe and I have been hoping to make the machine do this. This is possible because, without altering the design of the machine itself, it can, in theory at any rate, be used as a model of any other machine, by making it remember a suitable set of instructions. The ace is in fact, analogous to the ‘universal machine’ described in my paper on computable numbers. This theoretical possibility is attainable in practice, in all reasonable cases, at worst at the expense of operating slightly slower than a machine specially designed for the purpose in question. Thus, although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model, within the ace, in which this possibility was allowed for, but in which the actual construction of the ace did not alter, but only the remembered data, describing the mode of behaviour applicable at any time. I feel that you would be well advised to take advantage of this principle, and do your experiments on the ace, instead of building a special machine.

Enlarging in his talk upon the favourite example of chess-playing, Alan claimed that

 

It would probably be quite easy to find instruction tables which would enable the ace to win against an average player. Indeed Shannon of Bell Telephone Laboratories tells me that he has won games playing by rule of thumb; the skill of his opponents is not stated.

This was probably a misunderstanding. Shannon had been thinking about mechanising chess-playing, since about 1945, by a minimax strategy requiring the ‘backing up’ of search trees – the same basic idea as Alan and Jack Good had formalised in 1941. But he had not claimed to have produced a winning program. In any case, however, Alan

 

would not consider such a victory very significant. What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this, but this of course does not get us very far.
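The ‘backing up’ of search trees that Shannon, and earlier Turing and Good, had formalised is what is now called minimax: scores at the leaf positions are passed back up the tree, with the mover assumed to maximise and the opponent to minimise at alternate levels. A minimal sketch in Python (the tree and its leaf scores are invented purely for illustration):

```python
# Minimax "backing up": leaf scores propagate to the root, the
# mover maximising and the opponent minimising in alternate turns.

def minimax(node, maximising=True):
    if not isinstance(node, list):        # a leaf: the position's score
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

# A two-ply tree: the mover picks a branch; the opponent then
# picks the reply worst for the mover within that branch.
tree = [[3, 12], [2, 9], [1, 8]]
print(minimax(tree))   # the mover's best guaranteed outcome
```

The backed-up value at the root is the best the mover can guarantee against correct replies, which is why a fixed rule of thumb of this kind plays legally but, as Turing says, learns nothing from experience.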

Alan next turned a little aside from this central idea in order to consider the objection to the idea of machine ‘intelligence’ that was raised by the existence of problems insoluble by a mechanical process – by the discovery of Computable Numbers, in fact. In the ‘ordinal logics’ he had invested the business of seeing the truth of an unprovable assertion, with the psychological significance of ‘intuition’. But this was not the view that he put forward now. Indeed, his comments verged on saying that such problems were irrelevant to the question of ‘intelligence’. He did not probe far into the significance of Gödel’s theorem and his own result, but instead cut the Gordian knot:

 

I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

This was very true. Gödel’s theorem and his own result were concerned with the machine as a sort of papal authority, infallible rather than intelligent. But his real point lay in the imitation principle, couched in traditional British terms of ‘fair play for machines’, when it came to ‘testing their IQ’, a point which brought him back to the idea of mechanical learning by experience:

 

A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge. Why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the opponent will automatically provide this contact.

At the end of this talk there was a moment of stunned incredulity, during which his audience looked round with disbelief. This was probably much to Alan’s delight. He knew perfectly well that he was upsetting the conventional armistice between science and religion, and it was all the more grist to his mill. He had thought it all out since reading Eddington while in the sixth form, and he was not now going to toe this official line that separated the ‘unconscious automatic machine’ from the ‘higher realms of the intellect’. There was no such line – that was his thesis.

At heart it was the same problem of mind and matter that Eddington had tried to rescue for the side of the angels by invoking the Heisenberg Uncertainty Principle. But there was a difference. Eddington had addressed himself to the determinism of physical law, in order to deal with the kind of Victorian scientific world-picture that Samuel Butler had parodied in Erewhon:

 

If it be urged that the action of the potato is chemical and mechanical only, and that it is due to the chemical and mechanical effects of light and heat, the answer would seem to lie in an enquiry whether every sensation is not chemical and mechanical in its operation? …Whether there be not a molecular action of thought, whence a dynamical theory of the passions shall be deducible? Whether strictly speaking we should not ask what kinds of levers a man is made of rather than what is his temperament? How are they balanced? How much of such and such will it take to weigh them down so as to make him do so and so?