Sunday 14 June 2015

Artificial Intelligence in 1966-1973

A Dose of Reality (1966-1973)

From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted:


It is not my aim to surprise or shock you - but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

Terms such as "visible future" can be interpreted in various ways, but Simon also made a more concrete prediction: that within 10 years a computer would be chess champion, and a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. Simon's overconfidence was due to the promising performance of early AI systems on simple examples. In almost all cases, however, these early systems turned out to fail miserably when tried out on wider selections of problems and on more difficult problems.

The first kind of difficulty arose because most early programs contained little or no
knowledge of their subject matter; they succeeded by means of simple syntactic manipulations. A typical story occurred in early machine translation efforts, which were generously
funded by the U.S. National Research Council in an attempt to speed up the translation of
Russian scientific papers in the wake of the Sputnik launch in 1957. It was thought initially that simple syntactic transformations based on the grammars of Russian and English,
and word replacement using an electronic dictionary, would suffice to preserve the exact
meanings of sentences. 
The fact is that translation requires general knowledge of the subject matter in order to resolve ambiguity and establish the content of the sentence. The famous retranslation of "the spirit is willing but the flesh is weak" as "the vodka is good but the meat is rotten" illustrates the difficulties encountered. In 1966, a report by an advisory committee found that "there has been no machine translation of general scientific text, and none is in immediate prospect." All U.S. government funding for academic translation projects was canceled. Today, machine translation is an imperfect but widely used tool for technical, commercial, government, and Internet documents.

The second kind of difficulty was the intractability of many of the problems that AI was attempting to solve. Most of the early AI programs solved problems by trying out different combinations of steps until the solution was found. This strategy worked initially because microworlds contained very few objects and hence very few possible actions and very short solution sequences. Before the theory of computational complexity was developed, it was widely thought that "scaling up" to larger problems was simply a matter of faster hardware and larger memories. The optimism that accompanied the development of resolution theorem proving, for example, was soon dampened when researchers failed to prove theorems involving more than a few dozen facts. The fact that a program can find a solution in principle does not mean that the program contains any of the mechanisms needed to find it in practice.
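To make the combinatorial explosion concrete, here is a small illustrative calculation (the numbers are not from the original text): a blind search that tries combinations of steps must examine on the order of b^d action sequences, where b is the number of choices at each step and d is the length of the solution.

    # Illustrative only: growth of blind search with branching factor b and depth d.
    for b, d in [(3, 5), (10, 10), (10, 20)]:
        print(f"b={b}, d={d}: about {b**d:,} action sequences")
    # A microworld (b=3, d=5) gives 243 sequences; a modestly larger problem
    # (b=10, d=20) already gives 100,000,000,000,000,000,000.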

The illusion of unlimited computational power was not confined to problem-solving programs. Early experiments in machine evolution (now called genetic algorithms) (Friedberg, 1958; Friedberg et al., 1959) were based on the undoubtedly correct belief that by making an appropriate series of small mutations to a machine-code program, one can generate a program with good performance for any particular simple task. The idea, then, was to try random mutations with a selection process to preserve mutations that seemed useful. Despite thousands of hours of CPU time, almost no progress was demonstrated. Modern genetic algorithms use better representations and have shown more success.
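As a rough illustration of the mutate-and-select idea described above, the following Python fragment is a toy sketch, not a reconstruction of the Friedberg experiments (which mutated machine-code programs): it evolves a bit string toward an arbitrary target, and every name and parameter in it is illustrative.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # toy goal: evolve a bit string matching this
    MUTATION_RATE = 0.1                  # chance of flipping each bit

    def fitness(candidate):
        # Count the positions that already match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate):
        # Apply small random mutations, as in the early machine-evolution experiments.
        return [1 - bit if random.random() < MUTATION_RATE else bit for bit in candidate]

    def evolve(generations=1000):
        best = [random.randint(0, 1) for _ in TARGET]   # random starting individual
        for _ in range(generations):
            child = mutate(best)
            # Selection: keep the mutant only if it looks at least as good.
            if fitness(child) >= fitness(best):
                best = child
        return best

    print(evolve())   # usually converges to TARGET on a problem this small

On a task this tiny the strategy works; the historical difficulty was that the same mutate-and-select loop made almost no progress when the thing being mutated was a raw machine-code program.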

Failure to come to grips with the "combinatorial explosion" was one of the main criticisms of AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision by the British government to end support for AI research in all but two universities. (Oral tradition paints a somewhat different and more colourful picture, with political ambitions and personal animosities whose description is beside the point.)

A third difficulty arose because of some fundamental limitations on the basic structures being used to generate intelligent behavior. For example, Minsky and Papert's book Perceptrons (1969) proved that, although perceptrons (a simple form of neural network) could be shown to learn anything they were capable of representing, they could represent very little. In particular, a two-input perceptron could not be trained to recognize when its two inputs were different.

Although their results did not apply to more complex, multilayer networks, research funding for neural-net research soon dwindled to almost nothing. Ironically, the new back-propagation learning algorithms for multilayer networks that were to cause an enormous resurgence in neural-net research in the late 1980s were actually discovered first in 1969 (Bryson and Ho, 1969).
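As an illustration of the limitation Minsky and Papert identified (a toy sketch, not their proof), the following Python fragment trains a single two-input perceptron on the "inputs differ" (XOR) function using the classic perceptron update rule. Because no straight line separates the two classes, no choice of weights classifies all four cases correctly, however long the training runs.

    # XOR: output 1 exactly when the two inputs differ.
    DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def predict(w, b, x):
        # Linear threshold unit: fire if the weighted sum exceeds zero.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    def train(epochs=100, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in DATA:
                error = target - predict(w, b, x)
                # Classic perceptron update rule.
                w[0] += lr * error * x[0]
                w[1] += lr * error * x[1]
                b += lr * error
        return w, b

    w, b = train()
    correct = sum(predict(w, b, x) == t for x, t in DATA)
    print(f"correct on XOR: {correct}/4")   # never 4/4, because XOR is not linearly separable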
