Sunday 14 June 2015

AI Knowledge Based Systems

Knowledge-based systems: The key to power? (1969-1979)

The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose search mechanism trying to string together elementary reasoning steps to find complete solutions. Such approaches have been called weak methods because, although general, they do not scale up to large or difficult problem instances.

The alternative to weak methods is to use more powerful, domain-specific knowledge that allows larger reasoning steps and can more easily handle typically occurring cases in narrow areas of expertise. One might say that to solve a hard problem, you have to almost know the answer already.

The DENDRAL program (Buchanan et al., 1969) was an early example of this approach.
It was developed at Stanford, where Ed Feigenbaum (a former student of Herbert Simon),
Bruce Buchanan (a philosopher turned computer scientist), and Joshua Lederberg (a Nobel
laureate geneticist) teamed up to solve the problem of inferring molecular structure from the
information provided by a mass spectrometer. The input to the program consists of the elementary formula of the molecule (e.g., C6H13NO2) and the mass spectrum giving the masses
of the various fragments of the molecule generated when it is bombarded by an electron beam.

For example, the mass spectrum might contain a peak at m = 15, corresponding to the mass of a methyl (CH3) fragment.
The naive version of the program generated all possible structures consistent with the formula, and then predicted what mass spectrum would be observed for each, comparing this with the actual spectrum. As one might expect, this is intractable for decent-sized molecules. The DENDRAL researchers consulted analytical chemists and found that they worked by looking for well-known patterns of peaks in the spectrum that suggested common substructures in the molecule. For example, the following rule is used to recognize a ketone (C=O) subgroup (which weighs 28):
if there are two peaks at x1 and x2 such that
(a) x1 + x2 = M + 28 (M is the mass of the whole molecule);
(b) x1 - 28 is a high peak;
(c) x2 - 28 is a high peak;
(d) at least one of x1 and x2 is high
then there is a ketone subgroup
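A rule of this kind amounts to a simple peak-matching predicate over the spectrum. The sketch below is hypothetical Python (DENDRAL itself was written in Lisp); the intensity threshold and function name are illustrative assumptions, not part of the original system:

```python
def detect_ketone(peaks, M, high=10.0):
    """Apply the ketone (C=O, mass 28) rule to a mass spectrum.

    peaks: dict mapping fragment mass -> peak intensity.
    M: mass of the whole molecule.
    high: illustrative intensity threshold for calling a peak "high".
    """
    def is_high(mass):
        return peaks.get(mass, 0.0) >= high

    for x1 in peaks:
        x2 = M + 28 - x1                 # condition (a): x1 + x2 = M + 28
        if x2 not in peaks:
            continue
        if not (is_high(x1 - 28) and is_high(x2 - 28)):   # conditions (b), (c)
            continue
        if is_high(x1) or is_high(x2):   # condition (d)
            return True
    return False
```

For instance, with M = 100 the rule looks for peak pairs summing to 128 whose companions 28 units lower are both prominent.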
Recognizing that the molecule contains a particular substructure reduces the number of possible candidates enormously. DENDRAL was powerful because
All the relevant theoretical knowledge to solve these problems has been mapped over from its general form in the [spectrum prediction component] ("first principles") to efficient special forms ("cookbook recipes"). (Feigenbaum et al., 1971)
The significance of DENDRAL was that it was the first successful knowledge-intensive system: its expertise derived from large numbers of special-purpose rules. Later systems also incorporated the main theme of McCarthy's Advice Taker approach: the clean separation of the knowledge (in the form of rules) from the reasoning component.
With this lesson in mind, Feigenbaum and others at Stanford began the Heuristic Programming Project (HPP) to investigate the extent to which the new methodology of expert systems could be applied to other areas of human expertise. The next major effort was in the area of medical diagnosis. Feigenbaum, Buchanan, and Dr. Edward Shortliffe developed MYCIN to diagnose blood infections. With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors. It also contained two
major differences from DENDRAL. First, unlike the DENDRAL rules, no general theoretical model existed from which the MYCIN rules could be deduced. They had to be acquired from extensive interviewing of experts, who in turn acquired them from textbooks, other experts, and direct experience of cases. Second, the rules had to reflect the uncertainty associated with medical knowledge. MYCIN incorporated a calculus of uncertainty called certainty factors, which seemed (at the time) to fit well with how doctors assessed the impact of evidence on the diagnosis.
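A certainty factor is a number in [-1, 1], and when two rules bear on the same hypothesis their factors are merged with a fixed combination function. The following is a minimal Python sketch of MYCIN's standard combination rule (the function name is mine, not from the original system):

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis,
    using the MYCIN combination function: confirming evidence pushes the
    total toward +1, disconfirming toward -1, and mixed evidence is
    normalized by the weaker factor."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both confirm
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)          # both disconfirm
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting
```

For example, two confirming rules with factors 0.6 and 0.4 combine to 0.76, stronger than either alone but still short of certainty.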
The importance of domain knowledge was also apparent in the area of understanding
natural language. Although Winograd's SHRDLU  system for understanding natural language
had engendered a good deal of excitement, its dependence on syntactic analysis caused some of the same problems as occurred in the early machine translation work. It was able to overcome ambiguity and understand pronoun references, but this was mainly because it was designed specifically for one area: the blocks world. Several researchers, including Eugene
Charniak, a fellow graduate student of Winograd's at MIT, suggested that robust language
understanding would require general knowledge about the world and a general method for
using that knowledge.

At Yale, the linguist-turned-AI-researcher Roger Schank emphasized this point, claiming, "There is no such thing as syntax," which upset a lot of linguists, but did serve to start a
useful discussion. Schank and his students built a series of programs (Schank and Abelson,
1977; Wilensky, 1978; Schank and Riesbeck, 1981; Dyer, 1983) that all had the task of understanding natural language. The emphasis, however, was less on language per se and more on the problems of representing and reasoning with the knowledge required for language understanding. The problems included representing stereotypical situations (Cullingford, 1981),
describing human memory organization (Rieger, 1976; Kolodner, 1983), and understanding
plans and goals (Wilensky, 1983).

The widespread growth of applications to real-world problems caused a concurrent increase in the demands for workable knowledge representation schemes. A large number of different representation and reasoning languages were developed. Some were based on logic: for example, the Prolog language became popular in Europe, and the PLANNER family in the United States. Others, following Minsky's idea of frames (1975), adopted a more
structured approach, assembling facts about particular object and event types and arranging
the types into a large taxonomic hierarchy analogous to a biological taxonomy.
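The frame idea can be illustrated with a toy sketch: a type carries named slots, and a slot not filled locally is looked up through the taxonomic hierarchy. This is an illustrative Python miniature, not a reconstruction of any actual 1970s frame language:

```python
class Frame:
    """Minimal frame: named slots plus a parent link, so unknown slots
    are inherited down the taxonomic hierarchy (a toy sketch of
    Minsky-style frames)."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look up a slot locally, then climb the taxonomy.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A tiny taxonomy: specific frames override inherited defaults.
animal = Frame("Animal", legs=4, alive=True)
bird = Frame("Bird", parent=animal, legs=2, can_fly=True)
penguin = Frame("Penguin", parent=bird, can_fly=False)
```

Here `penguin` inherits `legs=2` from Bird and `alive=True` from Animal, while overriding `can_fly` locally, which is the kind of default-with-exceptions reasoning frames were meant to support.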
