Saturday 13 June 2015

Control theory, Cybernetics & Linguistics

 How can artifacts operate under their own control?

Ktesibios of Alexandria built the first self-controlling machine: a water clock with a regulator that kept the flow of water running through it at a constant, predictable pace. This invention changed the definition of what an artifact could do. Previously, only living things could modify their behavior in response to changes in the environment. Other examples of self-regulating feedback control systems include the steam engine governor, created by James Watt (1736-1819),
[Image: first steam engine governor]

and the thermostat, invented by Cornelis Drebbel (1572-1633), who also invented the submarine. The mathematical theory of stable feedback systems was developed in the 19th century.
The central figure in the creation of what is now called control theory was Norbert Wiener (1894-1964). Wiener was a brilliant mathematician who worked with Bertrand Russell, among others, before developing an interest in biological and mechanical control systems and their connection to cognition. Like Craik (who also used control systems as psychological models), Wiener and his colleagues Arturo Rosenblueth and Julian Bigelow challenged the behaviorist orthodoxy (Rosenblueth et al., 1943). They viewed purposive behavior as arising from a regulatory mechanism trying to minimize "error": the difference between the current state and the goal state. In the late 1940s, Wiener, along with Warren McCulloch, Walter Pitts, and John von Neumann, organized a series of conferences that explored the new mathematical and computational models of cognition and influenced many other researchers in the behavioral sciences. Wiener's book Cybernetics (1948) became a bestseller and awoke the public to the possibility of artificially intelligent machines.
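The cybernetic idea of purposive behavior as error minimization can be sketched as a simple proportional feedback loop. The thermostat model below is purely illustrative: the gain, heat-leak constant, and ambient temperature are assumed values, not taken from any historical device.

```python
# A proportional feedback controller in the cybernetic spirit:
# at each step it measures the "error" (goal state minus current state)
# and applies a corrective action proportional to that error.
# All constants here are arbitrary illustrative assumptions.

def regulate(temp, goal, gain=0.5, leak=0.1, ambient=10.0, steps=50):
    """Drive `temp` toward `goal` by applying heat proportional to error."""
    for _ in range(steps):
        error = goal - temp              # difference between goal and current state
        heat = gain * error              # purposive action: respond in proportion to error
        temp += heat - leak * (temp - ambient)  # update, with heat loss to ambient air
    return temp

final = regulate(temp=15.0, goal=20.0)
```

Note that pure proportional control settles slightly below the goal whenever heat continually leaks away; removing this steady-state offset is what the integral term in later PID controllers addresses.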

Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally. Why, then, are AI and control theory two different fields, especially given the close connections among their founders? The answer lies in the close coupling between the mathematical techniques that were familiar to the participants and the corresponding sets of problems that were encompassed in each world view. Calculus and matrix algebra, the tools of control theory, lend themselves to systems that are describable by fixed sets of continuous variables; furthermore, exact analysis is typically feasible only for linear systems. AI was founded in part as a way to escape from the limitations of the mathematics of control theory in the 1950s. The tools of logical inference and computation allowed AI researchers to consider problems such as language, vision, and planning that fell completely outside the control theorist's purview.


Linguistics

How does language relate to thought?

In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the behaviorist approach to language learning, written by the foremost expert in the field. But curiously, a review of the book became as well known as the book itself, and served to almost kill off interest in behaviorism. The author of the review was Noam Chomsky, who had just published a book on his own theory, Syntactic Structures. Chomsky showed how the behaviorist theory did not address the notion of creativity in language: it did not explain how a child could understand and make up sentences that he or she had never heard before. Chomsky's theory, based on syntactic models going back to the Indian linguist Panini (c. 350 B.C.), could explain this, and unlike previous theories, it was formal enough that it could in principle be programmed.
Modern linguistics and AI, then, were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences. This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.
