Cybernetics. Information Theory. Artificial Intelligence
An important part of cybernetics commonly is called information theory, although arguably more appropriately termed communication theory. It depends on the fact that information, in a certain sense of the word, can be measured and expressed in units. The amount of such information in a message is a measure of the difficulty of transmitting it from place to place or of storing it, not a measure of its significance.
The unit of information is the "bit," the word being derived from "binary digit." It is the amount of information required to indicate a choice between two possibilities that previously were equally probable. The capacity of a communication channel can be expressed in bits per second. The principal author of the modern theory is Claude Shannon, although a similar measure of information was introduced by R. V. L. Hartley as early as 1928. The later work greatly extends the theory, particularly in taking account of noise.
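As a sketch of this quantitative definition: the information conveyed by a choice among N equally likely alternatives is log2(N) bits, so a single binary choice carries exactly one bit. The helper name below is illustrative only.

```python
import math

# Information needed to select one of N equally likely alternatives:
# H = log2(N) bits.  One binary choice (N = 2) carries exactly 1 bit.
def bits_for_choice(n_alternatives):
    return math.log2(n_alternatives)

print(bits_for_choice(2))   # 1.0 bit: the defining case
print(bits_for_choice(8))   # 3.0 bits: three yes/no questions suffice
```

Equivalently, identifying one of eight equally probable messages takes three successive binary choices, which is why channel capacity is naturally stated in bits per second.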
In its simple form, with reference to discrete choices, the theory accounts nicely for some biological phenomena, for instance, the reaction times of subjects in multiple-choice experiments. It is extended to apply to continuous signals and to take account of corruption by random disturbances or "noise." The mathematical expressions then correspond, with a reversal of sign, to those for the evaluation of entropy in thermodynamics. The theory applies to the detection of signals in noise and, therefore, to perception generally, and one notable treatment deals with its application to optimal recovery and detection of radar echoes in noise.
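The discrete measure extends directly to unequal probabilities. The function below is a minimal sketch of the Shannon entropy H = -Σ p·log2(p), the expression that corresponds (with the sign reversed) to thermodynamic entropy; the example distributions are invented for illustration.

```python
import math

# Shannon entropy of a discrete probability distribution:
# H = -sum(p * log2(p)), in bits per symbol.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair binary choice
print(entropy([0.9, 0.1]))   # ~0.469 bits: a biased choice carries less information
```

The biased case shows why predictable messages are cheap to transmit: the nearer a choice is to certainty, the fewer bits it contributes.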
The effects of noise often can be overcome by exploiting redundancy, which is information (in the special quantitative sense) additional to that needed to convey the message in the absence of noise. Communication in natural language, whether spoken or written, has considerable redundancy, and meaning usually can be guessed with a fair degree of confidence when a substantial number of letters, syllables, or words effectively are lost because of noise, interference, and distortion.
Much attention has been given to error-detecting and error-correcting codes, which allow the introduction of redundancy in particularly effective ways. One theorem of information theory gives the necessary capacity of an auxiliary channel to allow the correction of a corrupted message and corresponds to Ashby's principle of requisite variety, which has found important application in management.
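The principle can be illustrated with the simplest possible redundancy scheme, a triple-repetition code with majority-vote decoding. This is a sketch only; practical error-correcting codes, such as Hamming codes, introduce redundancy far more efficiently.

```python
# Triple-repetition code: each bit is sent three times, and a majority
# vote at the receiver corrects any single corrupted copy per triple.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1]
sent = encode(msg)      # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1             # noise flips one copy of the middle bit
print(decode(sent))     # [1, 0, 1] -- the error is corrected
```

The cost is a threefold expansion of the message, which is exactly the trade the coding theorems sharpen: how little redundancy suffices for a given noise level.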
Artificial Intelligence. In the attempt to understand the working of the brain in mechanistic terms, many models were made of some aspect of its working, usually the learning of a particular task. Often the task was a form of pattern classification, such as recognition of hand-blocked characters. An early assumption was that an intelligent artifact should model the nervous system and should consist of many relatively simple interacting units.
Variations on such a scheme, exemplified by the "perceptron" devised by Frank Rosenblatt, could learn pattern classification, but only of a simple kind and without significant learned generalization. The outcomes of these early attempts to achieve "artificial intelligence" were not impressive, and at a conference in 1956 the term "Artificial Intelligence" (with capitals), or AI, was given a rather different meaning.
The aim of the new AI was to use the full power of computers, without restriction to a neural net or other prescribed architecture, to model human capability in areas that are accepted readily as demonstrating "intelligence." The main areas that have received attention are as follows:
Theorem Proving. The automatic proving of mathematical theorems has received much attention, and the search methods developed have been applied in other areas, such as path planning for robots. They also are the basis of ways of programming computers declaratively, notably using the language PROLOG, rather than procedurally. In declarative programming the required task is presented, in effect, as a mathematical theorem to be proved; in some application areas this allows much faster program development than specifying procedures manually.
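The declarative flavour can be suggested in a few lines. The sketch below states family facts and an ancestor rule in the PROLOG manner and answers queries by backward chaining; the names and relations are invented for illustration, and a real PROLOG system would supply the search procedure itself.

```python
# Facts, stated declaratively: parent(alice, bob). parent(bob, carol).
parent = {("alice", "bob"), ("bob", "carol")}

def is_ancestor(x, y):
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    if (x, y) in parent:
        return True
    # Rule 2: ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
    return any(is_ancestor(z, y) for (p, z) in parent if p == x)

print(is_ancestor("alice", "carol"))   # True, via the chain alice -> bob -> carol
print(is_ancestor("carol", "alice"))   # False
```

The programmer states only the relations; the proof search, here hand-coded, is exactly what the language's inference engine provides for free.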
Game Playing. Chess has been seen as a classical challenge, and computers now can compete at an extremely high level: in 1997 IBM's Deep Blue defeated the reigning world champion, Garry Kasparov. Important pioneering work was done using the game of checkers (or "draughts").
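The procedure underlying such programs is minimax search, sketched here on a tiny explicit game tree; real chess programs add alpha-beta pruning and heuristic evaluation of unfinished positions, and the tree below is invented for illustration.

```python
# Minimax: the mover picks the branch maximizing the score, assuming the
# opponent will reply so as to minimize it.  Leaves are terminal scores
# from the maximizing player's point of view.
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: a terminal score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree, True))             # 3: the best guaranteed outcome
```

Note that the first branch is chosen even though the third contains the largest leaf (14): minimax guards against the opponent's best reply, not the most hopeful one.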
Pattern Recognition. Pattern recognition can refer to visual or auditory patterns; or patterns in other or mixed modalities; or in no particular modality, as when used to look for patterns in medical or weather data. Attention also has been given to the analysis of complete visual scenes, which presents special difficulty because, among other reasons, objects can have various orientations and can obscure each other partially. Scene analysis is necessary for advanced developments in robotics.
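One of the simplest classification methods, nearest-neighbour matching, illustrates the basic idea: an unknown item receives the label of the closest stored example. The two-feature training data below are invented for illustration.

```python
import math

# Labelled examples: (feature vector, class label).
examples = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
            ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def classify(point):
    # Assign the label of the nearest stored example (Euclidean distance).
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(examples, key=lambda ex: dist(point, ex[0]))[1]

print(classify((1.1, 0.9)))   # "A"
print(classify((4.1, 3.9)))   # "B"
```

The difficulty noted above for scene analysis is precisely that objects in varied orientations, partially obscuring one another, do not present themselves as tidy fixed-length feature vectors like these.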
Use of Natural Language. Question-answering systems and mechanical translation have received attention, and practical systems for both have been implemented but leave much to be desired. Early optimistic predictions of computer performance in this area have not been fulfilled, largely because the "understanding" of text depends on semantic as well as syntactic features and, therefore, on the huge amount of knowledge of the world that is accumulated by a person.
Robotics. Robotics has great practical importance in, for example, space research, undersea exploration, bomb disposal, and manufacturing. Many of its challenges are associated with processing sensory data, including video images, so as to navigate, recognize, and manipulate objects with dexterity and energy efficiency. Apart from their immediate use, these developments can be expected to throw light on corresponding biological mechanisms.
Bipedal locomotion has been achieved only with great difficulty, which shows the complexity of the biological control of posture and balance. For practical mobile robots, wheeled or tracked locomotion is used instead. A topic area associated with advanced robotics projects is that of virtual reality, where a person is given sensory input and interactions that simulate a nonexistent environment. Flight simulators for pilot training were an early example, and computer games implement the effect to varying degrees.
Expert Systems. This term has been used to refer to systems that explicitly model the responses of a human "domain expert," either by questioning the expert about his or her methods or by deriving rules from examples set to the expert. A favourite application area has been medical diagnosis in various specializations, both for direct use and for training students.
The general method has been applied to a very wide range of tasks in which human judgement is superior to any known analytic approach. Under the general heading of diagnosis, this range of topics includes fault finding in computers and other complex machinery or in organizations. Other applications are made to business decisions and military strategy.
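The rule-based core of many expert systems can be sketched as forward chaining: rules fire whenever their conditions are among the known facts, adding their conclusions, until nothing new can be derived. The fault-finding rules below are invented for illustration.

```python
# Each rule: (set of condition facts, conclusion fact).
rules = [
    ({"no_power_light", "plugged_in"}, "suspect_power_supply"),
    ({"suspect_power_supply", "fuse_ok"}, "replace_power_supply"),
]

def forward_chain(facts):
    # Repeatedly fire any rule whose conditions all hold, until quiescent.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"no_power_light", "plugged_in", "fuse_ok"}))
```

Real systems add certainty factors and an explanation facility (the chain of fired rules doubles as the system's justification of its advice), but the derivation loop is essentially this one.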
A great deal has been achieved under the heading of AI. It has underlined the importance of heuristics, or rules that do not always "work" (in the sense of leading directly to a solution of a problem). Heuristics indicate where it may be useful to look for solutions and are certainly a feature of human, as well as machine, problem-solving.
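A heuristic of this kind can be sketched with greedy best-first search on a small grid, using the Manhattan distance to the goal as the guiding rule: it does not guarantee the shortest route, but it says where to look first. The grid size and obstacle are invented for illustration.

```python
import heapq

# Greedy best-first search on a 5x5 grid.  Cells are expanded in order
# of the heuristic h = Manhattan distance to the goal; returns the
# number of expansions made before reaching the goal, or None.
def best_first(start, goal, blocked):
    frontier = [(abs(start[0] - goal[0]) + abs(start[1] - goal[1]), start)]
    seen = {start}
    steps = 0
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return steps
        steps += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < 5 and 0 <= ny < 5 and (nx, ny) not in blocked \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                h = abs(nx - goal[0]) + abs(ny - goal[1])
                heapq.heappush(frontier, (h, (nx, ny)))
    return None

print(best_first((0, 0), (4, 4), blocked={(2, 2)}) is not None)   # True
```

Because the heuristic merely ranks candidates, a misleading estimate costs extra search rather than correctness, which is the sense in which heuristics "do not always work" yet remain useful.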
In this and other ways, studies of AI have contributed to the understanding of intelligence, not least by recognizing the complexity of many of the tasks studied. Apart from this, the influence of AI studies on computer programming practice has been profound; for example, the use of "list-processing," which has sometimes been seen as peculiar to AI programs, is used widely in compilers and operating systems.
Nevertheless, progress in AI is widely felt to have been disappointing. Mathematical theorem-proving and chess playing are forms of intellectual activity that people find difficult, and AI studies have produced machines proficient in them but unable to perform ordinary tasks like going round a house and emptying the ashtrays.
Recognizing chairs, tables, ashtrays, and so forth in their almost infinite variety of shapes and colors is hard because it is hard to define these objects in a way that is "understandable" to a robot, and because further problems arise in manipulation, trajectory planning, and balance. If the evolution of machine intelligence is to correspond to that of natural intelligence, then what are seen as low-level manifestations should appear first. The ultimate possibilities for machine intelligence were discussed comprehensively by Turing and, more recently and more sceptically, by Searle in his parable of the "Chinese Room," in which an operator manipulates symbols without understanding them.
In relatively recent decades interest has revived in artificial neural nets (ANNs). This revival is attributable partly to advances in computer technology that make feasible the representation and manipulation of large nets, but a more significant factor is the invention of useful ways of implementing learning in ANNs.
The most powerful of these ways is "backpropagation," which depends on additional information pathways in the net, conducting in the opposite direction to those serving its primary function. Some applications are of a "control" or continuous-variable kind, where the net provides a means of learning the continuous relation between a number of variables, one of them a desired output that the net learns to compute from the others. Other application areas have an entirely different nature and include linguistics.
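A minimal sketch of the technique, assuming a two-input, two-hidden-unit network trained on the XOR task (which a single-layer perceptron cannot learn); the network size, learning rate, and epoch count are all illustrative.

```python
import math
import random

# A tiny two-layer network trained by backpropagation on XOR.  The error
# computed at the output is passed backwards to adjust the hidden-layer
# weights -- the extra, reverse-direction information pathway described above.
random.seed(0)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# w[i][j]: input i -> hidden j; bh: hidden biases; v: hidden -> output; bo: output bias
w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
bh = [random.uniform(-1, 1) for _ in range(2)]
v = [random.uniform(-1, 1) for _ in range(2)]
bo = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sig(x[0] * w[0][j] + x[1] * w[1][j] + bh[j]) for j in range(2)]
    o = sig(sum(v[j] * h[j] for j in range(2)) + bo)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                # output-layer delta
        for j in range(2):
            d_h = d_o * v[j] * h[j] * (1 - h[j])   # delta propagated backwards
            v[j] -= 0.5 * d_o * h[j]
            for i in range(2):
                w[i][j] -= 0.5 * d_h * x[i]
            bh[j] -= 0.5 * d_h
        bo -= 0.5 * d_o

print(total_error() < before)   # True: training reduced the error
```

The essential point is the line computing d_h: the output error, weighted by the forward connection v[j], flows back along a second pathway to tell each hidden unit its share of the blame.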
These relatively recent studies have been driven mainly by practical considerations, and the correspondence to biological processing often is controversial. The "backpropagation of error" algorithm, the basis of the majority of applications, has been argued to be unlikely to operate in biological processing. However, other forms of backpropagation probably do play a part, and biological considerations are invoked frequently in arguing the merits of schemes using ANNs.