[Given at a conference at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences in St. Petersburg, Russia, late May 1992] © 1992 Dennis R. Papazian

ARTIFICIAL INTELLIGENCE AND MACHINE TECHNOLOGY: MASTER OR SLAVE?

by Dennis R. Papazian, Ph.D.

The University of Michigan-Dearborn

While I am certainly not an expert in this field, the invitation to attend this conference inspired me to do a bit of reading on the topic and to reflect on it in terms of some of the work that has been done and a few of the social ramifications. I hope this essay, although not a scientific paper, will provide food for thought and reflection. I, in turn, expect to learn a great deal from the proceedings here.

The vision.

The rise of modern science has inspired in theorists the vision of machines which can finally free mankind from the drudgery of labor. Such machines to do mankind's work were dubbed robots, and science fiction books and movies, and even more serious works, are replete with imaginary examples of robots that are almost human, or indeed sometimes superhuman, in their behavior.

The rise of modern computer technology, in its turn, has made such visions seem more realistic and even attainable in our lifetime. Such expectations have encouraged some of the best minds of our era to work at producing thinking machines, first to be applied to industrial output (complex automation) and then even to replicate human intelligence itself.

Since even before the computer age, philosophers and then scientists have debated the question "How intelligent can machines be?"; the computer age now gives the question more direct import. What we currently see, interestingly enough, is that some of the issues that shaped the philosophical debate are the very ones which today confront those dealing with the various aspects of artificial intelligence--the meanings of knowledge, logic, reason, understanding, insight, perception, instinct, and simple common sense, and how to replicate them.

A fundamental question is: Can man hope (or fear) that he can create machines which will become more intelligent than he? The traditional answer of philosophy is that machines cannot be more intelligent than people simply because man is the creator and the machine the created. Philosophers supported this view with the proposition that only humans have "original intent" while machines can have only "derived intent." Only time will settle this question; but, one hopes, man will remain the judge. One thing is now clear: in performing specific and limited tasks, present machines are--even now--more dependable than most people; yet in dealing with complex matters, these machines can seem rather stupid and inept.

What can machines do?

Modern machines, in effect computers, certainly have done remarkable work in the design and manipulation of inanimate objects (what we call engineering and manufacturing), while they have been less helpful in the definition, reconciliation, and application of goals (what we generally call management or governance).

A desire to rationalize manufacturing should be analyzed in three ways. First, to what extent is it reasonable, considering the current level of science and computer technology, to use robots in production? In other words, what level and type of automation is it reasonable to attempt presently when we consider a cost/benefit ratio of inputs to outputs? We have only to recall the momentous failure of General Motors when it tried to introduce robots on the assembly line, at astounding expense, only to replace them shortly thereafter with more conventional automated machinery. Second, considering the current level of science and computer technology, what are the theoretical limits of the use of machines in production? In essence, what should be our goals and aspirations? Where should we currently invest our limited resources of time, materials, and money? Third, what are the prospects for an advance in technology, a totally new breakthrough, which can make possible things now impossible? Should engineers wait on certain applications until scientists have made qualitative progress, or should they attempt to do all things using current technology?

Machine intelligence can certainly be used currently for the rapid accomplishment of certain rather complex customized tasks, including physical and electronic designs, which would have been all but impossible for any human individual even a few years ago. These functions include a plethora of activities, from the use of supercomputers to handle the vast amounts of data needed to predict the weather, to the design of computer chips of the commodity or even the "designer" type.

It is much less certain that scientists and specialists can make machines which can replicate the function of the human mind, an ambition which seems only to recede as the current technology is applied to that elusive goal.

Finally, there is, as always, the concern for the social implications of all these endeavors. Will these efforts finally release mankind from the endless toil and drudgery of the centuries, or will they in some way limit man's options and make machines the masters and man the slave? Will machines make society richer yet impoverish individuals, or will every individual gain as society gains? These are important questions which must be answered.

Robots and machine intelligence.

The initial optimism of the past thirty years, when computers and electronics were sophisticated enough to make earlier predictions of working robots plausible, was dashed by the failure of scientists and engineers to replicate the vast complexities of the human mind and body. On the other hand, the attempts have at least defined the problems, organized them into areas of research, and have produced direct, limited, but important results.

The problem has since been divided. Machines are now expected to do only more specifically defined and highly specialized work. The work may be complex, but it is limited in scope. New jobs require substantial modification and reprogramming of equipment, if it can be modified at all. The question of "intelligence" has been separated from the physical equipment and has for the present become computer-based.

Furthermore, there is a difference between a machine doing the work of an intelligent person and true intelligence. Computers can now defeat all but grandmasters at chess, they can do your income tax from questions you answer, and they can deliver your mail within a building. These specialized programs and machines, however, cannot at the same time deal with life processes. The mail delivery "robot" would continue on its rounds even if an accidental release of poison gas emptied the building of all its occupants. This circumscribed "intelligence" of the robot, consequently, has been dubbed "artificial intelligence," or "specialized intelligence," to distinguish it from true human or "life intelligence," much less "creative intelligence."

Those who have been at the forefront of creating machine intelligence have observed "how easy the 'hard' things were to do, and how hard the 'easy' things." The application of a machine to certain complex tasks, often beyond any human's ability to match, can be contrasted with the inability of a machine to comprehend a nursery rhyme or to leave a building in case of fire.

AI: Schools of thought.

One of the first groups which dealt with machine intelligence in replicating the activities of the brain took its inspiration from psychology. Herbert Simon and Allen Newell of Carnegie Mellon University produced in the 1950s a "General Problem Solver" which was based on their perception of how humans think, ideas gained through a study of psychology and human intuition. The program they produced was quite undependable: it produced stupid conclusions as readily as brilliant ones. Much work needs to be done if such systems are to be allowed to apply their conclusions without being constantly checked, of necessity, by the human mind.
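The flavor of their approach--means-ends analysis, in which the program repeatedly picks a difference between its current situation and its goal and applies an operator that reduces it--can be suggested by a minimal sketch. The states and operators below are hypothetical illustrations, not drawn from Simon and Newell's actual program, and the state bookkeeping is deliberately simplified.

```python
# Minimal sketch of means-ends analysis, the idea behind the General
# Problem Solver. States are sets of facts; each operator has
# preconditions it requires and effects it adds or deletes. The solver
# picks an unmet goal fact and applies an operator that supplies it,
# first recursing to achieve that operator's preconditions.

OPERATORS = [
    {"name": "walk-to-door", "pre": set(),         "add": {"at-door"},   "delete": set()},
    {"name": "open-door",    "pre": {"at-door"},   "add": {"door-open"}, "delete": set()},
    {"name": "leave-room",   "pre": {"door-open"}, "add": {"outside"},   "delete": set()},
]

def solve(state, goal, depth=10):
    """Return a list of operator names transforming state into goal."""
    if goal <= state:
        return []                        # no difference left: success
    if depth == 0:
        return None                      # give up on overly deep searches
    for fact in goal - state:            # pick an unmet goal fact
        for op in OPERATORS:
            if fact in op["add"]:        # this operator could supply it
                plan = solve(state, op["pre"], depth - 1)
                if plan is not None:     # preconditions achievable
                    # Simplification: assume the sub-plan achieved op["pre"].
                    new_state = (state | op["pre"] | op["add"]) - op["delete"]
                    rest = solve(new_state, goal, depth - 1)
                    if rest is not None:
                        return plan + [op["name"]] + rest
    return None

print(solve(set(), {"outside"}))  # -> ['walk-to-door', 'open-door', 'leave-room']
```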

Another approach is represented by the "logicists," those who do not try to deal with psychology but rather limit themselves to the use of traditional, classical logic. Conventional logic falls apart in the superstructure, the line of reasoning, if any of the assumptions on which it is based turn out to be false. This problem, which hindered many efforts at reaching proper conclusions, inspired the development of a new, "non-monotonic" logic. It seeks to discover whether there is a consistent way for a line of reasoning to be adjusted, perhaps by retracting earlier conclusions, so as to reestablish a cogent argument.
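The essential behavior--a conclusion withdrawn, not merely contradicted, when a new fact undermines it--can be shown in a small sketch. The bird-and-penguin example is a textbook standard, not drawn from this essay.

```python
# Minimal sketch of non-monotonic (default) reasoning. A default rule
# licenses a conclusion only in the absence of contrary evidence, so
# adding a fact can cause a previously held belief to be retracted.

def conclusions(facts):
    """Recompute beliefs from scratch whenever the fact base changes."""
    beliefs = set(facts)
    if "bird" in beliefs and "penguin" not in beliefs:
        beliefs.add("can-fly")           # default: birds fly, absent contrary evidence
    return beliefs

print(conclusions({"bird"}))             # {'bird', 'can-fly'}
print(conclusions({"bird", "penguin"}))  # {'bird', 'penguin'} -- 'can-fly' retracted
```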

Presently, there are those who may be called the "structuralists," who eschew conventional silicon chips and try to replicate the information-processing style of the brain's synapses with specially designed chips. The "connectionists," a sub-family of the "structuralists," are attempting to build interconnected networks of artificial "neurons," on silicon or even with organic materials. While this process has not developed very far, it has intriguing implications. The use of multiple and parallel processors, which has recently come of age, has furthered these investigations.
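The building block of such networks, the artificial "neuron," is simple enough to sketch in software: a weighted sum of inputs passed through a smooth threshold. The weights below are hand-picked for illustration; in a real network they would be learned.

```python
# Minimal sketch of one connectionist "neuron": it weighs its inputs,
# adds a bias, and squashes the total through a sigmoid so the output
# lies between 0 and 1. Networks of such units, with learned weights,
# are what the connectionists pursue in hardware.

import math

def neuron(inputs, weights, bias):
    """One artificial neuron with a smooth (sigmoid) activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A two-input unit weighted to behave roughly like a logical AND:
print(neuron([1, 1], [10, 10], -15))  # near 1.0: both inputs present
print(neuron([1, 0], [10, 10], -15))  # near 0.0: only one input present
```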

AI: Expert Systems.

While earlier enthusiasts hoped to find a general theory of intelligence, today most artificial intelligence (AI) researchers are producing "toolboxes" of problem solving techniques to deal with more discrete problems.

Computers programmed to capture human expertise, to replicate logic and experience, also have significant but limited use. A program to prescribe medicine for bacterial infection may do so better than most physicians, but it cannot distinguish between an infected woman and one in the pains of childbirth. The answer at present, of course, is to have humans work closely with machines to take advantage of the best elements of both: the machine's logic and memory, and the person's "common sense." Rather than replacing people, these expert systems make people of modest intelligence equal, in certain tasks, to those who are brilliant and have vast experience. Thus we see here the seeds of profound changes in the way people work and the potential benefit for society.
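The medical example recalls rule-based systems of the MYCIN family. Their core mechanism, forward chaining over if-then rules, can be suggested in a few lines; the rules below are hypothetical stand-ins, not clinical advice.

```python
# Minimal sketch of a forward-chaining expert system: a rule fires
# whenever all its conditions are present among the known facts,
# adding its conclusion, until no rule can add anything new.

RULES = [
    ({"fever", "elevated-white-count"}, "suspect-bacterial-infection"),
    ({"suspect-bacterial-infection", "no-penicillin-allergy"}, "consider-penicillin"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                        # keep passing over the rules
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # rule fires: record its conclusion
                changed = True
    return facts

print(forward_chain({"fever", "elevated-white-count", "no-penicillin-allergy"}))
```

Note how literal the machinery is: the system knows nothing about medicine beyond the strings in its rules, which is precisely the limitation the essay describes.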

Such successes, however, may conceal the fact that machine intelligence is not transferable. If a program is devised to maximize investments in the stock market by the manipulation of futures, it cannot ipso facto be "intelligent" enough to solve problems of another sort. Unlike human intelligence and learning, machine intelligence cannot be easily transferred to new and unexpected tasks.

There have been, however, some unexpected and beneficial fall-outs from some of the major general projects, funded by government and industry, which have failed. Technological breakthroughs, which were intended to be part of a larger system, have been applied to more mundane but still useful purposes. A program at the Massachusetts Institute of Technology (MIT), which used a supercomputer for computer-vision research, provided the technology which Matsushita uses to stabilize miniature video cameras, minimizing the shake when the cameras are held by hand. A simplified form of the MIT-developed technology has been compacted into a miniature chip and put to commercial use.

Another program, produced by Ascent Technology to help Delta Airlines allocate landing gates at busy airports, was used by the United States government to help plan the deployment of men and materials in the recent Gulf war.

Other projects which did not work as intended, for example computer vision, have been transferred to different technologies. While the first research on computer vision was done in terms of robot applications, it is now being separated from abstract computer reasoning and dealt with more in the realm of electronic signal processing.

Social implications.

The modern production line, which reached its apogee after the turn of the century, was a result of so-called "scientific management" in which the parts of the work were broken down into simple, repetitive steps that could be learned in a few minutes by almost any worker.

Next came "functional management," devised by Alfred Sloan, the architect of the modern General Motors, who separated white-collar workers into various classes according to function, an organization similar to that of the production workers.

These "innovations" made possible the industrial giants of the 20th century (the so-called monopolies and combines), the products of the "iron age" of mass production and mass bureaucracy. At the present time, however, with new computer-driven production technology, downsizing, and product variety, these old mass industries are obsolete. Their rigid and inert bureaucracies and their division between management and workers are no longer suitable to keep up with the rapid changes in products, design, and production brought on by the hi-tech era, but rather serve as a brake on change.

One of the new problems which arises as the responsibility of the individual worker or functionary broadens is that not all employees have the knowledge or skills to do their work properly, or the overview to do their work in harmony with others. It is in this regard that artificial intelligence (AI) comes into play. Even today in many American industries the office worker has at his side a computer with AI into which he feeds certain parameters and is given, in return, advice on how to proceed with his project.

It is an anomaly that the purpose of AI is to break down the human bureaucracy, and yet the expert system is in itself inflexible, being made up of thousands or tens of thousands of discrete logical propositions to serve the literal-mindedness of the computer. The amount of detail which is required to satisfy the needs of the computer has in turn produced a profusion of rules. One wonders if, after all this investment, companies can afford to change--in the long run--as much as they might like to do as times and products change.

One attempt to overcome this problem has been to link intuitively related ideas together into "frames," so that the operator can find answers without being too literal and precise. Other attempts are based on allowing expert systems to retract inferences made from their rules and to build new inferences by another route.
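A frame is essentially a structured record whose slots inherit default values from more general frames, so related ideas come linked together. A minimal sketch, with hypothetical frame names and slots:

```python
# Minimal sketch of "frames": each frame names a more general parent
# via its "is-a" slot, and a query climbs that chain to pick up
# inherited defaults, so the questioner need not spell everything out.

FRAMES = {
    "vehicle":    {"is-a": None,      "wheels": 4, "powered": True},
    "truck":      {"is-a": "vehicle", "cargo": True},
    "dump-truck": {"is-a": "truck",   "wheels": 6},
}

def slot(frame, name):
    """Look up a slot, climbing the is-a chain for inherited defaults."""
    while frame is not None:
        if name in FRAMES[frame]:
            return FRAMES[frame][name]
        frame = FRAMES[frame]["is-a"]
    return None

print(slot("dump-truck", "wheels"))   # 6, overridden locally
print(slot("dump-truck", "powered"))  # True, inherited from 'vehicle'
```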

To improve expert systems, one attempt is to use "fuzzy logic," expressing rules in terms of weighted logical probabilities. Another is to use statistical probability; still another, to solve problems by precedent. Finally, there is an attempt to seek solutions by analogy.
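The fuzzy approach can be sketched briefly: a rule holds to a degree between 0 and 1 rather than being simply true or false, and a combined conclusion carries the weight of its weakest premise. The membership functions below are hypothetical.

```python
# Minimal sketch of fuzzy logic: graded truth values instead of
# strict true/false, with the conventional fuzzy AND (the minimum).

def warm(temp_c):
    """Degree to which a temperature counts as 'warm' (0.0 to 1.0)."""
    return max(0.0, min(1.0, (temp_c - 15) / 10))

def humid(percent):
    """Degree to which a humidity reading counts as 'humid'."""
    return max(0.0, min(1.0, (percent - 50) / 30))

def uncomfortable(temp_c, percent):
    # "Warm AND humid" holds only as strongly as its weaker premise.
    return min(warm(temp_c), humid(percent))

print(uncomfortable(24, 80))  # 0.9: quite warm and fully humid
print(uncomfortable(17, 80))  # 0.2: limited by the mild temperature
```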

Yet, in the final analysis, the expert system is no more than a rule book, with exceptionally good cross-references based on ideas or key words, which can be quickly manipulated through electronics. The strength of such systems is brute force: the ability to search rapidly through an enormous number of possibilities--in a chess program, billions of possible sequences of moves--in order to find the best one.
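That brute-force search is clearest in its game-playing form. A minimal minimax sketch over a tiny hand-built tree (purely illustrative; real chess programs search vastly larger trees with many refinements):

```python
# Minimal sketch of minimax search: examine every sequence of moves
# down the tree and back up the value of the best line of play,
# assuming the opponent always answers with its own best move.

def minimax(node, maximizing=True):
    """Return the value of the best achievable outcome from node."""
    if isinstance(node, (int, float)):   # a leaf: a scored end position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Each list is a choice point; the numbers are end-position scores.
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree))  # 3: the best move, assuming the opponent replies well
```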

Any breakthrough in productivity for expert systems would require that machines be able to learn, that is, to teach themselves from available internal or shared information, whether through concepts, experience, or reason. Because of conflicting designs, little progress has been made in enabling computers to digest shared information.

The expert system is thus only a partial aid to a skilled individual who, hopefully, is freed from the drudgery of research to apply real human intelligence, common sense and creativity to his job.

A word of warning. Since parsing sentences by machine is an imperfect business, instead of teaching machines the full and varied meanings of words, people are driven to use only the words the machine can understand. What this might do to the human creative thought process is anybody's guess.

We are entering a new and unknown world. The introduction of movable-type printing prepared the way for cheap books, newspapers, and mass education. The industrial revolution provided relative plenty, although of little variety, for the masses. The electronic revolution brought the telephone, radio, television and private entertainment into the home. Modern digital technology has provided the basis for variety and new products in such plenty as totally unimagined in the past. What will the age of artificial intelligence bring? Decision-making will be both constrained and expanded by machine technology, with many broad implications.

The immediate challenge for those who want to support commercial applications of machine intelligence on a broader scale is to get computers to understand commands typed in plain, natural languages (English in my case), to read printed or written material and to comprehend it, to understand and to obey the spoken word, and to communicate verbally. The more intelligent the machine, the freer it will leave the human spirit; the more constricted, the more it will bend human intelligence to its own limitations. This is the type of revolution which should not be stopped in midstream. Either we go all the way or we suffer the possibility of a new Middle Ages.

Unfulfilled promises.

In the 1960s it was said that the mainframe computer would help productivity by centralizing decision making. It did not. It only gave upper-level management access to more and more information which had to be processed in order to be understood. It also gave power and control over information to those who managed the mainframes.

In the 1970s and early 1980s, it was said that the mini- and micro-computers would help productivity by decentralizing decision-making. They did not. The resulting fragmentation of information had to be overcome, in the later 1980s, through "networks." It remains to be seen what the effect of networks will be. They will certainly give a vast army of individuals access to information heretofore limited in its availability. Unfortunately, they also give network administrators the ability to monitor the work and activities of individuals as never before. Will all that information really improve decision-making and/or the design, production, and distribution cycle, or will it, in turn, apply new fetters to the human imagination?

Problems and prospects.

The problems we confront are huge and need our attention as we move into the brave new world. The new technology offers mankind a new material wealth heretofore unimagined. How will that wealth be shared between the masters of politics, capital, and technology and the working masses? Will the rich get richer and the poor get poorer, or can we devise new systems of sharing without injuring the human impulses of self-development, personal responsibility, and initiative by providing appropriate incentives?

The masters of the new technology will undoubtedly have higher intelligence and education than the masses. Will the gap between the educated and uneducated grow, or can we encourage every man to reach his own personal potential?

Will the inflexibility of present computers limit human responses? Will our own minds narrow to match the literal-mindedness of the computer, or can we overcome that possibility? Will we be able to master the databases or will they constrict our freedom? Anyone who has tried to have his credit rating corrected in the databases of credit-rating companies, such as TRW, understands this potential for abuse. How will privacy be protected, when our whole lives will be databased from the cradle to the grave?

Will we be able to devise new and flexible arrangements for the organization of labor, or will new computerized bureaucracies replace the old? Once databases become too extensive, they may become a superconservative brake on society: the cost of change may be too high to bear. Will we be able to devise new and flexible organizations of management, or will old ways of managing limit our progress into the future? The worst of all possible worlds, in my opinion, would be for mankind to stop in the middle of this most profound revolution. Either we ride it to the end, or its worst features will harness us in a way no previous society, no previous system, and no previous government has ever managed to do. Either there is competition, a built-in mechanism for change, or there may be established a great hindrance to a free, changing, and open society.

But the real problem we confront is orders of magnitude more challenging and difficult. Digital technology and downsizing have produced totally new products and have also vastly improved many older ones. We have now come to depend on xerography, cellular telephones, optical cable, computers, modems, telefaxes, transmission of sounds and visuals by satellite, and digital recording on tape and on diskette. The computer manipulation of sounds and visuals is a rapidly developing current technology. Electronic photography is upon us, and it will not be long before we have interactive (responsive) television and customized books printed directly from an assortment of written materials stored in digital form. Compact disks now place on your desk the amount of written material--encyclopedias, books, journals, and articles--that once covered whole walls of libraries.

Even now, in a more profound sense, television defines reality. It has become, in many societies, more real than life. That which is not seen on television is relegated, especially by younger people, to a less-than-full reality. As the electronic storage of text slowly replaces the printed book, all power will go to the databases. If television defines reality, certainly electronic storage of words (books, newspapers, and written records) will define its outer limits. Books and other printed material not in the databases will not exist for most people in the future.

The digital manipulation of sounds, pictures and the written (electronic) word will soon make it impossible for most people to know what is original and unaltered and what is artificial and contrived, or even forged. Photographs can be altered in totally convincing and undetectable ways. The words of television can be altered and replaced by dubbing, a system even now widely used. Taped conversations can be digitally altered or whole new conversations artificially created. Pictures can be created and destroyed, sounds manipulated, and certainly the written (electronic) word can be totally and fundamentally altered. Fundamental reality will fade away in a miasma of electronic impulses, all under the control of the computer masters. George Orwell's 1984 will be upon us in a decade or two.

Furthermore, networks and satellites can tie people together in all parts of the globe, but that only produces a false sense of community. We may be tied together with electronic strings, but we may remain far apart in terms of social cohesion and fundamental human ideology. Will human values be lost? What will become of such traditional abstractions as love, honor, morality, decency, pride, humility, sympathy, responsibility, community spirit, and so forth? These are values which arose in families, extended families, tribes, nations, and through religion and philosophy. How will they survive electronic miasmic reality? Master or slave? What will the new technology be for mankind?

* * *

For Further Reading

Books:

Karamjit S. Gill (ed.), Artificial Intelligence for Society, New York, Wiley, 1986.

Articles:

James Bailey, "First We Reshape Our Computers, Then Our Computers Reshape Us: The Broader Intellectual Impact of Parallelism," Daedalus, 121 (Winter 1992), pp. 67-86.

Margaret Boden, "Artificial Intelligence and Images of Man," Michigan Quarterly Review, 24 (Spring 1985), pp. 186-96.

Paul M. Churchland and Patricia Smith Churchland, "Could a Machine Think?," Scientific American, 262 (January 1990), pp. 32-7.

Jack Cushman, "Driverless Land Rovers, Co-Pilotless Planes," Electronic Business, 10 (February 1984), p. 50.

Peter J. Denning, "The Science of Computing: Modelling Reality," American Scientist, 78 (Nov/Dec 1990), pp. 495-8.

Gery D'Ydewalle and Patrick Delhaye, "Artificial Intelligence, Knowledge Extraction and the Study of Human Intelligence," International Social Science Journal, 40 (February 1988), pp. 63-72.

Mary Jo Foley, "AI: The Promise and the Reality," Electronic Business, 12 (February 1, 1986), p. 96.

Barry Fox, "The Human Factor," New Scientist, 125 (February 24, 1990), p. 36.

David H. Freedman, "Common Sense and the Computer," Discover, 11 (August 1990), pp. 64-71.

Jerome C. Glenn, "Conscious Technology: The Co-Evolution of Mind and Machine," The Futurist, 23 (Sept/Oct 1989), pp. 15-20.

Michael Gurstein, "Social Impacts of Selected Artificial Intelligence Applications: The Canadian Context," Futures, 17 (December 1985), pp. 652-71.

O.B. Hardison, "The Disappearance of Man," The Georgia Review, 42 (Winter 1988), pp. 679-713.

Donald M. MacKay, "Machines, Brains, and Persons," Zygon, 20 (December 1985), pp. 401-12.

Tim Maudlin, "Computation and Consciousness," The Journal of Philosophy, 86 (August 1989), pp. 407-32.

Belden Menkus, "Artificial Intelligence is not Real," Journal of Systems Management, 40 (November 1989), p. 6.

Philip R. Merrifield, "Artificial, Artifactual, and Actual Intelligence," Thought, 61 (December 1986), pp. 468-81.

John Seaman, "AI moves into the Mainstream," Computer Decisions, 17 (December 3, 1985), pp. 100-2.

Bohdan Szuprowicz, "The Dark Side of Artificial Intelligence," Planning Review, 13 (July 1985), p. 37.

Jane Morrill Tazelaar et al., "AI: Metamorphosis or Death?," Byte, 16 (January 1991), pp. 236-301.

Kalman A. Toth, "The Workless Society: How Machine Intelligence will bring Ease and Abundance," The Futurist, 24 (May/June 1990), pp. 32-7.

M. Mitchell Waldrop, "Fast, Cheap, and Out of Control," Science, 248 (May 25, 1990), pp. 959-61.

Susan Watts, "Expert Systems can seriously damage your health," New Scientist, 122 (May 20, 1989), p. 35.

Paul Weiss, "On the Impossibility of Artificial Intelligence," The Review of Metaphysics, 44 (December 1990), pp. 334-41.

A. Martin Wildberger, "AI & Simulation," Simulation, 57 (November 1991), p. 284.

Alan Wolfe, "Mind, Self, Society, and Computer: Artificial Intelligence and the Sociology of Mind," American Journal of Sociology, 96 (March 1991), pp. 1073-96.

Steve Woolgar, "Why Not a Sociology of Machines? The case of sociology and artificial intelligence," Sociology, 19 (November 1985), pp. 557-72.
