A comp.ai.philosophy FAQ


Copyright 1999 by Donald R. Tveter, http://www.dontveter.com, commercial use is prohibited. This material cannot be quoted at length or posted elsewhere on the net or included in CD ROM collections. Short quotations are permitted provided proper attribution is given.


What Should the Definition of Artificial Intelligence Be?

In the minds of most people (especially the general public), the field of artificial intelligence is, or should be, going about the business of producing intelligence in digital computers. In most cases people mean that such a system should have all the flexibility that human beings have in dealing with the real world. Perhaps the most important requirement is that the system should learn rather than merely be programmed: almost everything it knows should come from interacting with the world. Plain old computer programs do not qualify as intelligent by this standard. In fact, producing this kind of intelligence was the original main goal of artificial intelligence researchers.

But every once in a while I've been surprised to have AI students or researchers claim that producing a general-purpose system as capable as a human being is NOT the main goal of AI, so I think it's important to document what was happening at the beginning of the field. Nils J. Nilsson was in on the very beginning of AI and wrote an article, "Eye on the Prize" (available in PostScript from Stanford University), which appeared in the Summer 1995 edition of AI Magazine. In the introduction Nilsson wrote:

In its early stages, the field of AI had as its main goal the invention of computer programs having the general problem-solving abilities of humans. Along the way, a major shift of emphasis developed from general purpose programs toward performance programs, ones whose competence was highly specialized and limited to particular areas of expertise. In this article, I claim that AI is now at the beginning of another transition, one that will reinvigorate efforts to build programs of general, humanlike competence. These programs will use specialized performance programs as tools, much like humans do.

In the article, Nilsson goes on to mention some of these specialized programs and then says:

High performance programs such as these are all very useful; they are important and worthy projects for AI, and undoubtedly, they have been excellent investments. Do they move AI closer to its original, main goal of developing a general, intelligent system? I think not.

Unfortunately, progress toward this original main goal of AI has not gone very well, and as Nilsson says in his article, AI has turned to making small, specialized systems. Nilsson lists many reasons for this, the main one being that AI is hard.

Another person who was in on the beginning of AI is John McCarthy. In a page called "From Here to Human Level AI", McCarthy has this to say at the very beginning of his article:

It is not surprising that reaching human-level AI has proved to be difficult and progress has been slow - though there has been important progress. The slowness and the demand to exploit what has been discovered has led many to mistakenly redefine AI, sometimes in ways that preclude human-level AI - by relegating to humans parts of the task that human-level computer programs would have to do.

The "many" that have mistakenly gone on to redefine AI are the AI researchers (AI insiders) themselves (not the public in general) although to be sure not all AI researchers are happy with this redefining of AI. Now the definition you are likely to get will go something like AI consists of getting a computer to do things that in human beings would require the use of intelligence and the exact means of doing so does not matter, any fixed algorithm would be enough and learning would be optional. So, for example if a program is able to play chess but it does not learn from its experience that is still artificial intelligence (with a big emphasis on artificial because that is not the way people handle chess!). Other insider definitions have AI as the "science of knowledge" or "symbolic computing". Many AI insiders assume the brain is a digital computer and to them if the brain is thinking then digital computers are thinking and therefore computers are intelligent, period.

The "insider" definitions are nonsense to the general public. By the new definitions you would have to call an accounting program an example of artificial intelligence. Or even a program that simply does arithmetic is also then an example of artificial intelligence. Even a piece of hardware like a Pentium processor that does arithmetic is then considered to be intelligent. By the "general public's" definition of AI if the program or the chip is really intelligent then when it makes a mistake it should be able to understand the problem and fix it so it doesn't make the same mistake over and over. The classic case is the Intel Pentium chip with the division bug. It didn't know it was making a mistake, it could not be told what the mistake was and it could not fix it. It was dumb not intelligent. People who believe in the new "insider" definitions and go around claiming their program is an example of "artificial intelligence" are just not going to be taken seriously by people who believe in the original definition and that means the general public will not take the whole field of artificial intelligence seriously.

So, by the original definition of AI, AI has basically been failing all these years. (Is it any wonder that "AI" researchers want to change the definition!) Given the shift of emphasis within the field, maybe the most honest definition right now is that AI is "advanced computing techniques". Lest anyone claim that this "advanced computing techniques" definition is a fabrication, let me say that I first heard it around 1985 and simply agreed with it. More recently it showed up in a March 1994 article in Communications of the ACM, where guest editor Toshinori Munakata wrote:

If we mean AI to be a realization of real human intelligence in the machine, its current state may be considered primitive. In this sense, the name "artificial intelligence" can be misleading. However, when AI is looked at as advanced computing, it can be seen as much more. In the past few years, the repertory of AI techniques has evolved and expanded, and applications have been made in everyday commercial and industrial domains. AI applications today span the realm of manufacturing, consumer products, finance, management and medicine.

Here are a couple of online articles that feature definitions of AI: