The Pattern Recognition Basis of Artificial Intelligence

Chapter 6. Complex Architectures

The book starts with some of the simplest pattern recognition algorithms and moves on to higher levels of thought and reasoning. For the most part (not completely!) chapters 2 through 5 deal with one-step pattern recognition problems, but that is not good enough: most problems involve many steps of pattern recognition, so we really need an architecture that can string many such steps together to solve one problem or to cope with the real world. Unfortunately, not a lot can be done with this subject yet. What the chapter does do is introduce the idea of a short-term memory that works in conjunction with a long-term memory.

Besides the architecture problem there is also the problem of how to represent thoughts within such a system. Symbol processing advocates simply propose structures of symbols. Neural networking advocates have yet to establish good methods for storing structures of thoughts (unless you count pictures as structured objects, but hardly anyone has worked on this).

6.1 The Basic Human Architecture

This section lists some requirements for a human-like architecture including short-term and long-term memory.

6.2 Flow of Control

The point here is that the human architecture is interrupt-driven, analog and fallible.

6.3 The Virtual Symbol Processing Machine Proposal

A short section that describes roughly how a PDP architecture can be used to simulate a symbol processing architecture.
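
To give a feel for what such a simulation might look like, here is a toy illustration of my own, a minimal sketch in Python with NumPy and not the construction from the book: every symbol is an arbitrary random vector, and a single weight matrix stores symbol-to-symbol associations, so that multiplying by the matrix performs the kind of lookup a symbol processor would do with a table.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                  # dimensionality of each symbol vector

    # Each symbol is an arbitrary random +/-1 vector; the network never sees names.
    symbols = {s: rng.choice([-1.0, 1.0], N) for s in
               ["canary", "bird", "penguin", "fish", "salmon"]}

    # Store symbolic facts such as isa(canary, bird) as vector associations
    # in one weight matrix (a simple Hebbian outer-product memory).
    isa_facts = [("canary", "bird"), ("penguin", "bird"), ("salmon", "fish")]
    W = np.zeros((N, N))
    for a, b in isa_facts:
        W += np.outer(symbols[b], symbols[a]) / N

    def isa(name):
        """Apply the stored 'isa' relation to a symbol and decode the result."""
        out = W @ symbols[name]
        return max(symbols, key=lambda s: np.dot(out, symbols[s]))

    print(isa("canary"))    # -> 'bird' (with high probability at N = 256)

The point of the toy is only that the "rule" lives in distributed weights rather than in an explicit table of symbol pairs; the proposal in the book is of course much more ambitious than this.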

6.4 Mental Representation and Computer Representation

This section is largely concerned with the difficulties involved in trying to describe the world entirely with symbols. I believe Harnad is right in saying that symbols must be defined in terms of pictures, the idea of symbol grounding. The paper by Stevan Harnad, "The Symbol Grounding Problem", is a compressed text file available from the Ohio State neuroprose archive.

6.5 Storing Sequential Events

It's rather easy to store a sequence of symbols, like the words in a line of poetry, in a symbol processing format: you simply use a linked list. To do the same thing in a neural networking format you can use a recurrent network with a short-term memory and get it to "memorize" a list of words. The data for the example network that memorizes the two lines of poetry is included with my backprop software.
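
The symbolic version needs no explanation, since the linked list does all the work; the neural version needs a network whose hidden units serve as the short-term memory. Below is a minimal sketch of the idea in Python with NumPy (my own stand-in, not the data or the backprop software mentioned above): an Elman-style simple recurrent network, trained with backpropagation truncated to one time step, learns to predict each word's successor in a short word list.

    import numpy as np

    # Any short word list will do; this one stands in for the lines of poetry.
    words = "the cat sat on the mat and looked at the dog".split()
    vocab = sorted(set(words))
    idx = {w: i for i, w in enumerate(vocab)}
    V, H = len(vocab), 12

    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.1, (H, V))     # input units   -> hidden units
    Whh = rng.normal(0, 0.1, (H, H))     # context units -> hidden units
    Why = rng.normal(0, 0.1, (V, H))     # hidden units  -> output units

    def one_hot(i):
        v = np.zeros(V)
        v[i] = 1.0
        return v

    lr = 0.1
    for epoch in range(3000):
        h = np.zeros(H)                              # the short-term memory
        for t in range(len(words) - 1):
            x, target = one_hot(idx[words[t]]), one_hot(idx[words[t + 1]])
            h_new = np.tanh(Wxh @ x + Whh @ h)
            y = Why @ h_new
            p = np.exp(y - y.max()); p /= p.sum()    # softmax over the vocabulary
            dy = p - target                          # cross-entropy gradient
            dh = (Why.T @ dy) * (1 - h_new ** 2)     # backprop, truncated to 1 step
            Why -= lr * np.outer(dy, h_new)
            Wxh -= lr * np.outer(dh, x)
            Whh -= lr * np.outer(dh, h)
            h = h_new

    # Recall: feed each predicted word back in and read off its successor.
    h, out = np.zeros(H), [words[0]]
    for t in range(len(words) - 1):
        h = np.tanh(Wxh @ one_hot(idx[out[-1]]) + Whh @ h)
        out.append(vocab[int(np.argmax(Why @ h))])
    print(" ".join(out))    # a net this small usually memorizes the list exactly

The context layer, the copy of the previous hidden state, is what lets the network tell which occurrence of "the" it is looking at, something a bare feedforward network could not do.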

6.6 Structuring Individual Thoughts

This section considers how to store the ideas in simple sentences like "John and Mary ate cookies". This is easily done in a symbol processing framework but it's not so easily done (as yet) in a neural networking format. This section contains a few of the many ways of storing structured thoughts that have been proposed. I doubt that any of them, neural or symbolic, is the "right" one, the one that people use. If you wonder why I'm obsessed with finding out what people do, it's because I suspect that whatever people are doing in terms of storing and using information is rather clever. Once we find out what is going on we may be able to apply the principles somewhat differently in an artificial system. MacLennan commented on some neural methods that are essentially neural implementations of symbolic methods by saying they are "just putting a fresh coat of paint on old, rotting theories".
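
To see why the symbolic side is the easy one, here is one hypothetical role-and-filler structure for the sentence, written as a small Python dictionary; the role names are just my own choices, not a notation from the book.

    # "John and Mary ate cookies" as a predicate with labeled roles.
    thought = {
        "predicate": "eat",
        "tense": "past",
        "agent": ("John", "Mary"),     # a conjoined agent
        "patient": "cookies",
    }

    # Access is trivial: who did the eating?
    print(thought["agent"])            # -> ('John', 'Mary')

The hard question is how to get anything with this kind of explicit structure, and this kind of trivial access, into a network of units and weights.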

The RAAM articles by Jordan Pollack are online: first "Implications of Recursive Distributed Representations" and second "Recursive Distributed Representations", both from the Ohio State neuroprose archive. RAAM is a cute idea but I have to wonder whether it's realistic or not.
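
For readers who just want the flavor of RAAM, here is a rough sketch of my own in Python with NumPy, only an approximation of Pollack's actual procedure: an autoencoder compresses the concatenation of two child representations down to the width of a single representation, the representations of internal nodes are re-derived from the current encoder every epoch (the "moving target" aspect of RAAM), and the resulting root vector can then be decoded back into approximations of its two children.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 16                    # width of every representation, leaf or internal

    # Fixed random patterns for the leaves of the tree ((john mary) (ate cookies)).
    leaves = {w: rng.uniform(0.1, 0.9, N) for w in ["john", "mary", "ate", "cookies"]}
    tree = (("john", "mary"), ("ate", "cookies"))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    We = rng.normal(0, 0.3, (N, 2 * N))    # encoder: two children -> one parent
    Wd = rng.normal(0, 0.3, (2 * N, N))    # decoder: one parent -> two children

    def rep(node):
        """Encode a node bottom-up with the current encoder weights."""
        if isinstance(node, str):
            return leaves[node]
        left, right = node
        return sigmoid(We @ np.concatenate([rep(left), rep(right)]))

    def training_pairs(node, out):
        """Collect the concatenated children of every internal node."""
        if isinstance(node, str):
            return
        left, right = node
        training_pairs(left, out)
        training_pairs(right, out)
        out.append(np.concatenate([rep(left), rep(right)]))

    lr = 0.2
    for epoch in range(4000):
        batch = []
        training_pairs(tree, batch)        # moving targets: reps change each epoch
        for x in batch:
            h = sigmoid(We @ x)            # compress 2N numbers down to N
            y = sigmoid(Wd @ h)            # and try to reconstruct all 2N of them
            dy = (y - x) * y * (1 - y)
            dh = (Wd.T @ dy) * h * (1 - h)
            Wd -= lr * np.outer(dy, h)
            We -= lr * np.outer(dh, x)

    # The whole tree in one fixed-width vector, decoded one level back down.
    root = rep(tree)
    children = np.concatenate([rep(tree[0]), rep(tree[1])])
    print("reconstruction error:", np.abs(sigmoid(Wd @ root) - children).max())

Whether the decoded children stay clean enough to keep decoding recursively is exactly the question of how well RAAM scales, which is part of why I wonder how realistic it is.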

A variation on RAAM called SRAAM, for Sequential RAAM, turns trees into a linear list and then stores the list with a RAAM-like procedure. See the article "Tail-Recursive Distributed Representations and Simple Recurrent Networks" by Stan C. Kwasny and Barry L. Kalman, available by HTTP from Washington University in St. Louis. This is not covered in the book.
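
The essential move is the linearization; a toy flattening of my own (not necessarily the encoding Kwasny and Kalman actually use) looks like this, after which the token list can be absorbed by a sequential, RAAM-like encoder:

    def linearize(node):
        """Flatten a binary tree into a token list a sequential encoder can absorb."""
        if isinstance(node, str):
            return [node]
        left, right = node
        return ["("] + linearize(left) + linearize(right) + [")"]

    print(linearize((("john", "mary"), ("ate", "cookies"))))
    # ['(', '(', 'john', 'mary', ')', '(', 'ate', 'cookies', ')', ')']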

A newer scheme that may be more realistic, and which I don't cover in this section, is described in the article "Holographic Reduced Representations" by Tony Plate, published in the May 1995 IEEE Transactions on Neural Networks and also online from the University of Toronto. If you FTP to Tony Plate's directory at the University of Toronto you'll find this topic treated in documents of various sizes, including a whole thesis.
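
The heart of Plate's scheme is binding a role vector to a filler vector with circular convolution, adding the bound pairs together into one fixed-width trace, and then probing the trace with circular correlation followed by a clean-up match against the known fillers. Here is a minimal sketch, with dimensionality and role names of my own choosing:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 512                                   # vector dimensionality

    def vec():                                # random vector, elements ~ N(0, 1/N)
        return rng.normal(0, 1 / np.sqrt(N), N)

    def bind(a, b):                           # circular convolution
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def probe(trace, cue):                    # circular correlation: rough unbinding
        return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

    # Role and filler vectors for "John ate cookies".
    agent, verb, patient = vec(), vec(), vec()
    john, eat, cookies = vec(), vec(), vec()

    # The whole proposition superimposed in one fixed-width trace.
    trace = bind(agent, john) + bind(verb, eat) + bind(patient, cookies)

    # Who was the agent?  Probe with the role, then clean up against known fillers.
    noisy = probe(trace, agent)
    fillers = {"john": john, "eat": eat, "cookies": cookies}
    print(max(fillers, key=lambda k: np.dot(noisy, fillers[k])))   # -> 'john'

The trace for the whole proposition is the same size as any single vector, which is what makes the scheme attractive for storing recursive structure.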

6.7 Frames and Scripts

This section describes these traditional AI data structures. You could make a case that scripts ought to be left until chapter 10 on natural language, but since they involve data structuring they also fit in here. Chapter 10 does more with scripts, and I don't do the neural version of scripts here; I leave that for chapter 10.
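
As a data structure, a frame or a script is not much more than a set of named slots with default fillers. A hypothetical restaurant script in Python, with slot names of my own choosing, could be as simple as this:

    # A restaurant script in the spirit of Schank and Abelson: slots for roles
    # and props, an ordered list of scenes, and defaults for what a story omits.
    restaurant_script = {
        "roles":    {"customer": None, "waiter": None, "cook": None},
        "props":    ["menu", "table", "food", "check"],
        "scenes":   ["enter", "order", "eat", "pay", "leave"],
        "defaults": {"payment": "money", "outcome": "customer is no longer hungry"},
    }

    # A story fills some slots; the defaults answer questions the story never states.
    restaurant_script["roles"]["customer"] = "John"
    print(restaurant_script["defaults"]["payment"])    # unstated but assumed: money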


If you have any questions or comments, write me.
