The idea in this chapter is to do a little pattern recognition with concrete, down-to-earth problems like recognizing letters, digits, and words. The more abstract version of pattern recognition comes along in chapter 3. You could argue that chapter 3 should therefore come before chapter 2, except I think abstraction should come second, not first. This chapter also shows that "simple" pattern recognition like recognizing letters is not so simple after all: it is tied up with everything we know, all the way up to the highest levels.
The idea here is to show a simple hand-crafted neural network, inspired by James' principles of association, that can recognize a few letters of the alphabet. The weights are chosen by hand; however, the algorithms coming up in the next chapter can be applied to setting the weights as well, and they probably do a better job than anything you can hand-craft. In this section I assume the matrix containing the letters is simply passed to a conventional program which finds the short line segments that are used. This portion isn't really neural, but in the next section I show how this too can be done with a neural network.
It's quite easy to program the model in this section.
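To show just how easy, here is a minimal sketch of the kind of hand-wired association network the section describes. The segment names, the three letters, and the weight values are my own illustrative choices, not the book's; as in the section, a conventional routine is assumed to have already found which short line segments are present in the letter matrix.

```python
# Minimal sketch of a hand-wired letter recognizer.  The segment names
# and weights below are illustrative, not the book's actual values.
# Each letter unit is excited by the segments it contains and inhibited
# by any of its segments that are missing from the input.
WEIGHTS = {
    "T": {"top_bar": 1.0, "mid_vertical": 1.0},
    "L": {"left_side": 1.0, "bottom_bar": 1.0},
    "E": {"top_bar": 1.0, "middle_bar": 1.0,
          "bottom_bar": 1.0, "left_side": 1.0},
}

def recognize(present):
    """Score every letter unit and return the best match."""
    scores = {}
    for letter, w in WEIGHTS.items():
        excitation = sum(v for seg, v in w.items() if seg in present)
        inhibition = sum(v for seg, v in w.items() if seg not in present)
        scores[letter] = excitation - inhibition
    return max(scores, key=scores.get)

print(recognize({"left_side", "bottom_bar"}))  # prints "L"
```

The inhibition term is there so that a letter whose segments are a subset of another's (L inside E, say) still wins when only its own segments are present.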
From time to time people ask in comp.ai.neural-nets how to do character recognition, so here is a description based on what I have in the book.
This section shows how even finding the short line segments in a pattern can be done neurally. The weight adjustment formulas and activation functions of the neocognitron are not discussed. The neocognitron is capable of two types of learning: a supervised mode, where you give the network the answer, and a (much slower) unsupervised mode, where you simply present it with pattern after pattern.
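To make the idea concrete, here is a toy sketch of the neocognitron's first stage: an array of cells, each looking at a small window of the input and firing when an oriented line-segment template matches there. The 3x3 templates and the bare threshold unit are my own stand-ins, not Fukushima's actual receptive fields or activation functions (which, as noted, are not covered here).

```python
# Toy sketch of line-segment detection done "neurally": slide a fixed
# oriented template over a binary image and fire a cell wherever all of
# the template's pixels are on.  Templates and threshold are illustrative
# assumptions, not the neocognitron's real receptive fields.
TEMPLATES = {
    "horizontal": [(1, 0), (1, 1), (1, 2)],  # middle row of a 3x3 window
    "vertical":   [(0, 1), (1, 1), (2, 1)],  # middle column
}

def s_layer(image, template, threshold=3):
    """Return the window positions where the template matches."""
    rows, cols = len(image), len(image[0])
    hits = []
    for r in range(rows - 2):
        for c in range(cols - 2):
            active = sum(image[r + dr][c + dc] for dr, dc in template)
            if active >= threshold:        # simple threshold unit
                hits.append((r, c))
    return hits

img = [[0, 0, 0, 0],
       [1, 1, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(s_layer(img, TEMPLATES["horizontal"]))  # prints [(0, 0)]
```

One such layer per orientation gives exactly the segment features that the previous section had to get from a conventional program.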
Recognizing letters and symbols is not done in a vacuum; it is normally done as part of some task where your thinking biases you to see certain patterns. In this section I show how a more sophisticated type of network, an interactive activation network, can be used to capture this bias in a realistic way. The example comes from a famous work by Rumelhart and McClelland.
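The heart of the model is an update rule in which activation flows both bottom-up from letters to words and top-down from words back to letters. The fragment below is a made-up three-unit illustration (one word unit and two competing letter units), not the actual Rumelhart and McClelland network; the weights and inputs are my assumptions, but the update equation is the standard interactive activation rule.

```python
# Three-unit fragment of an interactive activation network.  The
# connection pattern and input values are illustrative assumptions;
# only the update equation follows the standard interactive
# activation rule.
MAX, MIN, REST, DECAY = 1.0, -0.2, 0.0, 0.1

def step(act, weights, external, rate=0.2):
    """One synchronous update of every unit's activation."""
    new = []
    for i, a in enumerate(act):
        # Only positive activations are passed along connections.
        net = external[i] + sum(w * max(act[j], 0.0)
                                for j, w in enumerate(weights[i]))
        if net > 0:
            delta = (MAX - a) * net - DECAY * (a - REST)
        else:
            delta = (a - MIN) * net - DECAY * (a - REST)
        new.append(a + rate * delta)
    return new

# Units: 0 = word "WORK", 1 = letter "K", 2 = letter "R" (same position).
# The word and K excite each other; the two letters inhibit each other;
# the ambiguous visual input supports K and R equally.
weights = [[0.0, 1.0, 0.0],    # into WORK: excitation from K
           [1.0, 0.0, -1.0],   # into K: from WORK, inhibition from R
           [0.0, -1.0, 0.0]]   # into R: inhibition from K
external = [0.2, 0.5, 0.5]     # word-level bias plus equal letter input

act = [0.0, 0.0, 0.0]
for _ in range(30):
    act = step(act, weights, external)
```

After a few dozen cycles the word-level support pulls K above its equally supported rival R, which is exactly the kind of task-driven bias the section describes.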
I did a not-at-all-fancy version of the interactive activation network; the C source, DOS binaries, and elementary instructions are available. It can be used for some of the exercises.
If you think recognizing letters of the alphabet is a problem, it gets still worse: interpreting the meaning of a sentence depends on knowing a lot about language and a lot about the world. This section shows some examples.
I thought it best to do pattern recognition from the standpoint of identifying letters of the alphabet because it is easy to relate to and it shows how many levels of knowledge are involved in the process, but there is a lot more to the subject. First, the methods given here are not completely realistic, in that human pattern recognition is much more complex than these algorithms. My guess is that the human algorithms will prove to be better than any of the simple man-made ones. Second, it's also important to be able to interpret pictures of arbitrary scenes, not just letters, but this important topic was not covered here because you just can't do everything and because the principles involved are quite similar.
For the interactive activation network exercises here, note that if you're not inclined to program it yourself you can use my software. (Actually I wrote it specifically to do the tic-tac-toe problem.)