Copyright 1996-2001 by Donald R. Tveter, commercial use is prohibited. Short quotations are permitted if proper attribution is given. This material CAN be posted elsewhere on the net if the posted files are not altered in any way but please let me know where it is posted. The main location is: http://dontveter.com/bpr/bpr.html
From time to time someone asks about using integer weights and/or integer arithmetic to speed up backprop training. Once upon a time I used a UNIX PC, a machine without floating point hardware, and to get networks trained in a reasonable amount of time I had to write an integer version of backprop. It used 16-bit weights with 16- and 32-bit arithmetic, giving weights in the range -32 to +31.999. With this arrangement training would sometimes fail when the weights hit these limits; other times it did not matter. The other problem is that weight changes can become too small to register in the fixed-point format, which prevents any further learning. To get reasonable results you need to round, rather than truncate, when computing the errors passed backward through the network. The activation function can be piecewise linear or implemented with a look-up table. The number of iterations required was only slightly larger than for the floating point version.

On a 486DX33-based PC and on Pentium-class processors the integer version is only very slightly faster than the floating point version, so I no longer use it. If you want to try it anyway, or to see how the errors are computed, it is still in my older 4/27/96 free version of backprop. Many papers have been written on the subject as well.