Efficient Architectures for MLP-BP Artificial Neural Networks Implemented on FPGAs

By Antony Savich, May 2006

Abstract: Artificial Neural Networks, and in particular the Multi-Layer Perceptron trained with the Back Propagation algorithm (MLP-BP), have historically suffered from slow training, yet many applications require real-time training. This thesis studies aspects of MLP-BP implementation in Field Programmable Gate Array (FPGA) hardware for accelerating network training. This task is accomplished through analysis of numeric representation and its effect on network convergence, hardware performance, and resource consumption. The effects of pipelining on the Back Propagation algorithm are analyzed, and a novel hardware architecture is presented. This new architecture allows extended flexibility in terms of selected numeric representation, degree of system-level parallelism, and network virtualization. A high degree of resource efficiency is achieved through careful architectural design, which allows large network topologies to be placed within a single FPGA. Performance analysis of this pipelined architecture demonstrates an improvement of at least three orders of magnitude over software implementations.
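
As a point of reference for the numeric representation analysis summarized above, the following is a minimal software sketch, not the thesis's hardware design, of a single MLP-BP weight update carried out in emulated fixed-point arithmetic. The Q4.12 format, the learning rate, and all function names are illustrative assumptions introduced here, not values taken from the thesis.

```python
# Minimal sketch (assumptions, not the thesis's architecture): one back
# propagation weight update computed in software-emulated fixed-point
# arithmetic, illustrating how the chosen numeric representation enters
# the arithmetic that an FPGA datapath would perform.

FRAC_BITS = 12                 # assumed fractional bits (Q4.12 format)
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real value to the assumed fixed-point format."""
    return int(round(x * SCALE))

def fx_mul(a: int, b: int) -> int:
    """Fixed-point multiply: rescale the double-width product."""
    return (a * b) >> FRAC_BITS

def update_weight(w: int, eta: int, delta: int, activation: int) -> int:
    """Back propagation weight update: w <- w + eta * delta * activation."""
    return w + fx_mul(eta, fx_mul(delta, activation))

if __name__ == "__main__":
    w = to_fixed(0.25)         # current weight
    eta = to_fixed(0.1)        # learning rate (assumed value)
    delta = to_fixed(-0.5)     # local error term from back propagation
    a = to_fixed(0.8)          # input activation feeding this weight
    w_new = update_weight(w, eta, delta, a)
    print(w_new / SCALE)       # ~0.21, i.e. 0.25 + 0.1 * (-0.5) * 0.8
```

The rounding introduced by the right shift in fx_mul is one example of the quantization error whose effect on convergence a numeric-representation study of this kind must account for.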