A Reconfigurable Computing Architecture for Implementing ANNs on FPGAs
By Kristian Nichols, December 2003
Abstract:
Artificial Neural Networks (ANNs), and the backpropagation
algorithm in particular, are a form of artificial intelligence that has
traditionally suffered from slow training and from the lack of a clear
methodology for determining network topology before training begins. Past
researchers have used reconfigurable computing as one means of accelerating
ANN testing.
The goal of this research was to determine how recent improvements in the
tools and methodologies used in reconfigurable computing have helped advance
the field, and thus strengthened its applicability to accelerating ANNs.
A new FPGA-based ANN architecture, called RTR-MANN, was created to demonstrate
the performance enhancements gained from using current-generation tools and
methodologies. RTR-MANN was shown to have an order of magnitude more
scalability and functional density compared to older-generation FPGA-based
ANN architectures. In addition, use of a new system design methodology (via
High-level Language) led to a more intuitive verification / validation phase,
which was an order of magnitude faster compared traditional HDL simulators.