Abstract
Self-Organizing Maps (SOMs) are widely used for data clustering and dimensionality reduction. However, for applications to fully benefit from SOM-based techniques, high-speed processing is required, since the data involved tends to be both high-dimensional and large. Hence, a fully parallel architecture for the SOM is introduced to reduce the system's data processing time. Unlike most approaches in the literature, the architecture proposed here contains no sequential steps, a common limiting factor for processing speed. The architecture was validated on an FPGA and evaluated in terms of hardware throughput and resource usage. Comparisons with the state of the art show a speedup of 8.91× over a partially serial implementation while using less than 15% of the available hardware resources. Thus, the method proposed here points to a hardware architecture that will not become obsolete quickly.
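For readers unfamiliar with the algorithm the hardware accelerates, the following is a minimal sketch of one SOM training step in software. The grid size, learning rate, and neighborhood radius are illustrative choices, not values from the paper, and this sequential NumPy version is exactly the kind of computation the proposed architecture parallelizes.

```python
# Minimal sketch of one Self-Organizing Map (SOM) training step.
# Hyperparameters (grid size, lr, sigma) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 3            # 4x4 map of 3-dimensional weight vectors
weights = rng.random((grid_h, grid_w, dim))

# Grid coordinates of every neuron, used by the neighborhood function.
coords = np.stack(
    np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
)

def som_step(weights, x, lr=0.5, sigma=1.0):
    """Return updated weights after one SOM iteration for input vector x."""
    # 1. Find the best-matching unit (BMU): the neuron closest to x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. Gaussian neighborhood centred on the BMU's grid position.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    # 3. Pull every neuron toward x, scaled by learning rate and neighborhood.
    return weights + lr * h[..., None] * (x - weights)

x = rng.random(dim)
new_weights = som_step(weights, x)
```

Note that the BMU search (step 1) is a global argmin over all neurons; in a fully parallel hardware design this comparison is done concurrently rather than by iterating over the map.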
Original language | English |
---|---|
Pages (from-to) | 818-827 |
Number of pages | 10 |
Journal | Neural Networks |
Volume | 143 |
Early online date | 21 May 2021 |
DOIs | |
Publication status | Published - Nov 2021 |
Bibliographical note
Funding Information: This study was funded in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Institutional Program for Internationalization (CAPES - PrInt), Brazil.
Publisher Copyright:
© 2021 Elsevier Ltd
Keywords
- Self-Organizing Maps (SOM)
- FPGA
- Parallel design
- Hardware
- Self-Organizing Map
ASJC Scopus subject areas
- Artificial Intelligence
- Cognitive Neuroscience