Efficient parallel implementation of reservoir computing systems


dc.contributor.author Alomar, M.L.
dc.contributor.author Skibinsky-Gitlin, Erik S.
dc.contributor.author Frasser, Christiam F.
dc.contributor.author Canals, Vincent
dc.contributor.author Isern, Eugeni
dc.contributor.author Roca, Miquel
dc.contributor.author Rosselló, Josep L.
dc.date.accessioned 2024-02-09T08:30:38Z
dc.date.available 2024-02-09T08:30:38Z
dc.identifier.uri http://hdl.handle.net/11201/164662
dc.description.abstract Reservoir computing (RC) is a powerful machine learning methodology well suited for time-series processing. The hardware implementation of RC systems (HRC) may extend the utility of this neural approach to solve real-life problems for which software solutions are not satisfactory. Nevertheless, the implementation of massively parallel-connected reservoir networks is costly in terms of circuit area and power, mainly due to the synapse multipliers required, which increase the gate count to prohibitive values. Most HRC systems in the literature solve this area problem by serializing the processing, thus losing the fault tolerance and low latency expected of fully parallel-connected HRCs. Therefore, the development of new methodologies to implement fully parallel HRC systems is of high interest to many computational intelligence applications requiring fast responses. In this article, we propose a compact hardware implementation for Echo-State Networks (a specific type of reservoir) that reduces the area cost by simplifying the synapses and using piecewise linear activation functions for the neurons. The proposed design is synthesized on a Field-Programmable Gate Array and evaluated on different time-series prediction tasks. Without compromising the overall accuracy, the proposed approach achieves significant savings in power and hardware when compared with recently published implementations. This technique paves the way for the low-power implementation of fully parallel reservoir networks containing thousands of neurons in a single integrated circuit.
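dc.description.note As a software-level illustration of the reservoir update described in the abstract, the minimal sketch below implements an echo-state network whose neurons use a piecewise-linear (clipping) activation instead of tanh. The reservoir size, weight ranges, spectral radius, and least-squares readout are illustrative assumptions, not the parameters or hardware architecture of the published design.

```python
import numpy as np

def pwl(x):
    # Piecewise-linear activation: hard clipping to [-1, 1],
    # a hardware-friendly stand-in for tanh.
    return np.clip(x, -1.0, 1.0)

class ESN:
    def __init__(self, n_in, n_res, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale recurrent weights to the chosen spectral radius
        # so the reservoir retains the echo-state property.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.x = np.zeros(n_res)

    def step(self, u):
        # Reservoir state update with the piecewise-linear activation.
        self.x = pwl(self.W @ self.x + self.W_in @ u)
        return self.x

# Toy usage: one-step-ahead prediction of a scalar time series.
u = np.sin(np.arange(500) * 0.2)[:, None]       # input series
esn = ESN(n_in=1, n_res=100)
X = np.vstack([esn.step(ut) for ut in u[:-1]])  # collected reservoir states
y = u[1:, 0]                                    # next-step targets
W_out = np.linalg.lstsq(X, y, rcond=None)[0]    # linear readout
pred = X @ W_out
```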
dc.format application/pdf
dc.relation.isformatof Postprint version of the document published at: https://doi.org/10.1007/s00521-018-3912-4
dc.relation.ispartof Neural Computing & Applications, 2018, vol. 32, num. 7, p. 2299-2313
dc.subject.classification 62 - Enginyeria. Tecnologia
dc.subject.other 62 - Engineering. Technology in general
dc.title Efficient parallel implementation of reservoir computing systems
dc.type info:eu-repo/semantics/article
dc.type info:eu-repo/semantics/acceptedVersion
dc.date.updated 2024-02-09T08:30:39Z
dc.subject.keywords Artificial neural networks
dc.subject.keywords recurrent neural networks
dc.subject.keywords reservoir computing
dc.subject.keywords field-programmable gate arrays (FPGA)
dc.subject.keywords time series
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.identifier.doi https://doi.org/10.1007/s00521-018-3912-4

