dc.creator | Urgese, Gianvito | es |
dc.creator | Ríos-Navarro, Antonio | es |
dc.creator | Linares Barranco, Alejandro | es |
dc.creator | Stewart, Terrence C. | es |
dc.creator | Michmizos, Konstantinos | es |
dc.date.accessioned | 2023-07-07T13:53:35Z | |
dc.date.available | 2023-07-07T13:53:35Z | |
dc.date.issued | 2023-05 | |
dc.identifier.issn | 1662-453X | es |
dc.identifier.uri | https://hdl.handle.net/11441/147808 | |
dc.description.abstract | The brain, this 3-pound mass of tissue that can easily be held in one's palm, has an inherent computational complexity that has always inspired efforts to endow machines with some of its remarkable characteristics. Ironically, the brain computes in a way of its own: it shares key concepts with both analog and digital computers, yet resembles neither. It employs analog computation but digital communication, through spikes, and both choices improve robustness to noise. This unique combination defines a new computational paradigm that we have just started to explore.
The reasons why neuromorphic systems are among the fastest growing applications are not purely scientific, but mainly technological. For 50 years, the principle guiding computing has been Moore's Law, the macroscopic observation that we will always find ways to engineer faster, smaller and cheaper chips. But there are several reasons why Moore's Law can no longer keep up. First, physics: as we shrink transistors toward the atomic scale, it becomes difficult to regulate electron flow. Electrons do not adhere to Newtonian physics and may pass through transistor barriers, a phenomenon called quantum tunneling, which makes our computer architectures inefficient. Second, we have long accepted a trade-off between computing faster and consuming less power, but it never became a problem until we approached the physical limits of transistor fabrication. The final nail in Moore's Law's coffin is deep learning: our computational needs are now orders of magnitude higher than what our systems can deliver.
The von Neumann paradigm, for all its brilliance, already has more than its fair share of inefficiency. The reason is simple: computers have been designed with feasibility, not efficiency, at their center. And nowhere are the effects of that design more evident, or the opportunity for an alternative design more compelling, than in emerging technologies such as edge intelligence, where the computing needs become distilled into real-time solutions to problems constrained by big data. A deep network running on a wearable device will deplete its battery within minutes. The sensors of an autonomous car can easily generate 1 GB/s of data. These examples illustrate the need for real-time computing. The explosive growth of the IoT is limited by the efficiency of our computing systems, and we are nowhere near prepared for this computational tsunami.
There is no better time than now to reconsider the feasibility of alternative solutions. What we need is a computing paradigm that is versatile, robust and power efficient enough to handle these seismic shifts in our needs. And what we have now is enough knowledge of how the brain achieves these goals. The brain is fault tolerant. It is extremely power efficient. And it becomes useless if detached from its environment: it performs self-learning, one of its most important attributes, by self-organizing based on the input it receives from the environment and from other brains.
In this Research Topic, we present efforts to advance non-von Neumann computation that draws on the brain's functional analogies. Below we walk through the rationale, the challenges and the advantages of redefining algorithms as spiking neural networks, where memory, learning and computing are tightly integrated to advance the implementation of enhanced IoT solutions. | es |
dc.format | application/pdf | es |
dc.format.extent | 3 p. | es |
dc.language.iso | eng | es |
dc.publisher | Frontiers Media S.A. | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | Brain-inspired computational primitives | es |
dc.subject | Neuromorphic engineering | es |
dc.subject | Neuromorphic IoT applications | es |
dc.subject | Neuromorphic tools | es |
dc.subject | Sensory fusion | es |
dc.subject | Neuromorphic computing | es |
dc.subject | Neuromorphic framework | es |
dc.title | Editorial: Powering the next-generation IoT applications: new tools and emerging technologies for the development of Neuromorphic System of Systems | es |
dc.type | info:eu-repo/semantics/article | es |
dcterms.identifier | https://ror.org/03yxnpp24 | |
dc.type.version | info:eu-repo/semantics/publishedVersion | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.contributor.affiliation | Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores | es |
dc.relation.projectID | MINDROB PID2019-105556GB-C33/AEI/10.13039/501100011033 | es |
dc.relation.projectID | SMALL PCI2019-111841-2/AEI/10.13039/501100011033 | es |
dc.relation.publisherversion | https://www.frontiersin.org/articles/10.3389/fnins.2023.1197918/full | es |
dc.contributor.group | Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores | es |
idus.validador.nota | This article is part of the Research Topic: Powering the next-generation IoT applications: New tools and emerging technologies for the development of Neuromorphic System of Systems | es |
dc.journaltitle | Frontiers in Neuroscience | es |
dc.publication.volumen | 17 | es |
dc.publication.issue | 1197918 | es |
dc.contributor.funder | Spanish grant with support from the European Regional Development Fund MINDROB PID2019-105556GB-C33/AEI/10.13039/501100011033 | es |
dc.contributor.funder | Spanish grant with support from the European Regional Development Fund SMALL PCI2019-111841-2/AEI/10.13039/501100011033 | es |