Show simple item record

Article

dc.creator: Pérez Carrasco, José Antonio
dc.creator: Zhao, Bo
dc.creator: Serrano Gotarredona, María del Carmen
dc.creator: Acha Piñero, Begoña
dc.creator: Serrano Gotarredona, María Teresa
dc.creator: Cheng, Shouchun
dc.creator: Linares Barranco, Bernabé
dc.date.accessioned: 2018-10-25T15:32:44Z
dc.date.available: 2018-10-25T15:32:44Z
dc.date.issued: 2013
dc.identifier.citation: Pérez Carrasco, J.A., Zhao, B., Serrano Gotarredona, M.d.C., Acha, B., Serrano Gotarredona, T., Cheng, S. y Linares Barranco, B. (2013). Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed-Forward ConvNets. IEEE transactions on pattern analysis and machine intelligence, 35 (11), 2706-2719.
dc.identifier.issn: 0162-8828
dc.identifier.uri: https://hdl.handle.net/11441/79657
dc.description.abstract: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems, which consist of sequences of still images rendered at a given "frame rate". Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses that something meaningful is happening, without any notion of a frame. A special type of Event-driven sensor is the so-called Dynamic Vision Sensor (DVS), where each pixel computes relative changes of light, or "temporal contrast". The sensor output consists of a continuous flow of pixel events which represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality". These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper we present a methodology for mapping from a properly trained neural network in a conventional Frame-driven representation to an Event-driven representation. The method is illustrated by studying Event-driven Convolutional Neural Networks (ConvNets) trained to recognize rotating human silhouettes or high-speed poker card symbols. The Event-driven ConvNet is fed with recordings obtained from a real DVS camera. The Event-driven ConvNet is simulated with a dedicated Event-driven simulator and consists of a number of Event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
dc.format: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: IEEE transactions on pattern analysis and machine intelligence, 35 (11), 2706-2719.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Feature Extraction
dc.subject: Convolutional Neural Networks
dc.subject: Object Recognition
dc.title: Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed-Forward ConvNets
dc.type: info:eu-repo/semantics/article
dc.type.version: info:eu-repo/semantics/submittedVersion
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.contributor.affiliation: Universidad de Sevilla. Departamento de Teoría de la Señal y Comunicaciones
dc.relation.publisherversion: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6497055
dc.identifier.doi: 10.1109/TPAMI.2013.71
idus.format.extent: 14 p.
dc.journaltitle: IEEE transactions on pattern analysis and machine intelligence
dc.publication.volumen: 35
dc.publication.issue: 11
dc.publication.initialPage: 2706
dc.publication.endPage: 2719
dc.identifier.sisius: 20530392
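
The abstract describes an event-driven convolution scheme: each incoming DVS event immediately updates an integrate-and-fire feature map, and output events are emitted the moment a neuron crosses its firing threshold, so input and output event flows overlap in time. The Python sketch below illustrates only that general idea; it is not the authors' Event-driven simulator, and the function name event_driven_conv, the exponential leak model, and all parameter values are illustrative assumptions.

import numpy as np

def event_driven_conv(events, kernel, shape, threshold=1.0, leak_rate=0.1):
    """Push a stream of DVS events through one event-driven convolution stage.

    events:  iterable of (x, y, t, polarity), t in seconds, polarity +1/-1
    kernel:  2D weight array with odd side lengths (assumed)
    shape:   (height, width) of the pixel array
    Yields output events (x, y, t, polarity) as neurons cross threshold,
    so stages can be cascaded like the event processors in the paper.
    """
    h, w = shape
    kh, kw = kernel.shape
    state = np.zeros((h, w))  # membrane potential of each output neuron
    last_t = 0.0
    for x, y, t, pol in events:
        # Passive exponential leak between consecutive events (assumed model).
        state *= np.exp(-leak_rate * (t - last_t))
        last_t = t
        # Add the polarity-signed kernel centered on the event, clipped at borders.
        y0, y1 = max(0, y - kh // 2), min(h, y + kh // 2 + 1)
        x0, x1 = max(0, x - kw // 2), min(w, x + kw // 2 + 1)
        ky0, kx0 = y0 - (y - kh // 2), x0 - (x - kw // 2)
        state[y0:y1, x0:x1] += pol * kernel[ky0:ky0 + (y1 - y0),
                                            kx0:kx0 + (x1 - x0)]
        # Neurons whose potential crosses threshold fire immediately and reset.
        for fy, fx in np.argwhere(np.abs(state) >= threshold):
            yield (int(fx), int(fy), t, int(np.sign(state[fy, fx])))
            state[fy, fx] = 0.0

# Example: 500 synthetic positive events through a 3x3 averaging kernel.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    evs = [(int(rng.integers(0, 32)), int(rng.integers(0, 32)), i * 1e-4, 1)
           for i in range(500)]
    out = list(event_driven_conv(evs, np.full((3, 3), 0.2), (32, 32)))
    print(len(out), "output events")

Note how, unlike a frame-driven ConvNet, no frame is ever assembled: state is updated per event and outputs appear while inputs are still arriving, which is the property the mapping methodology in the paper exploits.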

Files
tpami2013_AuthorAcceptedVersion.pdf (1.618 MB, PDF)


