Show simple item record

Conference paper

dc.creator: Gruel, Amélie
dc.creator: Martinet, Jean
dc.creator: Serrano Gotarredona, María Teresa
dc.creator: Linares Barranco, Bernabé
dc.date.accessioned: 2023-04-18T09:55:55Z
dc.date.available: 2023-04-18T09:55:55Z
dc.date.issued: 2022-02
dc.identifier.citation: Gruel, A., Martinet, J., Serrano Gotarredona, M.T. and Linares Barranco, B. (2022). Event data downscaling for embedded computer vision. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (245-253). Online Streaming: SciTePress.
dc.identifier.isbn: 978-989-758-555-5 (Vol. 4: VISAPP)
dc.identifier.issn: 2184-4321
dc.identifier.uri: https://hdl.handle.net/11441/144573
dc.description: VISAPP 2022 is part of VISIGRAPP
dc.description.abstract: Event cameras (or silicon retinas) represent a new kind of sensor that measures pixel-wise changes in brightness and outputs asynchronous events accordingly. This novel technology allows for a sparse and energy-efficient recording and storage of visual information. While this type of data is sparse by definition, the event flow can be very high, up to 25M events per second, which requires significant processing resources to handle and therefore impedes embedded applications. Neuromorphic computer vision and event sensor based applications are receiving an increasing interest from the computer vision community (classification, detection, tracking, segmentation, etc.), especially for robotics or autonomous driving scenarios. Downscaling event data is an important feature in a system, especially if embedded, so as to be able to adjust the complexity of data to the available resources such as processing capability and power consumption. To the best of our knowledge, this work is the first attempt to formalize event data downscaling. In order to study the impact of spatial resolution downscaling, we compare several features of the resulting data, such as the total number of events, event density, information entropy, computation time and optical consistency as assessment criteria. Our code is available online at https://github.com/amygruel/EvVisu.
dc.format: application/pdf
dc.format.extent: 9
dc.language.iso: eng
dc.publisher: SciTePress
dc.relation.ispartof: Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (2022), pp. 245-253.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Event cameras
dc.subject: Computer vision
dc.subject: Data reduction
dc.subject: Preprocessing
dc.subject: Visualisation
dc.title: Event data downscaling for embedded computer vision
dc.type: info:eu-repo/semantics/conferenceObject
dcterms.identifier: https://ror.org/03yxnpp24
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.contributor.affiliation: Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores
dc.relation.publisherversion: https://www.scitepress.org/Link.aspx?doi=10.5220/0010991900003124
dc.identifier.doi: 10.5220/0010991900003124
dc.publication.initialPage: 245
dc.publication.endPage: 253
dc.eventtitle: Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
dc.eventinstitution: Online Streaming
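The abstract above describes spatial downscaling of event-camera data, assessed by criteria such as total event count, event density and information entropy. As an illustration only, and not the authors' implementation (their code is at https://github.com/amygruel/EvVisu), the following minimal Python sketch shows one naive reduction strategy assumed here for clarity: integer division of pixel coordinates by a scaling factor, followed by removal of the duplicate events this creates. The (x, y, t, polarity) event layout is also an assumption of the sketch.

# Minimal sketch of spatial event downscaling; not the authors' method.
import numpy as np

def downscale_events(events: np.ndarray, factor: int) -> np.ndarray:
    """Map events from the original sensor grid onto a grid `factor` times smaller.

    events: integer array of shape (N, 4) with columns x, y, t, polarity (assumed layout).
    factor: spatial downscaling factor, e.g. 2 halves each spatial dimension.
    """
    scaled = events.copy()
    scaled[:, 0] = events[:, 0] // factor  # new x coordinate on the reduced grid
    scaled[:, 1] = events[:, 1] // factor  # new y coordinate on the reduced grid
    # Several source events may now coincide on the same target pixel with the same
    # timestamp and polarity; keep only one of each to reduce the total event count.
    unique_events = np.unique(scaled, axis=0)
    # np.unique sorts rows lexicographically; restore chronological order by timestamp.
    return unique_events[np.argsort(unique_events[:, 2], kind="stable")]

if __name__ == "__main__":
    # Synthetic example: reduce a 1280x720 event stream to 640x360.
    rng = np.random.default_rng(0)
    n = 1000
    events = np.column_stack([
        rng.integers(0, 1280, n),             # x
        rng.integers(0, 720, n),              # y
        np.sort(rng.integers(0, 10_000, n)),  # timestamps
        rng.integers(0, 2, n),                # polarity
    ])
    reduced = downscale_events(events, factor=2)
    print(len(events), "->", len(reduced), "events")

Other reduction strategies would trade off event count, density and entropy differently; those are exactly the assessment criteria the abstract lists for comparing downscaling methods.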

Files: VISAPP_2022.pdf (404.8 Kb, PDF)

