Show simple item record

Article

dc.creator: Carranza García, Manuel
dc.creator: Torres Mateo, Jesús
dc.creator: Lara Benítez, Pedro
dc.creator: García Gutiérrez, Jorge
dc.date.accessioned: 2022-02-18T09:03:49Z
dc.date.available: 2022-02-18T09:03:49Z
dc.date.issued: 2021
dc.identifier.citation: Carranza García, M., Torres Mateo, J., Lara Benítez, P. and García Gutiérrez, J. (2021). On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data. Remote Sensing, 13 (1)
dc.identifier.issn: 2072-4292
dc.identifier.uri: https://hdl.handle.net/11441/130053
dc.description.abstract: Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
dc.description.sponsorship: Ministerio de Economía y Competitividad TIN2017-88209-C2-2-R
dc.description.sponsorship: Junta de Andalucía US-1263341
dc.description.sponsorship: Junta de Andalucía P18-RT-2778
dc.format: application/pdf
dc.format.extent: 23
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.ispartof: Remote Sensing, 13 (1)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Autonomous vehicles
dc.subject: Convolutional neural networks
dc.subject: Deep learning
dc.subject: Object detection
dc.subject: Transfer learning
dc.title: On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data
dc.type: info:eu-repo/semantics/article
dcterms.identifier: https://ror.org/03yxnpp24
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.contributor.affiliation: Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
dc.relation.projectID: TIN2017-88209-C2-2-R
dc.relation.projectID: US-1263341
dc.relation.projectID: P18-RT-2778
dc.relation.publisherversion: https://www.mdpi.com/2072-4292/13/1/89
dc.identifier.doi: 10.3390/rs13010089
dc.journaltitle: Remote Sensing
dc.publication.volumen: 13
dc.publication.issue: 1
dc.contributor.funder: Ministerio de Economía y Competitividad (MINECO). España
dc.contributor.funder: Junta de Andalucía

Files: On the Performance of One-Stage ... (PDF, 5.202 MB)

This record appears in the following collections


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as: Attribution-NonCommercial-NoDerivatives 4.0 International