Show simple item record

Article

dc.creator: Carranza García, Manuel
dc.creator: Galán Sales, Francisco Javier
dc.creator: Luna Romera, José María
dc.creator: Riquelme Santos, José Cristóbal
dc.date.accessioned: 2024-03-15T08:53:53Z
dc.date.available: 2024-03-15T08:53:53Z
dc.date.issued: 2022
dc.identifier.citation: Carranza García, M., Galán Sales, F.J., Luna Romera, J.M. y Riquelme Santos, J.C. (2022). Object detection using depth completion and camera-LiDAR fusion for autonomous driving. INTEGRATED COMPUTER-AIDED ENGINEERING, 29 (3), 241-258. https://doi.org/10.3233/ICA-220681.
dc.identifier.issn: 1069-2509
dc.identifier.uri: https://hdl.handle.net/11441/156300
dc.description.abstract: Autonomous vehicles are equipped with complementary sensors to perceive the environment accurately. Deep learning models have proven to be the most effective approach for computer vision problems. Therefore, in autonomous driving, it is essential to design reliable networks to fuse data from different sensors. In this work, we develop a novel data fusion architecture using camera and LiDAR data for object detection in autonomous driving. Given the sparsity of LiDAR data, developing multi-modal fusion models is a challenging task. Our proposal integrates an efficient LiDAR sparse-to-dense completion network into the pipeline of object detection models, achieving a more robust performance at different times of the day. The Waymo Open Dataset has been used for the experimental study, which is the most diverse detection benchmark in terms of weather and lighting conditions. The depth completion network is trained with the KITTI depth dataset, and transfer learning is used to obtain dense maps on Waymo. With the enhanced LiDAR data and the camera images, we explore early and middle fusion approaches using popular object detection models. The proposed data fusion network provides a significant improvement compared to single-modal detection at all times of the day, and outperforms previous approaches that upsample depth maps with classical image processing algorithms. Our multi-modal and multi-source approach achieves a 1.5, 7.5, and 2.1 mean AP increase at day, night, and dawn/dusk, respectively, using four different object detection meta-architectures.
dc.format: application/pdf
dc.format.extent: 17
dc.language.iso: eng
dc.publisher: IOS Press
dc.relation.ispartof: INTEGRATED COMPUTER-AIDED ENGINEERING, 29 (3), 241-258.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Autonomous driving
dc.subject: data fusion
dc.subject: deep learning
dc.subject: object detection
dc.subject: transfer learning
dc.title: Object detection using depth completion and camera-LiDAR fusion for autonomous driving
dc.type: info:eu-repo/semantics/article
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.contributor.affiliation: Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
dc.identifier.doi: 10.3233/ICA-220681
dc.journaltitle: INTEGRATED COMPUTER-AIDED ENGINEERING
dc.publication.volumen: 29
dc.publication.issue: 3
dc.publication.initialPage: 241
dc.publication.endPage: 258
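The abstract describes integrating a LiDAR sparse-to-dense depth completion network with camera images and comparing early and middle fusion for object detection. As a minimal illustrative sketch of what those two fusion points mean (not the paper's actual architecture; the array shapes, channel counts, and toy "backbone" are all assumptions made here for illustration):

```python
import numpy as np

# Illustrative sketch of early vs. middle camera-LiDAR fusion.
# All shapes, channel counts, and the toy backbone are assumptions.

H, W = 64, 96
rgb = np.random.rand(H, W, 3)          # camera image, 3 channels
dense_depth = np.random.rand(H, W, 1)  # depth map after sparse-to-dense completion

def toy_backbone(x, out_channels=8):
    """Stand-in feature extractor: a per-pixel linear projection (illustrative only)."""
    w = np.random.rand(x.shape[-1], out_channels)
    return x @ w

# Early fusion: concatenate the raw modalities channel-wise, then run one backbone.
early_input = np.concatenate([rgb, dense_depth], axis=-1)          # (H, W, 4)
early_features = toy_backbone(early_input)                         # (H, W, 8)

# Middle fusion: run each modality through its own backbone, then fuse the features.
rgb_feat = toy_backbone(rgb)                                       # (H, W, 8)
depth_feat = toy_backbone(dense_depth)                             # (H, W, 8)
middle_features = np.concatenate([rgb_feat, depth_feat], axis=-1)  # (H, W, 16)

print(early_input.shape, early_features.shape, middle_features.shape)
# prints: (64, 96, 4) (64, 96, 8) (64, 96, 16)
```

The design difference is where the modalities meet: early fusion lets one network learn cross-modal interactions from raw pixels and depth, while middle fusion keeps modality-specific feature extractors and combines their outputs, which is why the abstract evaluates both against several detection meta-architectures.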

Files: Object detection using depth ... (498.3 KB, PDF) View/Open

This record appears in the following collections


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as: Attribution-NonCommercial-NoDerivatives 4.0 International