Show simple item record

Conference paper

DC Field | Value | Language
dc.creator | Domínguez Morales, Juan Pedro | es
dc.creator | Liu, Qian | es
dc.creator | James, Robert | es
dc.creator | Gutiérrez Galán, Daniel | es
dc.creator | Jiménez Fernández, Ángel Francisco | es
dc.creator | Davidson, Simón | es
dc.creator | Furber, Steve B. | es
dc.date.accessioned | 2020-01-22T11:35:35Z
dc.date.available | 2020-01-22T11:35:35Z
dc.date.issued | 2018
dc.identifier.citation | Domínguez Morales, J.P., Liu, Q., James, R., Gutiérrez Galán, D., Jiménez Fernández, Á.F., Davidson, S. and Furber, S. B. (2018). Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach. In IJCNN 2018: International Joint Conference on Neural Networks. Rio de Janeiro, Brazil: IEEE Computer Society.
dc.identifier.isbn | 978-1-5090-6014-6 | es
dc.identifier.issn | 2161-4407 | es
dc.identifier.uri | https://hdl.handle.net/11441/92113
dc.description.abstract | Speech recognition has become an important task to improve the human-machine interface. Taking into account the limitations of current automatic speech recognition systems, such as non-real-time cloud-based solutions or power demand, recent interest in neural networks and bio-inspired systems has motivated the implementation of new techniques. Among them, a combination of spiking neural networks and neuromorphic auditory sensors offers an alternative to carry out the human-like speech processing task. In this approach, a spiking convolutional neural network model was implemented, in which the weights of connections were calculated by training a convolutional neural network with specific activation functions, using firing rate-based static images with the spiking information obtained from a neuromorphic cochlea. The system was trained and tested with a large dataset that contains "left" and "right" speech commands, achieving 89.90% accuracy. A novel spiking neural network model has been proposed to adapt the network that has been trained with static images to a non-static processing approach, making it possible to classify audio signals and time series in real time. | es
dc.description.sponsorship | Ministerio de Economía y Competitividad TEC2016-77785-P | es
dc.format | application/pdf | es
dc.language.iso | eng | es
dc.publisher | IEEE Computer Society | es
dc.relation.ispartof | IJCNN 2018: International Joint Conference on Neural Networks (2018)
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | Speech recognition | es
dc.subject | Audio processing | es
dc.subject | Spiking neural network | es
dc.subject | Convolutional Neural Networks (CNN) | es
dc.subject | Neuromorphic hardware | es
dc.subject | Deep learning | es
dc.title | Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach | es
dc.type | info:eu-repo/semantics/conferenceObject | es
dc.type.version | info:eu-repo/semantics/submittedVersion | es
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es
dc.contributor.affiliation | Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores | es
dc.relation.projectID | TEC2016-77785-P | es
dc.relation.publisherversion | https://ieeexplore.ieee.org/document/8489381 | es
dc.identifier.doi | 10.1109/IJCNN.2018.8489381 | es
dc.contributor.group | Universidad de Sevilla. TEP-108: Robótica y Tecnología de Computadores Aplicada a la Rehabilitación | es
idus.format.extent | 8 | es
dc.eventtitle | IJCNN 2018: International Joint Conference on Neural Networks | es
dc.eventinstitution | Rio de Janeiro, Brazil | es
dc.relation.publicationplace | New York, USA | es
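
The abstract above describes training a conventional convolutional network on firing-rate "images" built from neuromorphic cochlea spike trains and then reusing the trained weights in a spiking network for real-time classification. The sketch below illustrates that general idea only; it is not the authors' implementation, and the channel counts, time bins, thresholds, helper names and the random stand-in data are assumptions made purely for illustration.

# Minimal sketch (assumptions only): convert cochlea spike events into a
# firing-rate "image" and run one convolutional layer of integrate-and-fire
# neurons that reuses weights trained offline on such images.
import numpy as np

N_CHANNELS = 64   # assumed number of cochlea frequency channels
N_BINS = 64       # assumed number of time bins per sample

def rate_image(spike_times, spike_channels, duration):
    """Bin (time, channel) spike events into an N_CHANNELS x N_BINS count image."""
    img = np.zeros((N_CHANNELS, N_BINS))
    bins = np.minimum((spike_times / duration * N_BINS).astype(int), N_BINS - 1)
    np.add.at(img, (spike_channels, bins), 1.0)
    return img / img.max() if img.max() > 0 else img

def conv2d_valid(x, w):
    """Naive single-channel 'valid' 2-D convolution."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def spiking_conv_layer(x, w, threshold=1.0, steps=20):
    """Integrate-and-fire neurons driven each step by the weighted input;
    the returned spike counts form a rate-coded feature map."""
    drive = conv2d_valid(x, w) / steps
    v = np.zeros_like(drive)          # membrane potentials
    spikes = np.zeros_like(drive)     # accumulated output spike counts
    for _ in range(steps):
        v += drive
        fired = v >= threshold
        spikes += fired
        v[fired] -= threshold         # reset by subtraction after each spike
    return spikes

# Usage with random data standing in for real cochlea events and trained weights.
rng = np.random.default_rng(0)
img = rate_image(rng.uniform(0.0, 1.0, 500), rng.integers(0, N_CHANNELS, 500), 1.0)
kernel = rng.normal(0.0, 0.1, (3, 3))          # placeholder for a CNN-trained kernel
print(spiking_conv_layer(img, kernel).shape)   # -> (62, 62)

Counting spikes from integrate-and-fire neurons driven by the same weighted sums approximates the rectified activations of the trained network, which is the usual rationale for this kind of rate-based CNN-to-SNN weight transfer.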

Files | Size | Format | View | Description
Deep Spiking Neural Network ... | 988.2Kb | PDF | View/Open |

This record appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as: Attribution-NonCommercial-NoDerivatives 4.0 International