Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics
|Author||Linares Barranco, Alejandro; Ríos Navarro, José Antonio; Gómez Rodríguez, Francisco de Asís; Moeys, Diederick P.|
|Department||Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores|
|Published in||Entropy, 20 (6)|
|Abstract||Taking inspiration from biology to solve engineering problems using the organizing principles of biological neural computation is the aim of the field of neuromorphic engineering. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuators. This paper focuses on mimicking the approach-detection functionality of the retina, which is computed by one type of Retinal Ganglion Cell (RGC), and on its application to robotics. These RGCs transmit action potentials when an expanding object is detected. In this work we compare software and hardware (FPGA logic) implementations of this approach-sensitive function, and we measure the hardware latency when it is applied to robots as an attention/reaction mechanism. The visual input for these cells comes from an asynchronous, event-driven Dynamic Vision Sensor (DVS), which leads to an end-to-end event-based processing system. The software model was developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that set the number of input events needed to reach the firing threshold. For the hardware implementation, on a Spartan-6 FPGA with a 50 MHz clock, the processing time is reduced to 160 ns per event. The entropy of the output has been calculated to show that, because of several bio-inspired characteristics, the system's response to approaching objects is not fully deterministic. A Summit XL mobile robot was measured to react to an approaching object in 90 ms, which can serve as an attentional mechanism. This is faster than similar event-based approaches in robotics and comparable to human reaction latencies to visual stimuli.|
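The abstract states that the cell's output firing rate is governed by a parameter giving the number of input events needed to reach the firing threshold. A minimal accumulate-and-fire sketch of that idea in Java (the language of the paper's software model) might look like the following; the class name `ApproachCell`, the constructor parameter, and the reset-on-spike behaviour are illustrative assumptions, not the authors' actual jAER implementation:

```java
// Hypothetical sketch (not the authors' code) of the mechanism the abstract
// describes: a model RGC accumulates DVS input events and emits an output
// spike once a configurable firing threshold is reached.
public class ApproachCell {
    private final int firingThreshold; // input events needed to fire
    private int eventCount = 0;        // events accumulated since last spike

    public ApproachCell(int firingThreshold) {
        this.firingThreshold = firingThreshold;
    }

    // Process one incoming DVS event; returns true when the cell fires.
    public boolean onEvent() {
        eventCount++;
        if (eventCount >= firingThreshold) {
            eventCount = 0; // reset after spiking
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // With a threshold of 4, a burst of 12 events yields 3 output spikes;
        // a faster-expanding object (more events) raises the firing rate.
        ApproachCell cell = new ApproachCell(4);
        int spikes = 0;
        for (int i = 0; i < 12; i++) {
            if (cell.onEvent()) spikes++;
        }
        System.out.println("spikes = " + spikes);
    }
}
```

Under this reading, lowering the threshold makes the cell fire more often for the same stimulus, which is consistent with the abstract's claim that the output firing rate depends on the cell parameters.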
|Cite||Linares Barranco, A., Liu, H., Ríos Navarro, A., Gómez Rodríguez, F.d.A., Moeys, D.P. and Delbruck, T. (2018). Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics. Entropy, 20 (6)|