Articles (Arquitectura y Tecnología de Computadores)
Permanent URI for this collection: https://hdl.handle.net/11441/11292
Recent submissions
Article: Análisis de la tasa de abandono en un Centro con varios Grados en Ingeniería Informática (2017). Ruiz Cortés, David; Gómez Rodríguez, Francisco de Asís; Ruiz Reina, José Luis; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. Departamento de Ciencias de la Computación e Inteligencia Artificial.
This paper analyses the impact that switching between the three Degrees in Computer Engineering taught at one specific School has on the dropout rate. The analysis was carried out by the School's management team in response to the reports issued after the accreditation-renewal visits for those degrees. Our main conclusions are: i) switching between Computer Engineering degrees always has a negative effect on the dropout rate, with this effect ranging between 3% and 20%; ii) in some cases the switch responds to academic reasons, but economic reasons are also suggested, given the savings it can entail; iii) approximately one third of our students abandon their Computer Engineering studies; iv) over the last five years the dropout rate has remained in line with the figures stated in the degree verification reports and with the national average for the Engineering and Architecture branch of knowledge; v) the indicator systems defined by the quality assurance systems of the different degrees are not always homogeneous, which makes any kind of analysis difficult.

Article: Bio-inspired computational memory model of the Hippocampus: An approach to a neuromorphic spike-based Content-Addressable Memory (Elsevier, 2024-10). Casanueva Morato, Daniel; Ayuso Martínez, Álvaro; Domínguez Morales, Juan Pedro; Jiménez Fernández, Ángel Francisco; Jiménez Moreno, Gabriel; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia, Innovación y Universidades (MICIU). España; European Commission (EC). Fondo Europeo de Desarrollo Regional (FEDER); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
The brain has computational capabilities that surpass those of modern systems, being able to solve complex problems efficiently in a simple way. Neuromorphic engineering aims to mimic biology in order to develop new systems capable of incorporating such capabilities. Bio-inspired learning systems continue to be a challenge that must be solved, and much work needs to be done in this regard. Among all brain regions, the hippocampus stands out as an autoassociative short-term memory with the capacity to learn and recall memories from any fragment of them. These characteristics make the hippocampus an ideal candidate for developing bio-inspired learning systems that, in addition, resemble content-addressable memories. Therefore, in this work we propose a bio-inspired spiking content-addressable memory model based on the CA3 region of the hippocampus with the ability to learn, forget and recall memories, both orthogonal and non-orthogonal, from any fragment of them. The model was implemented on the SpiNNaker hardware platform using Spiking Neural Networks. A set of experiments based on functional, stress and applicability tests was performed to demonstrate its correct functioning. This work presents the first hardware implementation of a fully functional bio-inspired spiking hippocampal content-addressable memory model, paving the way for the development of future, more complex neuromorphic systems.
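The content-addressable behaviour described above (recalling a stored memory from any fragment of it) can be illustrated, outside SpiNNaker and without spiking neurons, by a classical auto-associative memory. A minimal sketch, not the authors' model; the pattern size, the number of stored memories and the Hopfield-style update are illustrative assumptions:

    # Illustrative sketch: auto-associative (content-addressable) recall from a partial cue.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bits, n_patterns = 64, 3
    patterns = rng.choice([-1, 1], size=(n_patterns, n_bits))   # memories to store

    # Hebbian storage: sum of outer products, no self-connections
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=10):
        """Iteratively complete a (possibly partial or noisy) cue."""
        s = cue.copy()
        for _ in range(steps):
            s = np.where(W @ s >= 0, 1, -1)                     # synchronous update
        return s

    # Build a fragment of pattern 0: keep the first half, blank out the rest
    cue = patterns[0].copy()
    cue[n_bits // 2:] = 0
    recovered = recall(cue)
    print("bits recovered correctly:", int((recovered == patterns[0]).sum()), "/", n_bits)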
Article: Localizing unknown nodes with an FPGA-enhanced edge computing UAV in wireless sensor networks: Implementation and evaluation (Elsevier, 2024-10). Mani, Rahma; Ríos-Navarro, Antonio; Sevillano Ramos, José Luis; Liouane, Noureddine; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
There is great interest in real-time applications that determine the exact location of sensor nodes deployed in an area of interest. In this paper, we present a novel approach that combines the Kalman filter and a regularized bounding box method for localizing unknown nodes in an area using an FPGA-enhanced edge computing UAV whose trajectory is known and is represented as the positions of many anchors. The UAV is equipped with a GPS system that allows it to gather location data of sensor nodes as it moves around its environment. We employ a regularized bounding box to predict the positions of the unknown nodes using regularization factors, and we use the Kalman filter algorithm to smooth and improve the accuracy of the estimated positions of the sensor nodes to be localized. In order to localize the unknown nodes, the UAV receives the number of hops from each node and uses this information as input to the localization algorithm. Furthermore, the use of an FPGA board allows for real-time processing of sensory data, enabling the UAV to make fast and accurate decisions in dynamic environments. The localization algorithm was implemented on the Zynq MiniZed 7007s evaluation board using Xilinx blocks in Simulink, and the generated code was converted into VHDL using Xilinx System Generator. The algorithm was simulated and synthesized using Vivado. The proposed system was evaluated by comparing the performance of two different implementations, hardware and software; the FPGA hardware implementation stands out for its ease of testing and fast execution. Our results show that this approach can efficiently locate unknown nodes with low latency and high accuracy: the execution time of the FPGA-integrated algorithm is about 60 times lower than that of the software implementation, and the power consumption is about 100 mW, which confirms the suitability of FPGAs for localization in WSNs and offers a promising solution for various mobile WSN applications.
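The pipeline above combines coarse bounding-box position estimates with a Kalman filter that smooths them. A minimal sketch of that smoothing step for one static node; this is not the FPGA design described above, and the constant-velocity motion model, the noise covariances and the synthetic measurements are assumptions:

    # Illustrative sketch: linear Kalman filter smoothing noisy 2-D position estimates.
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],      # constant-velocity state transition
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],       # only position (x, y) is measured
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)              # process noise
    R = 4.0 * np.eye(2)               # measurement noise (coarse bounding-box centre)

    x = np.zeros(4)                   # state: [x, y, vx, vy]
    P = np.eye(4)

    def kalman_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(1)
    true_pos = np.array([5.0, 3.0])   # a static unknown node
    for _ in range(50):
        z = true_pos + rng.normal(0, 2.0, size=2)   # noisy position estimate
        x, P = kalman_step(x, P, z)
    print("smoothed estimate:", np.round(x[:2], 2), "true:", true_pos)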
Article: Editorial: Emerging talents in neuromorphic engineering (Frontiers Media, 2024-02-28). Rostro González, Horacio; Domínguez Morales, Juan Pedro; Girau, Bernard; Pérez-Peña, Fernando; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
This Research Topic provides a platform to highlight the outstanding contributions of emerging talents in the field of neuromorphic engineering. Through this dedicated series, we aim to showcase the promising work of student researchers within Neuromorphic Engineering.

Article: Roadmap to neuromorphic computing with emerging technologies (American Institute of Physics, 2024-10-21). Mehonic, Adnan; Ielmini, Daniele; Roy, Kaushik; Mutlu, Onur; Kvatinsky, Shahar; Serrano Gotarredona, María Teresa; Linares Barranco, Bernabé; Spiga, Sabina; Savel’ev, Sergey; Balanov, Alexander G.; Chawla, Nitin; Desoli, Giuseppe; Malavena, Gerardo; Monzio Compagnoni, Christian; Wang, Zhongrui; Yang, J. Joshua; Sarwat, Syed Ghazi; Sebastian, Abu; Mikolajick, Thomas; Slesazeck, Stefan; Noheda, Beatriz; Dieny, Bernard; Hou, Tuo Hung; Varri, Akhil; Brückerhoff-Plückelmann, Frank; Pernice, Wolfram; Zhang, Xixiang; Pazos, Sebastian; Lanza, Mario; Wiefels, Stefan; Dittmann, Regina; Ng, Wing H.; Buckwell, Mark; Cox, Horatio R.J.; Mannion, Daniel J.; Kenyon, Anthony J.; Lu, Yingming; Yang, Yuchao; Querlioz, Damien; Hutin, Louis; Vianello, Elisa; Chowdhury, Sayeed Shafayet; Mannocci, Piergiulio; Cai, Yimao; Sun, Zhonghao; Pedretti, Giacomo; Strachan, John Paul; Strukov, Dmitri B.; Le Gallo, Manuel; Ambrogio, Stefano; Valov, Ilia; Waser, Rainer; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
The growing adoption of data-driven applications, such as artificial intelligence (AI), is transforming the way we interact with technology. Currently, the deployment of AI and machine learning tools in previously uncharted domains generates considerable enthusiasm for further research, development, and utilization. These innovative applications often provide effective solutions to complex, longstanding challenges that have remained unresolved for years. By expanding the reach of AI and machine learning, we unlock new possibilities and facilitate advancements in various sectors. These include, but are not limited to, scientific research, education, transportation, smart city planning, eHealth, and the metaverse. However, our predominant focus on performance can sometimes lead to critical oversights. For instance, our constant dependence on immediate access to information might cause us to ignore the energy consumption and environmental consequences associated with the computing systems that enable such access. Balancing performance with sustainability is crucial for the technology's continued growth.

Article: Predictive Maintenance Edge Artificial Intelligence Application Study Using Recurrent Neural Networks for Early Aging Detection in Peristaltic Pumps (Institute of Electrical and Electronics Engineers, 2024-11-15). Montes Sánchez, Juan Manuel; Yoshifumi Nishio; Vicente Díaz, Saturnino; Jiménez Fernández, Ángel Francisco; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Agencia Estatal de Investigación. España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
Peristaltic pumps are widely used in many industrial applications, especially in medical devices. Their reliability depends on proper maintenance, which includes regularly replacing the tubes completely due to the aging of the materials. The proper use of predictive maintenance techniques could potentially improve the efficiency of maintenance interventions and prevent failures by providing a way to determine when the tube has passed its replacement time.
We recorded a dataset using six different sensors (three accelerometers, one gyroscope, one magnetometer, and one microphone) and several cassettes (three new units and three units with expired life span). The recording was done at the highest possible frequency (100–6667 Hz, different for each sensor) and then downsampled several times to obtain frequencies as low as 12 Hz. This dataset is now publicly available. We trained 939 different models, which resulted from combining as inputs all the different sensors except the microphone, and four basic recurrent neural network architectures: one or two layers of either gated recurrent units or long short-term memory cells, with different numbers of nodes per layer (from 2 to 64). Among all trained models, we selected the ten best-performing networks in terms of both accuracy and complexity. All of them reached an F1 score of 0.99 or 1 with holdout cross-validation. Those models were deployed on four different edge AI devices. For all combinations of model and edge AI device we obtained metrics of memory size (from 0.3% to 160.6% RAM, and from 0.9% to 21.3% flash), inference time (from 0.39 to 1463.91 ms), and average consumption (from 0.15 to 5.30 mA). Nine out of ten models were proven viable for deployment. We concluded that the four models based on magnetometer data were significantly better in terms of consumption and inference time. To the best of our knowledge, the use of magnetometer data is a very uncommon approach to failure detection in predictive maintenance applications, and this is probably the first time it has been used for peristaltic pump aging detection, so our results are very promising for future applications. Also, since most of the trained models use few resources, we have shown that our approach is perfectly compatible with running other communication and control algorithms on the same device, which is ideal for easy integration and scalability in industrial systems. Some limitations for real deployment involve environmental factors (noise) and long-term monitoring, so we also proposed a protocol that should reduce the impact of those factors by taking measurements in a controlled way.
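The study above sweeps small recurrent networks (one or two GRU or LSTM layers with 2 to 64 units) over windows of inertial sensor data. A minimal sketch of such a classifier in PyTorch; it is not the authors' code, and the window length, feature count, layer sizes and random tensors are placeholder assumptions:

    # Illustrative sketch: a small GRU classifier for sensor windows (new vs. expired tube).
    import torch
    import torch.nn as nn

    class PumpAgingGRU(nn.Module):
        def __init__(self, n_features=3, hidden=16, n_layers=1):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, num_layers=n_layers, batch_first=True)
            self.head = nn.Linear(hidden, 2)      # classes: new tube vs. expired tube

        def forward(self, x):                     # x: (batch, time, features)
            _, h = self.gru(x)
            return self.head(h[-1])               # logits from the last layer's final state

    model = PumpAgingGRU()
    x = torch.randn(8, 100, 3)                    # 8 windows of 100 samples, 3 axes (placeholder)
    labels = torch.randint(0, 2, (8,))
    loss = nn.CrossEntropyLoss()(model(x), labels)
    loss.backward()                               # one illustrative training step
    print("logits shape:", model(x).shape, "loss:", float(loss))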
Article: Analog Implementation of a Spiking System for Performing Arithmetic Logic Operations on Mixed-Signal Neuromorphic Processors (Wiley, 2024-11). Ayuso Martínez, Álvaro; Casanueva Morato, Daniel; Domínguez Morales, Juan Pedro; Indiveri, Giacomo; Jiménez Fernández, Ángel Francisco; Jiménez Moreno, Gabriel; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN). España; European Commission. Fondo Social Europeo (FSO); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
In recent years, physical limitations in the integration of transistors in computers have forced the search for low-computational-power alternatives in hardware design. Although doubts may arise regarding the limit of the relationship between performance and power consumption in computers, these disappear when considering the brain, which is one of the most efficient computing systems. In this way, bio-inspired applications try to benefit from the low power consumption present in the biological nervous system. Previous work has shown the feasibility of implementing spiking neural networks that operate in a Boolean manner on digital platforms, such as SpiNNaker, using basic logic gates and a spiking memory, which suggests the potential for constructing a low-power spiking computer. This work takes a first step towards the implementation of a spiking central processing unit by developing an arithmetic logic unit, which is an essential block for instruction execution, and demonstrating its correct operation on Dynap-SE1. The results confirm the feasibility of using this Boolean approach on this platform, despite certain limitations in the number of inputs and operating frequencies of the blocks, and pave the way for the construction of a spiking computer.

Article: Time series segmentation for recognition of epileptiform patterns recorded via microelectrode arrays in vitro (Public Library of Science, 2025-01). Galeote-Checa, Gabriel; Panuccio, Gabriella; Canal-Alonso, Ángel; Linares Barranco, Bernabé; Serrano Gotarredona, Teresa; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; European Union.
Epilepsy is a prevalent neurological disorder that affects approximately 1% of the global population. Approximately 30-40% of patients respond poorly to antiepileptic medications, which has a significant negative impact on their quality of life. Closed-loop deep brain stimulation (DBS) is a promising treatment for individuals who do not respond to medical therapy. To achieve effective seizure control, algorithms play an important role in identifying relevant electrographic biomarkers from local field potentials (LFPs) to determine the optimal stimulation timing. In this regard, the detection and classification of events from ongoing brain activity, while achieving low power consumption through computationally inexpensive implementations, represents a major challenge in the field. To address this challenge, here we present two algorithms, ZdensityRODE and AMPDE, for identifying relevant events from LFPs by utilizing time series segmentation (TSS), which involves extracting different levels of information from the LFP and identifying relevant events within it. The algorithms were validated against epileptiform activity induced by 4-aminopyridine in mouse hippocampus-cortex (CTX) slices recorded via microelectrode arrays, as a case study. The ZdensityRODE algorithm showed a precision and recall of 93% for ictal event detection and 42% precision for interictal event detection, while the AMPDE algorithm attained a precision of 96% and recall of 90% for ictal event detection and 54% precision for interictal event detection. While initially trained specifically for detecting ictal activity, these algorithms can be fine-tuned for improved interictal detection, aiming at seizure prediction. Our results suggest that these algorithms can effectively capture epileptiform activity, supporting seizure detection and, possibly, seizure prediction and control. This opens the opportunity to design new algorithms based on this approach for closed-loop stimulation devices, using more elaborate decisions and more accurate clinical guidelines.
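ZdensityRODE and AMPDE are the authors' own algorithms and are not reproduced here; the sketch below only illustrates the general idea of segmenting an LFP-like trace into events by thresholding windowed activity. The synthetic signal, window length and threshold are arbitrary assumptions:

    # Illustrative sketch: threshold-based event segmentation of an LFP-like time series.
    import numpy as np

    fs = 1000                                    # sampling rate (Hz), assumed
    rng = np.random.default_rng(2)
    lfp = rng.normal(0, 1.0, 20 * fs)            # 20 s of background activity
    lfp[5*fs:7*fs] += 6 * np.sin(2 * np.pi * 8 * np.arange(2*fs) / fs)  # injected burst

    win = int(0.25 * fs)                         # 250 ms analysis windows
    z0 = (lfp - lfp.mean()) / lfp.std()
    power = np.array([np.mean(z0[i:i+win] ** 2) for i in range(0, len(z0) - win, win)])
    active = power > 3.0                         # windows whose power exceeds a threshold

    # merge consecutive active windows into events (start, end) in seconds
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * win / fs, i * win / fs)); start = None
    if start is not None:
        events.append((start * win / fs, len(active) * win / fs))
    print("detected events (s):", events)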
Article: A Bio-inspired Implementation of a Sparse-learning Spike-based Hippocampus Memory Model (Institute of Electrical and Electronics Engineers, 2024). Casanueva Morato, Daniel; Ayuso Martínez, Álvaro; Domínguez Morales, Juan Pedro; Jiménez Fernández, Ángel Francisco; Jiménez Moreno, Gabriel; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN). España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
The brain is capable of solving complex problems simply and efficiently, far surpassing modern computers. In this regard, neuromorphic engineering focuses on mimicking the basic principles that govern the brain in order to develop systems that achieve such computational capabilities. Within this field, bio-inspired learning and memory systems are still a challenge to be solved, and this is where the hippocampus is involved. It is the region of the brain that acts as a short-term memory, allowing the learning and storage of information from all the sensory nuclei of the cerebral cortex and its subsequent recall. In this work, we propose a novel bio-inspired hippocampal memory model with the ability to learn memories, recall them from a fragment of themselves (a cue) and even forget memories when trying to learn others with the same cue. This model has been implemented on SpiNNaker using Spiking Neural Networks, and a set of experiments was performed to demonstrate its correct operation. This work presents the first simulation, implemented on a special-purpose hardware platform for Spiking Neural Networks, of a fully functional bio-inspired spike-based hippocampus memory model, paving the way for the development of future, more complex neuromorphic systems.

Article: A 128×128 Electronically Multi-Foveated Dynamic Vision Sensor With Real-Time Resolution Reconfiguration (Institute of Electrical and Electronics Engineers, 2024-12). Faramarzi, Farnaz; Linares Barranco, Bernabé; Serrano Gotarredona, María Teresa; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; CHIST-ERA; European Union (UE); Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICIU). España; European Commission (EC). Fondo Europeo de Desarrollo Regional (FEDER).
This paper presents the design and implementation of a 128 × 128 electronically foveated dynamic vision sensor (EF-DVS) fabricated using 350 nm CMOS technology. The EF-DVS integrates a novel pixel grouping approach that permits real-time dynamic resolution adjustments via external digital signals. Previous approaches rely on physically crafting high- and low-resolution regions, which requires a mechanical setup for tracking moving objects within the fovea. Here, our innovation supports flexible and fast operation modes, acting on amplified photocurrents, allowing the sensor to operate in both high-resolution and low-resolution settings, and to configure multiple high-resolution regions of interest (ROIs) with arbitrary shapes and sizes within the pixel array in real time. Although the pixel circuitry is more complex than that of its un-foveated predecessor, we have kept the same pixel area, at the cost of slightly higher fixed-pattern noise (FPN). The sensor achieves a latency of 3.66 μs and demonstrates a contrast sensitivity down to 1.03%. It maintains a dynamic range exceeding 120 dB and an intra-scene dynamic range above 60 dB. Notably, the power consumption is reduced with respect to its predecessor, down to 0.7 mW at 100 Keps when configured in low-resolution mode with a 2 × 2 pixel grouping. The ability to dynamically adjust spatial resolution reduces noise event rates, enhances sensitivity, and lowers both data bandwidth and processing requirements. These features make the EF-DVS a suitable candidate for applications in robotics, surveillance, and real-time monitoring systems where efficient data processing and low latency are critical.
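The sensor above performs pixel grouping electronically, in-pixel; as a rough software analogy only, the sketch below remaps event addresses so that a region of interest keeps full resolution while the rest of the array is grouped into 2 x 2 super-pixels. The array size, the ROI bounds and the synthetic events are assumptions:

    # Illustrative software analogy: foveated remapping of DVS event addresses.
    import numpy as np

    rng = np.random.default_rng(3)
    events = rng.integers(0, 128, size=(1000, 2))         # (x, y) addresses of 1000 events

    def remap(ev):
        x, y = ev
        if 40 <= x < 80 and 40 <= y < 80:                  # inside the fovea: keep full resolution
            return ("hi", x, y)
        return ("lo", x // 2, y // 2)                      # outside: 2x2 grouping

    remapped = [remap(ev) for ev in events]
    n_hi = sum(1 for r in remapped if r[0] == "hi")
    distinct = len(set(remapped))                          # fewer addresses -> lower bandwidth
    print(f"{n_hi} fovea events at full resolution out of {len(events)} total; "
          f"{distinct} distinct (mode, x, y) addresses after remapping")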
Article: Competitive cost-effective memory access predictor through short-term online SVM and dynamic vocabularies (Elsevier, 2025-03). Sánchez Cuevas, Pablo; Díaz del Río, Fernando; Casanueva Morato, Daniel; Ríos Navarro, José Antonio; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; MCIN/AEI 10.13039/501100011033; SAINEVRA; SANEVEC; Ministry for Digital Transformation and Public Function.
In recent years, there has been a significant increase in the processing of massive amounts of data, driven by the growing demands of mobile systems, parallel and distributed architectures, and real-time systems. This applies to various types of platforms, both specific and general-purpose. Despite numerous advancements in Computer Systems, a critical challenge remains: the efficiency and speed of memory access. This bottleneck is being addressed through cache prefetching, that is, by predicting the next memory address to be accessed so that the data the processor will use shortly are always already prefetched into the cache. This paper explores established intelligent techniques for address prediction, examining their limitations and analyzing the memory access patterns of popular software applications. Building on the successes of previous intelligent predictors based on Machine and Deep Learning models, we introduce a new predictor, SVM4AP (Support Vector Machine For Address Prediction), designed to overcome the identified drawbacks of its predecessors. The architecture of SVM4AP improves the trade-off between performance and cost compared to previous proposals in the literature, achieving high accuracy through short-term learning. Comparisons are made with two prominent predictors from the literature: the classical DFCM (Differential Finite Context Method) and the contemporary Deep Learning-based DCLSTM (Doubly Compressed Long-Short Term Memory). The results demonstrate that SVM4AP achieves superior cost-effectiveness across various configurations. Simulations reveal that SVM4AP configurations dominate both DFCM and DCLSTM counterparts, forming the majority of the first Pareto front. Particularly noteworthy is the significant advantage of our proposal for small-size predictors. Furthermore, we release an open-source tool enabling the scientific community to reproduce the results presented in this paper using a set of benchmark traces.
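SVM4AP itself is not reproduced here; the sketch below only illustrates the general idea of casting next-address prediction as classification over recent address deltas with an SVM, similar in spirit to context-based predictors such as DFCM. The synthetic access trace, the context length and the SVM settings are assumptions:

    # Illustrative sketch: predicting the next address delta from the last few deltas with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    # Synthetic access trace: a strided walk with a periodic larger jump.
    addr, trace = 0, []
    for i in range(600):
        addr += 64 if i % 7 else 4096
        trace.append(addr)
    deltas = np.diff(trace)

    k = 4                                             # context: the last k deltas
    X = np.array([deltas[i:i+k] for i in range(len(deltas) - k)])
    y = deltas[k:]                                    # next delta (the "vocabulary" of deltas)

    split = 500
    clf = SVC(kernel="rbf", C=10.0).fit(X[:split], y[:split])
    pred = clf.predict(X[split:])
    print(f"next-delta accuracy on held-out accesses: {np.mean(pred == y[split:]):.2f}")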
Article: Energy–time modelling of distributed multi-population genetic algorithms with dynamic workload in HPC clusters (Elsevier, 2025-06). Escobar, Juan José; Sánchez Cuevas, Pablo; Prieto, Beatriz; Kızıltepe, Rukiye Savran; Díaz del Río, Fernando; Kimovski, Dragi; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; MICIU/AEI/10.13039/501100011033; ESF+ ("NextGenerationEU/PRTR"); Ministerio de Universidades; Universidad de Granada; Ministerio de Economía, Transformación e Industria.
Time and energy efficiency is a highly relevant objective in high-performance computing systems, in which executing the tasks carries a high cost. Among these tasks, evolutionary algorithms deserve consideration due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy-time behaviour of applications based on multi-population genetic algorithms, which dynamically distribute the evaluation of individuals among the CPU-GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy-time values of the target application as input. When the new model is analysed and compared with another based on linear regression, the one proposed in this work significantly improves on the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 for the baseline approach.

Article: A new approach for software-simulation of membrane systems using a multi-thread programming model (Elsevier, 2024). Cascado Caballero, Daniel; Díaz del Río, Fernando; Cagigas Muñiz, Daniel; Orellana Martín, David; Pérez Hurtado de Mendoza, Ignacio; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
The evolution of the simulation and implementation of P systems has been intense since this theoretical model of computation was created. In the field of software simulation of P systems, the proposals made so far have taken advantage mainly of the parallelism of GPUs, but not of the parallelism of existing multi-core processors. This paper proposes a new model for simulating P systems using a multi-threaded approach on a multi-core processor. This simulation approach establishes a new paradigm that is entirely in line with the philosophy of P systems: since objects must react in parallel, asynchronously and autonomously with other objects, simulation using multiple synchronized threads closely mimics the behavior of objects within a membrane. This proposal has been implemented and tested using a simulator programmed in C#, and its correct operation has been verified for confluent and non-confluent systems. The experimental results confirm that the simulator scales well with the number of hardware threads of a multiprocessor. The obtained results show that the new model is correct and that it can be extended to other, more complex types of P systems, in order to discover the limits of this multi-threaded approach when running on multi-core processors.
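The multi-threaded simulator above is written in C# and is not reproduced here; the sketch below only illustrates the underlying idea of objects evolving in parallel threads that synchronise after every evolution step, using Python threads and a barrier. The rewriting rule, the object multiset and the step count are assumptions:

    # Illustrative sketch: threads applying a rule to a shared multiset, synchronised per step.
    import threading
    from collections import Counter

    membrane = Counter({"a": 8, "b": 0})          # multiset of objects in one membrane
    lock = threading.Lock()
    N_WORKERS, STEPS = 4, 3
    barrier = threading.Barrier(N_WORKERS)

    def worker():
        for _ in range(STEPS):
            with lock:                            # apply rule  a -> b b  once per step
                if membrane["a"] > 0:
                    membrane["a"] -= 1
                    membrane["b"] += 2
            barrier.wait()                        # all workers finish the step together

    threads = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("membrane contents after", STEPS, "steps:", dict(membrane))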
Article: Optimized Machine Learning Classifiers for Symptom-Based Disease Screening (MDPI, 2024-09-14). Fuster-Palà, Aub; Luna Perejón, Francisco; Miró Amarante, María Lourdes; Domínguez Morales, Manuel Jesús; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
This work presents a disease detection classifier based on symptoms encoded by their severity. This model is presented as part of the solution to the saturation of the healthcare system, aiding in the initial screening stage. An open-source dataset is used, which undergoes pre-processing and serves as the data source to train and test various machine learning models, including support vector machines (SVMs), random forests (RFs), k-nearest neighbors (KNN), and artificial neural networks (ANNs). A three-phase optimization process is developed to obtain the best classifier: first, the dataset is pre-processed; second, a grid search with several hyperparameter variations is performed for each classifier; and, finally, the best models obtained are subjected to additional filtering processes. The best-performing model, selected on the basis of performance and execution time, is a KNN with 2 neighbors, which achieves an accuracy and F1 score of over 98%. These results demonstrate the effectiveness and improvement of the evaluated models compared to previous studies, particularly in terms of accuracy. Although the ANN model has a longer execution time compared to KNN, it is retained in this work due to its potential to handle more complex datasets in a real clinical context.
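The screening pipeline above selects its final model (a KNN with 2 neighbors) through a grid search over hyperparameters. A minimal sketch of that kind of model selection with scikit-learn; the synthetic severity-encoded dataset, the label rule and the parameter grid are arbitrary assumptions, not the authors' data or settings:

    # Illustrative sketch: grid search over a KNN classifier on severity-encoded symptom vectors.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(5)
    n_patients, n_symptoms, n_diseases = 600, 20, 4
    X = rng.integers(0, 5, size=(n_patients, n_symptoms)).astype(float)  # severities 0-4
    # toy label rule so the problem is learnable: disease = dominant symptom group
    y = X.reshape(n_patients, n_diseases, -1).sum(axis=2).argmax(axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    grid = GridSearchCV(
        KNeighborsClassifier(),
        param_grid={"n_neighbors": [2, 3, 5, 7], "weights": ["uniform", "distance"]},
        scoring="f1_macro",
        cv=5,
    )
    grid.fit(X_tr, y_tr)
    print("best params:", grid.best_params_)
    print("held-out macro F1:", grid.score(X_te, y_te))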
Article: Energy efficiency in edge TPU vs. embedded GPU for computer-aided medical imaging segmentation and classification (Pergamon-Elsevier, 2024). Rodríguez Corral, José María; Civit Masot, Javier; Luna Perejón, Francisco; Díaz Cano, Ignacio; Morgado-Estévez, Arturo; Domínguez Morales, Manuel Jesús; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
This work compares the timing and energy performance of the segmentation and classification of medical images implemented on Edge TPU and embedded GPU processors. We use glaucoma diagnosis based on color fundus images as an example to show the possibility of performing segmentation and classification in real time on embedded boards and to highlight the different energy requirements of the studied implementations. Several other works develop the use of segmentation and feature extraction techniques to detect glaucoma, among many other pathologies, with deep neural networks. Memory limitations and low processing capabilities of embedded accelerated systems (EAS) limit their use for deep network-based system training. However, including specific acceleration hardware, such as NVIDIA's Maxwell GPU or Google's Edge TPU, enables them to perform inferences using complex pre-trained networks in very reasonable times. In this study, we evaluate the timing and energy performance of two EAS equipped with Machine Learning (ML) accelerators executing an example diagnostic tool developed in a previous work. For optic disc (OD) and cup (OC) segmentation, the obtained prediction times per image are under 29 and 43 ms using Edge TPUs and Maxwell GPUs, respectively. Prediction times for the classification subsystem are lower than 10 and 14 ms for Edge TPUs and Maxwell GPUs, respectively. Regarding energy usage, in approximate terms, for OD segmentation Edge TPUs and Maxwell GPUs use 38 and 190 mJ per image, respectively. For fundus classification, Edge TPUs and Maxwell GPUs use 45 and 70 mJ, respectively.

Article: Brain Tumor Detection Using Magnetic Resonance Imaging and Convolutional Neural Networks (MDPI, 2024-09-21). Martínez-Del-Río-Ortega, Rafael; Civit Masot, Javier; Luna Perejón, Francisco; Domínguez Morales, Manuel Jesús; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
Early and precise detection of brain tumors is critical for improving clinical outcomes and patient quality of life. This research focused on developing an image classifier using convolutional neural networks (CNNs) to detect brain tumors in magnetic resonance imaging (MRI). Brain tumors are a significant cause of morbidity and mortality worldwide, with approximately 300,000 new cases diagnosed annually. MRI offers excellent spatial resolution and soft tissue contrast, making it indispensable for identifying brain abnormalities. However, accurate interpretation of MRI scans remains challenging due to human subjectivity and variability in tumor appearance. This study employed CNNs, which have demonstrated exceptional performance in medical image analysis, to address these challenges. Various CNN architectures were implemented and evaluated to optimize brain tumor detection. The best model achieved an accuracy of 97.5%, sensitivity of 99.2%, and binary accuracy of 98.2%, surpassing previous studies. These results underscore the potential of deep learning techniques in clinical applications, significantly enhancing diagnostic accuracy and reliability.
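The study above evaluates several CNN architectures for binary tumor detection on MRI. A compact Keras sketch of such a classifier; it is not one of the evaluated architectures, and the input size, filter counts and placeholder tensors are assumptions:

    # Illustrative sketch: a small CNN for binary tumor / no-tumor classification of MRI slices.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 1)),          # grayscale MRI slice (assumed size)
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),        # P(tumor)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Placeholder tensors standing in for a labelled MRI dataset.
    x = np.random.rand(32, 128, 128, 1).astype("float32")
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(x, y, epochs=1, batch_size=8, verbose=0)
    print(model.predict(x[:2], verbose=0).ravel())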
Article: Potential of Large Language Models in Health Care: Delphi Study (JMIR Publications, Inc., 2024-05-13). Denecke, Kerstin; May, Richard; Rivera Romero, Octavio; de Arriba-Muñoz, Antonio; Chapman, Wendy; Chow, James C.L.; Lacalle Remigio, Juan Ramón; Ropero Rodríguez, Jorge; Sevillano Ramos, José Luis; Verspoor, Karin; Universidad de Sevilla. Departamento de Tecnología Electrónica; Universidad de Sevilla. Departamento de Medicina Preventiva y Salud Pública; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.
Background: A large language model (LLM) is a machine learning model inferred from text data that captures subtle patterns of language use in context. Modern LLMs are based on neural network architectures that incorporate transformer methods. They allow the model to relate words together through attention to multiple words in a text sequence. LLMs have been shown to be highly effective for a range of tasks in natural language processing (NLP), including classification and information extraction tasks and generative applications. Objective: The aim of this adapted Delphi study was to collect researchers' opinions on how LLMs might influence health care and on the strengths, weaknesses, opportunities, and threats of LLM use in health care. Methods: We invited researchers in the fields of health informatics, nursing informatics, and medical NLP to share their opinions on LLM use in health care. We started the first round with open questions based on our strengths, weaknesses, opportunities, and threats framework. In the second and third rounds, the participants scored these items. Results: The first, second, and third rounds had 28, 23, and 21 participants, respectively. Almost all participants (26/28, 93% in round 1 and 20/21, 95% in round 3) were affiliated with academic institutions. Agreement was reached on 103 items related to use cases, benefits, risks, reliability, adoption aspects, and the future of LLMs in health care. Participants offered several use cases, including supporting clinical tasks, documentation tasks, and medical research and education, and agreed that LLM-based systems will act as health assistants for patient education. The agreed-upon benefits included increased efficiency in data handling and extraction, improved automation of processes, improved quality of health care services and overall health outcomes, provision of personalized care, accelerated diagnosis and treatment processes, and improved interaction between patients and health care professionals. In total, 5 risks to health care in general were identified: cybersecurity breaches, the potential for patient misinformation, ethical concerns, the likelihood of biased decision-making, and the risk associated with inaccurate communication. Overconfidence in LLM-based systems was recognized as a risk to the medical profession. The 6 agreed-upon privacy risks included the use of unregulated cloud services that compromise data security, exposure of sensitive patient data, breaches of confidentiality, fraudulent use of information, vulnerabilities in data storage and communication, and inappropriate access or use of patient data. Conclusions: Future research related to LLMs should not only focus on testing their possibilities for NLP-related tasks but also consider the workflows the models could contribute to and the requirements regarding quality, integration, and regulations needed for successful implementation in practice.

Article: A cellular automata model of a laser reproducing laser passive and active Q-Switching (Elsevier, 2025-01). Jiménez-Morales, Francisco de Paula; Guisado Lizar, José Luis; Guerra, José Manuel; Universidad de Sevilla. Departamento de Física de la Materia Condensada; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia, Innovación y Universidades (MICINN). España.
The Q-switching (QS) phenomenon in lasers refers to the production of high-intensity pulses by means of a saturable absorber (passive method) or by modifying the reflectivity or losses of the intracavity optics or mirrors (active method). Theoretically, QS is studied through the laser rate equations, which are useful to predict, at least qualitatively and roughly, the fundamental aspects of laser dynamics. However, specific details such as the spatial distribution of the intensity of the laser emission escape the simplicity of the rate equations. In this work we present a two-dimensional cellular automaton (CA) model to study the QS phenomenology for both the passive and the active method. To simulate the passive method, we consider a spatial distribution of cells whose physical properties emulate those of the saturable absorbers. For the active method, we introduce a periodic modulation of the lifetime of the photons inside the cavity. We have performed numerous numerical simulations which show that, despite the simplicity of the evolution rules, the CA model is capable of reproducing the main operating dynamics of the laser when system parameters such as the pumping probability and the properties of the absorber are modified.
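A heavily simplified toy sketch of a 2-D cellular-automaton laser follows; it is not the model presented above. Each cell carries an excited/ground state and a photon count, excited cells emit when photons are present in their neighbourhood, photons decay with a finite lifetime, and cells are re-pumped at random. Lattice size, pumping probability, photon lifetime and noise level are arbitrary assumptions:

    # Illustrative toy 2-D cellular-automaton laser with pumping and finite photon lifetime.
    import numpy as np

    L, steps = 64, 200
    pump_prob = 0.05          # probability that a ground-state cell is pumped per step
    photon_life = 8           # mean photon lifetime, in steps
    noise_photons = 4         # spontaneous photons injected per step
    rng = np.random.default_rng(6)

    excited = np.zeros((L, L), dtype=bool)
    photons = np.zeros((L, L), dtype=int)
    history = []

    for t in range(steps):
        # photons in the Moore neighbourhood (including the cell itself)
        neigh = sum(np.roll(np.roll(photons, dx, 0), dy, 1)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1))
        fire = excited & (neigh > 0)           # stimulated emission
        photons += fire.astype(int)
        excited &= ~fire                       # emitting cells drop to the ground state
        # photon decay with finite lifetime, plus a little spontaneous-emission noise
        photons = rng.binomial(photons, 1.0 - 1.0 / photon_life)
        idx = rng.integers(0, L, size=(noise_photons, 2))
        photons[idx[:, 0], idx[:, 1]] += 1
        # pumping
        excited |= (~excited) & (rng.random((L, L)) < pump_prob)
        history.append(photons.sum())

    print("total photons over the last 10 steps:", history[-10:])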
Article: Supporting sustainability assessment of building element materials using a BIM-plug-in for multi-criteria decision-making (Elsevier, 2024-11-15). Soust-Verdaguer, Bernardette; Gutiérrez Moreno, José Antonio; Cagigas Muñiz, Daniel; Hoxha, Endrit; Llatas, Carmen; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN). España; European Union (UE); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
The environmental crisis requires the immediate implementation of accurate and robust sustainable solutions throughout the building life cycle. Life Cycle Sustainability Assessment (LCSA) is a scientifically recognised method that integrates the triple dimensions of the life cycle approach, thereby enabling the evaluation of the performance of mitigation strategies implemented in building projects. However, implementing the LCSA in buildings is limited by the weighting of environmental, economic, and social dimensions needed to select the best option among the numerous materials available. In order to fill this knowledge gap, this paper presents the Smart BIM3LCA tool, which supports multi-dimensional assessment during the project's early design steps. An automatic integration of the LCSA, a multi-criteria decision-making method (TOPSIS), and building information modelling (BIM) was developed to support the selection of building materials. The BIM plug-in was then validated through its application to a multi-family residential building to select the most sustainable materials during the project's early design stage.

Article: Analog Sequential Hippocampal Memory Model for Trajectory Learning and Recalling: A Robustness Analysis Overview (Wiley, 2024-09-30). Casanueva Morato, Daniel; Ayuso Martínez, Álvaro; Indiveri, Giacomo; Domínguez Morales, Juan Pedro; Jiménez Moreno, Gabriel; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.
The rapid expansion of information systems in all areas of society demands more powerful, efficient, and low-energy-consumption computing systems. Neuromorphic engineering has emerged as a solution that attempts to mimic the brain to incorporate its capabilities to solve complex problems in a computationally and energy-efficient way in real time. Within neuromorphic computing, building systems that store information efficiently is still a challenge. Among all the brain regions, the hippocampus stands out as a short-term memory capable of learning and recalling large amounts of information quickly and efficiently. Herein, a spike-based bio-inspired hippocampus sequential memory model is proposed that makes use of the benefits of analog computing and spiking neural networks (SNNs): noise robustness, improved real-time operation, and energy efficiency. This model is applied to robotic navigation to learn and recall trajectories that lead to a goal position within a known grid environment. The model is implemented on DYNAP-SE, a special-purpose mixed-signal SNN hardware platform. Through extensive experimentation, together with an extensive analysis of the model's behavior in the presence of external noise sources, its correct functioning is demonstrated, proving the robustness and consistency of the proposed neuromorphic sequential memory system.
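The trajectory learning and recall behaviour described in the last entry can be caricatured, without spiking neurons or DYNAP-SE, as a chain of state-to-successor associations replayed from a cue. A minimal sketch; the grid, the trajectory and the dictionary-based storage are assumptions:

    # Illustrative sketch: learn a trajectory as successor associations and replay it from a cue.
    trajectory = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2), (3, 2)]   # path to a goal cell

    # "learning": store each step as an association from a cell to its successor
    successor = {}
    for here, nxt in zip(trajectory, trajectory[1:]):
        successor[here] = nxt

    def recall(start, max_steps=10):
        """Replay the stored trajectory from a starting cell (the cue)."""
        path, cell = [start], start
        for _ in range(max_steps):
            if cell not in successor:
                break                       # goal reached or unknown cell
            cell = successor[cell]
            path.append(cell)
        return path

    print("recalled from (0, 0):", recall((0, 0)))
    print("recalled from the middle (1, 1):", recall((1, 1)))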