Articles (Arquitectura y Tecnología de Computadores)
Permanent URI for this collection: https://hdl.handle.net/11441/11292
Recent submissions
Article: Supporting sustainability assessment of building element materials using a BIM-plug-in for multi-criteria decision-making (Elsevier, 2024-11-15). Soust-Verdaguer, Bernardette; Gutiérrez Moreno, José Antonio; Cagigas Muñiz, Daniel; Hoxha, Endrit; Llatas Oliver, Carmen. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España; European Union (UE); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

The environmental crisis requires the immediate implementation of accurate and robust sustainable solutions throughout the building life cycle. Life Cycle Sustainability Assessment (LCSA) is a scientifically recognised method that integrates the three dimensions of the life cycle approach, thereby enabling the evaluation of the performance of mitigation strategies implemented in building projects. However, the implementation of LCSA in buildings is limited by the need to weight the environmental, economic, and social dimensions in order to select the best option from among the numerous materials available. To fill this knowledge gap, this paper presents the Smart BIM3LCA tool, which supports multi-dimensional assessment during a project's early design stages. The tool automatically integrates LCSA, a multi-criteria decision-making method (TOPSIS), and Building Information Modelling (BIM) to support the selection of building materials. The BIM plug-in was validated through its application to a multi-family residential building, selecting the most sustainable materials during the project's early design stage.
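As a concrete illustration of the TOPSIS ranking step that such a plug-in automates, the following is a minimal sketch (not taken from the paper): alternatives are ranked by closeness to an ideal point across weighted criteria. The materials, scores, and weights below are invented for illustration.

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) against criteria (columns) with TOPSIS.
        matrix: raw scores; weights: criterion weights summing to 1;
        benefit: True where larger is better, False where smaller is better."""
        # Vector-normalise each criterion column, then apply the weights.
        norm = matrix / np.linalg.norm(matrix, axis=0)
        v = norm * weights
        # Ideal and anti-ideal points depend on the criterion direction.
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_best = np.linalg.norm(v - ideal, axis=1)
        d_worst = np.linalg.norm(v - anti, axis=1)
        return d_worst / (d_best + d_worst)  # closeness: higher is better

    # Three wall materials scored on CO2 (kg, lower better), cost (EUR,
    # lower better) and a social indicator (higher better); illustrative.
    scores = np.array([[120.0, 45.0, 0.7],
                       [ 90.0, 60.0, 0.6],
                       [150.0, 30.0, 0.9]])
    closeness = topsis(scores, np.array([0.4, 0.3, 0.3]),
                       np.array([False, False, True]))
    print(closeness.argsort()[::-1])  # alternatives ranked best-first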
Article: Analog Sequential Hippocampal Memory Model for Trajectory Learning and Recalling: A Robustness Analysis Overview (Wiley, 2024-09-30). Casanueva Morato, Daniel; Ayuso Martínez, Álvaro; Indiveri, Giacomo; Domínguez Morales, Juan Pedro; Jiménez Moreno, Gabriel. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia, Innovación y Universidades (MICINN), España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

The rapid expansion of information systems in all areas of society demands more powerful, efficient, and low-energy-consumption computing systems. Neuromorphic engineering has emerged as a solution that attempts to mimic the brain in order to incorporate its capabilities for solving complex problems in a computationally and energy-efficient way in real time. Within neuromorphic computing, building systems that store information efficiently is still a challenge. Among all the brain regions, the hippocampus stands out as a short-term memory capable of learning and recalling large amounts of information quickly and efficiently. Herein, a spike-based bio-inspired hippocampal sequential memory model is proposed that exploits the benefits of analog computing and spiking neural networks (SNNs): noise robustness, improved real-time operation, and energy efficiency. The model is applied to robotic navigation to learn and recall trajectories that lead to a goal position within a known grid environment, and is implemented on the special-purpose mixed-signal SNN hardware platform DYNAP-SE. Through extensive experimentation, together with an in-depth analysis of the model's behavior in the presence of external noise sources, its correct functioning is demonstrated, proving the robustness and consistency of the proposed neuromorphic sequential memory system.

Article: A systematic comparison of different machine learning models for the spatial estimation of air pollution (Springer, 2023). Cerezuela Escudero, Elena; Montes Sánchez, Juan Manuel; Domínguez Morales, Juan Pedro; Durán López, Lourdes; Jiménez Moreno, Gabriel. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Air pollutants harm human health and the environment. Nowadays, deploying an air pollution monitoring network in many urban areas could provide real-time air quality assessment. However, these networks are usually sparsely distributed, and the sensor calibration problems that may appear over time lead to missing and erroneous measurements. There is an increasing interest in developing air quality modelling methods to minimize measurement errors, predict spatial and temporal air quality, and support more spatially resolved health-effect analyses. This research evaluates the ability of three feed-forward neural network (FFNN) architectures to spatially predict air pollutant concentrations from the measurements of an air quality monitoring network. In addition to these architectures, Support Vector Machines and geostatistical methods (Inverse Distance Weighting and Ordinary Kriging) were also implemented to compare the performance of the neural network models. The methods were evaluated using the historical values of seven air pollutants (nitrogen monoxide, nitrogen dioxide, sulphur dioxide, carbon monoxide, ozone, and particulate matter with diameters of up to 2.5 µm and up to 10 µm) from an urban air quality monitoring network located in the metropolitan area of Madrid (Spain). To assess and compare the predictive ability of the models, three estimation accuracy indicators were calculated: the Root Mean Squared Error, the Mean Absolute Error, and the coefficient of determination. FFNN-based models are superior to the geostatistical methods and slightly better than Support Vector Machines at fitting the spatial correlation of air pollutant measurements.
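For reference, the Inverse Distance Weighting baseline mentioned above can be written in a few lines. This is a generic sketch, not the paper's implementation; the station coordinates, NO2 values, and the power parameter p = 2 are common illustrative defaults.

    import numpy as np

    def idw(stations, values, targets, p=2.0):
        """Inverse Distance Weighting: estimate values at target points as a
        distance-weighted average of the monitoring-station measurements."""
        est = np.empty(len(targets))
        for i, t in enumerate(targets):
            d = np.linalg.norm(stations - t, axis=1)
            if np.any(d == 0):               # target coincides with a station
                est[i] = values[d == 0][0]
                continue
            w = 1.0 / d**p
            est[i] = np.sum(w * values) / np.sum(w)
        return est

    # Illustrative NO2 measurements (ug/m3) at four station coordinates (km).
    stations = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])
    no2 = np.array([38.0, 45.0, 29.0, 51.0])
    print(idw(stations, no2, np.array([[2.0, 2.0], [5.0, 4.0]])))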
Article: Melanoma Breslow thickness classification using ensemble-based knowledge distillation with semi-supervised convolutional neural networks (IEEE, 2024-09-20). Domínguez Morales, Juan Pedro; Hernández Rodríguez, Juan Carlos; Durán López, Lourdes; Conejo-Mir Sánchez, Julián; Pereyra-Rodríguez, José-Juan. Universidad de Sevilla. Departamento de Medicina; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.

Melanoma is considered a global public health challenge and is responsible for more than 90% of skin-cancer-related deaths. Although the diagnosis of early melanoma is the main goal of dermoscopy, discriminating between dermoscopic images of in situ and invasive melanomas can be a difficult task even for experienced dermatologists. Recent advances in artificial intelligence in the field of medical image analysis show that its application to dermoscopy, with the aim of supporting and providing a second opinion to the medical expert, could be of great interest. In this work, four datasets from different sources were used to train and evaluate deep learning models on in situ versus invasive melanoma classification and on Breslow thickness prediction. Supervised learning and semi-supervised learning using a multi-teacher ensemble knowledge distillation approach were considered and evaluated using a stratified 5-fold cross-validation scheme. The best supervised models achieved AUCs of 0.8085±0.0242 and 0.8232±0.0666 on the former and latter classification tasks, respectively, while the best overall results were obtained using semi-supervised learning, with the best model achieving AUCs of 0.8547 and 0.8768, respectively. An external test set was also evaluated, where semi-supervision achieved higher performance in all the classification tasks. The results obtained show that semi-supervised learning can improve the performance of trained models in different melanoma classification tasks compared to supervised learning. Automatic deep-learning-based diagnosis systems could support medical professionals in their decisions, serving as a second opinion or as a triage tool for medical centers.
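As background for the multi-teacher distillation idea named above, here is a minimal numpy sketch (not the paper's code): temperature-softened probabilities from several teacher networks are averaged, and the student is trained to match them via cross-entropy. All shapes, logits, and the temperature T = 4 are illustrative.

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits_list, T=4.0):
        """Multi-teacher knowledge distillation: the target is the ensemble's
        averaged temperature-softened probabilities; the loss is the
        cross-entropy between them and the student's softened predictions."""
        teacher_probs = np.mean([softmax(t, T) for t in teacher_logits_list],
                                axis=0)
        student_probs = softmax(student_logits, T)
        return -np.mean(np.sum(teacher_probs * np.log(student_probs + 1e-12),
                               axis=-1))

    # Two teachers and one student scoring a batch of 4 images on 2 classes
    # (in situ vs invasive); the logits are random placeholders.
    rng = np.random.default_rng(0)
    teachers = [rng.normal(size=(4, 2)) for _ in range(2)]
    student = rng.normal(size=(4, 2))
    print(distillation_loss(student, teachers))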
Article: Coulomb- and nuclear-induced break-up of halo nuclei at bombarding energies around the Coulomb barrier (Elsevier BV, 1996). Guisado Lizar, José Luis; Jiménez Morales, Francisco; Guerra, J.M. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.

We investigate the relative importance of the Coulomb and nuclear fields in inducing the break-up of neutron-rich nuclei such as 11Li at energies close to the Coulomb barrier. We assume that the mechanism that leads to the separation is the excitation of a low-lying dipole mode in which the weakly bound neutron halo performs a collective oscillation against the residual nuclear core. To this end we exploit semiclassical prescriptions that are adequate not only to calculate the average break-up probabilities but also to estimate the size of the fluctuations about the quantal expectation values. Possible outcomes are explored as a function of both bombarding energy and impact parameter. Consequences of the couplings for elastic scattering and fusion processes are also discussed.

Article: Application of Shannon's entropy to classify emergent behaviors in a simulation of laser dynamics (Pergamon-Elsevier Science Ltd, 2005). Guisado Lizar, José Luis; Jiménez Morales, Francisco; Guerra, J.M. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.

Laser dynamics simulations have been carried out using a cellular automata model. Shannon's entropy has been used to study the different emergent behaviors exhibited by the system, mainly laser spiking and constant laser operation. It is also shown that the Shannon entropy of the distribution of the populations of photons and electrons reproduces the laser stability curve, in agreement with the theoretical predictions from the laser rate equations and with experimental results.
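To make the entropy measure concrete, the sketch below (illustrative, not the paper's code) computes the Shannon entropy of a normalised distribution of photon counts, the kind of quantity that can separate a strongly peaked distribution (constant operation) from a spread-out one (spiking). The histograms are invented.

    import numpy as np

    def shannon_entropy(counts):
        """Shannon entropy H = -sum(p * log2 p) of a distribution of counts,
        e.g. the photon (or electron) population histogram of a CA lattice."""
        p = counts / counts.sum()
        p = p[p > 0]                    # 0 * log 0 is taken as 0
        return -np.sum(p * np.log2(p))

    # A peaked photon-population histogram vs a spread one (illustrative).
    print(shannon_entropy(np.array([1, 2, 95, 2, 1])))      # low entropy
    print(shannon_entropy(np.array([18, 22, 20, 19, 21])))  # high entropy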
To enable everybody a quick penetration into the core of this type of modeling, three real applications of cellular automaton models, including selected open-source software codes, are studied: laser dynamics, dynamic recrystallization (DRX) and surface catalytic reactions. The paper is written in a way that it enables any researcher to reach the cutting-edge knowledge of the design principles of cellular automata (CA) models of the observed phenomena in any scientific ¯eld. The whole sequence of design steps is demonstrated: definition of the model using topology and local (transition) rule of a cellular automaton, achieved results, comparison to real experiments, calibration, pathological observations, °ow diagrams, software, and discussions. Additionally, the whole paper demonstrates the extreme expressiveness and flexibility of massively parallel computational approaches compared to other computational approaches. The paper consists of the introductory parts that are explaining CSs, self-organization and emergence, entropy, and CA. This allows readers to realize that there is a large variability in definitions and solutions of this class of models.Artículo Classification of skin blemishes with cell phone images using deep learning techniques(Elsevier, 2024-04) Rangel-Ramos, José Antonio; Luna Perejón, Francisco; Civit Balcells, Antón; Domínguez Morales, Manuel Jesús; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN). España; Universidad de Sevilla. TEP108: Robótica y Tecnología de ComputadoresSkin blemishes can be caused by multiple events or diseases and, in some cases, it is difficult to distinguish where they come from. Therefore, there may be cases with a dangerous origin that go unnoticed or the opposite case (which can lead to overcrowding of health services). To avoid this, the use of artificial intelligence-based classifiers using images taken with mobile devices is proposed; this would help in the initial screening process and provide some information to the patient prior to their final diagnosis. To this end, this work proposes an optimization mechanism based on two phases in which a global search for the best classifiers (from among more than 150 combinations) is carried out, and, in the second phase, the best candidates are subjected to a phase of evaluation of the robustness of the system by applying the cross-validation technique. The results obtained reach 99.95% accuracy for the best case and 99.75% AUC. Comparing the developed classifier with previous works, an improvement in terms of classification rate is appreciated, as well as in the reduction of the classifier complexity, which allows our classifier to be integrated in a specific purpose system with few computational resources.Artículo A systematic comparison of deep learning methods for Gleason grading and scoring(Elsevier, 2024-07) Domínguez Morales, Juan Pedro; Durán López, Lourdes; Marini, Niccolo; Vicente Díaz, Saturnino; Linares Barranco, Alejandro; Atzori, Manfredo; Muller, Henning; ; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; European Union (UE). H2020; Universidad de Sevilla. TEP108: Robótica y Tecnología de ComputadoresProstate cancer is the second most frequent cancer in men worldwide after lung cancer. 
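Okapi BM25, one of the three baselines named above, scores a document for each query term by combining inverse document frequency with a saturating term-frequency factor. The sketch below is a generic implementation with the usual defaults k1 = 1.5 and b = 0.75; the toy corpus is invented.

    import math

    def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
        """Okapi BM25 score of `doc` for `query_terms`, given a list of
        tokenised documents `corpus` that includes `doc`."""
        N = len(corpus)
        avgdl = sum(len(d) for d in corpus) / N
        score = 0.0
        for term in query_terms:
            n = sum(1 for d in corpus if term in d)      # document frequency
            idf = math.log((N - n + 0.5) / (n + 0.5) + 1.0)
            f = doc.count(term)                          # term frequency
            score += (idf * f * (k1 + 1)
                      / (f + k1 * (1 - b + b * len(doc) / avgdl)))
        return score

    corpus = [["melanoma", "breslow", "thickness"],
              ["laser", "dynamics", "cellular", "automata"],
              ["melanoma", "diagnosis", "dermoscopy", "melanoma"]]
    print(bm25_score(["melanoma", "diagnosis"], corpus[2], corpus))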
Article: Building efficient computational cellular automata models of complex systems: background, applications, results, software, and pathologies (World Scientific Publishing, 2019). Jiménez Morales, Francisco; Guisado Lizar, José Luis; Lemos, M. Carmen. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores.

Cellular automaton models of complex systems (CSs) are gaining greater popularity; simultaneously, they have proven their capability to solve real scientific and engineering applications. To give everybody a quick entry into the core of this type of modeling, three real applications of cellular automaton models, including selected open-source software codes, are studied: laser dynamics, dynamic recrystallization (DRX), and surface catalytic reactions. The paper is written in a way that enables any researcher to reach cutting-edge knowledge of the design principles of cellular automata (CA) models of the observed phenomena in any scientific field. The whole sequence of design steps is demonstrated: definition of the model using the topology and local (transition) rule of a cellular automaton, achieved results, comparison to real experiments, calibration, pathological observations, flow diagrams, software, and discussion. Additionally, the paper demonstrates the extreme expressiveness and flexibility of massively parallel computational approaches compared to other computational approaches. The paper includes introductory parts explaining CSs, self-organization and emergence, entropy, and CA, which allows readers to realize that there is large variability in the definitions and solutions of this class of models.
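The topology-plus-local-rule recipe described above can be made concrete with a generic two-state synchronous cellular automaton on a Moore neighbourhood. The rule below (the Game-of-Life birth/survival rule) is purely a stand-in, not one of the paper's models; lattice size and seed are arbitrary.

    import numpy as np

    def step(grid):
        """One synchronous update of a 2D binary CA with a Moore
        neighbourhood and periodic boundaries (Game-of-Life stand-in rule)."""
        # Sum the eight neighbours by rolling the lattice in every direction.
        nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1)
                   if (i, j) != (0, 0))
        # Local transition rule applied to every cell at once (synchronously).
        return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

    rng = np.random.default_rng(1)
    grid = rng.integers(0, 2, size=(32, 32))
    for _ in range(10):
        grid = step(grid)
    print(grid.sum(), "live cells after 10 synchronous steps")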
Article: Classification of skin blemishes with cell phone images using deep learning techniques (Elsevier, 2024-04). Rangel-Ramos, José Antonio; Luna Perejón, Francisco; Civit Balcells, Antón; Domínguez Morales, Manuel Jesús. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Skin blemishes can be caused by multiple events or diseases and, in some cases, it is difficult to distinguish their origin. As a consequence, cases with a dangerous origin may go unnoticed, or harmless ones may be referred unnecessarily (which can lead to the overcrowding of health services). To avoid this, the use of artificial-intelligence-based classifiers operating on images taken with mobile devices is proposed; this would help in the initial screening process and provide some information to the patient prior to the final diagnosis. To this end, this work proposes a two-phase optimization mechanism: a global search for the best classifiers (from among more than 150 combinations) is carried out first and, in the second phase, the best candidates are subjected to an evaluation of the robustness of the system by applying cross-validation. The results reach 99.95% accuracy in the best case and a 99.75% AUC. Compared with previous works, the developed classifier improves the classification rate and reduces the classifier complexity, which allows it to be integrated into a special-purpose system with few computational resources.

Article: A systematic comparison of deep learning methods for Gleason grading and scoring (Elsevier, 2024-07). Domínguez Morales, Juan Pedro; Durán López, Lourdes; Marini, Niccolo; Vicente Díaz, Saturnino; Linares Barranco, Alejandro; Atzori, Manfredo; Muller, Henning. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICINN), España; European Union (UE), H2020; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Prostate cancer is the second most frequent cancer in men worldwide, after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming to develop algorithms that automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of the data and labels. This paper provides a systematic comparison, on nine datasets, of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL, and CLAM) applied to Gleason grading and scoring tasks. The nine datasets were collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully-supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.
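Several of the approaches compared above are multiple-instance learning (MIL) aggregators. As a reference point for attention-based MIL pooling (in the style of Ilse et al.), the numpy sketch below combines patch embeddings into one slide-level embedding with learned attention weights; here the weights are random placeholders, purely to show the shapes involved.

    import numpy as np

    def attention_mil_pool(H, V, w):
        """Attention-based MIL pooling: patch embeddings H (n_patches x d)
        are combined into one slide embedding using attention over patches,
        a = softmax(w^T tanh(V H^T))."""
        scores = w @ np.tanh(V @ H.T)      # one score per patch
        a = np.exp(scores - scores.max())
        a = a / a.sum()                    # attention weights sum to 1
        return a @ H, a                    # slide embedding (d,), weights

    rng = np.random.default_rng(2)
    H = rng.normal(size=(50, 64))   # 50 tissue patches, 64-dim embeddings
    V = rng.normal(size=(32, 64))   # attention hidden projection (learned)
    w = rng.normal(size=32)         # attention vector (learned)
    z, a = attention_mil_pool(H, V, w)
    print(z.shape, a.argmax())      # slide embedding and most-attended patch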
Article: A Neuromorphic Vision and Feedback Sensor Fusion Based on Spiking Neural Networks for Real-Time Robot Adaption (Wiley, 2024-05). López Osorio, Pablo; Domínguez Morales, Juan Pedro; Pérez-Peña, Fernando. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

For some years now, the locomotion mechanisms used by vertebrate animals have been a major inspiration for the improvement of robotic systems. These mechanisms range from adapting their movements to move through the environment to the ability to chase prey, all thanks to senses such as sight, hearing, and touch. Neuromorphic engineering is inspired by the brain's problem-solving techniques, with the goal of implementing models that take advantage of the characteristics of biological neural systems. While this is a well-defined and explored area of the field, no previous work has fused analog and neuromorphic sensors to control and modify robotic behavior in real time. Herein, a system is presented, based on spiking neural networks implemented on the SpiNNaker hardware platform, that receives information from both analog (force-sensing resistor) and digital (neuromorphic retina) sensors and is able to adapt the speed and orientation of a hexapod robot depending on the stability of the terrain where it is located and the position of the target. These sensors are used to modify the behavior of different spiking central pattern generators, which in turn adapt the speed and orientation of the robotic platform, all in real time. In particular, the experiments show that the network is capable of correctly adapting to the stimuli received from the sensors, modifying the speed and heading of the robotic platform.

Article: Diagnosis Aid System for Colorectal Cancer Using Low Computational Cost Deep Learning Architectures (MDPI, 2024-06). Gago-Fabero, Álvaro; Muñoz Saavedra, Luis; Civit Masot, Javier; Luna Perejón, Francisco; Rodríguez Corral, José María; Domínguez Morales, Manuel Jesús. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Colorectal cancer is the second leading cause of cancer-related deaths worldwide. To prevent deaths, regular screenings with histopathological analysis of colorectal tissue should be performed. A diagnostic aid system could reduce the time required by medical professionals and provide an initial approach to the final diagnosis. In this study, we analyze low-computational-cost custom architectures, based on Convolutional Neural Networks, which can serve as high-accuracy binary classifiers for colorectal cancer screening using histopathological images. For this purpose, we carry out an optimization process to obtain the best-performing model in terms of both effectiveness as a classifier and computational cost, by reducing the number of parameters. Subsequently, we compare the results obtained with previous work in the same field. Cross-validation reveals a high robustness of the models as classifiers, yielding accuracy outcomes of 99.4 ± 0.58% and 93.2 ± 1.46% for the lighter model. The classifiers achieved an accuracy exceeding 99% on the test subset using low-resolution images and a significantly reduced layer count, with images sized at 11% of those used in previous studies. Consequently, we estimate a projected reduction of up to 50% in computational costs compared to the most lightweight model proposed in the existing literature.

Article: Towards neuromorphic FPGA-based infrastructures for a robotic arm (Springer, 2023-07). Canas Moreno, Salvador; Piñero Fuentes, Enrique; Ríos Navarro, José Antonio; Cascado Caballero, Daniel; Pérez-Peña, Fernando; Linares Barranco, Alejandro. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España; European Commission (EC), Fondo Europeo de Desarrollo Regional (FEDER); European Union (UE), H2020; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Muscles are stretched with bursts of spikes that come from motor neurons connected to the cerebellum through the spinal cord; alpha motor neurons then directly innervate the muscles to complete the motor command coming from upper biological structures. Classical robotic systems, in contrast, usually require complex computational capabilities and relatively high power consumption to process their control algorithms, which need information from the robot's proprioceptive sensors. The way in which information is encoded and transmitted is an important difference between biological systems and robotic machines. Neuromorphic engineering mimics these behaviors found in biology in engineering solutions, to produce more efficient systems and to gain a better understanding of neural systems. This paper presents the application of a spike-based Proportional-Integral-Derivative controller to a 6-DoF Scorbot ER-VII robotic arm, feeding the motors with Pulse-Frequency Modulation instead of Pulse-Width Modulation and thus mimicking the way in which motor neurons act on muscles. The presented frameworks allow the robot to be commanded and monitored, locally or remotely, either from Python software running on a computer or from spike-based neuromorphic hardware. Multi-FPGA and single-PSoC solutions are compared. These frameworks are intended for experimental use by the neuromorphic community, as a testbed platform and for dataset recording for machine learning purposes.
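Pulse-Frequency Modulation, used above in place of PWM, keeps the pulse width fixed and encodes the command in the pulse rate. The following is a hedged sketch of the idea only, not the paper's FPGA implementation; the function name, maximum rate, and pulse width are invented for illustration.

    import numpy as np

    def pfm_pulse_times(command, duration_s, f_max_hz=500.0, width_s=1e-3):
        """Pulse-Frequency Modulation: a normalised motor command in [0, 1]
        sets the pulse *rate* (up to f_max_hz) while the pulse width stays
        fixed; PWM, by contrast, varies the width at a fixed rate."""
        rate = command * f_max_hz
        if rate <= 0:
            return np.array([]), width_s
        period = 1.0 / rate
        return np.arange(0.0, duration_s, period), width_s

    # A 40% command over 50 ms -> pulses at 200 Hz, i.e. one every 5 ms.
    times, width = pfm_pulse_times(0.4, 0.05)
    print(np.round(times * 1e3, 1), "ms onsets, width", width * 1e3, "ms")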
Article: Application of Knowledge Discovery in Databases (KDD) to environmental, economic, and social indicators used in BIM workflow to support sustainable design (Elsevier, 2024-08). Llatas, Carmen; Soust-Verdaguer, Bernardette; Castro Torres, Luis; Cagigas Muñiz, Daniel. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España; European Union (UE); European Commission (EC), Fondo Europeo de Desarrollo Regional (FEDER); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Life Cycle Sustainability Assessment (LCSA) can help to predict the impact of products and services, such as buildings, during their entire life cycle. However, it is a data-intensive method. The Building Information Modelling (BIM) method can contribute towards reducing the effort involved and simplifying the data collection relating to building elements. To this end, databases adjusted to the BIM workflow are needed to systematise and harmonise the structure of the environmental, economic, and social data of said elements. This paper provides a solution to this problem by presenting an innovative Triple Bottom Line (TBL) database with environmental, economic, and social indicators of building elements, to support the triple assessment adapted to the BIM workflow. An analysis employing Knowledge Discovery in Databases (KDD) was performed for the first time on this type of database to better understand the correlations between the dimensions. The key contributions include the detection of correlations, 83% of which were direct, showing that, overall, the environmental (CO2 emissions), economic (cost), and social (labour) dimensions follow similar growth trends. Strong correlations between the economic and social variables were found in 68% of the cases, followed by the economic and environmental (32%), and the social and environmental (18%) variables. Findings from the correlation analysis between the three dimensions reveal their dependence on the type of building system, element, and material. Four scenarios were thereby identified in accordance with these correlations, to aid sustainable decision-making. Various growth trends were detected, which can facilitate the implementation of the LCSA.
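The correlation analysis described above boils down to pairwise Pearson coefficients between the three indicator columns. A minimal sketch follows, with invented CO2, cost, and labour values for a few building elements; coefficients near +1 correspond to the "direct" (same-direction) correlations reported.

    import numpy as np

    # Illustrative TBL indicators for five building elements:
    # CO2 (kg CO2e), cost (EUR) and labour (person-hours) per element.
    co2 = np.array([120.0, 340.0, 90.0, 410.0, 150.0])
    cost = np.array([200.0, 510.0, 160.0, 640.0, 230.0])
    labour = np.array([8.0, 19.0, 7.0, 25.0, 9.0])

    # Pearson correlation matrix between the three dimensions.
    R = np.corrcoef(np.vstack([co2, cost, labour]))
    for name, row in zip(["CO2", "cost", "labour"], R):
        print(name, np.round(row, 2))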
Article: An innovative 12-lead resting electrocardiogram dataset in professional football (Elsevier, 2024-04-23). Muñoz-Macho, Adolfo Antonio; Domínguez Morales, Manuel Jesús; Sevillano Ramos, José Luis. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

This paper provides a comprehensive and innovative 12-lead electrocardiogram (ECG) dataset tailored to the unique needs of professional football players. Other ECG datasets are available, but they were collected from the general population, normally with confirmed diseases, while it is well known that ECG characteristics change in athletes and elite players as a result of their intense long-term physical training. This initiative is part of a broader research project employing machine learning (ML) to analyse ECG data in this athlete population and to explore them according to the International Criteria for ECG interpretation in athletes. The dataset was generated through a prospective observational cohort of 54 male football players from La Liga, representing a UEFA Pro-level team. Named the Pro-Football 12-lead Resting Electrocardiogram Database (PF12RED), it comprises 163 10-s ECG recordings, offering a detailed examination of the at-rest heart activity of professional football athletes. Data collection spans five phases over multiple seasons, including the 2018-19 postseason and the 2019-20, 2020-21, and 2021-22 preseasons. Athletes underwent medical evaluations that included a 10-s resting 12-lead ECG performed with General Electric's USB-CAM 14 module (https://co.services.gehealthcare.com/gehcstorefront/p/900995-002), with data saved using General Electric's CardioSoft V6.73 12SL V21 ECG software (https://www.gehealthcare.es/products/cardiosoft-v7). The data collection adheres to ethical principles, with clearance granted by the Autonomous Community of Andalusia Ethics Committee (Spain) under protocol number 1573-N-19 in December 2019. Participants provided informed consent, and data sharing is permitted following anonymization. The study aligns with the Declaration of Helsinki and adheres to the recommendations of the International Committee of Medical Journal Editors (ICMJE). The dataset serves as a valuable resource for research in sports cardiology and cardiac health. Its potential for reuse encompasses: 1) international comparison, enabling cross-regional comparisons of cardiac characteristics among elite football players; 2) ML model development, facilitating the development and refinement of machine learning models for arrhythmia detection and serving as a benchmark dataset; 3) validation of diagnostic methods, allowing the validation of automatic diagnostic methods and contributing to enhanced accuracy in detecting cardiac conditions; 4) research in sports cardiology, supporting future investigations into specific cardiac adaptations in elite athletes and their relation to cardiovascular health; 5) reference for athlete protection policies, providing data on cardiac health and suggesting guidelines for medical assessments; 6) training of health professionals interested in interpreting ECGs in sports contexts; and 7) tool and application development, facilitating the development of tools and applications for the visualization, simulation, and analysis of ECG signals in athletes.

Article: Performance and Healthcare Analysis in Elite Sports Teams Using Artificial Intelligence: A Scoping Review (Frontiers Media, 2024-04-18). Muñoz-Macho, Adolfo Antonio; Domínguez Morales, Manuel Jesús; Sevillano Ramos, José Luis. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Introduction: In competitive sports, teams increasingly rely on advanced systems for improved performance and results. This study reviews the literature on the role of artificial intelligence (AI) in managing these complexities and encouraging a shift towards systems thinking. It found various AI applications, including performance enhancement, healthcare, technical and tactical support, talent identification, game prediction, business growth, and AI testing innovations. The main goal of the study was to assess the research supporting performance and healthcare.
Methods: Systematic searches were conducted on databases such as PubMed, Web of Science, and Scopus to find articles using AI to understand or improve sports team performance. Thirty-two studies were selected for review.
Results: Of the thirty-two articles reviewed, fifteen focused on performance and seventeen on healthcare. Football (soccer) was the most researched sport, making up 67% of the studies. The reviewed studies comprised 2,823 professional athletes, with a gender split of 65.36% male and 34.64% female. The identified AI and non-AI methods mainly included tree-based techniques (36%), Ada/XGBoost (19%), neural networks (9%), k-nearest neighbours (9%), classical regression techniques (9%), and support vector machines (6%).
Conclusions: This study highlights the increasing use of AI in managing sports-related healthcare and performance complexities. These findings aim to assist researchers, practitioners, and policymakers in developing practical applications and exploring future complex-systems dynamics.
Article: Bio-inspired parallel computing of representative geometrical objects of holes of binary 2D-images (Inderscience Enterprises Ltd, 2017). Díaz Pernil, Daniel; Berciano, Ainhoa; Peña Cantillana, Francisco; Gutiérrez Naranjo, Miguel Ángel. Universidad de Sevilla. Departamento de Ciencias de la Computación e Inteligencia Artificial.

In this paper, we present a bio-inspired parallel implementation of a solution to the problem of finding the representative geometrical objects of the homology groups of a binary 2D image (the extended-HGB2I problem), an extended version of a well-known problem in homology theory. In particular, given a binary 2D image, all black connected components and the representative curves of the holes of these components are obtained and labelled. To this end, a new technique for labelling the connected components of a binary image is presented. The formal framework uses techniques from membrane computing, and the implementation was developed on the Compute Unified Device Architecture (CUDA). The computational complexity of the proposed solution is O(m) with respect to the input (image) size m ∼ n². Finally, some examples and applications are also presented.
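As a plain sequential reference for the labelling task that the paper parallelises (by other means) on CUDA, connected-component labelling of the black pixels can be sketched with a BFS flood fill; this is illustrative only, using 4-connectivity and a toy image.

    from collections import deque
    import numpy as np

    def label_components(img):
        """Label the 4-connected components of black pixels (value 1) in a
        binary 2D image via BFS flood fill; 0 in the output is background."""
        labels = np.zeros_like(img, dtype=int)
        current = 0
        for sy, sx in zip(*np.nonzero(img)):
            if labels[sy, sx]:
                continue                      # already part of a component
            current += 1
            queue = deque([(sy, sx)])
            labels[sy, sx] = current
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and img[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
        return labels

    img = np.array([[1, 1, 0, 0],
                    [0, 1, 0, 1],
                    [0, 0, 0, 1]])
    print(label_components(img))   # two components labelled 1 and 2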
Article: A lightweight xAI approach to cervical cancer classification (Springer, 2024-03). Civit Masot, Javier; Luna Perejón, Francisco; Domínguez Morales, Manuel Jesús; Civit Balcells, Antón. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Junta de Andalucía / European Commission (EC), Fondo Europeo de Desarrollo Regional (FEDER), research project MSF-PHIA (US-1263715); Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores.

Cervical cancer is caused, in the vast majority of cases, by the human papilloma virus (HPV) through sexual contact, and requires a specific molecular-based analysis to be detected. Although an HPV vaccine is available, the incidence of cervical cancer is up to ten times higher in areas without adequate healthcare resources. In recent years, liquid cytology has been used to overcome these shortcomings and perform mass screening. In addition, classifiers based on convolutional neural networks can be developed to help pathologists diagnose the disease. However, these systems always require the final verification of a pathologist to reach a final diagnosis. For this reason, explainable AI techniques are required to highlight the most significant data to the healthcare professional, as they can be used to determine the confidence in the results and the areas of the image used for classification (allowing the professional to point out the areas they consider most important and cross-check them against those detected by the system, in order to create incremental learning systems). In this work, a 4-phase optimization process is used to obtain a custom deep-learning classifier that distinguishes between 4 severity classes of cervical cancer in liquid-cytology images. The final classifier obtains an accuracy of over 97% for 4 classes and 100% for 2 classes, with execution times under 1 s (including final report generation). Compared to previous works, the proposed classifier obtains better accuracy results at a lower computational cost.

Article: Closed-loop sound source localization in neuromorphic systems (IOP Publishing, 2023-06). Schoepe, Thorben; Gutiérrez Galán, Daniel; Domínguez Morales, Juan Pedro; Greatorex, Hugh; Jiménez Fernández, Ángel Francisco; Linares Barranco, Alejandro; Chicca, Elisabetta. Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Ciencia e Innovación (MICIN), España, grant MINDROB (PID2019-105556GB-C33); Cluster of Excellence Cognitive Interaction Technology (EXC 277), Bielefeld University, funded by the German Research Foundation; Universidad de Sevilla. TEP-108: Robótica y Tecnología de Computadores Aplicada a la Rehabilitación.

Sound source localization (SSL) is used in various applications such as industrial noise control, speech detection in mobile phones, speech enhancement in hearing aids, and many more. The newest video-conferencing setups use SSL: the position of a speaker is detected from the differences between the audio waves received by a microphone array, and after detection the camera focuses on the location of the speaker. The human brain is also able to detect the location of a speaker from auditory signals. It uses, among other cues, the differences in amplitude and arrival time of the sound wave at the two ears, called the interaural level and time differences. However, the substrate and computational primitives of our brain are different from those of classical digital computing. Due to its low power consumption of around 20 W and its real-time performance, the human brain has become a great source of inspiration for emerging technologies. One of these technologies is neuromorphic hardware, which implements the fundamental principles of brain computing identified to date using complementary metal-oxide-semiconductor technologies and new devices. In this work we propose the first neuromorphic closed-loop robotic system that uses the interaural time difference for SSL in real time. Our system can successfully locate sound sources such as human speech. In a closed-loop experiment, the robotic platform turned immediately towards the direction of the sound source, with a turning velocity linearly proportional to the angle difference between the sound source and the binaural microphones; after this initial turn, the robotic platform remained oriented towards the sound source. Even though the system uses only very few of the available hardware resources, consumes around 1 W, and was tuned by hand (it contains no learning at all), it already reaches a performance comparable to other neuromorphic approaches. The SSL system presented in this article brings us one step closer to neuromorphic event-based systems for robotics and embodied computing.
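The interaural time difference cue at the heart of the system has a simple geometric reading: for microphone spacing d and sound speed c, a source at azimuth theta arrives with delay dt = d*sin(theta)/c, so the angle can be recovered from a measured delay. Below is a generic cross-correlation sketch of that estimate, not the paper's spiking implementation; the microphone spacing, sampling rate, and test signal are invented.

    import numpy as np

    def itd_azimuth(left, right, fs, mic_distance=0.1, c=343.0):
        """Estimate source azimuth from the interaural time difference (ITD):
        find the lag that maximises the cross-correlation of the two mic
        signals, then invert dt = d*sin(theta)/c."""
        corr = np.correlate(right, left, mode="full")
        lag = np.argmax(corr) - (len(left) - 1)  # >0: right lags -> source left
        dt = lag / fs                            # interaural time difference
        s = np.clip(dt * c / mic_distance, -1.0, 1.0)
        return np.degrees(np.arcsin(s))          # positive = towards left mic

    # Synthetic test: the right mic hears the same noise 2 samples later.
    rng = np.random.default_rng(3)
    left = rng.normal(size=2048)
    right = np.roll(left, 2)
    print(round(itd_azimuth(left, right, 16000), 1), "degrees")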