Conference papers (Lenguajes y Sistemas Informáticos)
Permanent URI for this collection: https://hdl.handle.net/11441/11394
Browse
Recent submissions

Conference paper Context-Aware AI Agents for Clinical Dialogue Assistance through Large Language Models (Springer, 2026) Naranjo Pozas, Rodrigo; Doblado Mendoza, Pablo; Vega Márquez, Belén; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC134: Sistemas Informáticos. This research investigates the efficacy of Artificial Intelligence agents in processing and responding to personalized, private contextual information. We studied how to implement a system designed to augment an open-source Large Language Model (LLM), such as Llama, Claude, or Gemma, with domain-specific knowledge bases. This augmentation is intended to facilitate the generation of contextually coherent and accurate responses to user queries. The system was developed and tested using different versions of a dataset of clinical conversation transcripts between patients and medical professionals, enabling specialized knowledge integration. The architectural framework, built upon LangChain, FAISS, Ollama, and Gradio, demonstrates a simple, modular, scalable, and extensible design. This work helped us take our first steps in the development of robust AI agents capable of leveraging external knowledge for enhanced conversational intelligence in specialized domains.
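The retrieval step at the heart of such an augmentation pipeline can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation (which relies on LangChain, FAISS embeddings, and Ollama); it is a toy bag-of-words retriever over made-up transcripts, showing only the idea of selecting the most relevant context before prompting the LLM:

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; a real RAG system would use dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Rank documents by similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

# Invented example transcripts, standing in for the clinical dataset.
transcripts = [
    "patient reports chest pain and shortness of breath",
    "doctor discusses insulin dosage with diabetic patient",
    "follow-up visit about knee surgery recovery",
]
context = retrieve("what was said about chest pain", transcripts)
# The retrieved context is prepended to the user question before calling the LLM.
prompt = f"Context: {context[0]}\nQuestion: what was said about chest pain"
```

In a real deployment the vector index (FAISS), prompt templating (LangChain), and generation (Ollama) replace each toy piece, but the flow is the same: retrieve, assemble a prompt, generate.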
Conference paper Application of Machine Learning Techniques to the Prediction of Hospital Mortality: Beyond Conventional Clinical Models (Springer, 2026) López Ruz, Pedro; Vega Márquez, Belén; Pontes Balanza, Beatriz; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC134: Sistemas Informáticos. This study presents a comprehensive analysis of patient cohorts admitted to Intensive Care Units (ICUs), with the objective of evaluating, comparing, and optimizing the metrics employed for mortality prediction in critically ill patients. Given the inherent complexity of clinical prognosis in these settings, widely adopted scoring systems such as APACHE, SOFA, and SAPS are critically examined, highlighting their principal strengths and limitations. To enhance predictive performance, the study proposes the integration of advanced Machine Learning and Data Science methodologies. The approach includes the implementation of machine learning algorithms, systematic variable selection, cross-validation, and performance assessment using metrics such as AUC-ROC, accuracy, sensitivity, and specificity. Model development will be based on real ICU patient data, ensuring strict adherence to ethical standards and data confidentiality. The best-performing model will subsequently be benchmarked against traditional tools to evaluate its capacity to improve mortality prediction and support clinical decision-making. Beyond predictive accuracy, the study emphasizes the importance of feasibility and applicability in real-world clinical environments. Overall, this research seeks to contribute to the development of innovative technological solutions that enable more personalized, efficient, and evidence-based medical care in high-complexity hospital units, thereby supporting optimized management of resources, healthcare personnel, and clinical protocols within the ICU.
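For readers unfamiliar with the evaluation metrics the study names, a small sketch shows how accuracy, sensitivity, and specificity derive from a binary confusion matrix (the counts below are illustrative only, not the study's data):

```python
def metrics(tp, fp, tn, fn):
    # tp/fp/tn/fn: true/false positives and negatives of a binary classifier.
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
    }

# Hypothetical confusion matrix for a mortality classifier.
m = metrics(tp=30, fp=10, tn=50, fn=10)
```

With these counts, accuracy is 0.80 and sensitivity 0.75; AUC-ROC, the study's other headline metric, additionally sweeps the decision threshold rather than fixing one.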
Conference paper Integrative analysis of breast cancer using multi-omics latent representations (Springer, 2026) Lakouifat Darkaoui, Yasmine; Pontes Balanza, Beatriz; Vega Márquez, Belén; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC134: Sistemas Informáticos. This study presents a comparative analysis of three dimensionality reduction techniques—MOFA (Multi-Omics Factor Analysis), IntNMF (Integrative Non-negative Matrix Factorization), and VAE (Variational Autoencoder)—applied to breast cancer multi-omics data from the TCGA-BRCA project. The integration of omics layers—including gene expression, protein expression, copy number variations, and mutations—was combined with key clinical variables to evaluate the performance of latent representations in both classification and clustering tasks. Major challenges such as high dimensionality and severe class imbalance were addressed through oversampling and undersampling strategies. Each method was evaluated for its effectiveness in predicting clinical outcomes and identifying meaningful molecular patterns. MOFA offered biologically interpretable and stable representations, IntNMF produced compact structures, and VAE yielded well-separated latent spaces. Enrichment analysis confirmed the relevance of extracted features, reinforcing the utility of latent factor models for robust multi-omics integration in breast cancer research.
Conference paper Optimization of Denoising Autoencoders with Progressive Learning Strategies for scRNA-seq Data (Springer, 2026) Franco Ruíz, Francisco Javier; Fernández Malvido, Sara; Vega Márquez, Belén; Nepomuceno Chamorro, Isabel de los Ángeles; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC134: Sistemas Informáticos. Advances in single-cell RNA sequencing (scRNA-seq) technologies have revolutionized cancer research by enabling detailed characterization of cellular heterogeneity [6]. However, predicting drug sensitivity at single-cell resolution remains challenging due to the scarcity of annotated data, high dimensionality, and technical noise. In this work, we reproduce and enhance the scDEAL framework, a deep transfer learning model that integrates bulk and single-cell transcriptomic data for drug response prediction. The enhancements focus on optimizing Denoising Autoencoders (DAE) and incorporating progressive training strategies. Specifically, we introduce Gene Prioritization Regularization (GPR) to emphasize biologically relevant genes, implement Curriculum Learning to gradually increase task complexity during training, and apply direct filtering of highly variable genes to reduce dimensionality. Experiments conducted on publicly available datasets from GDSC, CCLE, and GEO demonstrate that filtering the top 20% most variable genes leads to significant improvements in predictive performance, achieving an F1-score of 0.9641 and an AUC of 0.9549, while reducing computational costs by 80%. These results highlight the importance of feature selection and progressive training strategies for enhancing drug response prediction from scRNA-seq data.
Conference paper Towards an Explainability Agent: Leveraging LLMs to Interpret LIME Outputs (Springer, 2026) Vega Márquez, Belén; Rubio Escudero, Cristina; Pontes Balanza, Beatriz; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC134: Sistemas Informáticos. In clinical decision-making, the adoption of machine learning models requires not only high predictive performance but also transparent, trustworthy explanations. This work presents a hybrid explainability framework that combines Local Interpretable Model-agnostic Explanations (LIME) with a Large Language Model (LLM) to generate natural language explanations of classification outputs in healthcare applications. While LIME provides local feature attributions for individual predictions, these are often difficult to interpret for non-technical clinical staff. Our system leverages an LLM to translate LIME outputs into fluent, domain-aware explanations that align with the reasoning needs of healthcare professionals. We evaluate the method on two medical datasets involving patient risk classification tasks and obtain promising results in terms of interpretability and consistency with the model’s decision logic. However, the current evaluation lacks validation by external healthcare professionals, which we identify as an essential next step. Despite this, our approach represents a strong foundation for the development of adaptive explainability agents in clinical contexts. We discuss its potential impact on future decision support systems and propose directions for evolving toward interactive, trustworthy, and user-aligned AI tools in medicine.
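The translation step can be pictured with a minimal sketch. Here a fixed template stands in for the LLM, and the feature names and weights are invented for illustration; the point is only the shape of the transformation from LIME-style attributions to a plain-language sentence:

```python
def explain(prediction, attributions, top_n=2):
    # attributions: LIME-style (feature, weight) pairs; positive weights push
    # toward the predicted class, negative ones away from it.
    ranked = sorted(attributions, key=lambda fw: abs(fw[1]), reverse=True)[:top_n]
    parts = []
    for feature, weight in ranked:
        direction = "increased" if weight > 0 else "decreased"
        parts.append(f"'{feature}' {direction} the predicted risk")
    return f"The model predicted {prediction} mainly because " + " and ".join(parts) + "."

# Hypothetical attributions for one patient-level prediction.
lime_output = [("age", 0.42), ("blood_pressure", 0.31), ("exercise", -0.12)]
print(explain("high risk", lime_output))
```

The paper's system replaces the template with an LLM prompt, which lets the wording adapt to the clinical domain and the reader, but the input is the same ranked attribution list.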
Conference paper Bridging Apples and Oranges: A Schema for Defining QoS‑Aware Composition (Sistedes, 2025) Cavero Lopez, Francisco Javier; Parejo Maestre, José Antonio; Ruiz Cortés, Antonio; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España. QoS-aware Web Service Composition (QoSWSC) addresses the challenge of selecting the most appropriate candidate services to fulfill a desired functionality while optimizing Quality of Service (QoS) attributes—such as cost, performance, and reliability—under user-defined constraints. Despite significant research in this area, reproducibility and comparability of results remain limited due to the lack of a standardized and formal problem definition. This paper introduces a problem definition schema for QoSWSC that captures the core elements required to describe such problems in a structured, unambiguous, and extensible way. By establishing a shared vocabulary and formal structure, the schema enables clearer communication, facilitates experimental replication, and supports future extensions—such as the development of a common model and syntax for defining problem instances. In addition, we explore how this schema addresses the persistent challenge of replicability and examine an emerging issue in the field: multi-configuration services. Together, these contributions pave the way for more rigorous, interoperable, and realistic approaches to modern service composition research.
Conference paper Open Science principles in software product lines: The case of the UVL ecosystem (Assoc Computing Machinery, 2024) Galindo Duarte, José Ángel; Romero Organvidez, David; Bhusham, Megha; Horcas Aguilera, José Miguel; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España; TIC276: Diverso Lab - International Computing. Open science is a movement aimed at making scientific research, data, and dissemination accessible. In this tutorial, we will explore how to adapt this research philosophy to the context of the software product line community. To achieve this, we present a tooling ecosystem created with open science in mind. Concretely, we will rely on uvlhub for dataset sharing, flamapy to enrich the datasets and extract metrics, and the Fact-Label tool to visualize the data at a glance. Participants will gain hands-on experience with each tool and learn how these tools can be integrated into their research workflows.
Conference paper Kconfig metamodel: a first approach (Association for Computing Machinery, 2024) Romero Organvidez, David; Neira Ayuso, Pablo; Galindo Duarte, José Ángel; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España; TIC276: Diverso Lab - International Computing. Kconfig is the de facto configuration language for describing and configuring the variability of the Linux kernel, and it has been in use since the early stages of kernel development. Moreover, Kconfig is also used as a niche configuration language in domains such as microkernel compilation for air navigation systems, proprietary routers, or embedded systems. In the last decade, the software product line (SPL) community has worked intensively on studying the Linux kernel and Kconfig. However, the official documentation is difficult to understand, and the examples are long and challenging to synthesize for non-Kconfig experts, such as SPL engineers and researchers. In this paper, we propose a Kconfig metamodel based on the documentation and on the feedback of an expert kernel developer. Thanks to this metamodel, the design of transformations from Kconfig to other variability models, such as UVL (Universal Variability Language), can be facilitated. To our knowledge, this is the first proposal for a metamodel of the Kconfig language. This opens the door to further research, such as Kconfig analysis and transformations, and to leveraging interoperability between the Kconfig toolchain and SPL tools.
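A transformation of the kind the metamodel is meant to enable can be pictured with a small hypothetical example (the option names and the exact UVL rendering are our own illustration, not taken from the paper). A simplified Kconfig fragment:

```
config USB
        bool "USB support"

config USB_STORAGE
        bool "USB mass storage"
        depends on USB
```

might map to a UVL model in which both options become optional features and the `depends on` relation becomes a cross-tree constraint:

```
features
    Kernel
        optional
            USB
            USB_STORAGE
constraints
    USB_STORAGE => USB
```

Kconfig's richer constructs (tristate options, `select`, conditional defaults) are exactly where a precise metamodel is needed to make such mappings systematic.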
Conference paper Optimizing Competitive Positioning through Software Product Lines: An Industrial Application in Digital Marketing (Assoc Computing Machinery, 2025) Sánchez Ruiz, José Manuel; López Durán, Noelia; Olivero González, Miguel Ángel; Domínguez Mayo, Francisco José; Benavides Cuevas, David Felipe; Macedo Figueroa, Carmen; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC276: Diverso Lab - International Computing. Entrepreneurial companies face the challenge of positioning their product in a market full of possibilities, where innovating and standing out from the competition is increasingly difficult. Understanding a product’s characteristics is key to highlighting it against its competitors. However, comparing products is not simple, since gaining in-depth knowledge of all available market alternatives is highly complex. Product lines, and their representation through feature models, are a relevant tool to identify, classify, and compare different products regarding their features, and the SWOT matrix is a strategic artifact for better understanding the differences among them. This study presents a software solution that enables the generation of a SWOT matrix through the configuration of a feature model and the use of LLMs to gather market information. This is the first study to combine software product lines and large language models to generate a product SWOT matrix based on in-depth insights about a product. It enables (i) structured product modeling via feature models using the Universal Variability Language and (ii) automatic generation of SWOT (Strengths, Weaknesses, Opportunities, Threats) analyses. Domain models are defined with expert input and transformed into interactive configuration forms, facilitating the definition of user-driven products without requiring SPL expertise. This approach allows a better understanding of market products and of a product’s differential value relative to its competitors. Our results suggest that combining variability modeling and LLM-based reasoning provides a scalable and structured approach to automate market differentiation.
Conference paper Feedback Analysis in Software Product Line Forked Developments (Assoc Computing Machinery, 2025) Romero Organvidez, David; Díaz, Oscar; Tang, Yutian; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC276: Diverso Lab - International Computing. Software Product Lines (SPLs) enable the reuse of software components or assets to generate a family of related products. Sometimes, an SPL evolves into parallel developments (forks) to meet new requirements. However, these forks do not always stay synchronized with the central development; e.g., features can be added, removed, or changed in the forked projects. In DevOps practices, feedback analysis plays a central role in improving both software quality and delivery processes: it evaluates data from the delivery pipeline and from users to continuously improve the software and its deployment process. Despite its importance, feedback analysis has been underexplored in the context of SPLs. In this paper, we propose an approach to automate feedback analysis in forked software product line developments that can assist decision-making by answering questions such as: Which features need more testing? What new features can be incorporated? Which ones require refactoring? Which ones cause more issues in production? Information can be gathered from various data sources such as source code repositories, bug tracking systems, or continuous integration pipelines. To the best of our knowledge, this is the first proposal using information from forked SPL developments for feedback analysis.
Conference paper Feature-Based Characterization of GitHub OSS Projects for Performance Assessment (Assoc Computing Machinery, 2025) Sánchez Ruiz, José Manuel; Ciria González, Guillermo; Olivero González, Miguel Ángel; Domínguez Mayo, Francisco José; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; TIC276: Diverso Lab - International Computing. GitHub hosts a huge variety of Open-Source Software (OSS) projects, each with a particular project configuration. Such diversity, and the freedom in combining features, limit the ability to systematically compare them: benchmarking, discovering best practices, and informed decision-making are hindered by the absence of a structured characterization. This study captures the variabilities and commonalities among open-source projects, with the goal of identifying configurations that correlate with higher performance. To deal with this wide diversity, we outline a structured feature-based model grounded in the Software Product Line (SPL) paradigm. By aligning our approach with SPL modeling, we can correlate projects’ configurations with their performance metrics, allowing the assessment and comparison of projects’ efficiency. To build and improve this model, we manually developed an initial feature model based on the analysis of GitHub features. We then used Large Language Models to expand and improve the model, applying structured prompts to systematically explore new feature candidates across five models. Then, we analyzed the performance of 40 well-known OSS projects with technical relevance. The results helped enrich the feature model by linking specific GitHub configurations to performance indicators across these projects. This structured characterization forms the basis for a recommendation system designed to improve OSS development practices. By using feature-based analysis, our work supports better benchmarking, decision-making, and performance optimization in OSS ecosystems from a Software Product Line perspective.
Conference paper Design Thinking in Requirements Engineering: Understanding the Role of Internal and External Empathy (IEEE, Institute of Electrical and Electronics Engineers, 2025) Kahan, Ezequiel; Genero, Marcela; Bernárdez Jiménez, Beatriz; Oliveros, Alejandro; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España; TIC205: Ingeniería del Software Aplicada. Background: A previously proposed Design Thinking-based Requirements Elicitation Process distinguishes between internal and external empathy as distinct sub-activities. To corroborate the effectiveness of this distinction, empirical validation is required. Objective: The main goal of this paper is to present a quasi-experiment carried out with the aim of comparing the use of the Vision Canvas technique for internal empathy and the Empathy Map technique for external empathy in requirements elicitation, assessing whether and how these techniques enhance stakeholder and requirements identification during Brainstorming sessions. Method: The quasi-experiment involved 138 undergraduate Computer Engineering students at the University of Seville. Three treatments were considered: a combination of both techniques (Vision Canvas followed by Empathy Maps), Empathy Maps alone, and a control group relying solely on Brainstorming. Results: The results revealed an improvement in stakeholder identification using empathy-based techniques, with statistically significant differences observed when these techniques were applied in combination. Additionally, the analysis showed an increase in the number of requirements ideas generated with either technique, compared to the control group. Conclusions: While these findings support the distinction between internal and external empathy in requirements elicitation, further research is necessary to determine the optimal application of these techniques across different contexts.
Conference paper Software variability as a new dimension of computational thinking: an exploration (Assoc Computing Machinery, 2025) Moreno León, Jesús; Romero Organvídez, David; Robles, Gregorio; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Junta de Andalucía; TIC276: Diverso Lab - International Computing. Computational thinking is a discipline that fosters problem-solving by using tools, concepts, and practices that are common in computer science. It was defined back in the 1980s, but has gained more and more popularity since Wing’s 2006 Viewpoint paper. The computational thinking movement promotes the learning of these skills at any age and in any application domain, not only in computer science. The discipline has evolved, incorporating new dimensions such as those linked with artificial intelligence or quantum computing, to mention just a few. However, very little attention, if any, has been paid to software variability. In this paper, we propose including variability as a new dimension of computational thinking that complements existing ones. This dimension could be defined as the ability to think about a set of related solutions when solving a problem, inspired by the way software product line engineering addresses the development of systems. We provide a first solution that extends existing tools to introduce students to variability thinking skills. Concretely, we use the metaprogramming capabilities of Snap! 8 to make it possible to learn about variability engineering by implementing a tangible solution of a computer game product line. We have validated our solution in a workshop with 19 students in the fourth year of a software engineering degree, and also with 15 experts in variability from 13 different universities. The results are promising according to the two surveys that we used with both students and experts, which show the feasibility of our proposal. We propose a path for the research needed to include the variability dimension in computational thinking, connecting both lines of research. Developing software variability as a new dimension of computational thinking paves the way for teaching it in a practical way in undergraduate courses and even in high school, democratizing its impact and understanding.
Conference paper From playmobil to product lines: towards a visual instrument for variability thinking measurement (Assoc Computing Machinery, 2025) Romero Organvidez, David; Moreno León, Jesús; Robles, Gregorio; Chacón Luna, Ana Eva; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Junta de Andalucía; TIC276: Diverso Lab - International Computing. Variability is a key concept in Software Product Line Engineering (SPLE), but there is no standardized method to assess the ability to reason about variation points, dependencies, or configuration rules. This absence hinders the evaluation of training outcomes and the empirical analysis of SPL-related practices. This article presents an assessment instrument designed to measure variability thinking through visual reasoning tasks. The test consists of multiple-choice items that depict configuration scenarios using stylized illustrations inspired by Playmobil figures and settings. Each item focuses on a specific aspect of variability reasoning, such as dependency constraints or valid product derivation. The proposed approach adapts computational thinking assessment techniques to the Software Product Line (SPL) domain. It enables a structured measurement of variability-related reasoning patterns and contributes to the development of assessment methods and the design of educational activities.
Conference paper Generando modelos de características mediante Large Language Models manteniendo la coherencia sintáctica y semántica [Generating feature models with Large Language Models while preserving syntactic and semantic coherence] (Sociedad de Ingeniería de Software y Tecnologías de Desarrollo de Software (SISTEDES), 2023) Galindo Duarte, José Ángel; Domínguez, Antonio J.; White, Jules; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (FEDER). Feature models represent the common and variable aspects of software product lines. The automated analysis of feature models has made it possible to test, maintain, and improve software product lines. Testing the analysis of feature models usually requires relying on a large number of models that are as realistic as possible. Different proposals exist for generating synthetic feature models; however, existing methods do not take the semantics of domain concepts into account. This paper proposes the use of Large Language Models (LLMs), such as Codex or GPT-3, to generate realistic model variants that preserve semantic coherence while maintaining syntactic validity.
Conference paper Pragmatic Random Sampling of the Linux Kernel: Enhancing the Randomness and Correctness of the conf Tool (ACM, 2024) Fernandez-Amoros, David; Galindo Duarte, José Ángel; Heradio, Ruben; Benavides Cuevas, David Felipe; Horcas Aguilera, José Miguel; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España. The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides almost 19,000 configurable options described across more than 1,600 files in the Kconfig language. As a result, many analyses of the Kernel rely on sampling its configuration space (e.g., debugging compilation errors, predicting configuration performance, finding the configuration that optimizes specific performance metrics, etc.). The Kernel can be sampled pragmatically, with its built-in tool conf, or idealistically, translating the Kconfig files into logic formulas. The pros of the idealistic approach are that it provides statistical guarantees for the sampled configurations, but the cons are that it sets out many challenging problems that have not been solved yet, such as scalability issues. This paper introduces a new version of conf called randconfig+, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig+ has been tested on 20,000 configurations generated for 10 different Kernel versions from 2003 to the present day. The experimental results show that randconfig+ is compatible with all tested Kernel versions, guarantees the correctness of the generated configurations, and increases conf’s randomness for numeric and string options.
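Conceptually, a pragmatic sampler assigns random values to options and then fixes up dependency violations. A toy sketch of that idea follows, with three invented options and one dependency; the real conf/randconfig+ tool works over the full Kconfig tree with far richer semantics (tristates, choices, defaults):

```python
import random

OPTIONS = ["USB", "USB_STORAGE", "WIFI"]

def sample_config(rng):
    # Assign each Boolean option a random value...
    config = {opt: rng.choice([True, False]) for opt in OPTIONS}
    # ...then enforce the (hypothetical) dependency: USB_STORAGE depends on USB.
    if not config["USB"]:
        config["USB_STORAGE"] = False
    return config

rng = random.Random(42)
samples = [sample_config(rng) for _ in range(1000)]
# Every sampled configuration respects the dependency by construction.
assert all(c["USB"] or not c["USB_STORAGE"] for c in samples)
```

The fix-up step is what makes the sampling "pragmatic": configurations are always valid, but the resulting distribution over the configuration space carries no statistical guarantees, which is the trade-off the abstract contrasts with the idealistic, logic-based approach.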
Conference paper UVL web-based editing and analysis with flamapy.ide (ACM, 2025) Benitez, Francisco Sebastian; Galindo Duarte, José Ángel; Romero Organvídez, David; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España. Feature modeling is widely used to represent variability in software systems, but as feature models grow in size and complexity, manual analysis becomes infeasible. Automated Analysis of Feature Models (AAFM) is a set of tools and algorithms that enable the computer-aided analysis of such models. Recently, the AAFM community has made an effort to enable the interoperability of tools by means of the UVL language; however, most of the supporting tools need to execute the operations on a server. This has two main drawbacks: first, it requires users to upload their models to remote servers, raising security concerns; second, it limits the complexity of the operations that an online tool can offer. In this paper, we introduce flamapy.ide, an integrated development environment (IDE) based on the flamapy framework and designed to perform AAFM directly within the browser by relying on WASM technologies. flamapy.ide provides SAT and BDD solvers for efficient feature model analysis and offers support for handling UVL files. It also enables the configuration and visualization of such models, relying on a fully client-side approach. This tool brings AAFM capabilities to web-based platforms, eliminating the need for server-side computation while ensuring ease of use and accessibility.
Conference paper Analyzing Tweets Using Topic Modeling and ChatGPT: What We Can Learn about Teachers and Topics during COVID-19 Pandemic-Related School Closures (SciPress, 2024) Weigand, Anna C.; Jacob, Maj F.; Rauschenberger, Maria; Escalona Cuaresma, María José; Lenguajes y Sistemas Informáticos. This study examines the shifting discussions of teachers within the #twlz community on Twitter across three phases of the COVID-19 pandemic: before school closures and during the first and second school closures. We analyzed tweets from January 2020 to May 2021 to identify topics related to education, digital transformation, and the challenges of remote teaching. Using machine learning and ChatGPT, we categorized discussions that transitioned from general educational content to focused dialogues on online education tools during school closures. Before the pandemic, discussions were generally focused on education and digital transformation. During the first school closures, conversations shifted to more specific topics, such as online education and tools to adapt to distance learning. Discussions during the second school closures reflected more precise needs related to fluctuating pandemic conditions and schooling requirements. Our findings reveal a consistent increase in the specificity and urgency of the topics over time, particularly regarding digital education.
Conference paper AI-Based System for In-Bed Body Posture Identification Using FSR Sensor (Elsevier, 2024) Asadov, Akhmadbek; Gaiduk, Maksym; Ortega Ramírez, Juan Antonio; Madrid, Natividad Martinez; Seepold, Ralf; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia y Educación. España. Non-invasive sleep monitoring holds significant promise for enhancing healthcare by offering insights into sleep quality and patterns. In this context, accurate detection of body position is crucial, as it provides essential information for diagnosing and understanding the causes of various sleep disorders, including sleep apnea. The aim of this work is to develop an efficient system for sleep position detection using a minimal number of FSR (Force Sensitive Resistor) sensors and advanced machine learning techniques. A hardware setup was developed incorporating 3 FSR sensors, on-board signal processing for frequency boundary filtering and gain adjustment, an ADC (Analog-to-digital converter), and a computing unit for data processing. The collected data was then cleaned and structured before applying various machine learning models, including Logistic Regression, Random Forest Classifier, Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), and XGBoost. An experiment with 15 subjects in 4 different sleeping positions was conducted to evaluate the system. The SVC demonstrated notable performance with a test accuracy of 64%. Analysis of the results identified areas for future improvement, including better differentiation between similar positions. The study highlights the feasibility of using FSR sensors and machine learning for effective sleep position detection. However, further research is needed to improve accuracy and explore more advanced techniques. Future efforts will aim to integrate this approach into a comprehensive, unobtrusive sleep monitoring system, contributing to better healthcare services.
Conference paper Bioinspired evolutionary metaheuristic based on COVID spread for discovering numerical association rules (ACM, 2025-03) Herruzo-Lodeiro, Cristina; Rodríguez-Díaz, Francesc; Troncoso, Alicia; Martínez Ballesteros, María del Mar; Lenguajes y Sistemas Informáticos; Ministerio de Cultura e Innovación. España. The social impact and global health crisis caused by the coronavirus since late 2019 led to the development of a novel bio-inspired algorithm. This algorithm simulates the behavior and spread of the virus and is known as the Coronavirus Optimization Algorithm (CVOA). It provides several advantages over similar approaches and serves as a basis for generalizing pattern or association identification from numerical datasets. In this study, essential updates and modifications are proposed to adapt the CVOA algorithm for mining numerical association rules. These changes involve adjustments to the encoding of individuals and the infection/mutation process. Additionally, parameter values are updated, and a new fitness function is proposed to be maximized. The main objective is to obtain high-quality numerical association rules for any dataset, regardless of the number and range of attributes in the dataset. The implemented algorithm is compared to others designed for mining quantitative association rules in order to validate the results. For this reason, different datasets from the BUFA repository are used, confirming that the Coronavirus Optimization Algorithm is a promising option for discovering interesting association rules within numerical datasets.
