Ponencias (Lenguajes y Sistemas Informáticos)
Permanent URI for this collection: https://hdl.handle.net/11441/11394
Recent submissions

Ponencia Generando modelos de características mediante Large Language Models manteniendo la coherencia sintáctica y semántica (Sociedad de Ingeniería de Software y Tecnologías de Desarrollo de Software (SISTEDES), 2023) Galindo Duarte, José Ángel; Domínguez, Antonio J.; White, Jules; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (FEDER). Feature models represent the common and variable aspects of software product lines. The automated analysis of feature models has made it possible to test, maintain, and improve software product lines. Testing feature model analyses usually requires a large number of models that are as realistic as possible. Different proposals exist for generating synthetic feature models; however, existing methods do not take the semantics of domain concepts into account. This paper proposes using large language models (LLMs), such as Codex or GPT-3, to generate realistic model variants that preserve semantic coherence while maintaining syntactic validity.
Ponencia Pragmatic Random Sampling of the Linux Kernel: Enhancing the Randomness and Correctness of the conf Tool (ACM, 2024) Fernandez-Amoros, David; Galindo Duarte, José Ángel; Heradio, Ruben; Benavides Cuevas, David Felipe; Horcas Aguilera, José Miguel; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España. The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides almost 19,000 configurable options described across more than 1,600 files in the Kconfig language. As a result, many analyses of the Kernel rely on sampling its configuration space (e.g., debugging compilation errors, predicting configuration performance, finding the configuration that optimizes specific performance metrics, etc.). The Kernel can be sampled pragmatically, with its built-in tool conf, or idealistically, by translating the Kconfig files into logic formulas. The idealistic approach provides statistical guarantees for the sampled configurations, but it raises many challenging problems that have not been solved yet, such as scalability issues. This paper introduces a new version of conf called randconfig+, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig+ has been tested on 20,000 configurations generated for 10 different Kernel versions from 2003 to the present day. The experimental results show that randconfig+ is compatible with all tested Kernel versions, guarantees the correctness of the generated configurations, and increases conf’s randomness for numeric and string options.
Ponencia UVL web-based editing and analysis with flamapy.ide (ACM, 2025) Benitez, Francisco Sebastian; Galindo Duarte, José Ángel; Romero Organvídez, David; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España. Feature modeling is widely used to represent variability in software systems, but as feature models grow in size and complexity, manual analysis becomes infeasible. Automated Analysis of Feature Models (AAFM) is a set of tools and algorithms that enable the computer-aided analysis of such models. Recently, the AAFM community has made an effort to enable the interoperability of tools by means of the UVL language; however, most of the supporting tools need to execute the operations on a server. This has two main drawbacks: first, it requires users to upload the model to remote servers, raising security concerns; second, it limits the complexity of the operations that an online tool can offer. In this paper, we introduce flamapy.ide, an integrated development environment (IDE) based on the flamapy framework and designed to perform AAFM directly within the browser by relying on WASM technologies. flamapy.ide provides SAT and BDD solvers for efficient feature model analysis and offers support for handling UVL files. It also enables the configuration and visualization of such models using a fully client-side approach. This tool brings AAFM capabilities to web-based platforms, eliminating the need for server-side computation while ensuring ease of use and accessibility.
Ponencia Analyzing Tweets Using Topic Modeling and ChatGPT: What We Can Learn about Teachers and Topics during COVID-19 Pandemic-Related School Closures (SciPress, 2024) Weigand, Anna C.; Jacob, Maj F.; Rauschenberger, Maria; Escalona Cuaresma, María José; Lenguajes y Sistemas Informáticos. This study examines the shifting discussions of teachers within the #twlz community on Twitter across three phases of the COVID-19 pandemic: before school closures and during the first and second school closures. We analyzed tweets from January 2020 to May 2021 to identify topics related to education, digital transformation, and the challenges of remote teaching. Using machine learning and ChatGPT, we categorized discussions that transitioned from general educational content to focused dialogues on online education tools during school closures. Before the pandemic, discussions were generally focused on education and digital transformation. During the first school closures, conversations shifted to more specific topics, such as online education and tools to adapt to distance learning. Discussions during the second school closures reflected more precise needs related to fluctuating pandemic conditions and schooling requirements. Our findings reveal a consistent increase in the specificity and urgency of the topics over time, particularly regarding digital education.
Ponencia AI-Based System for In-Bed Body Posture Identification Using FSR Sensor (Elsevier, 2024) Asadov, Akhmadbek; Gaiduk, Maksym; Ortega Ramírez, Juan Antonio; Madrid, Natividad Martinez; Seepold, Ralf; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia y Educación. España. Non-invasive sleep monitoring holds significant promise for enhancing healthcare by offering insights into sleep quality and patterns. In this context, accurate detection of body position is crucial, as it provides essential information for diagnosing and understanding the causes of various sleep disorders, including sleep apnea. The aim of this work is to develop an efficient system for sleep position detection using a minimal number of FSR (Force Sensitive Resistor) sensors and advanced machine learning techniques. A hardware setup was developed incorporating 3 FSR sensors, on-board signal processing for frequency boundary filtering and gain adjustment, an ADC (Analog-to-Digital Converter), and a computing unit for data processing. The collected data was then cleaned and structured before applying various machine learning models, including Logistic Regression, Random Forest Classifier, Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), and XGBoost. An experiment with 15 subjects in 4 different sleeping positions was conducted to evaluate the system. The SVC demonstrated notable performance with a test accuracy of 64%. Analysis of the results identified areas for future improvement, including better differentiation between similar positions. The study highlights the feasibility of using FSR sensors and machine learning for effective sleep position detection. However, further research is needed to improve accuracy and explore more advanced techniques. Future efforts will aim to integrate this approach into a comprehensive, unobtrusive sleep monitoring system, contributing to better healthcare services.
Ponencia Bioinspired evolutionary metaheuristic based on COVID spread for discovering numerical association rules (ACM, 2025-03) Herruzo-Lodeiro, Cristina; Rodríguez-Díaz, Francesc; Troncoso, Alicia; Martínez Ballesteros, María del Mar; Lenguajes y Sistemas Informáticos; Ministerio de Cultura e Innovación. España. The social impact and global health crisis caused by the coronavirus since late 2019 led to the development of a novel bio-inspired algorithm. This algorithm simulates the behavior and spread of the virus and is known as the Coronavirus Optimization Algorithm (CVOA). It provides several advantages over similar approaches and serves as a basis for generalizing pattern or association identification from numerical datasets. In this study, essential updates and modifications are proposed to adapt the CVOA algorithm for mining numerical association rules. These changes involve adjustments to the encoding of individuals and the infection/mutation process. Additionally, parameter values are updated, and a new fitness function to be maximized is proposed. The main objective is to obtain high-quality numerical association rules for any dataset, regardless of the number and range of its attributes. The implemented algorithm is compared to others designed for mining quantitative association rules in order to validate the results. To this end, different datasets from the BUFA repository are used, confirming that the Coronavirus Optimization Algorithm is a promising option for discovering interesting association rules within numerical datasets.
Ponencia Analyzing the Evolution of Boards in Collaborative Work Management Tools (Springer Nature, 2025) Bravo Llanos, Alfonso; Cabanillas Macías, Cristina; Peña Siles, Joaquín; Resinas Arias de Reyna, Manuel; Lenguajes y Sistemas Informáticos. Board-Based Collaborative Work Management Tools (BBTs) like Trello and Microsoft Planner are widespread today. Their use includes the management of projects, static information, or processes, which is achieved by assigning and moving cards through lists representing specific states, steps, or other classification criteria. BBTs are a flexible solution, since boards, lists, and cards can be changed by the user to adapt to new situations, e.g., changes in the processes or projects. However, understanding how a board is being used is challenging because what can be seen at a glance is a static snapshot of its current state. BBTs usually produce logs that capture all the activity that has taken place within the boards. In this paper, we leverage that data by mining BBT logs to understand how boards are used and evolve over time. Specifically, we introduce an approach that aims to detect structural changes in the boards and visualize the evolution of the boards’ lists. We have analyzed 63 real-life BBT logs and tested the approach with three case studies.
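The kind of structural change this paper mines from BBT logs can be pictured with a minimal log parser. A sketch, assuming a Trello-style action log: the event names (`createList`, `updateList`) and the payload shape with an `old` sub-object are modelled on Trello's public API and are illustrative assumptions, not the authors' actual schema or change-detection approach.

```python
def list_evolution(actions):
    """Extract list-level structural changes (creations and renames) from a
    Trello-style action log. Event names and payload layout are assumptions
    modelled on Trello's API, for illustration only."""
    events = []
    for action in actions:
        data = action.get("data", {})
        if action["type"] == "createList":
            events.append(("created", data["list"]["name"]))
        elif action["type"] == "updateList" and "name" in data.get("old", {}):
            # Trello records the previous value of a changed field under "old"
            events.append(("renamed", data["old"]["name"], data["list"]["name"]))
    return events

# A toy log: two lists are created, then one is renamed
log = [
    {"type": "createList", "data": {"list": {"name": "To Do"}}},
    {"type": "createList", "data": {"list": {"name": "Doing"}}},
    {"type": "updateList",
     "data": {"list": {"name": "In Progress"}, "old": {"name": "Doing"}}},
]
changes = list_evolution(log)
```

Replaying such events in order yields the timeline of a board's lists, which is the raw material for the visualizations the paper describes.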
Ponencia Workstream and board-based collaborative work management tools to analyze and improve productivity at work (CEUR-WS, 2022) Bravo Llanos, Alfonso; Lenguajes y Sistemas Informáticos; MCIN/AEI/10.13039/501100011033. Data and software are currently changing and growing by leaps and bounds. Consequently, the way we used to understand the world has radically changed. Many authors refer to this era of change and exponential growth as the 4th Industrial Revolution. It is a context characterized by volatility, uncertainty, complexity, and ambiguity, among many other problems, known as a VUCA environment. This situation worsens work performance, preventing workers from maintaining a high level of productivity. These problems affect knowledge workers in many different ways in their day-to-day work, as studied in [1], which derived an ordered taxonomy of fourteen challenges that negatively affect productivity. On the other hand, Workstream Collaborative Work Management Tools (WSCTs hereafter), such as Slack or Teams, are among the most popular software applications nowadays. The flexibility they offer and their varied and powerful features make them one of the best means to deal with the problems arising from the VUCA context. In particular, we have already explored one of these possible solutions for improving working performance in these environments in [2]. In that case, we focused on board-based collaborative work management tools (BBTs hereafter), such as Trello or Planner, defining eight design patterns for boards in these tools. This PhD project seeks a twofold contribution, aligned with the two works cited above. First, we want to extend the analysis of the challenges that negatively affect productivity in knowledge work, understanding their causes, the relationships between them, and the impact of each one, based on [1]. We will use WSCTs for that purpose, focusing on the four most important productivity challenges and their solutions. Second, we will delve into the study of BBTs, analyzing how they are used and how their use can help improve productivity (and defining how to do it). In this way, we want to study whether better use of BBTs leads to better productivity, by how much, and in what way (which problems it solves or mitigates).
Ponencia Some Initial Guidelines for Building Reusable Quantum Oracles (Springer Nature, 2024) Sanchez-Rivero, Javier; Talaván, Daniel; Garcia-Alonso, Jose; Ruiz Cortés, Antonio; Murillo, Juan Manuel; Lenguajes y Sistemas Informáticos. The evolution of quantum hardware is highlighting the need for advances in quantum software engineering that help developers create quantum software with good quality attributes. Specifically, reusability has traditionally been considered an important quality attribute. Increasing the reusability of quantum software will help developers create more complex solutions. This work focuses on the reusability of oracles, a well-known pattern of quantum algorithms used to compute functions that serve as input to other algorithms. In this work, we present several guidelines for making reusable quantum oracles. These guidelines cover three different levels of oracle reuse: the reasoning behind the oracle algorithm, the function that creates the oracle, and the oracle itself. To demonstrate these guidelines, two different implementations of a range-of-integers oracle have been built by reusing simpler oracles. The quality of these implementations is evaluated in terms of functionality and quantum circuit depth. We then provide an example of documentation following the proposed guidelines for both implementations, to foster reuse of the provided oracles. This work aims to be a first point of discussion on quantum software reusability.
Ponencia Operating with Quantum Integers: An Efficient ‘Multiples of’ Oracle (Springer Nature, 2023) Sanchez-Rivero, Javier; Talaván, Daniel; Garcia-Alonso, Jose; Ruiz Cortés, Antonio; Murillo, Juan Manuel; Lenguajes y Sistemas Informáticos. Quantum algorithms are a very promising field. However, creating and manipulating these kinds of algorithms is a very complex task, especially for software engineers used to working at higher abstraction levels. The work presented here is part of broader research focused on providing higher-abstraction-level operations to manipulate integers encoded as a superposition. These operations are designed to be composable and efficient, so quantum software developers can reuse them to create more complex solutions. Specifically, in this paper we present a ‘multiples of’ operation. To validate this operation, we show several examples of quantum circuits and their simulations, including its composition possibilities. A theoretical analysis proves that both the complexity of the required classical calculations and the depth of the circuit scale linearly with the number of qubits. Hence, the ‘multiples of’ oracle is efficient in terms of complexity and depth. Finally, an empirical study of the circuit depth is conducted to further reinforce the theoretical analysis.
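Functionally, the ‘multiples of’ oracle described above is a phase operator that flips the sign of every basis state encoding a multiple of k. The NumPy sketch below builds that operator as a dense diagonal matrix to show the intended input/output behaviour only; it says nothing about the paper's actual linear-depth circuit construction.

```python
import numpy as np

def multiples_of_oracle(n_qubits, k):
    """Dense-matrix stand-in for a 'multiples of k' phase oracle:
    flips the sign of |x> whenever x % k == 0 (0 counts as a multiple).
    Illustrative only -- the paper builds an efficient circuit, not a matrix."""
    dim = 2 ** n_qubits
    phases = np.array([-1.0 if x % k == 0 else 1.0 for x in range(dim)])
    return np.diag(phases)

# Applying the oracle to a uniform 3-qubit superposition marks the multiples of 3
oracle = multiples_of_oracle(3, 3)
state = np.full(8, 1 / np.sqrt(8))
marked = oracle @ state  # amplitudes of |0>, |3>, |6> become negative
```

Because the matrix is diagonal with ±1 entries, it is trivially unitary, which is what lets such oracles compose with other circuit operations.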
Ponencia Automatic Generation of an Efficient Less-Than Oracle for Quantum Amplitude Amplification (IEEE, 2023) Sanchez-Rivero, Javier; Talavan, Daniel; Garcia-Alonso, Jose; Ruiz Cortés, Antonio; Murillo, Juan Manuel; Lenguajes y Sistemas Informáticos. Grover's algorithm is a well-known contribution to quantum computing. It searches for one value within an unordered sequence faster than any classical algorithm. A fundamental part of this algorithm is the so-called oracle, a quantum circuit that marks the quantum state corresponding to the desired value. A generalization of it is the oracle for Amplitude Amplification, which marks multiple desired states. In this work we present a classical algorithm that builds a phase-marking oracle for Amplitude Amplification. This oracle performs a less-than operation, marking states that represent natural numbers smaller than a given one. Results of both simulations and experiments are shown to prove its functionality. This less-than oracle implementation works on any number of qubits and does not require any ancilla qubits. Regarding depth, the proposed implementation is compared with the one generated by Qiskit's automatic method, UnitaryGate. We show that the depth of our less-than oracle implementation is always lower. This difference is significant enough for our method to outperform UnitaryGate on real quantum hardware.
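As a rough functional illustration (not the authors' ancilla-free circuit construction), a less-than phase oracle flips the sign of every basis state below the threshold, and one oracle-plus-diffusion step of Amplitude Amplification then boosts the probability of measuring those states. The NumPy sketch below demonstrates this on 4 qubits with threshold 3.

```python
import numpy as np

def less_than_oracle(n_qubits, threshold):
    """Dense-matrix stand-in for a less-than phase oracle: flips the sign
    of |x> for x < threshold. Illustrative only, not the paper's circuit."""
    phases = np.ones(2 ** n_qubits)
    phases[:threshold] = -1.0
    return np.diag(phases)

def diffusion(n_qubits):
    """Grover diffusion operator 2|s><s| - I about the uniform state |s>."""
    dim = 2 ** n_qubits
    s = np.full((dim, 1), 1 / np.sqrt(dim))
    return 2 * (s @ s.T) - np.eye(dim)

n, t = 4, 3
uniform = np.full(2 ** n, 1 / np.sqrt(2 ** n))
# One Amplitude Amplification iteration: oracle, then diffusion
after = diffusion(n) @ less_than_oracle(n, t) @ uniform
p_marked = float(np.sum(after[:t] ** 2))  # probability of measuring x < 3
```

Before the iteration, the marked states hold probability 3/16; a single iteration raises it substantially, which is the effect the oracle exists to enable.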
Ponencia Interest-Driven Recommendations To Support Time Performance Analyses (CEUR-WS, 2023) Capitán Agudo, Carlos; Lenguajes y Sistemas Informáticos; Ministerio de Educación. España. This paper outlines doctoral research that investigates how to support analyses addressing time performance questions. To do so, we investigate how process analysts behave when answering such questions. Building on insights into their behavior, a system will be developed that recommends the most interesting insights to a given analyst, with the objective of automating the analysis process.
Ponencia On Programming Variability with Large Language Model-based Assistant (ACM, 2023) Acher, Mathieu; Galindo Duarte, José Ángel; Jézéquel, Jean Marc; Lenguajes y Sistemas Informáticos. Programming variability is central to the design and implementation of software systems that can adapt to a variety of contexts and requirements, providing increased flexibility and customization. Managing the complexity that arises from having multiple features, variations, and possible configurations is known to be highly challenging for software developers. In this paper, we explore how large language model (LLM)-based assistants can support the programming of variability. We report on new approaches made possible with LLM-based assistants, such as: implementing features and variations as prompts; augmenting variability with LLM-based domain knowledge; and seamlessly implementing variability in different kinds of artefacts, programming languages, and frameworks, at different binding times (compile-time or run-time). We are sharing our data (prompts, sessions, generated code, etc.) to support the assessment of the effectiveness and robustness of LLMs for variability-related tasks.
Ponencia 24th International workshop on configuration (CONFWS'22) (ACM, 2022) Forza, Cipriano; Galindo Duarte, José Ángel; Stettinger, Martin; Vareilles, Élise; Lenguajes y Sistemas Informáticos. Configuration is the composition of product models of complex, variant systems from parameterizable components. It traditionally relies on knowledge-representation formalisms that enable reasoning about, and automation of, different software engineering tasks when developing such systems. The main goal of this workshop is to promote research in application areas related to configuration. Traditionally, this workshop has managed to bridge the gap between academia and industry with contributions from both, thus enabling the exchange of ideas and challenges. It provides a forum for discussing ideas, evaluations, and experiences, especially in using AI techniques for solving configuration problems.
Ponencia Enterprise information integration (IO Press, 2016) Hernández Salmerón, Inmaculada Concepción; Lenguajes y Sistemas Informáticos. Integrating a web application into an automated business process requires designing wrappers that take user queries as input and map them onto the search forms that the application provides. Such wrappers build on automatic navigators, which are responsible for navigating to the pages that provide the information required to answer the original user queries. A navigator relies on a web page classifier that discerns which pages provide the information and which do not. In the literature, there are many proposals to classify web pages, but none of them fulfills the requirements for a web page classifier in a navigator context. We address the problem of designing an unsupervised web page classifier that builds solely on the information provided by the URLs and does not require extensive crawling of the site being analysed. Our contribution is CALA, a new automated proposal to generate URL-based web page classifiers. Its salient features are that it does not need to crawl the complete web site beforehand, it is unsupervised, it does not need to download a page before classifying it, and it is computationally tractable. It has been validated by a number of experiments using real-world, top-visited web sites.
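The core idea behind CALA — classifying a page from its URL alone, without downloading it — can be sketched by abstracting volatile URL segments into wildcard patterns, so that structurally similar pages collapse onto the same class. The snippet below is a deliberately naive stand-in for CALA's pattern construction; the segment-matching regex and the example URLs are assumptions for illustration only.

```python
import re
from urllib.parse import urlparse

def url_pattern(url):
    """Map a URL to a coarse structural pattern by replacing numeric or
    hash-like path segments with '*'. A naive stand-in for CALA's
    URL-based patterns, for illustration only."""
    segments = urlparse(url).path.strip("/").split("/")
    abstracted = ("*" if re.fullmatch(r"\d+|[0-9a-f]{8,}", s) else s
                  for s in segments)
    return "/" + "/".join(abstracted)

# Two product pages share a pattern, so a navigator can treat them as the
# same page class; a search page falls into a different class
p1 = url_pattern("http://shop.example.com/product/1234")
p2 = url_pattern("http://shop.example.com/product/5678")
p3 = url_pattern("http://shop.example.com/search")
```

Grouping crawled URLs by such patterns (unsupervised, no page downloads) is the flavour of classification a navigator needs to decide where to go next.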
Ponencia Advisory. Una herramienta para identificar los riesgos de seguridad (2022) Márquez Trujillo, Antonio Germán; Varela Vaca, Ángel Jesús; Galindo Duarte, José Ángel; Lenguajes y Sistemas Informáticos. In the development of a modern software project, it is common to delegate part of the functionality to third-party libraries or dependencies. This extensive use of dependencies can introduce security problems into the software being developed, and it affects more and more software projects, given the need to track every vulnerability of these dependencies. To alleviate this problem, we present Advisory, a tool that applies automated variability-analysis techniques to the security analysis of software projects.
Ponencia Simultaneous Evolutionary Optimization of Features Subset and Clusters Number (ACM, 2023-07-24) Martín, José David; Pontes Balanza, Beatriz; Riquelme Santos, José Cristóbal; Lenguajes y Sistemas Informáticos. Cluster analysis is a popular technique used to identify patterns in data mining. However, evaluating the accuracy of a clustering task is a challenging process that remains an open issue. In this work, we focus on two factors that significantly influence clustering performance: the optimal number of clusters and the subset of relevant attributes. While the former has been extensively studied, the latter has received comparatively less attention, especially in relation to its equivalent in supervised learning. Despite their clear interdependence, these factors have rarely been studied together. In this context, we propose an evolutionary algorithm that simultaneously optimizes both factors, using ad-hoc variations of internal validation indices as a fitness function.
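To make the joint search space concrete, the sketch below scores a candidate (feature subset, number of clusters) pair with a crude internal index: negated within-cluster scatter over the selected features, lightly penalized by k. This only illustrates what "simultaneously optimizing both factors" means; the paper's evolutionary operators and ad-hoc validation indices are not reproduced here, and the penalty weight is an arbitrary assumption.

```python
import numpy as np

def kmeans_labels(X, k, iters=25, seed=0):
    """Tiny Lloyd's k-means, just enough to evaluate a candidate solution."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def fitness(X, feature_mask, k):
    """Score a (feature subset, cluster count) candidate: negated
    within-cluster scatter minus a small penalty on k (weight arbitrary)."""
    if not feature_mask.any():
        return -np.inf  # an empty feature subset is not a valid clustering
    Xs = X[:, feature_mask]
    labels = kmeans_labels(Xs, k)
    scatter = sum(((Xs[labels == j] - Xs[labels == j].mean(axis=0)) ** 2).sum()
                  for j in np.unique(labels))
    return -scatter - 0.1 * k
```

An evolutionary loop would then mutate the boolean feature mask and k jointly, keeping the highest-fitness individuals across generations.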
Ponencia Jabuti CE: A Tool for Specifying Smart Contracts in the Domain of Enterprise Application Integration (ScitePress, 2024) Teles-Borges, Mailson; Bocanegra, Jose; Dornelles, Eldair F.; Sawicki, Sandro; Reina Quintero, Antonia María; Molina-Jimenez, Carlos; Roos-Frantz, Fabricia; Frantz, Rafael Z.; Lenguajes y Sistemas Informáticos. Some decentralised applications (such as blockchains) take advantage of the services that smart contracts provide. Currently, each blockchain platform is tightly coupled to a particular contract language; for example, Ethereum supports Serpent and Solidity, while Hyperledger prefers Go. To ease contract reuse, contracts can be specified in platform-independent languages and automatically translated into the languages of the target platforms. With this approach, the task is reduced to specifying the contract in the statements of the platform-independent language. This can be tedious and error-prone unless the language is accompanied by supportive tools. This paper presents Jabuti CE, a model-driven tool that assists users of Jabuti DSL in specifying platform-independent contracts for Enterprise Application Integration. We have implemented Jabuti CE as an extension for Visual Studio Code.
Ponencia Procesamiento de los resultados obtenidos del trabajo con los foros didácticos alojados en cursos virtuales para someterlos a posterior análisis: Realización de informes automáticos partiendo de los correspondientes ficheros logs (IEEE, 2016-06) Romero Moreno, Luisa María; Enríquez de Salamanca Ros, Fernando; Lenguajes y Sistemas Informáticos. eLearning, now established in ever more institutions and companies, continues to evolve and improve, turning its methods into a more flexible learning tool used by a growing number of teachers. Within the natural evolution of these methods, the discipline of Learning Analytics (LA) has emerged, which seeks to structure and organize the enormous volume of data that can be obtained from educational work carried out with digital media. We present here a work whose objective was the design of a piece of software that, from a CSV file exported from a forum of the Moodle platform, provides information on the interactions between the students of a course, among other data. This information is presented in a text report and in an SQL-format file. A suitable study of these reports then makes it possible to draw substantiated conclusions about the work carried out with these media and to continue processing the data with other tools.
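The kind of processing described — turning a Moodle forum's CSV export into a per-student interaction report — can be sketched as follows. The column names `userfullname` and `parent`, and the convention that `parent` equal to 0 marks a thread-opening post, are assumptions for illustration; real Moodle exports vary by version and must be checked against the actual file header.

```python
import csv
from collections import Counter
from io import StringIO

def forum_report(csv_text):
    """Summarise posts and replies per participant from a forum CSV export.
    Assumed columns: 'userfullname' (author) and 'parent' (0 = opens a thread)."""
    posts, replies = Counter(), Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        author = row["userfullname"]
        posts[author] += 1
        if row["parent"] != "0":  # a reply to another message in the thread
            replies[author] += 1
    lines = ["participant,posts,replies"]
    lines += [f"{u},{posts[u]},{replies[u]}" for u in sorted(posts)]
    return "\n".join(lines)

# A toy export: Ana opens a thread, Luis replies, Ana follows up
sample = (
    "userfullname,parent,message\n"
    "Ana,0,Opening question\n"
    "Luis,1,First reply\n"
    "Ana,2,Follow-up\n"
)
report = forum_report(sample)
```

The same counts could be emitted as SQL INSERT statements instead of CSV lines, mirroring the dual text/SQL output the paper describes.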
Ponencia Methodology with python technology and social network analysis tools to analyze the work of students collaborating in Facebook groups (IEEE, 2019) Romero Moreno, Luisa María; Lenguajes y Sistemas Informáticos. Universities and institutions began to create their virtual campuses using eLearning platforms, whether commercial or not. Within these platforms, didactic forums have stood out among other tools: spaces where students, among themselves and with the teaching teams, could discuss and, in general, build knowledge in a collaborative context. The data coming from these forums have been analyzed with different techniques and have yielded some very interesting conclusions. However, in recent academic years there has been a clear trend of migration of the collaborative work that previously took place in the forums towards social networks and mobile devices. This poses a problem, since these tools are not coordinated by the teaching teams. A preliminary work analyzes this trend and points out the consequences to which it may give rise. The present work describes a methodology to automate the extraction of information from Facebook working groups that have been designed by the students themselves.
