Articles (Lenguajes y Sistemas Informáticos)

Permanent URI for this collection: https://hdl.handle.net/11441/11392


Recent submissions

Showing 1 - 20 of 924
  • Open Access Article
    BinRec: addressing data sparsity and cold-start challenges in recommender systems with biclustering
    (Springer Science and Business Media LLC, 2025-07-01) Rodríguez-Baena, Domingo; Gómez-Vela, Francisco A.; Lopez-Fernandez, Aurelio; García-Torres, Miguel; Divina, Federico; Lenguajes y Sistemas Informáticos; Universidad Pablo de Olavide
    Recommender Systems help users make decisions in different fields, such as what to purchase or which movies to watch. The User-Based Collaborative Filtering (UBCF) approach is one of the most commonly used techniques for developing these software tools. It is based on the idea that users who have shared similar tastes in the past will almost certainly share similar tastes in the future. As a result, determining the users nearest to the one for whom recommendations are sought (the active user) is critical. However, the massive growth of online commercial data has made this task especially difficult. Consequently, biclustering techniques have been used in recent years to perform a local search for the nearest users in subgroups of users with similar rating behaviour over a subgroup of items (biclusters), rather than searching the entire rating database. Nevertheless, due to the large size of these databases, the number of biclusters generated can be extremely high, making their processing very complex. In this paper we propose BinRec, a novel UBCF approach based on biclustering. BinRec simplifies the search for neighbouring users by determining which ones are nearest to the active user based on the number of biclusters shared by the users. Experimental results show that BinRec outperforms other state-of-the-art recommender systems, with a remarkable improvement in environments with high data sparsity. The flexibility and scalability of the method position it as an efficient alternative for common collaborative filtering problems such as sparsity and cold-start.
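    A minimal sketch (not the authors' implementation) of the neighbour-selection idea described above, assuming bicluster membership has already been computed by some biclustering algorithm; the user ids and data are hypothetical:

    ```python
    from collections import Counter

    def nearest_by_shared_biclusters(active_user, biclusters, k=5):
        """Rank candidate neighbours by the number of biclusters they share
        with the active user. `biclusters` is a list of sets of user ids
        (the user dimension of each bicluster), assumed precomputed."""
        shared = Counter()
        for bicluster_users in biclusters:
            if active_user in bicluster_users:
                for user in bicluster_users:
                    if user != active_user:
                        shared[user] += 1
        return [user for user, _ in shared.most_common(k)]

    # toy example: three biclusters over six users
    biclusters = [{"u1", "u2", "u3"}, {"u1", "u3", "u5"}, {"u2", "u4", "u6"}]
    print(nearest_by_shared_biclusters("u1", biclusters, k=2))  # ['u3', 'u2'] (ties broken by insertion order)
    ```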
  • Open Access Article
    Biclustering in bioinformatics using big data and High Performance Computing applications: challenges and perspectives, a review
    (Springer, 2025-07) López Fernández, Aurelio; Gómez-Vela, Francisco A.; Rodríguez-Baena, Domingo S.; Delgado-Chaves, Fernando M.; Gonzalez-Dominguez, Jorge; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    Biclustering is a powerful machine learning technique that simultaneously groups rows and columns in matrix-based datasets. Applied to gene expression data in bioinformatics, its use has expanded alongside the rapid growth of high-throughput sequencing technologies, leading to massive and complex biological datasets. This review aims to examine how biclustering methods and their validation strategies are evolving to meet the demands of High Performance Computing (HPC) and Big Data environments. We present a structured classification of existing approaches based on the computational paradigms they employ, including MPI/OpenMP, Apache Hadoop/Spark, and GPU/CUDA. By synthesising these developments, we highlight current trends and outline key research challenges. The knowledge gathered in this work may support researchers in adapting and scaling biclustering algorithms to analyse large-scale biomedical data more efficiently. Our contribution is intended to bridge the gap between algorithmic innovation and computational scalability in the context of bioinformatics and data-intensive applications.
  • Open Access Article
    Class integration of ChatGPT and learning analytics for higher education
    (Wiley, 2024) Civit, Miguel; Escalona Cuaresma, María José; Cuadrado Mendez, Francisco José; Reyes-de-Cozar, Salvador; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    Background: Active Learning with AI-tutoring in Higher Education tackles dropout rates. Objectives: To investigate teaching-learning methodologies preferred by students. AHP is used to evaluate a ChatGPT-based student-led learning methodology, which is compared to another active learning methodology and a traditional methodology. A study with Learning Analytics is conducted to evaluate the alternatives and help students select the best strategies according to their preferences. Methods: Comparative study of three learning methodologies in a counterbalanced single-group design with 33 university students. It follows a pre-test/post-test approach using AHP and SAM. HRV and GSR are used for the estimation of emotional states. Findings: Criteria related to in-class experiences were valued higher than test-related criteria. ChatGPT integration was well regarded compared to well-established methodologies. Student emotion self-assessment correlated with physiological measures, validating the Learning Analytics used. Conclusions: The proposed AI-Tutoring classroom integration model functions effectively at increasing engagement and avoiding false information. AHP with physiological measuring allows students to determine preferred learning methodologies, avoiding biases and acknowledging minority groups.
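    As a small illustration of the AHP step mentioned above (not the study's data), priority weights can be approximated from a reciprocal pairwise comparison matrix with the geometric-mean method; the matrix values below are hypothetical Saaty-scale judgements:

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Approximate AHP priority weights from a reciprocal pairwise
        comparison matrix using the geometric mean of each row."""
        pairwise = np.asarray(pairwise, dtype=float)
        geo_means = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[1])
        return geo_means / geo_means.sum()

    # hypothetical comparison of three methodologies on one criterion
    matrix = [[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]]
    print(ahp_weights(matrix))  # roughly [0.65, 0.23, 0.12]
    ```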
  • Open Access Article
    An ontological knowledge-based method for handling feature model defects due to dead feature
    (Elsevier, 2024-07) Bhushan, Megha; Galindo Duarte, José Ángel; Negi, A.; Samant, P.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    The specifications of a given domain are addressed by a portfolio of software products known as a Software Product Line (SPL). A Feature Model (FM) supports domain engineering by modelling domain knowledge along with the variability among SPL products. The quality of the FM is one of the most significant factors for a successful SPL and for attaining high-quality software products. However, the benefits of an SPL can be reduced by defects in the FM, and the Dead Feature (DF) is one such defect. Several approaches exist in the literature to detect defects due to DFs in FMs, but only a few can handle their sources and solutions, and these are cumbersome and difficult for humans to understand. This paper describes an ontological knowledge-based method for handling defects due to DFs in FMs. It specifies the FM as an ontology-based knowledge representation. Rules based on first-order logic are created and implemented in Prolog to detect defects due to DFs together with their sources, as well as to suggest solutions to resolve these defects. A case study of a product line available in the SPLOT repository is used to illustrate the proposed work. Experiments are performed with real-world FMs of varied sizes from SPLOT and with FMs created with the FeatureIDE tool. The results show the efficiency, scalability (up to models with 32,000 features) and accuracy of the presented method. Therefore, the reuse of DF-free knowledge enables deriving defect-free products from the SPL and eventually enhances the quality of the SPL.
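    The paper's method is ontology- and Prolog-based; purely as an illustration of what a dead feature is, the hedged Python sketch below brute-forces the configurations of a toy Boolean feature model and reports features that can never be selected:

    ```python
    from itertools import product

    FEATURES = ["root", "A", "C"]

    def is_valid(cfg):
        """Toy feature model: root is always selected, A is a mandatory
        child of root, C is an optional child of root, and a cross-tree
        constraint states that A excludes C (which makes C dead)."""
        root, a, c = cfg["root"], cfg["A"], cfg["C"]
        return (root
                and a == root          # A mandatory under root
                and (not c or root)    # C only allowed if root is selected
                and not (a and c))     # A excludes C

    def dead_features():
        """A feature is dead if it appears in no valid configuration."""
        alive = set()
        for values in product([False, True], repeat=len(FEATURES)):
            cfg = dict(zip(FEATURES, values))
            if is_valid(cfg):
                alive.update(f for f in FEATURES if cfg[f])
        return [f for f in FEATURES if f not in alive]

    print(dead_features())  # ['C'] -- C can never be selected
    ```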
  • Open Access Article
    Advances in time series forecasting: innovative methods and applications
    (Amer Inst, 2021-08-14) Torres, F.J.; Martínez Ballesteros, María del Mar; Troncoso, A.; Martínez Álvarez, F.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    Time series forecasting plays a critical role in various domains, including finance, economics, environmental science, and healthcare, and it has undergone significant evolution with the increasing availability of data and advancements in machine learning and statistical methods. This special issue aimed to bring together the latest advances, innovations, and research in the field of time series forecasting. A significant theme in this collection is the application of advanced ensemble learning techniques to improve forecasting accuracy.
  • Open Access Conference Paper
    A Methodological Approach to Model-Driven Software Development for Quality Assurance in Metaverse Environments
    (Ceur-Ws, 2024) Enamorado Díaz, Elena; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    The metaverse has potential applications in many fields, for example, health care. Due to the particularities of metaverse technology (immersive access through virtual and augmented reality, the combination of people with digital twins, etc.), it is necessary to define software development methodologies that guarantee quality during the development of applications in this context. The thesis project presented in this article aims to identify and analyze existing proposals in the current scientific literature in order to establish the state of the art, and then to propose a methodological framework to ensure quality in the development of applications for the metaverse. Finally, it proposes using the Model-Driven Engineering (MDE) paradigm and validating the proposal with a real application case in the healthcare field.
  • Open Access Article
    UVL: Feature modelling with the Universal Variability Language
    (Elsevier, 2025-01) Benavides Cuevas, David Felipe; Sundermann, Chico; Feichtinger, Kevin; Galindo Duarte, José Ángel; Rabiser, Rick; Thüm, Thomas; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    Feature modelling is a cornerstone of software product line engineering, providing a means to represent software variability through features and their relationships. Since its inception in 1990, feature modelling has evolved through various extensions, and after three decades of development there is a growing consensus on the need for a standardised feature modelling language. Despite multiple endeavours to standardise variability modelling and the creation of various textual languages, researchers and practitioners continue to use their own approaches, impeding effective model sharing. In 2018, a collaborative initiative was launched by a group of researchers to develop a novel textual language for representing feature models. This paper introduces the outcome of this effort: the Universal Variability Language (UVL), which is designed to be human-readable and serves as a pivot language for diverse software engineering tools. The development of UVL drew upon community feedback and leveraged established literature in the field of variability modelling. The language is structured into three levels (Boolean, Arithmetic, and Type) and allows for language extensions to introduce additional constructs enhancing its expressiveness. UVL is integrated into various existing software tools, such as FeatureIDE and flamapy, and is maintained by a consortium of institutions. All tools that support the language are released in an open-source format, complemented by dedicated parser implementations for Python and Java. Beyond academia, UVL has found adoption within a range of institutions and companies. It is envisaged that UVL will become the language of choice in the future for a multitude of purposes, including knowledge sharing, educational instruction, and tool integration and interoperability. We envision UVL as a pivotal solution, addressing the limitations of prior attempts and fostering collaboration and innovation in the domain of software product line engineering.
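    A hedged sketch of the kind of model UVL's Boolean level expresses (plain Python, not UVL syntax or the flamapy API): a tiny feature tree with a mandatory child, an optional child, and one cross-tree constraint, plus a configuration check against it. All feature names are hypothetical:

    ```python
    # Hypothetical Boolean-level model: parent -> mandatory/optional children,
    # plus one cross-tree constraint (Cache => Logging).
    TREE = {"Server": {"mandatory": ["Logging"], "optional": ["Cache"]}}
    CROSS_TREE = [lambda cfg: not cfg.get("Cache") or cfg.get("Logging")]

    def valid(cfg):
        """Check a configuration (feature name -> bool) against the toy model."""
        if not cfg.get("Server"):
            return False
        for parent, groups in TREE.items():
            for child in groups["mandatory"]:
                if cfg.get(parent) and not cfg.get(child):
                    return False          # mandatory child missing
            for child in groups["mandatory"] + groups["optional"]:
                if cfg.get(child) and not cfg.get(parent):
                    return False          # child selected without its parent
        return all(rule(cfg) for rule in CROSS_TREE)

    print(valid({"Server": True, "Logging": True, "Cache": True}))   # True
    print(valid({"Server": True, "Logging": False, "Cache": True}))  # False
    ```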
  • Open Access Article
    Towards high-quality informatics K-12 education in Europe: key insights from the literature
    (Springer Open, 2025-02-05) Sampson, Demetrios; Kampylis, Panagiotis; Moreno León, Jesús; Bocconi, Stefania; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    This paper explores the evolving landscape of informatics education in European primary and secondary schools, analysing academic and grey literature to define the state of play and open questions related to ‘high-quality informatics education’. It underlines the strategic importance of promoting high-quality informatics education to prepare students for life and work in the digital era, contributing to European societies and economies’ social and economic resilience. Drawing on a review of over 180 recent academic publications, policy documents, and grey literature, it provides an overview of how informatics education is being implemented across Europe and beyond, highlighting recent curricular developments, pedagogical practices, and policy initiatives. The paper also identifies and analyses key open issues related to high-quality informatics education, organised into four clusters: student-related (e.g., equity and inclusion), teacher-related (e.g., professional development, shortage of qualified teachers), school-related (e.g., the need for a whole-school approach) and curriculum- and resource-related (e.g., competing curriculum priorities, quality of teaching and learning materials). Finally, the paper offers recommendations for policymakers, researchers, and practitioners (school leaders and educators) related to the key open issues of high-quality K-12 informatics education. Overall, the paper contributes to the discussion on high-quality K-12 informatics education in Europe towards identifying and addressing major challenges for equitable access to quality informatics education for all European K-12 students.
  • Open Access Article
    The IDL tool suite: Specifying and analyzing inter-parameter dependencies in web APIs
    (Elsevier, 2025) Barakat, Saman; Martín López, Alberto; Müller Cejás, Carlos; Segura Rueda, Sergio; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    Web APIs may include inter-parameter dependencies that limit how input parameters can be combined to call services correctly. These dependencies are extremely common, appearing in 4 out of every 5 APIs. This paper presents the IDL tool suite, a set of software tools for managing inter-parameter dependencies in web APIs. The suite includes a specification language (IDL), an OpenAPI Specification extension (IDL4OAS), an analysis engine (IDLReasoner), a web API, a playground, an AI chatbot, and a website. We also highlight several contributions by different groups of authors where the IDL tool suite has proven useful in the domains of automated testing, code generation, and API gateways. To date, the IDL tool suite has contributed to the detection of more than 200 bugs in industrial APIs, including GitHub, Spotify, and YouTube, among others. Also, IDL has been used to boost automated code generation, generating up to 10 times more code than state-of-the-art generators for web APIs.
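    A hedged sketch of what an inter-parameter dependency check can look like (hand-coded predicates in Python, not IDL syntax or the IDLReasoner API); the operation, parameter names, and dependencies are hypothetical:

    ```python
    def only_one(*names):
        """Exactly one of the given parameters must be present."""
        return lambda params: sum(name in params for name in names) == 1

    def if_then(cond_name, then_name):
        """If cond_name is present, then then_name must be present too."""
        return lambda params: cond_name not in params or then_name in params

    # hypothetical dependencies for an imaginary /search operation
    DEPENDENCIES = [
        only_one("query", "ids"),    # exactly one way to select items
        if_then("offset", "limit"),  # offset only makes sense with limit
    ]

    def valid_request(params):
        return all(dep(params) for dep in DEPENDENCIES)

    print(valid_request({"query": "piano", "limit": 10}))                # True
    print(valid_request({"query": "piano", "ids": "1,2", "offset": 5}))  # False
    ```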
  • Open Access Article
    SmartRPA: Generating software robots from user interface logs
    (Elsevier, 2025) Agostinelli, S.; Hohenadl, T.; Marrella, A.; Martínez Rojas, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    Robotic Process Automation (RPA) is a maturing technology in the field of Business Process Management (BPM) that automates intensive routine tasks previously performed by a human user on the User Interface (UI) of a computer system, by means of a software robot. To date, RPA tools available in the market strongly rely on the ability of human experts to manually implement the routines to automate. This work addresses the limitations of current manual RPA development by introducing SmartRPA, a cross-platform software tool. SmartRPA analyzes UI logs of past routine executions to generate software robots capable of handling intermediate user inputs, thereby reducing development time and error rates.
  • Open Access Article
    Process Mining Without Perfect Data? Anne Rozinat Says Yes!: A Practitioner’s View on Event Log Quality
    (Springer, 2025) Río Ortega, Adela del; Beerepoot, Iris; Van der Aa, Han; Evermann, Joerg; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    Anne Rozinat has been a process mining enthusiast for more than two decades. She holds a PhD degree in process mining from the Eindhoven University of Technology (TU/e). Together with Christian Günther, she is a co-founder of one of the oldest process mining tool vendors in existence: Fluxicon (since 2009). Their Disco tool is used by professionals and has a long-standing tradition of being used by research groups and teachers all over the world, thanks to their Academic Initiative. Fluxicon’s Flux Capacitor blog and Process Mining Café regularly provide insights on the intersection of industry practice and academic research on process mining. Thanks to her wealth of experience on both sides of the process mining world, Anne is a perfect candidate to provide her views on the topic of our special issue related to Exploring the (Mis)Match Between Real-World Processes and Event Data.
  • Open Access Article
    Nonlinear Ensemble Deep Learning Model for Energy Consumption Prediction with Bayesian Optimization
    (International Publications, 2025) Tefera, Ejigu; Kekeba, Kula; Ravindra Babu, B.; Martínez Ballesteros, María del Mar; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    Accurate prediction of electric energy consumption is crucial for efficient load dispatching, energy utilization, and grid operation. Traditional statistical and classical machine learning methods struggle with the nonlinear nature of energy consumption data, often leading to higher prediction errors. Additionally, deep learning models using a single approach face challenges such as convergence to local minima and poor generalization. This paper proposes a nonlinear ensemble deep learning model for residential energy consumption prediction, incorporating Bayesian optimization for hyperparameter tuning. The model combines Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and 1D Convolutional Neural Networks (1D-CNN), leveraging their powerful nonlinear feature learning capabilities. A k-means clustering approach is used to preprocess and reduce variability in the data, enhancing the ensemble model's performance. The ensemble model was tested on real energy consumption data from two districts in Addis Ababa, showing significant improvements in prediction accuracy with lower MAE, RMSE, and MAPE values compared to single models and un-clustered data. The integration of clustering and Bayesian optimization further enhanced model generalizability and minimized overfitting, demonstrating the effectiveness of a nonlinear approach in capturing complex energy consumption patterns.
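    Purely to illustrate the idea of a nonlinear ensemble (a generic stacking sketch, not the paper's architecture), the snippet below blends three stand-in base forecasts with a small MLP meta-learner; the base predictions are synthetic placeholders for the LSTM, BiLSTM and 1D-CNN outputs:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # synthetic target and stand-in base-model forecasts
    y = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
    base_preds = np.column_stack([
        y + 0.20 * rng.normal(size=y.size),  # "LSTM" forecast
        y + 0.30 * rng.normal(size=y.size),  # "BiLSTM" forecast
        y + 0.25 * rng.normal(size=y.size),  # "1D-CNN" forecast
    ])

    # nonlinear combiner: a small MLP learns how to blend the base forecasts
    meta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    meta.fit(base_preds[:400], y[:400])
    ensemble = meta.predict(base_preds[400:])

    mae = np.mean(np.abs(ensemble - y[400:]))
    print(f"ensemble MAE on held-out window: {mae:.3f}")
    ```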
  • Open Access Article
    Quantum Software Engineering: Roadmap and Challenges Ahead
    (ACM, 2025-05) Murillo, Juan Manuel; Garcia-Alonso, Jose; Moguel, Enrique; Barzen, Johanna; Leymann, Frank; Ali, Shaukat; Ruiz Cortés, Antonio; Wimmer, Manuel; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    As quantum computers advance, the complexity of the software they can execute increases as well. To ensure this software is efficient, maintainable, reusable, and cost-effective—key qualities of any industry-grade software—mature software engineering practices must be applied throughout its design, development, and operation. However, the significant differences between classical and quantum software make it challenging to directly apply classical software engineering methods to quantum systems. This challenge has led to the emergence of Quantum Software Engineering (QSE) as a distinct field within the broader software engineering landscape. In this work, a group of active researchers analyze in depth the current state of QSE research. From this analysis, the key areas of QSE are identified and explored in order to determine the most relevant open challenges that should be addressed in the next years. These challenges help identify necessary breakthroughs and future research directions for advancing QSE.
  • Open Access Article
    MetaGen: A framework for metaheuristic development and hyperparameter optimization in machine and deep learning
    (Elsevier, 2025-07) Gutiérrez Avilés, David; Jiménez Navarro, Manuel Jesús; Torres, José Francisco; Martínez-Álvarez, Francisco; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    Hyperparameter optimization is a pivotal step in enhancing model performance within machine learning. Traditionally, this challenge is addressed through metaheuristics, which efficiently explore large search spaces to uncover optimal solutions. However, implementing these techniques can be complex without adequate development tools, which is the primary focus of this paper. Hence, we introduce MetaGen, a novel Python package designed to provide a comprehensive framework for developing and evaluating metaheuristic algorithms. MetaGen follows best practices in Python design, ensuring minimalistic code implementation, intuitive comprehension, and full flexibility in solution representation. The package defines two distinct user roles: Developers, responsible for algorithm implementation for hyperparameter optimization, and Solvers, who leverage pre-implemented metaheuristics to address optimization problems. Beyond algorithm implementation, MetaGen facilitates benchmarking through built-in test functions, ensuring standardized performance comparisons. It also provides automated reporting and visualization tools to analyze optimization progress and outcomes effectively. Furthermore, its modular design allows distribution and integration into existing machine learning workflows. Several illustrative use cases are presented to demonstrate its adaptability and efficacy. The package, along with code, a user manual, and supplementary materials, is available at: https://github.com/Data-Science-Big-Data-Research-Lab/MetaGen.
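    MetaGen's own API is not reproduced here; as a hedged illustration of the kind of Developer-role algorithm the package targets, the sketch below implements a plain random-search metaheuristic over a small hyperparameter space, with a toy objective standing in for a model's validation loss:

    ```python
    import random

    def random_search(space, objective, iterations=50, seed=0):
        """Minimal metaheuristic baseline: sample hyperparameter sets from
        `space` (name -> list of candidate values) and keep the best one."""
        rng = random.Random(seed)
        best_params, best_score = None, float("inf")
        for _ in range(iterations):
            params = {name: rng.choice(values) for name, values in space.items()}
            score = objective(params)
            if score < best_score:
                best_params, best_score = params, score
        return best_params, best_score

    # toy objective standing in for a validation loss (hypothetical space)
    space = {"learning_rate": [1e-3, 1e-2, 1e-1], "hidden_units": [8, 16, 32, 64]}
    objective = lambda p: abs(p["learning_rate"] - 1e-2) * 10 + abs(p["hidden_units"] - 32) / 32
    print(random_search(space, objective))
    ```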
  • Open Access Article
    Optimising secure and sustainable smart home configurations
    (Elsevier, 2025) Muñoz Heredia, Daniel; Varela Vaca, Ángel Jesús; Borrego Núñez, Diana; Gómez López, María Teresa; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    As the adoption of smart devices accelerates rapidly in smart homes worldwide, the variety of devices available on the market is also diversifying. This creates a challenge for users, who must choose devices that best meet their needs, and for system designers, who must ensure these devices integrate efficiently within a connected ecosystem. In response to this challenge, the solution presented in this work provides a metamodel that gathers the smart home features, including attributes related to security, usability, connectivity and sustainability. These features are used to create personalised configurations of smart homes that meet user requirements. This is achieved through the creation of multi-objective optimisation problems focused on improving: security, to ensure network and personal data protection; usability, to facilitate the easy management of the environment; connectivity, to maintain seamless interaction between both existing and future devices; and sustainability, which assesses the environmental impact and energy efficiency of the technological ecosystem. The implementation of the proposal is available, and a set of experiments has been developed to evaluate the proposal’s applicability using real devices; the experiments are reproducible and replicable.
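    A minimal sketch of the multi-objective view described above: keep only the configurations that are not dominated on the four objectives (security, usability, connectivity, sustainability). The candidate configurations and their scores are made up for illustration:

    ```python
    def dominates(a, b):
        """a dominates b if it is at least as good on every objective and
        strictly better on at least one (higher scores are better)."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(configs):
        """Each config is (name, (security, usability, connectivity, sustainability));
        keep only configurations not dominated by any other one."""
        return [c for c in configs
                if not any(dominates(other[1], c[1]) for other in configs if other is not c)]

    # hypothetical candidate configurations with made-up scores
    configs = [
        ("hub_A + cam_X", (0.9, 0.6, 0.8, 0.5)),
        ("hub_A + cam_Y", (0.7, 0.8, 0.8, 0.7)),
        ("hub_B + cam_X", (0.6, 0.5, 0.7, 0.4)),  # dominated by the first two
    ]
    print([name for name, _ in pareto_front(configs)])  # ['hub_A + cam_X', 'hub_A + cam_Y']
    ```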
  • Open Access Article
    Online forecasting using neighbor-based incremental learning for electricity markets
    (Springer, 2025-01-24) Melgar-García, L.; Gutiérrez Avilés, David; Rubio Escudero, Cristina; Troncoso, A.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Educación. España
    Electricity market forecasting is very useful for the different actors involved in the energy sector to plan both the supply chain and market operation. Nowadays, energy demand data come from smart meters and have to be processed in real time for more efficient demand management. In addition, electricity price data can present changes over time, such as new patterns and new trends. Therefore, real-time forecasting algorithms for both demand and prices have to adapt and adjust to online data in order to provide timely and accurate responses. This work presents a new algorithm for electricity demand and price forecasting in real time. The proposed algorithm generates a prediction model based on the k-nearest neighbors algorithm, which is incrementally updated in an online scenario, considering both changes to existing patterns and the addition of newly detected patterns to the model. Both time-frequency-based and error-threshold-based model updates have been evaluated. Results using energy demand data from 2007 to 2016 and price data for different time periods from the Spanish electricity market are reported and compared with other benchmark algorithms.
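    A hedged sketch (not the authors' algorithm) of the error-threshold update idea: a nearest-neighbour forecaster that appends a new (window, next value) pattern to its model whenever the absolute prediction error exceeds a threshold. The series and parameters below are synthetic:

    ```python
    import numpy as np

    class IncrementalKNNForecaster:
        """Forecast the next value from the k most similar historical windows;
        store a new (window, next value) pair whenever the absolute error
        exceeds a threshold (error-threshold-based update)."""

        def __init__(self, k=3, threshold=0.2):
            self.k, self.threshold = k, threshold
            self.patterns, self.targets = [], []

        def predict(self, recent):
            if not self.patterns:
                return float(recent[-1])          # naive fallback before any pattern exists
            dists = [np.linalg.norm(np.asarray(p) - np.asarray(recent)) for p in self.patterns]
            nearest = np.argsort(dists)[: self.k]
            return float(np.mean([self.targets[i] for i in nearest]))

        def update(self, recent, actual):
            if not self.patterns or abs(self.predict(recent) - actual) > self.threshold:
                self.patterns.append(list(recent))
                self.targets.append(float(actual))

    # toy online loop over a noisy periodic signal
    series = np.sin(np.linspace(0, 30, 600)) + 0.05 * np.random.default_rng(1).normal(size=600)
    model = IncrementalKNNForecaster(k=3, threshold=0.2)
    errors = []
    for t in range(24, len(series) - 1):
        window = series[t - 24: t]
        errors.append(abs(model.predict(window) - series[t]))
        model.update(window, series[t])
    print(f"mean absolute error: {np.mean(errors):.3f}, stored patterns: {len(model.patterns)}")
    ```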
  • Open Access Article
    From manual to automated: a state-of-the-art review to examine the impact of intelligent document processing in banking automation
    (Elsevier, 2025) Alonso-Rocha, J.L.; Martínez Rojas, Antonio; González Enríquez, José; Sánchez-Oliva, J.M.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    In the rapidly evolving digital era, industries increasingly harness technology to optimize operations. The banking sector, in particular, stands out as a prominent example, having integrated Artificial Intelligence (AI) to streamline processes and improve efficiency. Our study focuses on one key aspect: automating loan management, specifically through Intelligent Document Processing (IDP). While automation technologies have been widely studied, a notable gap exists in sector-specific knowledge, especially within the banking industry. This paper conducts a Systematic Literature Review (SLR), examining 48 primary studies, to analyze the state-of-the-art of this problem. This comprehensive analysis reveals how IDP reshapes banking processes, providing sector-specific insights. Our findings reveal the profound impact of automation in banking, along with 8 notable challenges that remain to be addressed. This contributes to future research and enriches our understanding of IDP’s current and potential applications in the sector.
  • Open Access Article
    FM fact label
    (Elsevier, 2025) Horcas Aguilera, José Miguel; Galindo Duarte, José Ángel; Fuentes, Lidia; Benavides Cuevas, David Felipe; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
    FM Fact Label is a tool for visualizing the characterizations of feature models based on their metadata, structural measures, and analytical metrics. Although there are various metrics available to characterize feature models, there is no standard method to visualize and identify unique properties of feature models. Unlike existing tools, FM Fact Label provides a standalone web-based platform for configurable and interactive visualization, enabling export to various formats. This contribution is significant because it supports the Universal Variability Language (UVL) and enhances the UVL ecosystem by offering a common representation of the results of existing analysis tools.
  • Open Access Article
    Exploring low-resource weather forecasting with echo state network-based architectures and satellite data
    (Elsevier, 2025) López Ortiz, E.; Jiménez, M.; Soria Morillo, Luis Miguel; Álvarez García, Juan Antonio; Vegas-Olmos, J. J.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    Cloud forecasting plays a crucial role in various fields such as agriculture, energy systems, and air travel. An accurate forecasting system can offer significant benefits by improving decision-making efficiency in these areas. This study investigates the use of Echo State Network (ESN)-based architectures for weather forecasting, focusing on cloud prediction across Central Europe using the CloudCast benchmark, which integrates data from Meteosat satellites and the European Centre for Medium-Range Weather Forecasts (ECMWF) model. Two novel techniques are included in this study, evaluated in two different phases. First, the Multi-Reservoir Weighted ESN (MWESN) architecture is proposed, featuring optimised inter-reservoir connections that enhance both the effectiveness and adaptability of the model. This model is evaluated alongside advanced ESN architectures, including Multi-Reservoir ESN and Deep ESN, among others. Second, the Error-Guided Regional Training (ERT) method is introduced to minimise the computational resources required for forecasting at the pixel level while maintaining high accuracy. Combined, MWESN and ERT demonstrate a 1.41% improvement in accuracy, effectively capturing complex spatio-temporal dynamics while significantly reducing computational demands compared to existing state-of-the-art methods. Additionally, the models are tested on low-resource devices such as Raspberry Pi units, illustrating their feasibility for real-world meteorological applications.
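    For readers unfamiliar with echo state networks, here is a minimal single-reservoir ESN in NumPy (a generic textbook-style sketch, unrelated to the MWESN architecture or the CloudCast data): a random reservoir rescaled to a chosen spectral radius, driven by a toy signal, with a ridge-regression readout trained for one-step-ahead prediction:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_reservoir(n_in, n_res, spectral_radius=0.9):
        """Random input weights plus a recurrent matrix rescaled to the
        desired spectral radius (the usual echo-state heuristic)."""
        w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        return w_in, w

    def run_reservoir(u, w_in, w):
        """Collect reservoir states for a 1-D input sequence u."""
        states = np.zeros((len(u), w.shape[0]))
        x = np.zeros(w.shape[0])
        for t, value in enumerate(u):
            x = np.tanh(w_in @ np.array([value]) + w @ x)
            states[t] = x
        return states

    # one-step-ahead prediction of a toy signal with a ridge-regression readout
    signal = np.sin(np.linspace(0, 40, 1000))
    w_in, w = make_reservoir(n_in=1, n_res=100)
    states = run_reservoir(signal[:-1], w_in, w)
    ridge = 1e-6
    w_out = np.linalg.solve(states.T @ states + ridge * np.eye(100), states.T @ signal[1:])
    pred = states @ w_out
    print(f"training MSE: {np.mean((pred - signal[1:]) ** 2):.2e}")
    ```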
  • Open Access Article
    Depex: A software for analysing and reasoning about vulnerabilities in software projects dependencies
    (Elsevier, 2025) Márquez Trujillo, Antonio Germán; Varela Vaca, Ángel Jesús; Gómez López, María Teresa; Galindo Duarte, José Ángel; Benavides Cuevas, David Felipe; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
    This paper presents Depex, a tool that allows developers to reason over the entire configuration space of the dependencies of an open-source software repository. The dependency information is extracted from the repository's requirements files and from the package managers of the dependencies, generating a graph that includes information on the security vulnerabilities affecting the dependencies. The dependency graph enables automatic reasoning through the creation of a Boolean satisfiability model based on Satisfiability Modulo Theories (SMT). Automatic reasoning enables operations such as identifying the safest dependency configuration or validating whether a particular configuration is secure. To demonstrate the impact of the proposal, it has been evaluated on more than 300 real open-source repositories from the Python Package Index (PyPI), Node Package Manager (NPM) and Maven Central (Maven), and compared with current commercial tools on the market.
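    A hedged sketch of the kind of SMT query described above, written with the z3 Python bindings (this is not Depex's model; the packages, versions, vulnerabilities and compatibility constraint are invented): pick one version per dependency such that no chosen version is known to be vulnerable:

    ```python
    from z3 import Int, Or, Solver, sat

    # hypothetical dependency data: available versions and known-vulnerable ones
    versions = {"libA": [1, 2, 3], "libB": [1, 2]}
    vulnerable = {("libA", 1), ("libB", 2)}

    solver = Solver()
    choice = {name: Int(name) for name in versions}

    for name, var in choice.items():
        solver.add(Or([var == v for v in versions[name]]))   # pick an existing version
        for v in versions[name]:
            if (name, v) in vulnerable:
                solver.add(var != v)                          # exclude vulnerable releases

    # toy compatibility constraint: libA >= 2 requires libB == 1
    solver.add(Or(choice["libA"] < 2, choice["libB"] == 1))

    if solver.check() == sat:
        model = solver.model()
        print({name: model[var].as_long() for name, var in choice.items()})
    else:
        print("no secure configuration exists")
    ```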