Articles (Lenguajes y Sistemas Informáticos)
Permanent URI for this collection: https://hdl.handle.net/11441/11392
Recent submissions
Article Urban Pollution Impact Assessment in Six Lithuanian Cities With a Focus on Road Traffic Emissions - Integrated Framework for Environmental Health Studies (Elsevier, 2025-10) Kecorius, Simonas; Madueño, Leizel; Birmili, Wolfram; Löndahl, Jakob; Plauškaitė, Kristina; Byčenkienė, Steigvilė; Lovrić, Mario; Petrić, Valentino; Carranza García, Manuel; Jiménez Navarro, Manuel Jesús; Martínez Ballesteros, María del Mar; Weiss, Magdalena; Schmid, Otmar; Cyrys, Josef; Peters, Annette; Kecorius, Gaudentas; Lenguajes y Sistemas Informáticos
An integrated framework is introduced and applied to assess the health impact of airborne pollution with greater physiological relevance, moving beyond conventional exposure metrics. Measured particle number size distribution data were integrated with regional respiratory tract deposition fractions to estimate total and alveolar deposited particle surface area concentrations. Land use regression modeling, combined with randomized commuting patterns, enabled the evaluation of city-specific alveolar surface area deposition doses, providing new insight into localized average exposure and its implications for public health. The results showed that although mean street-level air pollution in Lithuania is higher than in other European cities, urban background levels are comparable. We found that the total respiratory deposited surface area concentration is up to 18-fold higher due to coarse particles, which also determine the alveolar deposited particle surface area dose. Our findings advocate for integrated pollution assessments and region-specific policies rather than broad diesel vehicle-targeted bans.
The proposed methodology is expected to enhance traditional exposure assessments by switching to lung deposited surface area, which can be further refined by incorporating daytime activity patterns, socio-economic status, and personal health conditions.

Article Road-traffic emissions of ultrafine particles and elemental black carbon in six Northern European cities (Elsevier, 2025) Kecorius, Simonas; Madueño, Leizel; Plauškaitė, Kristina; Byčenkienė, Steigvilė; Lovrić, Mario; Petrić, Valentino; Carranza García, Manuel; Jiménez Navarro, Manuel Jesús; Martínez Ballesteros, María del Mar; Kecorius, Gaudentas; Lenguajes y Sistemas Informáticos
Urban air pollution from vehicular emissions remains a pressing public health concern, particularly in Eastern Europe, where data gaps hinder effective mitigation. This study, conducted in the summer of 2024, presents the first detailed analysis of ultrafine particle (UFP) and equivalent black carbon (eBC) emissions from road traffic across Lithuania’s six major cities: Vilnius, Kaunas, Klaipėda, Šiauliai, Panevėžys, and Alytus. We used a custom mobile laboratory to capture real-world emissions, revealing stark spatial disparities. Panevėžys and Vilnius recorded the highest eBC levels (10,400 ng/m³ and 10,200 ng/m³, respectively), driven by aging vehicle fleets and a 70 % diesel share in Panevėžys, which also recorded the highest UFP concentration (97,800 particles/cm³). Emission factors, calculated using an adapted Operational Street Pollution Model (OSPM), identified Vilnius’ light-duty vehicles as leading in particle number emissions (8.90 × 10¹⁴ particles/(km·veh)), likely due to the prevalence of gasoline direct injection engines, while Panevėžys dominated eBC emissions (150 mg/(km·veh)). Heavy-duty vehicles, including buses and trucks, exhibited emission factors up to five times higher than those of their light-duty counterparts, thereby amplifying their impact in urban areas.
These findings illuminate emission dynamics in an understudied region, providing policymakers with precise, actionable insights for targeted interventions, such as fleet upgrades or the establishment of low-emission zones. By addressing a critical knowledge gap, this study empowers the scientific community and public health advocates to devise strategies that combat vehicle-related pollution, reduce exposure to harmful pollutants, and foster healthier urban environments across Eastern Europe and beyond.

Article IDE4ICDS: A Human-Centric and Model-Driven Proposal to Improve the Digitization of Clinical Practice Guideline (Assoc. Computing Machinery, 2024-09-27) Parra-Calderón, Carlos Luis; García García, Julián Alberto; Ramos-Cueli, Juan Manuel; Alvarez-Romero, Celia; Román-Villarán, Esther; Escalona Cuaresma, María José; Lenguajes y Sistemas Informáticos
Clinical practice guidelines (CPGs) are a formalization of specific clinical knowledge that states the best evidence-based clinical practices for treating pathologies. However, CPGs are limited because they are usually expressed as text. This gives rise to a certain level of ambiguity, subjective interpretation of the actions to be performed, and variability in clinical practice by different health professionals facing the same circumstances. The inherent complexity of CPGs is also a challenge for software engineers designing, developing, and maintaining software systems and clinical decision support systems to manage and digitize them. This challenge stems from the need to evolve CPGs and to design software systems capable of accommodating their evolution. This paper proposes a model-driven, human-centric, tool-supported framework (called IDE4ICDS) for improving the digitisation of CPGs in practical environments. The framework is designed from a human-centric perspective to be used by mixed teams of clinicians and software engineers.
It was also validated with the type 2 diabetes mellitus CPG in the Andalusian Public Health System (Spain), involving 89 patients and a kappa-based analysis. The recommendations were acceptable (0.61–0.80), with a total kappa index of 0.701, leading to the conclusion that the proposal provided appropriate recommendations for each patient.

Article COTriage: Applying a Model-Driven Proposal for Improving the Development of Health Information Systems with Chatbots (Institute of Electrical and Electronics Engineers (IEEE), 2024-06-26) García García, Julián Alberto; Sánchez Gómez, Nicolás; Escalona Cuaresma, María José; Ruiz, Mercedes; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España
Today, organizations require innovative and flexible solutions to digitize and automate their processes, particularly those designed to obtain information from users. Chatbots are one of the most widely used technological options for automating such processes. This article integrates chatbot technology with health information systems (HISs) to improve the execution of healthcare processes. Specifically, it presents the COTriage framework, which proposes model-driven mechanisms for improving the design and development of process-oriented HISs with integrated chatbot-based triage. The proposal was also instantiated with the COVID-19 triage and assisted reproduction treatment processes on iMedea (a real HIS).
Finally, the proposal was discussed in terms of end-user acceptance, as well as the degree of efficiency and effectiveness achieved by the software team that applied COTriage in our case study.

Article Pragmatic random sampling of Kconfig-based systems: A unified approach (Elsevier, 2025-08) Fernandez-Amoros, David; Heradio, Ruben; Horcas Aguilera, José Miguel; Galindo Duarte, José Ángel; Benavides Cuevas, David Felipe; Fuentes, Lidia; Lenguajes y Sistemas Informáticos
The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides more than 18,000 configurable options described across almost 1,700 files in the Kconfig language. As a result, many analyses of these systems rely on sampling their configuration space (e.g., debugging compilation errors, predicting configuration performance, or finding the configuration that optimizes specific performance metrics). The Kernel and other Kconfig-based systems can be sampled pragmatically, using their built-in tool conf to obtain an approximately random sample directly from the Kconfig specification, or idealistically, generating a genuinely random sample by first translating the Kconfig files into logic formulas, then using a logic engine to compute the probability of each option value appearing in a configuration, and finally using these probabilities to generate an authentically random sample. The idealistic approach ensures that the sample is representative of the population, but it poses challenging problems that remain unsolved (fundamentally, how to obtain a valid Boolean translation that covers the whole Kconfig language, and how to compute the option value probabilities for very large formulas).
This paper introduces a new version of conf called randconfig, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig has been tested on ten versions of the Linux Kernel and twenty additional Kconfig systems. Its compatibility significantly improves on the current landscape, where some systems use a customized conf variant that is maintained independently, while others do not support sampling at all. randconfig not only offers universal sampling for all Kconfig systems but is also easier to maintain as a single tool rather than an unorganized collection of conf variants.

Article A Conceptual Framework for Smart Governance Systems Implementation (IGI Global, 2025) Muñoz-Hermoso, Salvador; Domínguez Mayo, Francisco José; Cerrillo-I-Martínez, Agustí; Benavides Cuevas, David Felipe; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Universidad de Sevilla
Knowledge-based decision-making that is open to citizens remains rare in government. One reason is that no reference frameworks have been available to implement smart governance systems across the full public policy cycle, so most existing tools are not knowledge-based. There is thus a risk that decisions are ineffective or misaligned with the diverse interests of civil society. Moreover, existing proposals do not cover most of the key features needed in smart governance and do not provide sufficient elements to facilitate its implementation. Based on the existing literature and tools, as well as on a survey of local government practitioners, the authors propose a conceptual framework for implementing smart governance systems that manages knowledge internal and external to the organization, as well as that provided by stakeholders, thereby improving consensus and group decision-making.
To this end, the framework considers available data and information technologies, and its components make it easier for institutions and information technology providers to develop solutions with a knowledge-based collaborative governance model.

Article Transformer and Adaptive Threshold Sliding Window for Improving Violence Detection in Videos (MDPI, 2024-08-16) Rendón-Segador, F.J.; Álvarez García, Juan Antonio; Soria Morillo, Luis Miguel; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España
This paper presents a comprehensive approach to detecting violent events in videos by combining CrimeNet, a Vision Transformer (ViT) model with structured neural learning and adversarial regularization, with an adaptive threshold sliding window model based on the Transformer architecture. CrimeNet demonstrates exceptional performance on all datasets (XD-Violence, UCF-Crime, NTU-CCTV Fights, UBI-Fights, Real Life Violence Situations, MediEval, RWF-2000, Hockey Fights, Violent Flows, Surveillance Camera Fights, and Movies Fight), achieving high AUC ROC and AUC PR values (up to 99% and 100%, respectively). However, CrimeNet generalizes poorly in cross-dataset experiments, with a 20–30% drop in performance; for instance, training on UCF-Crime and testing on XD-Violence yielded an AUC ROC of 70.20%. The adaptive threshold sliding window model effectively addresses this by automatically adjusting the violence detection threshold, yielding a substantial improvement in detection accuracy. By applying the sliding window model as post-processing to CrimeNet results, we improved detection accuracy by 10% to 15% in cross-dataset experiments.
Future lines of research include improving generalization, addressing data imbalance, exploring multimodal representations, testing in real-world applications, and extending the approach to complex human interactions.

Article Impact of face swapping and data augmentation on sign language recognition (Springer Nature, 2024-07-24) Perea Trigo, Marina; López Ortiz, Enrique José; Soria Morillo, Luis Miguel; Álvarez García, Juan Antonio; Vegas-Olmos, J. J.; Lenguajes y Sistemas Informáticos; Ciencias de la Computación e Inteligencia Artificial; Universidad de Sevilla/CBUA; Ministerio de Ciencia, Innovación y Universidades (MICIU). España
This study addresses the challenge of improving communication between the deaf and hearing communities by exploring different sign language recognition (SLR) techniques. Because of privacy issues and the need for validation by interpreters, creating large-scale sign language (SL) datasets is difficult. The authors address this by presenting CALSE-1000, a new Spanish isolated sign language recognition dataset consisting of 5000 videos representing 1000 glosses, with various signers and scenarios. The study also proposes using computer vision techniques such as face swapping and affine transformations to augment the SL dataset and improve the accuracy of an I3D model trained on it. The results show that including these augmentations during training improves top-1 accuracy by up to 11.7 points, top-5 by up to 8.8 points, and top-10 by up to 9 points. This has great potential to improve the state of the art for other datasets and models.
Furthermore, the analysis confirms the importance of facial expressions in the model by testing with a facial-omission dataset, and shows how face swapping can be used to include new anonymous signers without the costly and time-consuming process of recording.

Article Energy-efficient edge and cloud image classification with multi-reservoir echo state network and data processing units (MDPI, 2024-06-04) López Ortiz, Enrique José; Perea Trigo, Marina; Soria Morillo, Luis Miguel; Álvarez García, Juan Antonio; Vegas-Olmos, J. J.; Lenguajes y Sistemas Informáticos; Ciencias de la Computación e Inteligencia Artificial; Ministerio de Ciencia e Innovación (MICIN). España
In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures to image classification and weather forecasting tasks, using benchmarks such as the MNIST, FashionMNIST, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. By exploiting the dynamic adaptability of MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models.
Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms.

Article Sign Language Anonymization: Face Swapping Versus Avatars (MDPI, 2025-06-09) Perea-Trigo, Marina; Vázquez-Enríquez, Manuel; Benjumea-Bellot, Jose C.; Alba-Castro, Jose L.; Álvarez García, Juan Antonio; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España
The visual nature of Sign Language datasets raises privacy concerns that hinder data sharing, which is essential for advancing deep learning (DL) models in Sign Language recognition and translation. This study evaluated two anonymization techniques, realistic avatar synthesis and face swapping (FS), designed to anonymize the identities of signers while preserving the semantic integrity of signed content. A novel metric, Identity Anonymization with Expressivity Preservation (IAEP), is introduced to assess the balance between effective anonymization and the preservation of the facial expressivity crucial for Sign Language communication. In addition, the quality evaluation included the LPIPS and FID metrics, which measure perceptual similarity and visual quality. A survey with deaf participants further complemented the analysis, providing valuable insight into the practical usability and comprehension of anonymized videos. The results show that while face swapping achieved acceptable anonymization and preserved semantic clarity, avatar-based anonymization struggled with comprehension.
These findings highlight the need for further research on securing privacy while preserving Sign Language understandability, both for dataset accessibility and for the anonymous participation of deaf people in digital content.

Article Dissecting OLMS membrane algorithms: understanding the role of communication and evolutionary operators in optimization strategies (Springer Nature, 2025-06-24) Andreu Guzmán, José A.; Orellana Martín, David; Valencia Cabrera, Luis; Ciencias de la Computación e Inteligencia Artificial; Lenguajes y Sistemas Informáticos
Metaheuristics are general-purpose optimization techniques designed to explore the solution space of complex problems, balancing exploration and exploitation while trying to escape local optima. Some techniques are inspired by natural processes, such as simulated annealing, particle swarm optimization, and genetic algorithms. Membrane computing, a computational paradigm based on the behavior and structure of living cells, has proved capable of solving computationally hard problems efficiently. At the intersection of both fields, the framework of membrane algorithms embeds metaheuristics as a way to evolve objects in a membrane system. This work presents a thorough study of this framework, deeply analyzing the mutual influence of a variety of membrane and genetic algorithm strategies and enhancing their synergy in the search for optimal solutions. Specifically, this paper assesses the impact of aspects such as communication rules, genetic operators, and the number of membranes, among others. All strategies are compared on well-known benchmark problems: the Traveling Salesman Problem and the Graph Coloring Problem.
The results show that the best solutions depend on the specific problem addressed and the genetic algorithm used but, overall, a distributive send-in strategy is ideal for specializing membranes, allowing some to focus on exploring the state space and others on exploiting good solutions.

Article A dataset on vulnerabilities affecting dependencies in software package managers (Elsevier, 2025-07) Márquez Trujillo, Antonio Germán; Varela Vaca, Ángel Jesús; Gómez López, María Teresa; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICIU). España
The increasing reliance on third-party dependencies in software development introduces significant security risk challenges. This study presents a dataset that maps the vulnerabilities affecting dependencies in four major package managers: Node Package Manager (NPM), Python Package Index (PyPI), Cargo Crates, and RubyGems. The dataset comprises information on 4,437,679 unique packages and 60,950,846 package versions, with vulnerability data sourced from Open Source Vulnerabilities (OSV). It includes 270,430 known vulnerabilities linked to package versions, allowing a detailed analysis of security risks in software supply chains. Our methodology involved extracting dependency and version data from official package manager sources, correlating them with vulnerability reports, and storing the results in structured formats, including CSV and database dumps. The resulting dataset enables automated monitoring of vulnerable dependencies, facilitating analysis, security assessments, and the definition of mitigation strategies. This work identifies that 0.42 % of PyPI, 7.5 % of RubyGems, 3.91 % of Cargo, and 6.93 % of NPM versions rely on at least one vulnerable dependency. Furthermore, PyPI has 329 latest versions affected, RubyGems 919, Cargo 53, and NPM 14,858.
This dataset provides valuable information for researchers, developers, and security professionals looking to improve software supply chain security. It provides a foundation for developing tools aimed at security and data analytics, enabling early vulnerability detection and improving mitigation controls for dependency-related security risks, thus promoting more secure software ecosystems. The dataset can be extended by incorporating additional packages, introducing new features, and ensuring continuous updates.

Article BinRec: addressing data sparsity and cold-start challenges in recommender systems with biclustering (Springer Science and Business Media LLC, 2025-07-01) Rodríguez-Baena, Domingo; Gómez-Vela, Francisco A.; López Fernández, Aurelio; García-Torres, Miguel; Divina, Federico; Lenguajes y Sistemas Informáticos; Universidad Pablo de Olavide
Recommender systems help users make decisions in different fields, such as what to purchase or which movies to watch. The user-based collaborative filtering (UBCF) approach is one of the most commonly used techniques for developing these software tools. It is based on the idea that users who have shared similar tastes in the past will almost certainly share similar tastes in the future. As a result, determining the users nearest to the one for whom recommendations are sought (the active user) is critical. However, the massive growth of online commercial data has made this task especially difficult. Consequently, biclustering techniques have been used in recent years to perform a local search for the nearest users within subgroups of users with similar rating behaviour under a subgroup of items (biclusters), rather than searching the entire rating database. Nevertheless, due to the large size of these databases, the number of biclusters generated can be extremely high, making their processing very complex. In this paper, we propose BinRec, a novel UBCF approach based on biclustering.
BinRec simplifies the search for neighbouring users by determining which ones are nearest to the active user based on the number of biclusters they share. Experimental results show that BinRec outperforms other state-of-the-art recommender systems, with a remarkable improvement in environments with high data sparsity. The flexibility and scalability of the method position it as an efficient alternative for common collaborative filtering problems such as sparsity and cold-start.

Article Biclustering in bioinformatics using big data and High Performance Computing applications: challenges and perspectives, a review (Springer, 2025-07) López Fernández, Aurelio; Gómez-Vela, Francisco A.; Rodríguez-Baena, Domingo S.; Delgado-Chaves, Fernando M.; Gonzalez-Dominguez, Jorge; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
Biclustering is a powerful machine learning technique that simultaneously groups rows and columns in matrix-based datasets. Applied to gene expression data in bioinformatics, its use has expanded alongside the rapid growth of high-throughput sequencing technologies, leading to massive and complex biological datasets. This review examines how biclustering methods and their validation strategies are evolving to meet the demands of High Performance Computing (HPC) and Big Data environments. We present a structured classification of existing approaches based on the computational paradigms they employ, including MPI/OpenMP, Apache Hadoop/Spark, and GPU/CUDA. By synthesising these developments, we highlight current trends and outline key research challenges. The knowledge gathered in this work may support researchers in adapting and scaling biclustering algorithms to analyse large-scale biomedical data more efficiently.
Our contribution is intended to bridge the gap between algorithmic innovation and computational scalability in the context of bioinformatics and data-intensive applications.

Article Class integration of ChatGPT and learning analytics for higher education (Wiley, 2024) Civit, Miguel; Escalona Cuaresma, María José; Cuadrado Mendez, Francisco José; Reyes-de-Cozar, Salvador; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
Background: Active learning with AI tutoring in higher education tackles dropout rates. Objectives: To investigate the teaching-learning methodologies preferred by students. AHP is used to evaluate a ChatGPT-based student learning methodology, which is compared with another active learning methodology and a traditional methodology. A study with learning analytics evaluates the alternatives and helps students select the best strategies according to their preferences. Methods: Comparative study of three learning methodologies in a counterbalanced single-group design with 33 university students, following a pre-test/post-test approach using AHP and SAM, with HRV and GSR used to estimate emotional states. Findings: Criteria related to in-class experiences were valued higher than test-related criteria. ChatGPT integration was well regarded compared with well-established methodologies. Students' emotional self-assessments correlated with the physiological measures, validating the learning analytics used. Conclusions: The proposed AI-tutoring classroom integration model is effective at increasing engagement and avoiding false information. AHP combined with physiological measurement allows students to determine their preferred learning methodologies, avoiding biases and acknowledging minority groups.

Article An ontological knowledge-based method for handling feature model defects due to dead feature (Elsevier, 2024-07) Bhushan, Megha; Galindo Duarte, José Ángel; Negi, A.; Samant, P.; Lenguajes y Sistemas Informáticos
The specifications of a certain domain are addressed by a portfolio of software products, known as a Software Product Line (SPL).
A Feature Model (FM) supports domain engineering by modeling domain knowledge along with the variability of the SPL. The quality of the FM is one of the significant factors in a successful SPL and in attaining high-quality software products. However, the benefits of an SPL can be reduced by defects in the FM, and the Dead Feature (DF) is one such defect. Several approaches exist in the literature to detect defects due to DFs in FMs, but only a few can handle their sources and solutions, and these are cumbersome and difficult for humans to understand. This paper describes an ontological knowledge-based method for handling defects due to DFs in FMs. It specifies the FM as an ontology-based knowledge representation. Rules based on first-order logic are created and implemented in Prolog to detect defects due to DFs, along with their sources, and to suggest solutions for resolving them. A case study of a product line available in the SPLOT repository illustrates the proposed work. Experiments were performed with real-world FMs of varied sizes from SPLOT and FMs created with the FeatureIDE tool. The results demonstrate the efficiency, scalability (up to a model with 32,000 features), and accuracy of the presented method. The reuse of DF-free knowledge therefore enables deriving defect-free products from the SPL and eventually enhances its quality.

Article Advances in time series forecasting: innovative methods and applications (Amer Inst, 2021-08-14) Torres, F.J.; Martínez Ballesteros, María del Mar; Troncoso, A.; Martínez Álvarez, F.; Lenguajes y Sistemas Informáticos
Time series forecasting plays a critical role in various domains, including finance, economics, environmental science, and healthcare. It has undergone significant evolution with the increasing availability of data and advancements in machine learning and statistical methods.
This special issue aimed to bring together the latest advances, innovations, and research in the field of time series forecasting. A significant theme in this collection is the application of advanced ensemble learning techniques to improve forecasting accuracy.

Conference paper A Methodological Approach to Model-Driven Software Development for Quality Assurance in Metaverse Environments (Ceur-Ws, 2024) Enamorado Díaz, Elena; Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación. España
The metaverse has potential applications in many fields, for example, health care. Because of the particularities of metaverse technology (immersive access through virtual and augmented reality, the combination of people with digital twins, etc.), it is necessary to define software development methodologies that guarantee quality during the development of applications in this context. The thesis project presented in this article aims to identify and analyze existing proposals in the scientific literature to establish the state of the art, and then to propose a methodological framework for ensuring quality in the development of metaverse applications. Finally, it proposes using the Model-Driven Engineering (MDE) paradigm and validating the proposal with a real application case in the healthcare field.

Article UVL: Feature modelling with the Universal Variability Language (Elsevier, 2025-01) Benavides Cuevas, David Felipe; Sundermann, Chico; Feichtinger, Kevin; Galindo Duarte, José Ángel; Rabiser, Rick; Thüm, Thomas; Lenguajes y Sistemas Informáticos
Feature modelling is a cornerstone of software product line engineering, providing a means to represent software variability through features and their relationships. Since its inception in 1990, feature modelling has evolved through various extensions, and after three decades of development, there is a growing consensus on the need for a standardised feature modelling language.
Despite multiple endeavours to standardise variability modelling and the creation of various textual languages, researchers and practitioners continue to use their own approaches, impeding effective model sharing. In 2018, a group of researchers launched a collaborative initiative to develop a novel textual language for representing feature models. This paper introduces the outcome of this effort: the Universal Variability Language (UVL), which is designed to be human-readable and serves as a pivot language for diverse software engineering tools. The development of UVL drew upon community feedback and leveraged the established literature in the field of variability modelling. The language is structured into three levels (Boolean, Arithmetic, and Type) and allows for language extensions that introduce additional constructs to enhance its expressiveness. UVL is integrated into various existing software tools, such as FeatureIDE and flamapy, and is maintained by a consortium of institutions. All tools that support the language are released as open source, complemented by dedicated parser implementations for Python and Java. Beyond academia, UVL has been adopted by a range of institutions and companies. It is envisaged that UVL will become the language of choice for a multitude of purposes, including knowledge sharing, educational instruction, and tool integration and interoperability.
We envision UVL as a pivotal solution, addressing the limitations of prior attempts and fostering collaboration and innovation in the domain of software product line engineering.

Article Towards high-quality informatics K-12 education in Europe: key insights from the literature (Springer Open, 2025-02-05) Sampson, Demetrios; Kampylis, Panagiotis; Moreno León, Jesús; Bocconi, Stefania; Lenguajes y Sistemas Informáticos
This paper explores the evolving landscape of informatics education in European primary and secondary schools, analysing academic and grey literature to define the state of play and the open questions related to ‘high-quality informatics education’. It underlines the strategic importance of promoting high-quality informatics education to prepare students for life and work in the digital era, contributing to the social and economic resilience of European societies and economies. Drawing on a review of over 180 recent academic publications, policy documents, and grey literature, it provides an overview of how informatics education is being implemented across Europe and beyond, highlighting recent curricular developments, pedagogical practices, and policy initiatives. The paper also identifies and analyses key open issues related to high-quality informatics education, organised into four clusters: student-related (e.g., equity and inclusion), teacher-related (e.g., professional development, shortage of qualified teachers), school-related (e.g., the need for a whole-school approach), and curriculum- and resource-related (e.g., competing curriculum priorities, quality of teaching and learning materials). Finally, the paper offers recommendations for policymakers, researchers, and practitioners (school leaders and educators) related to the key open issues of high-quality K-12 informatics education.
Overall, the paper contributes to the discussion on high-quality K-12 informatics education in Europe, towards identifying and addressing the major challenges to equitable access to quality informatics education for all European K-12 students.