Research
Permanent URI for this community: https://hdl.handle.net/11441/10690
This research-focused community collects articles, book chapters, books, presentations and research source data.
Browse
Browsing Research by award "Premio Mensual Publicación Científica Destacada de la US. Escuela Técnica Superior de Ingeniería Informática"
Showing 1 - 20 of 37
Article: A comparative study of the inter-observer variability on Gleason grading against Deep Learning-based approaches for prostate cancer (Elsevier, 2023-06)
Marrón-Esquivel, José M.; Durán López, Lourdes; Linares Barranco, Alejandro; Domínguez Morales, Juan Pedro; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Andalusian Regional Project, Spain (with FEDER support) DAFNE US-1381619; MECIN AEI/10.13039/501100011033 MINDROB PID2019-105556GB-C33; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores

Abstract: Background: Among all the cancers known today, prostate cancer is one of the most commonly diagnosed in men. With modern advances in medicine, its mortality has been considerably reduced. However, it is still a leading type of cancer in terms of deaths. The diagnosis of prostate cancer is mainly conducted by biopsy test. From this test, Whole Slide Images are obtained, from which pathologists diagnose the cancer according to the Gleason scale. Within this scale from 1 to 5, grade 3 and above is considered malignant tissue. Several studies have shown an inter-observer discrepancy between pathologists in assigning the value of the Gleason scale. Due to the recent advances in artificial intelligence, its application to the computational pathology field with the aim of supporting and providing a second opinion to the professional is of great interest. Method: In this work, the inter-observer variability of a local dataset of 80 whole-slide images annotated by a team of 5 pathologists from the same group was analyzed at both area and label level. Four approaches were followed to train six different Convolutional Neural Network architectures, which were evaluated on the same data set on which the inter-observer variability was analyzed. Results: An inter-observer variability of κ = 0.6946 was obtained, with a 46% discrepancy in terms of area size of the annotations performed by the pathologists.
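Inter-observer agreement of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch (the grade lists are hypothetical, not the paper's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical Gleason grades assigned by two pathologists to 8 regions
rater_1 = [3, 3, 4, 5, 3, 4, 4, 3]
rater_2 = [3, 4, 4, 5, 3, 4, 3, 3]
print(round(cohens_kappa(rater_1, rater_2), 3))  # -> 0.579
```

On the usual interpretation scales, a value such as the study's 0.6946 indicates substantial but far from perfect agreement.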
The best trained models achieved κ = 0.826 ± 0.014 on the test set when trained with data from the same source. Conclusions: The obtained results show that deep learning-based automatic diagnosis systems could help reduce the widely-known inter-observer variability that is present among pathologists and support them in their decision, serving as a second opinion or as a triage tool for medical centers.

Article: A machine learning-based methodology for short-term kinetic energy forecasting with real-time application: Nordic Power System case (Elsevier, 2023-12)
Riquelme Domínguez, José Miguel; Carranza García, Manuel; Lara Benítez, Pedro; González Longatt, Francisco; Universidad de Sevilla. Departamento de Ingeniería Eléctrica; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. TIC-134: Sistemas Informáticos

Abstract: The progressive substitution of conventional synchronous generation with renewable-based generation imposes a series of challenges in many aspects of modern power systems, among which are the issues related to low rotational inertia. Rotational inertia and the kinetic energy stored in the rotating masses of the power system play a fundamental role in its operation, as they represent, to some extent, the ability of the system to withstand imbalances between generation and demand. Therefore, transmission system operators (TSOs) need tools to forecast the inertia or the kinetic energy available in the system in the very short term (from minutes to hours) in order to take appropriate actions if the values fall below those that ensure secure operation. This paper proposes a methodology based on machine learning (ML) techniques for short-term forecasting of the kinetic energy available in power systems; it focuses on the length of the moving window, which allows for obtaining a balance between the historical information needed and the horizon of forecasting.
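The moving-window trade-off described above can be illustrated with the simplest forecaster in the paper's comparison: a linear regression fitted only on the last few observations and extrapolated forward. A self-contained sketch (window size and the series are illustrative, not NPS data):

```python
def window_forecast(series, window, horizon):
    """Ordinary least-squares line through the last `window` points,
    extrapolated `horizon` steps ahead. A short window reacts quickly
    to recent trends; a long one smooths noise but lags behind."""
    ys = series[-window:]
    xs = range(window)
    mx = sum(xs) / window
    my = sum(ys) / window
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept + slope * (window - 1 + horizon)

# Illustrative kinetic-energy readings (GWs)
energy = [230, 232, 231, 234, 236, 235, 238, 240]
print(window_forecast(energy, window=4, horizon=1))  # -> 241.0
```

Tuning `window` against each forecasting horizon is exactly the balance between historical information and look-ahead that the methodology addresses.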
The proposed methodology aims to be as flexible as possible so that it can be applied to any power system, regardless of the data available and the software used. To illustrate the proposed methodology, the time series of the kinetic energy recorded in the Nordic Power System (NPS) has been used as a case study. The results show that Linear Regression (LR) is the most suitable method for a time horizon of one hour due to its high accuracy-to-simplicity ratio, while Long Short-Term Memory (LSTM) is the most accurate for a forecasting horizon of four hours. Experimental assessment has been carried out using a Typhoon HIL-404 simulator, verifying that both algorithms are suitable for real-time simulation.

Article: A Mashup-based Framework for Business Process Compliance Checking (IEEE Computer Society, 2020)
Cabanillas Macías, Cristina; Resinas Arias de Reyna, Manuel; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Universidad de Sevilla. TIC-205: Ingeniería del Software Aplicada

Abstract: Business process compliance ensures that the business processes of an organisation are designed and executed according to the rules that enforce the compliance controls that govern the company. We faced the challenge of building a Business Process Compliance Management System (BPCMS) for a process-aware organisation that had to provide support for several needs that, despite having been identified in the literature, were only partially satisfied by existing approaches. The variability in the types of rules and their interpretation generally restricts the existing support for compliance checking to specific types of rules (e.g., rules affecting the control flow of the process), a specific phase of the business process management (BPM) lifecycle (e.g., design time or run time), or certain information systems (ISs) for data retrieval (e.g., process event logs). Motivated by this, we designed a conceptual framework for design-time and run-time compliance checking that relies on the use of mashups for rule specification and checking. It presents the following advantages: (i) an open-ended set of rule types can be specified by designing and connecting mashup components; (ii) (parts of) the definitions of the rules can be reused as needed; and (iii) the mashup-based compliance checking (MCC) system can be integrated with the ISs of the organisation, enabling the verification of actual facts on actions performed during the execution of a process (e.g., the existence of a specific document in a concrete location). Design-time and run-time implementations of the framework were conducted and tested in a real setting.

Article: A new P-Lingua toolkit for agile development in membrane computing (Elsevier, 2022)
Pérez Hurtado de Mendoza, Ignacio; Orellana Martín, David; Martínez del Amor, Miguel Ángel; Valencia Cabrera, Luis; Riscos Núñez, Agustín; Universidad de Sevilla. Departamento de Ciencias de la Computación e Inteligencia Artificial; Agencia Estatal de Investigación. España; Universidad de Sevilla. TIC193: Computación Natural

Abstract: Membrane computing is a massively parallel and non-deterministic bioinspired computing paradigm whose models are called P systems. Validating and testing such models is a challenge which is being overcome by developing simulators. Regardless of their heterogeneity, such simulators must read and interpret the models to be simulated.
To this end, P-Lingua is a high-level P system definition language which has been widely used in the last decade. The P-Lingua ecosystem includes not only the language, but also libraries and software tools for parsing and simulating membrane computing models. Each version of P-Lingua has supported new types or variants of P systems. This leads to a shortcoming: only a predefined list of variants can be used, thus making it difficult for researchers to study custom ones. Moreover, derivation modes cannot be user-defined, i.e., the way in which P system computations should be generated is determined by the simulation algorithm in the source code. The main contribution of this paper is a completely new design of the P-Lingua language, called P-Lingua 5, in which the user can define custom variants and derivation modes, among other improvements such as procedural programming and simulation directives. It is worth mentioning that it is backward-compatible with previous versions of the language. A completely new set of command-line tools is provided for parsing and simulating P-Lingua 5 files. Finally, several examples are included in this paper covering the most common P system types.

Article: A Survey of Vectorization Methods in Topological Data Analysis (IEEE Computer Society, 2023-12)
Ali, Dashti; Asaad, Aras; Jiménez Rodríguez, María José; Nanda, Vidit; Paluzo Hidalgo, Eduardo; Soriano Trigueros, Manuel; Universidad de Sevilla. Departamento de Matemática Aplicada I (ETSII)

Abstract: Attempts to incorporate topological information in supervised learning tasks have resulted in the creation of several techniques for vectorizing persistent homology barcodes. In this paper, we study thirteen such methods. Besides describing an organizational framework for these methods, we comprehensively benchmark them against three well-known classification tasks.
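As a concrete example of such a vectorization, a persistence barcode can be mapped to a fixed-length vector of elementary statistics of its bar lifetimes. A minimal sketch (the barcode values are invented):

```python
from statistics import mean, pstdev

def vectorize_barcode(barcode):
    """Map a persistence barcode, given as (birth, death) pairs, to a
    fixed-length vector of elementary lifetime statistics."""
    lifetimes = [death - birth for birth, death in barcode]
    return [len(lifetimes), sum(lifetimes),
            mean(lifetimes), pstdev(lifetimes), max(lifetimes)]

# Hypothetical degree-1 barcode of a point cloud
barcode = [(0.10, 0.90), (0.20, 0.40), (0.30, 0.35)]
vector = vectorize_barcode(barcode)
print(vector)
```

A vector like this can be fed directly to any off-the-shelf classifier, which is part of why simple summary statistics make a strong baseline.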
Surprisingly, we discover that the best-performing method is a simple vectorization, which consists only of a few elementary summary statistics. Finally, we provide a convenient web application which has been designed to facilitate exploration and experimentation with various vectorization methods.

Article: A systematic comparison of deep learning methods for Gleason grading and scoring (Elsevier, 2024-07)
Domínguez Morales, Juan Pedro; Durán López, Lourdes; Marini, Niccolo; Vicente Díaz, Saturnino; Linares Barranco, Alejandro; Atzori, Manfredo; Muller, Henning; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; European Union (UE). H2020; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores

Abstract: Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. The recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison on nine datasets with state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.

Article: A systematic comparison of different machine learning models for the spatial estimation of air pollution (Springer, 2023)
Cerezuela Escudero, Elena; Montes Sánchez, Juan Manuel; Domínguez Morales, Juan Pedro; Durán López, Lourdes; Jiménez Moreno, Gabriel; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores

Abstract: Air pollutants harm human health and the environment. Nowadays, deploying an air pollution monitoring network in many urban areas could provide real-time air quality assessment. However, these networks are usually sparsely distributed, and the sensor calibration problems that may appear over time lead to missing and wrong measurements. There is an increasing interest in developing air quality modelling methods to minimize measurement errors, predict spatial and temporal air quality, and support more spatially-resolved health effect analysis. This research aims to evaluate the ability of three feed-forward neural network architectures for the spatial prediction of air pollutant concentrations using the measurements of an air quality monitoring network. In addition to these architectures, Support Vector Machines and geostatistical methods (Inverse Distance Weighting and Ordinary Kriging) were also implemented to compare the performance of the neural network models. The evaluation of the methods was performed using the historical values of seven air pollutants (nitrogen monoxide, nitrogen dioxide, sulphur dioxide, carbon monoxide, ozone, and particulate matter with size less than or equal to 2.5 µm and to 10 µm) from an urban air quality monitoring network located in the metropolitan area of Madrid (Spain).
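Of the geostatistical baselines mentioned, Inverse Distance Weighting is the simplest: the estimate at an unmonitored point is a distance-weighted average of the station readings. A minimal sketch (station coordinates and readings are invented):

```python
import math

def idw(stations, target, power=2):
    """Inverse Distance Weighting: estimate the value at `target`
    from (x, y, value) monitoring stations."""
    num = den = 0.0
    for x, y, value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value  # target coincides with a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Invented NO2 readings (ug/m3) at three stations on a unit grid
stations = [(0.0, 0.0, 40.0), (1.0, 0.0, 60.0), (0.0, 1.0, 50.0)]
print(round(idw(stations, (0.25, 0.25)), 2))  # -> 44.29
```

The `power` parameter controls how quickly a station's influence decays with distance; higher powers make the surface more local.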
To assess and compare the predictive ability of the models, three estimation accuracy indicators were calculated: the Root Mean Squared Error, the Mean Absolute Error, and the coefficient of determination. FFNN-based models are superior to geostatistical methods and slightly better than Support Vector Machines at fitting the spatial correlation of air pollutant measurements.

Article: A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends (Elsevier, 2022-12)
Civit, Miguel; Civit Masot, Javier; Cuadrado, Francisco; Escalona Cuaresma, María José; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (Spanish Government) PID2019-105455GB-C31; Consejería de Economía y Conocimiento (Junta de Andalucía) US-1251532; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores; Universidad de Sevilla. TIC021: Engineering and Science for Software Systems

Abstract: Currently available reviews in the area of artificial intelligence-based music generation do not cover a wide range of publications and are usually centered around comparing very specific topics between a very limited range of solutions. The best surveys available in the field are the bibliography sections of some papers and books, which lack a systematic approach and limit their scope to handpicked examples. In this work, we analyze the scope and trends of the research on artificial intelligence-based music generation by performing a systematic review of the available publications in the field using the PRISMA methodology. Furthermore, we discuss the possible implementations and accessibility of a set of currently available AI solutions, as aids to musical composition. Our research shows how publications are distributed globally according to many characteristics, which provides a clear picture of the situation of this technology. Through our research it becomes clear that the interest of both musicians and computer scientists in AI-based automatic music generation has increased significantly in the last few years, with an increasing participation of major companies in the field, whose works we analyze. We discuss several generation architectures, both from a technical and a musical point of view, and we highlight various areas where further research is needed.

Article: A systematic review of capability and maturity innovation assessment models: Opportunities and challenges (Elsevier, 2023)
Giménez Medina, Manuel; González Enríquez, José; Domínguez Mayo, Francisco José; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Universidad de Sevilla. TIC-021: Engineering and Science for Software Systems; Universidad de Sevilla. TIC-134: Sistemas Informáticos

Abstract: Public funding, being the primary source for innovation, imposes restrictions caused by a lack of trust between the roles of public funders and organisations in the innovation process. Capability and maturity innovation assessment models can improve the process by combining both roles to create an agile and trusting environment. This paper aims to provide a current description of the state of the art on capability and maturity innovation assessment models in the context of Information and Communication Technologies. To this end, a Systematic Mapping Study was carried out considering high-quality published research from four relevant digital libraries since 2000. The 78 primary studies analysed show several gaps and challenges. In particular, a common ontology has not been achieved, and Innovation Management Systems are scarcely considered.
Concepts such as open innovation have not been correctly applied to incorporate all Quadruple Helix stakeholders, especially the government and its role as a public funder. This implies that no studies explore a standard agile public–private maturity model based on capabilities, since the public funders' restrictions have not been considered. Furthermore, although some concepts of innovation capabilities have evolved, none of the studies analysed offers a comprehensive coverage of capabilities. As potential future lines of research, this paper proposes 11 challenges based on the 5 shortcomings found in the literature.

Article: An Event-Based Digital Time Difference Encoder Model Implementation for Neuromorphic Systems (IEEE Computer Society, 2022)
Gutiérrez Galán, Daniel; Schoepe, Thorben; Domínguez Morales, Juan Pedro; Jiménez Fernández, Ángel Francisco; Chicca, Elisabetta; Linares Barranco, Alejandro; Universidad de Sevilla. Departamento de Arquitectura y Tecnología de Computadores; Ministerio de Economía y Competitividad (MINECO). España; Agencia Estatal de Investigación. España; Universidad de Sevilla. TEP108: Robótica y Tecnología de Computadores

Abstract: Neuromorphic systems are a viable alternative to conventional systems for real-time tasks with constrained resources. Their low power consumption, compact hardware realization, and low-latency response characteristics are the key ingredients of such systems. Furthermore, the event-based signal processing approach can be exploited for reducing the computational load and avoiding data loss, due to its inherently sparse representation of sensed data and adaptive sampling time. In event-based systems, the information is commonly coded by the number of spikes within a specific temporal window. However, the temporal information of event-based signals can be difficult to extract when using rate coding.
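The time difference encoder presented in this entry maps the interval between two consecutive input events to a burst of output events, with shorter intervals producing larger bursts. A toy software sketch of that relationship only (the exponential form, gain and time constant are illustrative assumptions, not the paper's FPGA design):

```python
import math

def tde_burst(dt_ms, gain=8.0, tau_ms=10.0):
    """Toy time difference encoder: the number of output events decays
    exponentially with the time difference between the two input events."""
    return int(round(gain * math.exp(-dt_ms / tau_ms)))

# Shorter inter-event intervals yield denser output bursts
for dt in (1.0, 5.0, 20.0):
    print(dt, tde_burst(dt))  # 7, 5 and 1 events, respectively
```

In the hardware model the inter-event timing within the burst also carries information; this sketch keeps only the burst size.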
In this work, we present a novel digital implementation of the time difference encoder (TDE) model for temporal encoding on event-based signals, which translates the time difference between two consecutive input events into a burst of output events. The number of output events, along with the time between them, encodes the temporal information. The proposed model has been implemented as a digital circuit with a configurable time constant, allowing it to be used in a wide range of sensing tasks that require the encoding of the time difference between events, such as optical flow-based obstacle avoidance, sound source localization, and gas source localization. This proposed bioinspired model offers an alternative to the Jeffress model for interaural time difference estimation, which is validated in this work with a sound source lateralization proof-of-concept system. The model was simulated and implemented on a field-programmable gate array (FPGA), requiring 122 slice registers of hardware resources and less than 1 mW of power consumption.

Article: ARTE: Automated Generation of Realistic Test Inputs for Web APIs (IEEE Computer Society, 2022)
Alonso Valenzuela, Juan Carlos; Martín López, Alberto; Segura Rueda, Sergio; García Rodríguez, José María; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Universidad de Sevilla. TIC205: Ingeniería del Software Aplicada

Abstract: Automated test case generation for web APIs is a thriving research topic, where test cases are frequently derived from the API specification. However, this process is only partially automated, since testers are usually obliged to manually set meaningful valid test inputs for each input parameter. In this article, we present ARTE, an approach for the automated extraction of realistic test data for web APIs from knowledge bases like DBpedia.
Specifically, ARTE leverages the specification of the API parameters to automatically search for realistic test inputs using natural language processing, search-based, and knowledge extraction techniques. ARTE has been integrated into RESTest, an open-source testing framework for RESTful APIs, fully automating the test case generation process. Evaluation results on 140 operations from 48 real-world web APIs show that ARTE can efficiently generate realistic test inputs for 64.9% of the target parameters, outperforming the state-of-the-art approach SAIGEN (31.8%). More importantly, ARTE supported the generation of over twice as many valid API calls (57.3%) as random generation (20%) and SAIGEN (26%), leading to a higher failure detection capability and uncovering several real-world bugs. These results show the potential of ARTE for enhancing existing web API testing tools, achieving an unprecedented level of automation.

Conference paper: Automated reasoning on feature models (2005)
Benavides Cuevas, David Felipe; Trinidad Martín Arroyo, Pablo; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos

Abstract: Software Product Line (SPL) Engineering has proved to be an effective method for software production. However, in the SPL community it is well recognized that variability in SPLs is increasing by the thousands. Hence, automated support is needed to deal with variability in SPLs. Most of the current proposals for automatic reasoning on SPLs are not devised to cope with extra-functional features. In this paper we introduce a proposal to model and reason on an SPL using constraint programming.
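Reasoning tasks of this kind, such as counting the valid products of a feature model, can be expressed as constraint satisfaction. For a tiny model, brute-force enumeration already illustrates the idea; a sketch over an invented phone product line (real tools delegate to CSP or SAT solvers instead):

```python
from itertools import product

# Invented feature model: Phone (root), Calls (mandatory),
# GPS (optional), Camera (optional, requires a HighRes screen)
FEATURES = ["Phone", "Calls", "GPS", "Camera", "HighRes"]

def is_valid(selection):
    f = dict(zip(FEATURES, selection))
    return (f["Phone"]                              # root is always selected
            and f["Calls"] == f["Phone"]            # mandatory child
            and (f["Phone"] or not f["GPS"])        # optional child needs parent
            and (f["HighRes"] or not f["Camera"]))  # cross-tree "requires"

valid_products = [s for s in product([True, False], repeat=len(FEATURES))
                  if is_valid(s)]
print(len(valid_products))  # -> 6 valid configurations
```

Queries such as "is the model void?", "how many products exist?" or "is this configuration valid?" all reduce to such constraint checks.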
We take into account functional and extra-functional features, improve current proposals and present a running, yet feasible implementation.

Article: BIGOWL4DQ: Ontology-driven approach for Big Data quality meta-modelling, selection and reasoning (Elsevier B.V., 2024)
Barba González, Cristóbal; Caballero, Ismael; Varela Vaca, Ángel Jesús; Cruz Lemus, José Antonio; Gómez López, María Teresa; Navas Delgado, Ismael; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Junta de Andalucía; Universidad de Málaga; Consejería de Educación, Cultura y Deportes de la Junta de Comunidades de Castilla-La Mancha

Abstract: Data quality should be at the core of many Artificial Intelligence initiatives from the very first moment in which data is required for a successful analysis. Measurement and evaluation of the level of quality are crucial to determining whether data can be used for the tasks at hand. Conscious of this importance, industry and academia have proposed several data quality measurement and assessment frameworks over the last two decades. Unfortunately, there is no common and shared vocabulary for data quality terms. Thus, it is difficult and time-consuming to integrate data quality analysis within a (Big) Data workflow for performing Artificial Intelligence tasks. One of the main reasons is that, except for a reduced number of proposals, the presented vocabularies are neither machine-readable nor processable, needing human processing to be incorporated. Objective: This paper proposes a unified data quality measurement and assessment information model. This model can be used in different environments and contexts to describe data quality measurement and evaluation concerns. Method: The model has been developed as an ontology to make it interoperable and machine-readable. For better interoperability and applicability, this ontology, BIGOWL4DQ, has been developed as an extension of a previously developed ontology for describing knowledge management in Big Data analytics. Conclusions: This extended ontology provides the data quality measurement and assessment framework required when designing Artificial Intelligence workflows, together with integrated reasoning capacities. Thus, BIGOWL4DQ can be used to describe Big Data analysis and assess the data quality before the analysis. Results: Our proposal has been validated with two use cases. First, the semantic proposal has been assessed using an academic use case. Second, a real-world case study within an Artificial Intelligence workflow has been conducted to endorse our work.

Article: Bridges Between Spiking Neural Membrane Systems and Virus Machines (2024)
Ramírez de Arellano, Antonio; Orellana Martín, David; Pérez Jiménez, Mario de Jesús; Universidad de Sevilla. Departamento de Ciencias de la Computación e Inteligencia Artificial

Abstract: Spiking Neural P systems (SNP) are well-established computing models that take inspiration from spikes between biological neurons; these models have been widely used for both theoretical studies and practical applications. Virus machines (VMs) are an emerging computing paradigm inspired by viral transmission and replication. In this work, a novel extension of VMs inspired by SNPs is presented, called Virus Machines with Host Excitation (VMHEs). In addition, the universality and explicit results of SNPs and VMHEs are compared in both generating and computing mode. The VMHEs defined in this work are shown to be more efficient than SNPs, requiring fewer memory units (hosts in VMHEs and neurons in SNPs) in several tasks; for example, a universal machine was constructed with 18 hosts, compared with the 84 neurons in SNPs, and fewer than in other spiking models discussed in the work.

Article: CARMEN: A framework for the verification and diagnosis of the specification of security requirements in cyber-physical systems (Elsevier, 2021)
Varela Vaca, Ángel Jesús; Rosado, David G.; Sánchez, Luis E.; Gómez López, María Teresa; Martínez Gasca, Rafael; Fernández Medina, Eduardo; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Junta de Andalucía; Junta de Castilla-La Mancha; Universidad de Sevilla. TIC258: Data-centric Computing Research Hub

Abstract: In recent years, cyber-physical systems (CPS) have been receiving substantial mainstream attention, especially in industrial environments, but this popularity has been accompanied by serious security challenges. A CPS is a complex system that includes hardware and software components, with different suppliers and connection protocols, forcing complex data management and storage. For this reason, the construction, verification and diagnosis of secure CPSs become a major challenge, which involves a correct specification of security requirements, the verification of the correct system configurations, and, if necessary, the diagnosis to detect the features to be modified to obtain a secure configuration. In this paper, we propose a framework for the verification and diagnosis of security requirements, according to the possible correct configurations of the CPS. The framework is based on the specification of the security requirements and their analysis supported by Model-Driven Engineering and Software Product Line Engineering (SPLE) approaches.
To illustrate the usefulness, the proposal has been applied to the security requirements in an Agriculture 4.0 scenario based on automated hydroponic cultivationArtículo Detecting the ultra low dimensionality of real networks(Nature Research, 2022) Almagro Blanco, Pedro; Boguñá, Marián; Serrano, M. Ángeles; Universidad de Sevilla. Departamento de Ciencias de la Computación e Inteligencia Artificial; Agencia Estatal de Investigación. España; Generalitat de Catalunya; Universidad de Sevilla. TIC-137: Lógica, Computación e Ingeniería del ConocimientoReducing dimension redundancy to find simplifying patterns in high dimensional datasets and complex networks has become a major endeavor in many scientific fields. However, detecting the dimensionality of their latent space is challenging but necessary to generate efficient embeddings to be used in a multitude of downstream tasks. Here, we propose a method to infer the dimensionality of networks without the need for any a priori spatial embed ding. Due to the ability of hyperbolic geometry to capture the complex con nectivity of real networks, we detect ultra low dimensionality far below values reported using other approaches. We applied our method to real networks from different domains and found unexpected regularities, including: tissue specific biomolecular networks being extremely low dimensional; brain con nectomes being close to the three dimensions of their anatomical embedding; and social networks and the Internet requiring slightly higher dimensionality. Beyond paving the way towards an ultra efficient dimensional reduction, our findings help address fundamental issues that hinge on dimensionality, such as universality in critical behavior.Artículo DISCERNER: Dynamic selection of resource manager in hyper-scale cloud-computing data centres(Elsevier, 2021) Fernández Cerero, Damián; Ortega Rodríguez, Francisco Javier; Jakóbik, Agnieszka; Fernández Montes González, Alejandro; Universidad de Sevilla. 
Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Universidad de Sevilla. TIC134: Sistemas InformáticosData centres constitute the engine of the Internet, and run a major portion of large web and mobile applications, content delivery and sharing platforms, and Cloud-computing business models. The high performance of such infrastructures is therefore critical for their correct functioning. This work focuses on the improvement of data-centre performance by dynamically switching the main data-centre governance software system: the resource manager. Instead of focusing on the development of new resource-managing models as soon as new workloads and patterns appear, we propose DISCERNER, a decision-theory model that can learn from numerous data-centre execution logs to determine which existing resource-managing model may optimise the overall performance for a given time period. Such a decision-theory system employs a classic machine-learning classifier to make real-time decisions based on past execution logs and on the current data-centre operational situation. A set of extensive and industry-guided experiments has been simulated by a validated data-centre simulation tool. The results obtained show that the values of key performance indicators may be improved by at least 20% in realistic scenarios.Artículo Effects of Mindfulness on Conceptual Modeling Performance: a Series of Experiments(IEEE Computer Society, 2022) Bernárdez Jiménez, Beatriz; Durán Toro, Amador; Parejo Maestre, José Antonio; Juristo, Natalia; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Economía y Competitividad (MINECO). España; Junta de Andalucía; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Junta de AndalucíaContext. 
Mindfulness is a meditation technique whose main goal is keeping the mind calm and educating attention by focusing only on one thing at a time, usually breathing. The reported benefits of its continued practice can be of interest for Software Engineering students and practitioners, especially in tasks like conceptual modeling, in which concentration and clearness of mind are crucial. Goal. In order to evaluate whether Software Engineering students enhance their conceptual modeling performance after several weeks of mindfulness practice, a series of three controlled experiments was carried out at the University of Seville during three consecutive academic years (2013–2016) involving 130 students. Method. In all the experiments, the subjects were divided into two groups. While the experimental group practiced mindfulness, the control group was trained in public speaking as a placebo treatment. All the subjects developed two conceptual models based on a transcript of an interview, one before and another one after the treatment. The results were compared in terms of conceptual modeling quality (measured as effectiveness, i.e. the percentage of model elements correctly identified) and productivity (measured as efficiency, i.e. the number of model elements correctly identified per unit of time). Results. The statistically significant results of the series of experiments revealed that the subjects who practiced mindfulness developed slightly better conceptual models (their quality was 8.16% higher) and they did it faster (they were 46.67% more productive) than the control group, even though they did not have a previous interest in meditation. Conclusions. The practice of mindfulness improves the performance of Software Engineering students in conceptual modeling, especially their productivity.
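The two outcome metrics defined above (effectiveness as the percentage of reference model elements correctly identified, efficiency as correct elements per unit of time) can be sketched as follows. This is only an illustration of the definitions given in the abstract; the function names, element labels, and timings are hypothetical, not taken from the study's materials.

```python
def effectiveness(identified: set, reference: set) -> float:
    """Percentage of reference model elements correctly identified."""
    correct = identified & reference  # spurious elements do not count
    return 100.0 * len(correct) / len(reference)

def efficiency(identified: set, reference: set, minutes: float) -> float:
    """Correctly identified model elements per minute of modeling time."""
    correct = identified & reference
    return len(correct) / minutes

# Example: a subject identifies 9 of 12 reference elements (plus one
# spurious element) in a 30-minute session.
reference = {f"e{i}" for i in range(12)}
identified = {f"e{i}" for i in range(9)} | {"spurious"}
print(effectiveness(identified, reference))      # 75.0
print(efficiency(identified, reference, 30.0))   # 0.3
```

Comparing these two quantities between the mindfulness and control groups is what yields the 8.16% quality and 46.67% productivity differences reported above.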
Nevertheless, more experimentation is needed in order to confirm the outcomes in other Software Engineering tasks and populations.

Article Elastic Smart Contracts in Blockchains (IEEE Computer Society, 2021) Dustdar, Schahram; Fernández Montes, Pablo; García Rodríguez, José María; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Junta de Andalucía; Universidad de Sevilla. TIC205: Ingeniería del Software Aplicada
In this paper, we deal with questions related to blockchains in complex Internet of Things (IoT)-based ecosystems. Such ecosystems are typically composed of IoT devices, edge devices, cloud computing software services, as well as people, who are decision makers in scenarios such as smart cities. Many decisions related to analytics can be based on data coming from IoT sensors, software services, and people. However, they are typically based on different levels of abstraction and granularity. This poses a number of challenges when multiple blockchains are used together with smart contracts. This work proposes to apply our concept of elasticity to smart contracts, thereby enabling analytics in and between multiple blockchains in the context of IoT. We propose a reference architecture for Elastic Smart Contracts and evaluate the approach in a smart city scenario, discussing the benefits in terms of performance and self-adaptability of our solution.

Article Enabling security risk assessment and management for business process models (Elsevier, 2024) Rosado, David G.; Sánchez, Luis E.; Varela Vaca, Ángel Jesús; Santos Olmo, Antonio; Gómez López, María Teresa; Martínez Gasca, Rafael; Fernández Medina, Eduardo; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
Business processes (BP) are considered the enterprise's cornerstone but are increasingly in the spotlight of attacks.
Therefore, the design of business processes must consider the security risks and be adequately integrated into the information and operational systems. However, security risk assessment and management are rarely considered at the level of business processes during design time, let alone considering a risk architecture that takes into account the connection and dependencies of risks at the levels of the organisation, business processes, and information systems. In general, most approaches deal with integrating new artefacts into business process models to support risk analysis, but sometimes the notation can increase complexity, making it difficult to have a risk management tool to support the analysis. After analysing the current risk processes and frameworks, we have realised that the organisational and business process levels are often neglected. In this paper, the MARISMA-BP (MARISMA for Business Process) pattern is proposed: a security risk pattern to enable the assessment and management of risks for business process models. This approach is an artefact that has been validated in a real scenario following the design science methodology. Further, the MARISMA-BP pattern is supported by eMARISMA, an automated infrastructure that allows the definition and reuse of each risk component, helping to carry out the risk assessment and management process in an efficient and dynamic way. To demonstrate the applicability of the proposal, the MARISMA-BP pattern is applied to a real health-based business process scenario. The findings illustrate the efficacy of MARISMA-BP within eMARISMA for comprehensive risk assessment and management, underscoring its versatility and practical relevance in any business process environment.
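As a rough illustration of the kind of activity-level risk scoring a pattern like this supports: the abstract does not detail MARISMA-BP's internal model or the eMARISMA API, so every name, scale, and aggregation rule below is an assumption, not the actual method. A minimal likelihood-times-impact sketch over the activities of a business process might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """A single security risk attached to a process activity (names and
    1-5 scales are illustrative assumptions, not MARISMA-BP's model)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class Activity:
    name: str
    risks: list = field(default_factory=list)

def process_risk(activities) -> int:
    """Highest risk score across all activities of the process."""
    return max((r.score for a in activities for r in a.risks), default=0)

# Toy health-care-style process, echoing the paper's application domain
triage = Activity("Patient triage", [Risk("Data disclosure", 3, 5)])
billing = Activity("Billing", [Risk("Record tampering", 2, 3)])
print(process_risk([triage, billing]))  # 15
```

Defining `Risk` and `Activity` as reusable components mirrors, in spirit, the reuse of risk components the abstract attributes to eMARISMA, but the actual pattern is considerably richer (risk dependencies across organisational levels, treatment, and monitoring).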