Ponencias (Lenguajes y Sistemas Informáticos)
Permanent URI for this collection: https://hdl.handle.net/11441/11394
Browsing Ponencias (Lenguajes y Sistemas Informáticos) by Title
Showing 1 - 20 of 1294
Ponencia
1st International Workshop on Maturity of Web Engineering Practices (MATWEP 2018) (Springer, 2018)
Domínguez Mayo, Francisco José; González Enríquez, José; Koch, Nora; Morillo Baro, Esteban; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Economía y Competitividad (MINECO). España
Knowledge transfer and adoption of software engineering approaches by practitioners is always a challenge for both academia and industry. The objective of the MATWEP workshop is to provide an open discussion space that combines solid theoretical work with practical on-the-field experience in the Web Engineering area. The topics covered are knowledge transfer of Web Engineering approaches, such as methods, techniques and tools, in all phases of the development life-cycle of Web applications. We report on the papers presented in the 2018 edition and the fruitful discussion on these topics.

Ponencia
24th International workshop on configuration (CONFWS'22) (ACM, 2022)
Forza, Cipriano; Galindo Duarte, José Ángel; Stettinger, Martin; Vareilles, Élise; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
Configuration is the task of composing product models of complex and variant systems that require parameterizable components. It traditionally relies on knowledge-representation formalisms that enable reasoning about, and automation of, different software engineering tasks when developing such systems. The main goal of this workshop is to promote research in the application areas related to configuration. Traditionally, this workshop has managed to bridge the gap with contributions from both academia and industry, thus enabling the exchange of ideas and challenges. It provides a forum for discussing ideas, evaluations and experiences, especially in using AI techniques for solving configuration problems.

Ponencia
A bioinspired ensemble approach for multi-horizon reference evapotranspiration forecasting in Portugal (Association for Computing Machinery, 2023-03)
Jiménez Navarro, Manuel Jesús; Martínez Ballesteros, María del Mar; Sofia Brito, Isabel; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia e Innovación (MICIN). España; Junta de Andalucía
The year 2022 was the driest year in Portugal since 1931, with 97% of its territory in severe drought. Water is especially important for the agricultural sector in Portugal, as it represents 78% of total consumption according to the Water Footprint report published in 2010. Reference evapotranspiration is essential due to its importance in optimal irrigation planning that reduces water consumption. This study analyzes and proposes a framework to forecast daily reference evapotranspiration at eight stations in Portugal from 2012 to 2022 without relying on public meteorological forecasts. The data include meteorological measurements obtained from sensors installed at the stations. The goal is to perform multi-horizon forecasting of reference evapotranspiration using the multiple related covariates. The framework combines data processing with the analysis of several state-of-the-art forecasting methods, including classical, linear, tree-based, artificial neural network and ensemble methods. Then, an ensemble of all trained models is proposed, using a recent bioinspired metaheuristic named the Coronavirus Optimization Algorithm to weight the predictions. The results in terms of MAE and MSE are reported, indicating that our approach achieved an MAE of 0.658.
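The entry above describes weighting the predictions of several trained forecasters with a bioinspired metaheuristic. As a rough illustration only, the sketch below combines hypothetical multi-horizon forecasts with a weighted average and searches the weights with plain random sampling; the data, model names and search procedure are stand-ins, not the paper's Coronavirus Optimization Algorithm or its dataset.

```python
import numpy as np

# Hypothetical multi-horizon predictions (n_samples x horizon) from three
# already-trained base models; in the paper these would come from classical,
# linear, tree-based and neural forecasters.
rng = np.random.default_rng(0)
y_true = rng.uniform(1.0, 6.0, size=(50, 7))          # daily ET0 values, 7-day horizon
preds = {
    "linear": y_true + rng.normal(0, 0.6, y_true.shape),
    "tree":   y_true + rng.normal(0, 0.5, y_true.shape),
    "nn":     y_true + rng.normal(0, 0.7, y_true.shape),
}

def ensemble(weights, preds):
    """Weighted average of the base-model forecasts."""
    w = np.asarray(weights) / np.sum(weights)
    return sum(wi * p for wi, p in zip(w, preds.values()))

def mae(y, y_hat):
    return float(np.mean(np.abs(y - y_hat)))

# Stand-in for the metaheuristic search: random sampling of weight vectors.
best_w, best_mae = None, float("inf")
for _ in range(2000):
    w = rng.uniform(0, 1, size=len(preds))
    score = mae(y_true, ensemble(w, preds))
    if score < best_mae:
        best_w, best_mae = w / w.sum(), score

print("best weights:", np.round(best_w, 3), "validation MAE:", round(best_mae, 3))
```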
Ponencia
A Case Study for Generating Test Cases from Use Cases (IEEE Computer Society, 2008)
Gutiérrez Rodríguez, Javier Jesús; Escalona Cuaresma, María José; Mejías Risoto, Manuel; Torres Valderrama, Jesús; Centeno, Arturo H.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Educación y Ciencia (MEC). España; Universidad de Sevilla. TIC021: Ingeniería Web y Testing Temprano (IWT2)
The verification of the correct implementation of use cases is a vital task in software development and quality assurance. Although there are many works describing how to generate test cases from use cases, there are very few case studies and empirical results about their application and effectiveness. This paper introduces a first case study that tests the correct implementation of use cases in a web system and a command-line system, analyses the results, and shows that the generation of test cases from use cases achieves a success rate of about 80%.

Ponencia
A Catalogue of Inter-Parameter Dependencies in RESTful Web APIs (Springer, 2019)
Martín López, Alberto; Segura Rueda, Sergio; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Economía y Competitividad (MINECO). España; Ministerio de Ciencia, Innovación y Universidades (MICINN). España; Ministerio de Educación, Cultura y Deporte (MECD). España; Universidad de Sevilla. TIC205: Ingeniería del Software Aplicada
Web services often impose dependency constraints that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification provide no support for the formal description of such dependencies, which makes it hardly possible to automatically discover and interact with services without human intervention. Researchers and practitioners are openly requesting support for modelling and validating dependencies among input parameters in web APIs, but this is not possible unless we share a deep understanding of how dependencies emerge in practice, which is the aim of this work. In this paper, we present a thorough study on the presence of dependency constraints among input parameters in web APIs in industry. The study is based on a review of more than 2.5K operations from 40 real-world RESTful APIs from multiple application domains. Overall, our findings show that input dependencies are the norm, rather than the exception, with 85% of the reviewed APIs having some kind of dependency among their input parameters. As the main outcome of our study, we present a catalogue of seven types of dependencies consistently found in RESTful web APIs.

Ponencia
A Comparative Study of Classifier Combination Methods Applied to NLP Tasks (Springer, 2011)
Enríquez de Salamanca Ros, Fernando; Troyano Jiménez, José Antonio; Cruz Mata, Fermín; Ortega Rodríguez, Francisco Javier; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
There are many classification tools that can be used for various NLP tasks, although none of them can be considered the best of all, since each one has its own particular virtues and defects. Combination methods can serve both to maximize the strengths of the base classifiers and to reduce the errors caused by their defects, improving the results in terms of accuracy. We present a comparative study of the most relevant combination methods, which shows that combination seems to be a robust and reliable way of improving results.
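As a loose illustration of the classifier-combination idea discussed in the entry above, the following sketch (assuming scikit-learn is available) combines three base classifiers by majority voting on toy data; the classifiers and data are placeholders, not the NLP tasks or combination methods evaluated in the paper.

```python
# Minimal sketch of classifier combination by majority voting, assuming
# scikit-learn is available; the base classifiers and the toy data are
# placeholders, not the NLP setups evaluated in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
]
combined = VotingClassifier(estimators=base, voting="hard")

# Compare each base classifier with the combined (voting) classifier.
for name, clf in base + [("voting", combined)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```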
Capítulo de Libro
A Comparative Study of Machine Learning Regression Methods on LiDAR Data: A Case Study (Springer, 2014)
García Gutiérrez, Jorge; Martínez Álvarez, Francisco; Troncoso Lora, Alicia; Riquelme Santos, José Cristóbal; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
Light Detection and Ranging (LiDAR) is a remote sensor able to extract vertical information from sensed objects. LiDAR-derived information is nowadays used to develop environmental models for describing fire behaviour or quantifying biomass stocks in forest areas. Multiple linear regression (MLR) with prior stepwise feature selection is the most common method in the literature for developing LiDAR-derived models. MLR defines the relation between the set of field measurements and the statistics extracted from a LiDAR flight. Machine learning has recently received increasing attention as a way to improve classic MLR results. Unfortunately, few studies have compared the quality of the multiple machine learning approaches. This paper presents a comparison between the classic MLR-based methodology and common regression techniques in machine learning (neural networks, regression trees, support vector machines, nearest neighbour, and ensembles such as random forests). The selected techniques are applied to real LiDAR data from two areas in the province of Lugo (Galicia, Spain). The results show that support vector regression statistically outperforms the rest of the techniques when feature selection is applied. However, its performance cannot be said to be statistically different from that of random forests when previous feature selection is skipped.

Ponencia
A Comparison of Objective and Subjective Sleep Quality Measurement in a Group of Elderly Persons in a Home Environment (SpringerLink, 2020)
Gaiduk, Maksym; Seepold, Ralf; Martínez Madrid, Natividad; Ortega Ramírez, Juan Antonio; Conti, Massimo; Orcioni, Simone; Penzel, Thomas; Scherz, Wilhelm Daniel; Perea, Juan José; Serrano Alarcón, Ángel; Weiss, Gerald; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurement of sleep quality for older adults (65+) in the home environment. A total of 73 nights were evaluated in this study. A device placed under the mattress was used to obtain the objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurement of sleep quality, with an average standard deviation equal to 2 of the 10 possible quality points.

Ponencia
A Comparison of Test Case Prioritization Criteria for Software Product Lines (2014)
Sánchez Jerez, Ana Belén; Segura Rueda, Sergio; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. TIC205: Ingeniería del Software Aplicada
Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still achieving good coverage. However, not much attention has been paid to the order in which the products are tested. Test case prioritization techniques reorder test cases to meet a certain performance goal. For instance, testers may wish to order their test cases so as to detect faults as soon as possible, which would translate into faster feedback and earlier fault correction. In this paper, we explore the applicability of test case prioritization techniques to SPL testing. We propose five different prioritization criteria based on common metrics of feature models and compare their effectiveness in increasing the rate of early fault detection, i.e. a measure of how quickly faults are detected. The results show that different orderings of the same SPL test suite may lead to significant differences in the rate of early fault detection. They also show that our approach may contribute to accelerating the detection of faults in SPL test suites based on combinatorial testing.
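To illustrate the notion of test case prioritization mentioned in the entry above, the sketch below orders a hypothetical SPL test suite by a simple criterion (number of selected features) and compares the rate of early fault detection, measured with APFD, against an arbitrary order; the products, faults and criterion are invented and are not the five criteria proposed in the paper.

```python
import random

# Hypothetical SPL products: each is a set of selected features, and each
# fault is exposed by one product. Both are placeholders, not data from the paper.
random.seed(1)
products = {f"p{i}": set(random.sample(range(20), random.randint(3, 12))) for i in range(10)}
faults = {f"f{j}": random.choice(list(products)) for j in range(5)}  # fault -> product exposing it

def apfd(order):
    """Average Percentage of Faults Detected for a given test-case order."""
    n, m = len(order), len(faults)
    positions = [order.index(p) + 1 for p in faults.values()]
    return 1 - sum(positions) / (n * m) + 1 / (2 * n)

arbitrary_order = list(products)
# Prioritization criterion: run feature-richer products first.
by_features = sorted(products, key=lambda p: len(products[p]), reverse=True)

print("arbitrary order APFD:    ", round(apfd(arbitrary_order), 3))
print("feature-size order APFD: ", round(apfd(by_features), 3))
```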
Ponencia
A Conceptual Framework for Automated Negotiation Systems (Springer, 2006)
Resinas Arias de Reyna, Manuel; Fernández Montes, Pablo; Corchuelo Gil, Rafael; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
In the last years, much work has been done on the development of techniques for automated negotiation and, particularly, on the automated negotiation of SLAs. However, there is no work that describes how to develop advanced software systems that are able to negotiate automatically in an open environment such as the Internet. In this work, we develop a conceptual framework for automated negotiations of SLAs that serves as a starting point by identifying the elements that must be supported in those software systems. In addition, based on that conceptual framework, we report on a set of properties for automated negotiation systems that may be used to compare different proposals.

Ponencia
A Conceptual Framework for Efficient Web Crawling in Virtual Integration Contexts (Springer, 2011)
Hernández Salmerón, Inmaculada Concepción; Sleiman, Hassan A.; Ruiz Cortés, David; Corchuelo Gil, Rafael; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
Virtual integration systems require a crawling tool able to navigate and reach relevant pages on the Web in an efficient way. Existing proposals in the crawling area are aware of the efficiency problem, but most of them still need to download pages in order to classify them as relevant or not. In this paper, we present a conceptual framework for designing crawlers supported by a web page classifier that relies solely on URLs to determine page relevance. Such a crawler is able to choose in each step only the URLs that lead to relevant pages, and therefore reduces the number of unnecessary pages downloaded, optimising bandwidth and making it efficient and suitable for virtual integration systems. Our preliminary experiments show that such a classifier is able to distinguish between links leading to different kinds of pages, without previous intervention from the user.

Ponencia
A constraint-based approach for managing declarative temporal business process models (Association for Information Systems (AIS), 2018)
Jiménez Ramírez, Andrés; Barba Rodríguez, Irene; Valle Sevillano, Carmelo del; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. TIC021: Engineering and Science for Software Systems
There is an increasing interest in aligning information systems in a process-oriented way. As an alternative to traditional imperative models, which tend to be too rigid, processes may be specified in a declarative (e.g., constraint-based) way. Nonetheless, in general, offering operational support (e.g., generating possible execution traces) for declarative business process models entails more complexity than for imperative modeling alternatives. Such support becomes even more complex in many real scenarios where the management of complex temporal relations between the process activities is crucial (i.e., the temporal perspective should be managed). Despite the need to enable process flexibility and deal with temporal constraints, most existing tools are unable to manage both. In a previous work, we proposed TConDec-R, a constraint-based process modeling language which allows for the specification of temporal constraints. However, TConDec-R revealed a number of limitations that are overcome in the present work. More specifically, this paper significantly extends and improves our previous work by (1) defining TConDec-R process models based on high-level elements from the constraint programming paradigm, (2) introducing a constraint-based tool with a client/server architecture for providing operational support to TConDec-R process models, and (3) performing an empirical evaluation of the approach.
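As a minimal illustration of the kind of temporal constraint the entry above deals with, the sketch below checks one declarative rule ("every A must be followed by B within a deadline") over a hypothetical execution trace; the rule, activities and trace are invented and do not reflect the TConDec-R language or its constraint-programming implementation.

```python
# Illustrative check of one declarative temporal constraint over an execution
# trace: "every execution of A must be followed by B within max_delay time
# units". The trace, activity names and deadline are hypothetical.
from typing import List, Tuple

Trace = List[Tuple[str, int]]  # (activity, timestamp)

def response_within(trace: Trace, a: str, b: str, max_delay: int) -> bool:
    """True if every occurrence of a is followed by b within max_delay."""
    for act, t in trace:
        if act != a:
            continue
        if not any(act2 == b and t <= t2 <= t + max_delay for act2, t2 in trace):
            return False
    return True

trace = [("register", 0), ("review", 3), ("register", 10), ("review", 18)]
print(response_within(trace, "register", "review", 5))   # False: second review arrives too late
print(response_within(trace, "register", "review", 10))  # True
```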
Ponencia
A Constraint-based Job-Shop Scheduling Model for Software Development Planning (Asociación de Ingeniería del Software y Tecnologías de Desarrollo de Software (SISTEDES), 2009)
Barba Rodríguez, Irene; Valle Sevillano, Carmelo del; Borrego Núñez, Diana; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Educación y Ciencia (MEC). España
This paper proposes a constraint-based model for the Job Shop Scheduling Problem to be solved using local search techniques. The model can be used to represent a multiple software process planning problem when the different (activities of) projects compete for limited staff. The main aspects of the model are: the use of integer variables which represent the relative order of the operations to be scheduled, and two global constraints, all-different and increasing, for ensuring feasibility. An interesting property of the model is that cycle detection in the schedules is implicit in the satisfaction of the constraints. In order to test the proposed model, a parameterized local search algorithm has been used, with a neighborhood similar to that of Nowicki and Smutnicki, which has been adapted in order to be suitable for the proposed model.

Ponencia
A Constraint-based Model for Multi-objective Repair Planning (IEEE Computer Society, 2009)
Barba Rodríguez, Irene; Valle Sevillano, Carmelo del; Borrego Núñez, Diana; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
This work presents a constraint-based model for the planning and scheduling of disconnection and connection tasks when repairing faulty components in a system. Since multi-mode operations are considered, the problem involves the ordering and the selection of tasks and modes from a set of alternatives, using the shared resources efficiently. Additionally, delays due to changes of configuration and transportation are considered. The goal is the minimization of two objective functions: makespan and cost. The set of all feasible plans is represented by an extended And/Or graph that embodies all of the constraints of the problem, allowing non-reversible and parallel plans. A simple branch-and-bound algorithm has been used for testing the model with different combinations of the functions to minimize using the weighted-sum approach.
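The previous entry minimizes makespan and cost with a weighted-sum approach. The following sketch shows that idea on three hypothetical repair plans: each plan is scored by a weighted sum of its normalised objectives, and the preferred plan changes with the weight; it is only a worked example, not the And/Or-graph model or branch-and-bound search of the paper.

```python
# Weighted-sum scalarisation of two objectives (makespan and cost) over
# hypothetical candidate repair plans.
plans = {
    "plan A": {"makespan": 12, "cost": 40},
    "plan B": {"makespan": 18, "cost": 25},
    "plan C": {"makespan": 15, "cost": 30},
}

def weighted_sum(plan, w, max_makespan, max_cost):
    """Score = w * normalised makespan + (1 - w) * normalised cost (lower is better)."""
    return w * plan["makespan"] / max_makespan + (1 - w) * plan["cost"] / max_cost

max_ms = max(p["makespan"] for p in plans.values())
max_c = max(p["cost"] for p in plans.values())

for w in (0.2, 0.5, 0.8):
    best = min(plans, key=lambda k: weighted_sum(plans[k], w, max_ms, max_c))
    print(f"weight on makespan = {w}: best -> {best}")
```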
Ponencia
A Context-Oriented System For Mobile Devices (Universidad de Sevilla, 2011-06)
Cuadrado Cordero, Ismael; Soria Morillo, Luis Miguel; Ortega Ramírez, Juan Antonio; González Abril, Luis; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. Departamento de Economía Aplicada I
Our work provides developers of context-based applications for mobile devices with a framework for developing comprehensive and adaptable solutions that can interact with each other through a set of functionalities and interaction methods for building secure applications easily. In addition, our work provides a platform to reduce the energy consumption of the devices, thanks to the reuse of functionalities. To do this, we have designed a layered architecture that allows interaction between applications and context-oriented services transparently to users. The main layer of our architecture (the Core layer) provides a tool that allows communication between adjacent layers. Moreover, using our architecture, developers can design context-based applications in a simpler way, a very important goal given the increasing number and functionality of this kind of application on mobile devices. Furthermore, our architecture allows the reuse of knowledge between developers. In this work, OSGi technology has been used on mobile phones, which is cutting-edge research in the field.

Ponencia
A Controlled Experiment for Evaluating a Metric-Based Reading Technique for Requirements Inspection (IEEE Computer Society, 2004)
Bernárdez Jiménez, Beatriz; Genero Bocco, Marcela; Durán Toro, Amador; Toro Bonilla, Miguel; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Ministerio de Ciencia y Tecnología (MCYT). España
Natural language requirements documents are often verified by means of some reading technique. Some recommendations for defining a good reading technique point out that a concrete technique must not only be suitable for specific classes of defects, but also for the concrete notation in which requirements are written. Following this suggestion, we have proposed a metric-based reading (MBR) technique for requirements inspections, whose main goal is to identify specific types of defects in use cases. The systematic approach of MBR is basically based on a set of rules of the form "if the metric value is too low (or high), the presence of defects of a given type must be checked". We hypothesised that if the reviewers know these rules, the inspection process is more effective and efficient, which means that the defect detection rate is higher and the number of defects identified per unit of time increases. But this hypothesis lacks validity if it is not empirically validated. For that reason, the main goal of this paper is to describe a controlled experiment we carried out to ascertain whether the usage of MBR really helps in the detection of defects in comparison with a simple Checklist technique. The experiment results revealed that MBR reviewers were more effective at detecting defects than Checklist reviewers, but they were not more efficient, because MBR reviewers took longer than Checklist reviewers on average.
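To illustrate the "if the metric value is too low (or high), check for defects of a given type" rules mentioned in the entry above, the sketch below applies a few invented metric thresholds to a hypothetical use case description; the metrics, thresholds and defect types are assumptions for the example, not the actual MBR rules.

```python
# Illustrative metric-based reading rules over use-case metrics. The metrics,
# thresholds and defect types below are invented for the example.
use_case = {"steps": 2, "alternative_flows": 9, "actors": 0}

rules = [
    # (metric, direction, threshold, defect type to check)
    ("steps", "low", 3, "use case may be under-specified"),
    ("alternative_flows", "high", 6, "use case may mix several scenarios"),
    ("actors", "low", 1, "missing primary actor"),
]

for metric, direction, threshold, defect in rules:
    value = use_case[metric]
    if (direction == "low" and value < threshold) or (direction == "high" and value > threshold):
        print(f"{metric}={value}: check -> {defect}")
```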
Ponencia
A Controlled Experiment to Evaluate the Effects of Mindfulness in Software Engineering (ACM, 2014)
Bernárdez Jiménez, Beatriz; Durán Toro, Amador; Parejo Maestre, José Antonio; Ruiz Cortés, Antonio; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Universidad de Sevilla. TIC205: Ingeniería del Software Aplicada
Context. Many reports support the fact that some psycho-social aspects of software engineers are key factors for the quality of the software development process and its resulting products. Based on the experience of some of the authors after more than a year of practising mindfulness (a meditation technique aimed at increasing clearness of mind and awareness), we guessed that it could be interesting to empirically evaluate whether mindfulness positively affects not only the behaviour but also the professional performance of software engineers. Goal. In this paper, we present a quasi-experiment carried out at the University of Seville to evaluate whether Software Engineering & Information Systems students enhance their conceptual modelling skills after the continued daily practice of mindfulness during four weeks. Method. Students were divided into two groups: one group practised mindfulness, and the other (the control group) was trained in public speaking. In order to study the possible cause-and-effect relationship, the effectiveness (the rate of model elements correctly identified) and efficiency (the number of model elements correctly identified per unit of time) of the students developing conceptual modelling exercises were measured before and after taking the mindfulness and public speaking sessions. Results. The experiment results have revealed that the students who practised mindfulness became more efficient in developing conceptual models than those who attended the public speaking sessions. With respect to effectiveness, some enhancement has been observed, although not as significant as in the case of efficiency. Conclusions. This rising trend in effectiveness suggests that the number of sessions could have been insufficient and that a longer period of sessions could also have enhanced effectiveness significantly.

Ponencia
A Data Mining Method to Support Decision Making in Software Development Projects (École Supérieure d'Électronique de l'Ouest, 2003)
Álvarez, J.L.; Mata, J.; Riquelme Santos, José Cristóbal; Ramos Román, Isabel; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos; Comisión Interministerial de Ciencia y Tecnología (CICYT). España
In this paper, we present a strategy to induce knowledge to support decision making in Software Development Projects (SDP). The motivation of this work is to reduce the great number of SDPs that do not meet the initial cost requirements, the delivery date or the quality of the final product. The main objective of this strategy is to support the manager in taking decisions to establish management policies when beginning a software project. To this end, we apply a data mining tool, called ELLIPSES, to databases of SDPs. The databases are generated by means of the simulation of a dynamic model for the management of SDPs. The ELLIPSES tool is a new method oriented to discovering knowledge according to the expert's needs, by detecting the most significant regions. The essence of the method is an evolutionary algorithm that finds these regions one after another. The expert decides which regions are significant and determines the stop criterion. The extracted knowledge is offered through two types of rules: quantitative and qualitative models. The tool also offers a visualization of each rule by means of parallel coordinates. In order to present this strategy, ELLIPSES is applied to a database obtained by means of the simulation of a dynamic model of a concluded project.
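As a rough sketch of the kind of quantitative rule ELLIPSES reports, the code below represents a region as an interval per attribute and scores its support on simulated project data; the attributes, bounds and data are invented, and the evolutionary search that finds such regions in the real tool is not shown.

```python
# Sketch of a quantitative rule as a region (one interval per attribute),
# scored by how many simulated projects it covers. Attributes and data are
# invented stand-ins for the simulated SDP databases described above.
import random

random.seed(2)
projects = [
    {"team_size": random.randint(3, 15), "delay_days": random.randint(0, 120)}
    for _ in range(100)
]

region = {"team_size": (3, 7), "delay_days": (0, 30)}  # hypothetical rule

def covers(region, project):
    return all(lo <= project[a] <= hi for a, (lo, hi) in region.items())

support = sum(covers(region, p) for p in projects) / len(projects)
print(f"IF team_size in [3, 7] AND delay_days in [0, 30]  (support = {support:.2f})")
```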
Ponencia
A Dataset of Scratch Programs: Scraped, Shaped and Scored (IEEE, 2017-07)
Aivaloglou, E.; Hermans, F.; Moreno León, Jesús; Robles, G.; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
Scratch is increasingly popular, both as an introductory programming language and as a research target in the computing education research field. In this paper, we present a dataset of 250K recent Scratch projects from 100K different authors, scraped from the Scratch project repository. We processed the projects' source code and metadata to encode them into a database that facilitates querying and further analysis. We further evaluated the projects in terms of programming skills and mastery, and included the project scoring results. The dataset enables the analysis of the source code of Scratch projects, of their quality characteristics, and of the programming skills that their authors exhibit. The dataset can be used for empirical research in software engineering and computing education.

Ponencia
A decision tree-based method for protein contact map prediction (Springer, 2011)
Santiesteban Toca, Cosme E.; Márquez Chamorro, Alfonso Eduardo; Asencio Cortés, Gualberto; Aguilar Ruiz, Jesús Salvador; Universidad de Sevilla. Departamento de Lenguajes y Sistemas Informáticos
In this paper, we focus on protein contact map prediction. We describe a method where contact maps are predicted using a decision tree-based model. The algorithm includes the subsequence information between the pair of analyzed amino acids. In order to evaluate the generalization capabilities of the method, we carry out an experiment using 173 non-homologous proteins of known structure. Our results indicate that the method can assign protein contacts with an average accuracy of 0.34, superior to the 0.25 obtained by the FNETCSS method. This shows that our algorithm improves the accuracy with respect to the compared methods, especially as protein length increases.
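As a loose illustration of contact map prediction with a decision tree, the sketch below (assuming scikit-learn) trains a tree on synthetic features of residue pairs and reports held-out accuracy; the features and labels are stand-ins for the sequence and subsequence information used in the paper.

```python
# Minimal sketch of contact prediction as binary classification with a
# decision tree, assuming scikit-learn. The pairwise features and labels are
# synthetic placeholders, not the paper's encoding or protein data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pairs = 2000
X = rng.normal(size=(n_pairs, 10))            # features of each residue pair
y = (X[:, 0] + X[:, 1] > 1).astype(int)       # synthetic "contact" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("accuracy on held-out pairs:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```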