Show simple item record

Article

DC field | Value | Language
dc.creator | Denecke, Kerstin | es
dc.creator | May, Richard | es
dc.creator | Rivera-Romero, Octavio | es
dc.date.accessioned | 2024-06-27T09:33:35Z |
dc.date.available | 2024-06-27T09:33:35Z |
dc.date.issued | 2024-02 |
dc.identifier.issn | 0148-5598 | es
dc.identifier.issn | 1573-689X | es
dc.identifier.uri | https://hdl.handle.net/11441/160916 |
dc.description.abstract | Large Language Models (LLMs) such as Generative Pretrained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT), which use transformer model architectures, have significantly advanced artificial intelligence and natural language processing. Recognized for their ability to capture associative relationships between words based on shared context, these models are poised to transform healthcare by improving diagnostic accuracy, tailoring treatment plans, and predicting patient outcomes. However, there are multiple risks and potentially unintended consequences associated with their use in healthcare applications. This study, conducted with 28 participants using a qualitative approach, explores the benefits, shortcomings, and risks of using transformer models in healthcare. It analyses responses to seven open-ended questions using a simplified thematic analysis. Our research reveals seven benefits, including improved operational efficiency, optimized processes and refined clinical documentation. Despite these benefits, there are significant concerns about the introduction of bias, auditability issues and privacy risks. Challenges include the need for specialized expertise, the emergence of ethical dilemmas and the potential reduction in the human element of patient care. For the medical profession, risks include the impact on employment, changes in the patient-doctor dynamic, and the need for extensive training in both system operation and data interpretation. | es
dc.format | application/pdf | es
dc.format.extent | 11 p. | es
dc.language.iso | eng | es
dc.publisher | Springer | es
dc.rights | Attribution 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | *
dc.subject | Large Language Model | es
dc.subject | Transformer Models | es
dc.subject | Artificial Intelligence | es
dc.subject | Healthcare | es
dc.subject | Generative Artificial Intelligence | es
dc.title | Transformer Models in Healthcare. A Survey and Thematic Analysis of Potentials, Shortcomings and Risks | es
dc.type | info:eu-repo/semantics/article | es
dc.type.version | info:eu-repo/semantics/publishedVersion | es
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es
dc.contributor.affiliation | Universidad de Sevilla. Departamento de Tecnología Electrónica | es
dc.relation.publisherversion | https://link.springer.com/article/10.1007/s10916-024-02043-5 | es
dc.identifier.doi | 10.1007/s10916-024-02043-5 | es
dc.contributor.group | Universidad de Sevilla. TIC150: Tecnología Electrónica e Informática Industrial | es
dc.journaltitle | Journal of Medical Systems | es
dc.publication.volumen | 48 | es
dc.publication.issue | 1 | es
dc.publication.initialPage | 23 | es

Files | Size | Format
JMS_rivera-romero_2024_transfo ... | 1.265 MB | PDF

This record appears in the following collections

Attribution 4.0 International
Except where otherwise noted, the item's license is described as: Attribution 4.0 International