EconPapers

On the Use of Knowledge Transfer Techniques for Biomedical Named Entity Recognition

Tahir Mehmood, Ivan Serina, Alberto Lavelli, Luca Putelli and Alfonso Gerevini
Additional contact information
Tahir Mehmood: Department of Information Engineering, University of Brescia, Via Branze 38, 25121 Brescia, Italy
Ivan Serina: Department of Information Engineering, University of Brescia, Via Branze 38, 25121 Brescia, Italy
Alberto Lavelli: NLP Research Group, Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy
Luca Putelli: Department of Information Engineering, University of Brescia, Via Branze 38, 25121 Brescia, Italy
Alfonso Gerevini: Department of Information Engineering, University of Brescia, Via Branze 38, 25121 Brescia, Italy

Future Internet, 2023, vol. 15, issue 2, 1-27

Abstract: Biomedical named entity recognition (BioNER) is a preliminary task for many downstream tasks, e.g., relation extraction and semantic search. As the amount of biomedical text available online grows, extracting the text of interest from biomedical documents becomes more demanding. Deep learning models have been adopted for BioNER, given their success in many other tasks. Nevertheless, the complex structure of biomedical text remains challenging for deep learning models, and the limited amount of annotated biomedical text makes it even harder to train models with millions of trainable parameters. A single-task model, which focuses on learning one specific task, struggles to learn complex feature representations from such a limited quantity of annotated data. Moreover, manually constructing annotated data is time-consuming. It is, therefore, vital to exploit other efficient ways of training deep learning models on the available annotated data. This work enhances the performance of the BioNER task by taking advantage of two knowledge transfer techniques: multitask learning and transfer learning. It presents two multitask models (MTMs) that learn shared features and task-specific features by implementing shared and task-specific layers. In addition, each trained MTM is fine-tuned on each specific dataset, tailoring it from a general feature representation to a specialized one. The presented empirical results and statistical analysis illustrate that the proposed techniques significantly enhance the performance of the corresponding single-task model (STM).
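The shared/task-specific split that the abstract describes (hard parameter sharing, where one encoder serves all datasets and each dataset gets its own output layer) can be sketched as follows. This is a minimal illustration in plain Python with toy linear layers; the paper's actual components (e.g., ELMo embeddings and recurrent layers) are not reproduced here, and all class and task names are hypothetical.

```python
class SharedEncoder:
    """Shared layer: one weight matrix reused by every task."""
    def __init__(self, in_dim, hid_dim):
        # Deterministic toy weights; a real model would learn these.
        self.w = [[0.01 * (i + j) for j in range(in_dim)] for i in range(hid_dim)]

    def forward(self, x):
        # Plain matrix-vector product: hidden representation shared by all tasks.
        return [sum(wi[k] * x[k] for k in range(len(x))) for wi in self.w]


class TaskHead:
    """Task-specific layer: separate weights per dataset/entity type."""
    def __init__(self, hid_dim, n_labels):
        self.w = [[0.01 * (i - j) for j in range(hid_dim)] for i in range(n_labels)]

    def forward(self, h):
        # Maps the shared representation to this task's label scores.
        return [sum(wi[k] * h[k] for k in range(len(h))) for wi in self.w]


class MultitaskModel:
    """Hard parameter sharing: one encoder, one head per task."""
    def __init__(self, in_dim, hid_dim, label_counts):
        self.encoder = SharedEncoder(in_dim, hid_dim)          # shared across tasks
        self.heads = {name: TaskHead(hid_dim, n)               # one head per dataset
                      for name, n in label_counts.items()}

    def forward(self, task, x):
        # The encoder output is reused; only the head differs per task.
        return self.heads[task].forward(self.encoder.forward(x))


# Hypothetical tasks: two BioNER datasets with different label sets.
model = MultitaskModel(in_dim=4, hid_dim=3,
                       label_counts={"genes": 3, "chemicals": 5})
scores = model.forward("chemicals", [1.0, 0.5, -0.2, 0.3])
print(len(scores))  # one score per label of the "chemicals" task: 5
```

Fine-tuning, as described in the abstract, would then take the jointly trained encoder and continue training it together with a single task head on that task's dataset alone, specializing the shared representation.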

Keywords: biomedical named entity recognition; deep learning; single-task model; ELMo; transfer learning; multitask learning
JEL-codes: O3
Date: 2023

Downloads: (external link)
https://www.mdpi.com/1999-5903/15/2/79/pdf (application/pdf)
https://www.mdpi.com/1999-5903/15/2/79/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:15:y:2023:i:2:p:79-:d:1071756

Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jftint:v:15:y:2023:i:2:p:79-:d:1071756