Introducing Various Semantic Models for Amharic: Experimentation and Evaluation with Multiple Tasks and Datasets

Seid Muhie Yimam, Abinew Ali Ayele, Gopalakrishnan Venkatesh, Ibrahim Gashaw and Chris Biemann
Additional contact information
Seid Muhie Yimam: Language Technology Group, Universität Hamburg, Grindelallee 117, 20146 Hamburg, Germany
Abinew Ali Ayele: Language Technology Group, Universität Hamburg, Grindelallee 117, 20146 Hamburg, Germany
Gopalakrishnan Venkatesh: International Institute of Information Technology, Bangalore 560100, India
Ibrahim Gashaw: College of Informatics, University of Gondar, Gondar 6200, Ethiopia
Chris Biemann: Language Technology Group, Universität Hamburg, Grindelallee 117, 20146 Hamburg, Germany

Future Internet, 2021, vol. 13, issue 11, 1-18

Abstract: The availability of pre-trained semantic models has enabled the rapid development of machine learning components for downstream applications. However, even when raw text is abundant for a low-resource language, very few semantic models are publicly available, and those that exist are usually multilingual models that do not fit the needs of low-resource languages well. We introduce several semantic models for Amharic, a morphologically complex Ethio-Semitic language. After investigating the publicly available pre-trained semantic models, we fine-tune two pre-trained models and train seven new ones. The models include Word2Vec embeddings, a distributional thesaurus (DT), BERT-like contextual embeddings, and DT embeddings obtained via network embedding algorithms. Moreover, we employ these models for different NLP tasks and study their impact. We find that the newly trained models perform better than the pre-trained multilingual models, and that contextual embeddings from FLAIR and RoBERTa outperform Word2Vec models on the NER and POS tagging tasks, while DT-based network embeddings are best suited to the sentiment classification task. We publicly release all semantic models and machine learning components, along with several benchmark datasets for NER, POS tagging, and sentiment classification, as well as Amharic versions of WordSim353 and SimLex999.
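WordSim353- and SimLex999-style datasets, like the Amharic versions released with this paper, are typically used to evaluate embeddings by correlating model similarity scores with human judgments. The sketch below shows that standard evaluation recipe in plain Python: cosine similarity between word vectors, ranked against human scores with Spearman correlation. The embedding vectors, transliterated Amharic word pairs, and human scores are invented for illustration; they are not from the released datasets.

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity between two dense word vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spearman(xs, ys):
    # Spearman rank correlation; assumes no tied scores for brevity
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical toy embeddings (transliterated Amharic words) and
# hypothetical human similarity judgments on a 0-10 scale.
emb = {
    "negus": [0.90, 0.10, 0.20],   # "king"
    "negest": [0.85, 0.15, 0.25],  # "queen"
    "mekina": [0.10, 0.80, 0.30],  # "car"
}
pairs = [("negus", "negest", 9.2), ("negus", "mekina", 1.5)]

model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
human_scores = [s for _, _, s in pairs]
print(spearman(human_scores, model_scores))  # 1.0: model ranking matches humans
```

A real evaluation would iterate over all word pairs in the dataset, skipping pairs with out-of-vocabulary words, and report the correlation per model.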

Keywords: datasets; neural networks; semantic models; Amharic NLP; low-resource language; text tagging
JEL-codes: O3
Date: 2021

Downloads: (external link)
https://www.mdpi.com/1999-5903/13/11/275/pdf (application/pdf)
https://www.mdpi.com/1999-5903/13/11/275/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:13:y:2021:i:11:p:275-:d:666378


Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jftint:v:13:y:2021:i:11:p:275-:d:666378