Improving Performance of Automated Essay Scoring by Using Back-Translation Essays and Adjusted Scores
You-Jin Jong, Yong-Jin Kim, Ok-Chol Ri and Junwei Ma
Mathematical Problems in Engineering, 2022, vol. 2022, 1-10
Abstract:
Automated essay scoring plays an important role in assessing students' language abilities in education. Traditional approaches score essays with handcrafted features, which makes them time-consuming and complicated to build. Recently, neural network approaches have improved performance without any feature engineering. Unlike in other natural language processing tasks, only a small number of datasets are publicly available for automated essay scoring, and they are not sufficiently large. Because the performance of a neural network is closely related to the size of its training data, this lack of data limits the performance improvement of automated essay scoring models. In this study, we proposed a method that increases the number of essay-score pairs using back translation and score adjustment, and applied it to the Automated Student Assessment Prize dataset for augmentation. We evaluated the effectiveness of the augmented data using models from prior work. In addition, performance was evaluated on a model based on long short-term memory, which is widely used for automated essay scoring. Using the augmented data improved performance.
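The abstract only sketches the augmentation pipeline, so the following is a minimal Python sketch of how back-translation with score adjustment could be wired up. The function names (back_translate, adjusted_score, augment) and the similarity-based adjustment rule are illustrative assumptions, not the paper's exact procedure; any machine translation model can stand in for the pivot-language translators.

    import difflib

    def back_translate(essay, to_pivot, from_pivot):
        """Round-trip an essay through a pivot language to obtain a paraphrase."""
        return from_pivot(to_pivot(essay))

    def adjusted_score(original, paraphrase, score, min_score, max_score):
        """Scale the original score by how closely the paraphrase matches the
        original essay, then clamp to the valid score range. This similarity-based
        rule is only one plausible way to 'adjust' scores, not the paper's rule."""
        similarity = difflib.SequenceMatcher(None, original, paraphrase).ratio()
        return max(min_score, min(max_score, round(score * similarity)))

    def augment(pairs, to_pivot, from_pivot, min_score, max_score):
        """Return the original (essay, score) pairs plus back-translated pairs
        with adjusted scores."""
        augmented = list(pairs)
        for essay, score in pairs:
            paraphrase = back_translate(essay, to_pivot, from_pivot)
            augmented.append(
                (paraphrase,
                 adjusted_score(essay, paraphrase, score, min_score, max_score))
            )
        return augmented

    if __name__ == "__main__":
        # Placeholder "translators"; real use would plug in an EN<->pivot MT model.
        to_pivot = lambda text: text
        from_pivot = lambda text: text
        data = [("The essay text goes here.", 8)]
        print(augment(data, to_pivot, from_pivot, min_score=0, max_score=12))

In practice the augmented pairs would be concatenated with the original training set before fitting the scoring model; the clamping to the dataset's score range mirrors the idea that a paraphrased essay should not receive a score outside the original rubric.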
Date: 2022
Downloads:
http://downloads.hindawi.com/journals/mpe/2022/6906587.pdf (application/pdf)
http://downloads.hindawi.com/journals/mpe/2022/6906587.xml (application/xml)
Persistent link: https://EconPapers.repec.org/RePEc:hin:jnlmpe:6906587
DOI: 10.1155/2022/6906587