Implementation of Automated Feedback System for Japanese Essays in Intermediate Education
DOI: https://doi.org/10.52731/liir.v003.057
Keywords: Automated Feedback System, Automated Essay Feedback, Question Answer System, 6+1 Writing-Trait, Japanese Language
Abstract
Traditional Automated Essay Scoring (AES) provides students with only a holistic score and cannot give meaningful feedback on their writing. We select five traits (holistic, structure, style, word, and readability) from the 6+1 writing-trait theory to create an Automated Essay Feedback (AEF) system for Japanese L1 students. By combining these rule-based traits with a data-driven model, we created a hybrid system that can automatically grade student essays and give feedback. The system identifies the parts of a student's writing that need improvement, then recommends corrective and suggestive feedback. Our contributions are twofold: we design a five-writing-trait AEF for Japanese L1 students, and we implement the holistic corrective writing trait.
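As a minimal sketch of the hybrid idea described above, a rule-based trait check can be combined with a score produced by a data-driven model. All function names, the sentence-length threshold, and the feedback wording here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: combine a rule-based readability trait with a
# model-predicted holistic score. Names and thresholds are illustrative.

def rule_based_readability(sentences, max_len=60):
    """Flag indices of sentences whose character count exceeds a threshold."""
    return [i for i, s in enumerate(sentences) if len(s) > max_len]

def hybrid_feedback(essay, model_score):
    """Return a holistic score plus rule-based readability feedback."""
    # Split on the Japanese full stop; a real system would use a tokenizer.
    sentences = [s for s in essay.split("。") if s]
    flagged = rule_based_readability(sentences)
    feedback = [
        f"Sentence {i + 1} is long ({len(sentences[i])} chars); consider splitting it."
        for i in flagged
    ]
    # The data-driven component supplies the holistic score; the rules
    # supply localized, trait-specific suggestions.
    return {"holistic_score": model_score, "readability_feedback": feedback}

result = hybrid_feedback("短い文です。" + "と" * 80 + "。", model_score=3.5)
print(result["holistic_score"])             # 3.5
print(len(result["readability_feedback"]))  # 1
```

In a full system, `model_score` would come from a trained regressor over essay features, and each trait (structure, style, word, readability) would contribute its own rule set of this shape.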