Visualization of the Impact of Classroom Utterances Using Generative Dialogue Models

Authors

  • Sakuei Onishi
  • Tomohiko Yasumori, Okayama University of Science
  • Hiromitsu Shiina, Okayama University of Science

DOI:

https://doi.org/10.52731/liir.v004.181

Keywords:

Dialogue Analysis, Dialogue Model, Impact of Speech, Speech Visualization

Abstract

As teachers in elementary school classes have limited time for reflection, it is desirable for the reflection process to be automated. In this study, we therefore analyze the utterances of teachers and children using a neural network-based dialogue model. We also analyze and visualize the degree of impact of each utterance during the utterance generation process.
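
The abstract does not detail how the impact scores are computed. As a rough, non-authoritative sketch, the snippet below estimates how much each context utterance contributes to a generated reply by averaging a sequence-to-sequence dialogue model's cross-attention weights over that utterance's tokens; the model name, the attention-averaging heuristic, and the helper utterance_impact are illustrative assumptions, not the authors' method.

# Minimal sketch (not the authors' code): score each context utterance by the
# cross-attention mass the decoder places on its tokens while generating a reply.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/blenderbot-400M-distill"  # illustrative public dialogue model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.eval()

def utterance_impact(context_utterances, max_new_tokens=40):
    """Generate a reply and return it with one impact score per context utterance."""
    # Concatenate the dialogue history into a single encoder input while
    # remembering which token span belongs to which utterance.
    spans, ids = [], []
    for utt in context_utterances:
        piece = tokenizer(utt, add_special_tokens=False)["input_ids"]
        spans.append((len(ids), len(ids) + len(piece)))
        ids.extend(piece)
    input_ids = torch.tensor([ids])

    with torch.no_grad():
        out = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            output_attentions=True,
            return_dict_in_generate=True,
        )

    # out.cross_attentions: one tuple per generated token, each holding one
    # (batch, heads, query_len, src_len) tensor per decoder layer.
    per_step = [torch.stack(step).mean(dim=(0, 1, 2, 3)) for step in out.cross_attentions]
    src_attention = torch.stack(per_step).mean(dim=0)  # averaged weight per source token

    # Sum the attention mass over each utterance's token span and normalize.
    scores = torch.stack([src_attention[s:e].sum() for s, e in spans])
    scores = scores / scores.sum()

    reply = tokenizer.decode(out.sequences[0], skip_special_tokens=True)
    return reply, scores.tolist()

For example, utterance_impact(["Teacher: What do you notice about this shape?", "Child: It has four equal sides."]) would return the generated reply together with one normalized score per context utterance, which could then be rendered as a heat map over the classroom transcript.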

References

K. Akita, Transforming Pedagogy. Seorishobo, 2009, ch. The Turn from Teacher Education to Research on Teachers’ Learning Processes: Transformation into Research on Micro-Educational Practices, pp. 45–75, (in Japanese).

A. Sakamoto, “How do in-service teachers learn from their teaching experiences?” The Japanese Journal of Educational Psychology, vol. 55, no. 4, pp. 584–596, 2007, (in Japanese).

T. Yasumori, “Speech protocol analysis during classroom sessions and reflection of elementary school math teachers based on the PCK model,” The Bulletin of Japanese Curriculum Research and Development, vol. 41, no. 1, pp. 59–71, 2018, (in Japanese).

T. Onishi, S. Onishi, and H. Shiina, “Improved response generation consistency in multiturn dialog,” in 2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI), 2022, pp. 416–419.

A. Madsen, S. Reddy, and S. Chandar, “Post-hoc interpretability for neural NLP: A survey,” ACM Comput. Surv., vol. 55, no. 8, Dec. 2022.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.

R. Csáky, P. Purgai, and G. Recski, “Improving neural conversational models with entropy-based data filtering,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, Jul. 2019, pp. 5650–5669.

Z. Lin, G. I. Winata, P. Xu, Z. Liu, and P. Fung, “Variational transformers for diverse response generation,” arXiv preprint arXiv:2003.12738, 2020.

B. Sun, S. Feng, Y. Li, J. Liu, and K. Li, “Generating relevant and coherent dialogue responses using self-separated conditional variational AutoEncoders,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics, Aug. 2021, pp. 5624–5637.

T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “BERTScore: Evaluating text generation with BERT,” in International Conference on Learning Representations, 2020.

Published

2023-12-20