Classroom Utterance Analysis and Visualization Using a Generative Deep Neural Network Dialogue Model
Abstract
In elementary schools and classes at other levels, teachers have little time for reflection, which can involve self-assessment and classroom observation. Nevertheless, reflection activities are becoming increasingly important. Unfortunately, few studies support teachers' reflection, and existing studies that analyze classroom utterances treat them as plain text rather than as dialogues. In the field of dialogue response generation, however, several dialogue models based on neural networks have been proposed. In this study, we propose a dialogue model that considers the domain of the class and extracts similar utterances. The proposed model targets the domain of elementary school classes, in which speakers can be classified, and incorporates a method that abstracts speaker characteristics by clustering. The model can be constructed with a relatively small number of parameters. We also developed a system that visualizes the classification probabilities produced by the proposed dialogue model. Expert evaluation showed that the visualization system makes the classification bias of an utterance visually recognizable and allows the quality of an utterance to be confirmed through the similarity of utterances.
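The abstract mentions abstracting speaker characteristics by clustering. As a minimal illustrative sketch (not the paper's implementation), speakers in a classroom transcript could be grouped by clustering simple per-speaker utterance statistics; the feature choices and the plain k-means routine below are hypothetical examples, assuming only that teacher and pupil utterances differ in measurable ways such as length and question frequency.

```python
# Sketch: abstract speaker characteristics by clustering per-speaker features.
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over lists of floats; returns a cluster index per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers from the data
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Update step: each center becomes the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign

# Toy per-speaker features: (mean utterance length, question ratio).
# A teacher typically speaks longer and asks more questions than pupils.
speakers = {
    "teacher": [24.0, 0.60],
    "pupil_a": [5.0, 0.10],
    "pupil_b": [6.0, 0.05],
    "pupil_c": [4.0, 0.15],
}
labels = kmeans(list(speakers.values()), k=2)
```

With these toy features, the teacher separates into a cluster distinct from the pupils; in the actual model such cluster identities, rather than raw speaker IDs, would condition the dialogue model.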