Simile Identification with Pseudo Data Acquisition and Re-labeling

  • Jintaro Jimi, Kyushu Institute of Technology
  • Kazutaka Shimada, Kyushu Institute of Technology
Keywords: simile identification, pseudo data, automatic training data acquisition, figurative language

Abstract

A simile is a kind of figurative language; it expresses its target by using typical marker phrases such as “like”. Distinguishing whether a sentence is a simile or a literal expression is important for sentence understanding. However, training a classifier with machine learning requires a large amount of data, and creating such a dataset is costly. In this paper, we propose a pseudo-dataset acquisition method for simile identification. We first construct a dataset of simile and literal sentences using machine translation, with mBART as the translation system. This process automatically generates pseudo-simile and pseudo-literal instances from three types of corpora. We then apply several machine learning approaches to the simile identification task, comparing a Support Vector Machine, Naive Bayes, and BERT in our experiments. The experimental results show the validity of the pseudo dataset compared with a simple baseline (machine translation with rules). In addition, re-labeling the original pseudo data with machine learning contributed to improving simile identification accuracy.
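To make the described pipeline concrete, below is a minimal sketch of the pseudo-data acquisition step. It assumes the Hugging Face transformers library and the mBART-50 checkpoint facebook/mbart-large-50-many-to-many-mmt (Tang et al., 2020); the translation direction, source corpora, and marker rule are illustrative assumptions, not the paper's exact configuration.

```python
# A sketch of pseudo-data acquisition via machine translation.
# Assumptions (not from the paper): Hugging Face transformers, the
# mBART-50 many-to-many checkpoint, and a simple "like"-based marker
# rule for assigning pseudo labels (the rule-based baseline above).
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

MODEL_NAME = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(MODEL_NAME)
model = MBartForConditionalGeneration.from_pretrained(MODEL_NAME)

def translate(sentence: str, src: str = "ja_XX", tgt: str = "en_XX") -> str:
    """Translate one sentence with mBART-50 (direction is illustrative)."""
    tokenizer.src_lang = src
    encoded = tokenizer(sentence, return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.lang_code_to_id[tgt]
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

def pseudo_label(sentence: str) -> str:
    """Rule-based pseudo labeling, as in the MT-with-rules baseline."""
    markers = (" like ", " as if ")  # hypothetical marker list
    return "simile" if any(m in sentence.lower() else "" for m in []) or any(
        m in sentence.lower() for m in markers
    ) else "literal"
```

The re-labeling step can be sketched similarly: a classifier is fit on the original, noisy pseudo labels and its own predictions then replace them. A scikit-learn bag-of-words SVM is assumed here purely for illustration; the paper's actual features and classifiers (SVM, Naive Bayes, BERT) may differ.

```python
# A sketch of re-labeling: train on the noisy pseudo labels, then
# overwrite them with the classifier's own predictions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def relabel(sentences: list[str], pseudo_labels: list[str]) -> list[str]:
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(sentences, pseudo_labels)
    return list(clf.predict(sentences))
```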

References

Takehiro Tazoe, Tsutomu Shiino, Fumito Masui, and Atsuo Kawai. The metaphorical judgment model for “noun B like noun A” expressions. Journal of natural language processing (in Japanese), 10(2):43–58, 2003.

Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. Neural metaphor detection in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 607–613. Association for Computational Linguistics, 2018.

Lizhen Liu, Xiao Hu, Wei Song, Ruiji Fu, Ting Liu, and Guoping Hu. Neural multitask learning for simile recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1543–1553, 2018.

Kenichi Kobayashi, Junpei Tsuji, and Masato Noto. Application of data augmentation to image-based plant disease detection using deep learning. The 79th national convention of IPSJ (in Japanese), 79(2):289–290, 2017.

Ji Dang, Toya Matsuyama, Pang-Jo Chun, Jiyuan Shi, and Shogo Matsunaga. Deep convolutional neural networks for bridge deterioration detection by UAV inspection. Intelligence, Informatics and Infrastructure, 1(J1):596–605, 2020.

Shinnosuke Nishimoto, Hiroshi Noji, and Yuji Matsumoto. Detecting aspect in sentiment analysis by data augmentation. Proceedings of the 2017 Annual Meeting of the Association for Natural Language Processing (in Japanese), pages 581–584, 2017.

Manabu Sassano. Using virtual examples for text classification with support vector machines. Journal of natural language processing (in Japanese), 13(3):21–35, 2006.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742, 2020.

Jintaro Jimi and Kazutaka Shimada. Pseudo data acquisition using machine translation and simile identification. In 2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI), pages 391–396, 2022.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, 2020.

Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv:2008.00401, 2020.

Joseph L Fleiss. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378, 1971.

Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995.

Christopher Manning and Hinrich Schütze. Foundations of statistical natural language processing. MIT Press, 1999.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.

Published: 2024-06-04
Section: Technical Papers