Semantic Waveform Measurement Method of Kansei Transition for Time-series Media Contents
Abstract
In this paper, we present a semantic waveform measurement method of Kansei transition for time-series media contents. Kansei transition is the change in a user's sensitivity evoked by time-series changes in media content. It is important to apply the time-series change of media content to Kansei information processing as Kansei transition. In our method, we represent the Kansei transition caused by the time-series change of media content as a waveform. In addition, we realize semantic waveform similarity measurement by comparing Kansei transitions represented as waveforms, applying a signal processing technique. This semantic similarity measurement makes it possible to measure the similarity between waveforms extracted from media contents along the time series. With our method, it is possible to realize media content retrieval and recommendation systems corresponding to the time-series Kansei transition of media content. Our method consists of two modules: a Kansei transition extraction module and a semantic waveform similarity measurement module. The Kansei transition extraction module extracts time-series Kansei magnitude from the features of time-series media contents as the Kansei transition. The semantic waveform similarity measurement module measures the similarities between waveforms represented as Kansei transitions. Our method enables us to calculate the similarity of media contents based on time-series changes in Kansei. We can apply our method to new media content retrieval that depends on the time-series change of Kansei in media content.
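To make the waveform-comparison step concrete, the following is a minimal sketch of measuring similarity between two Kansei-transition waveforms, assuming dynamic time warping (DTW) as the signal processing technique; the function names, the similarity normalization, and the toy Kansei-magnitude sequences are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D Kansei-magnitude waveforms."""
    n, m = len(a), len(b)
    # Accumulated-cost matrix; cost[0, 0] is the start of the warping path.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest warping path: match, insertion, or deletion.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def waveform_similarity(a, b):
    """Map the DTW distance into a similarity score in (0, 1]; higher is more similar."""
    return 1.0 / (1.0 + dtw_distance(a, b))

if __name__ == "__main__":
    # Hypothetical time-series Kansei magnitudes (e.g., one impression word per scene).
    query = np.array([0.1, 0.4, 0.8, 0.9, 0.5, 0.2])
    candidate = np.array([0.2, 0.3, 0.7, 0.9, 0.6, 0.3, 0.1])
    print(f"similarity = {waveform_similarity(query, candidate):.3f}")
```

In this sketch, DTW allows two media contents whose Kansei changes follow a similar shape but at different tempos to be judged similar, which matches the goal of retrieval based on time-series Kansei transition.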