Realization for Finger Character Recognition Method by Similarity Measure of Finger Features

  • Takuma Nitta Musashino University
  • Shinpei Hagimoto Musashino University
  • Ari Yanase Musashino University
  • Ryotaro Okada Musashino University
  • Virach Sornlertlamvanich Musashino University
  • Takafumi Nakanishi Musashino University
Keywords: finger character knowledge base, finger character recognition, sign language, similarity measure


In this paper, we present a novel finger character recognition method for sign language that uses a dimension-reduced finger character feature knowledge base for similarity measurement. Sign language is a crucial means of communication for deaf and hearing-impaired people, yet one of the most important problems is that very few hearing people understand it. Moreover, there are not enough image data sets for learning finger characters. In addition to compiling a corpus of finger character images, it is necessary to realize an automatic recognition system for finger characters in sign language. We construct a knowledge base of finger character features and apply it to realize a novel finger character recognition method. Our method recognizes finger characters by measuring the similarity between the input finger character features and the knowledge base. The experimental results show that our approach efficiently utilizes a knowledge base generated from a small number of finger character images. We also present our prototype system and its experimental evaluation.
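The core idea of recognition by similarity measure can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes hand-landmark features are flattened into fixed-length vectors, uses cosine similarity as the measure, and classifies an input by its most similar knowledge-base entry. The labels and vectors are hypothetical placeholders.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(features: np.ndarray, knowledge_base: dict) -> tuple:
    """Return the knowledge-base label whose prototype vector
    is most similar to the input features, with its score."""
    best_label, best_score = None, -1.0
    for label, prototype in knowledge_base.items():
        score = cosine_similarity(features, prototype)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical knowledge base: one prototype feature vector per finger character.
kb = {
    "a": np.array([1.0, 0.0, 0.2]),
    "i": np.array([0.1, 1.0, 0.0]),
}
label, score = recognize(np.array([0.9, 0.1, 0.2]), kb)
```

In practice the feature vectors would first be projected into a lower-dimensional space (the paper's dimension reduction step) before the similarity comparison, so that a small number of training images still yields discriminative prototypes.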


G. Fang, W. Gao, and D. Zhao, “Large vocabulary sign language recognition based on fuzzy decision trees,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 34, no. 3, 2004, pp. 305-314.

W. W. Kong and S. Ranganath, “Signing Exact English (SEE): Modeling and recognition,” Pattern Recognition, vol. 41, no. 5, 2008, pp. 1638-1652.

C. Keskin et al., “Hand pose estimation and hand shape classification using multi-layered randomized decision forests,” Proceedings of the 12th European Conference on Computer Vision (ECCV) - Volume Part VI, 2012, pp. 852-863.

L. Gu, X. Yuan and T. Ikenaga, “Hand gesture interface based on improved adaptive hand area detection and contour signature,” 2012 International Symposium on Intelligent Signal Processing and Communications Systems, Taipei, 2012, pp. 463-468.

K. M. Lim, A. W. C. Tan and S. C. Tan, “Block-based histogram of optical flow for isolated sign language recognition,” Journal of Visual Communication and Image Representation, vol. 40, part B, 2016, pp. 538-545.

D. Konstantinidis, K. Dimitropoulos and P. Daras, “A deep learning approach for analyzing video and skeletal features in sign language recognition,” 2018 IEEE International Conference on Imaging Systems and Techniques (IST), Krakow, 2018, pp. 1-6.

T. Simon et al., “Hand keypoint detection in single images using multiview bootstrapping,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 4645-4653.

O. Koller, S. Zargaran and H. Ney, “Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent CNN-HMMs,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 3416-3424.

T. Liu, W. Zhou and H. Li, “Sign language recognition with long short-term memory,” 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, 2016, pp. 2871–2875.

D. Guo et al., “Sign language recognition based on adaptive HMMs with data augmentation,” 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, 2016, pp. 2876-2880.

J. Zhang, W. Zhou and H. Li, “A threshold-based HMM-DTW approach for continuous sign language recognition,” Proceedings of the International Conference on Internet Multimedia Computing and Service, 2014, pp. 237-240.

C. R. de Souza and E. Pizzolato, “Sign language recognition with support vector machines and hidden conditional random fields: Going from fingerspelling to natural articulated words,” Proceedings of the 9th International Conference on Machine Learning and Data Mining in Pattern Recognition, 2013, pp. 84-98.

F. Quiroga et al., “A study of convolutional architectures for handshape recognition applied to sign language,” XXIII Congreso Argentino de Ciencias de la Computación, La Plata, 2017, pp. 13-22.

A. Saxena, D. K. Jain and A. Singhal, “Sign language recognition using principal component analysis,” 2014 Fourth International Conference on Communication Systems and Network Technologies, Bhopal, 2014, pp. 810-813.

Y. Kiyoki, T. Kitagawa and T. Hayama, “A metadatabase system for semantic image search by a mathematical model of meaning,” ACM SIGMOD Record, vol. 23, no. 4, 1994, pp. 34-41.

T. Kitagawa and Y. Kiyoki, “A mathematical model of meaning and its application to multidatabase systems,” Proceedings RIDE-IMS '93: Third International Workshop on Research Issues on Data Engineering: Interoperability in Multidatabase Systems, Vienna, Austria, 1993, pp. 130-135.

F. Zhang et al., “MediaPipe Hands: On-device real-time hand tracking,” CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Seattle, WA, USA, 2020.

S. Hagimoto et al., “Knowledge base creation by reliability of coordinates detected from videos for finger character recognition,” Proceedings of the 19th IADIS International Conference e-Society 2021, FSP 5.1-F144, 2021, pp. 169-176.

P. Comon, “Independent component analysis, a new concept?,” Signal Processing, vol. 36, 1994, pp. 287-314.
