Fusion of Physical and Human Sensors for Condition Prediction

Preliminary Experiments in Smart Agriculture

Authors

  • Moritaro Inoue, Japan Advanced Institute of Science and Technology
  • Kenta Toya, Japan Advanced Institute of Science and Technology
  • Riku Ogawa, Japan Advanced Institute of Science and Technology
  • Naoshi Uchihira, Japan Advanced Institute of Science and Technology

DOI:

https://doi.org/10.52731/liir.v004.171

Keywords:

Internet of Things, Human Sensor, Machine Learning, Condition Prediction, Smart Voice Messaging System

Abstract

In recent years, the Internet of Things (IoT) has been used effectively in smart agriculture, where farmers can make decisions and transfer knowledge based on sensor data. However, the physical sensors of IoT systems (temperature, humidity, and illuminance sensors) are limited in their ability to capture the various changes in crops and the environment that occur in actual fields. Combining physical sensors with the five human senses (human sensors) makes it possible to flexibly record changes that cannot be captured by physical sensors alone. In this study, a smart voice messaging system is used to record observations from the five human senses as voice messages. Assisted by machine learning, preliminary experiments are conducted using planter boxes to predict soil condition in the watering process. Our results confirm the effectiveness and validity of fusing physical and human sensors.
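
To make the fusion idea concrete, the sketch below combines tabular physical-sensor readings (temperature, humidity, illuminance) with a crude keyword encoding of human voice-message observations and trains a gradient-boosting classifier to predict whether a planter box needs watering. It is a minimal illustration under stated assumptions only: the feature names, keyword flags, toy data, and the choice of LightGBM are assumptions for this sketch, not the authors' actual experimental pipeline.

# A minimal sketch of physical/human sensor fusion for soil-condition
# prediction. Feature names, keyword flags, toy data, and the choice of
# LightGBM are illustrative assumptions, not the authors' exact setup.
import pandas as pd
from lightgbm import LGBMClassifier

# Physical-sensor readings from an IoT planter box (toy values).
physical = pd.DataFrame({
    "temperature_c":  [22.1, 30.4, 27.8, 19.5, 31.0, 24.3],
    "humidity_pct":   [55.0, 38.2, 41.7, 60.3, 35.5, 52.1],
    "illuminance_lx": [1200, 8500, 6400,  900, 9100, 3000],
})

# Human-sensor observations: voice messages reduced to simple keyword
# flags (a text encoder could replace this hand-made step).
messages = [
    "soil is damp", "soil surface looks dry", "top soil feels dry",
    "soil is damp after rain", "soil looks dry and cracked", "soil feels moist",
]
human = pd.DataFrame({
    "mentions_dry":  [int("dry" in m) for m in messages],
    "mentions_damp": [int(("damp" in m) or ("moist" in m)) for m in messages],
})

# Fuse both sources into one feature table and train a classifier that
# predicts whether the planter needs watering (1) or not (0).
X = pd.concat([physical, human], axis=1)
y = [0, 1, 1, 0, 1, 0]

model = LGBMClassifier(n_estimators=30, min_child_samples=1)
model.fit(X, y)

# Predict the soil condition for a new combined observation.
new_obs = pd.DataFrame([{
    "temperature_c": 29.0, "humidity_pct": 39.0, "illuminance_lx": 7000,
    "mentions_dry": 1, "mentions_damp": 0,
}])
print(model.predict(new_obs))  # e.g. [1] -> watering recommended

In a real deployment the keyword flags would be derived from the voice messages by a speech-to-text and text-encoding step, but the fusion pattern (concatenating human-sensor features with physical-sensor features before training) is the same.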

Published

2023-12-20