Adversarial Attacks for Time Series Classification using Partial Perturbations

  • Jun Teraoka, Graduate School of Information Sciences, Hiroshima City University
  • Keiichi Tamura, Graduate School of Information Sciences, Hiroshima City University
Keywords: Adversarial Examples, Time Series Data, Deep Learning, Security

Abstract

Adversarial attacks using adversarial examples have recently become a significant threat: they intentionally mislead deep learning models with manipulations that humans cannot recognize. Adversarial examples have primarily been studied in the field of image recognition; however, they have recently been applied in other fields, including time series classification. To generate adversarial examples, small perturbations imperceptible to humans are typically added to the entire input. For time series data, however, perturbing the entire series yields data that are clearly manipulated; in that case, the adversarial attack is immediately apparent to humans and does not pose a significant threat. This study shows that, by adopting partial perturbations, adversarial examples of time series data can be generated that are difficult to identify as adversarial. The fast gradient sign method (FGSM) and projected gradient descent (PGD), attack methods originally proposed for generating adversarial examples of image data, are applied to time series classification models. We propose partial-FGSM and partial-PGD attacks, which apply the perturbation to only a portion of the time series, so that fewer of the generated adversarial examples are easily recognized as adversarial. To evaluate the partial-FGSM and partial-PGD attacks, the 2 Class-Based-Detecting adversarial detection method is employed, as its effectiveness in protecting time series classification against adversarial attacks has been demonstrated. The evaluation results show that, for some datasets, attacks remain possible with only a small degradation in attack performance even when the perturbation ratio is reduced to 1/10.
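To make the masking idea concrete, the sketch below shows one plausible way a partial-FGSM step could be implemented. This is a minimal PyTorch illustration, not the authors' reference implementation: the function name partial_fgsm, the choice of the k time steps with the largest gradient magnitude, and the (batch, channels, length) tensor layout are all assumptions, and the paper's actual selection strategy may differ. Partial-PGD would iterate this masked step, projecting back into the epsilon-ball after each iteration.

```python
import torch
import torch.nn.functional as F

def partial_fgsm(model, x, y, eps=0.1, ratio=0.1):
    """Illustrative partial-FGSM step (hypothetical sketch).

    Perturbs only a fraction `ratio` of the time steps, here chosen
    as the steps with the largest gradient magnitude; the paper's
    exact selection strategy may differ.

    x: time series batch of shape (batch, channels, length)
    y: integer class labels of shape (batch,)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad = x.grad.detach()

    # Number of time steps allowed to change, e.g. 1/10 of the series.
    k = max(1, int(ratio * x.shape[-1]))

    # Rank time steps by total gradient magnitude across channels.
    scores = grad.abs().sum(dim=1)               # (batch, length)
    topk = scores.topk(k, dim=-1).indices        # (batch, k)

    # Binary mask that is 1 only on the selected time steps.
    mask = torch.zeros_like(scores)
    mask.scatter_(-1, topk, 1.0)
    mask = mask.unsqueeze(1)                     # broadcast over channels

    # FGSM step restricted to the masked region.
    return x.detach() + eps * grad.sign() * mask
```

With ratio=1.0 this reduces to ordinary FGSM; with ratio=0.1 it corresponds to the 1/10 perturbation ratio discussed in the abstract.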

References

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012, doi: 10.1145/3065386.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. “Intriguing properties of neural networks,” In ICLR, 2014.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and Harnessing Adversarial Examples,” In ICLR, 2015.

Samuel Henrique Silva and Peyman Najafirad, “Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey,” arXiv preprint arXiv:2007.00753, 2020.

X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial Examples: Attacks and Defenses for Deep Learning,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2805-2824, Sept. 2019, doi: 10.1109/TNNLS.2018.2886017.

Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang, “Recent Advances in Adversarial Training for Adversarial Robustness,” in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, pp. 4312-4321, 2021, doi: 10.24963/ijcai.2021/591.

Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel, “On the (statistical) detection of adversarial examples,” arXiv preprint arXiv:1702.06280, 2017.

N. Carlini and D. Wagner, “Audio Adversarial Examples: Targeted Attacks on Speech-to-Text,” 2018 IEEE Security and Privacy Workshops (SPW), 2018, pp. 1-7, doi: 10.1109/SPW.2018.00009.

Z. Shao, Z. Wu, and M. Huang, “AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, doi: 10.1109/TASLP.2021.3129339.

H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P. Muller, “Adversarial Attacks on Deep Neural Networks for Time Series Classification,” in 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, doi: 10.1109/IJCNN.2019.8851936, 2019.

F. Karim, S. Majumdar, and H. Darabi, “Adversarial Attacks on Time Series,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 10, pp. 3309-3320, Oct. 2021, doi: 10.1109/TPAMI.2020.2986319.

M. G. Abdu-Aguye, W. Gomaa, Y. Makihara, and Y. Yagi, “Detecting Adversarial Attacks In Time-Series Data,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3092-3096, doi: 10.1109/ICASSP40776.2020.9053311, 2020.

J. Teraoka and K. Tamura, “Detecting Adversarial Examples for Time Series Classification and Its Performance Evaluation,” in Czarnowski I., Howlett R.J., Jain L.C. (eds) Intelligent Decision Technologies. Smart Innovation, Systems and Technologies, vol 238. Springer, Singapore, 2021, doi: 10.1007/978-981-16-2765-1_47.

J. Teraoka and K. Tamura, "Adversarial Examples of Time Series Data based on Partial Perturbations," 2022 12th International Congress on Advanced Applied In-formatics (IIAI-AAI), Kanazawa, Japan, 2022, pp. 1-6, doi: 10.1109/IIAIAAI55812.2022.00011 (Present paper is the extended journal version of this paper).

Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow, “Transferability in machine learning: from phenomena to black-box attacks using adversarial samples,” arXiv preprint arXiv:1605.07277, 2016.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks,” in International Conference on Learning Representations, 2018.

Alex Krizhevsky and Geoffrey Hinton, “Learning Multiple Layers of Features from Tiny Images,” Technical Report, University of Toronto, 2009.

H. A. Dau et al., “The UCR time series archive,” in IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 6, pp. 1293-1305, November 2019, doi: 10.1109/JAS.2019.1911747.

Z. Wang, W. Yan, and T. Oates, “Time series classification from scratch with deep neural networks: A strong baseline,” 2017 International Joint Conference on Neural Networks (IJCNN), pp. 1578-1585, doi: 10.1109/IJCNN.2017.7966039, 2017.

H. Ismail Fawaz, G. Forestier, J. Weber et al. “Deep learning for time series classification: a review,” Data Mining and Knowledge Discovery, 33, 917–963 (2019), doi: 10.1007/s10618-019-00619-1.

Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, and Patrick McDaniel, “Technical Report on the CleverHans v2.1.0 Adversarial Examples Library,” arXiv preprint arXiv:1610.00768, 2016.

Published: 2024-10-28
Section: Technical Papers