Generating a Technical Trend Map by Analyzing the Structure of U.S. Patents Using Patent Families
Abstract
Researchers and developers search for patents in fields related to their own work to learn about the problems and effective technologies in those fields. However, reading the full text of a large number of patents is impractical, so a method that allows patent information to be grasped quickly is needed. In this study, we analyze the structure of U.S. patents with the aim of extracting important information. Using Japanese patents annotated with structural tags such as “field”, “problem”, “solution”, and “effect”, together with the corresponding U.S. patents (patent families), we automatically created a dataset of 81,405 U.S. patents with structural tags. Using this dataset, we then conducted an experiment to automatically assign structural tags to each sentence of the U.S. patents. For the embedding layer, we used BERT, a language representation model pretrained on patent documents, and constructed a multi-label classifier that assigns each sentence labels from four categories: “field”, “problem”, “solution”, and “effect”. The classifier achieved a precision of 0.6994, a recall of 0.8291, and an F-measure of 0.7426. Finally, we analyzed the structure of U.S. patents with our method and generated a technical trend map, confirming the effectiveness of the proposed method.
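The tagging step described above is a multi-label decision: each sentence can receive zero, one, or several of the four structural tags. The sketch below illustrates only that decision rule, not the authors' implementation; the label names come from the abstract, while the logits are assumed to come from a classification head on top of a patent-pretrained BERT encoder (not reproduced here). Each tag is scored with an independent sigmoid and kept if its probability clears a threshold, in contrast to a softmax, which would force exactly one tag per sentence.

```python
import math

# Structural tags used in the paper's dataset.
LABELS = ["field", "problem", "solution", "effect"]

def predict_tags(logits, threshold=0.5):
    """Multi-label decision over four structural tags.

    `logits` is a list of four raw scores, assumed to be produced by a
    classification head over a BERT sentence embedding (hypothetical here).
    Each tag is scored independently with a sigmoid, so a sentence may
    receive zero, one, or several tags.
    """
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

# Toy scores standing in for a trained head's output:
print(predict_tags([3.2, -1.5, 0.8, -2.0]))  # -> ['field', 'solution']
```

The per-tag sigmoid is what makes the classifier multi-label: a sentence describing both the problem and its solution can legitimately carry both tags.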