KDD'24 Tutorial: Automated Mining of Structured Knowledge from Text in the Era of Large Language Models

Yunyi Zhang, Ming Zhong, Siru Ouyang, Yizhu Jiao, Sizhe Zhou, Linyi Ding, Jiawei Han
Computer Science Department, University of Illinois Urbana-Champaign
Time: Aug 25, 2024 10:00 AM - 1:00 PM (CEST) / 1:00 AM - 4:00 AM (PDT)

Abstract

Massive amounts of unstructured text data are generated daily, ranging from news articles to scientific papers. How to mine structured knowledge from such text data remains a crucial research question. Recently, large language models (LLMs) have advanced the text mining field with their superior text understanding and instruction-following abilities. LLMs are typically utilized in one of two ways: fine-tuning on human-annotated training data, which is labor-intensive and hard to scale, or prompting in a zero-shot or few-shot manner, which cannot take advantage of the useful information in the massive text data. Therefore, automated mining of structured knowledge from massive text data remains a challenge in the era of large language models.
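To make the second usage mode concrete, below is a minimal sketch of the zero-/few-shot prompting pattern: a task instruction, a few labeled demonstrations, and the query are assembled into a single prompt for an LLM. The entity-typing task, the instruction wording, and the example texts are illustrative assumptions, not material from the tutorial.

```python
def build_fewshot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, then demonstrations, then the query."""
    parts = [instruction]
    for text, label in examples:
        # Each demonstration pairs an input text with its gold answer.
        parts.append(f"Text: {text}\nAnswer: {label}")
    # The query is left unanswered for the model to complete.
    parts.append(f"Text: {query}\nAnswer:")
    return "\n\n".join(parts)

# Hypothetical entity-typing task used purely for illustration.
prompt = build_fewshot_prompt(
    "Identify the type of the entity in brackets.",
    [("[Barack Obama] was born in Hawaii.", "person"),
     ("He moved to [Chicago] in 1985.", "location")],
    "[Apple] released a new phone.",
)
```

In the zero-shot variant, the `examples` list is simply empty; the model must rely entirely on the instruction, which is exactly why such prompting cannot exploit the distributional signals present in a large corpus.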

In this tutorial, we cover the recent advancements in mining structured knowledge using language models with very weak supervision. We will introduce the following topics:

  1. introduction to large language models, which serves as the foundation for recent text mining tasks;
  2. ontology construction and enrichment, which automatically builds and enriches an ontology from a massive corpus;
  3. weakly-supervised text classification in flat and hierarchical label space;
  4. weakly-supervised information extraction, which extracts entity and relation structures.

Slides

  • Introduction [Slides]
  • Part I: Language Foundation Models for Text Analysis [Slides]
  • Part II: Taxonomy Construction and Enrichment [Slides]
  • Part III: Weakly-Supervised Text Classification [Slides]
  • Part IV: Weakly-Supervised Information Extraction [Slides]

Authors

Yunyi Zhang, Ph.D. student, Computer Science, UIUC. His research focuses on weakly supervised text mining, text classification, and taxonomy construction.

Website: https://yzhan238.github.io/

Ming Zhong, Ph.D. student, Computer Science, UIUC. His research focuses on structuring explicit knowledge from massive corpora and manipulating implicit knowledge in foundation models. He has received the Amazon-Illinois Ph.D. Fellowship (2023).

Website: https://maszhongming.github.io/

Siru Ouyang, Ph.D. student, Computer Science, UIUC. Her research focuses on mining structured knowledge from massive text data. She has received the Richard T. Cheng Fellowship (2022).

Website: https://ozyyshr.github.io/

Yizhu Jiao, Ph.D. student, Computer Science, UIUC. Her research focuses on knowledge structuring and grounding. She has received the Best Student Paper Runner-Up Award at ICDM 2020 and the Best Reviewer Award at KDD 2023.

Website: https://yzjiao.github.io/

Sizhe Zhou, M.S. student, Computer Science, UIUC. His research focuses on mining and understanding structured knowledge from massive text data.

Linyi Ding, M.S. student, Computer Science, UIUC. Her research focuses on weakly supervised knowledge graph construction and ontology construction from unstructured data.

Jiawei Han, Michael Aiken Chair Professor, Computer Science, UIUC. His research areas encompass data mining, text mining, data warehousing, and information network analysis, with over 900 research publications. He is a Fellow of ACM and IEEE and has received numerous prominent awards, including the ACM SIGKDD Innovation Award (2004) and the IEEE Computer Society W. Wallace McDowell Award (2009). He has delivered more than 50 conference tutorials and keynote speeches.

Website: http://hanj.cs.illinois.edu/