Knowledge Base Construction from Pre-trained Language Models by Prompt Learning

Xiao Ning, Remzi Celebi

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Abstract

Pre-trained language models (LMs) have advanced the state of the art for many semantic tasks and have also proven effective for extracting knowledge from the models themselves. Although several works have explored the capability of LMs for constructing knowledge bases, including via prompt learning, this potential has not yet been fully explored. In this work, we propose a method for extracting factual knowledge from LMs for given subject-relation pairs and explore the most effective strategy for generating the missing object entities for each relation of the triples. We design prompt templates for each relation using personal knowledge and descriptive information available on the web, such as Wikidata. Our LM probing approach is tested on the dataset provided by the International Semantic Web Conference (ISWC 2022) LM-KBC Challenge. To cope with the varying performance across relations, we design a parameter selection strategy for each relation. On the test dataset, we obtain an F1-score of 49.35%, which is higher than the baseline of 31.08%.
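To illustrate the probing setup the abstract describes, the minimal Python sketch below fills the object slot of a (subject, relation) pair with a masked LM and keeps candidates above a score threshold. This is not the authors' code: the model checkpoint, the relation name, the template wording, and the threshold value are all illustrative assumptions; it assumes the HuggingFace transformers fill-mask pipeline.

from transformers import pipeline

# Masked-LM probe; a BERT-style model is assumed here, not confirmed
# as the exact checkpoint used in the paper.
fill_mask = pipeline("fill-mask", model="bert-large-cased")

# One hand-crafted template per relation; the relation name and
# template text are hypothetical examples.
TEMPLATES = {
    "CountryBordersWithCountry": "{subject} shares a land border with [MASK].",
}

def probe(subject, relation, threshold=0.1):
    # Keep every candidate object whose probability clears the
    # threshold; per the abstract, such parameters would be tuned
    # separately for each relation.
    prompt = TEMPLATES[relation].format(subject=subject)
    candidates = fill_mask(prompt, top_k=20)
    return [c["token_str"] for c in candidates if c["score"] >= threshold]

print(probe("France", "CountryBordersWithCountry"))

In the paper's setup, the per-relation parameter selection strategy corresponds to choosing values such as this threshold independently for each relation, since a single global setting performs unevenly across relations.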
Original language: English
Title of host publication: Knowledge Base Construction from Pre-trained Language Models 2022
Pages: 46-54
Number of pages: 9
Volume: 3274
Publication status: Published - 1 Jan 2022
Event: 2022 Semantic Web Challenge on Knowledge Base Construction from Pre-Trained Language Models - Online, Hangzhou, China
Duration: 1 Jan 2022 - 1 Oct 2022

Publication series

Series: CEUR Workshop Proceedings
ISSN: 1613-0073

Conference

Conference: 2022 Semantic Web Challenge on Knowledge Base Construction from Pre-Trained Language Models
Abbreviated title: LM-KBC 2022
Country/Territory: China
City: Hangzhou
Period: 1/01/22 - 1/10/22

Keywords

  • Information Extraction
  • Link Prediction
  • Pre-trained language model
  • Prompt learning
