Large Language Models (LLMs) continue to demonstrate immense capabilities in many tasks relating to natural language understanding and generation. These capabilities are possible, to a large extent, because of the effective data management strategies used to prepare the training datasets for these models. Applying few-shot learning techniques opens unique opportunities for using LLMs in many domains, including enhancing knowledge graph construction (KGC) processes. However, the fundamental problem in KGC is identifying entities and relationships and resolving the complexities of triples. In this work, we explore the in-context learning capability of GPT-4 for instruction-driven, adaptive KGC and propose a novel approach that prompts GPT-4 to reflect on the errors it makes on the given examples and to generate verbal experience that guides the model away from similar mistakes during KGC. Our comparative analysis of few-shot learning against a zero-shot baseline not only highlights the strengths and limitations of GPT-4 in KGC but also demonstrates how its in-context learning capabilities can contribute to more dynamic, accurate, and instruction-adherent knowledge graphs.
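To make the reflect-then-extract idea in the abstract concrete, the following is a minimal sketch of how such a loop could look with the OpenAI chat API. The model name ("gpt-4"), the prompts, the example sentences, the gold triples, and the chat() helper are all illustrative assumptions, not the authors' actual pipeline or data.

# Illustrative sketch only; assumes the OpenAI Python SDK (>=1.0) and an
# OPENAI_API_KEY in the environment. Prompts and examples are placeholders.
from openai import OpenAI

client = OpenAI()

def chat(system: str, user: str) -> str:
    """Single GPT-4 call; model name and temperature are assumptions."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

system_prompt = ("You extract (head, relation, tail) triples from text. "
                 "Return one triple per line.")

# Step 1: few-shot extraction on worked examples whose gold triples are known.
few_shot_examples = [
    {"text": "Marie Curie won the Nobel Prize in Physics in 1903.",
     "gold": "(Marie Curie, award_received, Nobel Prize in Physics)"},
]

reflections = []
for ex in few_shot_examples:
    prediction = chat(system_prompt, ex["text"])
    # Step 2: ask the model to compare its output with the gold triples and
    # verbalize what went wrong, producing reusable "verbal experience".
    reflection = chat(
        "You critique triple-extraction mistakes and state a short lesson.",
        f"Text: {ex['text']}\nYour triples: {prediction}\n"
        f"Gold triples: {ex['gold']}\n"
        "Explain any errors and write one guideline to avoid them.",
    )
    reflections.append(reflection)

# Step 3: prepend the accumulated guidelines when extracting from new text,
# so the in-context examples plus lessons steer the final KGC pass.
guidance = "\n".join(reflections)
new_text = "Ada Lovelace collaborated with Charles Babbage on the Analytical Engine."
final_triples = chat(system_prompt + "\nLessons from past mistakes:\n" + guidance,
                     new_text)
print(final_triples)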
Citation:
Y. Mou, “Leveraging LLMs Few-shot Learning to Improve Instruction-driven Knowledge Graph Construction,” VLDB Workshop, Jan. 2025. Available: https://vldb.org/workshops/2024/proceedings/LLM+KG/LLM+KG-8.pdf
More Information:
https://vldb.org/workshops/2024/proceedings/LLM+KG/LLM+KG-8.pdf