Knowledge Graph Enhanced Large Language Model Application Architecture

Victor Wang
December 25, 2024
5 min read

Introduction

Starting December 5th, OpenAI launched new products, features, and demos for 12 days straight. On the eighth day, it announced that ChatGPT's search functionality would be available to all users. That search-enhanced large language models warranted a dedicated day of presentation, on equal footing with features like o1/o3 and ChatGPT Vision, underscores their importance.

ChatGPT's external search capability enables it to move beyond pre-trained data and access real-time information from the internet. This represents a typical knowledge-enhanced large language model application, where external knowledge enhancement expands the model's capabilities, allowing it to provide more precise and timely answers, especially for questions involving recent events, news, or information requiring rapid updates.

With knowledge enhancement, ChatGPT can utilize external search engines or databases to find real-time data and fresh knowledge. For example, when asked about current news events, ChatGPT can search the internet for answers rather than solely relying on its training data. Despite the excellence of large language models like GPT-4o in many tasks, they still face limitations in terms of domain knowledge accuracy and timeliness. Thus, this search enhancement capability helps improve the accuracy and timeliness of large models like ChatGPT in practical applications, particularly in specialized or dynamic knowledge domains. This article provides a brief overview of knowledge-enhanced large model application architecture.

Design Principles of LLM-Based Application Architecture

1. Avoiding Hallucination

The application architecture design should emphasize reducing hallucination phenomena. Large models handle basic understanding, while knowledge graphs, vector retrieval, and search engines provide data and knowledge supplements. This multi-module collaborative architecture functions like a multi-layered ecosystem, both robust and efficient.

2. Flexible Architecture, Multi-Scenario Coverage

Flexibility is a core feature of knowledge-enhanced large language models. The architecture should adapt to different industry scenarios, from content generation to knowledge Q&A, from intelligent search to business recommendations. Through modular design, users can select different functional combinations based on scenarios, enhancing system generalization and adaptability.

3. The Right Tools for the Right Job

Implementing efficient knowledge-enhanced large language models requires advanced tool support. The introduction of vector retrieval, knowledge graphs, and search engines enables deep semantic understanding and rapid response.

System Architecture Overview of Knowledge-Enhanced Large Models

The core concept of knowledge-enhanced large language model application architecture is to inject external knowledge into large models in both structured and unstructured forms, compensating for the model's inherent knowledge limitations, much as humans "look up reference materials" while learning. Architecturally, knowledge-enhanced large model applications typically use large models as general knowledge-processing infrastructure while combining document search, database retrieval, and knowledge graphs to inject high-precision, domain-specific knowledge. The system architecture of knowledge-enhanced large model applications is as follows.

Knowledge enhancement, as an augmentation method, aids understanding and reasoning through relevant information retrieval. Specifically, knowledge enhancement aims to address several key issues:

  1. Real-time Information Updates: Through external resource searches, large models can access the latest content, including news, academic research, and market dynamics.
  2. Improved Answer Precision: No longer limited to pre-training information, enabling more comprehensive and detailed answers.
  3. Professional Domain Adaptability: Specialized knowledge is sparse in pre-training corpora, so models perform worse on specialized tasks. Yet in practical applications, more specialized knowledge carries greater value. Knowledge-enhanced large model applications can effectively bridge this gap, enabling high-value applications.
  4. Interpretability of Generated Content: Making the model's reasoning process more transparent and understandable.

Knowledge Sources and Integration

  • Public Knowledge: Including public datasets and open-domain research literature. This portion is mainly absorbed by large models during pre-training and can also be accessed through general search engines (such as Baidu, Google, WeChat Search, Bing, etc.) and input to the large model through prompt context.
  • Private Knowledge: Utilizing small models or LoRA technology for training, or injecting enterprise-specific domain knowledge through enterprise search engines and knowledge graphs. Private knowledge injection can be achieved through knowledge graph construction, document management, and expert experience accumulation.
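As a minimal sketch of injecting retrieved knowledge through the prompt context, retrieved passages can be concatenated ahead of the question before generation. The question and reference passage below are illustrative:

```python
def build_prompt(question, passages):
    """Assemble retrieved passages into the prompt context for an LLM."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the reference material below.\n"
        f"References:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "When did ChatGPT search become available to all users?",
    ["OpenAI announced on the eighth day of its December event that "
     "ChatGPT search is available to all users."],
)
```

Production systems add deduplication, source attribution, and context-window budgeting on top of this basic assembly step.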

Key Module Design

The architecture consists of these core modules:

  1. Data Management: Responsible for data collection, import, document management, and data source management. This forms the foundation of knowledge enhancement, where high-quality data is key to model learning and knowledge base construction. For example, collecting data from Wikipedia, professional domain databases, and web crawlers, followed by cleaning, deduplication, and format conversion.
  2. Annotation Management: Includes dataset management, language and vision (NLP & CV) annotation, sample annotation for different SFT and RLHF objectives, and annotation task management. Annotation quality directly impacts model training effectiveness.
  3. Model Management: Responsible for commercial model integration, open-source model support, instruction fine-tuning, and large model evaluation. Selecting appropriate base models and performing targeted fine-tuning is a crucial step in knowledge enhancement. For example, using pre-trained models like LLaMA, Qwen, or Deepseek, with fine-tuning capabilities for specific tasks when needed.
  4. Prompt Engineering: This module handles prompt management, optimization, recommendation, few-shot learning, and automatic sample generation. Well-designed prompts can effectively guide models to utilize external knowledge.
  5. Knowledge Graph: Encompasses knowledge graph schema design, that is, designing appropriate schemas based on professional-domain or specific task requirements, and building internal enterprise knowledge systems.
  6. Large Model Graph Construction and Application: Supports natural language to SQL (NL2SQL) mapping, extractive knowledge construction, task orchestration, and online effect testing.
  7. LLM Graph Application: Includes semantic search, conversational Q&A, visual interactive analysis, and customized application development. The ultimate goal of knowledge enhancement is application in various real-world scenarios.
  8. Verification Management: In serious contexts such as medicine, finance, and manufacturing, review of data annotation and knowledge graph construction is necessary to ensure the correctness of knowledge-enhanced large model applications.
  9. System Management: Responsible for account management, permission management, storage management, monitoring and alerts, statistical reports, graph management, business glossary management, and system settings.
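As a toy illustration of the extractive knowledge construction mentioned in the graph construction module, a rule-based extractor can pull (subject, relation, object) triples from text. Real pipelines typically use an LLM or a trained extraction model; the patterns and sentences below are hypothetical:

```python
import re

# Toy rule-based patterns; production systems use learned extractors.
PATTERNS = [
    (re.compile(r"(.+?) is a (.+?)\.$"), "is_a"),
    (re.compile(r"(.+?) belongs to (.+?)\.$"), "belongs_to"),
]

def extract_triples(text):
    """Split text into sentences and match each against the relation patterns."""
    triples = []
    for sentence in re.split(r"(?<=\.)\s+", text.strip()):
        for pattern, relation in PATTERNS:
            m = pattern.match(sentence)
            if m:
                triples.append((m.group(1).strip(), relation, m.group(2).strip()))
    return triples

triples = extract_triples("LLaMA is a language model. Line 3 belongs to Plant A.")
```

Extracted triples then flow into the knowledge graph module, subject to the verification review described below.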

Key Technical Overview of Enhancement Methods

Currently, the main methods for enhancing large models and providing knowledge include knowledge graphs, vector retrieval, search engines, and business engines developed for specific industries. For detailed information about knowledge graphs, please refer to my bestselling technical book "Knowledge Graphs: Cognitive Intelligence Theory and Practice." Here's a brief introduction to these technologies:

1. Knowledge Graphs

Knowledge graphs are a technology that stores domain knowledge in graph structures, representing entities and their relationships through nodes and edges. Each node represents an entity (such as people, places, events, concepts, etc.), while edges represent relationships between these entities (such as "belongs to," "is related to"). Knowledge graph design can intuitively and precisely present complex domain knowledge, facilitating effective reasoning and querying.

In knowledge-enhanced large models, knowledge graphs serve as a structured knowledge representation method, providing efficient and authoritative knowledge support. For example, when the model faces a query requiring domain knowledge (such as "What process is used to manufacture XXXX product?"), it can directly obtain answers by querying the knowledge graph without relying on the language model's generation process. Through knowledge graphs, models can avoid generating inaccurate information, improving accuracy and reliability.
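The direct graph lookup described above can be sketched as a query over stored triples. This is a minimal in-memory stand-in for a real graph database; the entities and relations are invented for illustration:

```python
# Toy triple store: (subject, relation, object); entities are hypothetical.
TRIPLES = [
    ("Widget-X", "manufactured_by", "injection molding"),
    ("Widget-X", "made_of", "ABS plastic"),
    ("injection molding", "requires", "mold tooling"),
]

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in TRIPLES if s == subject and r == relation]

processes = query("Widget-X", "manufactured_by")
```

Because the answer comes from curated triples rather than free-form generation, it carries no risk of hallucination for facts the graph covers.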

2. Vector Retrieval

Vector retrieval technology relies on converting text, images, or other types of data into high-dimensional vectors, enabling similarity calculations and matching in vector space. This method has high semantic expression capability, capturing deep semantic associations in text rather than just surface-level text matching.

The core advantage of vector retrieval lies in its ability to handle fuzzy input and unstructured data. For example, faced with an imperfect question, the model can quickly find the most relevant information by calculating vector distances between the input text and database candidates. This technology not only improves model robustness but also supports cross-language search capabilities.
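The vector-distance matching described above can be sketched with plain cosine similarity. The 3-dimensional "embeddings" below are toy values; real systems use high-dimensional vectors produced by an embedding model and an approximate-nearest-neighbor index:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document "embeddings"; values are invented for illustration.
docs = {
    "news": [0.9, 0.1, 0.0],
    "recipe": [0.1, 0.8, 0.2],
}
query_vec = [0.8, 0.2, 0.1]
best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
```

The document whose vector lies closest in direction to the query vector wins, even when the two share no surface words, which is exactly the semantic matching advantage over keyword search.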

3. Search Engines

Search engines are an important component in knowledge-enhanced large models, primarily responsible for real-time acquisition of information from the internet or specific domains. They form a complementary relationship with knowledge graphs and language models, particularly when dealing with dynamic information (such as news, weather, regulatory updates, etc.), ensuring that the model provides the most current and accurate answers.

The introduction of search engines greatly expands the knowledge base of knowledge-enhanced large models, enabling models to timely acquire and utilize the latest facts and data. For example, when users ask questions about current news or the latest developments in specific fields, search engines can help large language models obtain and provide answers in real-time.
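One common design for the complementary relationship above is a lightweight router that decides when a question is time-sensitive enough to warrant a live search rather than a graph or model lookup. The keyword heuristic below is a deliberately simple stand-in; production systems use a classifier or the LLM itself for this routing decision:

```python
# Hypothetical trigger words for this sketch; real routers are learned.
TIME_SENSITIVE = ("today", "latest", "news", "current", "this week")

def needs_search(question: str) -> bool:
    """Heuristic: route time-sensitive questions to a live search engine."""
    q = question.lower()
    return any(word in q for word in TIME_SENSITIVE)

route_a = needs_search("What is the latest regulatory update?")  # live search
route_b = needs_search("What is a knowledge graph?")             # static knowledge
```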

4. Business Engines

Business engines are modules within knowledge-enhanced large models that focus on specific industries or application scenarios, specifically handling knowledge related to particular businesses. This module typically combines enterprise internal data, such as product information, customer data, transaction records, etc., enabling large language models to provide precise services for specific business scenarios.

The role of business engines is to combine knowledge retrieval with model reasoning, ensuring that models can provide answers that meet industry standards and user needs based on actual business requirements. For example, an e-commerce platform's intelligent customer service robot needs to access product information, order records, and customer data to accurately answer user inquiries in real-time.
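A business engine lookup can be sketched as structured retrieval over enterprise records that the model alone could never know. The order data and response format below are hypothetical:

```python
# Hypothetical order records an e-commerce customer-service bot might consult.
ORDERS = {
    "A1001": {"status": "shipped", "eta": "2024-12-27"},
}

def answer_order_query(order_id: str) -> str:
    """Ground the bot's reply in live business data instead of model memory."""
    order = ORDERS.get(order_id)
    if order is None:
        return f"Order {order_id} not found."
    return f"Order {order_id} is {order['status']}, expected by {order['eta']}."

reply = answer_order_query("A1001")
```

In practice the returned record is fed into the prompt context, so the LLM phrases the answer while the business engine guarantees the facts.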

Smart Manufacturing Application Scenarios and Practices

In the smart manufacturing domain, by constructing knowledge graphs of products, production lines, materials, personnel, machine equipment, supply chains, processes, quality control, and environmental parameters, complex relationship reasoning can provide precise knowledge support for manufacturing cause analysis, process optimization, equipment maintenance, and other aspects, achieving intelligent Q&A for relevant business scenarios.
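The complex relationship reasoning described here can be sketched as a multi-hop walk over cause-effect edges, e.g., tracing a quality symptom back to candidate root causes. The manufacturing entities below are invented for illustration:

```python
from collections import deque

# Toy cause-effect graph: each key affects the listed downstream conditions.
AFFECTS = {
    "humidity spike": ["resin moisture"],
    "resin moisture": ["surface defects"],
    "worn mold": ["surface defects"],
}

def causes_of(symptom):
    """BFS backwards over cause-effect edges to collect root-cause candidates."""
    reverse = {}
    for cause, effects in AFFECTS.items():
        for effect in effects:
            reverse.setdefault(effect, []).append(cause)
    found, queue = set(), deque([symptom])
    while queue:
        node = queue.popleft()
        for cause in reverse.get(node, []):
            if cause not in found:
                found.add(cause)
                queue.append(cause)
    return found

roots = causes_of("surface defects")
```

Note that the two-hop chain (humidity spike to resin moisture to surface defects) surfaces a root cause a flat keyword search over maintenance logs would likely miss.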