CN116340584A - Implementation method for automatically generating complex graph database query statement service - Google Patents

Implementation method for automatically generating complex graph database query statement service

Info

Publication number: CN116340584A (granted as CN116340584B)
Application number: CN202310590729.8A
Authority: CN (China)
Prior art keywords: graph database, language model, query statement, scale, query
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 古思为, 吴敏, 杨柳雪, 叶小萌
Assignee (original and current): Hangzhou Yueshu Technology Co., Ltd.

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06F — Electric Digital Data Processing
    • G06F16/00 — Information retrieval; database structures therefor; file system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/901 — Indexing; data structures therefor; storage structures
    • G06F16/9024 — Graphs; linked lists
    • G06F16/903 — Querying
    • G06F16/9032 — Query formulation
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to the technical field of graph database querying, solves the problem that graph databases cannot be searched and queried using natural language, and discloses an implementation method for automatically generating a complex graph database query statement service, which comprises the following steps: pre-training and fine-tuning a large-scale language model; extracting primitive information from the graph the user is working on; the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface; and executing the graph database query statements to obtain real-time query results on the graph, and checking, executing, and verifying the statements against those results.

Description

Implementation method for automatically generating complex graph database query statement service
Technical Field
The present application relates to the technical field of graph database querying, and in particular to an implementation method for automatically generating a complex graph database query statement service.
Background
A graph database language is a programming language for querying and manipulating data in a graph database. A graph database is a special type of database that stores data using a graph structure, representing relationships between data items through connections of nodes and edges. Unlike relational databases, graph databases are generally better suited to complex data structures and relationships, and are therefore widely used in social networking, recommendation systems, geographic information systems, and similar fields. Different graph databases typically use different query languages, but their underlying principles and grammars are similar: all describe relationships between nodes and edges, and query and manipulate graph data.
In large applications, graph databases are increasingly used to manage highly connected data. However, writing complex query statements is tedious and requires familiarity with the database structure and syntax; ordinary users without this background cannot search the graph database, which is a significant obstacle for them.
Disclosure of Invention
The present application aims to solve the prior-art problem that graph databases cannot be searched and queried using natural language, and provides an implementation method for automatically generating a complex graph database query statement service.
In a first aspect, an implementation method for automatically generating a complex graph database query statement service is provided, comprising:
pre-training and fine-tuning a large-scale language model;
extracting primitive information from the graph the user is working on, so that the primitive information can be provided as context to the large-scale language model when generating graph database query statements;
the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface;
and executing the graph database query statements to obtain real-time query results on the graph, and checking, executing, and verifying the statements against those results.
Further, the large-scale language model includes an LLM (large language model) or a GPT model.
Further, the large-scale language model is wrapped with a prompt template.
Further, pre-training the large-scale language model includes: pre-training the large-scale language model on a large-scale general-purpose corpus, including Wikipedia, news reports, and academic papers, so that it learns general language patterns and semantic rules.
Further, fine-tuning the large-scale language model includes:
collecting and preparing a labeled dataset for the specific task;
loading the weights of the pre-trained large-scale language model;
adjusting the parameters of the large-scale language model according to the requirements of the specific task;
fine-tuning the large-scale language model with the labeled dataset, tuning parameters according to performance on a validation set;
and evaluating the fine-tuned large-scale language model on a test set.
Further, the graph database query statements carry inline comments, and the graph database query statements can be edited further by the user.
Further, the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface comprises:
obtaining the request description entered by the user in the query interface;
sending the request description and the id of the current graph to the back-end interface of the large-scale language model;
the back-end interface obtaining the primitive information from the id of the current graph and constructing a query prompt for the language model;
the back-end interface returning the output of the large language model in streaming mode.
Further, graph database query statement modification suggestions are provided to the user according to the real-time query results.
In a second aspect, a computer-readable storage medium is provided, storing program code for execution by a device, the program code comprising steps for performing the method in any one of the implementations of the first aspect.
In a third aspect, an electronic device is provided, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, the program or instructions implementing, when executed by the processor, the method in any one of the implementations of the first aspect.
The application has the following beneficial effects:
1. The method introduces a large-scale language model and extracts primitive information, allowing query requirements to be described in natural language, so that a user can generate complex graph database query statements without being familiar with the database structure and syntax;
2. The method can be embedded in the graph-query composition interface as a real-time assistive feature, accelerating the composition of complex queries and the pre-generation of query templates; in addition, primitive information is obtained automatically, with no manual specification needed, which greatly improves usability and efficiency;
3. For different concrete scenarios, and taking computational cost into account, different large-scale language models and fine-tuning schemes are used; in particular, targeted fine-tuning on the grammar and semantics of the NebulaGraph query language effectively improves the quality and efficiency of generated graph database query statements;
4. The method uses a prompt template to prescribe, in natural language, the writing conventions, details, and input/output format examples, so that the large-scale language model understands the query intent more accurately and generates graph database query statements that match user expectations.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application.
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for implementing an automatically generated complex graph database query statement service in accordance with one embodiment of the present application;
FIG. 2 is a flow chart of fine tuning in a method for implementing an automatically generated complex graph database query statement service according to an embodiment of the present application;
FIG. 3 is a flowchart of prompting results and/or generating a graph database query statement in a method for implementing an automatic generation complex graph database query statement service according to an embodiment of the present application;
fig. 4 is a schematic diagram of an internal structure of an electronic device according to a second embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
An implementation method for automatically generating a complex graph database query statement service according to the first embodiment of the present application is described below.
Specifically, FIG. 1 shows a flowchart of the implementation method for automatically generating a complex graph database query statement service in the first embodiment of the present application; the method includes:
s100, pre-training and fine-tuning a large-scale language model;
specifically, the large-scale language model can be a LLM model or a GPT model, and can also be an autonomously developed language model, and the large-scale language model is packaged by using a prompt template, and a natural language is used for specifying written specifications, details, examples of input and output formats and schema of a graph database to generate query sentences of natural language description, so that the large-scale language model can more accurately understand query intention and generate graph database query sentences which accord with user expectations;
wherein pre-training the large-scale language model includes: pre-training the model on a large-scale general-purpose corpus so that it learns general language patterns and semantic rules, the corpus including Wikipedia, news reports, and academic papers;
Fine-tuning (the fine-tuning stage) refers to further training an already pre-trained model on a specific task after pre-training is complete. In this process, a labeled dataset is usually collected for the specific task, and the pre-trained model is then fine-tuned on it so that the model better fits the task's requirements. The key to fine-tuning is balancing the model's generalization ability against its fit to the specific task. Because the model has already learned a large number of natural-language structures and regularities during pre-training, the fine-tuning stage generally needs only a small amount of labeled data to achieve good performance on the specific task;
As shown in FIG. 2, fine-tuning the large-scale language model includes:
S101, collecting and preparing a labeled dataset for the specific task;
S102, loading the weights of the pre-trained large-scale language model;
S103, adjusting the parameters of the large-scale language model according to the requirements of the specific task;
S104, fine-tuning the large-scale language model with the labeled dataset, tuning parameters according to performance on the validation set;
S105, evaluating the fine-tuned large-scale language model on the test set.
The GPT model is a Transformer-based autoregressive language model; through pre-training, it can generate natural-language text that conforms to linguistic rules and context;
In the pre-training stage of the GPT model, large-scale general-purpose corpora such as Wikipedia and news reports can be used, so that the GPT model learns general language patterns and semantic rules. When fine-tuning the GPT model, domain corpora (such as documents and books in the graph database field) are used for targeted fine-tuning, so that the model better fits the language characteristics and query requirements of the domain;
In practical applications, to meet real-time and efficiency requirements, smaller GPT models, such as the small and medium variants of GPT-2 and GPT-3, can be used for inference and query-statement generation. These models are pre-trained on less data, but in the fine-tuning stage they can be trained in a targeted manner on the domain corpus so that they better fit the domain's query requirements;
In summary, the GPT model serves as the core model for natural-language understanding and plays a key role in this application. Through pre-training and fine-tuning, it can better understand the user's natural-language input and generate query statements that match user expectations; models of different scales can be used for inference and generation under different application scenarios and requirements, achieving a better balance of effect and efficiency.
S200, extracting primitive information (i.e., the schema structure) from the graph the user is working on, so that the primitive information can be provided as context to the large-scale language model when generating graph database query statements;
In the primitive-information extraction stage, an automatic extraction algorithm can be used so that the user does not have to supply primitive information explicitly; this improves the user experience and prompt accuracy, since no manual specification is needed. The primitive information is provided as context to the large-scale language model whenever a query-suggestion request is issued.
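One plausible shape for the automatically extracted primitive information, rendered into the context string handed to the model, is sketched below. The input layout (dicts of tag and edge-type properties) and the output format are assumptions for illustration; the patent does not specify a wire format.

```python
# Sketch of turning automatically extracted primitive (schema) information
# into a compact context block for the language model. Input layout and
# output format are assumed, not specified by the patent.

def schema_to_context(tags: dict, edges: dict) -> str:
    """Render vertex tags and edge types as one schema block."""
    def fmt(kind, items):
        lines = [f"{kind}:"]
        for name, props in items.items():
            cols = ", ".join(f"{p}: {t}" for p, t in props.items())
            lines.append(f"  {name}({cols})")
        return lines
    return "\n".join(fmt("Tags (vertex types)", tags) + fmt("Edge types", edges))
```

A compact single-string rendering like this keeps the schema cheap in prompt tokens while still naming every tag, edge type, and property the model is allowed to use.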
S300, the large-scale language model automatically generates prompt results and/or graph database query statements from the user's input in the query interface. The generated statements carry inline comments so that the user can understand them, and they can be edited further: the user can edit a statement directly in the dialog box, which is very convenient for the user;
As shown in FIG. 3, the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface includes:
S301, obtaining the request description entered by the user in the query interface, for example: "get the common friends of two given people";
S302, sending the request description and the id of the current graph to the back-end interface of the large-scale language model;
S303, the back-end interface obtaining the primitive information from the id of the current graph and constructing a query prompt for the language model;
S304, the back-end interface returning the output of the large language model in streaming mode.
That is, the large-scale language model can either feed back prompt results based on the user's input in the query interface, or directly produce graph database query statements. Specifically, when the user's natural-language input contains errors, is unclear, or is insufficient, guiding prompt results are output to help the user supplement or correct the input; when the input is sufficient to generate a graph database query statement, the statement is generated and output directly.
The back-end interface forwards the user's input to a general-purpose large GPT model for inference; in the current example, the API provided by OpenAI is used. In practice, however, any mainstream large language model (LLM) can meet the requirement of converting natural language into graph database DDL, DML, and DQL. The model is wrapped with a prompt template that uses natural language to define the writing conventions, details, examples of input and output formats, and the schema of the graph database, and generates query statements from natural-language descriptions.
S400, executing the graph database query statements to obtain real-time query results on the graph, and checking, executing, and verifying the statements against those results.
This step uses the generation capability of the large-scale language model to offer the user several candidate query-statement suggestions based on the real-time query results; the user can select among them and fine-tune as needed.
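The check/execute/verify loop of S400 can be sketched as follows. Both callables are assumed stand-ins: execute for the real graph database client, suggest_fix for a follow-up call to the language model with the error as context; the round limit is also an illustrative assumption.

```python
# Sketch of the S400 feedback loop: run the generated statement, and on
# failure feed the error back for a revised suggestion. execute and
# suggest_fix are assumed stand-ins for the database client and the model.

def verify_query(query, execute, suggest_fix, max_rounds=3):
    last_error = None
    for _ in range(max_rounds):
        ok, payload = execute(query)          # real-time result or error message
        if ok:
            return query, payload             # verified: statement ran on the graph
        last_error = payload
        query = suggest_fix(query, payload)   # revise using the error as context
    raise RuntimeError(f"query not verified after {max_rounds} rounds: {last_error}")
```

Feeding the execution error back into the suggestion call is what turns the real-time result into the "continuous check, execute, and verify" cycle the description refers to.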
In summary, this technical solution implements automatic generation of complex graph database query statements by applying targeted fine-tuning to a large-scale language model (LLM) / generative pre-trained model (GPT); it improves the accuracy of query suggestions and the user experience by extracting the primitive information (schema structure) as context; and through a real-time feedback loop it allows the user to continuously check, execute, and verify query statements, thereby exploiting the potential of LLM/GPT models to the fullest.
Example two
A computer readable storage medium according to a second embodiment of the present application stores program code for execution by a device, the program code including steps for performing the method in any one of the implementations of the first embodiment of the present application;
wherein the computer-readable storage medium may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random-access memory (RAM); the computer-readable storage medium may store program code which, when executed by a processor, carries out the steps of the method in any one of the implementations of the first embodiment of the present application.
Example III
As shown in fig. 4, an electronic device according to a third embodiment of the present application includes a processor, a memory, and a program or an instruction stored in the memory and executable on the processor, where the program or the instruction implements a method according to any one of the implementations of the first embodiment of the present application when executed by the processor;
the processor may be a general-purpose central processing unit (central processing unit, CPU), microprocessor, application specific integrated circuit (application specific integrated circuit, ASIC), graphics processor (graphics processing unit, GPU) or one or more integrated circuits for executing relevant programs to implement the methods according to any of the implementations of the first embodiment of the present application.
The processor may also be an integrated circuit chip with signal-processing capability. In implementation, each step of the method in any implementation of the first embodiment of the present application may be completed by an integrated hardware logic circuit in the processor or by instructions in software form.
The processor may also be a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software modules may reside in random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The storage medium is located in the memory; the processor reads the information in the memory and, together with its hardware, performs the functions of the units of the data processing apparatus of the embodiments of the present application, or performs the method in any implementation of the first embodiment of the present application.
The above is only a preferred embodiment of the present application, and the scope of protection of the present application is not limited thereto. Any equivalent substitution or modification of the technical solution and its inventive concept made by a person skilled in the art within the technical scope of the present disclosure shall fall within the scope of protection of the present application.

Claims (10)

1. An implementation method for automatically generating a complex graph database query statement service, characterized by comprising the following steps:
pre-training and fine-tuning a large-scale language model;
extracting primitive information from the graph the user is working on, so that the primitive information can be provided as context to the large-scale language model when generating graph database query statements;
the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface;
and executing the graph database query statements to obtain real-time query results on the graph, and checking, executing, and verifying the statements against those results.
2. The implementation method for automatically generating a complex graph database query statement service according to claim 1, wherein the large-scale language model comprises an LLM model or a GPT model.
3. The implementation method for automatically generating a complex graph database query statement service according to claim 1, wherein the large-scale language model is wrapped with a prompt template.
4. The implementation method for automatically generating a complex graph database query statement service according to claim 1, wherein pre-training the large-scale language model comprises: pre-training the large-scale language model on a large-scale general-purpose corpus, including Wikipedia, news reports, and academic papers, so that the large-scale language model learns general language patterns and semantic rules.
5. The implementation method for automatically generating a complex graph database query statement service according to claim 1, wherein fine-tuning the large-scale language model comprises:
collecting and preparing a labeled dataset for the specific task;
loading the weights of the pre-trained large-scale language model;
adjusting the parameters of the large-scale language model according to the requirements of the specific task;
fine-tuning the large-scale language model with the labeled dataset, tuning parameters according to performance on a validation set;
and evaluating the fine-tuned large-scale language model on a test set.
6. The implementation method for automatically generating a complex graph database query statement service according to claim 1, wherein the graph database query statements carry inline comments, and the graph database query statements can be edited further by the user.
7. The implementation method for automatically generating a complex graph database query statement service according to claim 3, wherein the large-scale language model automatically generating prompt results and/or graph database query statements from the user's input in the query interface comprises:
obtaining the request description entered by the user in the query interface;
sending the request description and the id of the current graph to the back-end interface of the large-scale language model;
the back-end interface obtaining the primitive information from the id of the current graph and constructing a query prompt for the language model;
the back-end interface returning the output of the large language model in streaming mode.
8. The implementation method for automatically generating a complex graph database query statement service according to claim 1, further comprising: providing graph database query statement modification suggestions to the user according to the real-time query results.
9. A computer readable storage medium storing program code for execution by a device, the program code comprising steps for performing the method of any one of claims 1-8.
10. An electronic device comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the method of any of claims 1-8.
Application CN202310590729.8A (filed 2023-05-24; priority date 2023-05-24) — Implementation method for automatically generating complex graph database query statement service — granted as CN116340584B; legal status: Active.

Priority Applications (1)

CN202310590729.8A — priority/filing date 2023-05-24 — Implementation method for automatically generating complex graph database query statement service

Publications (2)

CN116340584A — published 2023-06-27
CN116340584B — published 2023-08-11

Family ID: 86889723

Country: CN (China)

Cited By (11)

* Cited by examiner, † Cited by third party

- CN116594757A * — priority 2023-07-18, published 2023-08-15 — 深圳须弥云图空间科技有限公司 — Method and device for executing complex tasks by using large language model
- CN116703337A * — priority 2023-08-08, published 2023-09-05 — 金现代信息产业股份有限公司 — Project document examination system and method based on artificial intelligence technology
- CN116737909A * — priority 2023-07-28, published 2023-09-12 — 无锡容智技术有限公司 — Table data processing method based on natural language dialogue
- CN116910105A * — priority 2023-09-12, published 2023-10-20 — 成都瑞华康源科技有限公司 — Medical information query system and method based on pre-training large model
- CN116955674A * — priority 2023-09-20, published 2023-10-27 — 杭州悦数科技有限公司 — Method and web device for generating graph database statement through LLM
- CN116991985A * — priority 2023-09-28, published 2023-11-03 — 宏景科技股份有限公司 — Real-time information response method and system based on generated pre-training model
- CN117009492A * — priority 2023-09-28, published 2023-11-07 — 之江实验室 — Graph query method and system based on local knowledge base and natural language big model
- CN117112847A * — priority 2023-10-20, published 2023-11-24 — 杭州悦数科技有限公司 — Data generation method and device of graph database based on community model
- CN117251473A * — priority 2023-11-20, published 2023-12-19 — 摩斯智联科技有限公司 — Vehicle data query analysis method, system, device and storage medium
- CN117453717A * — priority 2023-11-06, published 2024-01-26 — 星环信息科技(上海)股份有限公司 — Data query statement generation method, device, equipment and storage medium
- JP7441366B1 — priority 2023-09-19, published 2024-02-29 — 株式会社東芝 — Information processing device, information processing method, and computer program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020052871A1 (en) * 2000-11-02 2002-05-02 Simpleact Incorporated Chinese natural language query system and method
CN109766417A (en) * 2018-11-30 2019-05-17 浙江大学 A kind of construction method of the literature annals question answering system of knowledge based map
CN113988071A (en) * 2021-10-20 2022-01-28 华南师范大学 Intelligent dialogue method and device based on financial knowledge graph and electronic equipment
CN114528312A (en) * 2022-02-16 2022-05-24 京东科技信息技术有限公司 Method and device for generating structured query language statement
CN115563313A (en) * 2022-10-25 2023-01-03 上海交通大学 Knowledge graph-based document book semantic retrieval system
WO2023051021A1 (en) * 2021-09-30 2023-04-06 阿里巴巴达摩院(杭州)科技有限公司 Human-machine conversation method and apparatus, device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Huanhuan; ZHANG Jingcan; LI Hu: "Research on an Ontology-Based Intelligent Question Answering System", Journal of Chongqing University of Technology (Natural Science), no. 05 *
ZHAO Fei; LIU Wenting: "An Exploration of Natural-Language Database Query Interfaces", Journal of Chifeng University (Natural Science Edition), no. 06 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116594757B (en) * 2023-07-18 2024-04-12 深圳须弥云图空间科技有限公司 Method and device for executing complex tasks by using large language model
CN116594757A (en) * 2023-07-18 2023-08-15 深圳须弥云图空间科技有限公司 Method and device for executing complex tasks by using large language model
CN116737909A (en) * 2023-07-28 2023-09-12 无锡容智技术有限公司 Table data processing method based on natural language dialogue
CN116737909B (en) * 2023-07-28 2024-04-23 无锡容智技术有限公司 Table data processing method based on natural language dialogue
CN116703337A (en) * 2023-08-08 2023-09-05 金现代信息产业股份有限公司 Project document examination system and method based on artificial intelligence technology
CN116910105A (en) * 2023-09-12 2023-10-20 成都瑞华康源科技有限公司 Medical information query system and method based on pre-training large model
JP7441366B1 (en) 2023-09-19 2024-02-29 株式会社東芝 Information processing device, information processing method, and computer program
CN116955674B (en) * 2023-09-20 2024-01-09 杭州悦数科技有限公司 Method and web device for generating graph database statement through LLM
CN116955674A (en) * 2023-09-20 2023-10-27 杭州悦数科技有限公司 Method and web device for generating graph database statement through LLM
CN116991985B (en) * 2023-09-28 2023-12-19 宏景科技股份有限公司 Real-time information response method and system based on generated pre-training model
CN117009492B (en) * 2023-09-28 2024-01-09 之江实验室 Graph query method and system based on local knowledge base and natural language big model
CN117009492A (en) * 2023-09-28 2023-11-07 之江实验室 Graph query method and system based on local knowledge base and natural language big model
CN116991985A (en) * 2023-09-28 2023-11-03 宏景科技股份有限公司 Real-time information response method and system based on generated pre-training model
CN117112847B (en) * 2023-10-20 2024-02-06 杭州悦数科技有限公司 Data generation method and device of graph database based on community model
CN117112847A (en) * 2023-10-20 2023-11-24 杭州悦数科技有限公司 Data generation method and device of graph database based on community model
CN117453717A (en) * 2023-11-06 2024-01-26 星环信息科技(上海)股份有限公司 Data query statement generation method, device, equipment and storage medium
CN117251473A (en) * 2023-11-20 2023-12-19 摩斯智联科技有限公司 Vehicle data query analysis method, system, device and storage medium
CN117251473B (en) * 2023-11-20 2024-03-15 摩斯智联科技有限公司 Vehicle data query analysis method, system, device and storage medium

Also Published As

Publication number Publication date
CN116340584B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN116340584B (en) Implementation method for automatically generating complex graph database query statement service
CN109933602B (en) Method and device for converting natural language and structured query language
CN109933653A (en) Question and answer querying method, system and the computer equipment of question answering system
US10579835B1 (en) Semantic pre-processing of natural language input in a virtual personal assistant
CN116701431A (en) Data retrieval method and system based on large language model
US20190129695A1 (en) Programming by voice
CN113254619A (en) Automatic reply method and device for user query and electronic equipment
CN116629235B (en) Large-scale pre-training language model fine tuning method and device, electronic equipment and medium
CN114168619B (en) Training method and device of language conversion model
CN112016296B (en) Sentence vector generation method, sentence vector generation device, sentence vector generation equipment and sentence vector storage medium
US11599769B2 (en) Question and answer matching method, system and storage medium
CN116881470A (en) Method and device for generating question-answer pairs
CN117077792B (en) Knowledge graph-based method and device for generating prompt data
CN117112608A (en) Antlr 4-based database statement conversion method and device
CN115757469A (en) Data generation method, electronic device and storage medium for text-to-SQL tasks
CN115964465A (en) Intelligent question and answer method and device and electronic equipment
Chabierski et al. Machine Comprehension of Text Using Combinatory Categorial Grammar and Answer Set Programs.
CN117289905B (en) Application software development method and device, storage medium and electronic equipment
CN106682221B (en) Question-answer interaction response method and device and question-answer system
Nie et al. Graph neural net-based user simulator
CN117217191A (en) Prompt processing method, device, equipment and storage medium of language model
US11669681B2 (en) Automated calculation predictions with explanations
CN117520492A (en) Intelligent question-answering optimization method, device and application based on document
CN117891831A (en) Substation operation and maintenance intelligent question-answering method based on large model, related method and device
KR20230102382A (en) Method of Augmenting Training Data Set in Natural Language Processing System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant