CN112507139B - Knowledge graph-based question and answer method, system, equipment and storage medium - Google Patents


Info

Publication number
CN112507139B
Authority
CN
China
Prior art keywords
entity
reference entity
relationship
stack
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011586544.2A
Other languages
Chinese (zh)
Other versions
CN112507139A (en)
Inventor
陈晓东
马帅
陈华庚
莫小君
赵梅玲
邹凯
王强
徐�明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Original Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZNV Technology Co Ltd, Nanjing ZNV Software Co Ltd filed Critical Shenzhen ZNV Technology Co Ltd
Priority to CN202011586544.2A priority Critical patent/CN112507139B/en
Publication of CN112507139A publication Critical patent/CN112507139A/en
Application granted granted Critical
Publication of CN112507139B publication Critical patent/CN112507139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/253Grammatical analysis; Style critique
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a knowledge-graph-based question-answering method, system, equipment and storage medium. The method comprises the following steps: receiving a question sentence input by a user and performing word segmentation on the question sentence to obtain question information; if one item of the question information matches one item of any reference triple information in the knowledge graph, adding that reference triple information to a preset attention stack; assembling a query statement from the reference triple information in the attention stack and the question information; and querying the knowledge graph with the query statement to obtain an answer, which is returned to the user. The invention can return answers to the user based on the user's question sentences and ensures higher accuracy over multiple rounds of question answering.

Description

Knowledge graph-based question and answer method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of knowledge graphs, in particular to a question answering method, a question answering system, question answering equipment and a storage medium based on knowledge graphs.
Background
Knowledge graph technology describes concepts and entities in the objective world, and the relationships between them, in a structured form, expressing Internet information in a way that is closer to how humans perceive the world, and thereby provides a better capability to organize, manage and understand the Internet's massive information. A knowledge graph involves concepts, entities and their relationships, where an entity is a thing in the objective world and a concept is a generalization and abstraction of things sharing the same attributes. A knowledge graph is typically represented in triple form, i.e. G = (E, R, S), where E = {e1, e2, e3, …, en} is the set of entities in the knowledge base, containing |E| different entities; R = {r1, r2, …, rn} is the set of relationships in the knowledge base, containing |R| different relationships; and S ⊆ E × R × E represents the set of triples in the knowledge base. Knowledge graphs, together with big data and deep learning, have become one of the core driving forces behind the development of the Internet and artificial intelligence.
Natural language understanding, commonly known as man-machine dialogue, is a branch of artificial intelligence. It studies how a computer can simulate the human language interaction process, so that the computer can understand and use the natural languages of human society, such as Chinese and English, to achieve natural-language communication between humans and machines and take over part of human mental labor, including querying data, answering questions, excerpting documents, compiling material and all other processing of natural-language information. Intelligent question-answering robots developed by combining knowledge graph technology with natural language understanding are widely used in many fields, for example Tmall customer service and shopping-guide robots in shopping malls.
At present, task-oriented multi-round dialogue systems built on knowledge graphs and natural language understanding are generally used to answer questions in a specific domain, such as consultations about a particular technology, service or product. During a conversation, a questioner usually asks according to everyday conversational habits: after several exchanges, many questions become abbreviated, use referring words, or even omit them entirely, and the system must still work out what the user's intention is. For example, suppose a multi-round dialogue mentions three entities in the domain: entity A with attributes A1 and A2, entity B with attributes B1, B2 and B3, and entity C with attributes C1 and C2, over three rounds of dialogue. In the fourth round, the user does not mention any word related to the three previous entities; the fourth utterance parses into a very simple predicate whose relation points to attribute A1 of entity A, so the A1 attribute of entity A should be fed back to the questioner. If the multi-round dialogue cannot solve this attention-management problem, the system does not know that the current focus of the fourth-round user is the A1 attribute of entity A. For instance, some intelligent customer-service robots often fail to give the user a correct answer because they do not know what the user's abbreviated question (containing only a predicate or only a subject) refers to; the answer may leave the user bewildered, or the robot may list a set of questions and ask the user to choose and re-enter one. Therefore, current task-oriented multi-round dialogue systems built on knowledge graphs and natural language understanding cannot give correct answers based on a user's abbreviated questions, resulting in a poor human-computer interaction experience.
Disclosure of Invention
The embodiments of the present application aim to solve the problem that task-oriented multi-round dialogue systems built on knowledge graphs and natural language understanding cannot give correct answers based on a user's abbreviated questions, by providing a knowledge-graph-based question-answering method, system, equipment and storage medium.
The embodiment of the application provides a question-answering method based on a knowledge graph, which comprises the following steps:
receiving a question sentence input by a user, and extracting word segmentation from the question sentence to obtain question information; the questioning information comprises an entity and an entity relationship;
if one piece of information in the questioning information is correspondingly matched with one piece of information in any reference triplet information in the knowledge graph, adding the reference triplet information to a preset attention stack; the triplet information comprises a first reference entity, a reference entity relationship and a second reference entity;
assembling a query statement according to the reference triplet information and the question information in the attention stack;
and inquiring the knowledge graph by adopting the inquiry statement to obtain a questioning answer, and returning the questioning answer to the user.
In an embodiment, the determining that one of the question information is correspondingly matched with one of any reference triples in the knowledge graph includes:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity is matched with the first reference entity or the second reference entity; or,
and if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship is matched with the reference entity relationship.
In an embodiment, the adding the reference triplet information to a preset attention stack includes:
carrying out a pop operation on the history information in the attention stack; the history information comprises a first reference entity, a reference entity relation and a second reference entity of the history;
and if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relation and the second reference entity to the attention stack.
In an embodiment, after the popping operation of the history information in the attention stack, the method includes:
and if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, adding the history information to the attention stack first, and then adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
In an embodiment, after the popping operation is performed on the history information in the attention stack, the method further includes:
and if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and is different from the second reference entity, adding the first reference entity and the reference entity relation to the attention stack, adding the history information to the attention stack, and then adding the second reference entity to the attention stack.
In an embodiment, after the popping operation is performed on the history information in the attention stack, the method further includes:
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
In an embodiment, the assembling the query sentence according to the reference triplet information and the question information in the attention stack includes:
if the statement assembled from the entity and the entity relationship is an incomplete subject-predicate statement, acquiring from the attention stack a first reference entity or reference entity relationship whose push time is within a preset time and which matches the entity or the entity relationship;
and assembling a statement from the entity or entity relationship and the matching first reference entity or reference entity relationship whose push time is within the preset time, so as to obtain the query statement.
In addition, in order to achieve the above object, the present invention further provides a question-answering system based on a knowledge graph, including:
the information extraction module is used for receiving a question sentence input by a user, and extracting the word segmentation of the question sentence to obtain question information; the questioning information comprises an entity and an entity relationship;
the information adding module is used for adding one piece of reference triplet information in the knowledge graph to a preset attention stack if judging that the one piece of information in the question information is correspondingly matched with one piece of information in any reference triplet information in the knowledge graph; the triplet information comprises a first reference entity, a reference entity relationship and a second reference entity;
The statement assembling module is used for assembling the query statement according to the reference triplet information and the question information in the attention stack;
and the answer inquiry module is used for inquiring the knowledge graph by adopting the inquiry statement to obtain a questioning answer and returning the questioning answer to the user.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor, and a knowledge-graph-based question-answering program stored in the memory and executable on the processor, wherein the knowledge-graph-based question-answering program, when executed by the processor, implements the steps of the knowledge-graph-based question-answering method described above.
In addition, in order to achieve the above object, the present invention also provides a storage medium having stored thereon a knowledge-graph-based question-answering program which, when executed by a processor, implements the steps of the knowledge-graph-based question-answering method described above.
The technical scheme of the question answering method, the question answering system, the question answering equipment and the storage medium based on the knowledge graph provided by the embodiment of the application has at least the following technical effects or advantages:
according to the technical scheme, the question sentences input by the user are received, word segmentation is carried out on the question sentences to obtain question information, if one piece of information in the question information is judged to be matched with one piece of information in any reference triplet information in the knowledge graph, the reference triplet information is added to a preset attention stack, the query sentences are assembled according to the reference triplet information in the attention stack and the question information, the query sentences are used for querying the knowledge graph to obtain question answers, and the question answers are returned to the user. According to the invention, the knowledge graph is used for filtering the entity and entity relation analyzed in the user question sentence and then carrying out the stack-entering and stack-exiting operation of the attention stack, so that the high-efficiency utilization rate of the entity and entity relation in the attention stack can be ensured, the question answer is returned to the user according to the user question sentence, and the accuracy of multiple rounds of question answers is higher.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a first embodiment of the knowledge-graph-based question-answering method according to the present invention;
FIG. 3 is a schematic flowchart of a second embodiment of the knowledge-graph-based question-answering method according to the present invention;
FIG. 4 is a schematic flowchart of a third embodiment of the knowledge-graph-based question-answering method according to the present invention;
FIG. 5 is a functional block diagram of the knowledge-graph-based question-answering system of the present invention.
Detailed Description
In order that the above-described aspects may be better understood, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware running environment according to an embodiment of the present invention.
It should be noted that fig. 1 may be a schematic structural diagram of a hardware operating environment of a terminal device.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, memory 1005, user interface 1003, network interface 1004, communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the terminal device may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. Among other sensors, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that may turn off the display screen and/or the backlight when the mobile terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and the direction when the mobile terminal is stationary, and the mobile terminal can be used for recognizing the gesture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 1 is not limiting to a terminal device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module and a knowledge-graph-based question-answering program. The operating system is a program that manages and controls the hardware and software resources of the terminal device and supports the running of the knowledge-graph-based question-answering program and other software or programs.
In the terminal device shown in fig. 1, the user interface 1003 is mainly used to connect to a terminal and exchange data with it; the network interface 1004 is mainly used to connect to a background server and exchange data with it; and the processor 1001 may be used to invoke the knowledge-graph-based question-answering program stored in the memory 1005.
In this embodiment, the terminal device includes: a memory 1005, a processor 1001, and a knowledge-graph-based question-answering program stored on the memory and executable on the processor, wherein:
when the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are performed:
Receiving a question sentence input by a user, and extracting word segmentation from the question sentence to obtain question information; the questioning information comprises an entity and an entity relationship;
if one piece of information in the questioning information is correspondingly matched with one piece of information in any reference triplet information in the knowledge graph, adding the reference triplet information to a preset attention stack; the triplet information comprises a first reference entity, a reference entity relationship and a second reference entity;
assembling a query statement according to the reference triplet information and the question information in the attention stack;
and inquiring the knowledge graph by adopting the inquiry statement to obtain a questioning answer, and returning the questioning answer to the user.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity is matched with the first reference entity or the second reference entity; or,
and if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship is matched with the reference entity relationship.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
carrying out a pop operation on the history information in the attention stack; the history information comprises a first reference entity, a reference entity relation and a second reference entity of the history;
and if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relation and the second reference entity to the attention stack.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
and if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, adding the history information to the attention stack first, and then adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
And if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and is different from the second reference entity, adding the first reference entity and the reference entity relation to the attention stack, adding the history information to the attention stack, and then adding the second reference entity to the attention stack.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
When the processor 1001 invokes the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the statement assembled from the entity and the entity relationship is an incomplete subject-predicate statement, acquiring from the attention stack a first reference entity or reference entity relationship whose push time is within a preset time and which matches the entity or the entity relationship;
And assembling a statement from the entity or entity relationship and the matching first reference entity or reference entity relationship whose push time is within the preset time, so as to obtain the query statement.
The embodiments of the present invention provide a knowledge-graph-based question-answering method applied to human-machine question answering, for example Tmall customer service and shopping-guide robots in shopping malls. Although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that given here.
As shown in fig. 2, in a first embodiment of the present application, the knowledge-graph-based question-answering method of the present application includes the following steps:
step 210: and receiving a question sentence input by a user, and extracting the word segmentation of the question sentence to obtain question information.
In this embodiment, the question sentence may be an interrogative sentence. It may be text entered by the user, for example a question typed into a browser: "How do I pan-fry filet steak?"; it may also be natural speech, for example a question put to an intelligent robot during interaction: "How is the weather today?". This embodiment does not limit the form of the question sentence or the way it is input. The question information includes an entity and an entity relationship. Specifically, after the question sentence input by the user is received, natural language processing (NLP) technology is used to parse the question sentence into word segments, and the entity and the entity relationship are extracted from the resulting segments to obtain the question information contained in the question sentence.
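The patent does not prescribe a particular segmentation toolkit or extraction model. Purely as an illustration, the following Python sketch shows one minimal way such extraction could work, by longest-match lookup of the question text against domain lexicons of entity and relationship surface forms; the lexicons, the function name and the example sentences are assumptions introduced here, not part of the patent.

```python
# Illustrative sketch only (not the patent's implementation): map a question
# sentence to an (entity, entity relationship) pair by longest-match lookup
# against hypothetical domain lexicons. A real system would use an NLP
# segmenter / NER model instead of plain substring matching.
ENTITY_LEXICON = {"filet steak", "ribeye steak"}      # hypothetical entity surface forms
RELATION_LEXICON = {"pan-fry", "price", "origin"}     # hypothetical relationship surface forms

def extract_question_info(question: str):
    """Return (entity, relation); either may be None for an abbreviated question."""
    text = question.lower()
    entity = next((e for e in sorted(ENTITY_LEXICON, key=len, reverse=True) if e in text), None)
    relation = next((r for r in sorted(RELATION_LEXICON, key=len, reverse=True) if r in text), None)
    return entity, relation

if __name__ == "__main__":
    print(extract_question_info("How do I pan-fry filet steak?"))   # ('filet steak', 'pan-fry')
    print(extract_question_info("What about ribeye steak?"))        # ('ribeye steak', None)
```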
Step 220: and if one piece of information in the question information is correspondingly matched with one piece of information in any reference triplet information in the knowledge graph, adding the reference triplet information to a preset attention stack.
In this embodiment, the knowledge graph is preset, and different types of entities and entity relationships are stored in it; for ease of distinction, the entities and entity relationships stored in the knowledge graph are called reference entities and reference entity relationships, respectively. Reference entities and reference entity relationships with the same attributes are classified into one type, and each reference entity and reference entity relationship has a type tag. The set of reference entity tags is T = {T1, T2, T3, …, Tn}, and the set of reference entity relationship tags is R = {K1, K2, K3, …, Km}. Within the domain knowledge, each reference entity and reference entity relationship has a corresponding tag Tn or Km. Each reference entity and reference entity relationship also has a number of attributes in the form of key-value pairs: attribute NAME and attribute value TEXT, as shown in Tables 1 and 2.
Table 1 Reference entity tags and attribute structure
Reference entity | Reference entity tag | Attribute name | Attribute value
E1 | T8 | NAME1 | TEXT1
E2 | T2 | NAME2 | TEXT2
E3 | T2 | NAME2 | TEXT3
E3 | T2 | NAME3 | TEXT4
E5 | T3 | NAME5 | TEXT5
Table 2 Reference entity relationship tags and attribute structure
Reference entity relationship | Reference entity relationship tag | Attribute name | Attribute value
R1 | K5 | NAME1 | TEXT1
R2 | K3 | NAME2 | TEXT2
R3 | K3 | NAME3 | TEXT3
R4 | K2 | NAME3 | TEXT4
R5 | K3 | NAME4 | TEXT5
Specifically, reference entities and reference entity relationships are stored in the knowledge graph in the form of triple information. The reference entities include first reference entities and second reference entities, and a piece of triple information comprises a first reference entity, a reference entity relationship and a second reference entity; a first reference entity may also serve as a second reference entity, and vice versa. Triple information is expressed as (first reference entity, reference entity relationship, second reference entity), where the first reference entity is the subject entity of a sentence, the second reference entity is the object entity, and the reference entity relationship expresses the relationship between them. After the entity and the entity relationship contained in the current question sentence are extracted, the corresponding first reference entity and reference entity relationship can be looked up in the knowledge graph; the first reference entity tag can be obtained from the first reference entity, the reference entity relationship tag from the reference entity relationship, the second reference entity corresponding to the first reference entity from the first reference entity, and the second reference entity tag from the second reference entity.
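For illustration only, the reference triple and tag structure described above can be pictured with the following sketch; the dataclass names, tag values and sample attributes are assumptions made here and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceEntity:
    name: str
    tag: str                                            # type tag Tn from the entity tag set T
    attributes: dict = field(default_factory=dict)      # attribute NAME -> attribute value TEXT

@dataclass
class ReferenceTriple:
    subject: ReferenceEntity                            # first reference entity (subject entity)
    relation: str                                       # reference entity relationship
    relation_tag: str                                   # type tag Km from the relationship tag set R
    obj: ReferenceEntity                                # second reference entity (object entity)

# Hypothetical example triple: (filet steak, pan-fry method, <pan-fry steps>)
triple = ReferenceTriple(
    subject=ReferenceEntity("filet steak", "T_steak", {"NAME1": "TEXT1"}),
    relation="pan-fry method",
    relation_tag="K_preparation",
    obj=ReferenceEntity("filet steak pan-fry steps", "T_method"),
)
```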
Further, the entity or entity relationship in the question sentence is compared with the first reference entity, the second reference entity or the reference entity relationship in the knowledge graph. If the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, the entity is determined to match the first reference entity or the second reference entity; or, if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, the entity relationship is determined to match the reference entity relationship. Here, an entity matching a first or second reference entity means that the entity is the same as, similar to or associated with that reference entity, and an entity relationship matching a reference entity relationship likewise means that it is the same as, similar to or associated with that reference entity relationship. Whether the similarity reaches the preset threshold may be judged by comparing the corresponding texts, for example the specific text of the entity relationship against the specific text of the reference entity relationship; other comparison methods may also be used, and this embodiment is not limited in this respect. When an entity matches a first or second reference entity, or an entity relationship matches a reference entity relationship, the first reference entity and the reference entity relationship corresponding to the entity and the entity relationship are added to the attention stack, and the first reference entity tag, the reference entity relationship tag and the second reference entity tag corresponding to the first reference entity are added to the attention stack at the same time. The attention stack stores the first reference entities and reference entity relationships in the knowledge graph that correspond to the entities and entity relationships in different question sentences, as shown in Table 3. The first reference entity in Table 3 is also called the subject entity and the second reference entity the object entity; Table 3 stores several different groups of reference triple information, for example the groups listed from top to bottom (E5 through E3).
TABLE 3 attention stack structure
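The patent leaves the concrete similarity measure open. As a minimal sketch, the snippet below uses Python's standard-library difflib as a stand-in string-similarity score together with a preset threshold to make the matching decision described above; the threshold value and function names are assumptions.

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8   # the "preset threshold"; the value chosen here is an assumption

def text_similarity(a: str, b: str) -> float:
    """Stand-in string similarity; the patent does not fix the concrete measure."""
    return SequenceMatcher(None, a, b).ratio()

def matches(question_term: str, reference_term: str) -> bool:
    """True if the question entity/relationship is judged to match the reference term."""
    return text_similarity(question_term, reference_term) >= SIMILARITY_THRESHOLD

# e.g. matches("ribeye steak", "ribeye steak")  -> True
#      matches("ribeye", "filet steak")         -> False
```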
It should be noted that the attention stack is last-in first-out, and its number of rows is set to a fixed (fairly small) length; once this length is exceeded, records are cleared from the tail of the attention stack. At every push, if the total number of triples exceeds the set stack depth, records are automatically deleted from the tail of the attention stack. During human-machine question answering, the contents of the attention stack are retained only within a preset dwell time: if the user who is talking with the intelligent robot does not ask another question within the preset dwell time, all triple information stored in the attention stack is cleared. Likewise, after the user finishes talking with the mall robot customer service and closes the chat window, all triple information stored in the attention stack is cleared.
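A minimal sketch of the attention stack behaviour just described: last-in first-out, a fixed maximum depth with eviction from the tail, and clearing after a dwell timeout or when the session is closed. The class name, default depth and timeout values are illustrative assumptions.

```python
import time

class AttentionStack:
    """LIFO stack of reference triples with a fixed depth and a dwell timeout (sketch)."""

    def __init__(self, max_depth: int = 8, dwell_seconds: float = 300.0):
        self._items = []                      # top of the stack is the end of the list
        self.max_depth = max_depth
        self.dwell_seconds = dwell_seconds
        self._last_push = time.monotonic()

    def push(self, triple) -> None:
        if time.monotonic() - self._last_push > self.dwell_seconds:
            self._items.clear()               # user silent beyond the dwell time: clear everything
        self._items.append(triple)
        if len(self._items) > self.max_depth:
            del self._items[0]                # evict from the tail (oldest record)
        self._last_push = time.monotonic()

    def pop(self):
        return self._items.pop() if self._items else None

    def clear(self) -> None:                  # e.g. when the chat window is closed
        self._items.clear()
```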
Step 230: and assembling a query statement according to the reference triplet information and the questioning information in the attention stack.
In this embodiment, after the first reference entity and the reference entity relationship corresponding to the entity and the entity relationship are added to the attention stack, a statement is assembled from the entity and entity relationship in the current question sentence and the reference triple information in the attention stack that matches them, yielding a query statement used to look up the answer corresponding to the current question sentence.
Step 240: and inquiring the knowledge graph by adopting the inquiry statement to obtain a questioning answer, and returning the questioning answer to the user.
In this embodiment, the query statement corresponding to the current question sentence is used to query the knowledge graph, the answer corresponding to the query statement is obtained from the knowledge graph, and the answer is returned to the user. When the query statement falls outside the query scope of the knowledge graph, a guiding answer is returned to the client, guiding the user to enter a question within the scope of the knowledge graph, or the user's question is not answered.
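The patent does not name a concrete graph store or query language. The sketch below simply treats the knowledge graph as an in-memory mapping from (subject entity, relationship) to object entity and returns a guiding answer when the query falls outside the graph's scope; the stored triples and the wording of the fallback message are assumptions.

```python
# Illustrative stand-in for the knowledge graph: a mapping from
# (first reference entity, reference entity relationship) to the second reference entity.
# The triples and answer texts below are invented for illustration.
KNOWLEDGE_GRAPH = {
    ("filet steak", "pan-fry method"): "Sear over high heat for about 2 minutes per side, then rest.",
    ("ribeye steak", "pan-fry method"): "Sear over high heat for about 3 minutes per side, then rest.",
}

def query_answer(subject: str, relation: str) -> str:
    obj = KNOWLEDGE_GRAPH.get((subject, relation))
    if obj is None:
        # the question falls outside the graph's scope: return a guiding answer instead
        return "Sorry, I can only answer questions within this domain. Could you rephrase?"
    return obj
```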
With the above technical solution, a question sentence input by the user is received and segmented to obtain question information; if one item of the question information matches one item of any reference triple information in the knowledge graph, that reference triple information is added to the preset attention stack; a query statement is assembled from the reference triple information in the attention stack and the question information; the knowledge graph is queried with the query statement to obtain the answer; and the answer is returned to the user.
As shown in fig. 3, in a second embodiment of the present application, the adding the reference triplet information to a preset attention stack in step S220 includes the following steps:
step 211: and carrying out a pop operation on the history information in the attention stack.
Specifically, the history information is the triple information most recently added to the attention stack, and comprises a historical first reference entity, reference entity relationship and second reference entity. After the entity and entity relationship in the current question sentence have been obtained, and one item of the question information matches one item of some reference triple information in the knowledge graph, the history information in the attention stack is popped, i.e. removed from the attention stack.
Step 212: and if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relation and the second reference entity to the attention stack.
Specifically, if the entity in the current question sentence is the same as the first reference entity queried in the knowledge graph, and the entity in the current question sentence is also the same as the second reference entity corresponding to that first reference entity, then the first reference entity, its corresponding reference entity relationship and its corresponding second reference entity are pushed directly onto the stack. Or, if the entity relationship in the current question sentence is the same as the reference entity relationship queried in the knowledge graph, and the entity in the current question sentence is the same as the second reference entity corresponding to that reference entity relationship, then the first reference entity corresponding to the reference entity relationship, the reference entity relationship and the second reference entity corresponding to the first reference entity are pushed directly onto the stack.
Step 213: and if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, adding the history information to the attention stack first, and then adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
Specifically, if the entity in the current question sentence is different from the first reference entity queried in the knowledge graph, and the entity in the current question sentence is also different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, stacking the history information removed before, and then directly stacking the first reference entity, the reference entity relationship corresponding to the first reference entity, and the second reference entity corresponding to the first reference entity. Or if the entity relationship in the current question sentence is different from the reference entity relationship queried in the knowledge graph, and the entity relationship in the current question sentence is different from the second reference entity corresponding to the reference entity relationship queried in the knowledge graph, stacking the history information removed before, and stacking the first reference entity corresponding to the reference entity relationship, the reference entity relationship and the second reference entity corresponding to the first reference entity.
Step 214: and if the entity or the entity relation is corresponding to the first reference entity or the reference entity relation and is different from the second reference entity, adding the first reference entity and the reference entity relation to the attention stack, adding the history information to the attention stack, and then adding the second reference entity to the attention stack.
Specifically, if the entity in the current question sentence is the same as the first reference entity queried in the knowledge graph and the entity in the current question sentence is different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, stacking the first reference entity and the reference entity relationship corresponding to the first reference entity, stacking the history information removed before, and stacking the second reference entity corresponding to the first reference entity. Or if the entity relationship in the current question sentence is the same as the reference entity relationship queried in the knowledge graph and the entity in the current question sentence is different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, stacking the first reference entity and the reference entity relationship corresponding to the reference entity relationship, stacking the history information removed before, and stacking the second reference entity corresponding to the first reference entity.
Step 215: if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
Specifically, if the entity in the current question sentence is different from the first reference entity queried in the knowledge graph and the entity in the current question sentence is the same as the second reference entity corresponding to the first reference entity queried in the knowledge graph, stacking the second reference entity corresponding to the first reference entity, stacking the history information removed before, and stacking the first reference entity and the reference entity relationship corresponding to the first reference entity. Or if the entity relationship in the current question sentence is different from the reference entity relationship queried in the knowledge graph, and the entity in the current question sentence is the same as the second reference entity corresponding to the reference entity relationship queried in the knowledge graph, stacking the second reference entity corresponding to the first reference entity, stacking the history information removed before, and stacking the first reference entity corresponding to the reference entity relationship and the reference entity relationship.
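Steps 212 to 215 can be read as a single push routine whose behaviour depends on two tests: whether the question's entity or entity relationship matches the first reference entity or reference entity relationship, and whether the entity matches the second reference entity. The sketch below is one illustrative reading of those four cases, built on the AttentionStack sketch above; it is not the patent's literal implementation.

```python
def restack(stack, history, first_ref, ref_relation, second_ref,
            matches_first_or_relation: bool, matches_second: bool):
    """Re-stack the previously popped history record and the new triple elements
    in the order suggested by steps 212-215 (illustrative sketch).

    `stack` is assumed to expose push(); `history` is the record popped in step 211
    (it may be None on the first dialogue round).
    """
    def push_all(*items):
        for item in items:
            if item is not None:
                stack.push(item)

    if matches_first_or_relation and matches_second:            # step 212
        push_all(first_ref, ref_relation, second_ref)
    elif not matches_first_or_relation and not matches_second:  # step 213
        push_all(history, first_ref, ref_relation, second_ref)
    elif matches_first_or_relation and not matches_second:      # step 214
        push_all(first_ref, ref_relation, history, second_ref)
    else:                                                        # step 215
        push_all(second_ref, history, first_ref, ref_relation)
```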
As shown in fig. 4, in the third embodiment of the present application, step S230 includes the following steps:
Step S231: if the statement assembled from the entity and the entity relationship is an incomplete subject-predicate statement, acquiring from the attention stack a first reference entity or reference entity relationship whose push time is within a preset time and which matches the entity or the entity relationship.
In this embodiment, after the entity and the entity relationship are extracted from the current question sentence, they are assembled into a statement, and it is judged whether that statement is a complete subject-predicate statement. If it is, the current question sentence is itself a complete subject-predicate statement, i.e. it has both a subject and a predicate, the entity being the subject and the entity relationship being the predicate, so the statement assembled from the entity and the entity relationship is used directly to query the knowledge graph and obtain the answer corresponding to the current question sentence. For example, the user first asks the intelligent robot: "How do I pan-fry filet steak?" "Filet steak" is extracted as the entity (subject) and "pan-fry" as the entity relationship (predicate); they are matched in the knowledge graph to filet steak (steak entity tag) and pan-fry method (preparation relationship tag), which are added to the attention stack. The statement assembled from the entity and the entity relationship is "filet steak (subject entity) - pan-fry method (predicate relationship) ->"; this statement is executed against the knowledge graph to obtain the pan-fry method (object entity) of filet steak, the pan-fry method of filet steak is returned to the user, and the pan-fry method (object entity) of filet steak is added to the attention stack.
If the statement assembled from the entity and the entity relationship is judged to be an incomplete subject-predicate statement, the current question sentence is itself an incomplete subject-predicate statement, i.e. an abbreviated sentence containing only a subject or only a predicate, so one of the entity and the entity relationship is missing. If only a subject exists in the current question sentence, the subject is treated as the entity, and a first reference entity whose push time is within the preset time and which matches the entity is obtained from the attention stack. Or, if only a predicate exists in the current question sentence, the predicate is treated as the entity relationship, and a reference entity relationship whose push time is within the preset time and which matches the entity relationship is obtained from the attention stack. A push time within the preset time denotes the push closest to the current moment, so the first reference entity or reference entity relationship whose push time is within the preset time is the one that was pushed most recently.
Step S232: and assembling a statement from the entity or entity relationship and the matching first reference entity or reference entity relationship whose push time is within the preset time, so as to obtain the query statement.
In this embodiment, if only a subject exists in the current question sentence, the subject is taken as the entity, the most recently pushed first reference entity matching it is looked up in the attention stack, and the attention stack then continues to be popped. The first reference entity tag of the currently popped first reference entity is recorded as a first tag, and the first reference entity tag of the most recently pushed first reference entity that matches the entity is recorded as a second tag; if the first tag is consistent with the second tag, the reference entity relationship corresponding to the currently popped first reference entity is assembled with the entity in the current question sentence to obtain the query statement. For example, the user asks the intelligent robot a second question: "What about ribeye steak?" This question sentence is judged to be an incomplete subject-predicate statement; it is matched to the entity ribeye steak in the knowledge graph, the reference triple information about filet steak added in the previous dialogue round is taken from the attention stack, and because ribeye steak and filet steak have the same tag (steak entity tag), ribeye steak replaces filet steak and forms, together with the pan-fry method, a complete query statement for querying the knowledge graph. The query statement is: "ribeye steak (subject entity) - pan-fry method (predicate relationship) ->"; the pan-fry method (object entity) of ribeye steak is obtained, the pan-fry method of ribeye steak is returned to the user, and the pan-fry method (object entity) of ribeye steak is added to the attention stack. Or, if only a predicate exists in the current question sentence, the predicate is taken as the entity relationship, the most recently pushed reference entity relationship matching it is looked up in the attention stack, and the attention stack then continues to be popped. The reference entity relationship tag of the currently popped reference entity relationship is recorded as a third tag, and the reference entity relationship tag of the most recently pushed reference entity relationship that matches the entity relationship is recorded as a fourth tag; if the third tag is consistent with the fourth tag, the first reference entity corresponding to the currently popped reference entity relationship is assembled with the entity relationship in the current question sentence to obtain the query statement.
If the query statements assembled in the above two ways cannot retrieve an answer, the currently popped second reference entity is assembled with the entity or entity relationship in the current question sentence to form the query statement. Query statements continue to be assembled in these three ways until an answer can be retrieved or the bottom of the attention stack has been traversed, after which the popped reference triple information is pushed back in its original stacking order.
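Putting the third embodiment together, the sketch below shows one way an abbreviated (subject-only or predicate-only) question could be completed from the attention stack by comparing tags with the most recently pushed matching record; the record layout, the tag lookup and the names are assumptions for illustration, not the patent's code.

```python
def assemble_query(stack_records, entity=None, relation=None):
    """Complete an abbreviated question from the attention stack (illustrative sketch).

    `stack_records` lists the attention stack content from newest to oldest; each record
    is a dict with keys: subject, subject_tag, relation, relation_tag, object, object_tag.
    Returns a (subject, relation) pair for querying the knowledge graph, or None.
    """
    if entity and relation:                       # already a complete subject-predicate question
        return entity, relation
    for record in stack_records:                  # most recently pushed record first
        if entity and record.get("subject_tag") == tag_of(entity):
            # subject-only question: reuse the relation of an earlier, like-tagged subject
            return entity, record["relation"]
        if relation and record.get("relation_tag") == tag_of(relation):
            # predicate-only question: reuse the earlier subject for this relation type
            return record["subject"], relation
    return None                                   # the question cannot be completed

def tag_of(term: str) -> str:
    """Hypothetical tag lookup against the knowledge graph's tag sets T and R."""
    TAGS = {"ribeye steak": "T_steak", "filet steak": "T_steak", "pan-fry method": "K_preparation"}
    return TAGS.get(term, "")
```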
With the above technical solution, if the statement assembled from the entity and the entity relationship is an incomplete subject-predicate statement, the first reference entity or reference entity relationship whose push time is within the preset time and which matches the entity or entity relationship is obtained from the attention stack, and a query statement is assembled from them. Thus, when the user's current question sentence is abbreviated, the entity and entity relationship in the question are reassembled into a query statement that matches the user's questioning intent, and the answer obtained better matches the user's question.
As shown in fig. 5, a knowledge graph-based question-answering system provided in the present application includes:
the information extraction module 310 is configured to receive a question sentence input by a user, and perform word segmentation extraction on the question sentence to obtain question information; the questioning information comprises an entity and an entity relationship;
the information adding module 320 is configured to add the reference triplet information to a preset attention stack if it is determined that one of the question information is correspondingly matched with one of any reference triplet information in the knowledge graph; the triplet information comprises a first reference entity, a reference entity relationship and a second reference entity;
a sentence assembling module 330, configured to assemble a query sentence according to the reference triplet information and the question information in the attention stack;
and the answer query module 340 is configured to query the knowledge graph by using the query statement to obtain a question answer, and return the question answer to the user.
Further, the information adding module 320 is specifically configured to determine that the entity matches with the first reference entity or the second reference entity if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold in determining that one of the question information matches with one of the information in any reference triplet information in the knowledge graph; or if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship is matched with the reference entity relationship.
Further, the information adding module 320 includes, in adding the reference triplet information to a preset attention stack:
the pop management unit is used for performing pop operation on the history information in the attention stack; the history information comprises a first reference entity, a reference entity relation and a second reference entity of the history;
and the push management unit is used for adding the first reference entity, the reference entity relation and the second reference entity to the attention stack if the entity or the entity relation is the same as the first reference entity or the reference entity relation and the entity is the same as the second reference entity.
Further, the push management unit is further configured to add the history information to the attention stack first and then add the first reference entity, the reference entity relationship, and the second reference entity to the attention stack if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity.
Further, the push management unit is further configured to add the first reference entity and the reference entity relationship to the attention stack, add the history information to the attention stack, and then add the second reference entity to the attention stack if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship, and the entity is different from the second reference entity.
Further, the push management unit is further configured to add the second reference entity to the attention stack, add the history information to the attention stack, and then add the first reference entity and the reference entity relationship to the attention stack if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity.
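The four push orders above can be collected into the following sketch. Representing the stack as a flat Python list, re-adding the popped history in its original order, treating the first case as discarding the history, and the helper same() (the similarity match shown earlier, expected to treat a missing value as not matching) are the editor's reading of the text, not a verified implementation.

    def update_attention_stack(stack, entity, relationship, triple, same):
        """Reorder the attention stack according to the four cases of the push management unit."""
        first_entity, ref_relationship, second_entity = triple

        history = list(stack)   # pop operation: take out the historical entries
        stack.clear()

        head_same = same(entity, first_entity) or same(relationship, ref_relationship)
        tail_same = same(entity, second_entity)

        if head_same and tail_same:
            # Case 1: only the matched reference triple is pushed.
            stack.extend([first_entity, ref_relationship, second_entity])
        elif not head_same and not tail_same:
            # Case 2: history first, then the reference triple.
            stack.extend(history)
            stack.extend([first_entity, ref_relationship, second_entity])
        elif head_same and not tail_same:
            # Case 3: first reference entity and relationship, then history, then second reference entity.
            stack.extend([first_entity, ref_relationship])
            stack.extend(history)
            stack.append(second_entity)
        else:
            # Case 4: second reference entity, then history, then first reference entity and relationship.
            stack.append(second_entity)
            stack.extend(history)
            stack.extend([first_entity, ref_relationship])
        return stack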
Further, the sentence assembling module 330 includes:
a statement judging unit, configured to acquire, from the attention stack, a first reference entity or a reference entity relationship whose push time is less than a preset time and which correspondingly matches the entity or the entity relationship, if the statement formed by assembling the entity and the entity relationship is an incomplete subject-predicate statement;
and a statement construction unit, configured to assemble a statement according to the entity or entity relationship and the first reference entity or reference entity relationship whose push time is less than the preset time and which matches the entity or entity relationship, so as to obtain the query statement.
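A sketch of this completion step follows. The tagging of stack entries with a push timestamp and a kind, the reading of "push time less than a preset time" as "pushed within a recent time window", the 300-second stand-in for that preset time, and the triple-pattern form of the query are all assumptions made for illustration.

    import time

    PRESET_TIME_SECONDS = 300  # assumed stand-in for the preset push-time window

    def assemble_query(entity, relationship, attention_stack, now=None):
        """Complete an incomplete subject-predicate statement from recent attention-stack entries.

        Each stack entry is assumed to be a (value, kind, push_time) tuple,
        where kind is 'entity' or 'relationship'.
        """
        now = time.time() if now is None else now

        if entity and relationship:
            return (entity, relationship, "?answer")  # already a complete statement

        # Most recent entries whose push time falls within the preset window.
        recent = [(value, kind) for value, kind, push_time in reversed(attention_stack)
                  if now - push_time < PRESET_TIME_SECONDS]

        if not entity:
            entity = next((value for value, kind in recent if kind == "entity"), None)
        if not relationship:
            relationship = next((value for value, kind in recent if kind == "relationship"), None)

        return (entity, relationship, "?answer")  # query triple; the unknown slot is the answer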
The specific implementation of the knowledge graph-based question-answering system is basically the same as that of the embodiments of the knowledge graph-based question-answering method described above, and will not be repeated here.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. A knowledge graph-based question-answering method, characterized by comprising the following steps:
receiving a question sentence input by a user, and performing word segmentation and extraction on the question sentence to obtain question information; the question information comprises an entity and an entity relationship;
if it is determined that one piece of information in the question information correspondingly matches one piece of information in any reference triplet information in the knowledge graph, performing a pop operation on history information in a preset attention stack; the reference triplet information comprises a first reference entity, a reference entity relationship and a second reference entity; the history information comprises a historical first reference entity, reference entity relationship and second reference entity;
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack;
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, the history information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack;
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, adding the first reference entity and the reference entity relationship to the attention stack, adding the history information to the attention stack, and adding the second reference entity to the attention stack;
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, adding the second reference entity to the attention stack, adding the history information to the attention stack, and then adding the first reference entity and the reference entity relationship to the attention stack;
if the statement formed by assembling the entity and the entity relationship is an incomplete subject-predicate statement, acquiring, from the attention stack, a first reference entity or a reference entity relationship whose push time is less than a preset time and which correspondingly matches the entity or the entity relationship;
assembling a statement according to the entity or entity relationship and the first reference entity or reference entity relationship whose push time is less than the preset time and which matches the entity or entity relationship, so as to obtain a query statement;
and querying the knowledge graph by using the query statement to obtain a question answer, and returning the question answer to the user.
2. The method of claim 1, wherein the determining that one piece of information in the question information correspondingly matches one piece of information in any reference triplet information in the knowledge graph comprises:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity is matched with the first reference entity or the second reference entity; or,
and if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship is matched with the reference entity relationship.
3. A knowledge graph-based question-answering system, comprising:
the information extraction module is used for receiving a question sentence input by a user, and performing word segmentation and extraction on the question sentence to obtain question information; the question information comprises an entity and an entity relationship;
the information adding module is used for performing a pop operation on history information in a preset attention stack if it is determined that one piece of information in the question information correspondingly matches one piece of information in any reference triplet information in the knowledge graph; the reference triplet information comprises a first reference entity, a reference entity relationship and a second reference entity; the history information comprises a historical first reference entity, reference entity relationship and second reference entity;
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack;
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, the history information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack;
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship, and the entity is different from the second reference entity, adding the first reference entity and the reference entity relationship to the attention stack, adding the history information to the attention stack, and adding the second reference entity to the attention stack;
if the entity or the entity relationship is different from the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, adding the second reference entity to the attention stack, adding the history information to the attention stack, and then adding the first reference entity and the reference entity relationship to the attention stack;
the statement assembly module is used for acquiring, from the attention stack, a first reference entity or a reference entity relationship whose push time is less than a preset time and which correspondingly matches the entity or the entity relationship, if the statement formed by assembling the entity and the entity relationship is an incomplete subject-predicate statement;
and assembling a statement according to the entity or entity relationship and the first reference entity or reference entity relationship whose push time is less than the preset time and which matches the entity or entity relationship, so as to obtain a query statement;
and the answer query module is used for querying the knowledge graph by using the query statement to obtain a question answer and returning the question answer to the user.
4. A terminal device, comprising: a memory, a processor, and a knowledge graph-based question-answering program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the knowledge graph-based question-answering method according to any one of claims 1-2.
5. A storage medium having stored thereon a knowledge graph-based question-answering program which, when executed by a processor, implements the steps of the knowledge graph-based question-answering method according to any one of claims 1-2.
CN202011586544.2A 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium Active CN112507139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011586544.2A CN112507139B (en) 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112507139A CN112507139A (en) 2021-03-16
CN112507139B true CN112507139B (en) 2024-03-12

Family

ID=74951746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011586544.2A Active CN112507139B (en) 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112507139B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779220B (en) * 2021-09-13 2023-06-23 内蒙古工业大学 Mongolian multi-hop question-answering method based on three-channel cognitive map and graph annotating semantic network
CN114610860B (en) * 2022-05-07 2022-09-27 荣耀终端有限公司 Question answering method and system
CN116303919A (en) * 2022-11-30 2023-06-23 荣耀终端有限公司 Question and answer method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248689B2 (en) * 2015-10-13 2019-04-02 International Business Machines Corporation Supplementing candidate answers

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004410A (en) * 2014-10-01 2017-08-01 西布雷恩公司 Voice and connecting platform
CN109308321A (en) * 2018-11-27 2019-02-05 烟台中科网络技术研究所 A kind of knowledge question answering method, knowledge Q-A system and computer readable storage medium
JP2020191009A (en) * 2019-05-23 2020-11-26 本田技研工業株式会社 Knowledge graph complementing device and knowledge graph complementing method
CN111428055A (en) * 2020-04-20 2020-07-17 神思电子技术股份有限公司 Industry-oriented context omission question-answering method
CN111598252A (en) * 2020-04-30 2020-08-28 西安理工大学 University computer basic knowledge problem solving method based on deep learning
CN111506722A (en) * 2020-06-16 2020-08-07 平安科技(深圳)有限公司 Knowledge graph question-answering method, device and equipment based on deep learning technology
CN112100351A (en) * 2020-09-11 2020-12-18 陕西师范大学 Method and equipment for constructing intelligent question-answering system through question generation data set

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-view multitask learning for knowledge base relation detection; Hongzhi Zhang et al.; Knowledge-Based Systems; 20191130; 1-10 *
Research on archival knowledge question-answering services based on multi-source data (基于多源数据的档案知识问答服务研究); 徐彤阳 et al.; Archives Management (《档案管理》); 20201115; 44-47 *

Also Published As

Publication number Publication date
CN112507139A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112507139B (en) Knowledge graph-based question and answer method, system, equipment and storage medium
US20200301954A1 (en) Reply information obtaining method and apparatus
JP6894534B2 (en) Information processing method and terminal, computer storage medium
US9305050B2 (en) Aggregator, filter and delivery system for online context dependent interaction, systems and methods
CN109284399B (en) Similarity prediction model training method and device and computer readable storage medium
CN108664599B (en) Intelligent question-answering method and device, intelligent question-answering server and storage medium
US20200034374A1 (en) Customized visualization based intelligence augmentation
CN110580516B (en) Interaction method and device based on intelligent robot
CN111046147A (en) Question answering method and device and terminal equipment
CN112989208B (en) Information recommendation method and device, electronic equipment and storage medium
EP3961426A2 (en) Method and apparatus for recommending document, electronic device and medium
CN112507089A (en) Intelligent question-answering engine based on knowledge graph and implementation method thereof
CN110362664A (en) A kind of pair of chat robots FAQ knowledge base storage and matched method and device
KR20120047622A (en) System and method for managing digital contents
CN116501960B (en) Content retrieval method, device, equipment and medium
CN113220854A (en) Intelligent dialogue method and device for machine reading understanding
CN116956068A (en) Intention recognition method and device based on rule engine, electronic equipment and medium
CN109783612B (en) Report data positioning method and device, storage medium and terminal
CN110489032B (en) Dictionary query method for electronic book and electronic equipment
CN109033082B (en) Learning training method and device of semantic model and computer readable storage medium
CN115640403A (en) Knowledge management and control method and device based on knowledge graph
US11507593B2 (en) System and method for generating queryeable structured document from an unstructured document using machine learning
CN114546326A (en) Virtual human sign language generation method and system
US20200226160A1 (en) Database for unstructured data
CN112286916A (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant