CN112507139A - Knowledge graph-based question-answering method, system, equipment and storage medium

Knowledge graph-based question-answering method, system, equipment and storage medium

Info

Publication number
CN112507139A
Authority
CN
China
Prior art keywords
entity
information
question
reference entity
stack
Prior art date
Legal status
Granted
Application number
CN202011586544.2A
Other languages
Chinese (zh)
Other versions
CN112507139B (en)
Inventor
陈晓东
马帅
陈华庚
莫小君
赵梅玲
邹凯
王强
徐�明
Current Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Original Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen ZNV Technology Co Ltd, Nanjing ZNV Software Co Ltd filed Critical Shenzhen ZNV Technology Co Ltd
Priority to CN202011586544.2A priority Critical patent/CN112507139B/en
Publication of CN112507139A publication Critical patent/CN112507139A/en
Application granted granted Critical
Publication of CN112507139B publication Critical patent/CN112507139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/205 - Parsing
    • G06F40/211 - Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/253 - Grammatical analysis; Style critique
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/284 - Lexical analysis, e.g. tokenisation or collocates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/289 - Phrasal analysis, e.g. finite state techniques or chunking
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a knowledge-graph-based question-answering method, system, equipment, and storage medium. The method comprises the following steps: receiving a question sentence input by a user and performing word segmentation and extraction on it to obtain question information; if one piece of the question information is judged to correspondingly match one piece of any reference triple information in the knowledge graph, adding that reference triple information to a preset attention stack; assembling a query statement from the reference triple information in the attention stack and the question information; and querying the knowledge graph with the query statement to obtain an answer, which is returned to the user. The invention can return answers according to the user's question sentences and ensures a higher accuracy of multi-turn question answering.

Description

Knowledge graph-based question-answering method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of knowledge graphs, in particular to a question-answering method, a question-answering system, question-answering equipment and a storage medium based on knowledge graphs.
Background
Knowledge graph technology describes the concepts, entities, and relations of the objective world in a structured way, expresses internet information in a form closer to human cognition, and provides better capabilities for organizing, managing, and understanding massive amounts of internet information. A knowledge graph relates concepts, entities, and their relationships, where an entity is a thing in the objective world and a concept is a generalization and abstraction of things sharing the same attributes. A knowledge graph is typically represented in triple form, i.e., G = (E, R, S), where E = {e1, e2, e3, ..., en} is the set of entities in the knowledge base, containing |E| different entities; R = {r1, r2, ..., rn} is the set of relations in the knowledge base, containing |R| different relations; and S ⊆ E × R × E is the set of triples in the knowledge base. Knowledge graphs, together with big data and deep learning, have become one of the core driving forces behind the development of the internet and artificial intelligence.
Natural language understanding, commonly associated with human-machine dialogue, is a branch of artificial intelligence. It studies how to use computers to simulate the process of human language communication, so that computers can understand and use natural human languages such as Chinese and English. The goal is natural-language communication between humans and machines, allowing computers to take over part of human mental labor, including querying data, answering questions, retrieving documents, compiling materials, and all other processing of information related to natural language. Intelligent question-answering robots developed by combining knowledge graph technology with natural language understanding are widely used in many fields, such as Tmall customer service and shopping-guide robots in shopping malls.
At present, task-oriented multi-turn dialogue systems built on knowledge graphs and natural language understanding techniques are generally used to solve problems in a specific field, such as consulting about a technology, a service, or a product. During a dialogue, the questioner usually asks in the manner of everyday conversation; after several exchanges, abbreviated questions often appear, pronouns are used, and pronouns are frequently even omitted, so the system needs to work out what the user's intention is. For example, suppose a multi-turn dialogue mentions entities from three fields: entity A has attributes A1 and A2, entity B has attributes B1, B2, and B3, and entity C has attributes C1 and C2. Three turns have been completed, and in the fourth turn the user's sentence does not mention any words related to the three entities; instead it parses into a bare predicate, that predicate relates to attribute A1 of entity A, and the A1 attribute of entity A should be fed back to the questioner. If the multi-turn dialogue fails to solve this attention-management problem, the system will not know that the current focus of the fourth turn is the A1 attribute of entity A. For this reason, some intelligent customer-service robots often cannot give the user a correct answer, because they do not know what the user's abbreviation (a bare predicate or a bare subject) refers to; the answer may leave the user confused, or the system may simply list a set of questions for the user to re-select and re-enter. Therefore, task-oriented multi-turn dialogue systems built on knowledge graphs and natural language understanding currently cannot provide correct answers when users ask in abbreviated form, which leads to a poor human-computer interaction experience.
Disclosure of Invention
The embodiments of the present application aim to solve the problem that task-oriented multi-turn dialogue systems built on knowledge graphs and natural language understanding cannot provide correct answers when users ask in abbreviated form, by providing a knowledge-graph-based question-answering method, system, equipment, and storage medium.
The embodiment of the application provides a question-answering method based on a knowledge graph, which comprises the following steps:
receiving a question sentence input by a user, and performing word segmentation extraction on the question sentence to obtain question information; the questioning information comprises an entity and an entity relation;
if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph, the reference triple information is added to a preset attention stack; the triple information comprises a first reference entity, a reference entity relation and a second reference entity;
assembling query sentences according to the reference triple information in the attention stack and the question information;
and querying the knowledge graph by using the query sentence to obtain a question answer, and returning the question answer to the user.
In an embodiment, the determining that one of the questioning information and one of the reference triplet information in the knowledge-graph correspondingly match includes:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity matches the first reference entity or the second reference entity; or,
if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship matches the reference entity relationship.
In an embodiment, the adding the reference triplet information to a preset attention stack includes:
performing pop operation on the historical information in the attention stack; the historical information comprises a first historical reference entity, a reference entity relation and a second historical reference entity;
and if the entity or the entity relationship is correspondingly the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
In an embodiment, after the popping operation is performed on the history information in the attention stack, the method includes:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is not the same as the second reference entity, the historical information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack.
In an embodiment, after the popping operation is performed on the history information in the attention stack, the method further includes:
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship and the entity is different from the second reference entity, the first reference entity and the reference entity relationship are added to the attention stack, the historical information is added to the attention stack, and the second reference entity is added to the attention stack.
In an embodiment, after the popping operation is performed on the history information in the attention stack, the method further includes:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
In an embodiment, the assembling a query statement according to the reference triplet information in the attention stack and the question information includes:
if the sentence formed by assembling the entity and the entity relationship is an incomplete subject-predicate structure sentence, acquiring from the attention stack a first reference entity or reference entity relationship whose stacking time is less than a preset time and which correspondingly matches the entity or the entity relationship;
and performing statement assembly according to the entity or entity relationship and the first reference entity or reference entity relationship whose stacking time is less than the preset time and which matches it, to obtain the query statement.
In addition, in order to achieve the above object, the present invention further provides a knowledge-graph-based question-answering system, including:
the information extraction module is used for receiving the question sentences input by the user and performing word segmentation extraction on the question sentences to obtain question information; the questioning information comprises an entity and an entity relation;
the information adding module is used for adding the reference triple information to a preset attention stack if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph; the triple information comprises a first reference entity, a reference entity relation and a second reference entity;
the statement assembly module is used for assembling query statements according to the reference triple information in the attention stack and the question information;
and the answer query module is used for querying the knowledge graph by adopting the query statement to obtain a question answer and returning the question answer to the user.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor, and a knowledge-graph-based question-answering program which is stored in the memory and can run on the processor, wherein the knowledge-graph-based question-answering program, when executed by the processor, implements the steps of the above knowledge-graph-based question-answering method.
In addition, to achieve the above object, the present invention also provides a storage medium having a knowledge-graph-based question-answering program stored thereon, which, when executed by a processor, implements the steps of the above knowledge-graph-based question-answering method.
The technical scheme of the question-answering method, system, equipment and storage medium based on the knowledge graph provided by the embodiment of the application at least has the following technical effects or advantages:
the technical scheme includes that the method includes the steps that a question sentence input by a user is received, word segmentation extraction is conducted on the question sentence, question information is obtained, if it is judged that one piece of information in the question information is correspondingly matched with one piece of information in any one reference triple information in a knowledge graph, the reference triple information is added to a preset attention stack, a query sentence is assembled according to the reference triple information in the attention stack and the question information, the knowledge graph is queried through the query sentence, a question answer is obtained, and the question answer is returned to the user. According to the invention, the entity and entity relationship analyzed in the user question sentences are filtered by using the knowledge graph, and then the stack entering and exiting operation of the attention stack is carried out, so that the high-efficiency utilization rate of the entity and entity relationship in the attention stack can be ensured, the user question answers are returned according to the user question sentences, and the accuracy of multi-turn question answers can be higher.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of a knowledge-graph based question-answering method according to the present invention;
FIG. 3 is a schematic flow chart of a second embodiment of the knowledge-graph based question-answering method according to the present invention;
FIG. 4 is a schematic flow chart of a third embodiment of a knowledge-graph based question-answering method according to the present invention;
FIG. 5 is a functional block diagram of the knowledge-graph based question-answering system of the present invention.
Detailed Description
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that fig. 1 is a schematic structural diagram of a hardware operating environment of the terminal device.
As shown in fig. 1, the terminal device may include: a processor 1001 such as a CPU, a memory 1005, a user interface 1003, a network interface 1004, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal device may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors may include, for example, a light sensor and a motion sensor. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor, which can turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when the terminal is stationary; it can be used for applications that recognize the attitude of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as pedometers and tapping). Of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
Those skilled in the art will appreciate that the terminal device configuration shown in fig. 1 does not constitute a limitation of the terminal device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a knowledge-graph-based question-answering program. The operating system is a program that manages and controls the hardware and software resources of the terminal device and supports the running of the knowledge-graph-based question-answering program and other software or programs.
In the terminal device shown in fig. 1, the user interface 1003 is mainly used for connecting to a terminal and performing data communication with the terminal; the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; and the processor 1001 may be used to invoke the knowledge-graph-based question-answering program stored in the memory 1005.
In this embodiment, the terminal device includes: a memory 1005, a processor 1001, and a knowledge-graph based question-answering program stored on the memory and executable on the processor, wherein:
when the processor 1001 calls the knowledge-graph based question-answering program stored in the memory 1005, the following operations are performed:
receiving a question sentence input by a user, and performing word segmentation extraction on the question sentence to obtain question information; the questioning information comprises an entity and an entity relation;
if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph, the reference triple information is added to a preset attention stack; the triple information comprises a first reference entity, a reference entity relation and a second reference entity;
assembling query sentences according to the reference triple information in the attention stack and the question information;
and querying the knowledge graph by using the query sentence to obtain a question answer, and returning the question answer to the user.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity matches the first reference entity or the second reference entity; or,
if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship matches the reference entity relationship.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
performing pop operation on the historical information in the attention stack; the historical information comprises a first historical reference entity, a reference entity relation and a second historical reference entity;
and if the entity or the entity relationship is correspondingly the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is not the same as the second reference entity, the historical information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship and the entity is different from the second reference entity, the first reference entity and the reference entity relationship are added to the attention stack, the historical information is added to the attention stack, and the second reference entity is added to the attention stack.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
When the processor 1001 calls the knowledge-graph-based question-answering program stored in the memory 1005, the following operations are also performed:
if the sentence formed by assembling the entity and the entity relationship is an incomplete subject-predicate structure sentence, acquiring from the attention stack a first reference entity or reference entity relationship whose stacking time is less than a preset time and which correspondingly matches the entity or the entity relationship;
and performing statement assembly according to the entity or entity relationship and the first reference entity or reference entity relationship whose stacking time is less than the preset time and which matches it, to obtain the query statement.
Embodiments of the present invention provide embodiments of a knowledge-graph-based question-answering method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that shown here. The knowledge-graph-based question-answering method is applied to human-machine question answering, such as Tmall customer service, shopping-guide robots in shopping malls, and so forth.
As shown in fig. 2, in a first embodiment of the present application, the method for question answering based on knowledge-graph of the present application includes the following steps:
step 210: and receiving a question sentence input by a user, and performing word segmentation extraction on the question sentence to obtain question information.
In this embodiment, the question sentence may be an interrogative sentence and specifically may be text information input by the user; for example, the question entered by the user in a browser might be "How do I pan-fry a filet steak?". It may also be natural speech input by the user, for example when the user interacts with an intelligent robot and asks "What will the weather be like tomorrow?". This embodiment does not specifically limit the form or input manner of the question sentence. The question information includes entities and entity relationships. Specifically, after a question sentence input by the user is received, the question sentence is analyzed using natural language processing (NLP) technology to obtain the word segments it contains, and entities and entity relationships are then extracted from these segments to obtain the question information contained in the question sentence.
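As a concrete illustration only, the following Python sketch shows one very simple way such word segmentation and entity/entity-relationship extraction could be performed with dictionary lookups; the function names and dictionaries are hypothetical, and the patent does not prescribe a particular NLP toolkit.

```python
# Hypothetical dictionary-based sketch of step 210: segment the question
# sentence and pull out the entity and the entity relationship.

# Assumed domain dictionaries (not part of the patent text).
ENTITY_DICT = {"filet steak", "ribeye steak"}
RELATION_DICT = {"pan-fry", "frying method"}

def segment(sentence: str) -> list[str]:
    # Stand-in for a real word segmenter / NLP pipeline.
    return sentence.lower().replace("?", "").split()

def extract_question_info(sentence: str) -> dict:
    tokens = segment(sentence)
    text = " ".join(tokens)
    entity = next((e for e in ENTITY_DICT if e in text), None)
    relation = next((r for r in RELATION_DICT if r in text), None)
    # "Question information" = the extracted entity and entity relationship.
    return {"entity": entity, "relation": relation}

print(extract_question_info("How do I pan-fry a filet steak?"))
# -> {'entity': 'filet steak', 'relation': 'pan-fry'}
```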
Step 220: and if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph, adding the reference triple information to a preset attention stack.
In this embodiment, the knowledge graph is preset, and different types of entities and entity relationships are stored in it. Reference entities and reference entity relationships with the same attributes are grouped into a type, and each reference entity and reference entity relationship has a type label. The reference entity label set is T = {T1, T2, T3, ..., Tn}, and the reference entity relationship label set is R = {K1, K2, K3, ..., Km}. Within the domain knowledge, each reference entity and reference entity relationship has a corresponding label Tn or Km. Each reference entity and reference entity relationship also has a number of attributes, which are key-value pairs of the form {attribute NAME, attribute value TEXT}. Details are shown in Tables 1 and 2.
Table 1  Reference entity labels and attribute structure

Reference entity | Reference entity label | Attribute name | Attribute value
---------------- | ---------------------- | -------------- | ---------------
E1 | T8 | NAME1 | TEXT1
E2 | T2 | NAME2 | TEXT2
E3 | T2 | NAME2 | TEXT3
E3 | T2 | NAME3 | TEXT4
E5 | T3 | NAME5 | TEXT5
Table 2  Reference entity relationship labels and attribute structure

Reference entity relationship | Reference entity relationship label | Attribute name | Attribute value
----------------------------- | ----------------------------------- | -------------- | ---------------
R1 | K5 | NAME1 | TEXT1
R2 | K3 | NAME2 | TEXT2
R3 | K3 | NAME3 | TEXT3
R4 | K2 | NAME3 | TEXT4
R5 | K3 | NAME4 | TEXT5
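A minimal sketch, assuming a simple in-memory representation, of how the labeled reference entities, reference entity relationships, attribute key-value pairs, and triples of Tables 1 and 2 might be modeled; the dataclass and field names are illustrative and not prescribed by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceEntity:
    name: str                       # e.g. "E1"
    label: str                      # type label, e.g. "T8"
    attributes: dict = field(default_factory=dict)  # {attribute NAME: attribute value TEXT}

@dataclass
class ReferenceRelation:
    name: str                       # e.g. "R1"
    label: str                      # type label, e.g. "K5"
    attributes: dict = field(default_factory=dict)

@dataclass
class Triple:
    subject: ReferenceEntity        # first reference entity
    relation: ReferenceRelation     # reference entity relationship
    obj: ReferenceEntity            # second reference entity

# Example rows taken from Tables 1 and 2.
e1 = ReferenceEntity("E1", "T8", {"NAME1": "TEXT1"})
r1 = ReferenceRelation("R1", "K5", {"NAME1": "TEXT1"})
e2 = ReferenceEntity("E2", "T2", {"NAME2": "TEXT2"})
knowledge_graph = [Triple(e1, r1, e2)]
```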
Specifically, reference entities and reference entity relationships are stored in the knowledge graph as triple information. A reference entity may be a first reference entity or a second reference entity, and the triple information consists of a first reference entity, a reference entity relationship, and a second reference entity; a first reference entity can also serve as a second reference entity, and vice versa. Triple information is represented as (first reference entity, reference entity relationship, second reference entity), where the first reference entity is the subject entity of the sentence, the second reference entity is the object entity, and the reference entity relationship expresses the relationship between the first reference entity and the second reference entity. After the entity and entity relationship contained in the current question sentence are extracted, the first reference entity and reference entity relationship corresponding to them can be looked up in the knowledge graph; the first reference entity label can be obtained from the first reference entity, the reference entity relationship label from the reference entity relationship, the corresponding second reference entity from the first reference entity, and the second reference entity label from the second reference entity.
Further, the entity or entity relationship in the question sentence is compared with the corresponding first reference entity, second reference entity, or reference entity relationship in the knowledge graph. If the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, it is determined that the entity matches the first reference entity or the second reference entity; or, if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, it is determined that the entity relationship matches the reference entity relationship. The entity matching the first or second reference entity means that the entity is the same as, similar to, or associated with that reference entity; likewise, the entity relationship matching the reference entity relationship means that the two are the same, similar, or associated. Whether the entity matches the first or second reference entity can be judged by text-similarity comparison, for example by comparing the specific text of the entity relationship with the specific text of the reference entity relationship; other manners may also be adopted, and this embodiment imposes no particular limitation. When the entity matches the first reference entity or the second reference entity, or the entity relationship matches the reference entity relationship, the first reference entity and the reference entity relationship corresponding to the entity and the entity relationship are added to the attention stack, together with the first reference entity label, the reference entity relationship label, and the second reference entity label corresponding to the first reference entity. The attention stack is used to store the first reference entities and reference entity relationships in the knowledge graph that correspond to the entities and entity relationships in different question sentences, as shown in Table 3. In Table 3 the first reference entity is also called the subject entity and the second reference entity the object entity, and several different groups of reference triple information are stored; for example, reading from top to bottom, E5 through E3 are each a group of triple information.
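The threshold-based matching described above might look like the following sketch; the character-overlap similarity measure and the threshold value are placeholder assumptions, since the patent leaves the concrete similarity computation open.

```python
def similarity(a: str, b: str) -> float:
    # Placeholder text similarity: character-set overlap (Jaccard).
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

THRESHOLD = 0.6  # preset similarity threshold (assumed value)

def matches(question_item: str, reference_item: str) -> bool:
    # An entity matches a first/second reference entity, or an entity
    # relationship matches a reference entity relationship, when the
    # similarity reaches the preset threshold.
    return similarity(question_item, reference_item) >= THRESHOLD
```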
TABLE 3 attention Stack Structure
(Table 3 is reproduced as an image in the original publication; it shows the rows of the attention stack, each holding a group of reference triple information consisting of a subject entity (first reference entity), a reference entity relationship, and an object entity (second reference entity), together with their labels.)
It should be noted that information in the attention stack is last-in, first-out. The number of rows in the attention stack is set to a fixed, fairly small length; once this length is exceeded, records are cleared from the tail of the stack. Each time the stack is pushed, if the total number of groups of triple information exceeds the configured number of rows, the oldest record is automatically deleted from the tail of the attention stack. During human-machine question answering, if the user's question remains unchanged for longer than a preset dwell time, for example when a user asks an intelligent robot a question and then asks nothing further within the dwell time, all the triple information stored in the attention stack is cleared. Likewise, if the user closes the chat window after communicating with a shop's customer-service robot, the same operation of clearing all triple information from the attention stack is performed.
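A minimal sketch of the attention-stack behavior just described (last-in-first-out access, a fixed small depth with deletion of the oldest record, and clearing after a preset dwell time); the class name, default depth, and dwell time are assumptions for illustration only.

```python
import time

class AttentionStack:
    def __init__(self, max_rows: int = 8, dwell_seconds: float = 300.0):
        self._rows = []              # each row: a group of reference triple information
        self.max_rows = max_rows     # fixed, fairly small stack depth
        self.dwell_seconds = dwell_seconds
        self._last_push = time.time()

    def push(self, row: tuple) -> None:
        # Clear everything if the user has been silent longer than the dwell time.
        if time.time() - self._last_push > self.dwell_seconds:
            self.clear()
        self._rows.append(row)       # top of stack = end of list
        if len(self._rows) > self.max_rows:
            self._rows.pop(0)        # drop the oldest record from the tail of the stack
        self._last_push = time.time()

    def pop(self):
        return self._rows.pop() if self._rows else None   # last in, first out

    def clear(self) -> None:
        self._rows.clear()
```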
Step 230: and assembling a query statement according to the reference triple information in the attention stack and the question information.
In this embodiment, after the first reference entity and reference entity relationship corresponding to the entity and entity relationship have been added to the attention stack, sentence assembly is performed according to the entity and entity relationship in the current question sentence and the reference triple information in the attention stack that matches them, thereby obtaining the query statement used to retrieve the answer corresponding to the current question sentence.
Step 240: and querying the knowledge graph by using the query sentence to obtain a question answer, and returning the question answer to the user.
In this embodiment, the knowledge graph is queried with the query statement corresponding to the current question sentence, the answer corresponding to the query statement is obtained from the knowledge graph, and the answer is returned to the user. If the query statement falls outside the query range of the knowledge graph, a guiding answer is returned to the user instead, which either guides the user to input a question within the query range of the knowledge graph or declines to answer the user's question directly.
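Purely as an illustration, the sketch below runs an assembled (subject entity, relationship) query against an in-memory list of triples and falls back to a guiding answer when the query falls outside the knowledge graph's range; a production system would more likely issue the query to a graph database, which the patent does not specify.

```python
def query_knowledge_graph(triples, subject: str, relation: str) -> str:
    # triples: iterable of (subject, relation, object) strings.
    for s, r, o in triples:
        if s == subject and r == relation:
            return o                 # the question answer (object entity)
    # Outside the query range of the knowledge graph: return a guiding answer.
    return ("Sorry, I can only answer questions within my knowledge range. "
            "Please try rephrasing your question.")

kg = [("filet steak", "frying method", "Sear 2 minutes per side over high heat.")]
print(query_knowledge_graph(kg, "filet steak", "frying method"))
```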
According to the technical scheme, this embodiment adopts the following technical means: the question sentence input by the user is received and word segmentation and extraction are performed on it to obtain the question information; if one piece of the question information is judged to correspondingly match one piece of any reference triple information in the knowledge graph, that reference triple information is added to a preset attention stack; a query statement is assembled from the reference triple information in the attention stack and the question information; the knowledge graph is queried with the query statement to obtain the answer; and the answer is returned to the user. Because the entities and entity relationships parsed from the user's question sentences are filtered through the knowledge graph before the push and pop operations on the attention stack are performed, the entities and entity relationships held in the attention stack are used efficiently, answers are returned according to the user's question sentences, and the accuracy of multi-turn question answering is higher.
As shown in fig. 3, in the second embodiment of the present application, the step S220 of adding the reference triplet information to the preset attention stack includes the following steps:
step 211: and performing pop operation on the historical information in the attention stack.
Specifically, the history information is the triple information that was most recently added to the attention stack; it includes a historical first reference entity, reference entity relationship, and second reference entity. After the entity and entity relationship in the current question sentence are acquired and it is judged that one piece of the question information correspondingly matches one piece of some reference triple information in the knowledge graph, a pop operation is performed on the history information in the attention stack, i.e., the history information is removed from the attention stack.
Step 212: and if the entity or the entity relationship is correspondingly the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
Specifically, if the entity in the current question sentence is the same as the first reference entity queried in the knowledge graph, and the entity in the current question sentence is also the same as the second reference entity corresponding to the first reference entity queried in the knowledge graph, the first reference entity, the reference entity relationship corresponding to the first reference entity, and the second reference entity corresponding to the first reference entity are directly stacked. Or, if the entity relationship in the current question sentence is the same as the reference entity relationship queried in the knowledge graph, and the entity relationship in the current question sentence is also the same as the second reference entity corresponding to the reference entity relationship queried in the knowledge graph, directly stacking the first reference entity corresponding to the reference entity relationship, the reference entity relationship and the second reference entity corresponding to the first reference entity.
Step 213: if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is not the same as the second reference entity, the historical information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack.
Specifically, if the entity in the current question sentence is different from the first reference entity queried in the knowledge graph, and the entity in the current question sentence is different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, the previously removed historical information is stacked, and then the first reference entity, the reference entity relationship corresponding to the first reference entity, and the second reference entity corresponding to the first reference entity are directly stacked. Or if the entity relationship in the current question sentence is not the same as the reference entity relationship queried in the knowledge graph, and the entity relationship in the current question sentence is not the same as the second reference entity corresponding to the reference entity relationship queried in the knowledge graph, stacking the previously removed historical information, and then stacking the first reference entity corresponding to the reference entity relationship, the reference entity relationship and the second reference entity corresponding to the first reference entity.
Step 214: if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship and the entity is different from the second reference entity, the first reference entity and the reference entity relationship are added to the attention stack, the historical information is added to the attention stack, and the second reference entity is added to the attention stack.
Specifically, if the entity in the current question sentence is the same as the first reference entity queried in the knowledge graph, and the entity in the current question sentence is different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, the first reference entity and its reference entity relationship are stacked, the previously removed history information is stacked, and then the second reference entity corresponding to the first reference entity is stacked. Or, if the entity relationship in the current question sentence is the same as the reference entity relationship queried in the knowledge graph, and the entity in the current question sentence is different from the second reference entity corresponding to the first reference entity queried in the knowledge graph, the first reference entity corresponding to the reference entity relationship and the reference entity relationship are stacked, the previously removed history information is stacked, and then the second reference entity corresponding to the first reference entity is stacked.
Step 215: if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
Specifically, if the entity in the current question sentence is different from the first reference entity queried in the knowledge graph, and the entity in the current question sentence is the same as the second reference entity corresponding to the first reference entity queried in the knowledge graph, the second reference entity corresponding to the first reference entity is stacked, the previously removed history information is stacked, and then the first reference entity and its reference entity relationship are stacked. Or, if the entity relationship in the current question sentence is different from the reference entity relationship queried in the knowledge graph, and the entity in the current question sentence is the same as the second reference entity corresponding to the reference entity relationship queried in the knowledge graph, the second reference entity corresponding to the first reference entity is stacked, the previously removed history information is stacked, and then the first reference entity corresponding to the reference entity relationship and the reference entity relationship are stacked.
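For illustration, the four push orderings of steps 211 through 215 can be summarized in a Python sketch such as the one below. Here `stack` is assumed to be an attention-stack object offering `push` and `pop`, `new_triple` is the (first reference entity, reference entity relationship, second reference entity) group matched in the knowledge graph, the equality tests stand in for the comparisons described above, and the partial pushes simply mirror the ordering described in steps 214 and 215; all names are illustrative rather than taken from the patent.

```python
def update_attention_stack(stack, question_entity, question_relation, new_triple):
    """new_triple = (first_ref_entity, ref_relation, second_ref_entity) matched in the KG."""
    first, rel, second = new_triple
    history = stack.pop()                      # step 211: pop the most recent history record

    same_head = question_entity == first or question_relation == rel
    same_second = question_entity == second

    if same_head and same_second:              # step 212
        stack.push((first, rel, second))
    elif not same_head and not same_second:    # step 213
        if history is not None:
            stack.push(history)
        stack.push((first, rel, second))
    elif same_head and not same_second:        # step 214
        stack.push((first, rel))
        if history is not None:
            stack.push(history)
        stack.push((second,))
    else:                                      # step 215: not same_head and same_second
        stack.push((second,))
        if history is not None:
            stack.push(history)
        stack.push((first, rel))
```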
As shown in fig. 4, in the third embodiment of the present application, step S230 includes the steps of:
step S231: and if the sentence formed after the entity and the entity relation are assembled is an incomplete main predicate structure sentence, acquiring a first reference entity or reference entity relation, which has the stacking time less than the preset time and is correspondingly matched with the entity or the entity relation, from the attention stack.
In this embodiment, after the entity and entity relationship are extracted from the current question sentence, sentence assembly is performed on them to obtain a sentence composed of the entity and the entity relationship, and it is then judged whether this sentence is a complete subject-predicate structure sentence. If it is, the current question sentence is itself a complete subject-predicate sentence, i.e., it has both a subject and a predicate, the entity being the subject and the entity relationship the predicate, so the sentence composed of the entity and the entity relationship is used directly to query the knowledge graph and obtain the answer to the current question. For example, the user asks the intelligent robot for the first time: "How do I pan-fry a filet steak?". "Filet steak" is extracted as the entity (subject) and "pan-fry" as the entity relationship (predicate); they match "filet steak" (steak entity label) and "frying method" (preparation relationship label) in the knowledge graph, and these are added to the attention stack. The sentence formed by the entity and entity relationship is "filet steak (subject entity) - frying method (predicate entity relationship) ->"; this is used to execute a knowledge-graph query, which yields the frying method (object entity) for filet steak. The frying method of filet steak is returned to the user, and the frying method of filet steak (object entity) is added to the attention stack.
If the sentence formed by the entity and entity relationship is judged to be an incomplete subject-predicate structure sentence, then the current question sentence is itself incomplete, i.e., it is an abbreviated sentence containing only a subject or only a predicate, meaning that one of the entity and the entity relationship could not be extracted. If the current question sentence contains only a subject, the subject is treated as the entity to be processed, and a first reference entity whose stacking time is less than the preset time and which matches the entity is obtained from the attention stack. Or, if the current question sentence contains only a predicate, the predicate is treated as the entity relationship to be processed, and a reference entity relationship whose stacking time is less than the preset time and which matches the entity relationship is obtained from the attention stack. A stacking time less than the preset time denotes the most recent push, closest to the current moment; the first reference entity or reference entity relationship whose stacking time is less than the preset time is therefore the most recently pushed first reference entity or reference entity relationship.
Step S232: and performing statement assembly according to the entity or entity relationship, the stacking time is less than preset time, and a first reference entity or reference entity relationship matched with the entity or the entity relationship to obtain the query statement.
In this embodiment, if the current question sentence contains only a subject, the subject is treated as the entity, the most recently pushed first reference entity that matches the entity is looked up in the attention stack, and the attention stack then continues to be popped. The reference entity label of the currently popped first reference entity is recorded as the first label, and the label of the most recently pushed first reference entity that matches the entity is recorded as the second label; if the first label is consistent with the second label, statement assembly is performed using the reference entity relationship corresponding to the currently popped first reference entity together with the entity in the current question sentence, to obtain the query statement. For example, the user asks the intelligent robot a second time: "What about ribeye steak?". This question is judged to be an incomplete subject-predicate sentence and matches the entity "ribeye steak" in the knowledge graph. The reference triple information about filet steak added in the previous round of dialogue is taken from the attention stack; since ribeye steak and filet steak carry the same label (steak entity label), ribeye steak replaces filet steak and is combined with the frying method into a complete query statement, with which the knowledge graph is queried. The query statement is "ribeye steak (subject entity) - frying method (predicate relationship) ->"; the frying method (object entity) for ribeye steak is obtained, returned to the user, and added to the attention stack. Or, if the current question sentence contains only a predicate, the predicate is treated as the entity relationship, the most recently pushed reference entity relationship that matches it is looked up in the attention stack, and the attention stack then continues to be popped. The label of the currently popped reference entity relationship is recorded as the third label, and the label of the most recently pushed matching reference entity relationship is recorded as the fourth label; if the third label is consistent with the fourth label, statement assembly is performed using the first reference entity corresponding to the currently popped reference entity relationship together with the entity relationship in the current question sentence, to obtain the query statement.
If neither of the query statements assembled by the two methods above can retrieve an answer, a query statement is assembled using the currently popped second reference entity together with the entity or entity relationship in the current question sentence. Query statements continue to be assembled in these three ways until an answer can be retrieved through a query statement or the bottom of the attention stack has been traversed, after which the popped reference triple information is pushed back in its previous order.
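Under the same illustrative assumptions, the following sketch shows how an abbreviated, subject-only question might be completed from the attention stack using the label check described above; the predicate-only case is symmetric, and the fallback to the currently popped second reference entity is omitted for brevity.

```python
def assemble_query_for_subject_only(stack_rows, new_entity, new_entity_label):
    """stack_rows: list of dicts like
       {"subject": ..., "subject_label": ..., "relation": ..., "object": ...},
       ordered from stack bottom to stack top."""
    # Walk from the most recent push (top of stack) downwards.
    for row in reversed(stack_rows):
        # Label check: the popped first reference entity's label must match
        # the label of the entity in the current question sentence.
        if row["subject_label"] == new_entity_label:
            # Replace the stacked subject with the new entity and reuse its relationship.
            return (new_entity, row["relation"])
    return None   # nothing usable in the stack; fall back to other strategies

rows = [{"subject": "filet steak", "subject_label": "steak",
         "relation": "frying method", "object": "..."}]
print(assemble_query_for_subject_only(rows, "ribeye steak", "steak"))
# -> ('ribeye steak', 'frying method')
```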
According to the technical scheme, if the sentence formed by assembling the entity and the entity relationship is an incomplete subject-predicate structure sentence, the first reference entity or reference entity relationship whose stacking time is less than the preset time and which matches the entity or entity relationship is obtained from the attention stack, and statement assembly is performed with it and the entity or entity relationship to obtain the query statement.
As shown in fig. 5, the present application provides a knowledge-graph-based question-answering system, which includes:
the information extraction module 310 is configured to receive a question sentence input by a user, perform word segmentation extraction on the question sentence, and obtain question information; the questioning information comprises an entity and an entity relation;
an information adding module 320, configured to add the reference triple information to a preset attention stack if it is determined that one piece of the question information correspondingly matches one piece of any reference triple information in the knowledge graph; the reference triple information comprises a first reference entity, a reference entity relationship, and a second reference entity;
a statement assembling module 330, configured to assemble a query statement according to the reference triplet information in the attention stack and the question information;
and the answer query module 340 is configured to query the knowledge graph by using the query statement to obtain a question answer, and return the question answer to the user.
Further, in terms of determining that one piece of the question information correspondingly matches one piece of some reference triple information in the knowledge graph, the information adding module 320 is specifically configured to determine that the entity matches the first reference entity or the second reference entity if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold; or to determine that the entity relationship matches the reference entity relationship if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold.
Further, for adding the reference triplet information to the preset attention stack, the information adding module 320 includes:
a pop management unit, configured to pop the historical information from the attention stack; the historical information comprises a first historical reference entity, a reference entity relationship, and a second historical reference entity;
a stack pushing management unit, configured to add the first reference entity, the reference entity relationship, and the second reference entity to the attention stack if the entity or the entity relationship is correspondingly the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity.
Further, the stack pushing management unit is further configured to, if the entity or the entity relationship is not correspondingly the same as the first reference entity or the reference entity relationship and the entity is not the same as the second reference entity, first add the historical information to the attention stack and then add the first reference entity, the reference entity relationship, and the second reference entity to the attention stack.
Further, the stack pushing management unit is configured to, if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship and the entity is different from the second reference entity, add the first reference entity and the reference entity relationship to the attention stack, then add the historical information to the attention stack, and then add the second reference entity to the attention stack.
Further, the stack pushing management unit is configured to, if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity, add the second reference entity to the attention stack, then add the historical information to the attention stack, and then add the first reference entity and the reference entity relationship to the attention stack.
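The four push-ordering rules above can be condensed into a single branch, sketched below. The function name, the flat element-by-element stack layout, and the choice to spread the historical information into its three components are all assumptions; the text does not specify how the historical information is laid out on the stack.

```python
def push_after_pop(stack, history, new_triple, entity, relation):
    """Interleave the popped historical information with the new reference triple,
    following the four rules described above."""
    first_ref, ref_relation, second_ref = new_triple
    head_match = entity == first_ref or relation == ref_relation   # matches the "front" of the triple
    tail_match = entity == second_ref                              # matches the second reference entity

    if head_match and tail_match:
        # Rule 1: only the new triple is pushed (the historical information is not re-pushed here).
        stack.extend([first_ref, ref_relation, second_ref])
    elif not head_match and not tail_match:
        # Rule 2: history first, then the whole new triple.
        stack.extend([*history, first_ref, ref_relation, second_ref])
    elif head_match and not tail_match:
        # Rule 3: front of the new triple, then history, then the second reference entity.
        stack.extend([first_ref, ref_relation, *history, second_ref])
    else:
        # Rule 4: second reference entity, then history, then the front of the new triple.
        stack.extend([second_ref, *history, first_ref, ref_relation])
    return stack
```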
Further, the statement assembling module 330 includes:
a statement judging unit, configured to, if the statement formed by assembling the entity and the entity relationship is an incomplete subject-predicate structure statement, obtain from the attention stack a first reference entity or a reference entity relationship whose stacking time is shorter than a preset time and which correspondingly matches the entity or the entity relationship;
and a statement construction unit, configured to assemble the query statement from the entity or the entity relationship together with the first reference entity or reference entity relationship whose stacking time is shorter than the preset time and which correspondingly matches the entity or the entity relationship.
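As a final illustrative sketch, the fragment below completes an elliptical question from the attention stack. The (kind, value, push_time) item layout, the `matches` similarity test, and `build_query` are assumptions introduced for the example, and "stacking time shorter than the preset time" is read here as "pushed recently enough".

```python
import time

def complete_query(entity, relation, attention_stack, preset_time, matches, build_query):
    """Borrow a recent, matching first reference entity or reference entity relationship
    from the attention stack when the question is an incomplete subject-predicate statement."""
    now = time.time()
    for kind, value, push_time in reversed(attention_stack):        # newest items first
        if now - push_time >= preset_time:
            continue                                                # stacking time too old; skip
        if kind not in ("first_reference_entity", "reference_entity_relationship"):
            continue
        if matches(value, entity, relation):
            # Assemble the query statement from the question information plus the borrowed item.
            return build_query(entity, relation, value)
    return None
```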
The specific implementation of the knowledge-graph-based question-answering system of the invention is basically the same as that of the above knowledge-graph-based question-answering method, and is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A question-answering method based on a knowledge graph is characterized by comprising the following steps:
receiving a question sentence input by a user, and performing word segmentation extraction on the question sentence to obtain question information; the questioning information comprises an entity and an entity relation;
if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph, the reference triple information is added to a preset attention stack; the triple information comprises a first reference entity, a reference entity relation and a second reference entity;
assembling query sentences according to the reference triple information in the attention stack and the question information;
and querying the knowledge graph by using the query sentence to obtain a question answer, and returning the question answer to the user.
2. The method of claim 1, wherein the determining that one of the questioning information and one of any reference triplet information in the knowledge-graph correspondingly match comprises:
if the similarity between the entity and the first reference entity or the second reference entity reaches a preset threshold, determining that the entity matches the first reference entity or the second reference entity; or,
and if the similarity between the entity relationship and the reference entity relationship reaches the preset threshold, determining that the entity relationship is matched with the reference entity relationship.
3. The method of claim 2, wherein the adding the reference triplet information to a preset attention stack comprises:
performing pop operation on the historical information in the attention stack; the historical information comprises a first historical reference entity, a reference entity relation and a second historical reference entity;
and if the entity or the entity relationship is correspondingly the same as the first reference entity or the reference entity relationship and the entity is the same as the second reference entity, adding the first reference entity, the reference entity relationship and the second reference entity to the attention stack.
4. The method of claim 3, wherein after the popping the historical information in the attention stack, further comprising:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is not the same as the second reference entity, the historical information is added to the attention stack, and then the first reference entity, the reference entity relationship and the second reference entity are added to the attention stack.
5. The method of claim 3, wherein after the popping the historical information in the attention stack, further comprising:
if the entity or the entity relationship is the same as the first reference entity or the reference entity relationship and the entity is different from the second reference entity, the first reference entity and the reference entity relationship are added to the attention stack, the historical information is added to the attention stack, and the second reference entity is added to the attention stack.
6. The method of claim 3, wherein after the popping the historical information in the attention stack, further comprising:
if the entity or the entity relationship is not the same as the first reference entity or the reference entity relationship, and the entity is the same as the second reference entity, the second reference entity is added to the attention stack, the history information is added to the attention stack, and then the first reference entity and the reference entity relationship are added to the attention stack.
7. The method of claim 1, wherein said assembling a query statement from the reference triplet information in the attention stack and the question information comprises:
if the statement formed by assembling the entity and the entity relationship is an incomplete subject-predicate structure statement, acquiring, from the attention stack, a first reference entity or a reference entity relationship whose stacking time is less than a preset time and which correspondingly matches the entity or the entity relationship;
and assembling the query statement from the entity or entity relationship and the first reference entity or reference entity relationship whose stacking time is less than the preset time and which matches the entity or the entity relationship.
8. A knowledge-graph-based question-answering system, comprising:
the information extraction module is used for receiving the question sentences input by the user and performing word segmentation extraction on the question sentences to obtain question information; the questioning information comprises an entity and an entity relation;
the information adding module is used for adding the reference triple information to a preset attention stack if one piece of information in the questioning information is judged to be correspondingly matched with one piece of information in any reference triple information in the knowledge graph; the triple information comprises a first reference entity, a reference entity relation and a second reference entity;
the statement assembly module is used for assembling query statements according to the reference triple information in the attention stack and the question information;
and the answer query module is used for querying the knowledge graph by adopting the query statement to obtain a question answer and returning the question answer to the user.
9. A terminal device, comprising: a memory, a processor, and a knowledge-graph based question-answering program stored on the memory and executable on the processor, the knowledge-graph based question-answering program when executed by the processor implementing the steps of the knowledge-graph based question-answering method according to any one of claims 1 to 7.
10. A storage medium having stored thereon a knowledge-graph based question-answering program which, when executed by a processor, implements the steps of the knowledge-graph based question-answering method of any one of claims 1 to 7.
CN202011586544.2A 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium Active CN112507139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011586544.2A CN112507139B (en) 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011586544.2A CN112507139B (en) 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112507139A true CN112507139A (en) 2021-03-16
CN112507139B CN112507139B (en) 2024-03-12

Family

ID=74951746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011586544.2A Active CN112507139B (en) 2020-12-28 2020-12-28 Knowledge graph-based question and answer method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112507139B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004410A (en) * 2014-10-01 2017-08-01 西布雷恩公司 Voice and connecting platform
US20170103069A1 (en) * 2015-10-13 2017-04-13 International Business Machines Corporation Supplementing candidate answers
CN109308321A (en) * 2018-11-27 2019-02-05 烟台中科网络技术研究所 A kind of knowledge question answering method, knowledge Q-A system and computer readable storage medium
JP2020191009A (en) * 2019-05-23 2020-11-26 本田技研工業株式会社 Knowledge graph complementing device and knowledge graph complementing method
CN111428055A (en) * 2020-04-20 2020-07-17 神思电子技术股份有限公司 Industry-oriented context omission question-answering method
CN111598252A (en) * 2020-04-30 2020-08-28 西安理工大学 University computer basic knowledge problem solving method based on deep learning
CN111506722A (en) * 2020-06-16 2020-08-07 平安科技(深圳)有限公司 Knowledge graph question-answering method, device and equipment based on deep learning technology
CN112100351A (en) * 2020-09-11 2020-12-18 陕西师范大学 Method and equipment for constructing intelligent question-answering system through question generation data set

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONGZHI ZHANG et al.: "Multi-view multitask learning for knowledge base relation detection", Knowledge-Based Systems, 30 November 2019 (2019-11-30), pages 1-10 *
徐彤阳 et al.: "基于多源数据的档案知识问答服务研究" (Research on archival knowledge question-answering services based on multi-source data), 《档案管理》 (Archives Management), 15 November 2020 (2020-11-15), pages 44-47 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779220A (en) * 2021-09-13 2021-12-10 内蒙古工业大学 Mongolian multi-hop question-answering method based on three-channel cognitive map and graph attention network
CN113779220B (en) * 2021-09-13 2023-06-23 内蒙古工业大学 Mongolian multi-hop question-answering method based on three-channel cognitive map and graph attention network
CN114610860A (en) * 2022-05-07 2022-06-10 荣耀终端有限公司 Question answering method and system
CN114610860B (en) * 2022-05-07 2022-09-27 荣耀终端有限公司 Question answering method and system
CN116303919A (en) * 2022-11-30 2023-06-23 荣耀终端有限公司 Question and answer method and system

Also Published As

Publication number Publication date
CN112507139B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
US11379529B2 (en) Composing rich content messages
CN107220352B (en) Method and device for constructing comment map based on artificial intelligence
JP6894534B2 (en) Information processing method and terminal, computer storage medium
US11899681B2 (en) Knowledge graph building method, electronic apparatus and non-transitory computer readable storage medium
RU2701110C2 (en) Studying and using contextual rules of extracting content to eliminate ambiguity of requests
CN112507139B (en) Knowledge graph-based question and answer method, system, equipment and storage medium
CN108664599B (en) Intelligent question-answering method and device, intelligent question-answering server and storage medium
JP6404106B2 (en) Computing device and method for connecting people based on content and relationship distance
US11232134B2 (en) Customized visualization based intelligence augmentation
US20140379719A1 (en) System and method for tagging and searching documents
US10901992B2 (en) System and method for efficiently handling queries
CN114691831A (en) Task-type intelligent automobile fault question-answering system based on knowledge graph
EP3961426A2 (en) Method and apparatus for recommending document, electronic device and medium
CN110765342A (en) Information query method and device, storage medium and intelligent terminal
CN116501960B (en) Content retrieval method, device, equipment and medium
CN111400473A (en) Method and device for training intention recognition model, storage medium and electronic equipment
CN112507089A (en) Intelligent question-answering engine based on knowledge graph and implementation method thereof
CN116521841A (en) Method, device, equipment and medium for generating reply information
CN105808688A (en) Complementation retrieval method and device based on artificial intelligence
CN113220854A (en) Intelligent dialogue method and device for machine reading understanding
EP2778982A1 (en) Attribute detection
CN109829033A (en) Method for exhibiting data and terminal device
CN109783612B (en) Report data positioning method and device, storage medium and terminal
CN117149804A (en) Data processing method, device, electronic equipment and storage medium
CN109033082B (en) Learning training method and device of semantic model and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant