CN112989002A - Question-answer processing method, device and equipment based on knowledge graph - Google Patents


Info

Publication number
CN112989002A
Authority
CN
China
Prior art keywords
graph
information
query statement
answer
question
Prior art date
Legal status
Granted
Application number
CN202110350525.8A
Other languages
Chinese (zh)
Other versions
CN112989002B (en)
Inventor
王鹏
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110350525.8A
Publication of CN112989002A
Application granted
Publication of CN112989002B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 — Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 — Querying
    • G06F 16/332 — Query formulation
    • G06F 16/3329 — Natural language query formulation or dialogue systems
    • G06F 16/36 — Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 — Ontology
    • G06F 16/38 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification relate to the technical field of artificial intelligence and disclose a knowledge-graph-based question-answer processing method, apparatus, and device. The question-answer processing method includes: receiving question information, expressed in natural language, sent by a terminal device; generating graph query statements for at least one sequentially executed execution unit corresponding to the question information; accessing a pre-constructed knowledge graph based on each graph query statement to obtain the execution result of the execution unit, and storing the execution result so that the graph query statement of the next execution unit can access the knowledge graph based on it; and determining answer information corresponding to the question information based on the execution result of the last execution unit, so that the answer information can be fed back to the terminal device. The accuracy of answer generation can be greatly improved.

Description

Question-answer processing method, device and equipment based on knowledge graph
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, and a device for processing a question and answer based on a knowledge graph.
Background
With competition in the financial market becoming increasingly fierce, customer service has become an indispensable part of a company's ecosystem. With the development of artificial intelligence and natural language processing technology, automatic question answering is increasingly used in place of question answering by human agents. In the day-to-day work of bank customer service, important question-and-answer scenarios such as branch information enquiries and credit card information enquiries account for a large share of the demand on human agents, and improving the efficiency of customer service in these scenarios with artificial intelligence technology has become imperative.
At present, with continued research into and popularization of knowledge graph technology, the industry is gradually converting knowledge data into knowledge graphs and managing, accessing, and storing them as data assets, replacing the traditional full-text knowledge search and enabling applications such as knowledge visualization, intelligent question answering, precise search, and knowledge reasoning. Knowledge-graph-based question answering mainly works by designing graph query statement templates in advance for a range of question categories; after a question is received, entities in the question are recognized and linked, the linked result is matched against the templates to obtain the query intent, and the graph query statement is generated by substituting the linked result into the matched template. This template-based approach requires a large number of graph query statement templates to be constructed in advance, which consumes considerable labor. At the same time, the set of constructed templates is inherently limited and cannot meet the flexible requirements of knowledge consultation scenarios.
No effective solution to the above problems is currently available.
Disclosure of Invention
The embodiments of this specification aim to provide a knowledge-graph-based question-answer processing method, apparatus, and device that can further improve the accuracy of answer information generation.
This specification provides a knowledge-graph-based question-answer processing method, apparatus, and device, implemented as follows:
a knowledge-graph-based question-answer processing method, comprising: receiving question information, expressed in natural language, sent by a terminal device; generating graph query statements for at least one sequentially executed execution unit corresponding to the question information; accessing a pre-constructed knowledge graph based on each graph query statement to obtain the execution result of the execution unit, and storing the execution result so that the graph query statement of the next execution unit can access the knowledge graph based on it; and determining answer information corresponding to the question information based on the execution result of the last execution unit, so as to feed the answer information back to the terminal device.
In other embodiments of the method provided herein, the method further comprises: comparing the similarity between the answer information and the sample answers corresponding to the question information to determine the expected value of the answer information based on the similarity; under the condition that the expected value is larger than a specified value, storing the graph query statement of at least one sequentially executed execution unit corresponding to the answer information into a reference information set as a graph query statement sequence so as to optimize a statement generation module based on the reference information set; the statement generation module is used for generating a graph query statement of the problem information.
In other embodiments of the method provided herein, the sentence generation module is constructed based on iterative reinforcement learning and a maximum likelihood estimation algorithm.
In other embodiments of the method provided herein, the method further comprises: extracting a feature vector of the graph query statement sequence when the expected value is greater than the specified value; extracting, based on the knowledge graph, a graph feature vector of the sample answer corresponding to the question information, the graph feature vector being generated at least from the answer's graph path; comparing the query feature vector with the graph feature vector to determine whether the graph query statement sequence is correct; and labeling the graph query statement sequence with the result of whether it is correct, so as to optimize the statement generation module based on the labeled graph query statement sequence.
In other embodiments of the method provided in this specification, accessing a pre-constructed knowledge graph based on the graph query statement includes: the execution module accessing the pre-constructed knowledge graph based on the graph query statement, wherein the execution module is built using a Lisp interpreter.
In other embodiments of the method provided in this specification, accessing a pre-constructed knowledge graph based on the graph query statement includes: checking the graph query statement against a pre-constructed statement structure information set, the statement structure information set comprising at least the syntactic structure information of graph query statements in the Gremlin language; and, when the check passes, accessing the pre-constructed knowledge graph based on the graph query statement.
In another aspect, an embodiment of the present specification further provides a knowledge-graph-based question-answer processing apparatus, the apparatus including: a receiving module, configured to receive question information, expressed in natural language, sent by a terminal device; a statement generation module, configured to generate graph query statements for at least one sequentially executed execution unit corresponding to the question information; an execution module, configured to access a pre-constructed knowledge graph based on each graph query statement, obtain the execution result of the execution unit, and store the execution result so that the graph query statement of the next execution unit can access the knowledge graph based on it; and an answer generation module, configured to determine answer information corresponding to the question information based on the execution result of the last execution unit, so as to feed the answer information back to the terminal device.
In other embodiments of the apparatus provided herein, the apparatus further comprises: a first comparison module, configured to compare the similarity between the answer information and the sample answer corresponding to the question information, so as to determine an expected value of the answer information based on the similarity; and an information set updating module, configured to store, as a graph query statement sequence, the graph query statements of the at least one sequentially executed execution unit corresponding to the answer information into a reference information set when the expected value is greater than a specified value, so as to optimize the statement generation module based on the reference information set; the statement generation module is used to generate the graph query statements for the question information.
In other embodiments of the apparatus provided herein, the apparatus further comprises: a first feature extraction module, configured to extract a feature vector of the graph query statement sequence when the expected value is greater than the specified value; a second feature extraction module, configured to extract, based on the knowledge graph, a graph feature vector of the sample answer corresponding to the question information, the graph feature vector being generated at least from the answer's graph path; a second comparison module, configured to compare the query feature vector with the graph feature vector to determine whether the graph query statement sequence is correct; and a labeling module, configured to label the graph query statement sequence with the result of whether it is correct, so as to optimize the statement generation module based on the labeled graph query statement sequence.
In another aspect, the present specification further provides a knowledge-graph-based question-answering apparatus, which includes at least one processor and a memory for storing processor-executable instructions, where the instructions, when executed by the processor, implement the steps of the method according to any one or more of the above-mentioned embodiments.
In the knowledge-graph-based question-answer processing method, apparatus, and device provided in one or more embodiments of the present specification, the generation of answer information corresponding to question information is split into at least one sequentially executed execution unit; a graph query statement is generated for each execution unit, the corresponding graph query statement is used to access the database or perform a logical computation, and the execution result is stored in association with the unit. The graph query statements of later execution units are executed in turn based on the execution results of earlier execution units, so as to generate the final answer information. The overall processing logic can be viewed as an inverted tree: each execution unit is a leaf whose execution result is cached in memory and referenced when the graph query statements of subsequent execution units are executed, and the answer information for the question is generated from the result of the final root node. This makes the processing logic of the whole question-answer pipeline clearer, facilitates optimization and adjustment of the model's processing logic, and can greatly improve both training efficiency and the accuracy of the model's execution results, thereby further improving the accuracy of answer generation.
Drawings
To illustrate the technical solutions in the embodiments of the present specification or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some of the embodiments described in the present specification, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic flow chart of an embodiment of a knowledge-graph-based question-answer processing method provided in the present specification;
fig. 2 is a schematic block diagram of a knowledge-graph-based question-answering processing apparatus according to the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on one or more embodiments of the present specification without making any creative effort shall fall within the protection scope of the embodiments of the present specification.
In one scenario example provided by the embodiments of the present specification, the question answering processing method may be applied to a system for executing question answering processing, and the system may include a server and a terminal device. The server may refer to a single server, or may include a server cluster composed of a plurality of servers. The terminal device may be a mobile phone or a service terminal. The server can receive question information sent by the terminal equipment, process the question information based on a pre-configured question-answering processing model and a knowledge graph to obtain answer information corresponding to the question information, and feed the answer information back to the terminal equipment to realize automatic question-answering processing.
The knowledge graph may be pre-constructed and stored in a database accessible to the server. The knowledge graph information may come from knowledge bases established inside the enterprise, related open-source knowledge bases, and the like. Information associated with question-and-answer dialogues can generally be divided into structured data accessed from databases and unstructured, document-based text data. When constructing a knowledge graph from a knowledge base, knowledge extraction can be performed first. The unstructured data can be preprocessed before knowledge extraction and converted into structured data, so as to improve the accuracy of question answering based on the knowledge graph.
For example, the data about bank branches stored in the knowledge base is usually structured. Branch data may include the branch name, branch address, contact information, range of services handled, business hours, administrative region, and so on; only the branch address, which is short text data, needs further processing to parse out entity information such as province, city, and district. Credit card data, by contrast, usually exists as text. For text data, semantic analysis is needed: sentences are segmented into words, invalid words are removed, error correction is applied, a related dictionary is built, low-frequency words are removed and the top-K high-frequency words are retained, and an entity synonym library can be established. Knowledge is then extracted from the preprocessed information. The synonym library is mainly used for entity fusion during graph construction and for entity disambiguation during entity linking, improving question-answering accuracy.
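As an illustration only (not part of the disclosed method), this preprocessing could be sketched roughly as follows, assuming the jieba segmentation library; the stop-word and synonym lists are purely hypothetical:

```python
# Illustrative preprocessing sketch: segment text, drop stop/invalid words,
# keep the top-K high-frequency terms, and normalize entity synonyms.
from collections import Counter
import jieba  # assumed Chinese word-segmentation library

STOP_WORDS = {"的", "了", "吗", "请问"}           # illustrative stop words
SYNONYMS = {"营业网点": "网点", "支行": "网点"}    # illustrative entity synonyms

def preprocess(corpus: list[str], top_k: int = 1000) -> list[str]:
    counter = Counter()
    for sentence in corpus:
        tokens = [SYNONYMS.get(t, t) for t in jieba.lcut(sentence)]
        counter.update(t for t in tokens if t not in STOP_WORDS)
    # drop low-frequency words and keep only the top-K terms for the dictionary
    return [word for word, _ in counter.most_common(top_k)]
```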
Next, based on analysis of the knowledge base and related question samples, an ontology schema is created, knowledge graph triples are established, and the structure of each kind of entity is defined, including the relations between entities, the attributes of entities, and the attributes of relations. The relevant information is mapped onto the schema fields according to the knowledge extraction result of the first step, the knowledge graph is constructed, and finally the knowledge graph is imported into the database.
For example, each bank branch can be used as a node, with the branch number and branch type as its attributes; provinces, cities, and districts are other nodes, for which a regional affiliation relationship with the bank branch is established; a relationship between the bank branch and its business hours is established; a "nearby" relationship between branches is established according to the distance between them; and an ontology synonym library is built as needed. As another example, a credit card, a credit card activity, and credit card interest are all different entities; the annual fee of the credit card is an attribute of the credit card, while the activity deadline is an attribute of the activity entity. To query the deadline of a certain activity of a certain credit card, the activity can be found through the relationship between the credit card and the activity, and then the deadline attribute of that activity can be read. A knowledge graph is established based on the relations between entities, the attributes of entities, and the attributes of relations, and stored in the database.
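For illustration, the branch and credit-card structures described above could be laid out as (head, relation/attribute, tail) triples; the concrete names and values below are assumptions, and in practice the triples would be imported into the graph database, e.g. via Gremlin addV/addE statements:

```python
# Hypothetical triples sketching the schema described above (names/values illustrative).
branch_triples = [
    ("BranchA", "attr:branch_no", "0001"),
    ("BranchA", "attr:branch_type", "sub-branch"),
    ("BranchA", "located_in", "DistrictX"),
    ("DistrictX", "located_in", "CityB"),
    ("BranchA", "has_hours", "Mon-Sat 09:00-17:00"),
    ("BranchA", "nearby", "BranchB"),                      # derived from branch-to-branch distance
]
credit_card_triples = [
    ("GoldCard", "attr:annual_fee", "200 CNY"),            # attribute of the card entity
    ("GoldCard", "has_activity", "CashbackActivity"),
    ("CashbackActivity", "attr:deadline", "2021-12-31"),   # queried via card -> activity -> deadline
]
```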
Knowledge graphs for different topics can also be constructed in advance, with topics divided, for example, by business type or business requirement. Constructing knowledge graphs with different topics reduces the amount of information searched when generating answer information, reduces interference from noisy information, and improves search efficiency and accuracy.
Of course, those skilled in the art may make other modifications to the knowledge graph construction in light of the technical spirit of the embodiments of the present disclosure; as long as the functions and effects achieved are the same as or similar to those of the present disclosure, they shall fall within its protection scope.
The question-answer processing model includes at least a statement generation module and an execution module. The statement generation module converts the question information expressed in natural language into graph query statements executable by the server. A graph query statement can be used to access the knowledge graph in the database and extract the required information; it can also be used to perform logical computations on the information extracted from the knowledge graph. The graph query statements may, for example, be written in the Gremlin language, although other languages capable of expressing graph queries can also be used. The logical computations may include, for example, filtering, averaging, or summing the information extracted from the knowledge graph. The statement generation module sends the generated graph query statements to the execution module, which accesses the knowledge graph in the database and performs the related logical computations. Finally, the answer information corresponding to the question information can be obtained from the execution results of the execution module.
When determining the answer to a complex question based on the knowledge graph, the statement generation module may split the question into a plurality of execution units that are executed independently and in sequence, and generate the graph query statement corresponding to each execution unit. After the graph query statement of each execution unit is executed, an execution result is obtained. The execution result may be, for example, data queried from the graph by the corresponding unit's graph query statement, or a value obtained through computation. The execution result of each execution unit may be stored in association with that unit. The graph query statement of a later execution unit may reference the execution results of earlier units, accessing the database or performing logical computations based on the referenced results.
An execution unit may be, for example, a one-hop (first-degree) query on the knowledge graph, or a logical computation based on the query results of a previous execution unit, and so on. A first-degree query refers to a query between nodes connected by an edge in the knowledge graph, or between a node and its attributes. For example, querying the contact phone number of branch A, which is an attribute of branch A, is a first-degree query. Querying whether branch A is open on a given day, where the relation between that day and the branch is "open for business", is also a first-degree query. After the graph query statements are executed in sequence, the answer information corresponding to the question information can be obtained.
For example, suppose the question information is "Which branches in city B are open on Saturday?". The statement generation module may split this question into four execution units that are executed in sequence: "which branches are in city B", "the business hours of each branch", "the branches open on Saturday", and "the name and address of each such branch".
The execution unit "which branches are in city B" can be processed first: the statement generation module generates the graph query statement for this unit and sends it to the execution module. The execution module accesses the database with this statement, queries the knowledge graph for the branch entities associated with city B, and stores the resulting branch list (held in a list, which may contain the node identifiers of the branches in the graph) as the execution result of this graph query statement.
Next, the execution unit "the business hours of each branch" can be processed: the statement generation module generates the corresponding graph query statement and sends it to the execution module. When executing this statement and accessing the database, the execution module can reference the execution result of the previous unit, query the knowledge graph for the business hours of all branches in the city-B branch list, store the queried business hours as attribute information of those branches, obtain the execution result of this unit, and store it.
By analogy, when the next execution unit, "the branches open on Saturday", is executed, the execution result of the previous unit can be referenced, and the branches in the list whose business hours include Saturday are extracted or marked and stored. The next execution unit, "the name and address of each such branch", can then be executed: referencing the previous result, the database is accessed and the names and addresses of the extracted or marked branches are queried from the knowledge graph. Finally, the answer information corresponding to the question information can be generated from the execution result of this last execution unit.
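A rough sketch of this pipeline, assuming a `run_gremlin` helper that stands in for the execution module; the Gremlin strings and property names below are illustrative, not taken from the patent:

```python
# Hypothetical sketch of sequentially executed units whose graph query statements
# reference earlier results cached in memory.
def run_gremlin(statement: str, **bindings):
    """Stub: submit a Gremlin statement (with bound variables) and return its result."""
    raise NotImplementedError

def answer_saturday_branches():
    cache = {}  # execution results stored in memory for reference by later units

    # Unit 1: which branches are in city B
    cache["branches"] = run_gremlin("g.V().has('city','name','B').in('located_in').id()")
    # Unit 2: business hours of each branch, referencing the cached branch list
    cache["hours"] = {b: run_gremlin("g.V(bid).out('has_hours').values('days')", bid=b)
                      for b in cache["branches"]}
    # Unit 3: branches open on Saturday -- a logical computation over the previous result
    cache["open_saturday"] = [b for b, days in cache["hours"].items() if "Saturday" in days]
    # Unit 4: name and address of those branches -> basis of the final answer
    cache["answer"] = run_gremlin("g.V(bids).valueMap('name','address')",
                                  bids=cache["open_saturday"])
    return cache["answer"]
```

The point of the sketch is only that each unit's result is stored under a name so that the statements of later units can bind to it.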
Of course, the above division into execution units is merely an example and does not limit the actual execution. In practice, for a given question, there may be more than one way to divide the execution units and order their execution while still producing the correct answer, and many variations are possible. For example, the statement generation module may analyze the entity information in the question and the relations between the entities, and generate the graph query statements of at least one sequentially executed execution unit from the analysis result. Alternatively, the statement generation module may be built from a neural network, a sequence-to-sequence model, or the like, and trained with sample data; the trained statement generation module then generates the graph query statements of the at least one sequentially executed execution unit corresponding to the question information.
Of course, those skilled in the art may make other modifications to the construction of the statement generation module in light of the technical spirit of the embodiments of the present disclosure.
Through the above embodiments, the generation of answer information corresponding to the question information is split into at least one sequentially executed execution unit; a graph query statement is generated for each unit, the corresponding statement is used to access the database or perform logical computations, and the execution result is stored in association with the unit. The graph query statements of later units are executed in turn based on the results of earlier units to generate the final answer information. The overall processing logic can be viewed as an inverted tree: each execution unit is a leaf whose execution result is cached in memory and referenced when subsequent units' graph query statements are executed, and the answer information is generated from the result of the final root node. The processing logic of the whole question-answer pipeline becomes clearer, which facilitates optimizing and adjusting the model's processing logic and can greatly improve training efficiency and the accuracy of the model's execution results.
The statement generation module may be built on a Seq2Seq (sequence-to-sequence) model. For example, it may include at least two LSTM networks, one serving as the encoder and one as the decoder. The encoder extracts features from the question information and passes them to the decoder, which converts them into the graph query statements of the at least one execution unit.
For example, the question information x may first be split into a word sequence x_1, x_2, …, x_m of m words, and each word x_t mapped to an embedding vector q_t to obtain the word vectors. The encoder then reads these embedding vectors and updates its hidden state step by step as h_{t+1} = LSTM(h_t, q_t, w_e), where h_t is the hidden state corresponding to word x_t and w_e is the encoder's parameter matrix. The hidden state finally output by the encoder is the feature extracted from the word sequence; that is, it characterizes the overall features of the question information x. The decoder acts as the graph query statement generator. The hidden state output by the encoder is used as the decoder's initial hidden state, and the decoder's hidden state is updated step by step as u_{t+1} = LSTM(u_t, c_{t-1}, w_d), where c_{t-1} is the embedding vector generated from the execution result of the previous execution unit and w_d is the decoder's parameter matrix.
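A minimal PyTorch sketch of this encoder-decoder structure is given below; the layer sizes are arbitrary assumptions, and the embedding c_{t-1} of the previous execution result is omitted for brevity:

```python
# Hedged sketch of an LSTM encoder-decoder for turning a question into query tokens.
import torch
import torch.nn as nn

class QueryGenerator(nn.Module):
    def __init__(self, word_vocab, query_vocab, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(word_vocab, emb_dim)     # q_t = embedding of word x_t
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, query_vocab)          # scores over graph-query tokens

    def forward(self, question_ids, decoder_inputs):
        q = self.embed(question_ids)                       # (batch, m, emb_dim)
        _, state = self.encoder(q)                         # final hidden state = overall question feature
        dec, _ = self.decoder(self.embed(decoder_inputs), state)  # decoder starts from encoder state
        return self.out(dec)                               # token scores for the graph query statements
```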
For example, a hash-table storage mechanism may be used to store the intermediate information produced while generating the answer information. The hash table stores information as key-variable pairs: the key identifies the intermediate information, and the variable's value is the intermediate information itself. During encoding, entity linking can be used to match the question information to entities of the knowledge graph. For each linked entity, a record is added to the hash table, where the key value k_i is the encoder hidden-state vector corresponding to that linked entity and the variable value s_i is the name of that entity's data representation in the knowledge graph. During decoding, whenever the graph query statement of an execution unit is generated, the execution module executes it, the execution result is stored as a variable value, and the corresponding key is determined by the decoder hidden state of that execution unit. The key-variable pair for the execution result may further be added to the decoder's reference table, so that when the decoder generates the graph query statement of the next execution unit, the earlier execution result can be retrieved from the reference table.
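A hypothetical key-variable memory along these lines might look as follows; the cosine-similarity lookup is one possible matching rule, not something the patent specifies:

```python
# Illustrative key-variable memory for linked entities and execution results.
import torch

class KeyVariableMemory:
    """Stores (key vector, variable value) pairs keyed by hidden-state vectors."""
    def __init__(self):
        self.keys = []      # k_i: encoder/decoder hidden-state vectors
        self.values = []    # s_i: entity names or stored execution results

    def add(self, key: torch.Tensor, value):
        self.keys.append(key)
        self.values.append(value)

    def lookup(self, query: torch.Tensor):
        # retrieve the value whose key is most similar to the current hidden state
        sims = torch.stack([torch.cosine_similarity(query, k, dim=0) for k in self.keys])
        return self.values[int(sims.argmax())]
```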
Accordingly, the statement generation module performs feature extraction on the question information to obtain a representation of its overall features, such as the final hidden state output by the encoder described above. The decoder then decodes from this hidden state and progressively generates the graph query statements of the at least one sequentially executed execution unit corresponding to the question. The statement generation module sends the graph query statements of each execution unit to the execution module in order, so that the execution module executes them in sequence. By taking the overall features of the question as input and generating the graph query statement of the next execution unit based on the execution result of the previous one during decoding, graph query statements can be generated more accurately. At the same time, the sequence of execution units corresponding to the correct answer can be found more quickly, improving training efficiency.
Of course, those skilled in the art may make other modifications to the above embodiments in light of the teachings herein; as long as the functions and effects achieved are the same or similar, they fall within the scope of the present disclosure.
Based on the above embodiments, as shown in fig. 1, an example of this specification provides a knowledge-graph-based question-answering processing method applied to a server, where the method may include the following steps.
S20: and receiving the problem information which is sent by the terminal equipment and represented by using the natural language.
The server can receive question information expressed in natural language from the terminal device. For example, in a question-and-answer dialogue scenario, a user may enter question information on a terminal device, which then sends it to the server. The user may enter the question by typing or by voice; after receiving it, the server can convert the question information into text form. Accordingly, question information is typically expressed in natural language.
S22: and generating a graph query statement of at least one sequentially executed execution unit corresponding to the problem information.
After receiving the question information, the server may convert it into program code executable by the server, so as to access the database and perform data queries. In this embodiment, the information stored in the database is organized as a knowledge graph; its construction follows the embodiments above and is not repeated here. Accordingly, the server converts the question information into graph query statements and accesses the database based on them. The graph query statements may, for example, be written in the Gremlin language, although other graph query languages can also be used.
The server may generate the graph query statements of at least one sequentially executed execution unit corresponding to the question information. The execution logic contained in an execution unit can take various forms, such as a first-degree query on the knowledge graph or a logical computation based on a query result. The server can generate the graph query statements of the at least one sequentially executed execution unit according to the analysis of the entity information in the question and the relations between the entities. Alternatively, a statement generation module may be built from a neural network, a sequence-to-sequence model, or the like, as described in the embodiments above and not repeated here, so that the statement generation module can generate the graph query statements of the at least one sequentially executed execution unit for the question. That is, the statement generation module is trained so that it can determine the execution logic and combination of execution units needed to obtain the answer information corresponding to the question.
S24: and accessing a pre-constructed knowledge graph based on the graph query statement to obtain an execution result of the execution unit, and storing the execution result so that the graph query statement of the next execution unit accesses the knowledge graph based on the execution result.
The server can access the pre-constructed knowledge graph based on the graph query statement of each execution unit and obtain the corresponding execution result, which may also be stored; the storage manner may follow the embodiments above and is not repeated here. When executing the graph query statement of the next execution unit, the server may reference the execution result of the previous unit, access the knowledge graph, generate the execution result of that unit, and store it, and so on until all execution units have finished executing.
S26: and determining answer information corresponding to the question information based on the execution result of the last execution unit so as to feed back the answer information to the terminal equipment.
The server may determine the answer information corresponding to the question information based on the execution result of the last execution unit, for example by directly using that execution result as the answer. If the execution result is the name of a data representation in the knowledge graph, the corresponding entity information can be further extracted based on that name to generate the answer information. The server can then feed the finally generated answer information back to the terminal device, so that the terminal device can display it to the user.
Through this implementation, the overall processing logic is in effect organized as an inverted tree: each execution unit is a leaf whose execution result is cached in memory and referenced when the graph query statements of subsequent execution units are executed, and the answer information is generated from the final result executed at the root node. The processing logic of the whole question-answer pipeline becomes clearer, which facilitates optimizing and adjusting the model's processing logic and can greatly improve training efficiency and the accuracy of the model's execution results.
Preferably, the execution module may be built using a Lisp interpreter. Executing discrete operations on a knowledge base through a graph query language involves language understanding, semantic parsing, symbolic reasoning, and the like. However, such pre-execution operations are difficult to generalize beyond the training data, which makes the resulting model less stable and less general. A Lisp interpreter is abstract, scalable, and precise, and using one to build the execution module makes it easier to implement the language understanding, semantic parsing, and symbolic reasoning required before execution.
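Purely as a sketch of the idea, a Lisp-style dispatcher over a small set of graph and logic operations might look like this; the operation names and the `graph.neighbors` interface are assumptions rather than the patented design:

```python
# Toy Lisp-style evaluator: nested (op arg1 arg2 ...) expressions are reduced to
# one-hop graph queries or logical computations, with symbols resolved via a memory table.
OPS = {
    "hop":     lambda graph, nodes, edge: graph.neighbors(nodes, edge),   # one-hop graph query (assumed interface)
    "filter":  lambda _g, rows, pred: [r for r in rows if pred(r)],       # logical computation
    "average": lambda _g, values: sum(values) / len(values),
}

def evaluate(expr, graph, memory):
    """Evaluate an expression; strings are looked up in memory, lists dispatch to OPS."""
    if isinstance(expr, str):
        return memory.get(expr, expr)           # variable reference or literal
    op, *args = expr
    return OPS[op](graph, *[evaluate(a, graph, memory) for a in args])
```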
In some embodiments, a program-assisted execution mechanism may further be configured in the Lisp interpreter. This mechanism provides code assistance for executing statements and reduces the search space of graph traversal operations. When the expected value of a piece of answer information is high, the execution operations of its graph query statements can be stored in an execution reference set, so that the program-assisted execution mechanism can be tuned and optimized based on that set. Accordingly, with this mechanism, the concrete machine operations that would otherwise only be obtained after language understanding, parsing, and reasoning can be associated quickly at execution time, greatly improving query efficiency and accuracy.
In other embodiments, a program pre-detection mechanism may also be pre-configured in the execution module. After receiving an execution statement, the execution module can first check the graph query statement's syntax, semantics, and so on using this mechanism; if an error is found, it is marked and fed back in time and execution is terminated. For example, when the Gremlin language is used as the graph query language, the syntax and functions of graph traversal operations are built on Gremlin. Gremlin differs from an ordinary structured database query language (such as SQL): a large number of query functions are generated on top of Gremlin, and composing these functions yields a large number of programs. For the execution of a Gremlin program, it is difficult to confirm its validity through machine-learning-style training and prediction, so if the final query is wrong or fails, it is hard to track down the problem. The syntactic and semantic structure of Gremlin can therefore be analyzed in advance; for example, g.V() queries all nodes in the current graph, and g.V().properties().key() queries all attribute keys of all nodes in the current graph. A program pre-detection mechanism conforming to the language can then be built from Gremlin's syntactic and semantic structure, so that execution failures caused by syntactic or semantic errors in execution statements are tracked in time, the trouble such problems bring to model optimization is avoided, and the efficiency and stability of model optimization are improved.
Accordingly, in some embodiments, accessing a pre-constructed knowledge graph based on the graph query statement may include: checking the graph query statement against a pre-constructed statement structure information set, the statement structure information set comprising at least the syntactic structure information of graph query statements in the Gremlin language; and, when the check passes, accessing the pre-constructed knowledge graph based on the graph query statement.
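A hedged sketch of such a pre-detection check, assuming the statement structure information set is held as regular-expression patterns; the patterns below only cover the two Gremlin forms mentioned above plus a simple has/step chain:

```python
# Illustrative pre-detection: validate a Gremlin statement against known structures
# before execution; unmatched statements are flagged and execution is terminated.
import re

STATEMENT_STRUCTURES = [
    r"^g\.V\(\)$",                               # g.V(): all nodes in the current graph
    r"^g\.V\(\)\.properties\(\)\.key\(\)$",      # all attribute keys of all nodes
    r"^g\.V\(\)\.has\('\w+','\w+','[^']*'\)(\.\w+\('[^']*'\))*$",  # simple has/step chains
]

def precheck(statement: str) -> bool:
    if any(re.match(p, statement) for p in STATEMENT_STRUCTURES):
        return True
    print(f"pre-detection failed, execution terminated: {statement}")  # mark and feed back
    return False
```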
In other embodiments, the question-answer processing model may further include a management module. During model training or model optimization, the answer information may be sent to the management module, which analyzes the expected value of the answer information. For example, during model training, the management module may determine the expected value by comparing the similarity between the answer information and the sample answer. In a question-answer dialogue scenario, the management module may determine the expected value based on a score given to the answer information, for example a score assigned by business staff or by the user, which indicates the accuracy of the answer obtained by the server. The statement generation module can adjust and optimize its model parameters based on the expected value, improving the accuracy of the generated graph query statement sequences and, in turn, the accuracy of the answer information.
The question information sent by the terminal device usually exists in natural-language form, while the information in the database is stored as a knowledge graph and queries over the related entity information must be executed in a dedicated graph query language. The correspondence between the question information and the graph query statements that execute the data query is therefore relatively complex and difficult to label directly. Moreover, for a given question, the graph query statements of the execution units generated by the statement generation module, and the order in which the units are executed, can vary widely. Directly labeling question information with its corresponding graph query statement sequence would therefore be highly complex. In this embodiment, the question-answer processing model is built using question-answer pairs as sample data, which reduces the difficulty and cost of data annotation.
For example, when training the statement generation module, the standard answer corresponding to a question can be taken as the sample answer, question-answer pair sample data can be constructed, and the adjustment and optimization of the question-answer processing model's parameters can be guided by it. Question samples can be constructed from dialogue scenarios and common questions, and the sample answers corresponding to the question samples can then be determined from the business processing logic, the knowledge graph, and other information related to the dialogue scenario. For example, in a credit card question-answer scenario, the business processing logic and knowledge graph of credit cards may be analyzed to associate the entity information in the question with entity information in the knowledge graph, find the sample answer, and generate the question-answer pair sample data.
For example, in a bank branch question-answer scenario involving the business hours of a certain branch, common question information might be "Is branch A open on Saturday?" or "Until what time is branch A open on Saturday afternoon?". Assuming there are 10,000 bank branches and 20 common types of branch-related questions, with every branch expected to be associated with all 20 question types, 200,000 questions can be enumerated. 10% of them can be randomly sampled as question-answer pair sample data, and training, validation, and prediction sets can be constructed to carry out multiple rounds of model training.
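An illustrative sketch of this sample construction, assuming question templates with a {branch} placeholder and an 80/10/10 split of the sampled pairs (the split ratio is an assumption, not stated in the patent):

```python
# Build question-answer pairs from branches x templates, randomly sample 10%,
# and split the sample into training / validation / prediction sets.
import random

def build_qa_samples(branches, templates, answer_fn, sample_ratio=0.1, seed=42):
    pairs = [(t.format(branch=b), answer_fn(b, t)) for b in branches for t in templates]
    random.Random(seed).shuffle(pairs)
    sampled = pairs[: int(len(pairs) * sample_ratio)]        # e.g. ~20,000 of 200,000 pairs
    n_train, n_val = int(0.8 * len(sampled)), int(0.1 * len(sampled))
    return (sampled[:n_train],                               # training set
            sampled[n_train:n_train + n_val],                # validation set
            sampled[n_train + n_val:])                       # prediction set
```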
In a question-answer dialogue scenario, the answer information corresponding to a question can be scored, question-answer pair sample data can be constructed from the higher-scoring answers, and the optimization of the statement generation module can be guided by it: a higher-scoring answer can be used as a sample answer, and the statement generation module is optimized based on the corresponding question and answer information. Of course, other mechanisms may also be used in a question-answer dialogue scenario to determine the accuracy of the answer information, which is not limited here.
Accordingly, in some embodiments, the method may further comprise: comparing the similarity between the answer information and the sample answer corresponding to the question information to determine an expected value of the answer information based on the similarity; and, when the expected value is greater than a specified value, storing the graph query statements of the at least one sequentially executed execution unit corresponding to the answer information into a reference information set as a graph query statement sequence, so as to train and optimize the statement generation module based on the reference information set and then generate graph query statements for other questions with the optimized module, thereby improving the accuracy of answer generation.
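A minimal sketch of this step, assuming a simple string-similarity measure; the patent does not fix the similarity function or the threshold:

```python
# Compare the generated answer with the sample answer; when the expected value
# exceeds the threshold, keep the graph query statement sequence as a reference.
import difflib

def expected_value(answer: str, sample_answer: str) -> float:
    return difflib.SequenceMatcher(None, answer, sample_answer).ratio()

def maybe_store(answer, sample_answer, statement_sequence, reference_set, threshold=0.9):
    if expected_value(answer, sample_answer) > threshold:
        reference_set.append(statement_sequence)   # later used to optimize the statement generator
```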
Although question-answer pairs can guide the training and optimization of the model as sample data, they differ from the statement generation module's direct inputs and outputs. In some embodiments, the statement generation module may therefore be built using iterative reinforcement learning together with a maximum likelihood estimation algorithm; that is, the combination of iterative reinforcement learning and maximum likelihood estimation guides the training and optimization of the statement generation module. Maximum likelihood estimation improves the stability and learning efficiency of the model, while the reinforcement-learning objective is closer to the true goal; combining the two ensures training efficiency while improving the stability and accuracy of the model.
Given question information x, let each time step t ∈ {0, 1, …, T} have a hidden state, an execution unit, and a reward (s_t, a_t, r_t). The context of the question information is deterministic, so the hidden state can be defined by the question x and the sequence of execution units: s_t = (x, a_{0:t-1}), where a_{0:t-1} = (a_0, …, a_{t-1}) is the history at time step t. The valid execution units at time step t are a_t ∈ A(s_t), where A(s_t) is the set of valid executions given by the execution module. Each execution-unit history a_{0:t} corresponds to one execution sequence. The reward r_t = I[t = T] · F_1(x, a_{0:T}) is non-zero only at the last decoding step; it is the F_1 score of the answer generated by the query statement sequence a_{0:T} compared against the sample answer. The total reward of a_{0:T}, R(x, a_{0:T}), can therefore be determined using equation (1).
R(x, a_{0:T}) = Σ_{t=0}^{T} r_t = F_1(x, a_{0:T})    (1)
The decision process of the reinforcement-learning agent is defined by a policy, π_θ(s, a) = P_θ(a_t = a | x, a_{0:t-1}), where θ denotes the model parameters. Since the entity relations of the knowledge graph are deterministic, the probability of generating a_{0:T} can be computed with equation (2).
P_θ(a_{0:T} | x) = Π_{t=0}^{T} π_θ(s_t, a_t) = Π_{t=0}^{T} P_θ(a_t | x, a_{0:t-1})    (2)
The execution target of the final execution unit is defined as the accumulated reward of the earlier execution units. Because the graph query statement sequences formed by the execution units are not restricted to a limited range, the model may take a long time to generate the optimal sequence, and the optimal sequence may even be difficult to determine. The policy gradient method can be further used for reinforcement-learning training; it requires the neural network to output the predicted actions, the objective, and its gradient, as shown in equation (3).
O_RL(θ) = Σ_x Σ_{a_{0:T}} P_θ(a_{0:T} | x) R(x, a_{0:T})    (3)
∇_θ O_RL(θ) = Σ_x Σ_{a_{0:T}} P_θ(a_{0:T} | x) [R(x, a_{0:T}) − B(x)] ∇_θ log P_θ(a_{0:T} | x)
Here B(x) = Σ_{a_{0:T}} P_θ(a_{0:T} | x) R(x, a_{0:T}) is a baseline whose effect is to reduce the variance of the gradient estimate without introducing bias. Reinforcement learning uses a stochastic policy, and selective local gradient estimation is performed with beam search: unlike approximating the gradient by sampling from the model, the probabilities normalized within the beam are used and the graph query statement sequences of the top-k execution units are selected, so that training concentrates on high-probability sequences and the variance of the gradient is reduced. If the beam width k is small, good execution units may fall out of the beam, giving a zero gradient for all execution units in the beam. If k is large, training becomes very slow, and when the model is not yet well trained the normalized probability of a good execution unit remains small, so equation (3) stays close to the zero baseline. A training strategy based on maximum likelihood can therefore be further adopted to address the small normalized probability of the ideal execution unit. That is, an iterative process based on equation (4) can be performed: under the given parameters, good execution units are searched for and the corresponding graph query statement sequence is output, the model objective is optimized, and the probability of the optimal graph query statement sequence is obtained.
O_ML(θ) = Σ_x log P_θ(a*_{0:T}(x) | x)    (4)
where a*_{0:T}(x) denotes the highest-reward graph query statement sequence found so far for question x under the current parameters.
The model thus obtains the ideal graph query statement sequence corresponding to question x, and setting the derivative to zero yields the maximum-probability extremum, giving the parameters that maximize the likelihood of the highest-reward graph query statement sequence.
Iterative maximum likelihood estimation can produce the answer y, but it still does not directly optimize for the optimal graph query statement sequence: the best sequence found may in fact be a wrong execution that only happens to produce the correct answer. Therefore, after the best graph query statement sequence is found, reinforcement learning can be continued: the probabilities of the graph query statement sequences within a beam are normalized to sum to (1 − α), and α is used as the probability of the best sequence found. In this way the model always assigns a reasonable probability to high-reward graph query statement sequences during training, reducing the influence of wrong sequences on model stability. Preferably, the objectives of the two algorithms can be linearly combined to ensure training stability.
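A simplified sketch of this mixed objective for one question, assuming the k sequences kept in the beam are given as tensors of log-probabilities and rewards (shapes and helper names are assumptions, not the patented implementation):

```python
# Beam probabilities are renormalized to (1 - alpha) and the best-found sequence
# receives pseudo-probability alpha; the loss is a REINFORCE-style surrogate with baseline.
import torch

def mixed_policy_gradient_loss(log_probs, rewards, best_index, alpha=0.1):
    """log_probs, rewards: 1-D tensors over the k sequences kept in the beam."""
    with torch.no_grad():
        weights = (1 - alpha) * torch.softmax(log_probs, dim=0)  # renormalized within the beam
        weights[best_index] += alpha                             # best sequence gets probability alpha
        baseline = (weights * rewards).sum()                     # B(x): variance-reduction baseline
    # minimizing this surrogate raises the expected reward above the baseline
    return -(weights * (rewards - baseline) * log_probs).sum()
```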
As can be seen from the above embodiments, although question-answer pairs are not standard labeled samples for the question-answer processing model, a correct answer may be obtained while the graph query statement sequence that produced it is wrong and only yields the correct answer by accident. Moreover, the form and content of question-answer pairs are relatively fixed and their number is limited, so building the question-answer processing model on question-answer pairs alone makes it difficult to guarantee the accuracy of the model's internal processing logic.
In still other embodiments, the method may further comprise: when the expected value is greater than the specified value, taking the graph query statements of the at least one sequentially executed execution unit corresponding to the answer information as a graph query statement sequence and extracting its feature vector; extracting, based on the knowledge graph, a graph feature vector of the sample answer corresponding to the question information, the graph feature vector being generated at least from the answer's graph path; comparing the query feature vector with the graph feature vector to determine, based on the comparison, whether the graph query statement sequence is correct; and labeling the graph query statement sequence with the result of whether it is correct, so as to optimize the statement generation module based on the labeled sequence.
For example, an ideal graph query statement sequence obtained through reinforcement learning can be taken and its feature vector extracted. Knowledge graph features such as the answer path, the answer context and the answer type can be extracted in combination with the sample answer to form the map feature vector, which is compared with the query vector corresponding to the ideal graph query statement sequence to determine whether that sequence is correct. The graph query statement sequence can then be labeled as correct or wrong, the model can be further trained, and the generation probability of correct, ideal graph query statement sequences can be further improved.
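As a purely illustrative example, the following Python sketch compares a query vector extracted from a graph query statement sequence with a map feature vector built from the answer path, answer context and answer type, and labels the sequence as correct when the cosine similarity exceeds a threshold; the hashing-based embedding, the token pattern, the 0.5 threshold and the sample Gremlin-style statements are assumptions of the sketch, not of the described embodiments.

import hashlib, math, re
from typing import Iterable, List

DIM = 256

def tokens_of(text: str) -> List[str]:
    # crude tokenization of statements and graph features into lower-case identifiers
    return re.findall(r"[a-z_][a-z0-9_]*", text.lower())

def embed(tokens: Iterable[str]) -> List[float]:
    # hashed bag-of-tokens embedding into a fixed-dimensional vector
    vec = [0.0] * DIM
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    return vec

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def label_sequence(statement_seq: List[str], answer_path: List[str],
                   answer_context: List[str], answer_type: str,
                   threshold: float = 0.5) -> bool:
    query_vec = embed(tok for stmt in statement_seq for tok in tokens_of(stmt))
    map_vec = embed(answer_path + answer_context + [answer_type.lower()])
    return cosine(query_vec, map_vec) >= threshold    # True -> mark the sequence as correct

# Hypothetical Gremlin-style sequence and answer-side graph features:
seq = ["g.V().has('name','director_x')", "out('directed')", "values('title')"]
print(label_sequence(seq, ["director_x", "directed", "film_y"], ["film"], "Film"))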
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. For details, reference may be made to the description of the related embodiments of the related processing, and details are not repeated herein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the above-mentioned question-answer processing method based on the knowledge graph, one or more embodiments of the present specification further provide a question-answer processing device based on the knowledge graph. The device may include systems, software (applications), modules, components, servers, and the like that use the methods described in the embodiments of this specification, combined with the necessary hardware implementation. Based on the same inventive concept, the embodiments of this specification provide the device described in the following embodiments. Since the implementation scheme by which the device solves the problem is similar to that of the method, the specific implementation of the device in the embodiments of this specification may refer to the implementation of the foregoing method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated. Specifically, fig. 2 is a schematic block diagram of an embodiment of a knowledge-graph-based question-answer processing device provided in this specification, and as shown in fig. 2, the device may include the following modules when applied to a server.
The receiving module 102 may be configured to receive question information represented by a natural language and sent by a terminal device.
The statement generating module 104 may be configured to generate a graph query statement of at least one sequentially executed execution unit corresponding to the question information.
The execution module 106 may be configured to access a pre-constructed knowledge graph based on the graph query statement, obtain the execution result of the execution unit, and store the execution result, so that the graph query statement of the next execution unit accesses the knowledge graph based on the execution result.
The answer generating module 108 may be configured to determine answer information corresponding to the question information based on an execution result of the last execution unit, so as to feed back the answer information to the terminal device.
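By way of illustration only, the following Python sketch wires the four modules of fig. 2 together over a toy in-memory triple store; the rule-based statement generation, the triple layout and the example question are assumptions of the sketch, whereas in the described embodiments the statement generation module would be a trained model and the execution module would issue Gremlin-style graph query statements against a pre-constructed knowledge graph, storing each execution unit's result for use by the next.

from typing import Dict, List, Set, Tuple

KG: Set[Tuple[str, str, str]] = {          # toy knowledge graph of (head, relation, tail) triples
    ("company_a", "subsidiary_of", "group_b"),
    ("group_b", "headquartered_in", "city_c"),
}

class ReceivingModule:
    def receive(self, request: Dict) -> str:
        return request["question"]          # question information in natural language

class StatementGenerationModule:
    def generate(self, question: str) -> List[Dict]:
        # Each dict is one execution unit's graph query statement; "$prev" refers
        # to the stored result of the previous execution unit.
        return [
            {"start": "company_a", "relation": "subsidiary_of"},
            {"start": "$prev", "relation": "headquartered_in"},
        ]

class ExecutionModule:
    def run(self, statements: List[Dict]) -> List[str]:
        result: List[str] = []
        for stmt in statements:             # execute the units sequentially
            starts = result if stmt["start"] == "$prev" else [stmt["start"]]
            result = [t for (h, r, t) in KG if h in starts and r == stmt["relation"]]
        return result                       # execution result of the last execution unit

class AnswerGenerationModule:
    def answer(self, last_result: List[str]) -> str:
        return ", ".join(last_result) if last_result else "no answer found"

question = ReceivingModule().receive({"question": "Where is company_a's parent group headquartered?"})
stmts = StatementGenerationModule().generate(question)
print(AnswerGenerationModule().answer(ExecutionModule().run(stmts)))   # -> city_c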
In other embodiments, the apparatus may further comprise:
the first comparison module may be configured to compare similarity between the answer information and a sample answer corresponding to the question information, so as to determine an expected value of the answer information based on the similarity.
The information set updating module may be configured to store, as a graph query statement sequence, a graph query statement of at least one sequentially executed execution unit corresponding to the answer information into a reference information set when the expected value is greater than a specified value, so as to optimize the statement generation module based on the reference information set; the statement generation module is used for generating a graph query statement for the question information.
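A minimal, purely illustrative sketch of the first comparison module and the information set updating module follows; the token-level F1 score used here as the expected value and the 0.8 specified value are assumptions of the sketch rather than requirements of the described embodiments.

from collections import Counter
from typing import List

REFERENCE_SET: List[List[str]] = []      # stored high-quality graph query statement sequences

def expected_value(answer: str, sample_answer: str) -> float:
    # similarity between the produced answer and the sample answer, as token-level F1
    a, b = Counter(answer.split()), Counter(sample_answer.split())
    overlap = sum((a & b).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(a.values())
    recall = overlap / sum(b.values())
    return 2 * precision * recall / (precision + recall)

def maybe_store(statement_seq: List[str], answer: str, sample_answer: str,
                specified_value: float = 0.8) -> None:
    # store the sequence into the reference information set only when the expected
    # value exceeds the specified value
    if expected_value(answer, sample_answer) > specified_value:
        REFERENCE_SET.append(statement_seq)

maybe_store(["g.V().has('name','group_b')", "values('headquarters')"],
            "city_c", "city_c")
print(len(REFERENCE_SET))                # -> 1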
In other embodiments, the apparatus may further comprise:
the first feature extraction module may be configured to extract a feature vector of the graph query statement when the expected value is greater than a specified value.
The second feature extraction module may be configured to extract, based on the knowledge graph, a map feature vector of the sample answer corresponding to the question information; the map feature vector is generated at least according to the answer map path.
The second comparison module may be configured to compare the query vector with the map feature vector to determine whether the graph query statement sequence is correct.
The marking module may be configured to mark the graph query statement sequence with the result of whether the graph query statement sequence is correct, so as to optimize the statement generation module based on the marked graph query statement sequence.
It should be noted that the above-described apparatus may also include other embodiments according to the description of the method embodiment. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
The present specification also provides a knowledge-graph-based question-answer processing apparatus that can be applied to a single knowledge-graph-based question-answer processing system, as well as to a variety of computer data processing systems. The system may be a single server, or may include a server cluster, a system (including a distributed system), software (applications), an actual operating device, a logic gate device, a quantum computer, etc. using one or more of the methods or one or more of the example devices of the present specification, in combination with a terminal device implementing hardware as necessary. In some embodiments, an apparatus may include at least one processor and a memory storing processor-executable instructions that, when executed by the processor, perform steps comprising a method as in any one or more of the embodiments described above.
The memory may include a physical means for storing information, typically by digitizing the information and then storing it on a medium using electrical, magnetic or optical means. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, there are other forms of readable storage media, such as quantum memory and graphene memory.
It should be noted that the above-mentioned device may also include other implementation manners according to the description of the method or apparatus embodiment, and specific implementation manners may refer to the description of the related method embodiment, which is not described in detail herein.
It should be noted that the embodiments of the present specification are not limited to cases that necessarily comply with a standard data model/template or with the situations described in the embodiments of the present specification. Implementations slightly modified on the basis of certain industry standards or of the described embodiments, or implementations using custom modes or examples, may also achieve the same, equivalent, similar, or otherwise foreseeable effects as the above embodiments. Embodiments applying such modified or transformed approaches to data acquisition, storage, judgment, processing, and the like may still fall within the scope of the optional embodiments of this specification.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A question-answer processing method based on a knowledge graph is characterized by comprising the following steps:
receiving question information expressed in natural language and sent by terminal equipment;
generating a graph query statement of at least one sequentially executed execution unit corresponding to the question information;
accessing a pre-constructed knowledge graph based on the graph query statement to obtain an execution result of the execution unit, and storing the execution result so that the graph query statement of the next execution unit accesses the knowledge graph based on the execution result;
and determining answer information corresponding to the question information based on the execution result of the last execution unit so as to feed back the answer information to the terminal equipment.
2. The method of claim 1, further comprising:
comparing the similarity between the answer information and the sample answers corresponding to the question information to determine the expected value of the answer information based on the similarity;
under the condition that the expected value is larger than a specified value, storing the graph query statement of at least one sequentially executed execution unit corresponding to the answer information into a reference information set as a graph query statement sequence so as to optimize a statement generation module based on the reference information set; the statement generation module is used for generating a graph query statement of the question information.
3. The method of claim 2, wherein the sentence generation module is constructed based on iterative reinforcement learning and a maximum likelihood estimation algorithm.
4. The method of claim 2, further comprising:
extracting a feature vector of the graph query statement when the expected value is greater than a specified value;
extracting a map feature vector of a sample answer corresponding to the question information based on the knowledge map; the map feature vector is generated at least according to an answer map path;
comparing the query vector to the profile feature vector to determine whether the sequence of profile query statements is correct;
marking the graph query statement sequence by using the result of whether the graph query statement sequence is correct, so as to optimize the statement generation module based on the marked graph query statement sequence.
5. The method of claim 1, wherein accessing a pre-constructed knowledge graph based on the graph query statement comprises:
the execution module accesses a pre-constructed knowledge graph based on the graph query statement; wherein the execution module is built using a Lisp interpreter.
6. The method of claim 5, wherein accessing a pre-constructed knowledge graph based on the graph query statement comprises:
checking the graph query statement by using a pre-constructed statement structure information set; wherein the statement structure information set at least comprises graph query statement syntax structure information of the Gremlin language;
in the event that the check passes, accessing a pre-constructed knowledge-graph based on the graph query statement.
7. A knowledge-graph-based question-answering processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving the question information which is sent by the terminal equipment and is expressed in natural language;
a statement generating module, configured to generate a graph query statement of at least one sequentially executed execution unit corresponding to the question information;
the execution module is used for accessing a pre-constructed knowledge graph based on the graph query statement, obtaining the execution result of the execution unit, and storing the execution result so as to enable the graph query statement of the next execution unit to access the knowledge graph based on the execution result;
and the answer generation module is used for determining answer information corresponding to the question information based on the execution result of the last execution unit so as to feed the answer information back to the terminal equipment.
8. The apparatus of claim 7, further comprising:
the first comparison module is used for comparing the similarity between the answer information and the sample answer corresponding to the question information so as to determine the expected value of the answer information based on the similarity;
an information set updating module, configured to store, as a graph query statement sequence, a graph query statement of at least one sequentially executed execution unit corresponding to the answer information into a reference information set when the expected value is greater than a specified value, so as to optimize the statement generation module based on the reference information set; the statement generation module is used for generating a graph query statement of the question information.
9. The apparatus of claim 8, further comprising:
the first feature extraction module is used for extracting a feature vector of the graph query statement under the condition that the expected value is greater than a specified value;
the second feature extraction module is used for extracting a map feature vector of a sample answer corresponding to the question information based on the knowledge map; the map feature vector is generated at least according to an answer map path;
a second comparison module, configured to compare the query vector with the map feature vector to determine whether the map query statement sequence is correct;
and the marking module is used for marking the graph query statement sequence by using the result of whether the graph query statement sequence is correct or not so as to optimize the statement generation module based on the marked graph query statement sequence.
10. A knowledge-graph-based question-answering apparatus comprising at least one processor and a memory for storing processor-executable instructions which, when executed by the processor, implement steps comprising the method of any one of claims 1 to 6.