CN113590782B - Training method of reasoning model, reasoning method and device - Google Patents

Training method of reasoning model, reasoning method and device

Info

Publication number
CN113590782B
Authority
CN
China
Prior art keywords
entity
inference
reasoning
model
triplet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110854886.6A
Other languages
Chinese (zh)
Other versions
CN113590782A (en)
Inventor
庞超
王硕寰
孙宇
李芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110854886.6A priority Critical patent/CN113590782B/en
Publication of CN113590782A publication Critical patent/CN113590782A/en
Application granted granted Critical
Publication of CN113590782B publication Critical patent/CN113590782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/237 - Lexical tools
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/04 - Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The disclosure provides a training method for a reasoning model, a reasoning method, and a device, and relates to the technical field of artificial intelligence, in particular to the fields of natural language processing, knowledge graphs, and deep learning. The implementation scheme is as follows: sampling a ring subgraph from a knowledge graph; generating an inference statement corresponding to the ring subgraph; and training an inference model using the inference statement as a training sample.

Description

Training method of reasoning model, reasoning method and device
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the fields of natural language processing, knowledge graphs, and deep learning, and more particularly to a training method and apparatus for an inference model, an inference method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Deep learning techniques, particularly pre-trained language models (e.g., BERT models, GPT models, etc.), are widely used in natural language processing. At present, pre-trained language models are mostly used for tasks such as sequence labeling, text sentiment analysis, sentence matching, and machine translation.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a training method and apparatus for inference model, an inference method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a training method of an inference model, including: sampling a ring subgraph from a knowledge graph; generating an inference statement corresponding to the ring subgraph; and training an inference model using the inference statement as a training sample.
According to another aspect of the present disclosure, there is provided an inference method, including: inputting a question text into an inference model, where the inference model is trained according to the above training method of the inference model; and obtaining an answer corresponding to the question text output by the inference model.
According to another aspect of the present disclosure, there is provided a training apparatus of an inference model, including: a sampling module configured to sample a ring subgraph from a knowledge graph; a generation module configured to generate an inference statement corresponding to the ring subgraph; and a training module configured to train an inference model using the inference statement as a training sample.
According to another aspect of the present disclosure, there is provided an inference apparatus, including: a question input module configured to input a question text into an inference model, where the inference model is trained according to the above training method of the inference model; and an answer acquisition module configured to obtain an answer corresponding to the question text output by the inference model.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the training method and/or the reasoning method of the reasoning model described above.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided. The computer instructions are for causing a computer to perform the training method and/or the reasoning method of the reasoning model described above.
According to another aspect of the present disclosure, a computer program product is provided, including a computer program. The computer program, when executed by a processor, implements the training method and/or the reasoning method of the reasoning model described above.
According to one or more embodiments of the present disclosure, a ring subgraph is sampled from a knowledge graph, an inference statement corresponding to the ring subgraph is generated, and an inference model is trained using the inference statement as a training sample. A ring subgraph in a knowledge graph is a closed loop formed by a plurality of entities connected through relationship edges, and can represent a process of relational reasoning among the entities. The inference statement is a textual representation corresponding to the ring subgraph, and accordingly it is a corpus that expresses the reasoning process. Training the inference model with such inference statements allows the model to learn the knowledge reasoning process in the knowledge graph directly, so that the model acquires knowledge reasoning capability and can perform accurate and efficient knowledge reasoning.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a training method of an inference model in accordance with an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an exemplary knowledge-graph, in accordance with an embodiment of the disclosure;
FIGS. 4A-4C are schematic diagrams showing three ring subgraphs sampled from the knowledge graph shown in FIG. 3;
FIG. 5 shows a flow chart of an inference method in accordance with an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a training apparatus of an inference model in accordance with an embodiment of the present disclosure;
Fig. 7 shows a block diagram of the structure of an inference apparatus according to an embodiment of the present disclosure; and
fig. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the training methods and/or the reasoning methods of the reasoning model to be performed.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may browse web pages using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of any type of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, Wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures that involve virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in a variety of locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In some embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
Pre-trained language models (Pre-trained Language Model, PLM, hereinafter referred to as "language models"), such as the BERT (Bidirectional Encoder Representations from Transformers) model and the GPT (Generative Pre-Training) model, are widely used for natural language processing (Natural Language Processing, NLP) tasks such as sequence labeling, text sentiment analysis, sentence matching, and machine translation. Existing language models are usually obtained by training on large amounts of unlabeled corpora. These unlabeled corpora are usually descriptive and lack inferential corpora, that is, corpora that contain both the cause information (question conditions) and the result information (question answers) of a reasoning process; as a result, existing language models cannot learn the knowledge reasoning process and do not have knowledge reasoning capability.
In order for the language model to better capture knowledge, the language model may be combined with a knowledge graph. A knowledge graph is a structured semantic knowledge base that can be represented in the form of a network topology consisting of nodes and edges, where the nodes represent entities and the edges between the nodes represent relationships between the entities. In the knowledge graph, two connected nodes and their relationship may be expressed as a triplet (h, r, t), where h is a first entity (also called a head entity, a subject, etc.), t is a second entity (also called a tail entity, an object, etc.), and r is the relationship between the first entity and the second entity.
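For illustration, such a triplet maps naturally onto a small data structure; the following minimal Python sketch is an assumed representation (the type name and field names are not from the patent).

    from typing import NamedTuple

    class Triple(NamedTuple):
        head: str      # h: first entity (head entity, subject, etc.)
        relation: str  # r: relationship between the first and second entity
        tail: str      # t: second entity (tail entity, object, etc.)

    # one edge of a social-relationship knowledge graph: "the father of Zhang San is Zhang Yi"
    t1 = Triple("Zhang San", "father", "Zhang Yi")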
In the related art, the semantics of each triplet in the knowledge graph can be learned by training a TransE model, which generates a vector representation for each entity; the vector representations generated by the TransE model are then injected into the language model (typically by summing them with the language model's original input word vectors) to train the language model. Although this method allows the language model to learn certain knowledge, TransE can only model a single triplet in the knowledge graph and cannot model the relationships among triplets, whereas knowledge reasoning is precisely inference and prediction across triplets, so the reasoning capability obtained through TransE is very limited. In addition, this method is a two-stage training process (namely, training TransE and then training the language model), in which the reasoning capability of the final language model is learned only indirectly, which easily leads to inaccurate reasoning.
To equip a language model with accurate reasoning capability, embodiments of the present disclosure provide a training method for an inference model: a ring subgraph is sampled from a knowledge graph, an inference statement corresponding to the ring subgraph is generated, and the inference model is trained using the inference statement as a training sample. The inference model can thus learn the knowledge reasoning process in the knowledge graph directly, which gives it accurate knowledge reasoning capability, simplifies the model training process (no two-stage training as in the related art above is needed), and improves model training efficiency.
The training method of the inference model of the embodiments of the present disclosure may be performed in the server 120 shown in fig. 1, for example. For example, the server 120 may perform the training method of the inference model of the embodiments of the present disclosure based on the knowledge-graph stored in the database 130, and train to obtain the inference model.
In other embodiments, the training method of the inference model may also be performed in the client devices 101, 102, 103, 104, 105, and 106 shown in fig. 1. For example, a client device may initiate a request to obtain a knowledge-graph to server 120 over network 110. In response to the request, the server 120 obtains a knowledge-graph from the database 130 and returns it to the client device. The client device performs the training method of the inference model according to the embodiment of the present disclosure based on the knowledge-graph returned by the server 120, and trains to obtain the inference model. In other embodiments, the client device may also perform the training method of the inference model of the embodiments of the present disclosure based on the locally stored knowledge-graph, training to obtain the inference model, without obtaining the knowledge-graph from the server 120.
Based on the trained reasoning model, the embodiment of the disclosure further provides a reasoning method which can infer the question text and accurately and efficiently determine the answer corresponding to the question text.
The reasoning method of the disclosed embodiments may be performed, for example, in the server 120 shown in fig. 1. Based on the trained inference model, server 120 may perform the inference methods of embodiments of the present disclosure, providing inference services (or question-and-answer services) to client devices 101, 102, 103, 104, 105, and 106. For example, the server 120 may receive the question text uploaded by the client device, input the question text into an inference model, obtain an answer corresponding to the question text, and return the answer to the client device.
In other embodiments, the reasoning methods of the disclosed embodiments may also be performed in the client devices 101, 102, 103, 104, 105, and 106 shown in FIG. 1. For example, the client device may receive a question text input by a user (e.g., by text, voice, etc.), input the question text into an inference model stored locally at the client, obtain an answer corresponding to the question text, and feed back the answer to the user in text or voice form.
Fig. 2 illustrates a flow chart of a training method 200 of an inference model in accordance with an embodiment of the present disclosure. As previously described, the method 200 may be performed at a server (e.g., the server 120 shown in fig. 1) or a client device (e.g., the client devices 101, 102, 103, 104, 105, 106 shown in fig. 1), i.e., the subject of execution of the steps of the method 200 may be the server 120 shown in fig. 1 or the client devices 101, 102, 103, 104, 105, 106.
As shown in fig. 2, the method 200 includes:
step 210, sampling a ring subgraph from a knowledge graph;
step 220, generating an inference statement corresponding to the ring subgraph; and
step 230, training an inference model using the inference statement as a training sample.
According to the embodiments of the present disclosure, a ring subgraph is sampled from a knowledge graph, an inference statement corresponding to the ring subgraph is generated, and an inference model is trained using the inference statement as a training sample. A ring subgraph in a knowledge graph is a closed loop formed by a plurality of entities connected through relationship edges, and can represent a process of relational reasoning among the entities. The inference statement is a textual representation corresponding to the ring subgraph, and accordingly it is a corpus that expresses the reasoning process. Training the inference model with such inference statements allows the model to learn the knowledge reasoning process in the knowledge graph directly, so that the model acquires knowledge reasoning capability and can perform accurate and efficient knowledge reasoning.
The various steps of method 200 are described in detail below.
Step 210, sampling a ring subgraph from the knowledge graph.
As previously described, a knowledge graph is a structured semantic knowledge base that can be represented in the form of a network topology graph consisting of nodes and edges, where the nodes represent entities and the edges represent relationships between the entities. Knowledge graphs can be of different types, and the types of entities and of relationships between entities they contain can vary. The relationships between entities may or may not be directional (accordingly, the knowledge graph is a directed or an undirected graph). Depending on the types of entities and relationships, a knowledge graph can store different knowledge and be applied in different fields. The present disclosure does not limit the specific content of the knowledge graph.
For example, in a knowledge graph in the field of social relationship analysis, entities may be natural people, and relationships between entities may be interpersonal relationships, such as father-son relationships, couple relationships, friends relationships, and the like. For another example, in a knowledge graph in the field of medical analysis, an entity may be a disease, a symptom, an examination item, or the like, and a relationship between entities may include a positive relationship (between a disease and a symptom), a negative relationship, an implementation relationship (between a disease and an examination item), or the like. Also for example, in a knowledge graph in the field of business analysis, the entities may be natural persons, businesses, contracts, accounts, etc., and the relationships between the entities include a high-management relationship (between natural persons and businesses), a stock-holding relationship (between natural persons and businesses, between businesses and businesses), a guarantee relationship, etc.
In embodiments of the present disclosure, a ring subgraph is a part of the knowledge graph comprising a plurality of entities connected by relationship edges to form a closed loop.
As described above, the knowledge graph may be a directed graph or an undirected graph. Whenever two entities are connected by a relationship edge, regardless of the direction of that edge, there is a relationship between the two entities that can be used in forming an inference process (i.e., as a link in an inference chain). Thus, according to some embodiments, the directionality of the relationship edges may be ignored when sampling the ring subgraph in step 210, i.e., the knowledge graph is treated as an undirected graph. In general, however, the directionality of the relationship edges needs to be considered when the triples are represented and the inference statements are generated in the subsequent step 220. The representation of the triples and the specific steps for generating inference statements are described in more detail below.
It will be appreciated that, in other embodiments, the directionality of the edges may also be considered when sampling the ring subgraph in step 210. In a ring subgraph sampled this way, the relationship edges are connected head to tail, and the end entity pointed to by one relationship edge is the start entity of the next relationship edge.
Typically, to form a ring, a ring subgraph needs to include at least three relationship edges, i.e., at least three triples. Each triplet includes a first entity, a second entity, and a relationship between the first entity and the second entity, and may be represented in the form (first entity, relationship, second entity). Specifically, when the knowledge graph is a directed graph (i.e., the relationship is directed), the first entity is the start entity of the relationship edge and the second entity is the end entity of the relationship edge; when the knowledge graph is an undirected graph (i.e., the relationship is undirected), the first entity and the second entity may be either of the entities connected by the relationship edge.
According to some embodiments, in order to improve sampling efficiency and model training efficiency, and to make the learning of the reasoning process more effective, a threshold may be set on the number of triples included in a ring subgraph. For example, with a threshold of 6, a sampled ring subgraph includes at most 6 triples, and ring subgraphs containing more than 6 triples are not sampled.
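As one possible realization of step 210, the following sketch samples a ring subgraph by a depth-first search that treats the knowledge graph as undirected and caps the number of triples at a threshold, as described above. This is a minimal sketch under assumed data structures (triples as (head, relation, tail) tuples); the function and variable names are illustrative and not from the patent.

    import random
    from collections import defaultdict

    def sample_ring_subgraph(triples, max_triples=6):
        """Return a list of triples forming a closed loop of 3 to max_triples edges, or None."""
        adj = defaultdict(list)  # node -> [(neighbor, triple), ...]
        for h, r, t in triples:
            adj[h].append((t, (h, r, t)))
            adj[t].append((h, (h, r, t)))  # ignore edge direction while sampling

        start = random.choice(list(adj))

        def dfs(node, visited, path):
            if len(path) >= max_triples:  # enforce the triple-count threshold
                return None
            neighbors = adj[node][:]
            random.shuffle(neighbors)
            for nxt, triple in neighbors:
                if triple in path:  # never reuse a relationship edge
                    continue
                if nxt == start and len(path) >= 2:
                    return path + [triple]  # loop closed with at least 3 edges
                if nxt not in visited:
                    found = dfs(nxt, visited | {nxt}, path + [triple])
                    if found:
                        return found
            return None

        return dfs(start, {start}, [])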
Fig. 3 illustrates a schematic diagram of an exemplary knowledge graph 300 according to an embodiment of the present disclosure. The knowledge graph 300 is a directed social-relationship knowledge graph, in which the nodes are natural-person entities (such as Zhang San, Li Wu, and Wang Liu) and the edges are interpersonal relationships (such as father, mother, and wife) between the natural-person entities.
Figs. 4A, 4B, and 4C are schematic diagrams showing three ring subgraphs 400A, 400B, and 400C sampled from the knowledge graph 300 shown in fig. 3, where the knowledge graph 300 is treated as an undirected graph during sampling, i.e., the directionality of the relationship edges is not considered.
As shown in fig. 4A, the ring subgraph 400A includes three entities, namely Zhang Yi, Zhang San, and Li Wu, which are connected by the corresponding relationship edges to form a closed loop. The ring subgraph 400A includes three triples, namely (Zhang San, father, Zhang Yi), (Zhang San, mother, Li Wu), and (Li Wu, husband, Zhang Yi).
As shown in fig. 4B, the ring subgraph 400B includes four entities, namely Zhang Yi, Zhang Er, Zhang San, and Li Wu, which are connected by the corresponding relationship edges to form a closed loop. The ring subgraph 400B includes four triples, namely (Zhang San, mother, Li Wu), (Li Wu, husband, Zhang Yi), (Zhang Yi, brother, Zhang Er), and (Zhang San, uncle, Zhang Er).
As shown in fig. 4C, the ring subgraph 400C includes five entities, namely Zhang Yi, Zhang Er, Zhang San, Zhang Si, and Li Wu, which are connected by the corresponding relationship edges to form a closed loop. The ring subgraph 400C includes five triples, namely (Zhang San, son, Zhang Si), (Zhang Si, grandmother, Li Wu), (Li Wu, husband, Zhang Yi), (Zhang Yi, brother, Zhang Er), and (Zhang San, uncle, Zhang Er).
Step 220, generating an inference statement corresponding to the ring subgraph.
According to some embodiments, the inference statement may be generated according to the following steps 222 and 224:
step 222, generating a clause corresponding to each of the at least three triples included in the ring subgraph; and
step 224, splicing the clauses corresponding to the at least three triples to obtain the inference statement.
According to some embodiments, for step 222, the clause corresponding to each triplet may be generated based on a template, namely: obtaining a preset template, where the template includes a first slot for filling in the first entity, a second slot for filling in the relationship between the first entity and the second entity, and a third slot for filling in the second entity; and filling the first entity, the relationship, and the second entity of the triplet into the corresponding slots of the template to obtain the clause corresponding to the triplet.
For example, the template may be "the [slot2] of [slot1] is [slot3]", where slot1, slot2, and slot3 denote the first slot for filling in the first entity, the second slot for filling in the relationship, and the third slot for filling in the second entity, respectively. Taking the triplet (Zhang San, father, Zhang Yi) in the ring subgraph 400A shown in fig. 4A as an example, Zhang San, father, and Zhang Yi are filled into slot1, slot2, and slot3 of the template, respectively, yielding the clause "the father of Zhang San is Zhang Yi".
It will be appreciated that the form of the template is not limited to the above embodiment. Besides the template in the above embodiment, other forms of template may be used, such as "[slot3] is the [slot2] of [slot1]", "[slot1] and [slot3] are in a [slot2] relationship", "[slot1] has a [slot2] relationship with [slot3]", and the like.
Moreover, according to some embodiments, different templates may be used for different triples in the ring subgraph. For example, one triplet in the ring subgraph may be used as the answer triplet and the other triples as question-condition triples. For a condition triplet, the clause may be generated using the template "the [slot2] of [slot1] is [slot3]"; for the answer triplet, the clause may be generated using the template "then the [slot2] of [slot1] is [slot3]". For example, the ring subgraph 400A shown in fig. 4A includes three triples: (Zhang San, father, Zhang Yi), (Zhang San, mother, Li Wu), and (Li Wu, husband, Zhang Yi). The first two triples, namely (Zhang San, father, Zhang Yi) and (Zhang San, mother, Li Wu), may be taken as question-condition triples, and the last triplet, namely (Li Wu, husband, Zhang Yi), as the answer triplet. Using the template "the [slot2] of [slot1] is [slot3]", the clause "the father of Zhang San is Zhang Yi" is generated for (Zhang San, father, Zhang Yi), and the clause "the mother of Zhang San is Li Wu" is generated for (Zhang San, mother, Li Wu); using the template "then the [slot2] of [slot1] is [slot3]", the clause "then the husband of Li Wu is Zhang Yi" is generated for (Li Wu, husband, Zhang Yi).
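In step 222, template filling amounts to simple slot substitution. The sketch below is an assumed minimal implementation; the template strings follow the examples above, and the helper name fill_template is illustrative.

    CONDITION_TEMPLATE = "the [slot2] of [slot1] is [slot3]"
    ANSWER_TEMPLATE = "then the [slot2] of [slot1] is [slot3]"

    def fill_template(template, triple):
        first_entity, relation, second_entity = triple
        return (template
                .replace("[slot1]", first_entity)    # first slot: first entity
                .replace("[slot2]", relation)        # second slot: relationship
                .replace("[slot3]", second_entity))  # third slot: second entity

    print(fill_template(CONDITION_TEMPLATE, ("Zhang San", "father", "Zhang Yi")))
    # -> the father of Zhang San is Zhang Yi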
According to further embodiments, for step 222, the clause corresponding to a triplet may be generated in a serialized manner, namely: splicing the first entity, the relationship, and the second entity of the triplet to obtain the clause corresponding to the triplet.
Taking the triplet (Zhang San, father, Zhang Yi) in the ring subgraph 400A shown in fig. 4A as an example, splicing Zhang San, father, and Zhang Yi yields the clause "Zhang San father Zhang Yi".
After the clauses corresponding to the triples in the ring subgraph are generated in step 222, they are spliced in step 224 to obtain the inference statement. According to some embodiments, two adjacent clauses in the inference statement are separated by a separator to indicate that the two clauses are knowledge content derived from different triples, so that the inference model can better distinguish the content in the inference statement and learn the reasoning process it expresses more accurately. The separator may be, for example, a comma "," or a semicolon ";".
Still taking the ring subgraph 400A shown in fig. 4A as an example, using the template "the [slot2] of [slot1] is [slot3]", the clauses "the father of Zhang San is Zhang Yi", "the mother of Zhang San is Li Wu", and "the husband of Li Wu is Zhang Yi" are generated for the triples (Zhang San, father, Zhang Yi), (Zhang San, mother, Li Wu), and (Li Wu, husband, Zhang Yi), respectively; the three clauses are then spliced, with comma separators between them, to obtain the inference statement "the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi".
Similarly, for the ring subgraph 400B shown in fig. 4B, the inference statement "the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi, the brother of Zhang Yi is Zhang Er, the uncle of Zhang San is Zhang Er" may be obtained. For the ring subgraph 400C shown in fig. 4C, the inference statement "the son of Zhang San is Zhang Si, the grandmother of Zhang Si is Li Wu, the husband of Li Wu is Zhang Yi, the brother of Zhang Yi is Zhang Er, the uncle of Zhang San is Zhang Er" may be obtained.
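Putting steps 222 and 224 together, a ring subgraph becomes an inference statement by generating one clause per triplet and joining the clauses with a separator. The sketch below reuses fill_template from the sketch above; the function name is an assumption.

    def build_inference_statement(ring_triples, separator=", "):
        clauses = [fill_template(CONDITION_TEMPLATE, t) for t in ring_triples]
        return separator.join(clauses)

    ring_400a = [
        ("Zhang San", "father", "Zhang Yi"),
        ("Zhang San", "mother", "Li Wu"),
        ("Li Wu", "husband", "Zhang Yi"),
    ]
    print(build_inference_statement(ring_400a))
    # -> the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu,
    #    the husband of Li Wu is Zhang Yi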
Step 230, training an inference model using the inference statement as a training sample.
According to some embodiments, step 230 may include the following steps 232-236:
step 232, replacing an element in the inference statement with a preset mask to obtain a question statement, where the element is derived from the ring subgraph;
step 234, inputting the question statement into the inference model and obtaining a predicted answer output by the inference model; and
step 236, adjusting the parameters of the inference model based on the element and the predicted answer.
It will be appreciated that, referring to the description of step 220 (in particular step 222) above, when the clause corresponding to a triplet is generated using a template, the linguistic elements in the inference statement fall into two parts: one part derived from the triples in the ring subgraph, and the other derived from the template. For step 232, the masked element is derived from the ring subgraph; that is, the masked element may be the first entity, the second entity, or the relationship of any triplet in the ring subgraph. The inference model is thereby made to learn the reasoning process and predict the masked element, achieving the effect of knowledge reasoning.
As described above, the inference statement includes a plurality of clauses, each corresponding to a triplet in the ring subgraph. According to some embodiments, the element replaced with the mask in step 232 may be located in the last clause of the inference statement. Replacing an element in the last clause with the mask, i.e., uniformly using the last clause as the answer to be predicted and the preceding clauses as the question conditions, makes the inference model focus more on learning the knowledge reasoning process, avoids interference from positional factors, and improves the accuracy of model reasoning.
According to some embodiments, the mask may be a predetermined string, such as "[mask]". According to other embodiments, a mask table including a plurality of masks may be preset, and one mask may be randomly selected from the mask table to replace the element in the inference statement.
Still taking the ring subgraph 400A shown in fig. 4A as an example, its corresponding inference statement is "the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi". An element derived from a triplet, for example "Li Wu" in the second clause, may be replaced with the mask "[mask]", resulting in the question statement "the father of Zhang San is Zhang Yi, the mother of Zhang San is [mask], the husband of Li Wu is Zhang Yi". As another example, "husband" in the last clause may be replaced with the mask "[mask]", resulting in the question statement "the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu, the [mask] of Li Wu is Zhang Yi".
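As an assumed realization of step 232 under the last-clause convention described above, the following sketch masks one element of the final clause to produce the question statement; the helper name is illustrative, and the mask string follows the examples.

    def mask_element(clauses, element, mask="[mask]", separator=", "):
        """Replace `element` (an entity or a relationship) in the last clause with the mask."""
        masked_last = clauses[-1].replace(element, mask, 1)  # mask the first occurrence only
        return separator.join(clauses[:-1] + [masked_last])

    clauses = [
        "the father of Zhang San is Zhang Yi",
        "the mother of Zhang San is Li Wu",
        "the husband of Li Wu is Zhang Yi",
    ]
    print(mask_element(clauses, "husband"))
    # -> the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu,
    #    the [mask] of Li Wu is Zhang Yi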
The inference model may be any language model, such as a BERT model or a GPT model; the present disclosure does not limit the specific structure of the inference model.
According to some embodiments, for step 236, the parameters of the inference model are adjusted based on the masked element and the predicted answer output by the inference model. The masked element is the true answer; a loss value can be determined from the element and the predicted answer, and the parameters of the inference model can be adjusted according to the loss value. The process of determining the loss value and adjusting the model parameters may be iterated many times, so that the parameters are gradually optimized and the predicted answer gradually approaches the true answer (i.e., the masked element). Typically, training of the inference model is complete when the loss value is less than a threshold or the number of iterations reaches a threshold.
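For concreteness, one training iteration of steps 232 to 236 could look like the sketch below, which fine-tunes a BERT-style masked language model with the Hugging Face transformers library. The checkpoint name, the single-token assumption for the masked element, and the loop structure are illustrative assumptions; the patent does not prescribe a particular model or framework.

    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    statement = ("the father of Zhang San is Zhang Yi, "
                 "the mother of Zhang San is Li Wu, "
                 "the husband of Li Wu is Zhang Yi")
    question = statement.replace("husband", tokenizer.mask_token, 1)  # step 232

    inputs = tokenizer(question, return_tensors="pt")
    labels = tokenizer(statement, return_tensors="pt")["input_ids"]
    assert labels.shape == inputs["input_ids"].shape  # holds if "husband" is one token
    labels[inputs["input_ids"] != tokenizer.mask_token_id] = -100  # loss only at the mask

    outputs = model(**inputs, labels=labels)  # step 234: predict the masked element
    outputs.loss.backward()                   # step 236: adjust the model parameters
    optimizer.step()
    optimizer.zero_grad()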
According to the training method 200 of the above-described inference model, an inference model having knowledge inference capability can be generated. Based on the reasoning model, the embodiment of the disclosure also provides a reasoning method which can realize accurate and efficient knowledge reasoning. In particular, the reasoning method of the present disclosure may be applied in a question-answering system for providing a question-answering service to a user.
Fig. 5 shows a flow chart of an inference method 500 according to an embodiment of the disclosure. As previously described, the method 500 may be performed at a server (e.g., the server 120 shown in fig. 1) or a client device (e.g., the client devices 101, 102, 103, 104, 105, 106 shown in fig. 1), i.e., the subject of execution of the steps of the method 500 may be the server 120 shown in fig. 1 or the client devices 101, 102, 103, 104, 105, 106.
As shown in fig. 5, the method 500 includes:
step 510, inputting a question text into an inference model, wherein the inference model is trained according to the training method of the inference model of the embodiments of the present disclosure; and
step 520, obtaining an answer corresponding to the question text output by the reasoning model.
According to the embodiment of the disclosure, accurate and efficient knowledge reasoning can be realized based on the reasoning model.
According to some embodiments, the method 500 further comprises step 530 (step 530 is not shown in fig. 5): a question text input by a user is acquired, wherein the question text comprises a plurality of entities and relations among the entities.
According to some embodiments, the client device may receive question text entered by a user by way of text. According to other embodiments, the client device may also receive voice input from the user, and may obtain the question text by recognizing the voice of the user. In the case where the subject of execution of the method 500 is a server, the client device uploads the question text entered by the user to the server, and accordingly, in step 530, the server obtains the question text entered by the user, and then performs step 510 to input the question text into the inference model. In the case where the subject of execution of the method 500 is a client device, the client device may obtain the question text entered by the user in step 530, and then execute step 510 to input the question text into the inference model.
According to other embodiments, the client device may receive a storage location (e.g., cloud storage address, identification of a local database, etc.) of the user-entered question text. In the case where the subject of execution of the method 500 is a server, the client device uploads the storage location of the question text entered by the user to the server, and accordingly, in step 530, the server may obtain the question text specified by the user from the corresponding storage location, and then execute step 510 to input the question text into the inference model. In the case where the subject of execution of the method 500 is a client device, the client device may obtain a storage location for the question text entered by the user, obtain the question text from the storage location, and then execute step 510 to input the question text into the inference model in step 530.
The inference model can infer entities and relationships in the question text and output corresponding answers.
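Under the same assumptions as the training sketch above (and reusing its tokenizer and model), steps 510 and 520 reduce to running the trained model on a question text and decoding the prediction at the mask position; the helper name answer_question is illustrative.

    import torch

    def answer_question(question_text):
        inputs = tokenizer(question_text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
        predicted_id = int(logits[0, mask_pos].argmax())
        return tokenizer.decode([predicted_id])

    question = ("the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi, "
                "the brother of Zhang Yi is Zhang Er, "
                f"then the {tokenizer.mask_token} of Zhang San is Zhang Er")
    print(answer_question(question))  # a trained model is expected to decode "uncle" here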
According to some embodiments, the entities included in the question text of step 510 may be entities that have appeared in the training samples (i.e., inference statements) of the inference model. In this case, the inference model can accurately output the answer corresponding to the question text. For example, the question text may be "the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi, the brother of Zhang Yi is Zhang Er; then what is Zhang Er to Zhang San?" The inference model may output the corresponding answer "uncle".
According to other embodiments, some or all of the entities included in the question text may never have appeared in the training samples (i.e., inference statements) of the inference model. In this case, although the training samples do not include these entities, the inference model has already learned the process of relational reasoning between entities from other entities (i.e., the entities in the inference statements); therefore, when the question text contains unknown entities, the inference model can still accurately infer the corresponding answer from those unknown entities and their relationships given in the question text.
For example, the question text may be "the father of Wu Yi is Wu Er, the mother of Wu Yi is Zheng San; then what is Wu Er to Zheng San?", where Wu Yi, Wu Er, and Zheng San never appeared in the training samples of the inference model. However, from the training sample "the father of Zhang San is Zhang Yi, the mother of Zhang San is Li Wu, the husband of Li Wu is Zhang Yi", the inference model has already learned the reasoning process between parent and spouse relationships, so for this question text it can infer that the answer is "husband".
According to an embodiment of the disclosure, a training device of the inference model is also provided.
Fig. 6 shows a block diagram of a training apparatus 600 of an inference model according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus 600 includes:
a sampling module 610 configured to sample a ring subgraph from a knowledge graph;
a generation module 620 configured to generate an inference statement corresponding to the ring subgraph; and
a training module 630 configured to train an inference model using the inference statement as a training sample.
According to the embodiments of the present disclosure, a ring subgraph is sampled from a knowledge graph, an inference statement corresponding to the ring subgraph is generated, and an inference model is trained using the inference statement as a training sample. A ring subgraph in a knowledge graph is a closed loop formed by a plurality of entities connected through relationship edges, and can represent a process of relational reasoning among the entities. The inference statement is a textual representation corresponding to the ring subgraph, and accordingly it is a corpus that expresses the reasoning process. Training the inference model with such inference statements allows the model to learn the knowledge reasoning process in the knowledge graph directly, so that the model acquires knowledge reasoning capability and can perform accurate and efficient knowledge reasoning.
According to some embodiments, the ring subgraph includes at least three triples, and the generation module includes: a clause generation unit configured to generate a clause corresponding to each of the at least three triples; and a splicing unit configured to splice the clauses corresponding to the at least three triples to obtain the inference statement.
According to some embodiments, the triplet includes a first entity, a second entity, and a relationship between the first entity and the second entity, and the clause generation unit includes: a template acquisition subunit configured to acquire a preset template, where the template includes a first slot for filling in the first entity, a second slot for filling in the relationship between the first entity and the second entity, and a third slot for filling in the second entity; and a slot filling subunit configured to fill the first entity, the relationship, and the second entity of the triplet into the corresponding slots of the template to obtain the clause corresponding to the triplet.
According to some embodiments, the triplet comprises a first entity, a second entity, and a relationship of the first entity to the second entity, and wherein the clause generating unit comprises: and the splicing subunit is configured to splice the first entity, the relation and the second entity of the triplet to obtain clauses corresponding to the triplet.
According to some embodiments, two adjacent clauses in the inference statement are separated by a separator.
According to some embodiments, the training module includes: a question generation unit configured to replace an element in the inference statement with a preset mask to obtain a question statement, where the element is derived from the ring subgraph; an answer prediction unit configured to input the question statement into the inference model and obtain a predicted answer output by the inference model; and a parameter adjustment unit configured to adjust parameters of the inference model based on the element and the predicted answer.
According to some embodiments, the element is a first entity, a second entity, or a relationship of the first entity and the second entity of any triplet in the ring subgraph.
According to some embodiments, the inference statement includes a plurality of clauses, each corresponding to a triplet in the ring subgraph, the element being located in a last clause of the inference statement.
Fig. 7 shows a block diagram of the structure of an inference apparatus 700 according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus 700 includes:
a question input module 710 configured to input a question text into an inference model, wherein the inference model is trained according to the training method of the inference model described above; and
an answer acquisition module 720 configured to obtain an answer corresponding to the question text output by the inference model.
According to the embodiment of the disclosure, accurate and efficient knowledge reasoning can be realized based on the reasoning model.
It should be appreciated that the various modules of the apparatus 600 shown in fig. 6 may correspond to the various steps in the method 200 described with reference to fig. 2, and the various modules of the apparatus 700 shown in fig. 7 may correspond to the various steps in the method 500 described with reference to fig. 5. Thus, the operations, features and advantages described above with respect to method 200 apply equally to apparatus 600 and the modules comprising it, and the operations, features and advantages described above with respect to method 500 apply equally to apparatus 700 and the modules comprising it. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module. For example, the sampling module 610 and the generation module 620 described above may be combined into a single module in some embodiments.
It should also be appreciated that various techniques may be described herein in the general context of software and hardware elements or program modules. The various modules described above with respect to figs. 6 and 7 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, these modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the sampling module 610, the generation module 620, the training module 630, the question input module 710, and the answer acquisition module 720 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to Fig. 8, a block diagram of an electronic device 800, which may be a server or a client of the present disclosure and which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in the device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks and optical disks. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, e.g., Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as the method 200 and/or the method 500 described above. For example, in some embodiments, the method 200 and/or the method 500 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of method 200 and/or method 500 described above may be performed. Alternatively, in other embodiments, computing unit 801 may be configured to perform method 200 and/or method 500 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects can be achieved, which is not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatuses are merely illustrative embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (17)

1. A training method of an inference model, comprising:
sampling a ring subgraph from a knowledge graph;
generating an inference statement corresponding to the ring subgraph;
replacing one element in the inference statement with a preset mask to obtain a question sentence, wherein the element is derived from the ring subgraph;
inputting the question sentence into the inference model, and obtaining a predicted answer output by the inference model; and
adjusting parameters of the inference model based on the element and the predicted answer,
wherein the inference model generated by the training takes a question text as input and outputs an answer corresponding to the question text, and wherein all entities in the question text are contained in the inference statement, or some or all of the entities in the question text are not contained in the inference statement.
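For concreteness only, the following is a minimal sketch of one way the claimed training step could be realized with a BERT-style masked-language model; the Hugging Face-style tokenizer/model interface and the single-token answer are assumptions, not an implementation prescribed by the claim.

```python
# Minimal sketch of the claimed training step, assuming a BERT-style
# masked-language model with a Hugging Face-like interface; all names
# are assumptions, not the patent's implementation.
import torch
import torch.nn.functional as F

def training_step(model, tokenizer, question, masked_element, optimizer):
    # Encode the question sentence containing the preset mask.
    inputs = tokenizer(question, return_tensors="pt")
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

    # Locate the mask position and the id of the masked element
    # (assumed to be a single token for simplicity).
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    gold_id = tokenizer.convert_tokens_to_ids(masked_element)

    # Adjust parameters based on the element and the predicted answer.
    loss = F.cross_entropy(logits[0, mask_pos].unsqueeze(0),
                           torch.tensor([gold_id]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```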
2. The method of claim 1, wherein the ring subgraph comprises at least three triplets, and
wherein generating the inference statement corresponding to the ring subgraph comprises:
generating a clause corresponding to each triplet of the at least three triplets; and
splicing the clauses corresponding to the at least three triplets to obtain the inference statement.
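As a non-limiting sketch of the two steps of claim 2, one clause is generated per triplet by direct concatenation (cf. claim 4 below) and the clauses are spliced with a separator (cf. claim 5 below); the separator string "[SEP]" is an assumption.

```python
# Sketch of claim 2: one clause per triplet, then splicing with an
# assumed "[SEP]" separator (the patent does not mandate this token).
SEP = " [SEP] "

def clause(triplet):
    first_entity, relation, second_entity = triplet
    return f"{first_entity} {relation} {second_entity}"

def inference_statement(ring_subgraph):
    return SEP.join(clause(t) for t in ring_subgraph)

print(inference_statement([("A", "knows", "B"),
                           ("B", "knows", "C"),
                           ("C", "knows", "A")]))
```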
3. The method of claim 2, wherein the triplet includes a first entity, a second entity, and a relationship of the first entity to the second entity, and
wherein generating a clause corresponding to each of the at least three triplets comprises:
acquiring a preset template, wherein the template comprises a first slot for filling the first entity, a second slot for filling the relationship between the first entity and the second entity, and a third slot for filling the second entity; and
filling the first entity, the relationship between the first entity and the second entity, and the second entity of the triplet into the corresponding slots of the template to obtain a clause corresponding to the triplet.
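As a non-limiting sketch of the template variant of claim 3; the template text itself is an assumption for illustration.

```python
# Sketch of claim 3: a preset template with slots for the first
# entity, the relationship, and the second entity. The template
# wording is assumed, not specified by the patent.
TEMPLATE = "{first} {relation} {second}."

def clause_from_template(triplet, template=TEMPLATE):
    first, relation, second = triplet
    return template.format(first=first, relation=relation, second=second)

print(clause_from_template(("A", "is the father of", "B")))
# -> "A is the father of B."
```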
4. The method of claim 2, wherein the triplet includes a first entity, a second entity, and a relationship of the first entity to the second entity, and
wherein generating a clause corresponding to each of the at least three triplets comprises:
splicing the first entity, the relationship, and the second entity of the triplet to obtain a clause corresponding to the triplet.
5. The method of any of claims 2-4, wherein two adjacent clauses in the inference statement are separated by a separator.
6. The method of claim 1, wherein the element is a first entity, a second entity, or a relationship of a first entity and a second entity of any triplet in the ring subgraph.
7. The method of claim 1 or 6, wherein the inference statement comprises a plurality of clauses, each clause corresponding to a triplet in the ring subgraph, and the element is located in the last clause of the inference statement.
8. A method of reasoning, comprising:
inputting a question text into an inference model, wherein the inference model is trained according to the method of any one of claims 1-7; and
obtaining an answer, corresponding to the question text, output by the inference model.
9. The inference method of claim 8, further comprising:
acquiring the question text input by a user, wherein the question text comprises a plurality of entities and relationships among the entities.
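To make the inference method concrete, a hedged sketch of one possible deployment follows; decoding by arg-max at the mask position mirrors the training sketch above and is an assumption, not the patent's API.

```python
# Hypothetical inference with the trained model (claims 8-9): decode
# the token predicted at the mask position. The interface mirrors the
# training sketch above and is assumed, not prescribed by the patent.
import torch

def predict_answer(model, tokenizer, question_text):
    inputs = tokenizer(question_text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    pred_id = int(logits[0, mask_pos].argmax(-1))
    return tokenizer.convert_ids_to_tokens(pred_id)

# Example question whose entities may or may not all appear in any
# training inference statement:
# predict_answer(model, tokenizer,
#     "A is the father of B [SEP] B is the brother of C "
#     "[SEP] A is the [MASK] of C")
```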
10. A training apparatus for an inference model, comprising:
a sampling module configured to sample a ring subgraph from a knowledge graph;
a generation module configured to generate an inference statement corresponding to the ring subgraph; and a training module comprising:
a question generation unit configured to replace one element in the inference statement with a preset mask to obtain a question sentence, wherein the element is derived from the ring subgraph;
An answer prediction unit configured to input the question sentence into the inference model and obtain a predicted answer output by the inference model; and
a parameter adjustment unit configured to adjust parameters of the inference model based on the element and the predicted answer,
wherein the inference model generated by the training takes a question text as input and outputs an answer corresponding to the question text, and wherein all entities in the question text are contained in the inference statement, or some or all of the entities in the question text are not contained in the inference statement.
11. The apparatus of claim 10, wherein the ring subgraph comprises at least three triplets, and
wherein, the generating module includes:
a clause generating unit configured to generate a clause corresponding to each triplet of the at least three triplets; and
a splicing unit configured to splice the clauses corresponding to the at least three triplets to obtain the inference statement.
12. The apparatus of claim 11, wherein the triplet includes a first entity, a second entity, and a relationship of the first entity to the second entity, and
Wherein the clause generating unit includes:
a template acquisition subunit configured to acquire a preset template, wherein the template includes a first slot for filling the first entity, a second slot for filling the second entity, and a third slot for filling the relationship between the first entity and the second entity; and
a slot filling subunit configured to fill the first entity, the second entity, and the relationship between the first entity and the second entity of the triplet into the corresponding slots of the template to obtain a clause corresponding to the triplet.
13. The apparatus of claim 11, wherein the triplet includes a first entity, a second entity, and a relationship of the first entity to the second entity, and
wherein the clause generating unit includes:
a splicing subunit configured to splice the first entity, the relationship, and the second entity of the triplet to obtain a clause corresponding to the triplet.
14. The apparatus of claim 10, wherein the inference statement comprises a plurality of clauses, each clause corresponding to a triplet in the ring subgraph, and the element is located in the last clause of the inference statement.
15. An inference apparatus comprising:
a question input module configured to input a question text into an inference model, wherein the inference model is trained according to the method of any one of claims 1-7; and
an answer acquisition module configured to acquire an answer, corresponding to the question text, output by the inference model.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
17. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
CN202110854886.6A 2021-07-28 2021-07-28 Training method of reasoning model, reasoning method and device Active CN113590782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110854886.6A CN113590782B (en) 2021-07-28 2021-07-28 Training method of reasoning model, reasoning method and device

Publications (2)

Publication Number Publication Date
CN113590782A CN113590782A (en) 2021-11-02
CN113590782B true CN113590782B (en) 2024-02-09

Family

ID=78250959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110854886.6A Active CN113590782B (en) 2021-07-28 2021-07-28 Training method of reasoning model, reasoning method and device

Country Status (1)

Country Link
CN (1) CN113590782B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596552B (en) * 2022-03-09 2023-06-23 阿波罗智能技术(北京)有限公司 Information processing method, training method, device, equipment, vehicle and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701253A (en) * 2016-03-04 2016-06-22 南京大学 Chinese natural language interrogative sentence semantization knowledge base automatic question-answering method
CN107391512A (en) * 2016-05-17 2017-11-24 北京邮电大学 The method and apparatus of knowledge mapping prediction
CN110110043A (en) * 2019-04-11 2019-08-09 中山大学 A kind of multi-hop visual problem inference pattern and its inference method
CN110263324A (en) * 2019-05-16 2019-09-20 华为技术有限公司 Text handling method, model training method and device
CN110399457A (en) * 2019-07-01 2019-11-01 吉林大学 A kind of intelligent answer method and system
CN110413732A (en) * 2019-07-16 2019-11-05 扬州大学 The knowledge searching method of software-oriented defect knowledge
CN111144115A (en) * 2019-12-23 2020-05-12 北京百度网讯科技有限公司 Pre-training language model obtaining method and device, electronic equipment and storage medium
CN111475658A (en) * 2020-06-12 2020-07-31 北京百度网讯科技有限公司 Knowledge representation learning method, device, equipment and storage medium
CN111782769A (en) * 2020-07-01 2020-10-16 重庆邮电大学 Intelligent knowledge graph question-answering method based on relation prediction
CN112015973A (en) * 2019-05-31 2020-12-01 北京百度网讯科技有限公司 Relation reasoning method and terminal for heterogeneous network
CN113127623A (en) * 2021-05-06 2021-07-16 东南大学 Knowledge base problem generation method based on hybrid expert model and joint learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11954613B2 (en) * 2018-02-01 2024-04-09 International Business Machines Corporation Establishing a logical connection between an indirect utterance and a transaction


Also Published As

Publication number Publication date
CN113590782A (en) 2021-11-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant