CN115358341B - Training method and system for reference disambiguation based on a relation model - Google Patents
Training method and system for reference disambiguation based on a relation model
- Publication number
- CN115358341B (application CN202211050793.9A)
- Authority
- CN
- China
- Prior art keywords
- relation
- training
- subject
- training data
- disambiguation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis; G06F40/268—Morphological analysis
- G06F40/20—Natural language analysis; G06F40/279—Recognition of textual entities; G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking; G06F40/295—Named entity recognition
- G06F40/30—Semantic analysis
Abstract
The invention relates to the technical field of artificial intelligence, and in particular to a training method and system for reference disambiguation based on a relation model. In any sample, all reference words are labeled as subject labels to form the subject training data. Following the order in which the reference words appear, when the current reference word serves as the subject, identifiers are added on its two sides and the previous reference word is labeled as the object with a reference-relation label, forming one piece of relation training data; taking every reference word as the subject in turn yields the relation training set. The feature vector of each character in the subject training data and the relation training set is then obtained, and the subject training data, relation training sets and feature vectors of all samples are input into the relation model for training, so that the relation model extracts correct relations. This solves the problem that existing relation models cannot identify the relation between an entity and reference words that refer to it repeatedly in one text.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a training method and system for reference disambiguation based on a relation model.
Background
The relation extraction task is to find out, from a sentence, which entities stand in which relations with which other entities. It is an important subtask of information extraction: it extracts, from complex unstructured text, structured data that a machine can understand. After relation extraction, unstructured text yields structured graph data, enabling cross-text association between entities. Here a relation between entities in the text is a subject-predicate-object triple: (subject S, predicate P, object O). Among relation extraction models, one currently mainstream method is multi-round question answering.
A paper by Li Xiaoya et al., published in the Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Florence, Italy, 2019), pp. 1340-1350, converts entity and relation extraction into a multi-turn question answering task, i.e. a task of identifying answer spans from context: multiple rounds of questions are asked for each entity type in the document, thereby extracting all relations. The disadvantage is that if several reference words referring to the same entity appear in the text, the method can neither identify what the relation between each reference word and the entity is, nor determine whether the repeated reference words refer to the same entity.
Disclosure of Invention
To solve the above technical problem, the invention aims to provide a training method and system for reference disambiguation based on a relation model. The adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a training method for reference disambiguation based on a relational model, the training method including:
s100, acquiring a training set T containing n text samples, T = {T_1, T_2, …, T_n}, wherein the i-th text sample T_i contains m entities Su_i = {Su_i,1, Su_i,2, …, Su_i,m}; R(j) denotes the number of reference words in T_i that refer to the j-th entity Su_i,j, and Z_j = {Z_j,1, Z_j,2, …, Z_j,R(j)} is the set of those reference words, whose elements are ordered by their order of appearance in T_i; i ranges from 1 to n, the value of R(j) is a non-negative integer, SUM = R(1) + R(2) + … + R(m) is the total number of reference words in T_i, and j ranges from 1 to m;
s200, labeling each of the SUM reference words in T_i as a subject label to obtain the subject training data of T_i;
s300, taking each reference word in the text sample T_i in turn as the subject and labeling the relation tag, obtaining one piece of relation training data per reference word and hence a relation training data set of SUM pieces; wherein the relation training data with the r-th reference word Z_j,r of entity Su_i,j as subject is constructed as follows: adding a first identifier and a second identifier on the two sides of Z_j,r in T_i to obtain the adjusted T_i; in the adjusted T_i, taking the (r-1)-th reference word Z_j,r-1 of Su_i,j as the object and labeling it with the reference-relation tag; when r = 1, taking the entity Su_i,j itself as the object and labeling it with the reference tag; wherein r ranges from 1 to R(j);
s400, obtaining the feature vector of each character in the subject training data and the relation training data sets, inputting the subject training data, relation training data sets and feature vectors of all n text samples into a relation model, and training the relation model.
In a second aspect, another embodiment of the present invention provides a training system for reference disambiguation based on a relation model. The system comprises a processor and a non-transitory computer-readable storage medium in which at least one instruction or at least one program is stored; the instruction or program is loaded and executed by the processor to implement the training method described above.
The invention has the following beneficial effects:
the training method is characterized in that a sample T is used according to a reference word i Sequentially ordering of occurrence in the sample T i The subject training data is that all the reference words are marked as subject labels, the relationship training data is that any one reference word is used as a subject, identifiers are added on two sides of the subject to obtain an adjusted text, a previous reference word or an entity serving as the subject is marked as a reference relationship label in the adjusted text, the subject training data of each sample in n samples, the relationship training data set and characteristic vectors thereof are obtained and input into a relationship model for training, the relationship model is trained in the mode, so that the relationship model extracts a correct relationship to obtain a correct relationship map, and the problem that the relationship between the same reference word and the entity can not be recognized in one text for multiple times at present is solved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a training method for reference disambiguation based on a relation model according to an embodiment of the present invention.
Detailed Description
To further describe the technical means and effects adopted by the present invention to achieve its intended purpose, the training method and system for reference disambiguation based on a relation model provided by the invention are described in detail below with reference to the accompanying drawings and preferred embodiments, including their specific implementations, structures, features and effects. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes the training method and system for reference disambiguation based on a relation model.
Referring to fig. 1, a flowchart of a training method for reference disambiguation based on a relation model according to an embodiment of the invention is shown; the training method includes the following steps:
s100, acquiring a training set T containing n text samples, T = {T_1, T_2, …, T_n}, wherein the i-th text sample T_i contains m entities Su_i = {Su_i,1, Su_i,2, …, Su_i,m}; R(j) denotes the number of reference words in T_i that refer to the j-th entity Su_i,j, and Z_j = {Z_j,1, Z_j,2, …, Z_j,R(j)} is the set of those reference words, whose elements are ordered by their order of appearance in T_i; i ranges from 1 to n, the value of R(j) is a non-negative integer, SUM = R(1) + R(2) + … + R(m) is the total number of reference words in T_i, and j ranges from 1 to m.
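As a concrete illustration (not part of the patent text), the data structures of step S100 can be sketched in plain Python using the running example below; all field names here are hypothetical:

```python
# One training sample T_i in the notation of S100. Field names are
# illustrative stand-ins, not from the patent.
sample = {
    "text": "Zhang San works in Beijing; A1 father is Zhang Da; ...",
    "entities": ["Zhang San", "Zhang Da", "Li Si"],         # Su_i, m = 3
    "references": {"Zhang San": ["A1", "A2", "A3", "A4"],   # R(1) = 4
                   "Zhang Da": [],                           # R(2) = 0
                   "Li Si": []},                             # R(3) = 0
}

def total_references(sample):
    """SUM = R(1) + ... + R(m): total number of reference words in T_i."""
    return sum(len(refs) for refs in sample["references"].values())

print(total_references(sample))  # 4
```

The helper simply mirrors the definition of SUM as the total number of reference words in one sample.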
Optionally, the entity is a named entity; further, the named entity is a person-name entity.
Optionally, when the entity is a person-name entity, the reference word is a personal pronoun.
For example: one text sample is "Zhang San works in Beijing; his father is Zhang Da, his mother is Li Si; he just graduated this year, and it is not easy for him to find work". The text contains 3 person-name entities {Zhang San, Zhang Da, Li Si} and 4 pronouns referring to Zhang San. For convenience of expression, these four pronouns are named A1, A2, A3 and A4 in their natural order of appearance, giving the reference-word set {A1, A2, A3, A4}.
S200, labeling each of the SUM reference words in T_i as a subject label to obtain the subject training data of T_i.
When the reference relation is labeled, a reference word serves as the subject and its previous reference word (or the referred entity) serves as the object; a subject label is attached to the subject to form the subject training data, and a reference-relation label is attached to the object to form the relation training set.
Specifically, for the subject training data, all reference words in the sample are labeled as subjects. For example, for the text sample "Zhang San works in Beijing; his father is Zhang Da, his mother is Li Si; he just graduated this year, and it is not easy for him to find work", the reference words {A1, A2, A3, A4} are all labeled with the subject label, which may be "B-subject", while every other character is labeled "O". The labeled text sample is the final subject training data.
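The subject-labeling step can be sketched as follows. The tag names "B-subject" and "O" come from the text above; the function name and the token-level (rather than character-level) granularity are simplifications for illustration:

```python
def make_subject_training_data(tokens, reference_positions):
    """Label every reference word as a subject; everything else gets 'O'.

    tokens: list of tokens of the sample.
    reference_positions: set of token indices that are reference words
    (the pronouns A1..A4 in the running example).
    """
    labels = []
    for idx, _tok in enumerate(tokens):
        labels.append("B-subject" if idx in reference_positions else "O")
    return list(zip(tokens, labels))

# Toy version of the running example; A1 stands for the first pronoun.
sample = ["Zhang San", "works", "in", "Beijing", ",", "A1", "father",
          "is", "Zhang Da"]
data = make_subject_training_data(sample, reference_positions={5})
print(data[5])  # ('A1', 'B-subject')
```

The labeled token sequence is the subject training data for this sample.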
S300, taking each reference word in the text sample T_i in turn as the subject and labeling the relation tag, obtaining one piece of relation training data per reference word and hence a relation training data set of SUM pieces; wherein the relation training data with the r-th reference word Z_j,r of entity Su_i,j as subject is constructed as follows: adding a first identifier and a second identifier on the two sides of Z_j,r in T_i to obtain the adjusted T_i; in the adjusted T_i, taking the (r-1)-th reference word Z_j,r-1 of Su_i,j as the object and labeling it with the reference-relation tag; when r = 1, taking the entity Su_i,j itself as the object and labeling it with the reference tag; wherein r ranges from 1 to R(j).
Preferably, the first identifier and the second identifier are combined identifiers formed from a paired identifier and at least one letter, the letter being placed between the paired symbols, so that ordinary symbols in the text are not confused with the added identifiers, reducing the probability of network errors. Paired identifiers include "<" and ">", "(" and ")", "[" and "]", "{" and "}", etc.
Preferably, the first identifier and the second identifier appear in pairs and are located on the two sides of the subject entity or reference word.
Preferably, the first identifier is "</S>" and the second identifier is "</T>".
When acquiring the relation training data, the later reference word is taken as the subject and the earlier reference word as the object of the labeled relation; when r = 1, the object is Su_i,j, the entity itself. A backward-to-forward chained reference is thus formed, and all reference words of the same entity can be traced back to that entity through the chain.
For example, for the sample "Zhang San works in Beijing; his father is Zhang Da, his mother is Li Si; he just graduated this year, and it is not easy for him to find work", the subject training data is obtained by labeling all reference words in the sample with the subject label (which may be "B-subject"), giving one piece of training data in which every reference word carries a subject label. The relation training data set is obtained as follows. When A1 is the subject, the first identifier "</S>" and the second identifier "</T>" are added on the two sides of A1, giving the adjusted text: "Zhang San works in Beijing; </S>his</T> father is Zhang Da, his mother is Li Si, he just graduated this year, and it is not easy for him to find work". At this point "Zhang San" is the object, and it is labeled in the adjusted text with the reference-relation label, e.g. its two characters are labeled "B-reference" and "I-reference", giving the piece of relation training data with A1 as subject. Similarly, when A2 is the subject, the adjusted text is: "Zhang San works in Beijing; his father is Zhang Da, </S>his</T> mother is Li Si, he just graduated this year, and it is not easy for him to find work"; now A1 is the object, its reference-relation label in the adjusted text is "B-reference", and the piece of relation training data with A2 as subject is obtained. By analogy, the relation training data with each of the four reference words as subject forms the relation training data set. A chained transfer relation can be found in this set: A4 refers to A3, A3 refers to A2, A2 refers to A1, and A1 refers to the entity Zhang San.
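A minimal sketch of the relation-training-data construction described above, assuming token-level inputs and the shortened tag name "B-refer" for the text's "B-reference"; the identifiers "</S>" and "</T>" are taken from the embodiment, while the function and variable names are hypothetical:

```python
def make_relation_training_data(tokens, chain):
    """Build one piece of relation training data per reference word.

    chain: list of token indices [entity_idx, ref1_idx, ref2_idx, ...] -
    the entity followed by its reference words in order of appearance.
    For the r-th reference word, the identifiers "</S>" and "</T>" are
    added on its two sides and the previous chain element becomes the
    object (the entity itself when r = 1).
    """
    dataset = []
    for r in range(1, len(chain)):
        subj, obj = chain[r], chain[r - 1]
        adjusted = list(tokens)
        adjusted[subj] = "</S>" + adjusted[subj] + "</T>"
        labels = ["O"] * len(adjusted)
        labels[obj] = "B-refer"   # object labeled with the reference tag
        dataset.append((adjusted, labels))
    return dataset

tokens = ["Zhang San", "works", "A1", "A2"]   # entity plus two pronouns
pieces = make_relation_training_data(tokens, chain=[0, 2, 3])
print(pieces[0][0][2])   # '</S>A1</T>' - A1 as subject, Zhang San as object
print(pieces[1][1][2])   # 'B-refer'    - A2 as subject, A1 as object
```

With four pronouns the same loop yields the chain A4 → A3 → A2 → A1 → Zhang San described in the text.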
S400, obtaining the feature vector of each character in the subject training data and the relation training data sets, inputting the subject training data, relation training data sets and feature vectors of all n text samples into a relation model, and training the relation model.
Optionally, the relation model is a BERT model. The loss function of the BERT model is the cross-entropy loss; model training is complete when the cross-entropy loss converges.
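For illustration, the cross-entropy loss for a single character's tag prediction reduces to the negative log-probability of the gold tag. The toy computation below assumes a made-up softmax output over three candidate tags:

```python
import math

def cross_entropy(pred_probs, gold_index):
    """Cross-entropy loss of one character's predicted tag distribution."""
    return -math.log(pred_probs[gold_index])

# One character, three candidate tags: O, B-subject, B-refer.
probs = [0.1, 0.8, 0.1]   # hypothetical softmax output of the model
loss = cross_entropy(probs, gold_index=1)   # gold tag is B-subject
print(round(loss, 4))     # 0.2231, i.e. -ln(0.8)
```

In training, this quantity is averaged over all characters of a batch and minimized until it converges.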
The feature vector of each character comprises the word vector, position vector and segment vector of that character. The word vector carries the semantic information of the character in the current text and is a 768-dimensional vector; the position vector encodes the position of the character in the current text; the segment vector encodes which clause of the current text the character belongs to. The first identifier and the second identifier each generate their own word, position and segment vectors, i.e. "</S>" generates a word vector, a position vector and a segment vector, and so does "</T>".
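A rough sketch of how the three vectors combine per character. A real BERT implementation sums three learned embedding tables; the position and segment vectors below are deterministic stand-ins, and all names are hypothetical:

```python
def char_feature_vector(char, position, segment, word_table, dim=768):
    """Sum of word, position and segment vectors for one character.

    word_table maps a character to its word vector; the position and
    segment vectors are simple deterministic stand-ins for the learned
    embedding tables a real BERT model would use.
    """
    word_vec = word_table[char]
    pos_vec = [position * 1e-3] * dim    # stand-in position embedding
    seg_vec = [segment * 1e-2] * dim     # stand-in segment embedding
    return [w + p + s for w, p, s in zip(word_vec, pos_vec, seg_vec)]

word_table = {"he": [0.5] * 768}         # fake 768-dim word vector
vec = char_feature_vector("he", position=3, segment=0, word_table=word_table)
print(len(vec), round(vec[0], 3))        # 768 0.503
```

The identifiers "</S>" and "</T>" would simply be two extra entries in the word table, each contributing its own three vectors.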
The prediction result of the BERT model is a relation list L = {L_1, L_2, …, L_K} composed of K relations, each represented as a triple {S, P, O}, where S is the subject, P is the relation and O is the object. After the labeled sample above is input into the relation model, the predicted relation list is: {A1, refers to, Zhang San}, {A2, refers to, A1}, {A3, refers to, A2}, {A4, refers to, A3}.
The step S400 further includes a post-processing step:
s520, obtaining a relation list L= { L with K relations output by the relation model 1 ,L 2 ,…L K (S) where the k-th list of relationships is k ,P k ,O k S, where S k P for predicted subject entities k For predicted relationship, O k Is the predicted object.
S540, when P_k in L_k is the reference relation, adding {S_k, O_k} of L_k to the connected-graph set, wherein k ranges from 1 to K.
S_k and O_k are added as vertices of the connected graph and an edge connects S_k and O_k. In the same way, all subjects and objects of reference relations in the relation list are placed into the connected graph.
S560, creating an entity mapping table B according to the connected graph set.
The entity mapping table records the mapping between a named entity and all the reference words that refer to it. For example, B records the mapping between Zhang San and A1, A2, A3 and A4.
Preferably, the method further comprises: S580, when P_k in L_k is not the reference relation, querying B with the reference word in L_k to obtain the corresponding named entity, and replacing the reference word in L_k with that entity, obtaining a reconstructed relation. In this way, relations that were originally indeterminate are reconstructed into relations with practical meaning. P_k is the reference relation when it is "refers to"; relations such as "mother" are non-reference relations. For example, in the non-reference relation {he, mother, Li Si}, it cannot be determined who "he" is, i.e. whose mother Li Si is; after the reference word is replaced by the named entity, it can be determined that Li Si is Zhang San's mother, and a relation with practical meaning is obtained.
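The post-processing of S520–S580 can be sketched as a connected-components computation over the predicted relation list; the relation name "refers_to" and the function names below are hypothetical stand-ins for the patent's labels:

```python
from collections import defaultdict

def build_entity_map(relations, entities):
    """Connected components of the reference graph -> entity mapping table B."""
    adj = defaultdict(set)
    for s, p, o in relations:
        if p == "refers_to":          # only reference relations enter the graph
            adj[s].add(o)
            adj[o].add(s)
    mapping = {}
    for ent in entities:              # flood-fill from each named entity
        stack, seen = [ent], {ent}
        while stack:
            node = stack.pop()
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    mapping[nb] = ent
                    stack.append(nb)
    return mapping

def reconstruct(relations, mapping):
    """Replace reference words in non-reference relations by their entity."""
    return [(mapping.get(s, s), p, mapping.get(o, o))
            for s, p, o in relations if p != "refers_to"]

rels = [("A1", "refers_to", "Zhang San"), ("A2", "refers_to", "A1"),
        ("A3", "refers_to", "A2"), ("A4", "refers_to", "A3"),
        ("A2", "mother", "Li Si")]
B = build_entity_map(rels, entities=["Zhang San"])
print(B["A4"])                # Zhang San
print(reconstruct(rels, B))   # [('Zhang San', 'mother', 'Li Si')]
```

The chained references A4 → A3 → A2 → A1 → Zhang San all land in one component, so every pronoun maps to the same entity and the non-reference relation {he, mother, Li Si} becomes {Zhang San, mother, Li Si}.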
In summary, the embodiment of the invention discloses a training method for reference disambiguation based on a relation model: acquire a data set of n samples, where the i-th text sample T_i contains m entities Su_i and, for the j-th entity Su_i,j, the R(j) reference words in Z that refer to it, ordered by their appearance in T_i. Obtain the subject training data of T_i by labeling all reference words as subject labels. Obtain the relation training data by taking each reference word in turn as the subject, adding identifiers on its two sides to form an adjusted text, and labeling the previous reference word (or the entity itself) in the adjusted text as the object with a reference-relation label. Obtain the subject training data and relation training data set of each of the n samples, input them together with their feature vectors into a relation model, and train the model. A relation model trained in this way outputs a corresponding relation list, and through the mapping relations in that list the reference relations between multiple reference words and entities can be obtained, so that the model extracts correct relations and a correct relation graph is built. This solves the problem that existing relation models cannot identify the relation between an entity and reference words that appear repeatedly in one text.
Based on the same inventive concept as the above method, an embodiment of the present invention further provides a training system for reference disambiguation based on a relation model. The system comprises a processor and a non-transitory computer-readable storage medium storing at least one instruction or at least one program, which is loaded and executed by the processor to implement the training method for reference disambiguation based on a relation model of any of the above embodiments; the method has been described in detail above and is not repeated here.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise forms disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.
Claims (9)
1. A training method for reference disambiguation based on a relation model, the training method comprising:
s100, acquiring a training set T containing n text samples, T = {T_1, T_2, …, T_n}, wherein the i-th text sample T_i contains m entities Su_i = {Su_i,1, Su_i,2, …, Su_i,m}; R(j) denotes the number of reference words in T_i that refer to the j-th entity Su_i,j, and Z_j = {Z_j,1, Z_j,2, …, Z_j,R(j)} is the set of those reference words, whose elements are ordered by their order of appearance in T_i; i ranges from 1 to n, the value of R(j) is a non-negative integer, SUM = R(1) + R(2) + … + R(m) is the total number of reference words in T_i, and j ranges from 1 to m;
s200, labeling each of the SUM reference words in T_i as a subject label to obtain the subject training data of T_i;
s300, taking each reference word in the text sample T_i in turn as the subject and labeling the relation tag, obtaining one piece of relation training data per reference word and hence a relation training data set of SUM pieces; wherein the relation training data with the r-th reference word Z_j,r of entity Su_i,j as subject is constructed as follows: adding a first identifier and a second identifier on the two sides of Z_j,r in T_i to obtain the adjusted T_i; in the adjusted T_i, taking the (r-1)-th reference word Z_j,r-1 of Su_i,j as the object and labeling it with the reference-relation tag; when r = 1, taking the entity Su_i,j itself as the object and labeling it with the reference tag; wherein r ranges from 1 to R(j);
s400, obtaining the feature vector of each character in the subject training data and the relation training data sets, inputting the subject training data, relation training data sets and feature vectors of all n text samples into a relation model, and training the relation model.
2. The training method for reference disambiguation based on a relation model according to claim 1, further comprising a post-processing step after step S400:
s520, obtaining the relation list L = {L_1, L_2, …, L_K} of K relations output by the relation model, wherein the k-th relation is L_k = {S_k, P_k, O_k}; S_k is the predicted subject entity, P_k is the predicted relation, and O_k is the predicted object;
s540, when P_k in L_k is the reference relation, adding {S_k, O_k} of L_k to the connected-graph set, wherein k ranges from 1 to K;
s560, creating an entity mapping table B according to the connected graph set.
3. The training method for reference disambiguation based on a relation model according to claim 2, further comprising, after S560:
s580, when P_k in L_k is not the reference relation, querying B with the reference word in L_k to obtain the corresponding named entity, and replacing the reference word in L_k with the obtained named entity to obtain a reconstructed relation.
4. The training method for reference disambiguation based on a relation model according to claim 1, wherein the first identifier and the second identifier are each a combination of a paired identifier and at least one letter, the letter being located between the paired symbols.
5. The training method for reference disambiguation based on a relation model according to claim 1, wherein the first identifier and the second identifier each correspond to a feature vector.
6. The training method for reference disambiguation based on a relation model according to claim 1, wherein the feature vectors comprise the word vector, position vector and segment vector of the corresponding character.
7. The training method for reference disambiguation based on a relation model according to claim 1, wherein the entity is a person-name entity and the reference word is a personal pronoun.
8. The training method for reference disambiguation based on a relation model according to claim 1, wherein the relation model is a BERT model.
9. A training system for reference disambiguation based on a relation model, the system comprising a processor and a non-transitory computer-readable storage medium in which at least one instruction or at least one program is stored, the at least one instruction or at least one program being loaded and executed by the processor to implement the training method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211050793.9A CN115358341B (en) | 2022-08-30 | 2022-08-30 | Training method and system for reference disambiguation based on a relation model
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211050793.9A CN115358341B (en) | 2022-08-30 | 2022-08-30 | Training method and system for reference disambiguation based on a relation model
Publications (2)
Publication Number | Publication Date |
---|---|
CN115358341A CN115358341A (en) | 2022-11-18 |
CN115358341B (en) | 2023-04-28
Family
ID=84005609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211050793.9A Active CN115358341B (en) | 2022-08-30 | 2022-08-30 | Training method and system for reference disambiguation based on a relation model
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115358341B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270196A (en) * | 2020-12-14 | 2021-01-26 | 完美世界(北京)软件科技发展有限公司 | Entity relationship identification method and device and electronic equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110705206B (en) * | 2019-09-23 | 2021-08-20 | 腾讯科技(深圳)有限公司 | Text information processing method and related device |
CN111626042B (en) * | 2020-05-28 | 2023-07-21 | 成都网安科技发展有限公司 | Reference digestion method and device |
CN111897970B (en) * | 2020-07-27 | 2024-05-10 | 平安科技(深圳)有限公司 | Text comparison method, device, equipment and storage medium based on knowledge graph |
CN113191118B (en) * | 2021-05-08 | 2023-07-18 | 山东省计算中心(国家超级计算济南中心) | Text relation extraction method based on sequence annotation |
CN113535897A (en) * | 2021-06-30 | 2021-10-22 | 杭州电子科技大学 | Fine-grained emotion analysis method based on syntactic relation and opinion word distribution |
- 2022-08-30: application CN202211050793.9A (CN); patent CN115358341B, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270196A (en) * | 2020-12-14 | 2021-01-26 | 完美世界(北京)软件科技发展有限公司 | Entity relationship identification method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN115358341A (en) | 2022-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111222305B (en) | Information structuring method and device | |
CN107766483A (en) | The interactive answering method and system of a kind of knowledge based collection of illustrative plates | |
CN104331449B (en) | Query statement and determination method, device, terminal and the server of webpage similarity | |
CN104216913B (en) | Question answering method, system and computer-readable medium | |
CN112650840A (en) | Intelligent medical question-answering processing method and system based on knowledge graph reasoning | |
Lev et al. | In defense of word embedding for generic text representation | |
CN110675944A (en) | Triage method and device, computer equipment and medium | |
CN110347802B (en) | Text analysis method and device | |
CN113724882A (en) | Method, apparatus, device and medium for constructing user portrait based on inquiry session | |
KR102329242B1 (en) | Method, apparatus, device and computer readable medium for generating vqa training data | |
CN115470338B (en) | Multi-scenario intelligent question answering method and system based on multi-path recall | |
CN114528413B (en) | Knowledge graph updating method, system and readable storage medium supported by crowdsourced marking | |
CN114153994A (en) | Medical insurance information question-answering method and device | |
CN112131881A (en) | Information extraction method and device, electronic equipment and storage medium | |
CN115525751A (en) | Intelligent question-answering system and method based on knowledge graph | |
CN115114420A (en) | Knowledge graph question-answering method, terminal equipment and storage medium | |
CN110674637B (en) | Character relationship recognition model training method, device, equipment and medium | |
Daswani et al. | CollegeBot: a conversational AI approach to help students navigate college | |
CN115358341B (en) | Training method and system for reference disambiguation based on a relation model | |
CN114372454A (en) | Text information extraction method, model training method, device and storage medium | |
Crescenzi et al. | Wrapper inference for ambiguous web pages | |
CN101089841B (en) | Precision search method and system based on knowledge code | |
CN113705697B (en) | Information pushing method, device, equipment and medium based on emotion classification model | |
CN113254623B (en) | Data processing method, device, server, medium and product | |
Alwaneen et al. | Stacked dynamic memory-coattention network for answering why-questions in Arabic |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |