CN114611489A - Text logic condition extraction AI model construction method, extraction method and system - Google Patents

Text logic condition extraction AI model construction method, extraction method and system

Info

Publication number
CN114611489A
Authority
CN
China
Prior art keywords
text
logic
model
extraction
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210080919.0A
Other languages
Chinese (zh)
Inventor
邹伟东
蔡子哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qizhidao Network Technology Co Ltd
Original Assignee
Qizhidao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qizhidao Network Technology Co Ltd filed Critical Qizhidao Network Technology Co Ltd
Priority to CN202210080919.0A priority Critical patent/CN114611489A/en
Publication of CN114611489A publication Critical patent/CN114611489A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/205: Parsing
    • G06F40/216: Parsing using statistical methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G06N5/022: Knowledge engineering; Knowledge acquisition
    • G06N5/025: Extracting rules from data

Abstract

The invention relates to a text logic condition extraction AI model construction method, an extraction method, and a system. The model construction method comprises the following steps: information labeling, in which each training text is labeled with sequence segments and with the logical relationships between the sequence segments; text preprocessing, in which features are constructed to generate the sequence segment features and logical relationship matrix features required for model training; and model training, in which training is performed on the preprocessed text and logical condition extraction of the sequence segments is carried out, yielding an AI model for logical condition extraction. The method fuses NER recognition and logical condition extraction into a single model: the logical relationships between segments are obtained at the same time as the key sequence segments are extracted, no large rule base needs to be maintained manually, a variety of text structures can be covered, and the accuracy of logical extraction is improved. The method is well suited to logical condition extraction from complex texts, and the overall extraction process is relatively simple.

Description

Text logic condition extraction AI model construction method, extraction method and system
Technical Field
The invention relates to the technical field of computers, in particular to a text logic condition extraction AI model construction method, an extraction method and an extraction system.
Background
Some text files contain a large amount of useful information, and text information extraction is a common technique in the NLP field. The current mainstream approach labels key information in documents via deep-learning sequence labeling; the labeling of sequence segments, i.e. NER (named entity recognition), is relatively mature, while the extraction of logical conditions between segments relies on semantic rules and dependency syntax analysis. This works for extracting the logical relationships of simple policy texts, but the logical relationships of complex policy texts are difficult to cover and a large number of rules must be maintained. Moreover, the overall extraction process is relatively complex: NER recognition must be performed first and logical judgment made on top of it, so the approach is not end-to-end.
Disclosure of Invention
The invention aims to provide a text logic condition extraction AI model construction method. An AI model built with this method conveniently realizes end-to-end logical condition extraction, is well suited to complex texts, and keeps the overall extraction process relatively simple.
The method for constructing the text logic condition extraction AI model comprises the following steps:
information labeling, in which each training text is labeled, based on the text to be trained, with sequence segments and with the logical relationships between the sequence segments;
text preprocessing, in which features are constructed to generate the sequence segment features and logical relationship matrix features required for model training;
and model training, in which training is performed on the preprocessed text and logical condition extraction of the sequence segments is carried out, yielding an AI model for logical condition extraction.
Optionally, the sequence fragment features are in a BIO labeling format.
Optionally, the extraction of the logical conditions of the sequence segments comprises, for each pair of sequence segments, taking one token from each segment and judging the relationship between those two tokens, thereby obtaining the logical relationship between the two sequence segments.
Optionally, for each sequence segment the first token or the last token of the segment is taken, and correspondingly the relationship between the two first tokens or the two last tokens is judged, thereby obtaining the logical relationship between the two sequence segments.
Optionally, the specific method for judging the relationship between two tokens comprises computing the score of relation r_k holding between two tokens z_i and z_j, the score being given by formula (1):

s^(r)(z_j, z_i, r_k) = V^(r) f(U^(r) z_j + W^(r) z_i + b^(r))    (1)

where (r) denotes relation extraction; f(·) is an activation function such as ReLU or tanh; U^(r), W^(r) ∈ ℝ^(l×d), V^(r) ∈ ℝ^(b×l) and b^(r) ∈ ℝ^l are learned parameters; r_k belongs to the relation set; d is the hidden size of the BERT module, b is the size of the logical coding layer, and l is the layer width.
Optionally, the score computed by formula (1) is passed through a sigmoid layer to map it into [0, 1], giving the relation probability between two tokens:

P(head = w_j, label = r_k | w_i) = σ(s^(r)(z_j, z_i, r_k))    (2)

where w_i is the i-th character of the input sequence, and σ(·) is the sigmoid function.
Optionally, the method further comprises, based on the relation probability, optimizing the AI model parameters by minimizing the cross-entropy loss of logical relation extraction:

L_rel(θ) = Σ_{i=0}^{n} −log P(head = ŷ_i, label = r̂_i | w_i)    (3)

where ŷ_i is the vector of head tokens of token w_i, r̂_i is the relation label of token w_i, and θ is the set of AI model parameters.
The second aspect of the present application provides a text logic condition extraction method based on an AI model, which realizes end-to-end logical condition extraction, is well suited to complex texts, and keeps the overall extraction process relatively simple.
According to the application, the text logic condition extraction method based on the AI model comprises the following steps:
construction of the AI model, according to the logical condition extraction AI model construction method described above;
and extraction of the logical conditions, in which, based on the constructed AI model, the logical conditions in the text are extracted from the sequence segment labeling result and the logical relationship matrix result.
The third purpose of the present application is to provide a text logic condition extraction system based on the AI model, which realizes end-to-end logical condition extraction, is well suited to complex texts, and keeps the overall extraction process relatively simple.
The system for extracting text logic conditions based on the AI model comprises:
an input interface, used for inputting the text content from which sequence segment logical conditions are to be extracted;
a logical condition extraction model, constructed according to the logical condition extraction AI model construction method and used for performing logical condition extraction of sequence segments on the input text content;
and an output interface, which outputs the logical conditions of the sequence segments extracted from the input text content.
The fourth objective of the present application is to provide a computer-readable storage medium that facilitates construction of the AI model and/or extraction of the logical conditions.
A computer-readable storage medium stores a computer program that can be loaded by a processor and executed to perform any of the methods described above.
To sum up, the beneficial technical effects of this application include:
the construction and use of an end-to-end text logic condition extraction AI model that fuses NER recognition and logical condition extraction into one model, obtaining the logical relationships between segments at the same time as the key sequence segments are extracted, without manually maintaining a large number of rules. Meanwhile, thanks to the strong fitting capability of deep learning, a variety of text structures can be covered and the accuracy of logical extraction improved; the method is well suited to logical condition extraction from complex texts, and the overall extraction process is relatively simple.
Drawings
FIG. 1 is a flowchart illustrating a text logic condition extraction method based on an AI model according to an embodiment of the invention;
FIG. 2 is a flowchart illustrating a method for constructing a text logic condition extraction AI model according to an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating a construction method of a logical relationship matrix of sequence segments according to an embodiment of the present invention;
FIG. 4 is a diagram of a bert-based multi-head selection logical relationship extraction model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates that the objects before and after it are in an "or" relationship, unless otherwise specified.
In the prior art, the first step in extracting text information is to label the key sequence segments in the text with an NER model. Take a policy clause as an example. For the input text "Special support for enterprises above designated size: enterprises above designated size whose output value in the previous year was below 100 million yuan and whose output value in the declared year increased by more than 30%", NER recognizes "previous year" as a time, "output value" and "output value increase" as subjects, and "below 100 million yuan" and "above 30%" as values. The logical relationships among the segments are then judged mainly by subsequent syntactic rules: according to the positional relationships of the subjects and values, "output value" is paired with "below 100 million yuan" as one logical condition, and "output value increase" with "above 30%" as another. This scheme achieves good results for the example sentence, but for complex phrasing, for example "the annual output value is below 100 million yuan, and the annual output value increase and the sales increase reach 30% and 20% respectively", a purely rule-based system may pair "sales increase" with "30%", confusing the logic. Moreover, the large number of rules entails high maintenance costs.
To address the problems in the prior art, the specific embodiment provided by the present application provides a text logic condition extraction method based on an AI model, as shown in fig. 1, comprising construction 100 of the AI model and extraction 200 of the logical conditions. In the AI model construction 100, first, based on the text to be trained, each training text is labeled with sequence segments and with the logical relationships between the sequence segments; second, the text is preprocessed and features are constructed to generate the sequence segment features and logical relationship matrix features required for model training; then training is performed on the preprocessed text and logical condition extraction of the sequence segments is carried out, yielding an AI model for logical condition extraction. Based on the constructed AI model, in the extraction 200 of the logical conditions, the logical conditions in the text are extracted from the sequence segment labeling result and the logical relationship matrix result.
Through this technical scheme, an end-to-end text logical condition extraction method is realized: NER recognition and logical condition extraction are fused into one model, the logical relationships between segments are obtained at the same time as the key sequence segments are extracted, and no large rule base needs to be maintained manually. Meanwhile, thanks to the strong fitting capability of deep learning, the AI model can cover a variety of text structures, improving the accuracy of logical extraction; the method is well suited to logical condition extraction from complex texts, and the overall extraction process is relatively simple.
As shown in fig. 2, as an embodiment of the logical condition extraction AI model construction method of the present application, the specific construction method comprises:
information labeling 101, in which each training text is labeled, based on the text to be trained, with sequence segments and with the logical relationships between the sequence segments;
text preprocessing 102, in which features are constructed to generate the sequence segment features and logical relationship matrix features required for model training;
and model training 103, in which training is performed on the preprocessed text and logical condition extraction of the sequence segments is carried out, yielding an AI model for logical condition extraction.
In the information labeling 101, the samples to be trained, such as policy texts, are labeled manually; the labeled data comprises two parts: the sequence segments and the logical relationships of the sequence segments. Below we give a specific description in conjunction with the text content of a policy document.
For example, based on the text content "Special support for enterprises above designated size: enterprises above designated size whose output value in the previous year was below 100 million yuan and whose output value in the declared year increased by more than 30%", we can label the text to obtain the following sequence segments and logical relationships:
[{"time": "previous year", "subject": "output value", "value": "below 100 million yuan"}, {"time": "previous year", "subject": "output value increase", "value": "above 30%"}].
Here time, subject and value are the labels of the sequence segments, and the labeled segments are "previous year", "output value", "below 100 million yuan", "output value increase" and "above 30%". Among the labeled segments, logical relationships exist between "previous year", "output value" and "below 100 million yuan", and between "previous year", "output value increase" and "above 30%". Through this labeling of the sequence segments and of their logical relationships, NER recognition and logical condition extraction are fused into one model.
In the preprocessing 102, the sequence segment features and the logical relationship matrix features required for training the logical condition extraction AI model are generated from the labeled sequence segments and their logical relationships.
As one implementation of the sequence segment representation, the BIO labeling format is adopted: if a character is the starting character of a time segment, it is marked (B-time); if a character is a non-starting character of a time segment, it is marked (I-time); if a character does not belong to any sequence segment, it is marked (O). The text content exemplified above uses three labels, time, subject and value, so the marks comprise B-time, I-time, B-subject, I-subject, B-value, I-value and O. Thus, based on the text content exemplified above, the labeling result is:
"gauge O, model O, in O, up O, enterprise O, business O, special O, term O, support O,: o, last B-time, one I-time, yearly I-time, degree I-time, produce B-subject, value I-subject, 1B-value, hundred million I-value, meta I-value, with I-value, lower I-value, O, and O, Shen O, newspaper O, yearly O, degree O, produce B-subject, value I-subject, increase I-subject, long I-subject, reach O, 30B-value,% I-value, with last I-value, O, rule O, model O, with O, go O, enterprise O, industry O', thereby obtaining sequence fragment characteristics.
The construction of the logical relationship matrix feature of the sequence segments is shown in fig. 3: if a connection exists between two sequence segments, the corresponding position of the relationship matrix is set to 1, otherwise to 0, giving the logical relationship matrix feature of the sequence segments.
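A minimal sketch of the binary relation matrix just described; the segment indices and linked pairs below are hypothetical stand-ins for the labeled policy-text segments.

```python
# Sketch: build the logical relationship matrix of sequence segments.
def build_relation_matrix(num_segments, linked_pairs):
    """Cell (i, j) is 1 when segment i is logically linked to segment j."""
    matrix = [[0] * num_segments for _ in range(num_segments)]
    for i, j in linked_pairs:
        matrix[i][j] = 1
    return matrix

# Hypothetical segments 0..4: "previous year", "output value",
# "below 100 million yuan", "output value increase", "above 30%"
m = build_relation_matrix(5, [(1, 0), (1, 2), (3, 0), (3, 4)])
```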
In model training 103, NER recognition and logical condition extraction are fused into one model based on the preprocessed text. The model training data mainly comprises two important pieces of information: the sequence segment labels and the logical relationships between the sequence segments. As shown in fig. 4, the input text is first converted into a text vector by the encoding layer, and the converted text vector is then passed through a BERT language model (the embodiment of fig. 4 uses BERT as the language model; RNN, LSTM or other language models can also be used) to obtain the semantic vector of each token. On top of the BERT output layer, the sequence segment recognition result in BIO labeling format is obtained by decoding through a CRF layer; based on the sequence segment recognition result, logical condition extraction of the sequence segments is performed.
In extracting the logical conditions of the sequence segments, for each pair of sequence segments one token can be taken from each segment and the relationship between the two tokens judged, thereby obtaining the logical relationship between the two sequence segments.
Once the sequence segments have been obtained, the extraction of their logical conditions can be treated as a multi-head selection problem: a relationship is assumed to potentially exist between each sequence segment and any other sequence segment. Because a sequence segment consists of several tokens, to reduce information redundancy only one token of each segment need be taken when judging the relationship.
In the relationship judgment, as one way of selecting the token, the first token of each segment may be selected: for example, to judge the logical relationship between "previous year" and "output value", only the relationship between the two first tokens of those segments need be judged.
As another way of selecting the token, the last token of each segment may be selected: for example, to judge the logical relationship between "previous year" and "output value", only the relationship between the two last tokens of those segments need be judged.
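Picking one representative token per segment can be sketched as below; the token list and span indices are hypothetical illustrations of the first-token/last-token choice described above.

```python
# Sketch: represent a sequence segment by a single token (its first or last).
def segment_token(tokens, span, mode="first"):
    """span: (start, end) token indices into `tokens`, end exclusive."""
    start, end = span
    return tokens[start] if mode == "first" else tokens[end - 1]

tokens = ["previous", "year", "output", "value"]   # hypothetical tokenization
first = segment_token(tokens, (0, 2), mode="first")  # -> "previous"
last = segment_token(tokens, (0, 2), mode="last")    # -> "year"
```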
Further, in judging the relationship between two tokens, the score of relation r_k holding between tokens z_i and z_j may be computed, so that the logical relationship is quantified and the logical relationship between sequence segments becomes more intuitive. The score can be computed by the following formula (1):

s^(r)(z_j, z_i, r_k) = V^(r) f(U^(r) z_j + W^(r) z_i + b^(r))    (1)

where (r) denotes relation extraction; f(·) is an activation function such as ReLU or tanh; U^(r), W^(r) ∈ ℝ^(l×d), V^(r) ∈ ℝ^(b×l) and b^(r) ∈ ℝ^l are learned parameters; r_k belongs to the relation set; d is the hidden size of the BERT module, b is the size of the logical coding layer, and l is the layer width.
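A hedged Python sketch of the scoring step in formula (1), assuming NumPy and randomly initialized stand-ins for the learned weights; the dimensions d, l and b below are illustrative values, not the patent's.

```python
import numpy as np

def relation_scores(z_j, z_i, U, W, V, bias):
    """s = V · f(U z_j + W z_i + b), with f = tanh; one score per relation."""
    hidden = np.tanh(U @ z_j + W @ z_i + bias)  # shape (l,): logical coding layer
    return V @ hidden                           # shape (b,): score per relation

rng = np.random.default_rng(0)
d, l, b = 8, 6, 3                    # hidden size, layer width, relation count
U, W = rng.normal(size=(l, d)), rng.normal(size=(l, d))
V, bias = rng.normal(size=(b, l)), rng.normal(size=l)
scores = relation_scores(rng.normal(size=d), rng.normal(size=d), U, W, V, bias)
```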
Further, for simplification, the score computed by formula (1) may be passed through a sigmoid layer to map it into [0, 1], giving the relation probability between two tokens:

P(head = w_j, label = r_k | w_i) = σ(s^(r)(z_j, z_i, r_k))    (2)

where w_i is the i-th character of the input sequence, and σ(·) is the sigmoid function.
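The sigmoid mapping of formula (2) is simply the standard logistic function; the score values used below are made up.

```python
import math

def sigmoid(s):
    """Map a raw relation score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

p_zero = sigmoid(0.0)    # a zero score maps to probability 0.5
p_neg = sigmoid(-3.2)    # strongly negative score -> probability near 0
p_pos = sigmoid(1.7)     # positive score -> probability above 0.5
```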
To obtain an accurate logical relationship, based on the relation probability given by formula (2), the AI model parameters are optimized by minimizing the cross-entropy loss of logical relation extraction:

L_rel(θ) = Σ_{i=0}^{n} −log P(head = ŷ_i, label = r̂_i | w_i)    (3)

where ŷ_i is the vector of head tokens of token w_i, r̂_i is the relation label of token w_i, and θ is the set of AI model parameters.
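Since the relation probabilities come from independent sigmoids, one common realization of the loss in formula (3) is binary cross-entropy. In this sketch (an illustration, not the patent's code) a flat list of probability/label pairs stands in for the sum over tokens and candidate heads, and the values are made up.

```python
import math

def relation_loss(probs, labels):
    """Binary cross-entropy: -sum(y*log(p) + (1-y)*log(1-p))."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels))

# Hypothetical predicted probabilities and gold 0/1 relation labels.
loss = relation_loss([0.9, 0.2, 0.8], [1, 0, 1])
```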
Finally, the loss function of the whole AI model is the sum of the losses of the NER part and the logical relation part:

L_JOINT(w; θ) = L_NER + L_rel    (4)

where L_NER is the loss function of the NER part; its computation is relatively mature and is not described in detail herein.
Through this scheme, AI model training is performed with the sequence segment features and the logical relationship matrix features as input, on top of an open-source BERT, RoBERTa or ALBERT pre-trained model. The training process is as described above, and the total loss of the AI model is the sum of the NER loss and the logical relation loss. The AI model parameters are continuously updated by gradient descent in the direction that minimizes the loss function, yielding the final logical condition extraction AI model.
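The "update the parameters toward minimizing the loss" step can be sketched with plain gradient descent; here a toy one-dimensional quadratic loss stands in for the real joint NER plus relation loss.

```python
# Sketch: gradient descent on a toy loss L(t) = (t - 2)^2, gradient 2*(t - 2).
def gradient_descent(grad, theta, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to shrink the loss."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

theta_star = gradient_descent(lambda t: 2 * (t - 2), theta=10.0)
# theta_star converges toward the minimizer t = 2
```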
In the extraction of the logical conditions, when the obtained AI model is used for prediction, a text is input and the connection probabilities between sequence segments (via their segment tokens) are obtained; a connection threshold is set to judge whether a relationship exists. For example, with a threshold of 0.5, a connection probability of at least 0.5 means a relationship exists between the two sequence segments, and a probability below 0.5 means it does not; alternatively, with a threshold of 0.6, at least 0.6 means a relationship exists and below 0.6 means it does not. The threshold can be set as required. Finally, the logical condition information in the text is extracted from the sequence segment labeling result and the logical relationship matrix result. For example, for the input text "the area of the laboratory's scientific research rooms is above 700 square meters, and the original value of the scientific research instruments and equipment is not less than 7 million yuan", the extracted logical condition information is:
the method comprises the following steps of [ { "subjects": area of a room for scientific research "," value ": more than 700 square meters" }, { "subjects": original value of scientific research instrument and equipment "," value ": not less than 700 ten thousand yuan" }.
The text logic condition extraction system based on the AI model of the present application realizes end-to-end logical condition extraction, is well suited to complex texts, and keeps the overall extraction process relatively simple.
According to the text logic condition extraction AI model construction method described above, a corresponding AI model can be constructed, and based on the constructed AI model a text logic condition extraction system can be realized, the system comprising:
an input interface, used for inputting the text content from which sequence segment logical conditions are to be extracted;
a logical condition extraction model, constructed according to the logical condition extraction AI model construction method and used for performing logical condition extraction of sequence segments on the input text content;
and an output interface, which outputs the logical conditions of the sequence segments extracted from the input text content.
Based on this system, end-to-end text logical condition extraction is realized: NER recognition and logical condition extraction are fused into one model, so that when text content requiring sequence segment logical condition extraction is input, the logical relationships between segments are obtained at the same time as the key sequence segments are extracted, and no large rule base needs to be maintained manually. Meanwhile, thanks to the strong fitting capability of deep learning, the AI model can cover a variety of text structures, improving the accuracy of logical extraction; the system is well suited to logical condition extraction from complex texts, and the overall extraction process is relatively simple.
To implement the text logic condition extraction system, a computer-readable storage medium is provided, storing a computer program that can be loaded by a processor and executed to perform any of the methods described above.
The computer-readable storage medium includes, for example, various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiments above are preferred embodiments of the present invention, and the scope of the present invention is not limited by them: all equivalent changes made according to the structure, shape and principle of the invention are covered by the protection scope of the invention.

Claims (10)

1. A text logic condition extraction AI model construction method, characterized by comprising the following steps:
information labeling (101), in which each training text is labeled, based on the text to be trained, with sequence segments and with the logical relationships between the sequence segments;
text preprocessing (102), in which features are constructed to generate the sequence segment features and logical relationship matrix features required for model training;
and model training (103), in which training is performed on the preprocessed text and logical condition extraction of the sequence segments is carried out, yielding an AI model for logical condition extraction.
2. The method of claim 1, wherein the sequence segment features are in BIO tagging format.
3. The method according to claim 1, wherein the extraction of the logical conditions of the sequence segments comprises, for each pair of sequence segments, taking one token from each segment and judging the relationship between those two tokens, thereby obtaining the logical relationship between the two sequence segments.
4. The method according to claim 3, wherein for each sequence segment the first token or the last token of the segment is taken, and correspondingly the relationship between the two first tokens or the two last tokens is judged, thereby obtaining the logical relationship between the two sequence segments.
5. The method according to claim 3 or 4, wherein the specific method of judging the relationship between two tokens comprises computing the score of relation r_k holding between two tokens z_i and z_j, the score being given by formula (1):

s^(r)(z_j, z_i, r_k) = V^(r) f(U^(r) z_j + W^(r) z_i + b^(r))    (1)

where (r) denotes relation extraction; f(·) is an activation function such as ReLU or tanh; U^(r), W^(r) ∈ ℝ^(l×d), V^(r) ∈ ℝ^(b×l) and b^(r) ∈ ℝ^l are learned parameters; r_k belongs to the relation set; d is the hidden size of the BERT module, b is the size of the logical coding layer, and l is the layer width.
6. The method of claim 5, further comprising passing the score computed by formula (1) through a sigmoid layer to map it into [0, 1], giving the relation probability between two tokens:

P(head = w_j, label = r_k | w_i) = σ(s^(r)(z_j, z_i, r_k))    (2)

where w_i is the i-th character of the input sequence, and σ(·) is the sigmoid function.
7. The method of claim 6, further comprising, based on the relation probability, optimizing the AI model parameters by minimizing the cross-entropy loss of logical relation extraction:

L_rel(θ) = Σ_{i=0}^{n} −log P(head = ŷ_i, label = r̂_i | w_i)    (3)

where ŷ_i is the vector of head tokens of token w_i, r̂_i is the relation label of token w_i, and θ is the set of AI model parameters.
8. A text logic condition extraction method based on an AI model, characterized by comprising the following steps:
construction (100) of the AI model, according to the logical condition extraction AI model construction method of any one of claims 1 to 7;
and extraction (200) of the logical conditions, in which, based on the constructed AI model, the logical conditions in the text are extracted from the sequence segment labeling result and the logical relationship matrix result.
9. A text logic condition extraction system based on an AI model, characterized by comprising:
an input interface for inputting the text content on which sequence-segment logic condition extraction is to be performed;
a logic condition extraction model, constructed according to the logic condition extraction AI model construction method of any one of claims 1 to 7, for performing logic condition extraction of sequence segments on the input text content; and
an output interface for outputting the logic conditions of the sequence segments extracted from the input text content.
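The three-part system of claim 9 (input interface → extraction model → output interface) can be sketched as a plain function pipeline; the `stub_model`, its return keys, and the sample condition string below are hypothetical stand-ins, since the patent does not specify concrete interfaces:

```python
from typing import Callable, Dict, List

def extract_logic_conditions(text: str,
                             model: Callable[[str], Dict[str, object]]) -> List[str]:
    """Claim-9 pipeline: the input interface receives text, the constructed AI
    model produces a sequence-segment tagging and a logic relation matrix, and
    the output interface returns the extracted logic conditions."""
    result = model(text)                  # sequence-segment labels + relation matrix
    return list(result.get("conditions", []))  # output interface

# Hypothetical stub illustrating only the expected output shape.
def stub_model(text: str) -> Dict[str, object]:
    return {"segments": [text],
            "relations": [],
            "conditions": ["if condition A then result B"]}

print(extract_logic_conditions("sample clause", stub_model))
```

In a real deployment the `model` argument would wrap the trained network of claims 1 to 7 rather than a stub.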
10. A computer-readable storage medium storing a computer program that can be loaded by a processor to execute the method of any one of claims 1 to 8.
CN202210080919.0A 2022-01-24 2022-01-24 Text logic condition extraction AI model construction method, extraction method and system Pending CN114611489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210080919.0A CN114611489A (en) 2022-01-24 2022-01-24 Text logic condition extraction AI model construction method, extraction method and system


Publications (1)

Publication Number Publication Date
CN114611489A true CN114611489A (en) 2022-06-10

Family

ID=81857957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210080919.0A Pending CN114611489A (en) 2022-01-24 2022-01-24 Text logic condition extraction AI model construction method, extraction method and system

Country Status (1)

Country Link
CN (1) CN114611489A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116069899A (en) * 2022-09-08 2023-05-05 重庆思达普规划设计咨询服务有限公司 Text analysis method and system


Similar Documents

Publication Publication Date Title
CN108717406B (en) Text emotion analysis method and device and storage medium
CN110019839B (en) Medical knowledge graph construction method and system based on neural network and remote supervision
CN111738004A (en) Training method of named entity recognition model and named entity recognition method
CN110427623A (en) Semi-structured document Knowledge Extraction Method, device, electronic equipment and storage medium
CN111177326A (en) Key information extraction method and device based on fine labeling text and storage medium
CN112084381A (en) Event extraction method, system, storage medium and equipment
CN110502626B (en) Aspect level emotion analysis method based on convolutional neural network
CN111783394A (en) Training method of event extraction model, event extraction method, system and equipment
CN112052684A (en) Named entity identification method, device, equipment and storage medium for power metering
CN110046356B (en) Label-embedded microblog text emotion multi-label classification method
CN112818093A (en) Evidence document retrieval method, system and storage medium based on semantic matching
CN112800184B (en) Short text comment emotion analysis method based on Target-Aspect-Opinion joint extraction
CN113191148A (en) Rail transit entity identification method based on semi-supervised learning and clustering
CN114969275A (en) Conversation method and system based on bank knowledge graph
CN115310443A (en) Model training method, information classification method, device, equipment and storage medium
CN114880468A (en) Building specification examination method and system based on BilSTM and knowledge graph
CN114612921B (en) Form recognition method and device, electronic equipment and computer readable medium
Sommerschield et al. Machine learning for ancient languages: A survey
CN116070632A (en) Informal text entity tag identification method and device
CN115098673A (en) Business document information extraction method based on variant attention and hierarchical structure
CN114611520A (en) Text abstract generating method
CN117034948B (en) Paragraph identification method, system and storage medium based on multi-feature self-adaptive fusion
CN114611489A (en) Text logic condition extraction AI model construction method, extraction method and system
CN115759119A (en) Financial text emotion analysis method, system, medium and equipment
CN114911940A (en) Text emotion recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518051 2201, block D, building 1, bid section 1, Chuangzhi Yuncheng, Liuxian Avenue, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong

Applicant after: Qizhi Technology Co.,Ltd.

Address before: 518051 2201, block D, building 1, bid section 1, Chuangzhi Yuncheng, Liuxian Avenue, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong

Applicant before: Qizhi Network Technology Co.,Ltd.