CN113486657B - Emotion-reason pair extraction system based on knowledge assistance - Google Patents

Emotion-reason pair extraction system based on knowledge assistance

Info

Publication number
CN113486657B
Authority
CN
China
Prior art keywords: clause, clauses, emotion, reason, word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110841439.7A
Other languages
Chinese (zh)
Other versions
CN113486657A (en)
Inventor
刘德喜
赵凤园
万常选
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaoyang Health (Guangzhou) Technology Co., Ltd.
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110841439.7A
Publication of CN113486657A
Application granted
Publication of CN113486657B
Legal status: Active
Anticipated expiration

Classifications

    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G06F40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F40/242 Dictionaries
    • G06F40/253 Grammatical analysis; Style critique
    • G06F40/30 Semantic analysis
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a knowledge-assisted emotion-reason pair extraction system, belonging to the technical field of emotion-cause prediction in natural language processing. The system comprises three steps: emotion clause extraction, reason clause extraction, and emotion-reason pairing, all of which use a knowledge-assisted word encoding representation. By adding external knowledge to assist learning, the system benefits emotion-reason pair extraction to a certain extent and effectively addresses the insufficient judgment of causal relationships between clauses in existing learning models.

Description

Emotion-reason pair extraction system based on knowledge assistance
Technical Field
The invention belongs to the technical field of emotion-cause prediction in natural language processing, and particularly relates to a knowledge-assisted emotion-reason pair extraction system.
Background
Existing methods mostly represent candidate clauses or candidate clause pairs as vectors and then feed them into a deep learning model to predict whether a causal relationship exists between the clauses.
Current methods have three disadvantages. First, for a text containing many clauses, the number of candidate emotion-cause pairs grows rapidly with the number of clauses, so recognition efficiency is low and the approach is ill-suited to long texts with many clauses. Although the ECPE-2D model employs some restricting rules, many candidate emotion-cause pairs remain. Secondly, although current models can improve emotion-reason pair recognition through interaction between candidate emotion clauses and candidate reason clauses, interference also occurs, and this is directly reflected in the experimental results: on the same ECPE dataset, emotion clause extraction under joint emotion-reason extraction is generally and markedly worse than with models that extract emotion clauses independently; and when the emotion clauses are given manually, reason clause extraction is markedly better than when reason clauses are obtained through emotion-reason pair extraction. Thirdly, for text emotion analysis there is considerable human knowledge that can help improve extraction: the causes that trigger emotions are mostly events, and the subjects of emotions are mostly entities such as persons and organizations, as shown in Fig. 1; these characteristics are not fully exploited by existing models.
Disclosure of Invention
1. Technical problem to be solved
The invention aims to provide a knowledge-assisted emotion-reason pair extraction system that solves a problem in the prior art:
after word-level encoding of a text alone, the machine may be unable to identify the emotion words in a clause accurately.
2. Technical scheme
The system introduces a manually constructed linguistic and psychological feature knowledge base for auxiliary encoding, which strengthens the identification of emotion words and psychological features and improves the extraction of emotion clauses. At the same time, part-of-speech tags, including entity identification, are added to capture information such as persons and events in the text, providing richer features for extracting emotions and emotion causes. Furthermore, emotions and their causes usually co-occur, meaning that if a clause is identified as an emotion clause with high probability, then with high probability at least one reason clause also exists in its context. Adding external knowledge to assist the system's learning therefore benefits emotion-reason pair extraction to a certain extent.
A system for emotion-reason pair extraction based on knowledge assistance, comprising the steps of:
S1, extracting emotion clauses;
S2, extracting reason clauses;
S3, emotion-reason pairing;
Steps S1-S3 all use the knowledge-assisted word encoding representation.
Preferably, the knowledge-assisted word encoding representation consists of 3 parts: BERT-based semantic encoding, word-category encoding based on the LIWC linguistic-psychological feature knowledge base, and NLPIR-based part-of-speech encoding, wherein,
for the BERT-based semantic encoding, each word w_j in a clause is encoded by the BERT_BASE model to obtain a 768-dimensional word vector representation x_j^BERT;
for the word-category encoding based on the LIWC linguistic-psychological feature knowledge base, the SC-LIWC dictionary constructed by Huang et al. (comprising 71 categories such as perceptual, affective-process, cognitive-process, and social-process word categories) is adopted, and each word w_j in a clause is one-hot encoded against these categories to obtain a 71-dimensional vector representation x_j^LIWC;
for the NLPIR-based part-of-speech encoding, 9 parts of speech are retained (person name nr, place name ns, other noun n, adjective a, adverb d, verb v, personal pronoun rr, other pronoun r, and other part of speech other), and each word w_j in a clause is one-hot encoded to obtain a 9-dimensional vector representation x_j^POS.
Preferably, the knowledge-assisted word encoding concatenates the BERT semantic encoding, the LIWC word-category encoding, and the NLPIR part-of-speech encoding of the current word; the calculation formula is as follows:
x_j = [x_j^BERT ; x_j^LIWC ; x_j^POS]
where x_j is the vector representation of the word.
Preferably, S1 encodes the clauses with a two-layer Bi-LSTM model consisting of a word layer and a clause layer and performs binarized prediction: if the model identifies emotion clauses in the text d, i.e. there are clauses whose prediction ŷ_i^e indicates an emotion clause, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^e are set to 1 and the recognition results of the remaining clauses are set to 0.
Preferably, the calculation formulas of S1 are as follows:
s_i^e = F(x_{i,1}, x_{i,2}, …, x_{i,|c_i|})
h_i^e = Bi-LSTM(s_1^e, s_2^e, …, s_{|d|}^e)_i
ŷ_i^e = softmax(W_e h_i^e + b_e)
where s_i^e is the encoded representation of clause c_i, h_i^e is the context representation of clause c_i, and ŷ_i^e is the predicted probability that c_i is an emotion clause.
Preferably, S2 encodes the clauses with a two-layer Bi-LSTM model consisting of a word layer and a clause layer, concatenates the clause encoding with the emotion prediction obtained in S1, and then performs binarized prediction: if the model identifies reason clauses in the text d, i.e. there are clauses whose prediction ŷ_i^c indicates a reason clause, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^c are set to 1 and the recognition results of the remaining clauses are set to 0.
Preferably, the calculation formulas of S2 are as follows:
s_i^c = [F(x_{i,1}, x_{i,2}, …, x_{i,|c_i|}) ; ŷ_i^e]
h_i^c = Bi-LSTM(s_1^c, s_2^c, …, s_{|d|}^c)_i
ŷ_i^c = softmax(W_c h_i^c + b_c)
where s_i^c is the encoded representation of clause c_i, h_i^c is the context representation of clause c_i, and ŷ_i^c is the predicted probability that c_i is a reason clause.
Preferably, in S3 the clauses are encoded with a two-layer Bi-LSTM model consisting of a word layer and a clause layer, the prediction probabilities and the distance information of the emotion clause and the reason clause are appended as features, and the result is fed into a logistic regression model for prediction.
Preferably, the calculation formulas of S3 are as follows:
x_(i,j) = [s_i^e ; s_j^c ; v_d ; ŷ_i^e ; ŷ_j^c]
ŷ_(i,j) = Logistic(x_(i,j))
where x_(i,j) is the input vector built for the pair of emotion clause c_i and reason clause c_j, v_d is the distance feature, and ŷ_(i,j) indicates whether the two clauses form an emotion-cause pair.
Preferably, the distance information is calculated as follows: let the relative distance between an emotion clause c_i and a reason clause c_j be d_{i,j} = j − i, and let the maximum number of clauses in any text not exceed M. A 2M × 50 dimensional array is initialized with each row drawn from a normal distribution; v_d then denotes the (d_{i,j} + M)-th row of this array. Through continued training on the dataset, the final representation of each relative position is obtained and applied to the test dataset.
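The relative-position lookup described above can be sketched as follows; the maximum clause count M and the standard-normal initializer are assumptions for illustration, and in the full system the table rows would be fine-tuned together with the rest of the model.

```python
import numpy as np

# Sketch of the distance (relative-position) feature v_d described above.
# Assumptions: M (maximum clauses per text) and the standard-normal
# initialization are illustrative; the rows would be updated during training.
M = 75            # hypothetical maximum number of clauses in a text
EMB_DIM = 50

rng = np.random.default_rng(0)
distance_table = rng.standard_normal((2 * M, EMB_DIM))

def distance_embedding(i: int, j: int) -> np.ndarray:
    """Return v_d for emotion clause c_i and reason clause c_j."""
    d_ij = j - i                      # relative distance, may be negative
    return distance_table[d_ij + M]   # shift by M so the row index is non-negative

v_d = distance_embedding(i=4, j=2)    # reason clause two positions before the emotion clause
```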
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) Higher accuracy on emotion-cause pair extraction (ECPE)
The evaluation results of ECPE-KA on the emotion-cause pair extraction task ECPE are shown in Table 1. As can be seen from Table 1, ECPE-KA is significantly higher than the ECPE-2Steps and RANKCP models in precision P and F1, exceeding them in F1 by 18.84% and 4.59% respectively; although ECPE-KA is slightly lower than the TDGC model in precision P, its recall R is markedly improved, so its F1 value is also better than that of TDGC, and the improved recall indicates that the model recovers more correct clause pairs.
Compared with the currently most advanced ECPE-2D model, ECPE-KA (F1 = 0.6914) achieves a better result on the ECPE task than ECPE-2D (F1 = 0.6889): precision P is improved by 0.85%, while recall falls short by only 0.19%.
TABLE 1 results of the experimental evaluation
(2) Fewer candidate emotion-cause pairs
The binarization procedure adopted by ECPE-KA ensures that every text contributes at least one candidate emotion-reason pair to the pairing computation, whereas in the three sub-models of ECPE-2Steps the average number of candidate emotion-reason pairs per text is less than 1, which means that the model has serious defects in emotion clause extraction or reason clause extraction; the number of pairings drops sharply, but many possibly correct candidate emotion-reason pairs are inevitably lost.
The ECPE-KA model therefore both extracts emotion clauses and reason clauses as accurately as possible and reduces the number of candidate emotion-reason pairs to be examined, improving recognition efficiency.
(3) ECPE-KA is more accurate on Emotion Cause Extraction (ECE)
In the classical ECE task, the emotion clauses are manually annotated, whereas the ECPE-KA model does not require manual annotation of emotion clauses in the test set.
Table 2 shows that, without emotion clause labels on the test dataset, the ECPE-KA model is lower only than CANN and PAE-DGL in precision, is superior to all reference models in recall, and in the end differs from the best result (CANN) by only 2% in F1. This shows that the method proposed herein can overcome the limitation that manual emotion clause annotation imposes on the ECE task, though there is certainly still room for improvement.
This document also compares against the CANN-E model, which removes the emotion clause labels of the dataset from the better-performing CANN model. As is clear from Table 2, the performance of CANN-E drops sharply once the emotion labels are removed: its F1 value is 47.74% lower than that of CANN. Under the same condition of having no emotion clause labels, ECPE-KA reaches an F1 value of 0.7083, an improvement of 86.54% over CANN-E.
TABLE 2 evaluation of emotional cause extraction tasks
Drawings
FIG. 1 is an example of emotional cause text;
FIG. 2 is a block diagram of a knowledge-aided emotion-cause pair extraction system;
fig. 3 is a knowledge-aided clause representation structure.
Detailed Description
A knowledge-assisted emotion-reason pair extraction system (ECPE-KA) is provided by combining the external LIWC linguistic-psychological feature knowledge base and the NLPIR part-of-speech analysis platform as manual knowledge. The ECPE-KA system structure is shown in FIG. 2.
Example 1: knowledge-assisted clause representation
The knowledge-aided clause representation structure is shown in FIG. 3. Given a text d = {c_1, c_2, …, c_{|d|}} containing |d| clauses, each clause c_i contains |c_i| words. The encoded representation x_j of each word w_j consists of three parts: BERT-based semantic encoding, word-category encoding based on the LIWC linguistic-psychological feature knowledge base, and NLPIR-based part-of-speech encoding.
(1) The ECPE-KA model first uses the BERT_BASE model to encode each word w_j in a clause, obtaining a 768-dimensional word vector representation x_j^BERT.
(2) Since this work targets a Chinese dataset, the SC-LIWC dictionary constructed by Huang et al. is adopted. The SC-LIWC dictionary contains 71 categories, such as perceptual, affective-process, cognitive-process, and social-process word categories. Each word w_j in a clause is one-hot encoded against these categories to obtain a 71-dimensional vector representation x_j^LIWC.
(3) Because only entities such as person names and pronouns need to be identified with particular emphasis to help emotion clause extraction, and to avoid the dimensional sparsity caused by too many part-of-speech types, the ECPE-KA model adopts only the first-level and part of the second-level part-of-speech tags, removes the finer-grained third-level tags, and after screening retains 8 parts of speech (person name nr, place name ns, noun n, adjective a, adverb d, verb v, personal pronoun rr, and pronoun r), merging all other parts of speech into a single category other. Each word w_j in a clause is then one-hot encoded to obtain a 9-dimensional vector representation x_j^POS.
In the ECPE-KA model, the encoding of a word w_j in a candidate clause is thus expressed by x_j^BERT, x_j^LIWC, and x_j^POS as:
x_j = [x_j^BERT ; x_j^LIWC ; x_j^POS]
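As a concrete illustration, a minimal sketch of this concatenation is given below; the helper names (liwc_one_hot, pos_one_hot, encode_word) and the placeholder BERT vector are illustrative assumptions, not part of the patented system.

```python
import numpy as np

# Sketch of the knowledge-assisted word representation x_j = [x_j^BERT ; x_j^LIWC ; x_j^POS].
# The 768-dim vector is assumed to come from a BERT_BASE encoder (a zero vector
# stands in here); the LIWC category ids and the NLPIR tag are also assumed inputs.
LIWC_DIM = 71
POS_TAGS = ["nr", "ns", "n", "a", "d", "v", "rr", "r", "other"]

def liwc_one_hot(category_ids):
    vec = np.zeros(LIWC_DIM)
    vec[list(category_ids)] = 1.0      # a word may fall into several SC-LIWC categories
    return vec

def pos_one_hot(tag):
    vec = np.zeros(len(POS_TAGS))
    vec[POS_TAGS.index(tag if tag in POS_TAGS else "other")] = 1.0
    return vec

def encode_word(bert_vec, liwc_category_ids, pos_tag):
    """Concatenate the three encodings into one 768 + 71 + 9 = 848-dim vector."""
    return np.concatenate([bert_vec, liwc_one_hot(liwc_category_ids), pos_one_hot(pos_tag)])

x_j = encode_word(np.zeros(768), liwc_category_ids=[3, 12], pos_tag="v")
assert x_j.shape == (848,)
```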
example 2: sentiment clause extraction
The extraction of emotion clauses adopts a two-layer Bi-LSTM model of a word layer and a clause layer:
(1) Word-layer Bi-LSTM
For a clause c_i containing |c_i| words, the encoded representations {x_{i,1}, x_{i,2}, …, x_{i,|c_i|}} of its words are fed as input into the Bi-LSTM model to obtain the hidden-layer representation of the j-th word in clause c_i. A self-attention mechanism is then applied over the words to generate the encoded representation s_i^e of clause c_i:
s_i^e = F(x_{i,1}, x_{i,2}, …, x_{i,|c_i|})
where F denotes a Bi-LSTM network with the self-attention mechanism.
(2) Clause-layer Bi-LSTM
The purpose of the clause-layer Bi-LSTM is to capture semantic dependencies between clauses. For a text d = {c_1, c_2, …, c_{|d|}} containing |d| clauses, the encoding s_i^e of each clause is fed into a Bi-LSTM model, and the hidden state of the Bi-LSTM gives the context representation h_i^e of clause c_i:
h_i^e = Bi-LSTM(s_1^e, s_2^e, …, s_{|d|}^e)_i
Finally, h_i^e is fed into a softmax function to obtain the probability ŷ_i^e that clause c_i is an emotion clause:
ŷ_i^e = softmax(W_e h_i^e + b_e)
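A compact sketch of this two-level encoder is given below in PyTorch; the 848-dimensional input size, the hidden size of 100 per direction, and the simplified learned attention used for pooling are assumptions for illustration rather than the exact configuration of ECPE-KA.

```python
import torch
import torch.nn as nn

# Sketch of the word-layer Bi-LSTM + attention pooling + clause-layer Bi-LSTM.
# Sizes (848-dim words, hidden=100) and the simple learned attention are assumptions.
class EmotionClauseExtractor(nn.Module):
    def __init__(self, word_dim=848, hidden=100):
        super().__init__()
        self.word_lstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)                  # attention scores over words
        self.clause_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)                   # emotion clause vs. not

    def forward(self, x):          # x: (num_clauses, max_words, word_dim)
        r, _ = self.word_lstm(x)                              # word-level hidden states
        a = torch.softmax(self.attn(r), dim=1)                # attention weights per word
        s = (a * r).sum(dim=1)                                # clause encodings s_i^e
        h, _ = self.clause_lstm(s.unsqueeze(0))               # clause-level context h_i^e
        return torch.softmax(self.out(h.squeeze(0)), dim=-1)  # per-clause probabilities

probs = EmotionClauseExtractor()(torch.zeros(6, 20, 848))     # 6 clauses of up to 20 words
```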
Considering that at least one emotion clause exists in every text and that most texts contain at most two emotion clauses, the ECPE-KA model distinguishes two cases in the binarization stage: if the model has recognized emotion clauses in the text d, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^e are set to 1 and the recognition results of the remaining clauses are set to 0.
In this way, the candidate emotion clause set E_d in d is obtained.
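A small sketch of this binarization rule follows; the 0.5 cut-off used to decide whether a clause has been "recognized" is an assumption, since the description above does not fix an explicit threshold.

```python
import numpy as np

# Sketch of the binarization rule applied to the per-clause emotion probabilities.
# Assumption: a clause counts as recognized when its probability exceeds 0.5.
def binarize(probs, threshold=0.5):
    labels = (np.asarray(probs) > threshold).astype(int)
    if labels.sum() == 0:
        # No clause recognized: keep the two most probable clauses so that every
        # text still yields at least one candidate emotion clause.
        labels[np.argsort(probs)[-2:]] = 1
    return labels

print(binarize([0.10, 0.80, 0.30, 0.05]))   # -> [0 1 0 0]
print(binarize([0.10, 0.40, 0.30, 0.05]))   # -> [0 1 1 0]
```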
Example 3: reason clause extraction
The extraction of the reason clauses also adopts a Bi-LSTM with a word layer and a clause layer, wherein the coding structure of the clauses (the coding of the Bi-LSTM with the word layer) is the same as the clause coding structure in the emotion clause extraction stage.
The encoded representation s_i of clause c_i and the emotion prediction probability ŷ_i^e obtained in the first stage are concatenated to obtain the clause encoding s_i^c. To capture context information, the vector representations {s_1^c, s_2^c, …, s_{|d|}^c} of the |d| clauses in text d are fed as input into the Bi-LSTM model, and the hidden state of the Bi-LSTM gives the context representation h_i^c of clause c_i:
h_i^c = Bi-LSTM(s_1^c, s_2^c, …, s_{|d|}^c)_i
Finally, h_i^c is fed into a softmax function to obtain the predicted probability ŷ_i^c that clause c_i is a reason clause:
ŷ_i^c = softmax(W_c h_i^c + b_c)
Similar to the binarization adopted in emotion clause extraction, and considering that most texts contain at most two reason clauses, the binarization of the reason clause extraction result also distinguishes two cases: if the model has identified reason clauses in the text d, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^c are set to 1 and the recognition results of the remaining clauses are set to 0.
In this way, the candidate reason clause set C_d in d is obtained.
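The only structural difference from the first stage is the extra probability feature appended to each clause encoding; a minimal sketch of that step is shown below, with the encoding dimension chosen arbitrarily for illustration.

```python
import numpy as np

# Sketch of the stage-two clause input: each clause encoding s_i is extended
# with the emotion probability predicted in stage one. The 200-dim encodings
# here are placeholders for the word-layer Bi-LSTM outputs.
def reason_stage_inputs(clause_encodings, emotion_probs):
    return np.concatenate([clause_encodings, np.asarray(emotion_probs)[:, None]], axis=1)

s = np.zeros((6, 200))                                  # six clauses, assumed 200-dim encodings
inputs = reason_stage_inputs(s, [0.1, 0.9, 0.2, 0.1, 0.05, 0.3])
assert inputs.shape == (6, 201)
```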
Example 4: emotion-reason pairing
The candidate emotion clause set E_d and the candidate reason clause set C_d of document d are combined by a Cartesian product to obtain all possible pairings:
P_d = E_d × C_d = {(c_i^e, c_j^c) | c_i^e ∈ E_d, c_j^c ∈ C_d}
Using the clause representation method of Example 1, the encoded representation s_i^e of a candidate emotion clause c_i^e and the encoded representation s_j^c of a candidate reason clause c_j^c are obtained. At the same time, the distance v_d between the two clauses, the prediction probability ŷ_i^e of the candidate emotion clause, and the prediction probability ŷ_j^c of the candidate reason clause are used as features, and the five encodings are concatenated to obtain the input vector x_(i,j) of the emotion-reason pair filtering model:
x_(i,j) = [s_i^e ; s_j^c ; v_d ; ŷ_i^e ; ŷ_j^c]
The distance feature v_d is calculated as follows: let the relative distance between an emotion clause c_i^e and a reason clause c_j^c be d_{i,j} = j − i, and let the maximum number of clauses in any text not exceed M. A 2M × 50 dimensional array is initialized with each row drawn from a normal distribution; v_d then denotes the (d_{i,j} + M)-th row of this array. Through continued training on the dataset, the final representation of each relative position is obtained and applied to the test dataset.
The input vector x_(i,j) is then fed into a logistic regression (Logistic) model to detect whether the two clauses have a causal relationship, and filtering yields the emotion-cause pair set:
P̂_d = {(c_i^e, c_j^c) ∈ P_d | ŷ_(i,j) = 1}
The emotion-cause pairs retained in P̂_d are extracted as the final emotion-cause pairs.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and description merely illustrate preferred embodiments of the invention and are not intended to limit it. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. A system for emotion-reason pair extraction based on knowledge assistance, comprising the steps of:
S1-S3 all use the knowledge-assisted word encoding representation;
the knowledge-assisted word encoding representation consists of 3 parts: given a text d = {c_1, c_2, …, c_{|d|}} containing |d| clauses, each clause c_i contains |c_i| words; the encoded representation x_j of each word w_j consists of three parts, namely BERT-based semantic encoding, word-category encoding based on the LIWC linguistic-psychological feature knowledge base, and NLPIR-based part-of-speech encoding, wherein,
for the BERT-based semantic encoding, each word w_j in a clause is encoded by the BERT_BASE model to obtain a 768-dimensional word vector representation x_j^BERT;
the word-category encoding based on the LIWC linguistic-psychological feature knowledge base adopts the SC-LIWC dictionary, which comprises 71 categories of perceptual, affective-process, cognitive-process, and social-process word categories, and each word w_j in a clause is one-hot encoded against these categories to obtain a 71-dimensional vector representation x_j^LIWC;
for the NLPIR-based part-of-speech encoding, the 9 parts of speech person name nr, place name ns, other noun n, adjective a, adverb d, verb v, personal pronoun rr, other pronoun r, and other part of speech other are retained, and each word w_j in a clause is one-hot encoded to obtain a 9-dimensional vector representation x_j^POS;
the knowledge-assisted word encoding concatenates the BERT semantic encoding, the LIWC word-category encoding, and the NLPIR part-of-speech encoding of the current word; the calculation formula is as follows:
x_j = [x_j^BERT ; x_j^LIWC ; x_j^POS]
where x_j denotes the vector representation of the word;
S1, extracting emotion clauses;
the calculation formulas of S1 are as follows:
s_i^e = F(x_{i,1}, x_{i,2}, …, x_{i,|c_i|})
h_i^e = Bi-LSTM(s_1^e, s_2^e, …, s_{|d|}^e)_i
ŷ_i^e = softmax(W_e h_i^e + b_e)
where s_i^e is the encoded representation of clause c_i, h_i^e is the context representation of clause c_i, and ŷ_i^e is the predicted probability of an emotion clause;
S2, extracting reason clauses;
the calculation formulas of S2 are as follows:
s_i^c = [F(x_{i,1}, x_{i,2}, …, x_{i,|c_i|}) ; ŷ_i^e]
h_i^c = Bi-LSTM(s_1^c, s_2^c, …, s_{|d|}^c)_i
ŷ_i^c = softmax(W_c h_i^c + b_c)
where s_i^c is the encoded representation of clause c_i, h_i^c is the context representation of clause c_i, and ŷ_i^c is the predicted probability of a reason clause;
S3, emotion-reason pairing;
the calculation formulas of S3 are as follows:
x_(i,j) = [s_i^e ; s_j^c ; v_d ; ŷ_i^e ; ŷ_j^c]
ŷ_(i,j) = Logistic(x_(i,j)).
2. The system for emotion-reason pair extraction based on knowledge assistance as claimed in claim 1, wherein: S1 encodes the clauses with a two-layer Bi-LSTM model consisting of a word layer and a clause layer and performs binarized prediction: if the model identifies emotion clauses in the text d, i.e. there are clauses whose prediction ŷ_i^e indicates an emotion clause, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^e are set to 1 and the recognition results of the remaining clauses are set to 0.
3. The system for emotion-reason pair extraction based on knowledge assistance as claimed in claim 1, wherein: S2 encodes the clauses with a two-layer Bi-LSTM model consisting of a word layer and a clause layer, concatenates the clause encoding with the emotion prediction obtained in S1, and then performs binarized prediction: if the model identifies reason clauses in the text d, i.e. there are clauses whose prediction ŷ_i^c indicates a reason clause, the recognition results of those clauses are set to 1 and the recognition results of the other clauses are set to 0; if the recognition results of all clauses in the text d are 0, then the recognition results of the two clauses with the largest ŷ_i^c are set to 1 and the recognition results of the remaining clauses are set to 0.
4. The system for emotion-reason pair extraction based on knowledge assistance as claimed in claim 1, wherein: in S3 the clauses are encoded with a two-layer Bi-LSTM model consisting of a word layer and a clause layer, the prediction probabilities and the distance information of the emotion clause and the reason clause are appended as features, and the result is fed into a logistic regression model for prediction.
5. The system for emotion-reason pair extraction based on knowledge assistance as claimed in claim 4, wherein: the distance information is calculated as follows: let the relative distance between an emotion clause c_i and a reason clause c_j be d_{i,j} = j − i, and let the maximum number of clauses in any text not exceed M; a 2M × 50 dimensional array is initialized with each row drawn from a normal distribution, and v_d then denotes the (d_{i,j} + M)-th row of this array; through continued training on the dataset, the final representation of each relative position is obtained and applied to the test dataset.
CN202110841439.7A 2021-07-26 2021-07-26 Emotion-reason pair extraction system based on knowledge assistance Active CN113486657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110841439.7A CN113486657B (en) 2021-07-26 2021-07-26 Emotion-reason pair extraction system based on knowledge assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110841439.7A CN113486657B (en) 2021-07-26 2021-07-26 Emotion-reason pair extraction system based on knowledge assistance

Publications (2)

Publication Number Publication Date
CN113486657A CN113486657A (en) 2021-10-08
CN113486657B true CN113486657B (en) 2023-01-17

Family

ID=77943576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110841439.7A Active CN113486657B (en) 2021-07-26 2021-07-26 Emotion-reason pair extraction system based on knowledge assistance

Country Status (1)

Country Link
CN (1) CN113486657B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114676259B (en) * 2022-04-11 2022-09-23 哈尔滨工业大学 Conversation emotion recognition method based on causal perception interactive network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781369A (en) * 2018-07-11 2020-02-11 天津大学 Emotional cause mining method based on dependency syntax and generalized causal network
CN111382565A (en) * 2020-03-09 2020-07-07 南京理工大学 Multi-label-based emotion-reason pair extraction method and system
CN111914556A (en) * 2020-06-19 2020-11-10 合肥工业大学 Emotion guiding method and system based on emotion semantic transfer map
CN112183064A (en) * 2020-10-22 2021-01-05 福州大学 Text emotion reason recognition system based on multi-task joint learning
CN112364127A (en) * 2020-10-30 2021-02-12 重庆大学 Short document emotional cause pair extraction method, system and storage medium
CN112836515A (en) * 2019-11-05 2021-05-25 阿里巴巴集团控股有限公司 Text analysis method, recommendation device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472047B (en) * 2019-07-15 2022-12-13 昆明理工大学 Multi-feature fusion Chinese-Yue news viewpoint sentence extraction method
JP7290507B2 (en) * 2019-08-06 2023-06-13 本田技研工業株式会社 Information processing device, information processing method, recognition model and program
CN111126069B (en) * 2019-12-30 2022-03-29 华南理工大学 Social media short text named entity identification method based on visual object guidance
CN111581396B (en) * 2020-05-06 2023-03-31 西安交通大学 Event graph construction system and method based on multi-dimensional feature fusion and dependency syntax
CN111859957B (en) * 2020-07-15 2023-11-07 中南民族大学 Emotion reason clause label extraction method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781369A (en) * 2018-07-11 2020-02-11 天津大学 Emotional cause mining method based on dependency syntax and generalized causal network
CN112836515A (en) * 2019-11-05 2021-05-25 阿里巴巴集团控股有限公司 Text analysis method, recommendation device, electronic equipment and storage medium
CN111382565A (en) * 2020-03-09 2020-07-07 南京理工大学 Multi-label-based emotion-reason pair extraction method and system
CN111914556A (en) * 2020-06-19 2020-11-10 合肥工业大学 Emotion guiding method and system based on emotion semantic transfer map
CN112183064A (en) * 2020-10-22 2021-01-05 福州大学 Text emotion reason recognition system based on multi-task joint learning
CN112364127A (en) * 2020-10-30 2021-02-12 重庆大学 Short document emotional cause pair extraction method, system and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Mutually Auxiliary Multitask Model With Self-Distillation for Emotion-Cause Pair Extraction; Jiaxin Yu et al.; IEEE; 2021-02-08; Vol. 9; pp. 26811-26821 *
Research and Implementation of a Deep-Learning-Based Method for Discovering Text Emotion Causes; Zheng Shengxie; China Masters' Theses Full-text Database, Information Science and Technology; 2019-08-15 (No. 8); pp. I138-1432 *
An Emotion Cause Extraction Model with Clause-Level Self-Attention; Qin Jun; Journal of South-Central Minzu University (Natural Science Edition); 2021-02-08; Vol. 40 (No. 1); pp. 64-73 *
A Survey of Retrieval-Based Automatic Question Answering; Liu Dexi; Chinese Journal of Computers; 2021-06-15; Vol. 44 (No. 6); pp. 1214-1232 *
Research on Cross-Lingual Text Emotion Cause Discovery; Gao Qinghong; China Masters' Theses Full-text Database, Information Science and Technology; 2020-02-15 (No. 2); pp. I138-2380 *

Also Published As

Publication number Publication date
CN113486657A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
US11068662B2 (en) Method for automatically detecting meaning and measuring the univocality of text
CN107092596B (en) Text emotion analysis method based on attention CNNs and CCR
Ghosh et al. Fracking sarcasm using neural network
WO2018028077A1 (en) Deep learning based method and device for chinese semantics analysis
KR100420096B1 (en) Automatic Text Categorization Method Based on Unsupervised Learning, Using Keywords of Each Category and Measurement of the Similarity between Sentences
CN112818698B (en) Fine-grained user comment sentiment analysis method based on dual-channel model
CN112434161B (en) Aspect-level emotion analysis method adopting bidirectional long-short term memory network
CN115292461B (en) Man-machine interaction learning method and system based on voice recognition
CN111753058A (en) Text viewpoint mining method and system
CN111339772B (en) Russian text emotion analysis method, electronic device and storage medium
CN114927177B (en) Medical entity identification method and system integrating Chinese medical field characteristics
CN108536781B (en) Social network emotion focus mining method and system
CN112380866A (en) Text topic label generation method, terminal device and storage medium
CN113486657B (en) Emotion-reason pair extraction system based on knowledge assistance
CN110929518A (en) Text sequence labeling algorithm using overlapping splitting rule
Ruposh et al. A computational approach of recognizing emotion from Bengali texts
Harris et al. Constructing a rhetorical figuration ontology
CN114416969A (en) LSTM-CNN online comment sentiment classification method and system based on background enhancement
CN111815426A (en) Data processing method and terminal related to financial investment and research
CN114492437B (en) Keyword recognition method and device, electronic equipment and storage medium
Aliero et al. Systematic review on text normalization techniques and its approach to non-standard words
CN116070620A (en) Information processing method and system based on big data
CN115292495A (en) Emotion analysis method and device, electronic equipment and storage medium
CN115269846A (en) Text processing method and device, electronic equipment and storage medium
CN114548113A (en) Event-based reference resolution system, method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240311

Address after: Room 501, Room 401, No. 50 Zhongshan 7th Road, Jinhua Street, Liwan District, Guangzhou City, Guangdong Province, 510000 (for office use only)

Patentee after: Zhaoyang Health (Guangzhou) Technology Co.,Ltd.

Country or region after: China

Address before: 330013 mailuyuan campus, Jiangxi University of Finance and economics, 665 Yuping West Street, Changbei national economic and Technological Development Zone, Nanchang City, Jiangxi Province

Patentee before: Liu Dexi

Country or region before: China
