CN112100376A - Mutual enhancement conversion network for fine-grained emotion analysis - Google Patents
- Publication number
- CN112100376A (application CN202010951154.4A)
- Authority
- CN
- China
- Prior art keywords
- attribute
- word
- representation
- layer
- bidirectional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Abstract
The invention relates to a mutual enhancement conversion network for fine-grained emotion analysis, and belongs to the fine-grained emotion analysis task of determining the emotion polarity of each specific attribute in a given sentence. First, the attribute enhancement module in the network refines attribute representation learning with semantic features extracted from the sentence, giving the attributes richer information. Second, the network iteratively enhances the representations of attributes and contexts through a hierarchical structure to achieve more accurate emotion prediction. The invention is effective for fine-grained emotion analysis tasks and performs well on both single-attribute and multi-attribute sentences.
Description
Technical Field
The invention relates to a mutual enhancement conversion network for fine-grained emotion analysis, and belongs to the technical field of fine-grained emotion analysis tasks.
Background
The fine-grained emotion analysis task comprises two subtasks: attribute extraction and attribute emotion classification. The invention assumes that the attributes are known and focuses only on the attribute emotion classification task. In fine-grained emotion analysis, multiple attributes may appear in one sentence; when predicting the emotion of the current attribute, the other attributes and their related words become noise. Therefore, how to efficiently model the semantic relationship between a given attribute and the words in a sentence is an important challenge.
Traditional methods mainly depend on manually designed features, and this kind of representation has almost reached its performance bottleneck. With the development of deep learning, and especially the proposal of the attention mechanism, the above problems have been alleviated, and many neural attention models have been proposed. In these works, the model typically first obtains a representation of the attribute, and then applies an attention mechanism to extract the contextual features relevant to the given attribute for emotion prediction. The attention mechanism, however, has some drawbacks. When a sentence contains multiple attributes with different emotional tendencies, the opinion modifiers of the other attributes are noise information for the current attribute. The attention mechanism has difficulty learning to distinguish the opinion modifiers of the different attributes, and this directly affects the final prediction result. For example, in the sentence "I like coming back to Mac OS but this laptop is lacking in the speaker quality compared to my $400 old HP laptop", the attention mechanism should, for the attribute "Mac OS", pay more attention to the opinion word "like" with positive emotional tendency. In practice, however, the attention mechanism often also attends to unrelated opinion words, such as the opinion word "lacking" with negative emotional tendency, which interferes with the emotion prediction for the attribute "Mac OS". To this end, researchers have proposed some work to ameliorate the drawbacks of the attention mechanism. However, most of it designs complex neural networks to improve the representation learning of the context; little work has focused on improving the representation learning of the attributes.
Disclosure of Invention
The invention provides a mutual enhancement conversion network for fine-grained emotion analysis, which aims to determine the emotion polarity of each specific attribute in a given sentence.
The invention comprises a BERT layer, a bidirectional enhanced conversion layer and a convolution characteristic extractor which are connected in sequence;
the BERT layer generates a word representation of the sequence using pre-trained BERT;
the bidirectional enhancement conversion layer comprises a bidirectional LSTM layer, an attribute enhancement module and a group of word conversion units, wherein the bidirectional LSTM layer is respectively connected with the attribute enhancement module and the word conversion units;
the bidirectional LSTM layer is used for capturing long-range dependencies and position information in the text; its encoding result is sent to two destinations, one being the attribute enhancement module and the other the word conversion unit;
the attribute enhancement module receives the attribute representation and the average of the bidirectional LSTM layer coding result, and finally outputs an enhanced attribute representation which is input into the word conversion unit;
the attribute enhancement module utilizes the extracted context characteristics to enhance the attributes;
the word conversion unit receives the encoding result from the bidirectional LSTM layer and the enhanced attribute representation from the attribute enhancement module;
the convolutional feature extractor uses a GCAE network that receives the attribute information to control the transfer of the sentence's emotional features, which further strengthens the link between the attributes and the context; in addition, relative position information is introduced to better extract the emotional features.
The process of reasoning and training through the mutual enhancement conversion network for fine-grained emotion analysis is as follows:
Step 1, the BERT layer, which uses pre-trained BERT to generate word representations of the sequence.
Step 2, the bidirectional enhancement conversion layers. Each bidirectional enhancement conversion layer comprises three parts: a bidirectional LSTM layer, an attribute enhancement module and a group of word conversion units. The bidirectional LSTM layer first generates contextualized word representations from the input. The attribute enhancement module then uses these word representations to enhance the attribute representation. Finally, the word conversion units generate attribute-specific word representations based on the contextualized word representations and the enhanced attribute representation.
S21, learning the context dependencies of the text through the bidirectional LSTM layer. As shown in fig. 1, the bidirectional enhancement conversion layer is repeated multiple times in a hierarchical structure. The input of the bidirectional LSTM in the lowest bidirectional enhancement conversion layer is the contextual representation output by the BERT layer. The input of the bidirectional LSTM in each subsequent bidirectional enhancement conversion layer comes from the output of the word conversion units in the previous bidirectional enhancement conversion layer.
The word representations output by the bidirectional LSTM can be written as h = {h_1, h_2, ..., h_m}. The forward LSTM outputs a set of hidden state vectors {h→_1, ..., h→_m} ∈ R^{m×d_h}, where d_h denotes the number of hidden units. Similarly, the backward LSTM also outputs a set of hidden state vectors {h←_1, ..., h←_m} ∈ R^{m×d_h}. Finally, the word representation of the bidirectional LSTM output is obtained by concatenating the two hidden state lists, h_i = [h→_i : h←_i], where h_i ∈ R^{2d_h}.
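The concatenation step above can be sketched as follows (a minimal numpy sketch with hypothetical shapes; the hidden states here are random stand-ins for actual LSTM outputs):

```python
import numpy as np

# Sketch of how the bidirectional word representation is assembled:
# the forward and backward LSTM hidden state lists are concatenated
# per word, h_i = [h_fwd_i : h_bwd_i].
m, d_h = 5, 4                              # sentence length, hidden units per direction
rng = np.random.default_rng(0)
h_fwd = rng.standard_normal((m, d_h))      # forward LSTM hidden states
h_bwd = rng.standard_normal((m, d_h))      # backward LSTM hidden states

h = np.concatenate([h_fwd, h_bwd], axis=-1)  # word representations, shape (m, 2*d_h)
```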
S22, the attribute enhancement module. Before the first attribute enhancement operation, an initial attribute representation is obtained. Specifically, the attribute vectors A = {a_1, a_2, ..., a_n} ∈ R^{n×d} output by BERT are first input into another bidirectional LSTM, and an average pooling method is then applied to the obtained hidden state vectors {h^a_1, ..., h^a_n}. This finally yields the initial attribute representation t^{(0)} = (1/n) Σ_{i=1}^{n} h^a_i.
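The average-pooling step can be sketched as below (hypothetical shapes; the BiLSTM states are random placeholders):

```python
import numpy as np

# Initial attribute representation: the BiLSTM hidden states over the
# n attribute words are average-pooled into a single vector t^(0).
n, d_h = 3, 4
rng = np.random.default_rng(1)
h_attr = rng.standard_normal((n, 2 * d_h))   # BiLSTM hidden states of the attribute words
t0 = h_attr.mean(axis=0)                     # initial attribute representation t^(0)
```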
Taking the lowest bidirectional enhancement conversion layer as an example: after the initial attribute representation is obtained, a vector v^{(1)} is computed from the contextualized word vectors h^{(1)} output by the bidirectional LSTM via an average pooling layer; this is referred to as the context vector. The context vector is then fused into the initial attribute representation with a basic feature fusion method (point-wise addition), which can be written as t^{(1)} = t^{(0)} + v^{(1)}. This is the enhancement operation acting on the attribute. Continuing in the same way, the final attribute representation is t^{(L)}, and this formula expands as follows:

t^{(L)} = t^{(0)} + Σ_{i=1}^{L} v^{(i)}    (1)
where v^{(i)} denotes the context vector in the i-th bidirectional enhancement conversion layer. According to equation (1), the attribute is enhanced by different context vectors in multiple bidirectional enhancement conversion layers. The attribute vector t^{(l)} has two destinations: the word conversion unit in the same layer, and the attribute enhancement module in the next layer.
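The iterative enhancement of equation (1) can be sketched as follows (a numpy sketch under the stated point-wise-addition fusion; layer contents are random placeholders):

```python
import numpy as np

def enhance_attribute(t_prev, h_layer):
    """One enhancement step, t^(l) = t^(l-1) + v^(l), where v^(l) is the
    average-pooled context vector of the layer's word representations."""
    v = h_layer.mean(axis=0)   # context vector of this layer
    return t_prev + v

# After L layers the attribute equals t^(0) plus the sum of all context
# vectors, matching the expanded form of equation (1).
rng = np.random.default_rng(2)
t = rng.standard_normal(8)                                # initial attribute t^(0)
layers = [rng.standard_normal((5, 8)) for _ in range(3)]  # 3 hypothetical layers
t_final = t
for h_layer in layers:
    t_final = enhance_attribute(t_final, h_layer)
```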
S23, the word conversion unit, which uses the same structure as the CPT module in the TNet model. The unit takes the attribute vector t^{(l)} and the word representation h_i^{(l)} as input, where h_i^{(l)} is the i-th word-level representation output by the bidirectional LSTM layer and t^{(l)} is the enhanced attribute vector. Specifically, h_i^{(l)} and t^{(l)} are first input into a fully connected layer to obtain the i-th attribute-specific word representation:

h̃_i^{(l)} = g(W[h_i^{(l)} : t^{(l)}] + b)    (2)
where g(·) is a non-linear activation function and ':' denotes a vector concatenation operation. W and b are the weight matrix and the bias, respectively. An information protection mechanism ensures that the context-dependent information captured from the bidirectional LSTM layer is not lost. This information protection mechanism enhances the transfer and reuse of features, and can be expressed as:

ĥ_i^{(l)} = h̃_i^{(l)} + h_i^{(l)}    (3)
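The word conversion unit of equations (2) and (3) can be sketched as below (a minimal numpy sketch with tanh as the activation g and random placeholder weights; the reconstructed residual form of (3) is an assumption drawn from the text's "information protection" description):

```python
import numpy as np

def word_conversion(h, t, W, b):
    """CPT-style word conversion sketch: each word vector h_i is concatenated
    with the enhanced attribute t, passed through a fully connected layer
    (tanh), and the original h_i is added back (information protection)."""
    out = np.empty_like(h)
    for i in range(h.shape[0]):
        z = np.concatenate([h[i], t])     # [h_i : t]
        h_specific = np.tanh(W @ z + b)   # attribute-specific representation, eq. (2)
        out[i] = h_specific + h[i]        # context information preserved, eq. (3)
    return out

rng = np.random.default_rng(3)
m, d, d_t = 5, 8, 8
h = rng.standard_normal((m, d))                # word representations
t = rng.standard_normal(d_t)                   # enhanced attribute vector
W = rng.standard_normal((d, d + d_t)) * 0.1    # maps the concatenation back to d dims
b = np.zeros(d)
h_new = word_conversion(h, t, W, b)
```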
Step 3, the convolutional feature extractor. A variable p_i is introduced to measure the relative position between the i-th word in the context and the current attribute word; p_i is calculated as follows:

p_i = 1 − (k + n − i)/C, if i < k + n;  p_i = 1 − (i − k)/C, if k + n ≤ i ≤ m;  p_i = 0, if i > m    (4)
where k is the index of the first word of the attribute, C is a pre-specified constant, and n is the length of the attribute phrase. When the sentence is padded, the index i may exceed the actual sentence length m. Then p_i is multiplied, as a weight, with the word representation output by the i-th word conversion unit in the L-th bidirectional enhancement conversion layer:

x_i = p_i · ĥ_i^{(L)}    (5)
At this point x_i is a word representation that incorporates the position information.
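The position weighting can be sketched as follows (a hedged reconstruction of the piecewise scheme the text describes, TNet-style; the clipping to [0, 1] is an assumption that keeps distant and padded positions from going negative):

```python
import numpy as np

def position_weights(m, k, n, C):
    """Relative-position weight p_i: weights grow towards the attribute
    span, decay after it, and are clipped to [0, 1] so that padded or
    very distant positions contribute nothing."""
    p = np.zeros(m)
    for i in range(m):
        if i < k + n:
            p[i] = 1.0 - (k + n - i) / C   # before / inside the attribute span
        else:
            p[i] = 1.0 - (i - k) / C       # after the attribute span
    return np.clip(p, 0.0, 1.0)

# attribute starts at word 3 and spans 2 words, in a sentence of 8 words
p = position_weights(m=8, k=3, n=2, C=10.0)
# x_i = p_i * word_repr_i then injects the position information (equation (5)).
```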
Then, the sentence representation with position information X = {x_1, x_2, ..., x_m} and the final attribute vector t^{(L)} are input into a gated convolutional network to generate a feature map c:

a_i = relu(W_a X_{i:i+k−1} + V_a t^{(L)} + b_a)    (6)
s_i = tanh(W_s X_{i:i+k−1} + b_s)    (7)
c_i = s_i × a_i    (8)
where k is the convolution kernel size; W_a, V_a, b_a, W_s and b_s are all learnable parameters; c_i is one entry of the feature map c; s_i is the computed emotional feature and a_i the computed attribute feature; × denotes element-wise multiplication. Then, the sentence representation z is obtained with s convolution kernels followed by max pooling:
z = {max(c^1), ..., max(c^s)}    (9)
where max(·) takes the maximum value. Finally, z is input into a fully connected layer for the final emotion prediction:

ŷ = softmax(W_f z + b_f)    (10)
where softmax is the normalized exponential function, and W_f and b_f are learnable parameters.
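One gated-convolution filter pair from equations (6)-(8) can be sketched as follows (a numpy sketch with scalar-valued filters and random placeholder weights; actual GCAE filters would be learned):

```python
import numpy as np

def gcae_filter(X, t, Wa, Va, ba, Ws, bs, k):
    """GCAE-style gating sketch: the tanh branch computes a candidate
    sentiment feature s_i; the relu branch, conditioned on the attribute
    vector t, computes the gate a_i; the feature map entry is c_i = s_i * a_i."""
    m, d = X.shape
    c = []
    for i in range(m - k + 1):
        window = X[i:i + k].ravel()                        # X_{i:i+k-1}
        a_i = max(0.0, float(Wa @ window + Va @ t + ba))   # attribute gate (relu), eq. (6)
        s_i = float(np.tanh(Ws @ window + bs))             # sentiment feature, eq. (7)
        c.append(s_i * a_i)                                # eq. (8)
    return np.array(c)

rng = np.random.default_rng(4)
m, d, k = 6, 8, 3
X = rng.standard_normal((m, d))    # position-weighted word representations
t = rng.standard_normal(d)         # final attribute vector t^(L)
c = gcae_filter(X, t,
                Wa=rng.standard_normal(k * d), Va=rng.standard_normal(d), ba=0.0,
                Ws=rng.standard_normal(k * d), bs=0.0, k=k)
# max(c) over each of the s filters is then collected into the sentence vector z.
```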
Step 4. The mutual enhancement conversion network for fine-grained emotion analysis described herein can be trained in an end-to-end manner within a supervised learning framework to optimize all parameters Θ. The cross entropy with an L2 regularization term is used as the loss function, defined as:

L(Θ) = − Σ_{i=1}^{O} y_i log ŷ_i + λ ||Θ||²    (11)

where y_i denotes the true probability that the given sentence is labeled with emotion class i, ŷ_i denotes the estimated probability, O denotes the number of emotion polarity classes, and λ is the coefficient of the L2 regularization term.
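The objective can be sketched numerically as below (a minimal sketch; the small epsilon guarding log(0) is an implementation assumption, not part of the patent's formula):

```python
import numpy as np

def loss(y_true, y_prob, params, lam):
    """Cross entropy over the O emotion classes plus an L2 regularization
    term weighted by lambda, mirroring the stated training objective."""
    ce = -float(np.sum(y_true * np.log(y_prob + 1e-12)))       # cross-entropy term
    l2 = lam * sum(float(np.sum(p ** 2)) for p in params)      # lambda * ||Theta||^2
    return ce + l2

y_true = np.array([0.0, 1.0, 0.0])   # gold polarity (one-hot over O = 3 classes)
y_prob = np.array([0.2, 0.7, 0.1])   # softmax output of the network
value = loss(y_true, y_prob, [np.ones(4)], lam=0.01)
```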
The method has the advantages of improving attribute representation learning and realizing iterative interactive learning between the attribute and the context. First, the attribute enhancement module refines attribute representation learning with semantic features extracted from the sentence, giving the attributes richer information. Second, the network iteratively enhances the representations of attributes and contexts through a hierarchical structure to achieve more accurate emotion prediction.
Drawings
FIG. 1 is an overall architecture of a mutually enhanced conversion network for fine-grained sentiment analysis.
Fig. 2 is a structural diagram of the first bidirectional enhanced conversion module.
Fig. 3 is a block diagram of a word conversion unit.
Detailed Description
In the following, preferred embodiments of the present invention will be further explained with reference to fig. 1 to 3, wherein the dashed arrows in fig. 1 represent the conversion of attributes, and the solid arrows represent the conversion of sentences.
The invention comprises a BERT layer, a bidirectional enhanced conversion layer and a convolution characteristic extractor which are connected in sequence;
the BERT layer generates a word representation of the sequence using pre-trained BERT; BERT is an english abbreviation expressed by the bidirectional coder of the transform model, which is a common abbreviation of those skilled in the art.
The bidirectional enhancement conversion layer comprises a bidirectional LSTM layer, an attribute enhancement module and a group of word conversion units, the bidirectional LSTM layer being connected to the attribute enhancement module and to the word conversion units respectively. Iterative interactive learning of attributes and contexts is realized with a hierarchical structure, each computing layer being a bidirectional enhancement conversion component. Attribute information is added in the process of extracting emotional features through GCAE, where GCAE is the English abbreviation of the gated convolutional network with attribute embedding, a common abbreviation for those skilled in the art. The relation between the attribute and the context is thereby further strengthened, and relative position information is introduced to better extract the emotional features. Compared with the prior art, the feature extractor is changed from a CNN to GCAE; CNN is the English abbreviation of convolutional neural network, a common abbreviation for those skilled in the art.
The bidirectional LSTM layer is used for capturing long-range dependencies and position information in the text; its encoding result is sent to two destinations, one being the attribute enhancement module and the other the word conversion unit;
the attribute enhancement module receives the attribute representation and the average of the bidirectional LSTM layer coding result, and finally outputs an enhanced attribute representation which is input into the word conversion unit;
the attribute enhancement module utilizes the extracted context characteristics to enhance the attributes;
the word conversion unit receives the encoding result from the bi-directional LSTM layer and the enhanced attribute representation from the attribute enhancement module.
The convolutional feature extractor uses a GCAE network that receives the attribute information to control the transfer of the sentence's emotional features, which further strengthens the link between the attribute and the context; in addition, relative position information is introduced to better extract the emotional features. GCAE is the English abbreviation of the gated convolutional network with attribute embedding, a common abbreviation for those skilled in the art.
The process of reasoning and training of the invention is as follows:
Step 1, the BERT layer, which uses pre-trained BERT to generate word representations of the sequence.
Step 2, the bidirectional enhancement conversion layers. Each bidirectional enhancement conversion layer comprises three parts: a bidirectional LSTM layer, an attribute enhancement module and a group of word conversion units. The bidirectional LSTM layer first generates contextualized word representations from the input. The attribute enhancement module then uses these word representations to enhance the attribute representation. Finally, the word conversion units generate attribute-specific word representations based on the contextualized word representations and the enhanced attribute representation.
S21, learning the context dependencies of the text through the bidirectional LSTM layer. As shown in fig. 1, the bidirectional enhancement conversion layer is repeated multiple times in a hierarchical structure. The input of the bidirectional LSTM in the lowest bidirectional enhancement conversion layer is the contextual representation output by the BERT layer. The input of the bidirectional LSTM in each subsequent bidirectional enhancement conversion layer comes from the output of the word conversion units in the previous bidirectional enhancement conversion layer.
The word representations output by the bidirectional LSTM can be written as h = {h_1, h_2, ..., h_m}. The forward LSTM outputs a set of hidden state vectors {h→_1, ..., h→_m} ∈ R^{m×d_h}, where d_h denotes the number of hidden units. Similarly, the backward LSTM also outputs a set of hidden state vectors {h←_1, ..., h←_m} ∈ R^{m×d_h}. Finally, the word representation of the bidirectional LSTM output is obtained by concatenating the two hidden state lists, h_i = [h→_i : h←_i], where h_i ∈ R^{2d_h}.
S22, the attribute enhancement module. Before the first attribute enhancement operation, an initial attribute representation is obtained. Specifically, the attribute vectors A = {a_1, a_2, ..., a_n} ∈ R^{n×d} output by BERT are first input into another bidirectional LSTM, and an average pooling method is then applied to the obtained hidden state vectors to obtain the initial attribute representation t^{(0)}.
Taking the lowest bidirectional enhancement conversion layer as an example: after the initial attribute representation is obtained, a vector v^{(1)} is computed from the contextualized word vectors h^{(1)} output by the bidirectional LSTM via an average pooling layer; this is referred to as the context vector. The context vector is then fused into the initial attribute representation with a basic feature fusion method (point-wise addition), which can be written as t^{(1)} = t^{(0)} + v^{(1)}. This is the enhancement operation acting on the attribute. Continuing in the same way, the final attribute representation is t^{(L)}, and this formula expands as follows:

t^{(L)} = t^{(0)} + Σ_{i=1}^{L} v^{(i)}    (1)
where v^{(i)} denotes the context vector in the i-th bidirectional enhancement conversion layer. According to equation (1), the attribute is enhanced by different context vectors in multiple bidirectional enhancement conversion layers. The attribute vector t^{(l)} has two destinations: the word conversion unit in the same layer, and the attribute enhancement module in the next layer.
S23, the word conversion unit, which uses the same structure as the CPT module in the TNet model. TNet is the English abbreviation of the attribute-oriented transformation network and CPT that of the context-preserving transformation module, both common abbreviations for those skilled in the art. The unit takes the attribute vector t^{(l)} and the word representation h_i^{(l)} as input, where h_i^{(l)} is the i-th word-level representation output by the bidirectional LSTM layer (LSTM is the English abbreviation of the long short-term memory network, a common abbreviation for those skilled in the art) and t^{(l)} is the enhanced attribute vector. Specifically, h_i^{(l)} and t^{(l)} are first input into a fully connected layer to obtain the i-th attribute-specific word representation:

h̃_i^{(l)} = g(W[h_i^{(l)} : t^{(l)}] + b)    (2)
where g(·) is a non-linear activation function and ':' denotes a vector concatenation operation. W and b are the weight matrix and the bias, respectively. An information protection mechanism ensures that the context-dependent information captured from the bidirectional LSTM layer is not lost. This information protection mechanism enhances the transfer and reuse of features, and can be expressed as:

ĥ_i^{(l)} = h̃_i^{(l)} + h_i^{(l)}    (3)
Step 3, the convolutional feature extractor. A variable p_i is introduced to measure the relative position between the i-th word in the context and the current attribute word; p_i is calculated as follows:

p_i = 1 − (k + n − i)/C, if i < k + n;  p_i = 1 − (i − k)/C, if k + n ≤ i ≤ m;  p_i = 0, if i > m    (4)
where k is the index of the first word of the attribute, C is a pre-specified constant, and n is the length of the attribute phrase. When the sentence is padded, the index i may exceed the actual sentence length m. Then p_i is multiplied, as a weight, with the word representation output by the i-th word conversion unit in the L-th bidirectional enhancement conversion layer:

x_i = p_i · ĥ_i^{(L)}    (5)
At this point x_i is a word representation that incorporates the position information.
The sentence representation with position information X = {x_1, x_2, ..., x_m} and the final attribute vector t^{(L)} are input into a gated convolutional network to generate a feature map c:

a_i = relu(W_a X_{i:i+k−1} + V_a t^{(L)} + b_a)    (6)
s_i = tanh(W_s X_{i:i+k−1} + b_s)    (7)
c_i = s_i × a_i    (8)
where k is the convolution kernel size; W_a, V_a, b_a, W_s and b_s are all learnable parameters; c_i is one entry of the feature map c; s_i is the computed emotional feature and a_i the computed attribute feature; × denotes element-wise multiplication. Then, the sentence representation z is obtained with s convolution kernels followed by max pooling:
z = {max(c^1), ..., max(c^s)}    (9)
where max(·) takes the maximum value. Finally, z is input into a fully connected layer for the final emotion prediction:

ŷ = softmax(W_f z + b_f)    (10)
where softmax is the normalized exponential function, and W_f and b_f are learnable parameters.
Step 4. The mutual enhancement conversion network for fine-grained emotion analysis described herein can be trained in an end-to-end manner within a supervised learning framework to optimize all parameters Θ. The cross entropy with an L2 regularization term is used as the loss function, defined as:

L(Θ) = − Σ_{i=1}^{O} y_i log ŷ_i + λ ||Θ||²    (11)
Claims (2)
1. The mutual enhancement conversion network for fine-grained emotion analysis is characterized in that:
the device comprises a BERT layer, a bidirectional enhanced conversion layer and a convolution feature extractor which are connected in sequence;
the BERT layer generates a word representation of the sequence using pre-trained BERT;
the bidirectional enhancement conversion layer comprises a bidirectional LSTM layer, an attribute enhancement module and a group of word conversion units, wherein the bidirectional LSTM layer is respectively connected with the attribute enhancement module and the word conversion units;
the bidirectional LSTM layer is used for capturing long-range dependencies and position information in the text; its encoding result is sent to two destinations, one being the attribute enhancement module and the other the word conversion unit;
the attribute enhancement module receives the attribute representation and the average of the bidirectional LSTM layer coding result, and finally outputs an enhanced attribute representation which is input into the word conversion unit;
the attribute enhancement module utilizes the extracted context characteristics to enhance the attributes;
the word conversion unit receives the encoding result from the bidirectional LSTM layer and the enhanced attribute representation from the attribute enhancement module;
the convolutional feature extractor uses a GCAE network that receives the attribute information to control the transfer of the sentence's emotional features, which further strengthens the link between the attributes and the context; in addition, relative position information is introduced to better extract the emotional features.
2. The mutual enhancement conversion network for fine-grained emotion analysis is characterized in that the reasoning and training process comprises the following steps:
step 1, the BERT layer, which uses pre-trained BERT to generate word representations of the sequence; supposing that the sentence contains m words and the attribute contains n words, the BERT layer yields the vector representation of the sentence X = {x_1, x_2, ..., x_m} ∈ R^{m×d} and the vector representation of the attribute A = {a_1, a_2, ..., a_n} ∈ R^{n×d}, where d denotes the dimension of the BERT output layer;
step 2, a bidirectional enhanced conversion layer, wherein a bidirectional LSTM layer generates contextualized word representation according to input; the attribute enhancement module then further enhances the attribute representation with these word representations; finally, the word conversion unit generates an attribute-specific word representation based on the contextualized word representation and the enhanced attribute representation;
s21, learning the context dependency relationship of the text through the bidirectional LSTM layer; the bidirectional enhancement conversion layer is repeated a plurality of times through the hierarchical structure, and the input of the bidirectional LSTM in the bottommost bidirectional enhancement conversion layer is the context representation of the output of the BERT layer;
the input of the bidirectional LSTM in the next bidirectional enhancement conversion layer is from the output of the word conversion unit in the previous bidirectional enhancement conversion layer;
the forward LSTM outputs a set of hidden state vectors {h→_1, ..., h→_m} ∈ R^{m×d_h}, where d_h denotes the number of hidden units; the backward LSTM also outputs a set of hidden state vectors {h←_1, ..., h←_m} ∈ R^{m×d_h}; concatenating the two hidden state lists results in the word representation of the bidirectional LSTM output, h_i = [h→_i : h←_i], where h_i ∈ R^{2d_h};
S22, the attribute enhancement module; before the first attribute enhancement operation, an initial attribute representation is obtained; first, the attribute vectors A = {a_1, a_2, ..., a_n} ∈ R^{n×d} output by BERT are input into another bidirectional LSTM, and an average pooling method is then applied to the obtained hidden state vectors to obtain the initial attribute representation t^{(0)};
after the initial attribute representation is obtained, a vector v^{(1)} is obtained by average pooling over the contextualized word vectors h^{(1)} output by the bidirectional LSTM; it is referred to as the context vector; the context vector is then fused into the initial attribute representation with a point-wise additive feature fusion method, giving the enhancement operation on the attribute, represented as t^{(1)} = t^{(0)} + v^{(1)}, and after L layers t^{(L)} = t^{(0)} + Σ_{i=1}^{L} v^{(i)} (1);
according to formula (1), the attributes are reinforced by different context vectors in a plurality of bidirectional enhancement conversion layers;
the attribute vector t^{(l)} has two destinations: the word conversion unit in the same bidirectional enhancement conversion layer, and the attribute enhancement module in the next bidirectional enhancement conversion layer;
S23, the word conversion unit, which takes the attribute vector t^{(l)} and the word representation h_i^{(l)} as input, where h_i^{(l)} is the i-th word-level representation output by the bidirectional LSTM layer and t^{(l)} is the enhanced attribute vector;
first, h_i^{(l)} and t^{(l)} are input into a fully connected layer to obtain the i-th attribute-specific word representation:

h̃_i^{(l)} = g(W[h_i^{(l)} : t^{(l)}] + b)    (2)

where g(·) is a non-linear activation function and ':' denotes a vector concatenation operation; W and b are the weight matrix and the bias, respectively; an information protection mechanism is used to ensure that the context-dependent information captured from the bidirectional LSTM layer is not lost; this information protection mechanism enhances the transfer and reuse of features, expressed as:

ĥ_i^{(l)} = h̃_i^{(l)} + h_i^{(l)}    (3)
step 3, the convolutional feature extractor; a variable p_i is introduced to measure the relative position between the i-th word in the context and the current attribute word; p_i is calculated as follows:

p_i = 1 − (k + n − i)/C, if i < k + n;  p_i = 1 − (i − k)/C, if k + n ≤ i ≤ m;  p_i = 0, if i > m    (4)

where k is the index of the first word of the attribute, C is a pre-specified constant, and n is the length of the attribute phrase; when the sentence is padded, the index i may exceed the actual sentence length m;
p_i is multiplied, as a weight, with the word representation output by the i-th word conversion unit in the L-th bidirectional enhancement conversion layer:

x_i = p_i · ĥ_i^{(L)}    (5)

at this point x_i is a word representation that incorporates the position information;
then, the sentence representation with position information X = {x_1, x_2, ..., x_m} and the final attribute vector t^{(L)} are input into a gated convolutional network to generate a feature map c:

a_i = relu(W_a X_{i:i+k−1} + V_a t^{(L)} + b_a)    (6)
s_i = tanh(W_s X_{i:i+k−1} + b_s)    (7)
c_i = s_i × a_i    (8)
where k is the convolution kernel size; W_a, V_a, b_a, W_s and b_s are all learnable parameters; × denotes element-wise multiplication; c_i is one entry of the feature map c; s_i is the computed emotional feature and a_i the computed attribute feature;
the sentence representation z is obtained by s convolution kernels and applying the maximum pooling method:
z = {max(c^1), ..., max(c^s)}    (9)
where max(·) takes the maximum value; finally, z is input into a fully connected layer for the final emotion prediction:

ŷ = softmax(W_f z + b_f)    (10)
where softmax is the normalized exponential function, and W_f and b_f are learnable parameters;
step 4, the mutual enhancement conversion network for fine-grained emotion analysis described herein can be trained in an end-to-end manner within a supervised learning framework to optimize all parameters Θ; the cross entropy with an L2 regularization term is used as the loss function, defined as:

L(Θ) = − Σ_{i=1}^{O} y_i log ŷ_i + λ ||Θ||²    (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010951154.4A CN112100376B (en) | 2020-09-11 | 2020-09-11 | Mutual enhancement conversion method for fine-grained emotion analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112100376A true CN112100376A (en) | 2020-12-18 |
CN112100376B CN112100376B (en) | 2022-02-08 |
Family
ID=73752087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010951154.4A Active CN112100376B (en) | 2020-09-11 | 2020-09-11 | Mutual enhancement conversion method for fine-grained emotion analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112100376B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170046797A1 (en) * | 2003-04-07 | 2017-02-16 | 10Tales, Inc. | Method, system and software for associating attributes within digital media presentations |
CN110489554A (en) * | 2019-08-15 | 2019-11-22 | 昆明理工大学 | Property level sensibility classification method based on the mutual attention network model of location aware |
CN111414476A (en) * | 2020-03-06 | 2020-07-14 | 哈尔滨工业大学 | Attribute-level emotion analysis method based on multi-task learning |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113705197A (en) * | 2021-08-30 | 2021-11-26 | 北京工业大学 | Fine-grained emotion analysis method based on position enhancement |
CN113705197B (en) * | 2021-08-30 | 2024-04-02 | 北京工业大学 | Fine granularity emotion analysis method based on position enhancement |
CN118013962A (en) * | 2024-04-09 | 2024-05-10 | 华东交通大学 | Chinese chapter connective word recognition method based on two-way sequence generation |
Also Published As
Publication number | Publication date |
---|---|
CN112100376B (en) | 2022-02-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||