CN111353040A - GRU-based attribute level emotion analysis method - Google Patents


Info

Publication number: CN111353040A
Authority: CN (China)
Prior art keywords: sentence, layer, word, vector, model
Legal status: Pending
Application number: CN201910459539.6A
Other languages: Chinese (zh)
Inventors: 邢永平, 禹晶, 肖创柏
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN201910459539.6A
Publication of CN111353040A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Abstract

The invention discloses an attribute-level sentiment analysis method. Sentiment analysis is a fundamental task in natural language processing, and attribute-level sentiment analysis is an important subtopic of it. Different words in a sentence contribute differently to the sentiment polarity of a given attribute (aspect) of the sentence, so the key problem is how to model the relationship between the attribute, the words of the sentence, and the meaning of the sentence as a whole. The method models the sentence information with two recurrent networks and introduces an attention mechanism to fuse the attribute information, with the aim of achieving better results. Experiments on a public data set show that the proposed algorithm achieves better results without cumbersome feature engineering.

Description

GRU-based attribute level emotion analysis method
Technical Field
The invention relates to the field of the Internet, and in particular to a GRU-based attribute-level sentiment analysis method.
Background
With the rapid development of the Internet, the volume of text information keeps growing, and extracting useful information from massive text has become increasingly important. This demand has objectively promoted the development of natural language processing, and deep learning has opened new directions for the field. Sentiment analysis (also known as opinion mining) is a fundamental but important task in natural language processing: enterprises can use customer comments on their products to obtain timely feedback that informs decision making. Therefore, how to extract sentiment information from large amounts of text data has become an important research topic in natural language processing in recent years.
Current research on text sentiment analysis is based mainly either on sentiment dictionaries or on machine learning. Dictionary-based methods depend heavily on the sentiment dictionary, which strongly affects the analysis; Yang Ding et al., for example, represent texts on the basis of a sentiment dictionary and build a classifier on naive Bayes theory. The other approach is based on machine learning, which trains a sentiment classifier on manually labeled data; the excellent classification performance of the support vector machine has been verified in this setting. Both approaches require manually labeled data to build the sentiment dictionary and the feature engineering, work that is cumbersome and complex, whereas deep learning algorithms can solve this problem well. Deep learning has achieved great success in natural language processing in recent years, for example in machine translation and question-answering systems, and it has also been applied to sentiment analysis: Socher et al. proposed a deep learning method based on a semi-supervised recursive autoencoder (RAE) for text sentiment classification, and Jurgovsky et al. used convolutional neural networks (CNN) for the same task. Text sentiment analysis can be divided into the document level, the sentence level, and the word level. The main subject here is attribute-based (aspect-based) sentiment analysis, because the sentiment polarity of different aspects in the same sentence may differ: in "the voice quality of this phone is not good, but the battery life is long," the evaluation of voice quality is negative but that of battery life is positive. Wang et al. proposed the AE-LSTM, AT-LSTM, and ATAE-LSTM recurrent-network algorithms for aspect-granularity sentiment analysis, which fuse aspect information into a long short-term memory network (LSTM) to improve classification accuracy. The SVM-dep algorithm divides features into those related to the aspect and those unrelated to it and extracts them separately to perform attribute-level sentiment analysis, achieving higher accuracy than a support vector machine classifier without attribute features.
The attention mechanism selectively focuses on the important parts of the information being processed while ignoring the parts that are less relevant to the target, concentrating limited resources on the processing of important information. Attention mechanisms have achieved great success in fields such as image recognition and machine translation. In the context of this work, attribute-related information can be given more attention when performing attribute-based sentiment analysis, thereby improving sentiment classification accuracy.
Recurrent neural networks (RNN) are widely used in natural language processing because their memory allows them to process contextual information; typical variants include the long short-term memory network (LSTM), the gated recurrent unit (GRU), and the MUT networks. The invention proposes an aspect-granularity sentiment analysis algorithm based on GRU networks and fuses the attribute information into the model through an attention mechanism, so that the model pays more attention to the influence of the attribute on sentiment classification and the classification accuracy is improved.
Disclosure of Invention
In view of this, the invention aims to provide a GRU-network-based attribute-level sentiment analysis model and method; the Att-CGRU attribute-level sentiment classification algorithm is used to improve sentiment classification accuracy.
To achieve this purpose, the GRU-network-based attribute-level sentiment analysis model is constructed as follows:
In the Att-CGRU model, the introduction of an attention mechanism reflects the important influence of the attribute on the sentiment polarity of the whole sentence. For sequence problems, the encoder-decoder architecture is a very common model: depending on the algorithm and the task, it assigns different weights to the hidden state vectors produced by the encoder and extracts a vector representation that characterizes the input data as well as possible, improving model performance. The specific structure of the Att-CGRU model is shown in Figure 1 of the specification. The model comprises five components: an input layer, an embedding layer, a GRU layer, an attention layer, and an output layer. The input layer feeds short texts, i.e., sentences, into the model; the embedding layer maps each word of the sentence to a vector; the GRU layer extracts feature information from the word embeddings; the attention layer fuses the word-level feature information into sentence-level feature information through weight computation, producing a sentence feature vector; finally, the sentence feature vector is classified.
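For concreteness, the following is a minimal PyTorch sketch of this five-part architecture. It is an illustrative reconstruction, not code from the patent: the class name AttCGRU, the use of nn.GRU, the dimension choices, and the mean-pooled aspect vector (the text describes both summing and averaging for multi-word aspects) are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class AttCGRU(nn.Module):
    """Sketch: embedding, left/right GRUs, attention, softmax classifier."""

    def __init__(self, vocab_size, emb_dim=200, hidden_dim=100, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Each position is the word vector concatenated with the aspect
        # vector (eq. 2), hence input size 2 * emb_dim.
        self.gru_left = nn.GRU(2 * emb_dim, hidden_dim, batch_first=True)
        self.gru_right = nn.GRU(2 * emb_dim, hidden_dim, batch_first=True)
        # Attention parameters W_h, W_v, w (eqs. 3-4).
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_v = nn.Linear(emb_dim, emb_dim, bias=False)
        self.w = nn.Linear(hidden_dim + emb_dim, 1, bias=False)
        # Final projection W_p, W_x (eq. 6) and classifier W_o, b_o (eq. 7).
        self.W_p = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_x = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, words, l, r):
        # words: (1, T) word indices; in 0-based indexing the aspect occupies
        # positions l .. r-2, i.e. x_{l+1}..x_{r-1} in the text's notation.
        emb = self.embedding(words)                              # (1, T, emb_dim)
        e_asp = emb[:, l:r - 1, :].mean(dim=1, keepdim=True)     # aspect vector
        e = torch.cat([emb, e_asp.expand_as(emb)], dim=-1)       # eq. 2 per position
        h_left, _ = self.gru_left(e[:, :r - 1, :])               # h_1 .. h_{r-1}
        h_right, _ = self.gru_right(e[:, l:, :].flip(1))         # reads x_T back to x_{l+1}
        H = torch.cat([h_left, h_right], dim=1)                  # hidden matrix H
        # Attention (eqs. 3-5): score every hidden state against the aspect.
        E_asp = e_asp.expand(-1, H.size(1), -1)                  # e_asp repeated
        M = torch.tanh(torch.cat([self.W_h(H), self.W_v(E_asp)], dim=-1))
        a = torch.softmax(self.w(M).squeeze(-1), dim=-1)         # weights a_t
        r_vec = torch.bmm(a.unsqueeze(1), H).squeeze(1)          # r = H a_t
        h_sum = h_left[:, -1, :] + h_right[:, -1, :]             # h_{r-1} + h_{l+1}
        o = torch.tanh(self.W_p(r_vec) + self.W_x(h_sum))        # eq. 6
        return self.out(o)                                       # logits for eq. 7
```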
1.1 input layer
Each sentence whose sentiment polarity is to be classified is fed to the input layer. Assuming the sentence length is T, the sentence can be expressed as s = {x_1, x_2, ..., x_T}, where x_i denotes the i-th word of the sentence.
1.2 embedding layer
The embedding layer receives a sentence s = {x_1, x_2, ..., x_T} containing T words from the input layer and obtains the corresponding word vector e_i of each word.
The word vector of each word is first obtained from the word-embedding matrix W^{wrd} ∈ R^{d_w × |V|}, where |V| is the size of the vocabulary and d_w is the word-vector dimension, which can be specified. Then

emb_i = W^{wrd} v_i    (1)
where v_i is a one-hot vector of length |V| whose i-th element is 1 and all other elements are 0. The word vector emb_asp of the aspect is obtained in the same way; when the aspect consists of several words, the values in each dimension of the constituent word vectors are added to obtain the aspect word vector. Then emb_i and emb_asp are concatenated to obtain the final word vector e_i:

e_i = [emb_i ; emb_asp]    (2)

Finally, e = {e_1, e_2, ..., e_T} is input to the next layer.
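As a small numerical illustration (not from the patent; the tiny vocabulary and dimensions are invented for the demo), the following NumPy snippet checks that multiplying W^{wrd} by a one-hot vector in eq. (1) is just a column lookup, then builds a two-word aspect vector and the concatenation of eq. (2):

```python
import numpy as np

V, d_w = 5, 3                            # toy vocabulary size and vector dim
W_wrd = np.random.randn(d_w, V)          # word-embedding matrix, d_w x |V|

v_2 = np.zeros(V); v_2[2] = 1.0          # one-hot vector for word index 2
emb_2 = W_wrd @ v_2                      # eq. (1)
assert np.allclose(emb_2, W_wrd[:, 2])   # identical to a column lookup

emb_asp = W_wrd[:, 0] + W_wrd[:, 4]      # two-word aspect: dimension-wise sum
e_2 = np.concatenate([emb_2, emb_asp])   # eq. (2): concatenated word vector
print(e_2.shape)                         # (6,) = 2 * d_w
```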
1.3 GRU layer
In the GRU layer, the attribute serves as the boundary point: the sentence is divided into a left part and a right part so as to model the context on each side of the attribute, with the structure shown in Figure 1, where {x_{l+1}, x_{l+2}, ..., x_{r-1}} denotes the aspect, {x_1, x_2, ..., x_l} denotes the words before the attribute, and {x_r, x_{r+1}, ..., x_T} denotes the words after it. The left and right sequences are fed into the left and right networks, whose hidden layers produce {h_1, h_2, ..., h_{r-1}} and {h_{l+1}, h_{l+2}, ..., h_T}, respectively.
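The split can be made concrete with a toy example (the sentence and the boundary values l and r are invented for illustration; indices follow the text's 1-based convention). Consistent with the hidden-state ranges above, each network also reads the aspect words themselves, the right network in reverse order:

```python
# 1-based: x1="the", x2="battery", x3="life", x4="is", x5="long";
# with l = 1 and r = 4 the aspect is x_{l+1}..x_{r-1} = "battery life".
sentence = ["the", "battery", "life", "is", "long"]
l, r = 1, 4
left_seq = sentence[:r - 1]      # x_1..x_{r-1} -> ["the", "battery", "life"]
right_seq = sentence[l:][::-1]   # x_{l+1}..x_T read right-to-left
print(left_seq, right_seq)       # right_seq: ["long", "is", "life", "battery"]
```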
1.4 attention layer
An attention mechanism is introduced into the model to obtain a better classification effect: the words in the parts before and after the attribute relate to the attribute in different degrees, so more attention should be paid to information closely related to the attribute. The attention mechanism is implemented as follows:
M = tanh([W_h H ; W_v E_asp])    (3)
a_t = softmax(w^T M)    (4)
r = H a_t    (5)

where a_t denotes the attention weight coefficients, E_asp denotes e_asp repeated until its dimension matches that of H, H is the matrix formed by the hidden-layer outputs of the model, r denotes the weighted vector representing the meaning of the sentence, and W_h, W_v, and w are parameter matrices. A vector o that finally represents the sentence information is then obtained as

o = tanh(W_p r + W_x h)    (6)

where h denotes the sum of the vectors h_{r-1} and h_{l+1}.
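The following shape walkthrough of eqs. (3)-(6) on random tensors is a sketch under assumed dimensions: hidden size d = 100 and word-vector size d_a = 200 are taken from the hyperparameters reported later in the text, while N = 8 hidden states is arbitrary.

```python
import torch

d, d_a, N = 100, 200, 8
H = torch.randn(d, N)                    # hidden states, one column per word
e_asp = torch.randn(d_a, 1)
E_asp = e_asp.expand(d_a, N)             # e_asp repeated N times
W_h, W_v = torch.randn(d, d), torch.randn(d_a, d_a)
w = torch.randn(d + d_a)

M = torch.tanh(torch.cat([W_h @ H, W_v @ E_asp], dim=0))  # eq. (3)
a_t = torch.softmax(w @ M, dim=0)                         # eq. (4): N weights
r = H @ a_t                                               # eq. (5): length d
W_p, W_x = torch.randn(d, d), torch.randn(d, d)
h = torch.randn(d)                       # stands in for h_{r-1} + h_{l+1}
o = torch.tanh(W_p @ r + W_x @ h)                         # eq. (6)
print(a_t.shape, r.shape, o.shape)       # N weights -> sentence vector o
```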
1.5 output layer
Finally, the output o of the attention layer is fed into the classifier

ŷ = softmax(W_o o + b_o)    (7)

to realize the sentiment polarity classification, where W_o and b_o are the parameters to be trained.
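Eq. (7) in isolation, as a sketch with random stand-in values (the dimensions and names are assumptions for the demo):

```python
import torch

d = 100                                        # sentence-vector dimension (assumed)
o = torch.randn(d)                             # output of the attention layer
W_o, b_o = torch.randn(3, d), torch.randn(3)   # classifier parameters to be trained
y_hat = torch.softmax(W_o @ o + b_o, dim=0)    # eq. (7): distribution over 3 polarities
print(["positive", "negative", "neutral"][y_hat.argmax()])
```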
The specific experimental steps of the method are as follows:
Step S1: the collected Twitter data set used in the invention is first input to the input layer of the Att-CGRU model.
Step S2: the data obtained in step S1 are input to the embedding layer to obtain the word vector of each word of the input sentence.
Step S3: in the GRU layer, the word vector of each word of the sentence is obtained as in S2; then, with the attribute words {x_{l+1}, x_{l+2}, ..., x_{r-1}} as the boundary point, the word vectors of the words on the left, {x_1, x_2, ..., x_l}, and on the right, {x_r, x_{r+1}, ..., x_T}, are input into the left and right GRU networks to model the context on each side of the attribute words, and the hidden layers produce the outputs {h_1, h_2, ..., h_{r-1}} and {h_{l+1}, h_{l+2}, ..., h_T}, respectively.
Step S4: from the output of S3, a vector o that represents the sentence information is computed according to the following formulas:
M = tanh([W_h H ; W_v E_asp])
a_t = softmax(w^T M)
r = H a_t

where r denotes the weighted vector characterizing the meaning of the sentence; a_t denotes the attention weight coefficients, obtained by feeding w^T M into the softmax function; M denotes the matrix obtained from H, the matrix formed by the hidden-layer outputs of the GRU layer; E_asp denotes the attribute word vector e_asp repeated until its dimension matches that of H; tanh denotes the tanh function; and W_h, W_v, and w are parameter matrices. Finally the vector o that represents the sentence information is obtained as

o = tanh(W_p r + W_x h)

where h denotes the sum of the vectors h_{r-1} and h_{l+1}, h_{r-1} denotes the hidden-layer output corresponding to the (r-1)-th word in the left GRU network, h_{l+1} denotes the hidden-layer output corresponding to the (l+1)-th word in the right GRU network, and W_p and W_x denote parameter matrices.
Step S5: the output layer feeds the vector o representing the sentence information into the softmax function to obtain the predicted sentiment polarity ŷ, specifically

ŷ = softmax(W_o o + b_o)

where W_o and b_o are both parameter matrices.
Step S6: from the output of S5 and the actual class y of each sentence, the loss function value is computed as

loss = -Σ_i Σ_j y_i^j log(ŷ_i^j) + λ‖θ‖²

where λ is the regularization coefficient. Training iterates through the error back-propagation algorithm until the Accuracy reaches its maximum; the optimization algorithm within back-propagation is AdaGrad with an initialization coefficient of 0.01.
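A minimal sketch of this training step in PyTorch, using a stand-in linear model in place of the full network; the values λ = 0.001 and the AdaGrad rate 0.01 come from the text, everything else is illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(100, 3)      # stand-in for the full Att-CGRU network
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)  # stated rate

logits = model(torch.randn(20, 100))   # one batch of 20 sentence vectors o
targets = torch.randint(0, 3, (20,))   # gold polarities y
ce = F.cross_entropy(logits, targets)  # -sum_i sum_j y_i^j log(y_hat_i^j)
l2 = sum((p ** 2).sum() for p in model.parameters())  # ||theta||^2
loss = ce + 0.001 * l2                 # lambda = 0.001 as stated in the text

optimizer.zero_grad()
loss.backward()                        # error back-propagation
optimizer.step()
```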
Compared with the prior art, the invention has the following technical effects.
In the experiments, the method is compared with traditional machine-learning methods (the support vector machine algorithm and the SVM-dep algorithm) and with deep-learning methods (AdaRNN-w/E, AdaRNN-comb, and TC-LSTM). Each model is evaluated by Accuracy; the results are shown in the table below:
TABLE 1 Experimental results (the table is present only as images in the source document and is not reproduced here)
Drawings
FIG. 1 is a detailed structure diagram of the Att-CGRU model.
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in their description are briefly introduced below. The structure diagram of the Att-CGRU model in FIG. 1 comprises five parts: an input layer, an embedding layer, a GRU layer, an attention layer, and an output layer. The input layer feeds short texts, i.e., sentences, into the model; the embedding layer maps each word of the sentence to a vector; the GRU layer extracts feature information from the word embeddings; the attention layer fuses the word-level feature information into sentence-level feature information through weight computation, producing a sentence feature vector; finally, the sentence feature vector is classified.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the implementation of the present invention, a data set is first collected; the data set used in the invention is a basic data set collected from Twitter.
The specific experimental steps of the algorithm are as follows:
Step S1: the data set used here is a basic data set collected from Twitter. All training and test data have been manually labeled. The training data set is used to train the model, and the test data set is used to evaluate its performance. The training data set contains 6248 sentences and the test data set contains 692 sentences; in both, positive, negative, and neutral examples account for 25%, 25%, and 50%, respectively.
Step S2: the model comprises five components: an input layer, an embedding layer, a GRU layer, an attention layer, and an output layer. The input layer feeds a short text, i.e., a sentence, into the model; the sentence can be expressed as s = {x_1, x_2, ..., x_T}, where x_i denotes the i-th word of the sentence and T denotes the sentence length. The embedding layer maps each word x_i of the sentence to a word vector e_i = [emb_i ; emb_asp] from a word-vector dictionary, where emb_i denotes the word vector of the i-th word in the dictionary and emb_asp denotes the word vector of the attribute word; when the attribute consists of several words, the mean of their word vectors is taken. On the basis of the semantic feature information obtained from the embedding layer, the GRU layer takes the attribute as the boundary point and divides the sentence into a left part and a right part to model the context of the attribute; the structure is shown in FIG. 1, where {x_{l+1}, x_{l+2}, ..., x_{r-1}} denotes the aspect, {x_1, x_2, ..., x_l} denotes the words before the attribute, and {x_r, x_{r+1}, ..., x_T} denotes the words after it. The left and right sequences are input into the left and right GRU networks, whose hidden layers produce {h_1, h_2, ..., h_{r-1}} and {h_{l+1}, h_{l+2}, ..., h_T}, respectively. The attention layer fuses the word-level feature information into sentence-level feature information through weight computation to produce a sentence feature vector, which is finally classified. The concrete implementation is

M = tanh([W_h H ; W_v E_asp])
a_t = softmax(w^T M)
r = H a_t

where r denotes the weighted vector characterizing the meaning of the sentence; a_t denotes the attention weight coefficients, obtained by feeding w^T M into the softmax function; M denotes the matrix obtained from H, the matrix formed by the hidden-layer outputs of the GRU layer; E_asp denotes the attribute word vector e_asp repeated until its dimension matches that of H; tanh denotes the tanh function; and W_h, W_v, and w are parameter matrices. Finally the vector o that represents the sentence information is obtained as

o = tanh(W_p r + W_x h)

where h denotes the sum of the vectors h_{r-1} and h_{l+1}, h_{r-1} denotes the hidden-layer output corresponding to the (r-1)-th word in the left GRU network, h_{l+1} denotes the hidden-layer output corresponding to the (l+1)-th word in the right GRU network, and W_p and W_x denote parameter matrices. The output layer feeds the vector o representing the sentence information into the softmax function to obtain the predicted sentiment polarity ŷ, specifically

ŷ = softmax(W_o o + b_o)

where W_o and b_o are both parameter matrices.
Step S3: cross entropy is adopted as the loss function when training the model, with ŷ denoting the prediction result. Training minimizes the cross-entropy loss between the true polarity y of all sentences and the prediction ŷ:

loss = -Σ_i Σ_j y_i^j log(ŷ_i^j) + λ‖θ‖²

where j indexes the sentiment polarity categories (positive, negative, and neutral here), i is the index of the sentence, λ is the second-order-norm (L2) regularization coefficient, and θ are the parameters to be learned. The dropout probability is set to 0.5 to prevent overfitting. Each word of the sentence is initialized with a 200-dimensional word vector, the hidden-layer dimension is 100, and the other parameter matrices are initialized with uniformly distributed samples. The model is trained in batches of 20 sentences. The L2 regularization coefficient λ is 0.001, the optimization algorithm is AdaGrad, and its initialization coefficient is 0.01.
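For reference, the hyperparameters stated in this step, gathered into one illustrative configuration dictionary (the key names are assumptions, not identifiers from the patent):

```python
config = dict(
    emb_dim=200,       # word-vector dimension used to initialize each word
    hidden_dim=100,    # hidden-layer dimension
    dropout=0.5,       # dropout probability, to prevent overfitting
    batch_size=20,     # sentences per training batch
    l2_lambda=0.001,   # L2 regularization coefficient
    optimizer="AdaGrad",
    lr=0.01,           # AdaGrad initialization coefficient
)
```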
Step S4: in the experiments, comparison is made with traditional machine-learning methods (the support vector machine algorithm and the SVM-dep algorithm) and with deep-learning methods (AdaRNN-w/E, AdaRNN-comb, and TC-LSTM); each model is evaluated by Accuracy, and the results are shown in the table below:

TABLE 1 Experimental results (the table is present only as images in the source document and is not reproduced here)
The experimental data show that the method of modeling the sentence with left and right networks and introducing an attention mechanism based on the attribute words has certain advantages in accuracy over the other models.

Claims (2)

1. A GRU-network-based attribute-level sentiment analysis model, characterized in that: the model comprises five parts, namely an input layer, an embedding layer, a GRU layer, an attention layer, and an output layer; the input layer feeds short texts, namely sentences, into the model; the embedding layer maps each word of the sentence to a vector; the GRU layer extracts feature information from the word embeddings; the attention layer fuses the word-level feature information into sentence-level feature information through weight computation to produce a sentence feature vector; finally, the sentence feature vector is classified;
1.1 input layer
each sentence whose sentiment polarity is to be classified is input at the input layer; assuming the sentence length is T, the sentence is expressed as s = {x_1, x_2, ..., x_T}, where x_i denotes the i-th word of the sentence;
1.2 embedding layer
the embedding layer receives a sentence s = {x_1, x_2, ..., x_T} containing T words from the input layer and obtains the corresponding word vector e_i of each word;
the word vector of each word is first obtained from the word-embedding matrix W^{wrd} ∈ R^{d_w × |V|}, where |V| is the size of the vocabulary and d_w is the word-vector dimension, which can be specified; then

emb_i = W^{wrd} v_i    (1)

where v_i is a one-hot vector of length |V| whose i-th element is 1 and all other elements are 0; likewise, the word vector emb_asp of the aspect is obtained, and when the aspect consists of several words, the values in each dimension of the constituent word vectors are added to obtain the aspect word vector; then emb_i and emb_asp are concatenated to obtain the final word vector e_i:

e_i = [emb_i ; emb_asp]    (2)

finally, e = {e_1, e_2, ..., e_T} is input to the next layer;
1.3 GRU layer
in the GRU layer, the sentence is divided into a left part and a right part with the attribute as the boundary point so as to model the context of the attribute, where {x_{l+1}, x_{l+2}, ..., x_{r-1}} denotes the aspect, {x_1, x_2, ..., x_l} denotes the words before the attribute, and {x_r, x_{r+1}, ..., x_T} denotes the words after it; the left and right sequences are input into the left and right networks, whose hidden layers produce {h_1, h_2, ..., h_{r-1}} and {h_{l+1}, h_{l+2}, ..., h_T}, respectively;
1.4 attention layer
an attention mechanism is introduced into the model to obtain a better classification effect, because the words in the parts before and after the attribute relate to the attribute in different degrees, and more attention is paid to information closely related to the attribute; the attention mechanism is implemented as follows:
M = tanh([W_h H ; W_v E_asp])    (3)
a_t = softmax(w^T M)    (4)
r = H a_t    (5)

where a_t denotes the attention weight coefficients, E_asp denotes e_asp repeated until its dimension matches that of H, H is the matrix formed by the hidden-layer outputs of the model, r denotes the weighted vector representing the meaning of the sentence, and W_h, W_v, and w are parameter matrices; a vector o that finally represents the sentence information is then obtained as

o = tanh(W_p r + W_x h)    (6)

where h denotes the sum of the vectors h_{r-1} and h_{l+1};
1.5 output layer
finally, the output o of the attention layer is fed into the classifier

ŷ = softmax(W_o o + b_o)    (7)

to realize the sentiment polarity classification, where W_o and b_o are the parameter matrices to be trained.
2. A GRU-based attribute-level sentiment analysis method, characterized by comprising the following specific steps:
step S1: the collected Twitter data set is first input to the input layer of the Att-CGRU model;
step S2: the data obtained in step S1 are input to the embedding layer to obtain the word vector of each word of the input sentence;
step S3: in the GRU layer, the word vector of each word of the sentence is obtained as in S2; then, with the attribute words {x_{l+1}, x_{l+2}, ..., x_{r-1}} as the boundary point, the word vectors of the words on the left, {x_1, x_2, ..., x_l}, and on the right, {x_r, x_{r+1}, ..., x_T}, are input into the left and right GRU networks to model the context on each side of the attribute words, and the hidden layers produce the outputs {h_1, h_2, ..., h_{r-1}} and {h_{l+1}, h_{l+2}, ..., h_T}, respectively;
step S4: from the output of S3, a vector o that represents the sentence information is computed according to the following formulas:
M = tanh([W_h H ; W_v E_asp])
a_t = softmax(w^T M)
r = H a_t

where r denotes the weighted vector characterizing the meaning of the sentence; a_t denotes the attention weight coefficients, obtained by feeding w^T M into the softmax function; M denotes the matrix obtained from H, the matrix formed by the hidden-layer outputs of the GRU layer; E_asp denotes the attribute word vector e_asp repeated until its dimension matches that of H; tanh denotes the tanh function; and W_h, W_v, and w are parameter matrices; finally the vector o that represents the sentence information is obtained as

o = tanh(W_p r + W_x h)

where h denotes the sum of the vectors h_{r-1} and h_{l+1}, h_{r-1} denotes the hidden-layer output corresponding to the (r-1)-th word in the left GRU network, h_{l+1} denotes the hidden-layer output corresponding to the (l+1)-th word in the right GRU network, and W_p and W_x denote parameter matrices;
step S5: the output layer feeds the vector o representing the sentence information into the softmax function to obtain the predicted sentiment polarity ŷ, specifically

ŷ = softmax(W_o o + b_o)

where W_o and b_o are both parameter matrices;
step S6: from the output of S5 and the actual class y of each sentence, the loss function value is computed as

loss = -Σ_i Σ_j y_i^j log(ŷ_i^j) + λ‖θ‖²

where λ is the regularization coefficient; training iterates through the error back-propagation algorithm until the Accuracy reaches its maximum, and the optimization algorithm within back-propagation is AdaGrad with an initialization coefficient of 0.01.
CN201910459539.6A 2019-05-29 GRU-based attribute level emotion analysis method (Pending)

Priority Applications (1)

CN201910459539.6A, priority date 2019-05-29, filing date 2019-05-29: GRU-based attribute level emotion analysis method

Publications (1)

CN111353040A, published 2020-06-30

Family

ID=71196950

Country Status (1)

CN (1): CN111353040A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813895A (en) * 2020-08-07 2020-10-23 深圳职业技术学院 Attribute level emotion analysis method based on level attention mechanism and door mechanism
CN112131886A (en) * 2020-08-05 2020-12-25 浙江工业大学 Method for analyzing aspect level emotion of text
CN114492521A (en) * 2022-01-21 2022-05-13 成都理工大学 Intelligent lithology while drilling identification method and system based on acoustic vibration signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595601A (en) * 2018-04-20 2018-09-28 福州大学 A kind of long text sentiment analysis method incorporating Attention mechanism
CN108984724A * 2018-07-10 2018-12-11 凯尔博特信息科技(昆山)有限公司 Method for improving sentiment classification accuracy for a specific attribute using high-dimensional representations
US20190005027A1 (en) * 2017-06-29 2019-01-03 Robert Bosch Gmbh System and Method For Domain-Independent Aspect Level Sentiment Detection
CN109145304A (en) * 2018-09-07 2019-01-04 中山大学 A kind of Chinese Opinion element sentiment analysis method based on word

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190005027A1 (en) * 2017-06-29 2019-01-03 Robert Bosch Gmbh System and Method For Domain-Independent Aspect Level Sentiment Detection
CN108595601A (en) * 2018-04-20 2018-09-28 福州大学 A kind of long text sentiment analysis method incorporating Attention mechanism
CN108984724A * 2018-07-10 2018-12-11 凯尔博特信息科技(昆山)有限公司 Method for improving sentiment classification accuracy for a specific attribute using high-dimensional representations
CN109145304A (en) * 2018-09-07 2019-01-04 中山大学 A kind of Chinese Opinion element sentiment analysis method based on word

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MEISHAN ZHANG: "Gated Neural Networks for Targeted Sentiment Analysis", Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence *
YEQUAN WANG: "Attention-based LSTM for Aspect-level Sentiment Classification", Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing *
ZHAI PENGHUA: "Bidirectional-GRU Based on Attention Mechanism for Aspect-level Sentiment Analysis", Proceedings of the 2019 11th International Conference on Machine Learning and Computing *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131886A (en) * 2020-08-05 2020-12-25 浙江工业大学 Method for analyzing aspect level emotion of text
CN111813895A (en) * 2020-08-07 2020-10-23 深圳职业技术学院 Attribute level emotion analysis method based on level attention mechanism and door mechanism
CN111813895B (en) * 2020-08-07 2022-06-03 深圳职业技术学院 Attribute level emotion analysis method based on level attention mechanism and door mechanism
CN114492521A (en) * 2022-01-21 2022-05-13 成都理工大学 Intelligent lithology while drilling identification method and system based on acoustic vibration signals


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200630