CN111325027A - Sparse data-oriented personalized emotion analysis method and device - Google Patents
Sparse data-oriented personalized emotion analysis method and device Download PDFInfo
- Publication number
- CN111325027A CN111325027A CN202010102417.4A CN202010102417A CN111325027A CN 111325027 A CN111325027 A CN 111325027A CN 202010102417 A CN202010102417 A CN 202010102417A CN 111325027 A CN111325027 A CN 111325027A
- Authority
- CN
- China
- Prior art keywords
- document
- emotion
- sentence
- user
- representation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a sparse data-oriented personalized emotion analysis method and device, which group users with similar scoring habits and use the grouping information to enhance the user representation, thereby realizing personalized emotion analysis. The method comprises the following steps: preprocessing a document; calculating an emotion scoring basis using a deep neural network-based basic emotion analysis model; calculating an emotion scoring offset and fluctuation using a group-based personalized analysis model; and combining the emotion scoring basis and the emotion scoring offset to calculate the final emotion score. Compared with prior personalized emotion analysis methods, the method can learn a good user representation even when a user's text data is sparse, can effectively model users in personalized emotion analysis, and performs personalized emotion analysis more accurately.
Description
Technical Field
The invention relates to emotion analysis of text using user text data under sparse-data conditions, and belongs to the technical field of machine learning.
Background
User-generated-text sentiment analysis aims to calculate a corresponding sentiment score (e.g., a satisfaction score) from a text composed by a user (e.g., a tweet or a shopping review). Traditional sentiment analysis methods consider the mapping between text and sentiment score to be the same for all users, i.e., they do not distinguish individual differences between users. However, this assumption does not accord with reality: users differ in educational background, social experience and the like, so their modes of emotional expression differ, and personalized sentiment analysis is therefore very necessary. Existing personalized sentiment analysis methods generally represent each user with a fixed-dimension user vector that is randomly initialized and then learned by the network; this style of user representation depends strongly on the data and the network. According to network statistics, most Twitter users rarely post, and nearly 80% of tweets are sent by 10% of the active users. This means that in real life user data is often sparse, so solving personalized sentiment analysis under data sparsity has very important social significance.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, the invention provides a sparse data-oriented personalized emotion analysis method and device, which can solve the problem of data sparsity in current personalized emotion analysis.
The technical scheme is as follows: in order to achieve the above purpose, the sparse data-oriented personalized emotion analysis method provided by the invention comprises the following steps:
(1) preprocessing a document;
(2) using a basic emotion analysis model based on a deep neural network, taking words of a document as input, respectively calculating semantic representation of each sentence in the document and semantic representation of the document through sentence-level semantic representation learning and document-level semantic representation learning, and taking a numerical value obtained by mapping the semantic representation of the document as an emotion scoring basis;
(3) using a group-based personalized emotion analysis model, taking semantic representation, user vectors and global group vectors of a document obtained by a deep neural network-based basic emotion analysis model as input, respectively calculating and obtaining user representation of each sentence and user representation of the document in the document through sentence-level user representation learning and document-level user representation learning, cascading the user representation of the document and the semantic representation obtained by the deep neural network-based basic emotion analysis model as final representation of the document, and mapping the final representation of the document to two values to be used as emotion scoring offset and fluctuation respectively; the emotion scoring offset is used for final scoring calculation, and emotion scoring fluctuation is used for network optimization;
(4) adding the emotion scoring basis and the emotion scoring offset to obtain the final emotion score.
Further, the document preprocessing in the step (1) comprises: the method comprises the steps of segmenting words of a document, and filtering out stop words in the document and words which only appear once in a processed data set.
Further, the step (2) of calculating the emotion scoring basis by using the basic emotion analysis model based on the deep neural network comprises the following steps:
(2.1) for each word in the sentence, mapping the word to a pre-trained word vector, then encoding each word in the sentence with a bidirectional long short-term memory network (Bi-LSTM) to obtain the corresponding hidden state of each word; calculating a weight for each word using an attention mechanism; and finally weighting and summing the words to obtain the semantic representation of each sentence;
(2.2) regarding each sentence in the document, taking semantic representation of the sentence as input, and coding each sentence in the document by utilizing Bi-LSTM to obtain a corresponding hidden state of each sentence; calculating a weight for each sentence using an attention mechanism; finally, weighting and summing each sentence to obtain semantic representation of the document;
(2.3) mapping the semantic representation of the document level to a numerical value, namely emotion scoring basis, by using a multi-layer perceptron.
Further, the calculating of the emotion scoring offset and fluctuation by using the group-based personalized emotion analysis model in the step (3) comprises:
(3.1) calculating to obtain the user hidden state of each word on the basis of the hidden state of each word in the Bi-LSTM, the group global vector and the user vector corresponding to the document; calculating the weight of the user hidden state corresponding to each word by using an attention mechanism; finally, weighting and summing the hidden states of the users corresponding to each word to obtain the user representation of the sentence;
(3.2) calculating to obtain the user hidden state of each sentence on the basis of the hidden state of each sentence, the group global vector and the sentence user representation in the Bi-LSTM; calculating a weight of the hidden state of each sentence user using an attention mechanism; finally, weighting and summing the hidden states of the users of each sentence to obtain the user representation of the document;
(3.3) concatenating the semantic representation of the document and the user representation as a final representation of the document;
(3.4) mapping the final document representation to two numerical values, namely the emotion scoring offset and the emotion scoring fluctuation, by using two multi-layer perceptrons respectively.
Further, the user representation of the sentence is:

$$\beta_{ij}^{k}=\mathrm{softmax}_{k}\left(e_{k}^{\top}\tanh\left(W_{p}h_{ij}+b_{p}\right)\right),\qquad p_{ij}=\sum_{k=1}^{K}\beta_{ij}^{k}e_{k}$$

$$\hat{h}_{ij}^{u}=\tanh\left(W_{u}u+g\,W_{q}p_{ij}+b_{u}\right),\qquad \alpha_{ij}^{u}=\mathrm{softmax}_{j}\left(u_{v}^{\top}\tanh\left(W_{v}\hat{h}_{ij}^{u}+b_{v}\right)\right),\qquad v_{i}=\sum_{j}\alpha_{ij}^{u}\hat{h}_{ij}^{u}$$

where e_k is the global vector of the k-th group, h_ij is the hidden state corresponding to the word w_ij, u is the user vector corresponding to the document, W_p, W_u, W_q, W_v, b_p, b_u, b_v, u_v and g are model parameters, softmax(·) is the normalized logistic regression function, and tanh(·) is the hyperbolic tangent activation function.
Further, the user representation of the document is:

$$\gamma_{i}^{k}=\mathrm{softmax}_{k}\left(e_{k}^{\top}\tanh\left(W_{p}'h_{i}+b_{p}'\right)\right),\qquad q_{i}=\sum_{k=1}^{K}\gamma_{i}^{k}e_{k}$$

$$\hat{h}_{i}^{u}=\tanh\left(W_{u}'v_{i}+g'\,W_{q}'q_{i}+b_{u}'\right),\qquad \alpha_{i}^{u}=\mathrm{softmax}_{i}\left(u_{d}^{\top}\tanh\left(W_{d}\hat{h}_{i}^{u}+b_{d}\right)\right),\qquad r_{u}=\sum_{i}\alpha_{i}^{u}\hat{h}_{i}^{u}$$

where h_i is the hidden state corresponding to the sentence s_i, v_i is the user representation of the sentence, and W_p', W_u', W_q', W_d, b_p', b_u', b_d, u_d and g' are model parameters.
Further, using a joint loss to optimize the network comprises: using a mean square error loss for the deep neural network-based basic emotion analysis model; using a Gaussian penalty loss for the group-based personalized emotion analysis model, so that the emotion fluctuation is learned from the loss function and the influence of samples with excessive fluctuation on the network is reduced; adding a group vector-based penalty term so that the learned group vectors are discriminative; and adding an L2 regularization term on the network parameters to avoid overfitting.
Further, the loss function of the network is defined as:

$$L=\frac{1}{T}\sum_{t=1}^{T}\left(\hat{y}_{t}^{b}-y_{t}^{*}\right)^{2}+\frac{1}{T}\sum_{t=1}^{T}\left[\frac{\left(y_{t}-y_{t}^{*}\right)^{2}}{2\sigma_{t}^{2}}+\frac{1}{2}\log\sigma_{t}^{2}\right]+\left\|EE^{\top}-I\right\|_{F}+\lambda\|\Theta\|^{2}$$

where λ‖Θ‖² is the L2 regularization term on the network parameters, T is the number of samples, y_t^* is the true emotion score of the t-th document, ŷ_t^b is the output of the deep neural network-based basic emotion analysis model for the t-th document, y_t is the final emotion score output by the group-based personalized emotion analysis model for the t-th document, σ_t² is the emotion scoring fluctuation output by the group-based personalized emotion analysis model for the t-th document, I is the identity matrix, E = {e_1, …, e_K} is the matrix of group vectors, and ‖·‖_F is the Frobenius norm of a matrix.
Based on the same inventive concept, the individualized emotion analysis device facing to sparse data comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the computer program realizes the individualized emotion analysis method facing to sparse data when being loaded to the processor.
Advantageous effects: compared with existing personalized emotion analysis methods, the method can effectively solve the problem of sparse user data that commonly exists in real-life personalized emotion analysis, and by modeling a user's emotion score as a Gaussian distribution and considering the offset and fluctuation of the score, the personalized emotion analysis performance can be improved.
Drawings
FIG. 1 is a flow chart of a method in an embodiment of the invention.
Fig. 2 is a schematic diagram of calculation in user representation learning at a sentence level of a user Encoder (U-Encoder) in the embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following description in conjunction with the accompanying drawings and the specific embodiments, it is to be understood that these examples are given solely for the purpose of illustration and are not intended as a definition of the limits of the invention, since various equivalent modifications will occur to those skilled in the art upon reading the present invention and fall within the limits of the appended claims.
The problem can be described as follows: for a document and its user u, the personalized emotion analysis task is to predict the emotion score y corresponding to the document (e.g., a satisfaction score from 1 to 5, where 1 indicates dissatisfaction and 5 indicates satisfaction). Since users share a generalized mode of emotion expression, an emotion scoring basis y_b can be obtained; due to individual differences, each user also has an emotion scoring offset y_s and a fluctuation σ². The predicted emotion score can therefore be modeled as a value following the Gaussian distribution N(y_b + y_s, σ²), and the personalized emotion analysis problem can be converted into the problem of predicting y_b, y_s and σ².
The sparse data-oriented personalized emotion analysis method disclosed by the embodiment of the invention, as shown in FIG. 1, mainly comprises the following steps:
S1: document preprocessing: segment the document into words, and remove stop words and words appearing only once in the data set, obtaining a processed document d comprising M sentences, where each sentence c_i comprises N_i words.
S2: calculating to obtain an emotion scoring basis by using a basic emotion analysis model based on a deep neural network, and specifically comprising the following steps of:
(1) Sentence-level semantic representation learning. First, each word in a sentence is mapped to a pre-trained word vector, so that a sentence can be represented as [x_i1, x_i2, …, x_iNi]. Each word in the sentence is then sequence-encoded with Bi-LSTM:

$$h_{ij}=\text{Bi-LSTM}\left(x_{ij}\right),\quad j=1,\dots,N_{i}$$

Since each word contributes differently to the semantic representation of the sentence, an attention mechanism is used so that important words in the sentence receive higher weights:

$$\alpha_{ij}=\mathrm{softmax}_{j}\left(u_{w}^{\top}\tanh\left(W_{w}h_{ij}+b_{w}\right)\right)$$

where α_ij is the weight of the word w_ij, and u_w, W_w and b_w are model parameters. The semantic representation s_i of the sentence is then obtained as the weighted sum of the word hidden states:

$$s_{i}=\sum_{j=1}^{N_{i}}\alpha_{ij}h_{ij}$$
(2) Document-level semantic representation learning. Bi-LSTM encodes each sentence representation in the document into a corresponding hidden state:

$$h_{i}=\text{Bi-LSTM}\left(s_{i}\right),\quad i=1,\dots,M$$

Since different sentences contribute differently to the final semantic representation of the document, the weight of each sentence is further calculated with an attention mechanism:

$$\alpha_{i}=\mathrm{softmax}_{i}\left(u_{s}^{\top}\tanh\left(W_{s}h_{i}+b_{s}\right)\right)$$

where α_i is the weight of the sentence s_i, and u_s, W_s and b_s are model parameters. The document semantic representation r_b is obtained as the weighted sum of the sentence hidden states:

$$r_{b}=\sum_{i=1}^{M}\alpha_{i}h_{i}$$
(3) Emotion scoring basis calculation. A multi-layer perceptron maps the document-level semantic representation to a numerical value, namely the emotion scoring basis:

$$y_{b}=\mathrm{MLP}\left(r_{b}\right)$$
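Steps (1)-(3) of S2 follow a hierarchical Bi-LSTM-plus-attention pattern, sketched below in PyTorch. Module names, dimensions, and the random (rather than pre-trained) embedding are illustrative assumptions, not the patent's exact implementation:

```python
import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    """Attention over a sequence of hidden states: tanh projection,
    context scoring, softmax weights, weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.ctx = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                          # h: (batch, seq, dim)
        a = torch.softmax(self.ctx(torch.tanh(self.proj(h))), dim=1)
        return (a * h).sum(dim=1)                  # (batch, dim)

class BaseSentimentModel(nn.Module):
    """Hierarchical Bi-LSTM + attention producing the emotion scoring basis y_b."""
    def __init__(self, vocab, emb_dim=300, hid=150):
        super().__init__()
        # the patent uses pre-trained word vectors; random init here for brevity
        self.emb = nn.Embedding(vocab, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
        self.word_att = AttentivePool(2 * hid)
        self.sent_att = AttentivePool(2 * hid)
        self.mlp = nn.Sequential(nn.Linear(2 * hid, hid), nn.Tanh(), nn.Linear(hid, 1))

    def forward(self, docs):                       # docs: (batch, M sentences, N words)
        b, m, n = docs.shape
        h_w, _ = self.word_lstm(self.emb(docs.view(b * m, n)))
        s = self.word_att(h_w).view(b, m, -1)      # sentence representations s_i
        h_s, _ = self.sent_lstm(s)
        r_b = self.sent_att(h_s)                   # document representation r_b
        return self.mlp(r_b).squeeze(-1)           # emotion scoring basis y_b

docs = torch.randint(0, 100, (2, 3, 5))            # toy batch: 2 docs, 3 sents, 5 words
y_b = BaseSentimentModel(vocab=100)(docs)          # one scalar basis per document
```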
s3: and calculating the emotion scoring offset and fluctuation by using a group-based personalized emotion analysis model. Assume that there are a total of K groups, each group having a corresponding global vector ekThis vector is initialized randomly and learned by network adaptation. The method specifically comprises the following steps:
(1) Sentence-level user representation learning. On the basis of the hidden state h_ij of each word and the user vector u corresponding to the document, a user encoder (U-Encoder) is designed to enhance the user representation, as shown in FIG. 2. It comprises two parts: calculating the group representation p_ij corresponding to the word w_ij, and computing an enhanced user representation. First, to capture the relationship between words and groups, an attention mechanism computes the group representation p_ij corresponding to the word w_ij; the more similar a word is to a group's habits, the higher the weight of that group:

$$\beta_{ij}^{k}=\mathrm{softmax}_{k}\left(e_{k}^{\top}\tanh\left(W_{p}h_{ij}+b_{p}\right)\right),\qquad p_{ij}=\sum_{k=1}^{K}\beta_{ij}^{k}e_{k}$$

Then, since the user representation learned under sparse data may be unreliable, the group representation is used to enhance it:

$$\hat{h}_{ij}^{u}=\tanh\left(W_{u}u+g\,W_{q}p_{ij}+b_{u}\right)$$

where ĥ^u_ij is the enhanced user representation for each word, g controls the influence of p_ij, and W_u, W_q and b_u are model parameters. It should be noted that a conventional recurrent neural network usually calculates the hidden state of each word sequentially, whereas the U-Encoder is not a sequential structure, so the user representations can be computed in parallel, which effectively improves computational performance.

Since different words contribute differently to the user representation of the sentence, an attention mechanism is further used to calculate the weight of the user hidden state corresponding to each word:

$$\alpha_{ij}^{u}=\mathrm{softmax}_{j}\left(u_{v}^{\top}\tanh\left(W_{v}\hat{h}_{ij}^{u}+b_{v}\right)\right)$$

where α^u_ij is the weight of the user representation ĥ^u_ij, and u_v, W_v and b_v are model parameters. Finally, the user hidden states of the words are weighted and summed to obtain the user representation v_i of the sentence:

$$v_{i}=\sum_{j=1}^{N_{i}}\alpha_{ij}^{u}\hat{h}_{ij}^{u}$$
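The sentence-level U-Encoder described above can be sketched as follows. The exact parameterization of the group attention and of the gate g is not recoverable from the text, so the forms below (bilinear group scoring against a tanh projection, a scalar learnable gate) are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class UEncoder(nn.Module):
    """Sketch of the sentence-level user encoder: each word's hidden state
    attends over the K group vectors, the resulting group mixture p_ij
    enhances the (possibly unreliable) user vector u, and attention pooling
    over the enhanced states yields the sentence user representation v_i."""
    def __init__(self, dim, n_groups):
        super().__init__()
        self.groups = nn.Parameter(torch.randn(n_groups, dim))   # e_1 .. e_K
        self.w_h = nn.Linear(dim, dim)                           # assumed names
        self.w_u = nn.Linear(dim, dim)
        self.w_p = nn.Linear(dim, dim)
        self.gate = nn.Parameter(torch.tensor(0.5))              # g: weight of p_ij
        self.att = nn.Linear(dim, 1, bias=False)

    def forward(self, h, u):                 # h: (batch, N, dim), u: (batch, dim)
        # group attention: words closer to a group's habits weight it higher
        scores = torch.tanh(self.w_h(h)) @ self.groups.t()       # (batch, N, K)
        p = torch.softmax(scores, dim=-1) @ self.groups          # p_ij
        # group-enhanced user hidden state per word (parallel, non-recurrent)
        hu = torch.tanh(self.w_u(u).unsqueeze(1) + self.gate * self.w_p(p))
        a = torch.softmax(self.att(hu), dim=1)                   # word weights
        return (a * hu).sum(dim=1)                               # v_i: (batch, dim)

h = torch.randn(2, 7, 300)                   # hidden states of 7 words, batch of 2
u = torch.randn(2, 300)                      # user vectors
v = UEncoder(300, n_groups=6)(h, u)          # sentence-level user representations
```

Note that, as the description emphasizes, nothing here is recurrent: all word positions are processed at once, so the enhanced user states can be computed in parallel.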
(2) Document-level user representation learning. On the basis of the hidden state h_i of each sentence and the corresponding user representation v_i, the U-Encoder is applied again, comprising: calculating the group representation q_i corresponding to the sentence s_i, and computing an enhanced user representation. First, an attention mechanism computes the group representation q_i corresponding to the sentence s_i:

$$\gamma_{i}^{k}=\mathrm{softmax}_{k}\left(e_{k}^{\top}\tanh\left(W_{p}'h_{i}+b_{p}'\right)\right),\qquad q_{i}=\sum_{k=1}^{K}\gamma_{i}^{k}e_{k}$$

Then, since the user representation learned under sparse data may be unreliable, the group representation is used to enhance it:

$$\hat{h}_{i}^{u}=\tanh\left(W_{u}'v_{i}+g'\,W_{q}'q_{i}+b_{u}'\right)$$

where ĥ^u_i is the enhanced user representation for each sentence, g' controls the influence of q_i, and W_u', W_q' and b_u' are model parameters.

Since different sentences contribute differently to the user representation of the document, an attention mechanism is further used to calculate the weight of the user hidden state of each sentence:

$$\alpha_{i}^{u}=\mathrm{softmax}_{i}\left(u_{d}^{\top}\tanh\left(W_{d}\hat{h}_{i}^{u}+b_{d}\right)\right)$$

where α^u_i is the weight of the user representation ĥ^u_i, and u_d, W_d and b_d are model parameters. Finally, the user hidden states of the sentences are weighted and summed to obtain the user representation r_u of the document:

$$r_{u}=\sum_{i=1}^{M}\alpha_{i}^{u}\hat{h}_{i}^{u}$$
(3) Emotion scoring offset and fluctuation calculation. The user representation r_u of the document and the semantic representation r_b obtained by the deep neural network-based basic emotion analysis model are concatenated as the final document-level representation, which two multi-layer perceptrons respectively map to the emotion scoring offset and fluctuation:

$$y_{s}=\mathrm{MLP}\left(\left[r_{u},r_{b}\right]\right)$$

$$\sigma^{2}=\mathrm{MLP}\left(\left[r_{u},r_{b}\right]\right)$$
s4: base y combined with sentiment scoringbSentiment score offset ysAnd emotional score fluctuation sigma2The final prediction result can be obtainedDue to yb+ysIs an unbiased estimate of y, so will yb+ysAs the value of y, i.e. y ═ yb+ys. And the emotional expression fluctuation is further used as a constraint in the optimization of the model.
Since the present invention can be considered a multi-task architecture, a joint loss is used to optimize the network. For the traditional generalized emotion analysis model, the mean square error is used as the loss function:

$$L_{1}=\frac{1}{T}\sum_{t=1}^{T}\left(\hat{y}_{t}^{b}-y_{t}^{*}\right)^{2}$$

where T is the number of samples, y_t^* is the true emotion score of the t-th document, and ŷ_t^b is the output of the deep neural network-based basic emotion analysis model for the t-th document.
For the group-based personalized emotion analysis model, a Gaussian penalty function is used as the loss function, comprising two parts: (1) a residual regression term weighted by the emotion fluctuation, so that the fluctuation is learned from the loss function and samples with excessive fluctuation have less influence on the network; and (2) a regularization term on the emotion fluctuation, which prevents the predicted fluctuation from growing too large. It is defined as:

$$L_{2}=\frac{1}{T}\sum_{t=1}^{T}\left[\frac{\left(y_{t}-y_{t}^{*}\right)^{2}}{2\sigma_{t}^{2}}+\frac{1}{2}\log\sigma_{t}^{2}\right]$$

where y_t is the final emotion score output by the group-based personalized emotion analysis model for the t-th document, and σ_t² is the emotion scoring fluctuation it outputs.
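Under the assumption that this Gaussian penalty takes the standard heteroscedastic form (squared residual scaled by the predicted fluctuation, plus a log-variance regularizer), it can be sketched as:

```python
import torch

def gaussian_penalty_loss(y_pred, y_true, sigma2):
    """Residual term down-weights samples with large predicted fluctuation;
    the log-variance term keeps sigma^2 from growing without bound."""
    residual = (y_pred - y_true) ** 2 / (2 * sigma2)
    regularizer = 0.5 * torch.log(sigma2)
    return (residual + regularizer).mean()

y_pred = torch.tensor([4.2, 2.9])    # model scores for two documents
y_true = torch.tensor([4.0, 3.0])    # gold scores
sigma2 = torch.tensor([0.5, 1.0])    # predicted fluctuations
loss = gaussian_penalty_loss(y_pred, y_true, sigma2)
```

With σ² = 1 the residual term reduces to half the squared error; larger σ² shrinks a sample's residual contribution while paying a log-variance cost.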
In addition, in order to make the global vectors of different groups distinguishable from one another, a penalty term based on the group vectors is further introduced:

$$L_{3}=\left\|EE^{\top}-I\right\|_{F}$$

where I is the identity matrix, E = {e_1, …, e_K} is the matrix of group vectors, and ‖·‖_F is the Frobenius norm of a matrix.
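A sketch of this penalty, assuming the rows of E hold the K group vectors:

```python
import torch

def group_penalty(E):
    """||E E^T - I||_F pushes the group vectors toward orthonormality,
    keeping the K global group vectors mutually discriminative."""
    K = E.shape[0]
    gram = E @ E.t()                     # (K, K) pairwise dot products
    return torch.norm(gram - torch.eye(K), p='fro')

E = torch.eye(6, 300)                    # 6 already-orthonormal group vectors
zero_pen = group_penalty(E)              # orthonormal rows incur no penalty
```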
The overall loss function of the network is defined as:

$$L=L_{1}+L_{2}+L_{3}+\lambda\|\Theta\|^{2}$$

where λ‖Θ‖² is the L2 regularization term on the network parameters.
In the experiments, the parameters were set as follows: pre-trained GloVe vectors were used as word embeddings, with the dimensions of the word embeddings, user vectors and hidden states all set to 300; the dropout rate was 0.5, the learning rate 0.001, and the L2 regularization weight 0.01. The network was optimized with Adam, and the number of groups was set to 6. On the user-sparse subset of the Yelp 2013 data set, the method achieves an RMSE of 0.7166 and an MAE of 0.5501, outperforming existing personalized emotion analysis methods.
Compared with existing methods, the sparse data-oriented personalized emotion analysis method addresses the problem of sparse user data, which is ubiquitous in real life, by grouping users with similar emotion expression modes and exploiting the group information to further enhance the user representation. Meanwhile, whereas prior methods generally treat the emotion score of a user text as a single numerical value, the method models a user's emotion score as a Gaussian distribution and considers both the offset and the fluctuation of the score, which is beneficial to improving personalized emotion analysis performance.
Based on the same inventive concept, the individualized emotion analysis device for sparse data disclosed by the embodiment of the invention comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the computer program realizes the individualized emotion analysis method for sparse data when being loaded to the processor.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (9)
1. A sparse data-oriented personalized emotion analysis method is characterized by comprising the following steps:
(1) preprocessing a document;
(2) using a basic emotion analysis model based on a deep neural network, taking words of a document as input, respectively calculating semantic representation of each sentence in the document and semantic representation of the document through sentence-level semantic representation learning and document-level semantic representation learning, and taking a numerical value obtained by mapping the semantic representation of the document as an emotion scoring basis;
(3) using a group-based personalized emotion analysis model, taking semantic representation, user vectors and global group vectors of a document obtained by a deep neural network-based basic emotion analysis model as input, respectively calculating and obtaining user representation of each sentence and user representation of the document in the document through sentence-level user representation learning and document-level user representation learning, cascading the user representation of the document and the semantic representation obtained by the deep neural network-based basic emotion analysis model as final representation of the document, and mapping the final representation of the document to two values to be used as emotion scoring offset and fluctuation respectively; the emotion scoring offset is used for final scoring calculation, and emotion scoring fluctuation is used for network optimization;
(4) adding the emotion scoring basis and the emotion scoring offset to obtain the final emotion score.
2. The sparse-data-oriented personalized emotion analysis method of claim 1, wherein the document preprocessing in step (1) comprises: the method comprises the steps of segmenting words of a document, and filtering out stop words in the document and words which only appear once in a processed data set.
3. The sparse data-oriented personalized emotion analysis method of claim 1, wherein the step (2) of calculating the emotion scoring basis by using the deep neural network-based basic emotion analysis model comprises:
(2.1) for each word in the sentence, mapping the word to a pre-trained word vector, then encoding each word in the sentence with a bidirectional long short-term memory network (Bi-LSTM) to obtain the corresponding hidden state of each word; calculating a weight for each word using an attention mechanism; and finally weighting and summing the words to obtain the semantic representation of each sentence;
(2.2) regarding each sentence in the document, taking semantic representation of the sentence as input, and coding each sentence in the document by utilizing Bi-LSTM to obtain a corresponding hidden state of each sentence; calculating a weight for each sentence using an attention mechanism; finally, weighting and summing each sentence to obtain semantic representation of the document;
(2.3) mapping the semantic representation of the document level to a numerical value, namely emotion scoring basis, by using a multi-layer perceptron.
4. The sparse data-oriented personalized emotion analysis method of claim 1, wherein the step (3) of calculating emotion scoring offsets and fluctuations by using the group-based personalized emotion analysis model comprises:
(3.1) calculating to obtain the user hidden state of each word on the basis of the hidden state of each word in the Bi-LSTM, the group global vector and the user vector corresponding to the document; calculating the weight of the user hidden state corresponding to each word by using an attention mechanism; finally, weighting and summing the hidden states of the users corresponding to each word to obtain the user representation of the sentence;
(3.2) calculating to obtain the user hidden state of each sentence on the basis of the hidden state of each sentence, the group global vector and the sentence user representation in the Bi-LSTM; calculating a weight of the hidden state of each sentence user using an attention mechanism; finally, weighting and summing the hidden states of the users of each sentence to obtain the user representation of the document;
(3.3) concatenating the semantic representation of the document and the user representation as a final representation of the document;
(3.4) mapping the final document representation to two numerical values, namely the emotion scoring offset and the emotion scoring fluctuation, by using two multi-layer perceptrons respectively.
5. The sparse data-oriented personalized emotion analysis method of claim 4, wherein the user representation of the sentence is:

$$\beta_{ij}^{k}=\mathrm{softmax}_{k}\left(e_{k}^{\top}\tanh\left(W_{p}h_{ij}+b_{p}\right)\right),\qquad p_{ij}=\sum_{k=1}^{K}\beta_{ij}^{k}e_{k}$$

$$\hat{h}_{ij}^{u}=\tanh\left(W_{u}u+g\,W_{q}p_{ij}+b_{u}\right),\qquad v_{i}=\sum_{j}\mathrm{softmax}_{j}\left(u_{v}^{\top}\tanh\left(W_{v}\hat{h}_{ij}^{u}+b_{v}\right)\right)\hat{h}_{ij}^{u}$$

where e_k is the global vector of the k-th group, h_ij is the hidden state corresponding to the word w_ij, u is the user vector corresponding to the document, W_p, W_u, W_q, W_v, b_p, b_u, b_v, u_v and g are model parameters, softmax(·) is the normalized logistic regression function, and tanh(·) is the hyperbolic tangent activation function.
7. The sparse data-oriented personalized emotion analysis method of claim 1, wherein a joint loss is used to optimize the network, comprising: using a mean square error loss for the deep neural network-based basic emotion analysis model; using a Gaussian penalty loss for the group-based personalized emotion analysis model, so that the emotion fluctuation is learned from the loss function and the influence of samples with excessive fluctuation on the network is reduced; adding a group vector-based penalty term so that the learned group vectors are discriminative; and adding an L2 regularization term on the network parameters to avoid overfitting.
8. The sparse data-oriented personalized emotion analysis method of claim 7, wherein the loss function of the network is defined as:

$$L=\frac{1}{T}\sum_{t=1}^{T}\left(\hat{y}_{t}^{b}-y_{t}^{*}\right)^{2}+\frac{1}{T}\sum_{t=1}^{T}\left[\frac{\left(y_{t}-y_{t}^{*}\right)^{2}}{2\sigma_{t}^{2}}+\frac{1}{2}\log\sigma_{t}^{2}\right]+\left\|EE^{\top}-I\right\|_{F}+\lambda\|\Theta\|^{2}$$

where λ‖Θ‖² is the L2 regularization term on the network parameters, T is the number of samples, y_t^* is the true emotion score of the t-th document, ŷ_t^b is the output of the deep neural network-based basic emotion analysis model for the t-th document, y_t is the final emotion score output by the group-based personalized emotion analysis model for the t-th document, σ_t² is the emotion scoring fluctuation output for the t-th document, I is the identity matrix, E = {e_1, …, e_K} is the matrix of group vectors, and ‖·‖_F is the Frobenius norm of a matrix.
9. A sparse data oriented personalized emotion analysis apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when loaded into the processor implements the sparse data oriented personalized emotion analysis method according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010102417.4A CN111325027B (en) | 2020-02-19 | 2020-02-19 | Sparse data-oriented personalized emotion analysis method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010102417.4A CN111325027B (en) | 2020-02-19 | 2020-02-19 | Sparse data-oriented personalized emotion analysis method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111325027A true CN111325027A (en) | 2020-06-23 |
CN111325027B CN111325027B (en) | 2023-04-28 |
Family
ID=71171083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010102417.4A Active CN111325027B (en) | 2020-02-19 | 2020-02-19 | Sparse data-oriented personalized emotion analysis method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111325027B (en) |
- 2020-02-19 CN CN202010102417.4A patent/CN111325027B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108363753A (en) * | 2018-01-30 | 2018-08-03 | 南京邮电大学 | Comment text sentiment classification model is trained and sensibility classification method, device and equipment |
CN108446275A (en) * | 2018-03-21 | 2018-08-24 | 北京理工大学 | Long text emotional orientation analytical method based on attention bilayer LSTM |
CN108804417A (en) * | 2018-05-21 | 2018-11-13 | 山东科技大学 | A kind of documentation level sentiment analysis method based on specific area emotion word |
CN108984724A (en) * | 2018-07-10 | 2018-12-11 | 凯尔博特信息科技(昆山)有限公司 | It indicates to improve particular community emotional semantic classification accuracy rate method using higher-dimension |
CN110334187A (en) * | 2019-07-09 | 2019-10-15 | 昆明理工大学 | Burmese sentiment analysis method and device based on transfer learning |
Non-Patent Citations (1)
Title |
---|
Hu Junyi et al.: "A sentiment classification method with hierarchical text representation based on sentiment scoring", Computer Engineering (《计算机工程》) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112990430A (en) * | 2021-02-08 | 2021-06-18 | 辽宁工业大学 | Group division method and system based on long-time and short-time memory network |
CN112990430B (en) * | 2021-02-08 | 2021-12-03 | 辽宁工业大学 | Group division method and system based on long-time and short-time memory network |
Also Published As
Publication number | Publication date |
---|---|
CN111325027B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jin et al. | Multi-task learning model based on multi-scale CNN and LSTM for sentiment classification | |
CN108536679B (en) | Named entity recognition method, device, equipment and computer readable storage medium | |
Qian et al. | Hierarchical CVAE for fine-grained hate speech classification | |
CN110796160B (en) | Text classification method, device and storage medium | |
KR20180125905A (en) | Method and apparatus for classifying a class to which a sentence belongs by using deep neural network | |
CN111061843A (en) | Knowledge graph guided false news detection method | |
CN109189933A (en) | A kind of method and server of text information classification | |
CN112749274B (en) | Chinese text classification method based on attention mechanism and interference word deletion | |
CN112883714B (en) | ABSC task syntactic constraint method based on dependency graph convolution and transfer learning | |
CN112527966A (en) | Network text emotion analysis method based on Bi-GRU neural network and self-attention mechanism | |
CN109214006A (en) | The natural language inference method that the hierarchical semantic of image enhancement indicates | |
CN111538841B (en) | Comment emotion analysis method, device and system based on knowledge mutual distillation | |
Zhang et al. | A BERT fine-tuning model for targeted sentiment analysis of Chinese online course reviews | |
CN114528490B (en) | Self-supervision sequence recommendation method based on long-term and short-term interests of user | |
CN113704460A (en) | Text classification method and device, electronic equipment and storage medium | |
CN111259147B (en) | Sentence-level emotion prediction method and system based on self-adaptive attention mechanism | |
Luo et al. | EmotionX-DLC: self-attentive BiLSTM for detecting sequential emotions in dialogue | |
CN111046157A (en) | Universal English man-machine conversation generation method and system based on balanced distribution | |
CN111325027B (en) | Sparse data-oriented personalized emotion analysis method and device | |
CN113761910A (en) | Comment text fine-grained emotion analysis method integrating emotional characteristics | |
CN116644759A (en) | Method and system for extracting aspect category and semantic polarity in sentence | |
CN111368524A (en) | Microblog viewpoint sentence recognition method based on self-attention bidirectional GRU and SVM | |
CN113779244B (en) | Document emotion classification method and device, storage medium and electronic equipment | |
CN115659990A (en) | Tobacco emotion analysis method, device and medium | |
CN115309894A (en) | Text emotion classification method and device based on confrontation training and TF-IDF |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |