CN113191135A - Multi-category emotion extraction method fusing facial characters - Google Patents

Multi-category emotion extraction method fusing facial characters

Info

Publication number
CN113191135A (application CN202110412378.2A)
Authority
CN
China
Prior art keywords: emotion, text, word, document, probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110412378.2A
Other languages
Chinese (zh)
Inventor
骆曦
刘晓晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Publication of CN113191135A publication Critical patent/CN113191135A/en
Pending legal-status Critical Current

Classifications

    • G06F40/242 Handling natural language data > natural language analysis > lexical tools > dictionaries
    • G06F18/2415 Pattern recognition > classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
    • G06F40/284 Natural language analysis > recognition of textual entities > lexical analysis, e.g. tokenisation or collocates
    • G06F40/44 Processing or translation of natural language > data-driven translation > statistical methods, e.g. probability models
    • G06N3/044 Neural networks > architecture > recurrent networks, e.g. Hopfield networks
    • G06N3/08 Neural networks > learning methods


Abstract

The invention provides a multi-category emotion extraction method fusing facial characters (kaomoji emoticons), comprising the following steps after preprocessing a text set: training the preprocessed text set with the Skip-Gram model in Word2Vec, embedding the context relationships of words into a low-dimensional space to obtain the word vector of each word; constructing a facial character emotion dictionary; calculating the facial-character emotion probability of the document; calculating the text emotion probability; and calculating the integrated emotion probability of the document. The method extracts the multi-category emotion probabilities of facial characters through similarity calculation and generates a facial character emotion dictionary; by computing the document's facial-character emotion probability, it fuses facial-character emotion information with the text, which helps improve the comprehensiveness and accuracy of user emotion extraction and thus the accuracy of decision-making. Meanwhile, the efficiency and strong feature-learning capability of convolutional and recurrent neural networks provide a reliable basis for emotion extraction and reduce the dependence on manually constructed emotion dictionaries and rules.

Description

Multi-category emotion extraction method fusing facial characters
Technical Field
The invention relates to the technical field of natural language processing technology and emotion analysis, in particular to a multi-class emotion extraction method fusing facial characters.
Background
With the development of information and network technology, social media such as forums, microblogs, and online comments have become the main platforms for modern people to communicate and transfer information, and a large amount of information rich in subjective emotion emerges every day. By analyzing the information users publish, the emotional information implicit in it can be identified, the evolution of user emotion can be discovered, and valuable predictions can be made, which is of great value in Internet information mining. Emotion analysis uses methods such as natural language processing, text analysis, and computational linguistics to analyze people's viewpoints, sentiments, evaluations, and attitudes, with the main aim of predicting valuable information from the mining results and presenting the predictions intuitively. In recent years, emotion analysis has been widely applied in marketing, public opinion monitoring, policy analysis, and public relations management, with high economic and social value.
The existing emotion analysis technology has two main means:
(1) Emotion dictionary-based methods: emotion words play an important role in expressing the emotional tendency of a text, and dictionary-based methods mainly use information about emotion words to judge that tendency. An emotion dictionary is compiled; using rules such as sentence grammar and word positions, sentences are split, the text is analyzed and matched against the dictionary, emotion words are weighted, and the resulting emotion value is used as the basis for judging the emotional tendency of the text. Emotion dictionaries achieve high precision but low recall; constructing and refining the rules and dictionaries requires substantial manual effort, their quality determines the quality of the emotion analysis, and the difficulty and cost of accurate construction differ across domains. Furthermore, this approach does not consider the effect of word context on emotion changes.
(2) Machine learning-based methods: emotion analysis is treated as a supervised classification problem; a model is trained on labeled text and then used to predict the emotion polarity of unlabeled text. This approach is now mature. A convolutional neural network (CNN) performs convolution with multiple kernels and extracts local text features well from different angles, but cannot handle the long-range context dependence of long texts. A long short-term memory network (LSTM) is a kind of recurrent neural network with a three-gate design; it can capture a user's changing emotion through its ability to model text sequences, but is weak at recognizing local features.
A facial character (kaomoji) is a text-based emoticon: specific characters from the computer character code table are arranged, using their displayed appearance, into a pattern that depicts a facial expression. In social media, more and more people frequently use facial characters to express rich inner emotions; they enrich the imaginative space of online communication, are deeply loved by young users, and have developed into a network culture symbol with worldwide influence. Facial characters change the semantics and contextual emotion of a message, so traditional emotion analysis based on text alone can no longer meet the demand; combining facial characters can provide more, and more accurate, information for user emotion decisions and further improve decision accuracy.
The invention patent application No. 201910976409.X discloses a multi-class emotion classification method based on model fusion. Its disadvantages are that it requires a large data set for tuning and pre-training, does not consider the influence of facial characters in the text, is poor at capturing sentence-order information, and cannot obtain more complex semantic features.
Disclosure of Invention
To solve the above technical problems, the multi-class emotion extraction method fusing facial characters provided by the invention extracts the multi-category emotion probabilities of facial characters through similarity calculation and generates a facial character emotion dictionary. By computing the document's facial-character emotion probability, it fuses facial-character emotion information with the text, helping to improve the comprehensiveness and accuracy of user emotion extraction and thus the accuracy of decision-making. Meanwhile, the method uses the efficiency and strong feature-learning capability of convolutional and recurrent neural networks to provide a reliable basis for emotion extraction, reducing the dependence on manually constructed emotion dictionaries and rules.
The aim of the invention is to provide a multi-class emotion extraction method fusing facial characters, comprising the following steps after preprocessing a text set:
step 1: putting the preprocessed text set into a Skip-Gram model in Word2Vec for training, and embedding the context relationship of words into a low-dimensional space to obtain Word vectors corresponding to all the words;
step 2: constructing a facial character emotion dictionary;
step 3: calculating the emotion probability of the facial characters in the document;
step 4: calculating the text emotion probability;
step 5: calculating the integrated emotion probability of the document.
Preferably, the pre-treatment step comprises the sub-steps of:
step 01: extracting the facial characters from the text set with a regular expression to generate a facial character dictionary;
step 02: adding the facial character dictionary to the user-defined dictionary of a Chinese word segmentation tool, performing word segmentation on all texts in the text set, and removing stop words.
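The extraction in step 01 can be sketched as follows; the patent does not give the actual regular expression, so `KAOMOJI_PATTERN` below is a hypothetical, simplified pattern that only matches parenthesized kaomoji such as (^_^):

```python
import re

# Hypothetical simplified pattern for kaomoji-style facial characters, e.g. (^_^) or (T_T).
# The patent does not specify the actual regular expression; this is an illustration only.
KAOMOJI_PATTERN = re.compile(r"\([^()\u4e00-\u9fff]{2,10}\)")

def build_facial_char_dictionary(texts):
    """Extract facial characters from a text set and return a deduplicated dictionary (step 01)."""
    dictionary = []
    for text in texts:
        for match in KAOMOJI_PATTERN.findall(text):
            if match not in dictionary:
                dictionary.append(match)
    return dictionary
```

The resulting list would then be registered as user-defined words in the segmentation tool so that kaomoji survive tokenisation intact.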
In any of the above schemes, preferably, the step 2 includes the following sub-steps:
step 21: dividing the emotions into four pairs of opposites according to the Plutchik emotion wheel, and obtaining from the trained Skip-Gram model the eight emotion words and the word vector corresponding to each facial character in the facial character dictionary;
step 22: respectively calculating the similarity between each facial character vector and the eight emotion word vectors, i.e., the cosine similarities sim_1, sim_2, ..., sim_8; the cosine similarity of two word vectors X and Y is calculated as:

sim(X, Y) = \frac{\sum_{i=1}^{D} x_i y_i}{\sqrt{\sum_{i=1}^{D} x_i^2}\,\sqrt{\sum_{i=1}^{D} y_i^2}}

where X = (x_1, x_2, ..., x_D) and Y = (y_1, y_2, ..., y_D) are the word vector representations of words X and Y, D is the dimension of the word vectors, and i indexes the i-th component;
step 23: normalizing the cosine similarities sim_1, sim_2, ..., sim_8; the classification probability P(w)_i of the i-th emotion for facial character w is calculated as:

P(w)_i = \frac{\mathrm{sim}_i}{\sum_{j=1}^{8} \mathrm{sim}_j}

where sim_i is the cosine similarity between the facial character and the i-th emotion word; finally, P(w)_1 + P(w)_2 + ... + P(w)_8 = 1;
Step 24: calculating the emotion probability of all the face characters and generating a face character emotion dictionary.
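Steps 22 and 23 can be sketched as below: cosine similarity against each emotion word vector followed by normalization into a probability distribution. This is a minimal illustration with toy vectors; it assumes the similarities are non-negative so that the normalized values form a valid distribution.

```python
import math

def cosine_similarity(x, y):
    """Cosine similarity between two D-dimensional word vectors (step 22)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm

def emotion_probabilities(face_vec, emotion_vecs):
    """Step 23: normalize the similarities to the emotion word vectors into P(w)_i."""
    sims = [cosine_similarity(face_vec, e) for e in emotion_vecs]
    total = sum(sims)
    return [s / total for s in sims]
```

Applying `emotion_probabilities` to every facial character vector yields the rows of the facial character emotion dictionary (step 24).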
Preferably, in any of the above schemes, the emotions include eight categories: joy, sadness, trust, disgust, surprise, anger, anticipation, and fear, where joy, trust, and anticipation are positive emotions, sadness, disgust, anger, and fear are negative emotions, and surprise is a neutral emotion.
In any of the above schemes, preferably, step 3 includes, for the set of all facial characters {w_1, w_2, ..., w_m} in a document, querying the facial character emotion dictionary and averaging each category's emotion probability to obtain the document's facial-character emotion probability:

S_i = \frac{1}{m}\sum_{j=1}^{m} P(w_j)_i

where S_i is the i-th facial-character emotion value of the document, m is the number of facial characters the document contains, and j indexes the j-th facial character in the document.
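The averaging above can be sketched directly; the dictionary lookup is a plain mapping from facial character to its probability vector, and a document with no facial characters is handled here by returning zeros (an assumption, since the patent does not specify that case):

```python
def document_face_emotion(face_chars, emotion_dict, n_emotions=8):
    """S_i: average the i-th emotion probability over the m facial characters in the document."""
    m = len(face_chars)
    if m == 0:
        # Assumed fallback for documents without facial characters (not specified in the patent).
        return [0.0] * n_emotions
    return [sum(emotion_dict[w][i] for w in face_chars) / m for i in range(n_emotions)]
```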
In any of the above schemes, preferably, the step 4 includes the following sub-steps:
step 41: carrying out word vector representation;
step 42: inputting a bidirectional LSTM network;
step 43: inputting a text convolution neural network;
step 44: using maximum pooling for downsampling to obtain the sequence feature z = {z_1, z_2, ..., z_q};
Step 45: and inputting the softmax layer.
In any of the above schemes, preferably, step 41 includes representing the text with the word vectors output by Skip-Gram, obtaining the word vector sequence t = [t_1, t_2, ..., t_n] of the text, where t_i represents the i-th word in the text and n is the maximum number of words that can be input.
In any of the above schemes, preferably, step 42 includes inputting the word vector sequence t = [t_1, t_2, ..., t_n] into forward and backward long short-term memory networks respectively, fully fusing the context information of the text to obtain the forward feature sequence t_f = [t_f1, t_f2, ..., t_fn] and the backward feature sequence t_b = [t_b1, t_b2, ..., t_bn]; the two vectors of each word in t_f and t_b are spliced to obtain the spliced word vector sequence t_fb = [t_fb1, t_fb2, ..., t_fbn], where t_fbi = [t_fi; t_bi] has dimension 2D.
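The splicing at the end of step 42 is simple concatenation per word; a minimal sketch (the BiLSTM itself is not reimplemented here, only the joining of its two D-dimensional output sequences into 2D vectors):

```python
def splice_bidirectional(t_f, t_b):
    """t_fbi = [t_fi; t_bi]: concatenate forward and backward features of the same word.

    Each row of t_f and t_b is a D-dimensional hidden state; the result has dimension 2D.
    """
    return [f + b for f, b in zip(t_f, t_b)]  # list concatenation per word
```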
In any of the above solutions, preferably, step 43 includes using a text convolution model to perform the convolution operation on the matrix t_fb: a convolution kernel w ∈ R^{h×2D} of height h slides over the matrix t_fb as a one-dimensional convolution with stride 1, and the sequence feature c_i is obtained by convolving the kernel w with the matrix region x_{i:i+h-1}:

c_i = f(w · x_{i:i+h-1} + b)

where f is a nonlinear activation function such as tanh, and b ∈ R is the bias term. Convolving the kernel w with each region {x_{1:h}, x_{2:h+1}, x_{3:h+2}, ..., x_{n-h+1:n}} of t_fb yields a single-column sequence feature C = [c_1, c_2, ..., c_{n-h+1}]. The kernel height h may be set to 1, 2, 3, ...; with q convolution kernels, q sequence features are obtained.
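The one-dimensional convolution c_i = f(w · x_{i:i+h-1} + b) can be sketched as below, using tanh as the activation per the text; the toy kernel and inputs are illustrative only:

```python
import math

def conv_features(t_fb, kernel, bias):
    """Slide a height-h kernel over t_fb with stride 1: c_i = tanh(w . x_{i:i+h-1} + b)."""
    h = len(kernel)
    n = len(t_fb)
    features = []
    for i in range(n - h + 1):
        region = t_fb[i:i + h]
        # Element-wise product of the kernel with the h x 2D region, summed.
        s = sum(w_val * x_val
                for w_row, x_row in zip(kernel, region)
                for w_val, x_val in zip(w_row, x_row))
        features.append(math.tanh(s + bias))
    return features  # the single-column feature C of length n - h + 1
```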
In any of the above embodiments, preferably, step 45 includes inputting z and outputting the T×1 vector P = {P_1, P_2, ..., P_i, ..., P_T} = softmax[w · (z ∘ r) + b], where w is the weight matrix, r introduces the Dropout operation, b is the bias vector, T is the number of categories, and P_i denotes the probability that the current text belongs to the i-th emotion category.
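The softmax that turns the T scores into the probability vector P can be sketched as below; the affine part w · (z ∘ r) + b and the Dropout mask r are omitted, as this only illustrates the final normalization:

```python
import math

def softmax(scores):
    """Convert T real-valued scores into the emotion probability vector P."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract the max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]
```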
In any of the above schemes, preferably, the document's integrated emotion probability is G = {G_1, G_2, ..., G_i, ..., G_T}, where G_i, the probability that the document belongs to the i-th emotion category, is calculated from the facial-character emotion probability and the text emotion probability:

G_i = αS_i + (1 - α)P_i

where α is the weight coefficient of the facial-character emotion probability, with 0 < α < 1.
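The weighted fusion G_i = αS_i + (1 - α)P_i is a one-liner; the default α below is an assumed example value, since the patent only constrains 0 < α < 1:

```python
def fuse_emotions(S, P, alpha=0.4):
    """G_i = alpha * S_i + (1 - alpha) * P_i; alpha = 0.4 is an assumed example value."""
    assert 0.0 < alpha < 1.0
    return [alpha * s + (1.0 - alpha) * p for s, p in zip(S, P)]
```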
In any of the above schemes, preferably, the step 5 includes using the class with the highest probability value as the final emotion classification of the document.
The invention provides a multi-category emotion extraction method fusing facial characters, which can effectively mine and extract a large amount of information published by users in social media, and carry out valuable information prediction and decision based on results, can be applied to multiple fields of politics, economy, services, medical treatment and the like, and has higher economic and social values.
Word2Vec is a tool developed by Google corporation for training Word vectors, and includes both CBOW (Continuous Bag-of-Words Model) and Skip-gram (Continuous Skip-gram Model) training models.
The Skip-Gram model is a training of word vectors based on the context of the target word.
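As an illustration of how Skip-Gram frames its training task, a minimal sketch of generating (target, context) pairs within a window; the actual Word2Vec training (negative sampling, weight updates) is not reproduced here:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs as consumed by the Skip-Gram model."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs
```

In the method above, segmented texts (with facial characters kept as single tokens) would be fed to such a model so that facial characters and words share one embedding space.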
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a multi-category emotion extraction method with text and color fusion according to the invention.
FIG. 2 is a flowchart of an embodiment of a method for constructing a facial character emotion dictionary in accordance with the method for extracting multi-class emotion fusing facial characters of the present invention.
FIG. 3 is a flowchart of an embodiment of a text emotion probability calculation method of a multi-class emotion extraction method with color and text fusion according to the invention.
FIG. 4 is a flow chart of another preferred embodiment of the multi-category emotion extraction method with color text fusion according to the invention.
FIG. 5 is a flowchart of an embodiment of text emotion probability calculation according to the multi-class emotion extraction method with color word fusion of the present invention.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
Example one
As shown in fig. 1, step 100 is performed to preprocess the text set. In this step, step 101 is executed: a regular expression is used to extract the facial characters from the text set and generate a facial character dictionary; then step 102: the facial character dictionary is added to the user-defined dictionary of a Chinese word segmentation tool, all texts in the text set are segmented, and stop words are removed.
And step 110 is executed, the preprocessed text set is put into a Skip-Gram model in Word2Vec for training, the context relation of the words is embedded into a low-dimensional space, and Word vectors corresponding to all the words are obtained.
Step 120 is executed to construct the facial character emotion dictionary. As shown in fig. 2, step 121 is executed: the emotions are divided into four pairs of opposites according to the Plutchik emotion wheel, and the eight emotion words and the word vector corresponding to each facial character in the facial character dictionary are obtained from the trained Skip-Gram model;
step 122 is executed to calculate the similarity between each facial character vector and the eight emotion word vectors, i.e., the cosine similarities sim_1, sim_2, ..., sim_8; the cosine similarity of two word vectors X and Y is calculated as:

sim(X, Y) = \frac{\sum_{i=1}^{D} x_i y_i}{\sqrt{\sum_{i=1}^{D} x_i^2}\,\sqrt{\sum_{i=1}^{D} y_i^2}}

where X = (x_1, x_2, ..., x_D) and Y = (y_1, y_2, ..., y_D) are the word vector representations of words X and Y, D is the dimension of the word vectors, and i indexes the i-th component.
Step 123 is executed to normalize the cosine similarities sim_1, sim_2, ..., sim_8; the classification probability P(w)_i of the i-th emotion for facial character w is calculated as:

P(w)_i = \frac{\mathrm{sim}_i}{\sum_{j=1}^{8} \mathrm{sim}_j}

where sim_i is the cosine similarity between the facial character and the i-th emotion word; finally, P(w)_1 + P(w)_2 + ... + P(w)_8 = 1.
Step 124 is executed to calculate the emotion probabilities of all the facial characters and generate the facial character emotion dictionary. The emotions include eight categories: joy, sadness, trust, disgust, surprise, anger, anticipation, and fear, where joy, trust, and anticipation are positive emotions, sadness, disgust, anger, and fear are negative emotions, and surprise is a neutral emotion.
Step 130 is executed to calculate the facial-character emotion probability of the document. For the set of all facial characters {w_1, w_2, ..., w_m} in a document, the facial character emotion dictionary is queried and each category's emotion probability is averaged to obtain the document's facial-character emotion probability:

S_i = \frac{1}{m}\sum_{j=1}^{m} P(w_j)_i

where S_i is the i-th facial-character emotion value of the document, m is the number of facial characters the document contains, and j indexes the j-th facial character in the document.
Step 140 is executed to calculate the text emotion probability. As shown in fig. 3, step 141 is executed to perform word vector representation: the text is represented with the word vectors output by Skip-Gram, obtaining the word vector sequence t = [t_1, t_2, ..., t_n], where t_i represents the i-th word in the text and n is the maximum number of words that can be input.
Step 142 is executed to input the bidirectional LSTM network: the word vector sequence t = [t_1, t_2, ..., t_n] is input into forward and backward long short-term memory networks respectively, fully fusing the context information of the text to obtain the forward feature sequence t_f = [t_f1, t_f2, ..., t_fn] and the backward feature sequence t_b = [t_b1, t_b2, ..., t_bn]; the two vectors of each word in t_f and t_b are spliced to obtain the spliced word vector sequence t_fb = [t_fb1, t_fb2, ..., t_fbn], where t_fbi = [t_fi; t_bi] has dimension 2D.
Step 143 is executed to input the text convolutional neural network: the text convolution model performs the convolution operation on the matrix t_fb. A convolution kernel w ∈ R^{h×2D} of height h slides over the matrix t_fb as a one-dimensional convolution with stride 1, and the sequence feature c_i is obtained by convolving the kernel w with the matrix region x_{i:i+h-1}:

c_i = f(w · x_{i:i+h-1} + b)

where f is a nonlinear activation function such as tanh, and b ∈ R is the bias term. Convolving the kernel w with each region {x_{1:h}, x_{2:h+1}, x_{3:h+2}, ..., x_{n-h+1:n}} of t_fb yields a single-column sequence feature C = [c_1, c_2, ..., c_{n-h+1}]. The kernel height h may be set to 1, 2, 3, ...; with q convolution kernels, q sequence features are obtained.
Step 144 is executed: downsampling with maximum pooling to obtain the sequence feature z = {z_1, z_2, ..., z_q}.
Step 145 is executed to input the softmax layer: z is input and the T×1 vector P = {P_1, P_2, ..., P_i, ..., P_T} = softmax[w · (z ∘ r) + b] is output, where w is the weight matrix, r introduces the Dropout operation, b is the bias vector, T is the number of categories, and P_i denotes the probability that the current text belongs to the i-th emotion category.
Step 150 is executed to calculate the document's integrated emotion probability, taking the category with the highest probability as the document's final emotion classification. The integrated emotion probability is G = {G_1, G_2, ..., G_i, ..., G_T}, where G_i, the probability that the document belongs to the i-th emotion category, is calculated from the facial-character emotion probability and the text emotion probability:

G_i = αS_i + (1 - α)P_i

where α is the weight coefficient of the facial-character emotion probability, with 0 < α < 1.
Example two
The main drawbacks of the prior art solutions are:
(1) Emotion dictionaries: constructing and refining the rules and dictionaries requires substantial manual effort; the difficulty of constructing an emotion dictionary differs across domains and accurate construction is costly; and the influence of word context on emotion changes is not considered.
(2) Machine learning: a convolutional neural network extracts local text features well from different angles but cannot handle the long-range context dependence of long texts. A long short-term memory network effectively integrates adjacent position information and avoids the vanishing- and exploding-gradient problems caused by long-term dependence, but is weak at recognizing local features.
(3) Facial characters not considered: current emotion analysis is mostly based on plain text, but as facial characters have developed, more and more users frequently use them to express their inner emotions. Traditional emotion analysis based on text alone therefore cannot meet the demand; facial characters can provide more, and more accurate, information for user emotion decisions and improve decision accuracy.
To address these problems, the invention provides a multi-class emotion extraction method fusing facial characters. Compared with plain text data, facial characters are more conducive to emotion expression, so using them together with the text helps improve the comprehensiveness and accuracy of emotion extraction. First, a facial character emotion dictionary is generated: the facial characters in the corpus are extracted with a regular expression and represented as word vectors, the similarity between each facial character and each emotion word is calculated, and normalization yields the facial character's emotion probabilities. When extracting a document's emotion information, facial characters and text are processed separately: the facial-character part computes the document's facial-character emotion probability by querying the facial character emotion dictionary; the text part converts words into low-dimensional vectors that incorporate context information via the Skip-Gram word vector model, further extracts context features through a bidirectional long short-term memory network, then applies convolution kernels of different heights to the text matrix to extract local features, and computes the text emotion probability through pooling, a fully connected layer, and the Softmax function. Finally, the document's emotion information is computed as a weighted combination of the facial-character and text results.
As shown in fig. 2, the detailed steps are described as follows:
step 1 pretreatment
The facial characters are extracted from the text set with a regular expression to generate a facial character dictionary; the dictionary is added to the user-defined dictionary of a Chinese word segmentation tool such as Jieba or NLPIR, all texts in the text set are segmented, and stop words are removed.
Step 2 word embedding
And putting the preprocessed text set into a Skip-Gram model in Word2Vec for training, and embedding the context relationship of the words into a low-dimensional space to obtain Word vectors corresponding to all the words. The size of the window in the model parameters can be 10, and the number of the neurons of the hidden layer can be 300.
Step 3, constructing a facial character emotion dictionary
3.1 According to the Plutchik emotion wheel, the emotions are divided into four pairs of opposites, eight categories in total: joy, sadness, trust, disgust, surprise, anger, anticipation, and fear, where joy, trust, and anticipation are positive emotions, sadness, disgust, anger, and fear are negative emotions, and surprise is a neutral emotion. The eight emotion words and the word vector corresponding to each facial character in the facial character dictionary are obtained from the trained Skip-Gram model;
3.2 The similarity between each facial character vector and the eight emotion word vectors, i.e., the cosine similarities sim_1, sim_2, ..., sim_8, is calculated; the cosine similarity of two word vectors X and Y is:

sim(X, Y) = \frac{\sum_{i=1}^{D} x_i y_i}{\sqrt{\sum_{i=1}^{D} x_i^2}\,\sqrt{\sum_{i=1}^{D} y_i^2}}

where X = (x_1, x_2, ..., x_D) and Y = (y_1, y_2, ..., y_D) both contain D-dimensional features.
3.3 The cosine similarities sim_1, sim_2, ..., sim_8 obtained above are normalized; the classification probability P(w)_i of the i-th emotion for facial character w is calculated as:

P(w)_i = \frac{\mathrm{sim}_i}{\sum_{j=1}^{8} \mathrm{sim}_j}

where sim_i is the cosine similarity between the facial character and the i-th emotion word; finally, P(w)_1 + P(w)_2 + ... + P(w)_8 = 1. The emotion probabilities of all facial characters are calculated to generate the facial character emotion dictionary.
Step 4, calculating the facial-character emotion probability of the document
For the set of all facial characters {w_1, w_2, ..., w_m} in a document, the facial character emotion dictionary is queried and each category's emotion probability is averaged to obtain the document's facial-character emotion probability:

S_i = \frac{1}{m}\sum_{j=1}^{m} P(w_j)_i

where S_i is the i-th facial-character emotion value of the document and m is the number of facial characters the document contains.
Step 5, calculating the text emotion probability
FIG. 2 is a flowchart of text emotion probability calculation.
5.1 Word vector representation: the text is represented with the word vectors output by the Skip-Gram model in step 2, obtaining the word vector sequence t = [t_1, t_2, ..., t_n], where t_i represents the i-th word in the text and n is the maximum number of words that can be input; sequences with fewer than n words are zero-padded.
5.2 Input the bidirectional LSTM network: the word vector sequence t = [t_1, t_2, ..., t_n] is input into forward and backward long short-term memory (LSTM) networks respectively, fully fusing the context information of the text to obtain the forward feature sequence t_f = [t_f1, t_f2, ..., t_fn] and the backward feature sequence t_b = [t_b1, t_b2, ..., t_bn]; the two vectors of each word in t_f and t_b are spliced to obtain the spliced word vector sequence t_fb = [t_fb1, t_fb2, ..., t_fbn], where t_fbi = [t_fi; t_bi] has dimension 2D.
5.3 Input to the text convolutional neural network: a text convolution model performs convolution operations on the matrix tfb to further extract local text features. A convolution kernel w ∈ R^(h×2D) has height h (multiple heights may be set), while the kernel width is fixed at 2D, twice the word vector dimension D. The kernel w slides over the matrix tfb with stride 1 as a one-dimensional convolution, similar to traditional N-gram feature extraction; the sequence feature ci is obtained by convolving the kernel w with the matrix region xi:i+h-1:
ci=f(w·xi:i+h-1+b)
where f is a nonlinear activation function such as tanh, and b ∈ R is the bias term. Applying the kernel w to each region {x1:h, x2:h+1, x3:h+2, …, xn-h+1:n} of tfb yields a sequence feature C = [c1, c2, …, cn-h+1] with n-h+1 columns. The kernel height h may be set to 1, 2, 3, …, q (where q is the number of convolution kernels), yielding q sequence features.
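A plain-Python sketch of this stride-1 one-dimensional convolution (toy shapes; in the method itself tfb has n rows of dimension 2D and the kernel has h rows):

```python
import math

def conv_sequence(t_fb, w, b):
    # t_fb: n row-vectors; w: kernel of h row-vectors; b: scalar bias.
    # c_i = tanh(w . x_{i:i+h-1} + b), stride 1 -> n - h + 1 features.
    n, h = len(t_fb), len(w)
    features = []
    for i in range(n - h + 1):
        region = t_fb[i:i + h]
        s = sum(wv * xv for w_row, x_row in zip(w, region)
                        for wv, xv in zip(w_row, x_row))
        features.append(math.tanh(s + b))
    return features
```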
5.4 Pooling: max pooling is used for down-sampling, obtaining the sequence feature z = {z1, z2, …, zq}.
5.5 Input to the softmax layer: given input z, the layer outputs the T×1 vector P = {P1, P2, …, Pi, …, PT} = softmax[w·(z∘r)+b], wherein w is a weight matrix, r is a mask introducing the Dropout operation, b is a bias vector, T = 8 is the number of categories, and Pi represents the probability that the current text belongs to the ith emotion category.
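Steps 5.4–5.5 (max pooling followed by the softmax layer) can be sketched in plain Python; the Dropout mask r is omitted here, as it would be at inference time, and the function name is illustrative:

```python
import math

def pool_and_classify(feature_maps, W, b):
    # Max pooling: z_k is the maximum of the k-th sequence feature C
    z = [max(c) for c in feature_maps]
    # Softmax layer over T classes: P_i proportional to exp(w_i . z + b_i)
    logits = [sum(wi * zi for wi, zi in zip(row, z)) + bi
              for row, bi in zip(W, b)]
    peak = max(logits)                      # subtract max for numeric stability
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```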
Step 6, calculating the comprehensive emotion probability of the document
The document comprehensive emotion probability is G = {G1, G2, …, Gi, …, GT}, wherein Gi, the probability that the document belongs to the ith emotion category, is calculated from the facial character emotion probability value and the text emotion probability value:
Gi=αSi+(1-α)Pi
wherein α is the facial character emotion probability weight coefficient, with 0 < α < 1; finally, the category with the maximum probability value is taken as the document's final emotion classification. If positive, negative and neutral emotional polarities are required as output, the probabilities of joy, trust and anticipation are summed as the positive emotion probability; the four emotions sadness, disgust, anger and fear are summed as the negative emotion probability; surprise is used directly as the neutral emotion probability; and the polarity (positive, negative or neutral) with the maximum probability value is taken as the final emotional polarity.
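The fusion and polarity mapping of step 6 can be sketched as follows; the category index order (joy, trust, anticipation, sadness, disgust, anger, fear, surprise) is an assumed convention for illustration, not fixed by the method:

```python
def fuse_document(S, P, alpha=0.6):
    # G_i = alpha * S_i + (1 - alpha) * P_i, with 0 < alpha < 1
    return [alpha * s + (1 - alpha) * p for s, p in zip(S, P)]

def polarity(G):
    # Positive = joy + trust + anticipation; negative = sadness + disgust
    # + anger + fear; neutral = surprise (assumed index order, see above)
    scores = {
        "positive": G[0] + G[1] + G[2],
        "negative": G[3] + G[4] + G[5] + G[6],
        "neutral": G[7],
    }
    return max(scores, key=scores.get)
```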
On the basis of the text, the method fuses facial characters to extract multiple emotion categories: in social media, facial characters are used frequently, aid emotional expression, and alter semantic and contextual sentiment, so traditional text-only sentiment analysis is no longer sufficient. The method extracts the multi-category emotion probabilities of facial characters by computing similarity, generates a facial character emotion dictionary, and, by calculating the document facial character emotion probability, integrates facial character emotion information with the text, helping to improve the comprehensiveness and accuracy of user emotion extraction and, in turn, the accuracy of downstream decisions.
The method integrates multiple models to extract text features: it first obtains embedded word vectors with the Skip-gram model, then further extracts contextual feature information through a bidirectional LSTM, and finally performs convolution with kernels of different heights to extract local text features, so that the final feature space has low dimensionality while capturing both global context and local text information. The method combines the advantages of these models, exploits the efficiency and strong feature-learning capability of convolutional and recurrent neural networks to provide a reliable basis for emotion extraction, and reduces dependence on manually constructed emotion dictionaries and rules.
For a better understanding of the present invention, the foregoing detailed description has been given in conjunction with specific embodiments thereof, but not with the intention of limiting the invention thereto. Any simple modifications of the above embodiments according to the technical essence of the present invention still fall within the scope of the technical solution of the present invention. In the present specification, each embodiment is described with emphasis on differences from other embodiments, and the same or similar parts between the respective embodiments may be referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

Claims (10)

1. A multi-category emotion extraction method fusing facial characters comprises the steps of preprocessing a text set, and is characterized by further comprising the following steps:
step 1: putting the preprocessed text set into a Skip-Gram model in Word2Vec for training, and embedding the context relationship of words into a low-dimensional space to obtain Word vectors corresponding to all the words;
step 2: constructing a facial character emotion dictionary;
and step 3: calculating the emotion probability of the characters in the document;
and 4, step 4: calculating the text emotion probability;
and 5: and calculating the comprehensive emotion probability of the document.
2. The multi-category emotion extraction method fusing facial characters as claimed in claim 1, wherein said step 2 comprises the sub-steps of:
step 21: dividing emotions into four groups of opposite emotions according to the Plutchik emotion wheel to obtain eight emotion words, and acquiring from the trained Skip-Gram model the word vector corresponding to each word in the dictionary;
step 22: respectively calculating the similarity between each facial character vector and the eight emotion word vectors, namely the cosine distances sim1, sim2, …, sim8; the cosine distance between two word vectors X and Y is calculated as follows:
sim(X, Y) = ( Σi=1..D xi·yi ) / ( √(Σi=1..D xi²) · √(Σi=1..D yi²) )
wherein X = (x1, x2, x3, …, xD) and Y = (y1, y2, y3, …, yD) each contain D-dimensional features; X is the word vector representation of word x, Y is the word vector representation of word y, D represents the dimension of the word vectors, and i indexes the ith component of a word vector;
step 23: normalizing the cosine distances sim1, sim2, …, sim8; P(w)i, the classification probability of the ith emotion for the facial character w, can be calculated by the following formula:
P(w)i = simi / (sim1 + sim2 + … + sim8)
wherein simi represents the cosine distance between the facial character and the ith emotion word; as a result, P(w)1 + P(w)2 + … + P(w)8 = 1;
Step 24: calculating the emotion probability of all the face characters and generating a face character emotion dictionary.
3. The method as claimed in claim 2, wherein said step 3 comprises: for the set of all facial characters {w1, w2, …, wm} in a document, querying the facial character emotion dictionary and averaging the probabilities of each emotion category to obtain the document facial character emotion probability values:
Si = (1/m) · Σj=1..m P(wj)i
wherein Si is the ith emotion probability value of the document's facial characters, m is the number of facial characters contained in the document, and j represents the jth facial character in the document.
4. The multi-category emotion extraction method fusing facial characters as claimed in claim 3, wherein said step 4 comprises the sub-steps of:
step 41: carrying out word vector representation;
step 42: inputting a bidirectional LSTM network;
step 43: inputting a text convolution neural network;
step 44: using max pooling to perform down-sampling, obtaining the sequence feature z = {z1, z2, …, zq};
Step 45: and inputting the softmax layer.
5. The method as claimed in claim 4, wherein the step 41 comprises representing the text with the word vectors output by Skip-Gram, obtaining the text's word vector sequence t = [t1, t2, …, tn], wherein ti represents the ith word in the text and n is the maximum number of input words.
6. The method of claim 5, wherein the step 42 comprises feeding the word vector sequence t = [t1, t2, …, tn] into forward and backward long short-term memory networks respectively, fully fusing the text's context information to obtain a forward feature sequence tf = [tf1, tf2, …, tfn] and a backward feature sequence tb = [tb1, tb2, …, tbn], and concatenating the two vectors of each word in tf and tb to obtain the concatenated word vector sequence tfb = [tfb1, tfb2, …, tfbn], wherein tfbi = [tfi; tbi] has dimension 2D.
7. The method of claim 6, wherein said step 43 comprises performing a convolution operation on said matrix tfb using a text convolution model, wherein a convolution kernel w ∈ R^(h×2D) has height h and slides over the matrix tfb as a one-dimensional convolution with stride 1, and the sequence feature ci is obtained by convolving the kernel w with the matrix region xi:i+h-1:
ci=f(w·xi:i+h-1+b)
wherein f is a nonlinear activation function such as tanh, and b ∈ R is the bias term; applying the kernel w to each region {x1:h, x2:h+1, x3:h+2, …, xn-h+1:n} of tfb yields a sequence feature C = [c1, c2, …, cn-h+1] with n-h+1 columns; the kernel height h may be set to 1, 2, 3, …, q (where q is the number of convolution kernels), yielding q sequence features.
8. The method of claim 7, wherein step 45 comprises inputting z and outputting the T×1 vector P = {P1, P2, …, Pi, …, PT} = softmax[w·(z∘r)+b], wherein w is the weight matrix, r is a mask introducing the Dropout operation, b is the bias vector, T is the number of categories, and Pi indicates the probability that the current text belongs to the ith emotion category.
9. The method of claim 8, wherein the document comprehensive emotion probability is G = {G1, G2, …, Gi, …, GT}, wherein Gi, the probability that the document belongs to the ith emotion category, is calculated from the facial character emotion probability value and the text emotion probability value:
Gi=αSi+(1-α)Pi
wherein α is the facial character emotion probability weight coefficient, with a value range of 0 < α < 1.
10. The method as claimed in claim 9, wherein the step 5 comprises taking the category with the highest probability value as the final emotion classification of the document.
CN202110412378.2A 2021-01-26 2021-04-16 Multi-category emotion extraction method fusing facial characters Pending CN113191135A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021101012186 2021-01-26
CN202110101218 2021-01-26

Publications (1)

Publication Number Publication Date
CN113191135A true CN113191135A (en) 2021-07-30

Family

ID=76977325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412378.2A Pending CN113191135A (en) 2021-01-26 2021-04-16 Multi-category emotion extraction method fusing facial characters

Country Status (1)

Country Link
CN (1) CN113191135A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114912437A (en) * 2022-04-29 2022-08-16 上海交通大学 Bullet screen text detection and extraction method, system, terminal and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595632A (en) * 2018-04-24 2018-09-28 福州大学 A kind of hybrid neural networks file classification method of fusion abstract and body feature
CN109376251A (en) * 2018-09-25 2019-02-22 南京大学 A kind of microblogging Chinese sentiment dictionary construction method based on term vector learning model
CN109543012A (en) * 2018-10-25 2019-03-29 苏宁易购集团股份有限公司 A kind of user's intension recognizing method and device based on Word2Vec and RNN
CN110301117A (en) * 2017-11-24 2019-10-01 微软技术许可有限责任公司 Response is provided in a session
CN110502753A (en) * 2019-08-23 2019-11-26 昆明理工大学 A kind of deep learning sentiment analysis model and its analysis method based on semantically enhancement
CN111046136A (en) * 2019-11-13 2020-04-21 天津大学 Method for calculating multi-dimensional emotion intensity value by fusing emoticons and short text
CN111353044A (en) * 2020-03-09 2020-06-30 重庆邮电大学 Comment-based emotion analysis method and system
CN112164391A (en) * 2020-10-16 2021-01-01 腾讯科技(深圳)有限公司 Statement processing method and device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
CN111767741B (en) Text emotion analysis method based on deep learning and TFIDF algorithm
CN106919673B (en) Text mood analysis system based on deep learning
CN106776581B (en) Subjective text emotion analysis method based on deep learning
Ren et al. Intention detection based on siamese neural network with triplet loss
CN110287323B (en) Target-oriented emotion classification method
CN111209401A (en) System and method for classifying and processing sentiment polarity of online public opinion text information
CN110502626B (en) Aspect level emotion analysis method based on convolutional neural network
CN110362819B (en) Text emotion analysis method based on convolutional neural network
CN110909736B (en) Image description method based on long-term and short-term memory model and target detection algorithm
CN111382565A (en) Multi-label-based emotion-reason pair extraction method and system
CN107818084B (en) Emotion analysis method fused with comment matching diagram
CN111259153B (en) Attribute-level emotion analysis method of complete attention mechanism
CN110750648A (en) Text emotion classification method based on deep learning and feature fusion
CN112287106A (en) Online comment emotion classification method based on dual-channel hybrid neural network
CN113360582B (en) Relation classification method and system based on BERT model fusion multi-entity information
CN117236338B (en) Named entity recognition model of dense entity text and training method thereof
CN110297986A (en) A kind of Sentiment orientation analysis method of hot microblog topic
CN111159405B (en) Irony detection method based on background knowledge
Gan et al. DHF-Net: A hierarchical feature interactive fusion network for dialogue emotion recognition
CN115759119A (en) Financial text emotion analysis method, system, medium and equipment
CN111814450A (en) Aspect-level emotion analysis method based on residual attention
CN115510230A (en) Mongolian emotion analysis method based on multi-dimensional feature fusion and comparative reinforcement learning mechanism
CN116757218A (en) Short text event coreference resolution method based on sentence relation prediction
Bölücü et al. Hate Speech and Offensive Content Identification with Graph Convolutional Networks.
Yan et al. Implicit emotional tendency recognition based on disconnected recurrent neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination