CN104965822A - Emotion analysis method for Chinese texts based on computer information processing technology - Google Patents


Info

Publication number
CN104965822A
CN104965822A (application CN201510452024.5A); granted as CN104965822B
Authority
CN
China
Prior art keywords
word
comment
node
chinese
visible
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510452024.5A
Other languages
Chinese (zh)
Other versions
CN104965822B (en)
Inventor
高琰
晁绪耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201510452024.5A priority Critical patent/CN104965822B/en
Publication of CN104965822A publication Critical patent/CN104965822A/en
Application granted granted Critical
Publication of CN104965822B publication Critical patent/CN104965822B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Machine Translation (AREA)

Abstract

The invention discloses an emotion analysis method for Chinese texts based on computer information processing technology. Comments on Chinese products are segmented into words, and a bag-of-words model generates a vector representation of each product comment. Each comment vector is input to the visible units of a restricted Boltzmann machine (RBM), a deep-learning model that extracts the sentiment features of the Chinese text; the extracted sentiment features are then input to an SVM for text emotion classification. The method improves the emotional-semantic relevance of the features, while the SVM improves the accuracy of emotion classification for comments on Chinese products.

Description

A Chinese text sentiment analysis method based on computer information processing technology
Technical field
The present invention relates to natural language processing technology, and in particular to a Chinese text sentiment analysis method based on computer information processing technology.
Background technology
With the rapid development of the Internet and e-commerce, people increasingly shop online and publish their opinions and views on products. Sentiment analysis is a popular research field in artificial intelligence; its goal is to make a computer extract, from subjective text produced by users, the author's emotional tendency toward the evaluated object, i.e. positive or negative. Positive means the user likes the product and holds a praising, affirmative attitude toward it; negative means the user holds a negative attitude toward the product because it has some defect or unsatisfactory aspect.
At present, the main methods for sentiment analysis of online comments are knowledge-based methods and machine-learning-based methods.
Knowledge-based methods mainly rely on existing sentiment dictionaries and linguistic knowledge to classify the sentiment orientation of text. For example, Polanyi et al. built several sentiment dictionaries and emotion rules around polarity words and related negation words; Velikovich et al. showed how to learn a large sentiment lexicon from a seed dictionary and a graph-propagation framework and then perform sentiment analysis on text; Lee et al. determine the emotional tendency of a text by comparing the numbers of positive and negative words, taking whichever side is larger as the sentiment orientation of the whole text. Sentiment dictionaries, however, depend on expert curation and cannot be fully comprehensive.
Machine-learning methods include supervised and unsupervised learning. Turney et al. proposed an unsupervised method for sentiment analysis of document-level text. Tang Huifeng et al. used an n-gram model to extract nouns, verbs, adverbs and adjectives as text features, then classified the text with KNN, Bayes and SVM classifiers respectively, training on different corpus scales and feature counts, to identify the emotional orientation of the text. Sun Yan et al. proposed an unsupervised topic-sentiment mixture model that, working on unlabeled text, gathers the sentiment label and topic label of each sentence to obtain the sentiment words of the text and thereby classify document-level sentiment. However, Chinese product reviews are written in Chinese, which is polysemous and less rule-bound; when machine learning is applied to automatic emotion analysis of such reviews, incorrectly selected or extracted features often lead to low accuracy.
In machine-learning-based sentiment analysis, feature extraction is the key factor determining performance. Deep learning combines low-level features into more abstract high-level representations of attribute categories or features, in order to discover distributed representations of the data. Deep learning builds neural networks that simulate the analysis and learning of the human brain, imitating its mechanisms to interpret data such as images, sound and text. The RBM, as a deep-learning algorithm, learns layer by layer and captures structural information of the data, so it can be used to extract features that represent sentiment polarity.
Summary of the invention
The technical problem to be solved by this invention is, in view of the deficiencies of the prior art, to provide a Chinese text sentiment analysis method based on computer information processing technology.
To solve the above technical problem, the technical solution adopted by the present invention is a Chinese text sentiment analysis method based on computer information processing technology, comprising the following steps:
1) Input a set of Chinese product review texts; perform Chinese word segmentation with the accurate mode of the jieba segmenter, determine parts of speech, and filter by part of speech, retaining only nouns, adjectives, adverbs and conjunctions, to obtain the segmented corpus; for the segmented text, compute the weight of each word in every comment, producing a comment vector;
2) Use a restricted Boltzmann machine (RBM) to perform feature selection on the above comment vectors. The RBM consists of a visible layer V and a hidden layer H; the neurons in each layer take binary random states, the visible-layer nodes are mutually independent, and so are the hidden-layer nodes; the connection weights between visible-layer and hidden-layer nodes are W. The comment vectors produced in step 1) are input to the visible units;
3) Train the restricted Boltzmann machine with the contrastive divergence method: initialize the RBM parameters and, after 100 iterations, obtain the conditional probability p(h_j = 1 | v) of hidden node h_j being 1 given the visible layer v, and the conditional probability p(v_i = 1 | h) of visible node v_i being 1 given the hidden layer h, where p(h_j = 1 | v) = σ(b_j + Σ_i v_i w_ij) and p(v_i = 1 | h) = σ(a_i + Σ_j h_j w_ij). The bias terms a_i and b_j have initial value 0; w_ij, the connection weight between visible node i and hidden node j, obeys a normal distribution with mean 0 and standard deviation 1;
4) Reconstruct each visible node v_i from the hidden layer. If the reconstruction value of v_i is less than or equal to 0.01, v_i is considered unable to represent high-level structure, and the word corresponding to v_i is discarded. If the reconstruction value of v_i is greater than 0.01, the input value is subtracted from the reconstruction value to obtain a difference value. The maximum number of sentiment features per comment is set to 30, and the difference values are sorted in ascending order. The first 30 words have smaller difference values than the rest, indicating that v_i changed little under reconstruction; the Chinese word corresponding to each such v_i is found and taken as a feature word. The remaining words have larger difference values than the first 30, indicating that v_i changed substantially under reconstruction, so the corresponding words are treated as non-feature words. If fewer than 30 visible units in a comment have reconstruction values greater than 0.01, the words corresponding to all of those visible nodes are retained as feature words. The retained feature words jointly form the vector space of each comment;
5) Input the new comment vectors produced by step 4) into a support vector machine for classification, obtaining the emotion class of each comment vector; the emotion class is positive or negative.
In step 1), the comment vector is produced as follows: using the bag-of-words model from natural language processing, if a word occurs in a comment, the corresponding weight in the comment vector is set to 1, otherwise 0; the length of the comment vector equals the size of the dictionary of extracted words.
The number of visible-layer nodes is 476 and the number of hidden-layer nodes is 50.
Compared with the prior art, the present invention has the following beneficial effects. The deep-learning algorithm RBM performs high-level abstraction of the original feature information of Chinese product reviews, reconstructs the original features from the high-level information, and performs feature selection according to the reconstruction values, obtaining feature words that better express high-level structural information. The method extracts features that better capture the sentiment essence of product reviews. Feature extraction with an RBM yields new features that are combinations of the original ones, whereas RBM-based feature selection compares the reconstructed comment with the original comment to pick out the comment's characteristic words; compared with feature extraction, the selected features are more interpretable. Feature selection on the original features of Chinese product reviews is carried out with the RBM, and the comment vectors formed from the feature words are then input into an SVM for sentiment classification. Experimental results show that the method achieves high precision.
Brief description of the drawings
Fig. 1 is the flow chart of the RBM-SVM algorithm;
Fig. 2 shows the bag-of-words text vector representation;
Fig. 3 shows the word space after RBM reconstruction;
Fig. 4 compares a comment's feature information before and after feature selection and dimensionality reduction.
Detailed description of the embodiments
The specific implementation process of the present invention is as follows:
Step 1: Extract 1706 review texts from a Chinese review corpus of electronics products obtained from Datatang, comprising 853 positive reviews and 853 negative reviews.
Step 2: Perform Chinese word segmentation on these 1706 review texts with the accurate mode of jieba and determine parts of speech. Remove stop words, and apply a part-of-speech filtering rule defined as follows:
A. establish the set of parts of speech that express emotional tendency: {noun, adjective, adverb, conjunction};
B. judge the part of speech of each word in a comment: if the word's part of speech is not in the set, delete the word; otherwise retain it.
Step 3: Use the bag-of-words model from natural language processing: if a word occurs in a comment, its entry is set to 1, otherwise 0, and this serves as the term-vector representation of the text. The vector length equals the size of the dictionary of extracted words. For example, given two comments d1 and d2, process both comments, extract their nouns, adjectives, adverbs and conjunctions, and label them w1, w2, w3, w4, w5; the bag-of-words term vectors are then built from the presence or absence of these words in each comment.
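Step 3's binary bag-of-words construction can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: the comments are assumed to be already segmented and POS-filtered (in the patent this is done with jieba's accurate mode), and `build_vocab`/`comment_vector` are hypothetical helper names.

```python
# Illustrative sketch of the binary bag-of-words representation; the comments
# are assumed to be pre-segmented lists of retained words, and w1..w5 stand in
# for the extracted nouns, adjectives, adverbs and conjunctions.
def build_vocab(comments):
    """Dictionary of all retained words across comments, sorted for determinism."""
    return sorted({w for words in comments for w in words})

def comment_vector(words, vocab):
    """Binary presence vector: 1 if the word occurs in the comment, else 0."""
    present = set(words)
    return [1 if w in present else 0 for w in vocab]

# the two example comments d1, d2 with extracted words w1..w5
d1 = ["w1", "w2", "w4"]
d2 = ["w2", "w3", "w5"]
vocab = build_vocab([d1, d2])
v1 = comment_vector(d1, vocab)
v2 = comment_vector(d2, vocab)
```

The length of each vector equals the dictionary size, matching the text's description.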
Step 4: Build a two-layer restricted Boltzmann machine (RBM) with 476 visible-layer nodes and 50 hidden-layer nodes. The visible units are mutually independent. Visible node i and hidden node j are connected, with connection weight w_ij. The RBM assigns an energy to each joint configuration of the visible and hidden layers, which can be expressed as:
E(v, h; θ) = −Σ_ij w_ij v_i h_j − Σ_i a_i v_i − Σ_j b_j h_j
where i indexes the visible-layer nodes and j indexes the hidden-layer nodes; θ = {w_ij, a_i, b_j} are the model parameters; w_ij is the connection weight between visible node i and hidden node j; and a_i and b_j are the bias terms of the visible and hidden units respectively. Their joint probability distribution is:
p_θ(v, h) = (1 / Z(θ)) exp(−E(v, h; θ)) = (1 / Z(θ)) Π_ij e^(w_ij v_i h_j) Π_i e^(a_i v_i) Π_j e^(b_j h_j)
The joint probability is used to obtain the sampling probabilities of the hidden and visible layers.
In an RBM there are no direct connections between hidden units, i.e. the hidden nodes are conditionally independent given the visible layer, so it is easy to obtain an unbiased sample of the expectation under the data distribution. Given a randomly selected visible layer v, the probability that hidden unit h_j is in binary state 1 is:
p(h_j = 1 | v) = σ(b_j + Σ_i v_i w_ij)   (3-3)
where the function σ is the logistic sigmoid:
σ(x) = 1 / (1 + exp(−x))   (3-4)
Here v_i and h_j constitute an unbiased sample; σ is the same function used in neural networks, and the distribution is a conditional Bernoulli distribution. Since there are likewise no direct connections between visible-layer units, samples of the visible units are easy to obtain; given the hidden layer, the probability that visible unit v_i is in binary state 1 is:
p(v_i = 1 | h) = σ(a_i + Σ_j h_j w_ij)
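The two conditional probabilities, formula (3-3) and the reconstruction formula above, can be sketched directly in Python. This is a minimal illustration of the formulas, not the patent's implementation; the function names and the toy parameter values are hypothetical.

```python
import math

def sigmoid(x):
    # formula (3-4): sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def p_hidden_given_visible(v, W, b):
    # formula (3-3): p(h_j = 1 | v) = sigma(b_j + sum_i v_i * w_ij)
    return [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b))]

def p_visible_given_hidden(h, W, a):
    # p(v_i = 1 | h) = sigma(a_i + sum_j h_j * w_ij)
    return [sigmoid(a[i] + sum(h[j] * W[i][j] for j in range(len(h))))
            for i in range(len(a))]

# toy RBM with 3 visible and 2 hidden nodes (illustrative values only)
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
a = [0.0, 0.0, 0.0]   # visible biases
b = [0.0, 0.0]        # hidden biases
ph = p_hidden_given_visible([1, 0, 1], W, b)
pv = p_visible_given_hidden([1, 0], W, a)
```

Each returned entry is a Bernoulli probability in (0, 1), consistent with the binary node states described in the text.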
Step 5: Input the segmented Chinese comments into the RBM, with each word in the dictionary corresponding to one visible-layer node, and train the RBM without supervision.
Step 6: The RBM is trained rapidly and without supervision by a method called contrastive divergence (CD). CD performs k steps of Gibbs sampling, i.e. CD-k Gibbs learning. The CD-k Gibbs algorithm always starts from a training sample or a random initial state of the visible units, initializing the visible-unit state v_0, and then alternates the sampling steps below, where p(h|v_0) is the sampling probability of h_0, p(v|h_0) is the sampling probability of v_1, and so on:
h_0 ~ p(h | v_0),  v_1 ~ p(v | h_0)   (3-13)
h_1 ~ p(h | v_1),  v_2 ~ p(v | h_1)   (3-14)
…,  v_(k+1) ~ p(v | h_k)   (3-15)
Let S be the training set, and let (v_1, …, v_m) and (h_1, …, h_n) be the visible and hidden nodes of the RBM respectively. The CD-k Gibbs implementation flow is as follows, with these parameters: Δw_ij is the increment of weight w_ij, Δa_i is the increment of visible bias a_i, and Δb_j is the increment of hidden bias b_j.
Step 7: Initialize the parameters of the restricted Boltzmann machine (RBM): as in Step 4, the weights W of the RBM obey a normal distribution with mean 0 and standard deviation 1; the biases a_i and b_j are initialized to 0. The learning rate is 0.01.
Step 8: Set the number of iterations to 100. The CD-k Gibbs algorithm finally yields the weights w_ij between visible node i and hidden node j and the final values of the biases a_i and b_j, and p(v_i = 1 | h) is computed with the formula listed in Step 4.
Step 9: Reconstruct the word space of each comment according to p(v_i = 1 | h). If the reconstruction value of v_i is less than or equal to 0.01, v_i is considered unable to represent high-level structure, and its word is discarded. If the reconstruction value of v_i is greater than 0.01, the input value is subtracted from the reconstruction value to obtain a difference value. A small difference means the word changed little under reconstruction; the corresponding Chinese word is found and used as a feature. Conversely, a large difference means the word changed substantially under reconstruction, so it cannot serve as a feature of the comment. Following this principle, the difference values are sorted in ascending order and the first 30 are selected as feature words. If fewer than 30 visible units in a comment have reconstruction values greater than 0.01, the words corresponding to all of those visible nodes are retained as feature words. Feature selection thus reduces the dimensionality of each comment's term-vector space.
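Step 9's threshold-and-sort feature selection can be sketched as below. This is an illustrative sketch with a hypothetical function name; it uses the 0.01 threshold and the 30-feature cap from the text (the cap is lowered to 2 here so the toy data stays small).

```python
def select_features(words, v_input, v_recon, thresh=0.01, max_features=30):
    """Keep a word only if its reconstruction value exceeds `thresh`; among the
    survivors, keep the `max_features` words whose |reconstruction - input|
    difference is smallest (they changed least under reconstruction)."""
    survivors = [(abs(r - x), w)
                 for w, x, r in zip(words, v_input, v_recon) if r > thresh]
    survivors.sort(key=lambda t: t[0])          # ascending difference
    return [w for _, w in survivors[:max_features]]

# toy comment: word 'b' is dropped by the 0.01 threshold, and with a cap of 2
# the two least-changed surviving words are kept as feature words
selected = select_features(['a', 'b', 'c', 'd'],
                           [1, 1, 0, 1],
                           [0.98, 0.005, 0.4, 0.7],
                           max_features=2)
```

When fewer than `max_features` words survive the threshold, the slice simply returns all of them, matching the rule in the text.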
Step 10: The number of Gibbs sampling steps K is chosen experimentally. The 1706 review texts are randomly split into training and test sets at a ratio of 3:1. The number of hidden-layer nodes of the RBM is set to 2, represented by 0 and 1: 0 indicates that the judged review sentiment is negative, i.e. the user holds a negative attitude toward the product, and 1 indicates that it is positive, i.e. the user holds a positive attitude. The RBM thus acts as a clusterer and outputs the emotion class of each comment. K is set to 5, 10, 15, 20 and 25 in turn, and the RBM's recognition results for the product reviews as K varies are measured by three indices: precision, recall and F-measure.
Table 1. Recognition results of the RBM for product reviews under different K
The experimental results show that the F-measure is highest at K = 10, indicating the best overall performance, so K is set to 10.
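The three evaluation indices of Step 10 can be computed as in the standard sketch below: precision = TP/(TP+FP), recall = TP/(TP+FN), and F-measure as their harmonic mean. The counts in the example are illustrative, not the patent's experimental data.

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall and F-measure from true-positive, false-positive
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# illustrative counts for one class of reviews
p, r, f = precision_recall_f(tp=80, fp=20, fn=20)
```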
Step 11: The dimensionality-reduced comments are input into a support vector machine (SVM) for sentiment classification. The SVM is the maximum-margin linear classifier in feature space; its learning strategy is to maximize the margin, which is ultimately converted into solving a convex quadratic programming problem. The dual objective function of the SVM is:
max W(α) = Σ_{i=1..N} α_i − (1/2) Σ_{i,j=1..N} α_i α_j y_i y_j K(x_i, x_j)
s.t. Σ_{i=1..N} α_i y_i = 0;  C ≥ α_i ≥ 0,  i = 1, …, N
Solving this objective yields:
α* = (α_1*, …, α_N*). Compute w* = Σ_{i=1..l} y_i α_i* x_i, select a component α_j* of α* satisfying 0 < α_j* < C,
and compute accordingly: b* = y_j − Σ_{i=1..l} y_i α_i* K(x_i, x_j)
where α = {α_1, …, α_N}; K(x_i, x_j) is the kernel function of Chinese comments x_i and x_j; y_i is the emotion class of Chinese comment x_i; and C is the penalty factor, set to 0.0001 in the experiments.
The kernel function adopted for Chinese comments x_1 and x_2 in the SVM is the radial basis function (RBF), shown below; the kernel maps the feature space from low to high dimension.
K(x_1, x_2) = exp(−‖x_1 − x_2‖² / (2σ²))
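The RBF kernel is straightforward to sketch in Python. This is an illustration of the formula only; `rbf_kernel` is a hypothetical name, not from the patent.

```python
import math

def rbf_kernel(x1, x2, sigma=1.0):
    """K(x1, x2) = exp(-||x1 - x2||^2 / (2 * sigma^2))."""
    sq_dist = sum((u - w) ** 2 for u, w in zip(x1, x2))
    return math.exp(-sq_dist / (2.0 * sigma * sigma))

k_same = rbf_kernel([1, 0, 1], [1, 0, 1])   # identical vectors give the maximum value
k_diff = rbf_kernel([0, 0], [1, 0])         # squared distance 1
```

The kernel equals 1 for identical inputs and decays toward 0 as the comments' vectors move apart, which is what lets the SVM separate them in the induced high-dimensional space.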
Step 12: The combined RBM-SVM sentiment analysis method is compared with using SVM or RBM alone; precision, recall and F-measure are computed from the output results and the manually annotated results. The experimental results are shown in the following table:
Table 2. Recognition results of the different methods
As seen from Table 2, combining RBM and SVM greatly improves sentiment-analysis accuracy. Compared with SVM alone, the combination improves both precision and recall. Compared with RBM alone, the combination improves precision greatly; although recall drops slightly, the F-measure is much higher than with RBM alone, showing that the combined RBM-SVM sentiment analysis method has the best overall performance.

Claims (3)

1. A Chinese text sentiment analysis method based on computer information processing technology, characterized in that it comprises the following steps:
1) Input a set of Chinese product review texts; perform Chinese word segmentation with the accurate mode of the jieba segmenter, determine parts of speech, and filter by part of speech, retaining only nouns, adjectives, adverbs and conjunctions, to obtain the segmented corpus; for the segmented text, compute the weight of each word in every comment, producing a comment vector;
2) Use a restricted Boltzmann machine (RBM) to perform feature selection on the above comment vectors. The RBM consists of a visible layer V and a hidden layer H; the neurons in each layer take binary random states, the visible-layer nodes are mutually independent, and so are the hidden-layer nodes; the connection weights between visible-layer and hidden-layer nodes are W. The comment vectors produced in step 1) are input to the visible units;
3) Train the restricted Boltzmann machine with the contrastive divergence method: initialize the RBM parameters and, after 100 iterations, obtain the conditional probability p(h_j = 1 | v) of hidden node h_j being 1 given the visible layer v, and the conditional probability p(v_i = 1 | h) of visible node v_i being 1 given the hidden layer h, where p(h_j = 1 | v) = σ(b_j + Σ_i v_i w_ij) and p(v_i = 1 | h) = σ(a_i + Σ_j h_j w_ij). The bias terms a_i and b_j have initial value 0; w_ij, the connection weight between visible node i and hidden node j, obeys a normal distribution with mean 0 and standard deviation 1;
4) According to the conditional probability p(v_i = 1 | h) of visible node v_i being 1 given the hidden layer h, reconstruct visible node v_i. If the reconstruction value of v_i is less than or equal to 0.01, v_i is considered unable to represent high-level structure, and the word corresponding to v_i is discarded. If the reconstruction value of v_i is greater than 0.01, the input value is subtracted from the reconstruction value to obtain a difference value. The maximum number of sentiment features per comment is set to 30, and the difference values are sorted in ascending order. The first 30 words have smaller difference values than the rest, indicating that v_i changed little under reconstruction; the Chinese word corresponding to each such v_i is found and taken as a feature word. The remaining words have larger difference values than the first 30, indicating that v_i changed substantially under reconstruction, so the corresponding words are treated as non-feature words. If fewer than 30 visible units in a comment have reconstruction values greater than 0.01, the words corresponding to all of those visible nodes are retained as feature words. The retained feature words jointly form the vector space of each comment;
5) Input the new comment vectors produced by step 4) into a support vector machine for classification, obtaining the emotion class of each comment vector; the emotion class is positive or negative.
2. The Chinese text sentiment analysis method based on computer information processing technology according to claim 1, characterized in that in step 1), the comment vector is produced as follows: using the bag-of-words model from natural language processing, if a word occurs in a comment, the corresponding weight in the comment vector is set to 1, otherwise 0; the length of the comment vector equals the size of the dictionary of extracted words.
3. The Chinese text sentiment analysis method based on computer information processing technology according to claim 2, characterized in that the number of visible-layer nodes is 476 and the number of hidden-layer nodes is 50.
CN201510452024.5A 2015-07-29 2015-07-29 Chinese text sentiment analysis method based on computer information processing technology Expired - Fee Related CN104965822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510452024.5A CN104965822B (en) 2015-07-29 2015-07-29 Chinese text sentiment analysis method based on computer information processing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510452024.5A CN104965822B (en) 2015-07-29 2015-07-29 Chinese text sentiment analysis method based on computer information processing technology

Publications (2)

Publication Number Publication Date
CN104965822A true CN104965822A (en) 2015-10-07
CN104965822B CN104965822B (en) 2017-08-25

Family

ID=54219858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510452024.5A Expired - Fee Related CN104965822B (en) 2015-07-29 2015-07-29 Chinese text sentiment analysis method based on computer information processing technology

Country Status (1)

Country Link
CN (1) CN104965822B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589941A (en) * 2015-12-15 2016-05-18 北京百分点信息科技有限公司 Emotional information detection method and apparatus for web text
CN105930368A (en) * 2016-04-13 2016-09-07 深圳大学 Emotion classification method and system
CN106126502A (en) * 2016-07-07 2016-11-16 四川长虹电器股份有限公司 A kind of emotional semantic classification system and method based on support vector machine
CN106446117A (en) * 2016-09-18 2017-02-22 西安电子科技大学 Text analysis method based on poisson-gamma belief network
CN106502394A (en) * 2016-10-18 2017-03-15 哈尔滨工业大学深圳研究生院 Term vector computational methods and device based on EEG signals
CN106547740A (en) * 2016-11-24 2017-03-29 四川无声信息技术有限公司 Text message processing method and device
CN106844632A (en) * 2017-01-20 2017-06-13 清华大学 Based on the product review sensibility classification method and device that improve SVMs
CN106919673A (en) * 2017-02-21 2017-07-04 浙江工商大学 Text mood analysis system based on deep learning
CN106933847A (en) * 2015-12-30 2017-07-07 北京神州泰岳软件股份有限公司 A kind of method and apparatus for setting up data classification model
CN107193801A (en) * 2017-05-21 2017-09-22 北京工业大学 A kind of short text characteristic optimization and sentiment analysis method based on depth belief network
CN107608956A (en) * 2017-09-05 2018-01-19 广东石油化工学院 A kind of reader's mood forecast of distribution algorithm based on CNN GRNN
WO2018028077A1 (en) * 2016-08-11 2018-02-15 中兴通讯股份有限公司 Deep learning based method and device for chinese semantics analysis
CN107918487A (en) * 2017-10-20 2018-04-17 南京邮电大学 A kind of method that Chinese emotion word is identified based on skin electrical signal
CN108021549A (en) * 2016-11-04 2018-05-11 华为技术有限公司 Sequence conversion method and device
WO2018188567A1 (en) * 2017-04-13 2018-10-18 腾讯科技(深圳)有限公司 Server information pushing method and device, terminal information sending method and device, information pushing system and storage medium
CN109464147A (en) * 2019-01-09 2019-03-15 浙江强脑科技有限公司 Breath signal detection method, device and computer readable storage medium
CN109783806A (en) * 2018-12-21 2019-05-21 众安信息技术服务有限公司 A kind of text matching technique using semantic analytic structure

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009017464A1 (en) * 2007-07-31 2009-02-05 Agency For Science, Technology And Research Relation extraction system
CN104331706A (en) * 2014-10-29 2015-02-04 西安电子科技大学 Polarization SAR image classification based on RBM and SVM

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009017464A1 (en) * 2007-07-31 2009-02-05 Agency For Science, Technology And Research Relation extraction system
CN104331706A (en) * 2014-10-29 2015-02-04 西安电子科技大学 Polarization SAR image classification based on RBM and SVM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RONAN COLLOBERT ET AL.: "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", 《PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING》 *
任远远 等 (Ren Yuanyuan et al.): "中文网络评论的产品特征提取及情感倾向判定" (Product feature extraction and sentiment orientation determination for Chinese online reviews), 《计算机系统应用》 (Computer Systems & Applications) *
张春霞 等 (Zhang Chunxia et al.): "受限波尔兹曼机" (Restricted Boltzmann machines), 《工程数学学报》 (Chinese Journal of Engineering Mathematics) *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589941A (en) * 2015-12-15 2016-05-18 北京百分点信息科技有限公司 Emotional information detection method and apparatus for web text
CN106933847B (en) * 2015-12-30 2019-12-27 北京神州泰岳软件股份有限公司 Method and device for establishing data classification model
CN106933847A (en) * 2015-12-30 2017-07-07 北京神州泰岳软件股份有限公司 A kind of method and apparatus for setting up data classification model
CN105930368A (en) * 2016-04-13 2016-09-07 深圳大学 Emotion classification method and system
CN105930368B (en) * 2016-04-13 2019-07-30 深圳大学 A kind of sensibility classification method and system
CN106126502A (en) * 2016-07-07 2016-11-16 四川长虹电器股份有限公司 A kind of emotional semantic classification system and method based on support vector machine
CN106126502B (en) * 2016-07-07 2018-10-30 四川长虹电器股份有限公司 A kind of emotional semantic classification system and method based on support vector machines
WO2018028077A1 (en) * 2016-08-11 2018-02-15 中兴通讯股份有限公司 Deep learning based method and device for chinese semantics analysis
CN106446117A (en) * 2016-09-18 2017-02-22 西安电子科技大学 Text analysis method based on poisson-gamma belief network
CN106502394A (en) * 2016-10-18 2017-03-15 哈尔滨工业大学深圳研究生院 Term vector computational methods and device based on EEG signals
CN106502394B (en) * 2016-10-18 2019-06-25 哈尔滨工业大学深圳研究生院 Term vector calculation method and device based on EEG signals
CN108021549B (en) * 2016-11-04 2019-08-13 华为技术有限公司 Sequence conversion method and device
CN108021549A (en) * 2016-11-04 2018-05-11 华为技术有限公司 Sequence conversion method and device
WO2018082569A1 (en) * 2016-11-04 2018-05-11 华为技术有限公司 Sequence conversion method and device
US11132516B2 (en) 2016-11-04 2021-09-28 Huawei Technologies Co., Ltd. Sequence translation probability adjustment
CN106547740A (en) * 2016-11-24 2017-03-29 四川无声信息技术有限公司 Text information processing method and device
CN106844632A (en) * 2017-01-20 2017-06-13 清华大学 Product review sentiment classification method and device based on improved support vector machine
CN106844632B (en) * 2017-01-20 2020-02-21 清华大学 Product review sentiment classification method and device based on improved support vector machine
CN106919673A (en) * 2017-02-21 2017-07-04 浙江工商大学 Text sentiment analysis system based on deep learning
CN106919673B (en) * 2017-02-21 2019-08-20 浙江工商大学 Text sentiment analysis system based on deep learning
US10924572B2 (en) 2017-04-13 2021-02-16 Tencent Technology (Shenzhen) Company Limited Information push method and apparatus, information sending method and apparatus, system, and storage medium
WO2018188567A1 (en) * 2017-04-13 2018-10-18 腾讯科技(深圳)有限公司 Server information pushing method and device, terminal information sending method and device, information pushing system and storage medium
CN107193801A (en) * 2017-05-21 2017-09-22 北京工业大学 Short text feature optimization and sentiment analysis method based on deep belief network
CN107193801B (en) * 2017-05-21 2020-07-03 北京工业大学 Short text feature optimization and emotion analysis method based on deep belief network
CN107608956A (en) * 2017-09-05 2018-01-19 广东石油化工学院 Reader emotion distribution prediction algorithm based on CNN-GRNN
CN107608956B (en) * 2017-09-05 2021-02-19 广东石油化工学院 Reader emotion distribution prediction algorithm based on CNN-GRNN
CN107918487A (en) * 2017-10-20 2018-04-17 南京邮电大学 Method for identifying Chinese emotion words based on skin electrical signals
CN109783806A (en) * 2018-12-21 2019-05-21 众安信息技术服务有限公司 Text matching method using semantic parsing structure
CN109783806B (en) * 2018-12-21 2023-05-02 众安信息技术服务有限公司 Text matching method utilizing semantic parsing structure
CN109464147A (en) * 2019-01-09 2019-03-15 浙江强脑科技有限公司 Breathing signal detection method, device and computer-readable storage medium

Also Published As

Publication number Publication date
CN104965822B (en) 2017-08-25

Similar Documents

Publication Publication Date Title
CN104965822A (en) Emotion analysis method for Chinese texts based on computer information processing technology
CN109977413A (en) Sentiment analysis method based on improved CNN-LDA
CN109189925A (en) Word vector model based on mutual information and text classification method based on CNN
Chang et al. Research on detection methods based on Doc2vec abnormal comments
CN112989802B (en) Bullet screen keyword extraction method, bullet screen keyword extraction device, bullet screen keyword extraction equipment and bullet screen keyword extraction medium
Zhao et al. ZYJ123@ DravidianLangTech-EACL2021: Offensive language identification based on XLM-RoBERTa with DPCNN
Kandhro et al. Sentiment analysis of students’ comment using long-short term model
Chaturvedi et al. Bayesian deep convolution belief networks for subjectivity detection
Song et al. An exploration-based approach to computationally supported design-by-analogy using D3
Tran et al. Effective opinion words extraction for food reviews classification
CN114265936A (en) Text mining method for science and technology projects
Chaturvedi et al. Multilingual subjectivity detection using deep multiple kernel learning
Chen et al. Using deep belief nets for Chinese named entity categorization
Preetham et al. Comparative Analysis of Research Papers Categorization using LDA and NMF Approaches
Huspi et al. Sentiment Analysis on Roman Urdu Students’ Feedback Using Enhanced Word Embedding Technique
CN116108840A (en) Fine-grained text sentiment analysis method, system, medium and computing device
Purpura et al. Supervised lexicon extraction for emotion classification
Munir et al. A comparison of topic modelling approaches for urdu text
Jana et al. Network embeddings from distributional thesauri for improving static word representations
Rawat et al. A Systematic Review of Question Classification Techniques Based on Bloom's Taxonomy
Robinson Disaster tweet classification using parts-of-speech tags: a domain adaptation approach
CN111859910A (en) Word feature representation method fusing position information for semantic role recognition
Farhadloo Statistical Methods for Aspect Level Sentiment Analysis
Yelisetti et al. Aspect-based text classification for sentimental analysis using attention mechanism with RU-BiLSTM
Javeed An LSTM model for extracting hierarchical relations between words for better topic modeling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant (granted publication date: 20170825)
CF01 Termination of patent right due to non-payment of annual fee (termination date: 20210729)