CN108733652B - Test method for film evaluation emotion tendency analysis based on machine learning - Google Patents


Info

Publication number
CN108733652B
CN108733652B (application CN201810480801.0A)
Authority
CN
China
Prior art keywords
feature
probability
feature vector
word
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810480801.0A
Other languages
Chinese (zh)
Other versions
CN108733652A (en)
Inventor
赵丹丹
高宠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Minzu University filed Critical Dalian Minzu University
Priority to CN201810480801.0A priority Critical patent/CN108733652B/en
Publication of CN108733652A publication Critical patent/CN108733652A/en
Application granted granted Critical
Publication of CN108733652B publication Critical patent/CN108733652B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates

Abstract

A test method for machine-learning-based sentiment tendency analysis of movie reviews, belonging to the field of natural language processing, which aims to solve the problem of testing a classifier trained on a set of feature-represented movie reviews.

Description

Test method for film evaluation emotion tendentiousness analysis based on machine learning
Technical Field
The invention belongs to the field of natural language processing and relates to a test method for machine-learning-based sentiment tendency analysis of movie reviews.
Background
More and more users publish their opinions, attitudes, and emotions on forums, shopping websites, review websites, microblogs, and the like — for example, reviews of a movie or evaluations of a product. If the emotional content of these comments can be analyzed, they provide a large amount of useful information. By analyzing subjective, emotionally colored text, a user's attitude can be identified as positive, negative, or neutral. This has many real-world applications: sentiment analysis of microblog users can help forecast stock trends, movie box office, or election results; it can reveal users' preferences regarding companies and products; the analysis results can drive improvements to products and services; and it can expose the strengths and weaknesses of competitors.
In the prior art, sentiment analysis of text is mainly dictionary-based Chinese sentiment analysis; the entries in the sentiment dictionary may be single characters or words. According to the polarity of its entries, the sentiment dictionary is divided into a commendatory (positive) dictionary and a derogatory (negative) dictionary. The sentiment score of a whole sentence is computed from the polarity and sentiment intensity of the dictionary words it contains, and the sentiment tendency of the sentence is finally obtained from this score.
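The prior-art, dictionary-based scoring described above can be sketched in a few lines. This is a toy illustration only — the lexicon words and intensity scores below are invented for the example and are not from any real sentiment dictionary.

```python
# Toy sketch of prior-art dictionary-based sentiment scoring.
# The lexicon entries and intensity scores are invented for illustration.
LEXICON = {
    "excellent": 2.0, "good": 1.0, "moving": 1.5,   # commendatory entries
    "boring": -1.5, "bad": -1.0, "dull": -1.0,      # derogatory entries
}

def lexicon_score(tokens):
    """Sum polarity * intensity over every lexicon word in the sentence."""
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def polarity(tokens):
    """Map the sentence score to a sentiment tendency."""
    s = lexicon_score(tokens)
    return "positive" if s > 0 else ("negative" if s < 0 else "neutral")

print(polarity(["a", "moving", "and", "excellent", "film"]))  # positive
print(polarity(["boring", "and", "dull"]))                    # negative
```

As the next section notes, such a fixed lexicon cannot capture words whose polarity shifts with context, which motivates the machine-learning approach of the invention.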
Disclosure of Invention
To solve the problem of testing a classifier trained on a set of feature-represented movie reviews, the invention provides the following scheme: a test method for machine-learning-based sentiment tendency analysis of movie reviews, comprising the following steps:
Step 1: download movie reviews.
Step 2: select feature words — from the downloaded reviews, extract a set of meaningful sentiment words as the feature word set; each word in this set is a feature word.
Step 3: for the downloaded reviews, use the feature word set to represent each review as a feature vector, where the set of positive feature vectors is the positive feature text and the set of negative feature vectors is the negative feature text; equal numbers of positive and negative feature vectors are selected to form the feature vector text.
Step 4: randomly split the feature vector text into a training set and a test set. Each feature vector in the training set receives a positive or negative label and is used to train a classifier built on the naive Bayes idea; the feature vectors in the test set carry no label and are used to test that classifier.
Furthermore, each feature vector in the test set is classified by the classifier trained on the training set: the probabilities of the different sentiment tendencies of the feature vector under test are computed, and it is assigned the tendency with the higher probability. The sentiment tendency of the review that the feature vector represents is judged manually, the two results are compared, and the sentiment analysis accuracy of the current classifier on the test-set feature vectors is thereby assessed.
Further, the method of representing each movie review as a feature vector using the feature word set is as follows: for each feature word in the feature word set, judge whether it appears in the review — mark 1 if it does and 0 otherwise — forming an array for the review; each review is thus converted into a feature representation that serves as its feature vector.
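A minimal sketch of this feature-vector construction; the feature words and the tokenized review are invented English stand-ins for the Chinese data.

```python
# Turn one segmented review into a 0/1 feature vector against the
# feature word set, as described above. All data here is invented.
def to_feature_vector(review_tokens, feature_words):
    """Mark 1 for each feature word present in the review, 0 otherwise."""
    present = set(review_tokens)
    return [1 if w in present else 0 for w in feature_words]

feature_words = ["milestone", "tight", "clear", "boring"]
review = ["a", "milestone", "with", "tight", "clear", "pacing"]
print(to_feature_vector(review, feature_words))  # [1, 1, 1, 0]
```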
Further, each feature vector of the test set is calculated by the mathematical model of the classifier to judge its sentiment class, computed as follows:
$$p(C_i \mid w_1, w_2, \ldots, w_n) = \log p(C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(w_j \mid C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(C_i \mid w_j)$$

$$p(C_i) = \frac{\text{number of class-}i\text{ feature vectors in the training set}}{\text{total number of feature vectors in the training set}}$$

$$p(w_j \mid C_i) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of class-}i\text{ feature vectors}}$$

$$p(C_i \mid w_j) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of feature vectors containing } w_j}$$

C_i denotes the feature-vector text of class i, with i = 0 the negative class (negative sentiment tendency of the review under test) and i = 1 the positive class; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words; data is a feature vector from the test set, data_j ∈ {0, 1} indicating whether feature word w_j appears in it.
Beneficial effects: the test method of the invention represents movie reviews by features and randomly splits the feature vector text into a training set — whose feature vectors each receive a positive or negative label and are used to train a classifier built on the naive Bayes idea — and a test set, whose feature vectors carry no label and are used to test that classifier. The test method can adaptively test a classifier trained on the feature representation.
Drawings
Fig. 1 is a flowchart of the machine-learning-based movie review sentiment analysis method of embodiment 1;
FIG. 2 shows the result of sentence-stem extraction with jieba;
FIG. 3 compares the classification results of the invention with Bernoulli naive Bayes classification: the solid line is the classification result of the invention and the dotted line is the Bernoulli naive Bayes result; the y-axis is accuracy and the x-axis indexes different test samples;
FIG. 4 is a schematic diagram of the classifier construction.
Detailed Description
Example 1:
This embodiment provides a sentiment tendency discrimination method for sentiment analysis of Chinese movie reviews, mainly comprising a training method, a test method, and an analysis method.
The technical scheme disclosed by the embodiment is as follows:
A machine-learning-based movie review sentiment tendency analysis method comprises the following steps:
Step 1: write a crawler to download Douban movie reviews; the downloaded reviews form a corpus.
Step (a): obtain the Douban URL of each movie to be downloaded.
Step (b): download the reviews, movie name, reviewer, rating, review time, and other information for each movie, and store them in csv format.
Step 2: extracting features to form a feature set of the corpus:
From the downloaded reviews (i.e., the reviews in the corpus), meaningful sentiment words are extracted as feature words. If a single method were used in this step, many valuable feature words would be missed; therefore, in one embodiment, feature words are extracted by combining the following two approaches, which improves the extraction rate of valuable feature words.
Step (a): segment all reviews in the corpus with jieba and extract adjectives, idioms, distinguishing words, and verbs as the feature set.
Step (b): extract the sentence stems of all reviews in the corpus with jieba, and add the stem words to the feature set.
Step (c): stop words may be present in the feature set, so they are removed using a stop-word dictionary.
Step 3: process the reviews to form the feature-represented text:
Step (a): segment each review in the corpus with jieba, and use the feature set obtained in step 2 to judge whether each feature word of the feature set appears in the review — marking 1 if it does and 0 otherwise — forming an array for the review; each review is thus converted into a feature representation. Note that in this invention, the feature vector of a review refers to this feature-represented text of the review.
Step (b): all comments in the corpus are represented as feature-represented texts by the above step; together these texts form the feature vector text.
Step (c): feature-represented texts containing no features at all are removed.
Step (d): to reduce the influence of an imbalance between the numbers of positive and negative reviews on the analysis result, in one scheme equal numbers of positive and negative feature-represented texts are extracted to form the feature vector text used in this embodiment. The feature vector text is randomly divided into a training set, and a positive or negative label is added to each feature-represented text in the training set: 1 (true) denotes positive and 0 (false) denotes negative.
It should be noted that, because each review is short, this embodiment adopts the idea of the Bernoulli naive Bayes algorithm and counts whether a word appears, rather than how many times it appears.
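The distinction made here — counting whether a word appears rather than how many times — can be illustrated as follows (the token list is invented):

```python
from collections import Counter

# An invented token list standing in for one segmented movie review.
tokens = ["good", "good", "plot", "good", "acting"]

# Term-frequency view (a multinomial model would use these counts):
freq = Counter(tokens)

# Presence view (Bernoulli style, as adopted here): 1 if the word occurs at all.
presence = {w: 1 for w in set(tokens)}

print(freq["good"], presence["good"])  # 3 1
```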
Step 4: construct the classifier using the naive Bayes idea, improved to better suit movie review text classification.
The method of constructing and improving the classifier based on the naive Bayes idea is as follows:
Step (a): analyze the naive Bayes classifier. Naive Bayes classification is defined as follows:
1. Let X = {a_1, a_2, …, a_m} be an item to be classified, where each a_k is a feature attribute of X.
2. Let the category set be C = {y_1, y_2, …, y_n}.
3. Compute p(y_1|X), p(y_2|X), …, p(y_n|X).
4. If p(y_k|X) = max{p(y_1|X), p(y_2|X), …, p(y_n|X)}, then X ∈ y_k.
Bayesian text classification is based on this formula, namely:

$$p(C_i \mid w_1, w_2, \ldots, w_n) = \frac{p(w_1, w_2, \ldots, w_n \mid C_i)\, p(C_i)}{p(w_1, w_2, \ldots, w_n)}$$

where p(C_i) is the probability of the i-th text class occurring, p(w_1, w_2, …, w_n | C_i) is the probability of the feature vector (w_1, w_2, …, w_n) occurring given text class C_i, and p(w_1, w_2, …, w_n) is the probability of the feature vector occurring. In this embodiment, the feature words are assumed to occur independently of one another in the text — i.e., there is no correlation between words — so the joint probability can be written as a product:

$$p(C_i \mid w_1, w_2, \ldots, w_n) = \frac{p(w_1 \mid C_i)\, p(w_2 \mid C_i) \cdots p(w_n \mid C_i)\, p(C_i)}{p(w_1)\, p(w_2) \cdots p(w_n)}$$

For a fixed training set, the denominator p(w_1) p(w_2) ⋯ p(w_n) is a constant, so its calculation can be omitted during classification, giving:

$$p(C_i \mid w_1, w_2, \ldots, w_n) = p(w_1 \mid C_i)\, p(w_2 \mid C_i) \cdots p(w_n \mid C_i)\, p(C_i)$$

Step (c): construct and improve the classifier using the naive Bayes idea.
Converting the naive Bayes idea into a calculation formula, p(C_i) and p(w_j|C_i) are obtained from a large number of training texts. To prevent numerical underflow caused by multiplying many small factors, logarithms are used: log(p(C_i)) and log(p(w_j|C_i)) are computed, and the test data is substituted in to obtain its score in each category, namely:

$$\log p(C_i \mid w_1, w_2, \ldots, w_n) = \log p(C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(w_j \mid C_i)$$

where data_j ∈ {0, 1} indicates whether feature word w_j appears in the test data.
By analyzing the reviews it can be concluded that, for a given word, the probability of a positive word appearing in positive reviews is much higher than its probability of appearing in negative reviews; conversely, the probability of a negative word appearing in negative reviews is much higher than in positive reviews. That is, the probability of a word appearing in a certain class of text is class-specific, so the probability of the word appearing can be used to influence the final value of p(C_i | w_1, w_2, …, w_n).
Namely:

$$\log p(C_i \mid w_1, w_2, \ldots, w_n) = \log p(C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(w_j \mid C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(C_i \mid w_j)$$

where data_j ∈ {0, 1} indicates whether feature word w_j appears in the item being classified.
Finally, p(C_i | w_1, w_2, …, w_n) is computed for each class, and the class with the maximum value is taken.
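A minimal sketch of this improved scoring rule, assuming the parameter families p(C_i), p(w_j|C_i), and p(C_i|w_j) have already been estimated from a training set. The numeric values below are invented, and the function names are this sketch's own.

```python
import math

def improved_nb_score(x, prior, p_w_given_c, p_c_given_w):
    """Score one class as log p(C) plus, for every feature word present
    in x, log p(w_j|C) + log p(C|w_j). The parameter lists are assumed
    to be pre-estimated and smoothed away from zero."""
    s = math.log(prior)
    for j, present in enumerate(x):
        if present:
            s += math.log(p_w_given_c[j]) + math.log(p_c_given_w[j])
    return s

def classify(x, params):
    """params[i] = (prior_i, p(w|C_i) list, p(C_i|w) list); return argmax class."""
    scores = {i: improved_nb_score(x, *params[i]) for i in (0, 1)}
    return max(scores, key=scores.get)

# Toy parameters for two feature words (invented numbers):
params = {
    0: (0.5, [0.1, 0.8], [0.2, 0.9]),  # negative class
    1: (0.5, [0.8, 0.1], [0.8, 0.1]),  # positive class
}
print(classify([1, 0], params))  # 1 (positive wins for a vector containing word 0)
```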
Step (d): use the training set above to obtain the values of the parameters p(C_i), p(w_j|C_i), and p(C_i|w_j):
Calculate p(C_i), which comprises the negative-class probability and the positive-class probability:
$$p(C_i) = \frac{\text{number of class-}i\text{ feature vectors in the training set}}{\text{total number of feature vectors in the training set}}$$

Negative-class probability:

$$p(C_0) = \frac{N_0}{N_0 + N_1}$$

Positive-class probability:

$$p(C_1) = \frac{N_1}{N_0 + N_1}$$

where N_0 and N_1 are the numbers of negative and positive feature vectors in the training set, and C_i denotes the feature-vector text of class i, i = 0, 1.
Calculate, class by class, the probability that each feature word in the feature word set appears in the class-i feature-vector texts of the training set, i.e. p(w_j|C_i), which comprises the probability of the feature words appearing in the negative feature-vector texts and in the positive feature-vector texts of the training set:

$$p(w_j \mid C_i) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of class-}i\text{ feature vectors}}$$

Probabilities of the feature words appearing in the negative feature-vector texts of the training set:

$$p(w_j \mid C_0) = [\,p(w_1 \mid C_0),\; p(w_2 \mid C_0),\; \ldots,\; p(w_n \mid C_0)\,]$$

Probabilities of the feature words appearing in the positive feature-vector texts of the training set:

$$p(w_j \mid C_1) = [\,p(w_1 \mid C_1),\; p(w_2 \mid C_1),\; \ldots,\; p(w_n \mid C_1)\,]$$

C_i denotes the feature-vector text of class i, i = 0, 1; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words.
Calculate the probability that each feature word in the feature word set appears in each class of the training set, i.e. p(C_i|w_j), which comprises the probability of a feature word appearing in the negative class and in the positive class of the training set:

$$p(C_i \mid w_j) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of feature vectors containing } w_j}$$

Probability of the feature words appearing in the negative class of the training set:

$$p(C_0 \mid w_j) = [\,p(C_0 \mid w_1),\; p(C_0 \mid w_2),\; \ldots,\; p(C_0 \mid w_n)\,]$$

Probability of the feature words appearing in the positive class of the training set:

$$p(C_1 \mid w_j) = [\,p(C_1 \mid w_1),\; p(C_1 \mid w_2),\; \ldots,\; p(C_1 \mid w_n)\,]$$

C_i denotes the feature-vector text of class i, i = 0, 1; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words.
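The three parameter families of step (d) can be estimated from labeled 0/1 feature vectors as sketched below. The toy training data is invented, and the Laplace smoothing is an assumption added here — the patent does not specify how zero counts are handled before logarithms are taken.

```python
# Toy estimation of p(C_i), p(w_j|C_i), p(C_i|w_j) from labeled 0/1
# feature vectors. The training data is invented; Laplace smoothing
# (`smooth`) is an added assumption so no probability is exactly 0.
def estimate(train, labels, n_features, smooth=1.0):
    n_class = [labels.count(0), labels.count(1)]
    prior = [n_class[0] / len(train), n_class[1] / len(train)]
    # count[i][j]: number of class-i vectors in which feature j appears
    count = [[0] * n_features for _ in range(2)]
    for vec, y in zip(train, labels):
        for j, present in enumerate(vec):
            count[y][j] += present
    p_w_given_c = [[(count[i][j] + smooth) / (n_class[i] + 2 * smooth)
                    for j in range(n_features)] for i in range(2)]
    total_with_w = [count[0][j] + count[1][j] for j in range(n_features)]
    p_c_given_w = [[(count[i][j] + smooth) / (total_with_w[j] + 2 * smooth)
                    for j in range(n_features)] for i in range(2)]
    return prior, p_w_given_c, p_c_given_w

train = [[1, 0], [1, 1], [0, 1], [0, 1]]  # toy feature vectors
labels = [0, 0, 1, 1]                     # 0 = negative, 1 = positive
prior, pwc, pcw = estimate(train, labels, 2)
print(prior)  # [0.5, 0.5]
```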
The above is a detailed disclosure of the training steps.
Step 5: randomly divide the feature vector text into a test set. In the test set, no positive or negative label is added to the feature-represented texts; the test set is used to test the trained model and to tune its parameters:
Step (a): train on the training set to obtain the classification model, then test on the test-set data, classifying the unlabeled test-set texts.
Step (b): the formula contains three terms — log(p(C_i)), Σ_j data_j·log(p(w_j|C_i)), and Σ_j data_j·log(p(C_i|w_j)), where data_j ∈ {0, 1} indicates whether feature word w_j appears in the item being classified. Parameters are added to any two of these three terms to balance their influence on the final result (note: the parameters lie between 0 and 1). The comparison test results are analyzed and the parameters adjusted.
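A sketch of this parameterized scoring; the text does not say which two of the three terms carry the parameters, so applying alpha and beta to the second and third terms is an assumption of this sketch, and all numeric values are invented.

```python
import math

def weighted_score(x, prior, p_w_given_c, p_c_given_w, alpha=1.0, beta=1.0):
    """One class's score with tuning parameters alpha, beta in (0, 1]
    weighting two of the three terms; putting them on the second and
    third terms is this sketch's assumption."""
    term1 = math.log(prior)
    term2 = sum(math.log(p_w_given_c[j]) for j, b in enumerate(x) if b)
    term3 = sum(math.log(p_c_given_w[j]) for j, b in enumerate(x) if b)
    return term1 + alpha * term2 + beta * term3

# Invented toy values: with alpha = beta = 1 this reduces to the unweighted score.
s = weighted_score([1], 0.5, [0.5], [0.5])
print(round(s, 4))  # -2.0794, i.e. 3 * log(0.5)
```

In practice the parameters would be tuned by repeated testing on the test set, as step (c) below describes.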
Step (c): modify the parameters and test repeatedly to find the optimal parameters, then compare the result against a plain naive Bayes classifier.
The above is a detailed disclosure of the testing procedure.
In machine-learning-based text tendency analysis, high-frequency words are usually taken from a large number of review texts as features, the review texts are converted into feature representations, and sentiment classification is performed with learning algorithms such as naive Bayes or support vector machines.
Because natural language is complex — a word can have different sentiment polarity in different sentences, and no sentiment dictionary can cover all the characteristics of sentiment words — this method improves machine-learning-based movie review tendency analysis. If the features are simply the highest-frequency words, a classifier trained on insufficient data performs quite poorly. Here, features are extracted using the part of speech of words, sentence stems, and a small amount of manual intervention; all review texts are then converted into the feature representation using the obtained features, and a classifier is built on the naive Bayes idea. The method makes low demands on computer performance, its selected features are not disturbed by word frequency, and it is better suited to movie review classification, with high speed and high accuracy.
Example 2:
As a supplement to the technical solution of embodiment 1, fig. 1 shows the flow of the analysis method of the invention. In this embodiment, jieba is used to segment a large number of texts and select words of specific parts of speech, and also to extract sentence-stem words; the two are combined, and the downloaded reviews are divided into positive and negative categories according to their ratings. The review texts are converted into the feature representation, a classifier is constructed with the classification algorithm, and the necessary post-processing is performed. The invention is described in detail below with reference to fig. 1, taking one review from the data set as an example.
Step 1, download reviews: write a crawler to download Douban movie reviews. One of the downloaded reviews serves as the example sentence used in the steps below.
Step 2, extract features from the reviews:
2.1 Use jieba to segment all reviews and extract adjectives, idioms, distinguishing words, and verbs as the feature set. [The part-of-speech extraction results for the example review are omitted here; eliminated words are not listed.]
2.2 Extract the sentence stems of all reviews using jieba, and add the stem words to the feature set. [The stem-extraction results for the example review are omitted here.]
2.3 Stop words may be present in the feature set; they are removed using a stop-word dictionary. [The results after stop-word removal are omitted here.]
Step 3: process the reviews, converting each review into the feature representation. Each review is segmented with jieba and represented using the feature word set.
Example review: "A milestone for domestic genre films; the pacing across the full two hours is tight and clear, and it is genuinely rousing and exciting."
Suppose the feature word set is [very good, like, …, domestic, milestone, hour, rhythm, whole course, clear, hot-blooded, exciting, …, resonance, boring].
The feature representation of the example review is then: [0, 0, …, 1, 1, 1, 1, 1, 1, 1, 1, …, 0, 0].
As in embodiment 1, to reduce the influence of an imbalance between the numbers of positive and negative reviews, equal numbers of positive and negative feature-represented texts are extracted to form the feature vector text used in this embodiment, which is randomly divided into a training set; each feature-represented text in the training set receives a label, 1 (true) for positive and 0 (false) for negative.
If the example review is randomly assigned to the training set, an identifier is inserted at the first position of its feature representation — 0 for negative, 1 for positive. Its feature-represented text is then: [1, 0, 0, …, 1, 1, 1, 1, 1, 1, 1, 1, …, 0, 0].
Step 4: implement the algorithm. The following three parameter families are obtained from the training set.

Calculate p(C_i), which comprises the negative-class probability and the positive-class probability:

$$p(C_i) = \frac{\text{number of class-}i\text{ feature vectors in the training set}}{\text{total number of feature vectors in the training set}}$$

Negative-class probability:

$$p(C_0) = \frac{N_0}{N_0 + N_1}$$

Positive-class probability:

$$p(C_1) = \frac{N_1}{N_0 + N_1}$$

where N_0 and N_1 are the numbers of negative and positive feature vectors in the training set, and C_i denotes the feature-vector text of class i, i = 0, 1.
Calculate, class by class, the probability that each feature word in the feature word set appears in the class-i feature-vector texts of the training set, i.e. p(w_j|C_i), which comprises the probability of the feature words appearing in the negative feature-vector texts and in the positive feature-vector texts of the training set:

$$p(w_j \mid C_i) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of class-}i\text{ feature vectors}}$$

Probabilities of the feature words appearing in the negative feature-vector texts of the training set:

$$p(w_j \mid C_0) = [\,p(w_1 \mid C_0),\; p(w_2 \mid C_0),\; \ldots,\; p(w_n \mid C_0)\,]$$

Probabilities of the feature words appearing in the positive feature-vector texts of the training set:

$$p(w_j \mid C_1) = [\,p(w_1 \mid C_1),\; p(w_2 \mid C_1),\; \ldots,\; p(w_n \mid C_1)\,]$$

C_i denotes the feature-vector text of class i, i = 0, 1; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words.
Calculate the probability that each feature word in the feature word set appears in each class of the training set, i.e. p(C_i|w_j), which comprises the probability of a feature word appearing in the negative class and in the positive class of the training set:

$$p(C_i \mid w_j) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of feature vectors containing } w_j}$$

Probability of the feature words appearing in the negative class of the training set:

$$p(C_0 \mid w_j) = [\,p(C_0 \mid w_1),\; p(C_0 \mid w_2),\; \ldots,\; p(C_0 \mid w_n)\,]$$

Probability of the feature words appearing in the positive class of the training set:

$$p(C_1 \mid w_j) = [\,p(C_1 \mid w_1),\; p(C_1 \mid w_2),\; \ldots,\; p(C_1 \mid w_n)\,]$$

C_i denotes the feature-vector text of class i, i = 0, 1; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words.
Step 5: test the trained model with the test set. With the obtained classification model, a test set is randomly generated from the feature vector text; the unlabeled feature-represented review texts of the test set are classified, and the test results are compared and analyzed to judge the accuracy of the current trained model.
5.1. Obtain the feature-representation array (the feature-represented text) of the review to be classified.
5.2. Compute, for each feature word of the review, the probability of its occurring in each of the two classes of documents. To keep the result from becoming too small or too large, the logarithm of the p(w_j|C_i) array is taken, multiplied elementwise by the review's feature-representation array, and summed to obtain a tendency score (reflecting the probability). Let the resulting negative score be f_0 and the positive score f_1.
5.3. Compute the probability that each feature word appears in each of the two classes. Likewise, the logarithm of the p(C_i|w_j) array is taken, multiplied elementwise by the review's feature-representation array, and summed to obtain a tendency score. Let the resulting negative score be g_0 and the positive score g_1.
5.4. Score merging.
The final negative score of the review is:

$$\mathrm{score}_0 = \log p(C_0) + f_0 + g_0$$

The final positive score of the review is:

$$\mathrm{score}_1 = \log p(C_1) + f_1 + g_1$$
For the example review, the probability results are as follows:

Positive probability: -38.352214246565453; negative probability: -41.408669267263221; predicted result: positive; correct: yes.

For such scores, the class in which the value is greater is the more likely class; for example, for a pair of scores where -28.5338768667 is less than -23.4792674766, the review more likely belongs to the class that scored -23.4792674766 (here, the negative class).
The above description is only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto; any person skilled in the art may substitute or modify the technical solution and the inventive concept within the technical scope disclosed by the invention.

Claims (1)

1. A test method for machine-learning-based sentiment tendency analysis of movie reviews, characterized in that:
step 1: download movie reviews;
step 2: select feature words — from the downloaded reviews, extract a set of meaningful sentiment words as the feature word set, each word in the set being a feature word;
step 3: for the downloaded reviews, use the feature word set to represent each review as a feature vector, wherein the set of positive feature vectors is the positive feature text, the set of negative feature vectors is the negative feature text, and equal numbers of positive and negative feature vectors are selected to form the feature vector text;
step 4: randomly divide the feature vector text into a training set, add a positive or negative label to each feature vector of the training set, and train a classifier constructed on the naive Bayes idea; randomly divide the feature vector text into a test set, wherein no positive or negative label is added to the feature vectors of the test set, the test set being used to test the classifier constructed on the naive Bayes idea;
classify each feature vector in the test set with the classifier trained on the training set, compute the probabilities of the different sentiment tendencies of the feature vector under test, and assign it the tendency with the higher probability; manually judge the sentiment tendency of the review reflected by the feature vector, compare the two results, and judge the sentiment tendency analysis accuracy of the classifier on the feature vectors under test;
the method of representing each review as a feature vector using the feature word set is: judge whether each feature word of the feature word set appears in the review, marking 1 if it does and 0 otherwise, forming an array for the review; each review is thereby converted into a feature representation serving as its feature vector;
each feature vector of the test set is calculated by the mathematical model of the classifier to judge its sentiment class, computed as follows:
$$p(C_i \mid w_1, w_2, \ldots, w_n) = \log p(C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(w_j \mid C_i) + \sum_{j=1}^{n} \mathrm{data}_j \log p(C_i \mid w_j)$$

the sentiment class is judged by computing the magnitude of p(C_i | w_1, w_2, …, w_n) for the different classes and taking the maximum, where data is the test data and data_j ∈ {0, 1} indicates whether feature word w_j appears in it;
p(C_i) comprises the negative-class probability and the positive-class probability:

$$p(C_i) = \frac{\text{number of class-}i\text{ feature vectors in the training set}}{\text{total number of feature vectors in the training set}}$$

negative-class probability:

$$p(C_0) = \frac{N_0}{N_0 + N_1}$$

positive-class probability:

$$p(C_1) = \frac{N_1}{N_0 + N_1}$$

where N_0 and N_1 are the numbers of negative and positive feature vectors in the training set; C_i denotes the feature-vector text of class i, i = 0, 1;
$$p(w_j \mid C_i) = \frac{\text{number of class-}i\text{ feature vectors containing } w_j}{\text{number of class-}i\text{ feature vectors}}$$

p(w_j|C_i) is calculated class by class as the probability of the feature words of the feature word set appearing in the class-i feature-vector texts of the training set, comprising the probability of the feature words appearing in the negative feature-vector texts and in the positive feature-vector texts of the training set;

probability of the feature words appearing in the negative feature-vector texts of the training set:

$$p(w_j \mid C_0) = [\,p(w_1 \mid C_0),\; p(w_2 \mid C_0),\; \ldots,\; p(w_n \mid C_0)\,]$$

probability of the feature words appearing in the positive feature-vector texts of the training set:

$$p(w_j \mid C_1) = [\,p(w_1 \mid C_1),\; p(w_2 \mid C_1),\; \ldots,\; p(w_n \mid C_1)\,]$$

C_i denotes the feature-vector text of class i, i = 0, 1; w_j (j = 1, 2, …, n) denotes a feature word in the feature word set, n being the number of feature words;
Figure FDA0003685397680000021
p(C i |w j ) Representing the probability that the feature words in the feature word set can respectively appear in each class of vector texts of the training set, which comprises the probability that the feature words can appear in the passive class of the training set and the probability that the feature words can appear in the active class of the training set:
probability that a feature word can appear in a negative class of the training set:
p(C0|wj) = [p(C0|w0), p(C0|w1), p(C0|w2), ..., p(C0|wn)]
probability that a feature word appears in the positive class of the training set:

p(C1|wj) = [p(C1|w0), p(C1|w1), p(C1|w2), ..., p(C1|wn)]
Ci represents the feature vector text of a classification, i = 0, 1; wj represents a feature word in the feature word set, j = 1, 2, ..., n, where n is the number of feature words in the feature word set;
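The quantity p(Ci|wj) above can be estimated as the share of each feature word's occurrences that fall in class Ci. A hedged sketch with hypothetical names, smoothing over the two classes to avoid zero probabilities (an assumption, not specified in the claim text):

```python
def class_given_word(docs, labels, vocab, cls):
    """p(C_i | w_j): fraction of each feature word's occurrences that
    appear in the feature vector texts of class cls."""
    in_cls = {w: 0 for w in vocab}
    overall = {w: 0 for w in vocab}
    for doc, y in zip(docs, labels):
        for w in doc:
            if w in overall:
                overall[w] += 1
                if y == cls:
                    in_cls[w] += 1
    # add-one smoothing over the two classes (assumed)
    return {w: (in_cls[w] + 1) / (overall[w] + 2) for w in vocab}
```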
for the three terms in the formula, log(p(Ci)), Σj log(p(wj|Ci)) and Σj log(p(Ci|wj)), parameters are added to any two of the three terms to balance the influence of the three terms on the final result; the comparison test results are analyzed while the parameters take values between 0 and 1, and the parameters are adjusted;

the parameters are modified and the test is repeated to find the optimal parameters, and the result is compared with a naive Bayes classifier.
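Putting the pieces together, a hypothetical end-to-end sketch of the three-term score with weighting parameters a and b on two of the terms; the function and variable names and the tiny probability tables are assumptions for illustration, not the patent's implementation:

```python
import math

def weighted_score(words, prior, p_w_given_c, p_c_given_w, a=1.0, b=1.0):
    """Score = log p(Ci) + a * sum_j log p(wj|Ci) + b * sum_j log p(Ci|wj),
    summed over the feature words present in the test text."""
    s = math.log(prior)
    for w in words:
        if w in p_w_given_c:
            s += a * math.log(p_w_given_c[w])
            s += b * math.log(p_c_given_w[w])
    return s

def classify(words, models, a=1.0, b=1.0):
    """models maps class -> (prior, p(w|C) dict, p(C|w) dict);
    returns the class with the maximum weighted score."""
    return max(models, key=lambda c: weighted_score(words, *models[c], a=a, b=b))
```

Setting a = b = 1 gives the unweighted three-term score; sweeping a and b over (0, 1) and comparing accuracy against a plain naive Bayes baseline (the score without the p(Ci|wj) sum) mirrors the parameter search described above.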
CN201810480801.0A 2018-05-18 2018-05-18 Test method for film evaluation emotion tendency analysis based on machine learning Active CN108733652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810480801.0A CN108733652B (en) 2018-05-18 2018-05-18 Test method for film evaluation emotion tendency analysis based on machine learning


Publications (2)

Publication Number Publication Date
CN108733652A CN108733652A (en) 2018-11-02
CN108733652B true CN108733652B (en) 2022-08-09

Family

ID=63938765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810480801.0A Active CN108733652B (en) 2018-05-18 2018-05-18 Test method for film evaluation emotion tendency analysis based on machine learning

Country Status (1)

Country Link
CN (1) CN108733652B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697232B (en) * 2018-12-28 2020-12-11 四川新网银行股份有限公司 Chinese text emotion analysis method based on deep learning
CN110096618B (en) * 2019-05-10 2021-06-15 北京友普信息技术有限公司 Movie recommendation method based on dimension-based emotion analysis
CN111144103A (en) * 2019-12-18 2020-05-12 北京明略软件系统有限公司 Film review identification method and device
CN112949713B (en) * 2021-03-01 2023-11-21 武汉工程大学 Text emotion classification method based on complex network integrated learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955856A (en) * 2012-11-09 2013-03-06 北京航空航天大学 Chinese short text classification method based on characteristic extension
CN103116637A (en) * 2013-02-08 2013-05-22 无锡南理工科技发展有限公司 Text sentiment classification method facing Chinese Web comments
CN106776581A (en) * 2017-02-21 2017-05-31 浙江工商大学 Subjective texts sentiment analysis method based on deep learning
CN107025284A (en) * 2017-04-06 2017-08-08 中南大学 The recognition methods of network comment text emotion tendency and convolutional neural networks model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020103793A1 (en) * 2000-08-02 2002-08-01 Daphne Koller Method and apparatus for learning probabilistic relational models having attribute and link uncertainty and for performing selectivity estimation using probabilistic relational models
EP3213226A1 (en) * 2014-10-31 2017-09-06 Longsand Limited Focused sentiment classification
US20160189037A1 (en) * 2014-12-24 2016-06-30 Intel Corporation Hybrid technique for sentiment analysis
CN105912576B (en) * 2016-03-31 2020-06-09 北京外国语大学 Emotion classification method and system
CN107301200A (en) * 2017-05-23 2017-10-27 合肥智权信息科技有限公司 A kind of article appraisal procedure and system analyzed based on Sentiment orientation




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant