CN108509421A - Text sentiment classification method based on random walk and Rough Decision confidence level - Google Patents
- Publication number
- CN108509421A (application CN201810298350.9A)
- Authority
- CN
- China
- Prior art keywords
- text
- feature
- decision
- random walk
- confidence level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
- G06F40/247—Thesauruses; Synonyms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
Abstract
The present invention relates to a text sentiment classification method based on random walk and rough decision confidence, comprising the following steps: Step 1, preprocess the data and generate a preprocessed data set DOC; Step 2, perform feature selection on the data set according to a feature selection algorithm; Step 3, perform part-of-speech analysis on the feature selection result according to the random walk feature; Step 4, obtain the candidate text attributes and build a sentiment decision table; Step 5, discretize the table using rough set theory to form a discretized decision table; Step 6, apply the rough decision confidence to the discretized decision table to make the final sentiment category judgement. The invention uses the random walk feature and the rough decision confidence to solve the problem of discriminating sentiment-word polarity during text classification, so that the sentiment of a feature text is classified correctly.
Description
Technical field
The present invention relates to the field of intelligent information processing, in particular to a method for producing the polarity of natural-language sentiment words, and more particularly to a text sentiment classification method based on random walk and rough decision confidence.
Background technology
The random walk feature is an abstract feature built from the behavior of users browsing web pages. When it is used to construct a lexical network, each node is represented and its lexical polarity determined, with one of two labels: positive or negative. To apply the random walk feature to sentiment-word polarity discrimination, the key is to build a graph framework over the words: the data are mapped into a random walk graph and the correlations between words are added to the framework, from which the polarity of unknown words can be inferred.

The discretization method of rough sets has definite requirements: it must satisfy the consistency requirement of decision-system discretization. Rough set theory includes the reduction of information systems; by removing redundant information it extracts rules, thereby classifying a system without any prior knowledge. The discretization of continuous attributes likewise involves reduction of the decision table: by selecting cut points and merging adjacent intervals, a reduction of the condition attributes of the decision table is obtained.

The decision table after discretization yields the data implicit in the knowledge system, i.e., the decision rules, which increases the likelihood of matching new objects. In text processing, the "curse of dimensionality" inevitably arises: in the text vector space it becomes the problem of linearly partitioning a high-dimensional feature space, and growing dimensionality can cause data sparsity and make attribute-value matching difficult. This is also the problem the present invention seeks to solve.
Summary of the invention
In view of the drawbacks of the prior art, the present invention proposes a text sentiment classification method based on random walk and rough decision confidence, which uses the random walk feature and the rough decision confidence to solve the problem of discriminating sentiment-word polarity during text classification, so that the sentiment of a feature text is classified correctly.

To achieve the above object, the present invention proposes a text sentiment classification method based on random walk and rough decision confidence, comprising the following steps:

Step 1: preprocess the data and generate a preprocessed data set DOC;

Step 2: perform feature selection on the data set according to a feature selection algorithm;

Step 3: perform part-of-speech analysis on the feature selection result according to the random walk feature, the output being the sentiment-word polarity Sentiment(w) of each word; normalize the outgoing weights of each node, the weight value being adjusted by the percentage of random walks that terminate;

Step 4: using the feature selection result of step 2 together with the weight values obtained in step 3, obtain the candidate text attributes and build a sentiment decision table;

Step 5: using rough set theory, discretize the sentiment decision table obtained in step 4 to form a discretized decision table;

Step 6: using the rough decision confidence, make the final sentiment category judgement on the discretized decision table obtained in step 5.
Further, the data preprocessing in step 1 includes word segmentation and part-of-speech extraction. The feature selection algorithm in step 2 is SM-CHI = LF(t) * CHI(t) * SM(t), where LF(t) = 0 if t is a stop word that is not a verb, noun, or adjective in the passage, and LF(t) = 1 otherwise; CHI(t) is the chi-square value; and SM(t) indicates whether word t has synonyms, in which case all of its synonyms are merged.
Further, in step 3 part-of-speech analysis is performed on the feature selection result via the random walk feature, and the weights are adjusted after the walk. The adjustment method is:

where Pij denotes the normalized walk weight from node i to node j, Wkmin denotes the minimum of the K walk-weight adjustments, and Wkmax denotes the maximum of the K walk-weight adjustments.
Further, the following sentiment decision table is built in step 4: each attribute (Fj, Pj) consists of a feature Fj and its lexical sentiment-polarity weight Pj, and wij denotes the weight of text Doci under feature Fj; in Table 1, column "C" denotes the sentiment orientation class of the text, taking the value "positive" or "negative".
Further, before the sentiment decision table of step 4 is discretized, equivalence classes are first computed to obtain the partition U/B of the object set U:
1. sort the object set U;
2. put the first object of U into a block;
3. if xi and xk have identical values on every attribute in B, then Bs = Bs ∪ {xi}; otherwise begin a new block and continue the loop.
Further, after the partition U/B of the object set U is obtained, the sentiment decision table DT = (U, C ∪ D, V, F) is discretized; the expected result of discretization is DT* = (U*, C* ∪ D*, V*, F*).
Further, the confidence of a single text in the discretized decision table in step 6 is computed as follows:

where μ denotes the confidence of x under attribute value j* with respect to decision class Cd, and [x]r denotes the r-th equivalence class determined by x in the universe U; the closer the confidence is to 1, the more credibly x belongs to decision class Cd. From the single-text confidences, the sentiment category of the entire text is obtained:

the sentiment-polarity weights Pj being normalized and wij ≠ 0; the closer μCd(DOC) is to 1, the more credibly the text belongs to decision class Cd.
Advantageous effects
1) Using the random-walk-based method, the invention converts the text vocabulary into the construction of a graph network; random walk behavior is analyzed and the number of walk steps is weighted, so as to obtain the sentiment polarity of words after the random walk.

2) The invention discretizes the sentiment-word polarity results obtained after the word network is built and weights the attribute values, extending the range of application of the random walk feature; the weighted attribute texts are then classified by the final rough decision confidence.

3) The invention can perform sentiment category division on arbitrary text.
Description of the drawings
Fig. 1 is the system flow chart of the present invention;
Fig. 2 is the system structure diagram of the present invention;
Fig. 3 is the raw data sample figure of the present invention;
Fig. 4 is the data processed result figure of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.

The method of the present invention is described in detail as follows:
Step 1: preprocess the data, including word segmentation and part-of-speech extraction, and generate the preprocessed data set DOC;
Step 2: perform feature selection on the data set according to the feature selection algorithm SM-CHI = LF(t) * CHI(t) * SM(t), where LF(t) = 0 if t is a stop word that is not a verb, noun, or adjective in the passage, and LF(t) = 1 otherwise; CHI(t) is the chi-square value; and SM(t) indicates whether word t has synonyms, in which case all of its synonyms are merged. Processing the original CHI scores in this way effectively reduces spuriously high CHI values. The CHI formula is as follows:

where A + B is the number of texts containing word t and N is the total number of texts. If t occurs in every category with high frequency, its CHI value is close to 0, so high-frequency words without discriminative attributes are screened out.
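As an illustration of the SM-CHI score described above, the following sketch combines the chi-square statistic with the LF(t) stop-word filter and a simple synonym factor SM(t). The 2x2 contingency-table layout and the `sm_chi` signature are assumptions made for the example, not the patent's exact implementation.

```python
def chi_square(A, B, C, D):
    """Chi-square statistic for term t from a 2x2 contingency table:
    A: in-class docs containing t, B: out-of-class docs containing t
    (so A + B is the number of texts containing t), C: in-class docs
    without t, D: out-of-class docs without t; N = A + B + C + D."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    if denom == 0:
        return 0.0
    return N * (A * D - C * B) ** 2 / denom

def sm_chi(term, A, B, C, D, stopwords, content_pos, pos_tag, synonym_count):
    # LF(t): 0 for a stop word that is not a verb/noun/adjective, else 1
    lf = 0 if (term in stopwords and pos_tag not in content_pos) else 1
    # SM(t): modeled here as the number of merged synonym entries for t
    return lf * chi_square(A, B, C, D) * synonym_count
```

A term that appears uniformly across categories gives A*D ≈ C*B and hence a CHI value near 0, which is exactly the high-frequency, non-discriminative case the text describes.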
Step 3: from the description of the random walk feature and the prediction of walk states, a sentiment-word polarity discrimination method based on an extended random walk feature is proposed. First a word-association graph is built and a random walk is defined on it. Let S+ and S- denote the two groups of vertices already labeled as positive and negative target words, respectively. The polarity of a word is computed via the random walk feature as follows:

if rw → Pos(w)
E[h(w|S+)] = h*(w|S+);
set the spreading parameter λ (0 < λ < 1);
if h*(w|S+) ≤ λ·h*(w|S-) then
Sentiment(w) = Pos(w);

Combined with the word-association graph, the method yields the final lexical sentiment polarity. Here rw denotes the step count of a random walk; if one of the walks reaches the target node Pos(w), the walk terminates and the target expected value h*(w|S+) of h(w|S+) is obtained, in which case the polarity of the word is positive. Similarly, a word can be judged negative or neutral. The spreading parameter ensures that the determined polarity is a definite value.
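The walk-based polarity test above can be sketched with a Monte-Carlo hitting-time estimate. The example graph, seed sets, walk truncation, and λ threshold below are illustrative assumptions; walks that never reach a seed contribute the maximum step count.

```python
import random

def hitting_time(graph, start, targets, max_steps=50, n_walks=200, seed=0):
    """Monte-Carlo estimate of the expected number of steps for a random
    walk from `start` to reach any node in `targets`; walks that never
    arrive contribute max_steps (a truncation assumed for this sketch)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        node, steps = start, 0
        while node not in targets and steps < max_steps:
            node = rng.choice(graph[node])  # uniform step over neighbors
            steps += 1
        total += steps
    return total / n_walks

def word_polarity(graph, w, pos_seeds, neg_seeds, lam=0.8):
    # compare expected hitting times to the positive and negative seed sets
    h_pos = hitting_time(graph, w, pos_seeds)
    h_neg = hitting_time(graph, w, neg_seeds)
    if h_pos <= lam * h_neg:
        return "positive"
    if h_neg <= lam * h_pos:
        return "negative"
    return "neutral"
```

A word whose walks reach the positive seeds much sooner than the negative seeds, i.e. h*(w|S+) ≤ λ·h*(w|S-), is labeled positive, mirroring the rule above.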
Since computing the transition probability P requires normalizing the outgoing weights W (0 < W < 1) of each node, the resulting sentiment-word tendency can be measured by the weight value: the higher the weight, the more strongly the word tends toward the negative or positive pole. The weight value is adjusted by the percentage of random walks that terminate, and the weight is expressed as follows:

where Pij denotes the normalized walk weight from node i to node j, Wkmin denotes the minimum of the K walk-weight adjustments, and Wkmax denotes the maximum of the K walk-weight adjustments.
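A plausible reading of the Wkmin/Wkmax adjustment described above is a min-max rescaling of the normalized walk weight. The sketch below is an assumption made for illustration, not the patent's exact formula.

```python
def adjust_weight(p_ij, w_kmin, w_kmax):
    # Illustrative min-max rescaling of the normalized walk weight P_ij
    # using the minimum and maximum of the K walk-weight adjustments.
    # This is an assumed reconstruction, not the patent's exact formula.
    if w_kmax == w_kmin:
        return 0.0
    return (p_ij - w_kmin) / (w_kmax - w_kmin)
```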
Step 4: using the feature selection result of step 2 together with the weight values obtained in step 3, obtain the candidate attribute set and build the following text sentiment decision table.

The candidate attribute set is obtained as follows:
1) segment the training texts into words;
2) from step 1), compute the CHI value of each word and screen the lexical features to obtain the candidate feature set;
3) from step 2), obtain the sentiment-polarity weights of the candidate feature set and combine them with the lexical features to form the candidate attribute set.

These three steps yield the candidate attribute set, from which a text representation carrying the sentiment polarity of the feature words is established. Formally, a text Doci is represented as the vector (wi1, …, wij, …, win) of its values under the attributes (F1, P1), (F2, P2), …, (Fn, Pn), where each attribute (Fj, Pj) consists of a feature Fj and its lexical sentiment-polarity weight Pj, and wij is the weight of text Doci at feature Fj. In Table 1, column "C" is the decision attribute, with value 1 or 0, where 1 indicates that the document is positive and 0 that it is negative.
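The formal representation above, each text Doci as a weight vector over the attributes (Fj, Pj) plus a decision column C, can be assembled as follows; the dictionary layout and helper names are assumptions made for the example.

```python
def build_decision_table(doc_ids, features, weight, label):
    """Build the sentiment decision table: one row per text Doc_i holding
    its weights w_ij at each feature F_j, plus the decision column "C"
    (1 = positive document, 0 = negative document)."""
    table = []
    for doc in doc_ids:
        row = {f: weight(doc, f) for f in features}
        row["C"] = label[doc]
        table.append(row)
    return table
```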
Step 5: using rough set theory, discretize the sentiment decision table obtained in step 4. Before this processing, equivalence classes are first computed to obtain the partition U/B of the object set U:
1) sort the object set U;
2) put the first object of U into a block;
3) if xi and xk have identical values on every attribute in B, then Bs = Bs ∪ {xi}; otherwise begin a new block and continue the loop.
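The three-step equivalence-class procedure amounts to grouping objects that are indiscernible on the attribute set B. A compact sketch (hash-based rather than sort-based, which yields the same partition U/B):

```python
def equivalence_classes(U, B, value):
    """Partition the object set U by the indiscernibility relation over B:
    objects with identical values on every attribute in B share a block."""
    blocks = {}
    for x in U:
        key = tuple(value(x, a) for a in B)  # the attribute-value signature of x
        blocks.setdefault(key, []).append(x)
    return list(blocks.values())
```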
After the above processing, the sentiment decision table DT = (U, C ∪ D, V, F) is discretized; the expected result of discretization is DT* = (U*, C* ∪ D*, V*, F*).
The details are as follows:

To overcome the complexity of rule ordering in conventional discretization methods, the weights of the walk edges are partitioned via the random walk feature to obtain the walk weight of each word; for each walk weight, combined with the candidate feature words, the candidate attribute set C is formed. To avoid the data redundancy caused by selecting too many cut points in conventional discretization, the decision table is discretized by the following method:

1) low ← Max({f(x, a) : x ∈ Xj});
2) high ← Min({f(x, a) : x ∈ Xj+1});
3) compute Xj/(C - {h}), Xj+1/(C - {h}) and (Xj ∪ Xj+1)/(C - {h});
4) compute the positive regions of Xj and Xj+1 under the partition based on (C - {h});
5) if the sum of these positive regions equals the positive region of Xj ∪ Xj+1, then count ← count + 1, Ycount ← Xj ∪ Xj+1;

In this method, 1) and 2) obtain the maximum and minimum of adjacent intervals, which give the cut point c = (low + high)/2; the partitions in 3) allow the positive regions in 4) to be computed, i.e., the classification based on (C - {h}); and 5) checks whether the sum of the positive regions of Xj and Xj+1 under the partition based on (C - {h}) equals that of Xj ∪ Xj+1. Fewer cut points and higher accuracy are obtained by adjusting the parameter ω, whose value set P is P = {ω ∈ R | 0.1 < ω < 2.0}; the final cut points are obtained, and the discretized decision table DT* is returned.
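The cut point c = (low + high)/2 between adjacent intervals can be illustrated with a simplified boundary-point rule: a midpoint is kept only where neighboring values carry different decisions, which already discards the redundant cuts that the ω-controlled procedure targets. This sketch omits the positive-region test and the ω parameter, so it is an assumption-laden simplification, not the patent's exact routine.

```python
def candidate_cuts(values, labels):
    """Boundary cut points for one continuous attribute: keep the midpoint
    (low + high) / 2 only between adjacent sorted values whose decision
    labels differ, discarding cuts inside pure runs of one class."""
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, d1), (v2, d2) in zip(pairs, pairs[1:]):
        if v1 != v2 and d1 != d2:
            cuts.append((v1 + v2) / 2)
    return cuts
```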
Step 6: using the rough decision confidence, make the final sentiment category judgement on the discretized decision table obtained in step 5.

The text sentiment category judgement method based on the rough decision confidence uses the condition attributes of the text to be classified, together with their walk weight values, to find the equivalence class of the text in the training set and compute the confidence of its decision class; it then computes the weights under the attribute features at the corresponding text positions, constructs the membership function under the decision classes, and obtains the final text sentiment category judgement. By computing the membership degree of each attribute of the text to be classified, the difficulty of matching many attribute values, and the data sparsity it causes, are avoided.
Given a decision table DT = (U, C, D), where j ∈ B, x ∈ U, the attributes are symbolic attributes, j* is a condition attribute value under attribute j, and Cd is a decision class, Cd = {x ∈ U | C(x) = d}; then

where μ denotes the confidence of x under attribute value j* with respect to decision class Cd, and [x]r denotes the r-th equivalence class determined by x in the universe U; the closer the confidence is to 1, the more credibly x belongs to decision class Cd.

Let DOC be a text to be classified, let wj denote the weight of text DOC under feature Fj among all attributes of the decision table DT*, and let Cd be a decision class; then

is called the rough decision confidence of the text under decision class d, where the sentiment-polarity weights Pj are normalized, wij ≠ 0, and the closer μCd(DOC) is to 1, the more credibly the text belongs to decision class Cd.
In the text DOC to be classified, CPOS and CNEG denote the positive and negative sentiment categories respectively; from μCd(DOC) the two classes of text are obtained as:

where:
CPOS = {x ∈ DOC | C(x) = POS}
CNEG = {x ∈ DOC | C(x) = NEG}

By computing the confidence of the document, the maximum of its memberships in the positive and negative classes is obtained; the class with the larger confidence of the two is the classification result.
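The decision rule above, assigning the class whose confidence is maximal, reduces for one equivalence class to the fraction of training texts in [x] carrying each decision. A minimal sketch with illustrative labels "POS"/"NEG":

```python
def decision_confidence(equiv_class, decision, d):
    """mu_{C_d}(x): the fraction of the equivalence class [x] whose decision
    equals d, i.e. |[x] ∩ C_d| / |[x]| (the rough decision confidence)."""
    return sum(1 for x in equiv_class if decision[x] == d) / len(equiv_class)

def classify_text(equiv_class, decision):
    # the sentiment class with the larger confidence wins
    mu_pos = decision_confidence(equiv_class, decision, "POS")
    mu_neg = decision_confidence(equiv_class, decision, "NEG")
    return "POS" if mu_pos >= mu_neg else "NEG"
```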
At this point the invention has produced the text classification result: the input text data set is processed by the six steps in turn to obtain the final sentiment-classified text.
Through step 2 the present invention obtains good candidate text features. Using the random walk feature, the text space is transformed into a word graph, and a sentiment-word polarity discrimination method based on the extended random walk feature is proposed, which effectively discriminates sentiment-word polarity and finally yields the candidate attribute set.

By processing the candidate attribute set and combining the feature weights, the invention builds the sentiment-word decision table and proposes an attribute discretization method based on the ordering of sentiment-word polarity weights, obtaining the discretized decision table; via the rough decision confidence method, the final text decision classification is generated.
Because of the randomness and ambiguity of user behavior when browsing web pages, analyzing and modeling users' click behavior on web pages achieves good results in classifying web pages by user behavior. The web-page classification method of the random walk feature is applied to the construction of the word network; combined with the rough set method, the sentiment-word polarity results obtained after the word network is built are discretized and the attribute values weighted, extending the range of application of the random walk feature, and the weighted attribute texts are classified by the final rough decision confidence.
Embodiment 1:
1. Input the text DOC (see Fig. 3), preprocess the words according to step 1, and perform entity extraction to obtain the original text and the original processed data, shown in Fig. 4.
2. Preprocess the text DOC to obtain the text DOC1.0; perform feature selection by step 2; obtain the polarity and weight of each word by step 3; and integrate them by step 4 to obtain the candidate attributes and the text decision table:
Text sentiment decision table DOC
3. With the method provided by step 5, partition the original decision table into equivalence classes, select the discretization cut points accordingly, and perform the final discretization.
With ω set to 0.5, the fewest cut points are obtained, and the discretized decision table is:
Discretized text sentiment decision table DOC
4. With the method provided by step 6, perform the final division on the discretized decision table; the division result is positive, i.e., the text is a positive text.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (7)
1. A text sentiment classification method based on random walk and rough decision confidence, characterized by comprising the following steps:
Step 1: preprocess the data and generate a preprocessed data set DOC;
Step 2: perform feature selection on the data set according to a feature selection algorithm;
Step 3: perform part-of-speech analysis on the feature selection result according to the random walk feature, the output being the sentiment-word polarity Sentiment(w) of each word; normalize the outgoing weights of each node, the weight value being adjusted by the percentage of random walks that terminate;
Step 4: using the feature selection result of step 2 together with the weight values obtained in step 3, obtain the candidate text attributes and build a sentiment decision table;
Step 5: using rough set theory, discretize the sentiment decision table obtained in step 4 to form a discretized decision table;
Step 6: using the rough decision confidence, make the final sentiment category judgement on the discretized decision table obtained in step 5.
2. The text sentiment classification method according to claim 1, characterized in that the data preprocessing in step 1 includes word segmentation and part-of-speech extraction; the feature selection algorithm in step 2 is SM-CHI = LF(t) * CHI(t) * SM(t), where LF(t) = 0 if t is a stop word that is not a verb, noun, or adjective in the passage, and LF(t) = 1 otherwise; CHI(t) is the chi-square value; and SM(t) indicates whether word t has synonyms, in which case all of its synonyms are merged.
3. The text sentiment classification method according to claim 2, characterized in that in step 3 part-of-speech analysis is performed on the feature selection result via the random walk feature, and the weights are adjusted after the walk, the adjustment method being:
where Pij denotes the normalized walk weight from node i to node j, Wkmin denotes the minimum of the K walk-weight adjustments, and Wkmax denotes the maximum of the K walk-weight adjustments.
4. The text sentiment classification method according to claim 3, characterized in that the following sentiment decision table is built in step 4: each attribute (Fj, Pj) consists of a feature Fj and its lexical sentiment-polarity weight Pj, and wij denotes the weight of text Doci at feature Fj; in Table 1, column "C" denotes the sentiment orientation class of the text, taking the value "positive" or "negative".
5. The text sentiment classification method according to claim 4, characterized in that before the discretization, equivalence classes are first computed to obtain the partition U/B of the object set U:
1. sort the object set U;
2. put the first object of U into a block;
3. if xi and xk have identical values on every attribute in B, then Bs = Bs ∪ {xi}; otherwise begin a new block and continue the loop.
6. The text sentiment classification method according to claim 5, characterized in that after the partition U/B of the object set U is obtained, the sentiment decision table DT = (U, C ∪ D, V, F) is discretized, the expected result of discretization being DT* = (U*, C* ∪ D*, V*, F*).
7. The text sentiment classification method according to claim 6, characterized in that the confidence of a single text in the discretized decision table in step 6 is computed as follows:
where μ denotes the confidence of x under attribute value j* with respect to decision class Cd, and [x]r denotes the r-th equivalence class determined by x in the universe U; the closer the confidence is to 1, the more credibly x belongs to decision class Cd; from the single-text confidences, the sentiment category of the entire text is obtained:
the sentiment-polarity weights Pj being normalized and wij ≠ 0; the closer μCd(DOC) is to 1, the more credibly the text belongs to decision class Cd.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810298350.9A CN108509421B (en) | 2018-04-04 | 2018-04-04 | Text emotion classification method based on random walk and rough decision confidence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108509421A true CN108509421A (en) | 2018-09-07 |
CN108509421B CN108509421B (en) | 2021-09-28 |
Family
ID=63380607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810298350.9A Active CN108509421B (en) | 2018-04-04 | 2018-04-04 | Text emotion classification method based on random walk and rough decision confidence |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101944218A (en) * | 2010-01-27 | 2011-01-12 | 北京大学 | Personalized recommended method based on picture under social network and system thereof |
CN103678703A (en) * | 2013-12-30 | 2014-03-26 | 中国科学院自动化研究所 | Method and device for extracting open category named entity by means of random walking on map |
CN103886062A (en) * | 2014-03-18 | 2014-06-25 | 浙江大学 | Text phrase weight calculation method based on semantic network |
CN105117428A (en) * | 2015-08-04 | 2015-12-02 | 电子科技大学 | Web comment sentiment analysis method based on word alignment model |
CN106776884A (en) * | 2016-11-30 | 2017-05-31 | 江苏大学 | A kind of act of terrorism Forecasting Methodology that multi-categorizer is combined based on multi-tag |
Non-Patent Citations (3)
Title |
---|
CHENG MINGZHI et al.: "A Random Walk Method for Sentiment Classification", 2009 Second International Conference on Future Information Technology and Management Engineering *
Kang Shize et al.: "A multi-document sentiment summarization framework based on the Opinosis graph and a Markov random walk model", Acta Electronica Sinica *
Yang Shuaihua et al.: "Research on a KNN text classification algorithm based on rough set approximation sets", Journal of Chinese Computer Systems *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109739953A (en) * | 2018-12-30 | 2019-05-10 | 广西财经学院 | The text searching method extended based on chi-square analysis-Confidence Framework and consequent |
CN109739953B (en) * | 2018-12-30 | 2021-07-20 | 广西财经学院 | Text retrieval method based on chi-square analysis-confidence framework and back-part expansion |
CN109800296A (en) * | 2019-01-21 | 2019-05-24 | 四川长虹电器股份有限公司 | A kind of meaning of one's words fuzzy recognition method based on user's true intention |
CN109800296B (en) * | 2019-01-21 | 2022-03-01 | 四川长虹电器股份有限公司 | Semantic fuzzy recognition method based on user real intention |
CN110399595A (en) * | 2019-07-31 | 2019-11-01 | 腾讯科技(成都)有限公司 | A kind of method and relevant apparatus of text information mark |
CN110399595B (en) * | 2019-07-31 | 2024-04-05 | 腾讯科技(成都)有限公司 | Text information labeling method and related device |
CN111143569A (en) * | 2019-12-31 | 2020-05-12 | 腾讯科技(深圳)有限公司 | Data processing method and device and computer readable storage medium |
CN111143569B (en) * | 2019-12-31 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Data processing method, device and computer readable storage medium |
CN112069319A (en) * | 2020-09-10 | 2020-12-11 | 杭州中奥科技有限公司 | Text extraction method and device, computer equipment and readable storage medium |
CN112069319B (en) * | 2020-09-10 | 2024-03-22 | 杭州中奥科技有限公司 | Text extraction method, text extraction device, computer equipment and readable storage medium |
CN112667817A (en) * | 2020-12-31 | 2021-04-16 | 杭州电子科技大学 | Text emotion classification integration system based on roulette attribute selection |
CN112667817B (en) * | 2020-12-31 | 2022-05-31 | 杭州电子科技大学 | Text emotion classification integration system based on roulette attribute selection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2023-04-20
Address after: 803, Block B, No. 8 Xueqing Road (Science and Technology Wealth Center), Haidian District, Beijing 100083
Patentee after: Fuxin Kunpeng (Beijing) Information Technology Co., Ltd.
Address before: No. 100, Kexue Avenue, High-tech Industrial Development Zone, Zhengzhou City, Henan Province, 450000
Patentee before: Zhengzhou University