CN109815331A - Construction method, apparatus and computer equipment for a text sentiment classification model - Google Patents
Construction method, apparatus and computer equipment for a text sentiment classification model
- Publication number
- CN109815331A CN109815331A CN201910012242.5A CN201910012242A CN109815331A CN 109815331 A CN109815331 A CN 109815331A CN 201910012242 A CN201910012242 A CN 201910012242A CN 109815331 A CN109815331 A CN 109815331A
- Authority
- CN
- China
- Prior art keywords
- text
- text data
- model
- data
- mixed Gaussian
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The present application relates to a construction method, apparatus, computer equipment and storage medium for a text sentiment classification model. The method includes: obtaining first text data carrying sentiment labels and second text data without sentiment labels; obtaining the word vector of each word in the text data, inputting the word vectors of each text data into a text feature extraction model, and obtaining the latent feature vector of each text data; performing semi-supervised training of a Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as the target Gaussian mixture model; and finally determining the text feature extraction model and the target Gaussian mixture model as the text sentiment classification model. Because the method constructs the text sentiment classification model by semi-supervised learning, it effectively reduces the model's dependence on labeled data resources and improves the accuracy of recognizing the sentiment category of comment text data.
Description
Technical field
The present application relates to the technical field of data processing, and in particular to a construction method, apparatus, computer equipment and storage medium for a text sentiment classification model.
Background art
Text sentiment classification is an important branch of natural language processing. At present, neural network models perform well in text sentiment classification, but training the parameters of a neural network model requires a large amount of labeled data. In some specific application domains, however, labeled training corpora are scarce; a neural network model trained with little labeled data classifies text sentiment unsatisfactorily, which lowers the accuracy of text sentiment classification.
Summary of the invention
In view of the above technical problems, it is necessary to provide a construction method, apparatus, computer equipment and storage medium for a text sentiment classification model.
A construction method for a text sentiment classification model, the method comprising:
obtaining text training data, the text training data including first text data, sentiment labels of the first text data, and second text data;
obtaining the word vector corresponding to each word in the first text data and the second text data, inputting the word vectors of each of the first text data and the second text data into a pre-trained text feature extraction model, and obtaining the latent feature vector of each of the first text data and the second text data;
performing semi-supervised training of a pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as a target Gaussian mixture model; and
generating a text sentiment classification model from the text feature extraction model and the target Gaussian mixture model.
In one of the embodiments, the step of performing semi-supervised training of the pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as the target Gaussian mixture model, comprises:
performing supervised training of the Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data, to obtain a first Gaussian mixture model;
inputting the latent feature vectors of the second text data into the first Gaussian mixture model, to obtain first sentiment labels of the second text data;
performing supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain a second Gaussian mixture model;
inputting the latent feature vectors of the second text data into the second Gaussian mixture model, to obtain second sentiment labels of the second text data; and
obtaining the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data, and, when the ratio is less than a preset threshold, determining the second Gaussian mixture model as the target Gaussian mixture model.
In one of the embodiments, after the step of obtaining the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data, the method further comprises:
when the ratio is greater than or equal to the preset threshold, determining the second Gaussian mixture model as the first Gaussian mixture model, determining the second sentiment labels as the first sentiment labels, and jumping back to the step of performing supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain the second Gaussian mixture model.
In one of the embodiments, before the step of inputting the word vectors of each text data into the pre-trained text feature extraction model, the method further comprises:
obtaining news corpus data, the news corpus data including a plurality of news corpus samples and a type label of each news corpus sample;
obtaining the word vector of each word in each news corpus sample;
performing supervised training of a pre-constructed convolutional neural network model using the word vectors and type labels of each news corpus sample; and
extracting the feature parameters of the convolutional layers of the trained convolutional neural network model, and generating the text feature extraction model from the feature parameters of the convolutional layers.
In one of the embodiments, the step of performing supervised training of the pre-constructed convolutional neural network model using the word vectors and type labels of each news corpus sample comprises:
inputting the word vectors of each news corpus sample into the convolutional neural network model, to obtain a classification result for each news corpus sample; and
adjusting the parameters of the convolutional neural network model by backpropagation and gradient descent according to the type label and classification result corresponding to each news corpus sample.
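The supervised adjustment of parameters by backpropagation and gradient descent can be illustrated with a minimal sketch. The patent trains a convolutional neural network; the single logistic unit below is a hypothetical stand-in that shows only the update rule w ← w − lr · gradient applied after each forward pass.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Minimal gradient-descent training of a one-neuron classifier
    (illustrative stand-in for the patent's CNN training step)."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # forward pass (sigmoid)
            err = p - y                           # d(log-loss)/dz
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # gradient step
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

The same loop structure (forward pass, error against the type label, parameter update) is what the embodiment describes, only with the CNN's convolutional and fully-connected parameters in place of `w` and `b`.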
In one of the embodiments, after the step of generating the text sentiment classification model from the text feature extraction model and the target Gaussian mixture model, the method further comprises:
obtaining a text to be predicted, and obtaining the word vector of each word in the text to be predicted;
inputting the word vectors of the text to be predicted into the text feature extraction model, to obtain the latent feature vector of the text to be predicted; and
inputting the latent feature vector into the target Gaussian mixture model, to obtain the sentiment type of the text to be predicted.
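The two-stage inference flow just described (feature extraction, then mixture-model scoring) can be sketched as a small pipeline. Both `extract_features` and `class_scores` below are hypothetical stand-ins for the patent's text feature extraction model and target Gaussian mixture model.

```python
def classify_text(text, extract_features, class_scores):
    """Two-stage inference: obtain the latent feature vector first,
    then pick the most probable sentiment class under the stand-in
    mixture model."""
    h = extract_features(text)        # latent feature vector of the text
    scores = class_scores(h)          # per-class probability scores
    return max(range(len(scores)), key=scores.__getitem__)

# Toy stand-ins for demonstration only.
extract = lambda t: [len(t)]
scores = lambda h: [1.0, 0.0] if h[0] < 5 else [0.0, 1.0]
```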
A text sentiment classification apparatus, the apparatus comprising:
a training data obtaining module, configured to obtain text training data, the text training data including first text data, sentiment labels of the first text data, and second text data;
a latent feature obtaining module, configured to obtain the word vector corresponding to each word in the first text data and the second text data, input the word vectors of each of the first text data and the second text data into a pre-trained text feature extraction model, and obtain the latent feature vector of each of the first text data and the second text data;
a model training module, configured to perform semi-supervised training of a pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determine the trained Gaussian mixture model as a target Gaussian mixture model; and
a classification model generation module, configured to generate a text sentiment classification model from the text feature extraction model and the target Gaussian mixture model.
In one of the embodiments, the model training module is configured to:
perform supervised training of the Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data, to obtain a first Gaussian mixture model;
input the latent feature vectors of the second text data into the first Gaussian mixture model, to obtain first sentiment labels of the second text data;
perform supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain a second Gaussian mixture model;
input the latent feature vectors of the second text data into the second Gaussian mixture model, to obtain second sentiment labels of the second text data; and
obtain the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data, and, when the ratio is less than a preset threshold, determine the second Gaussian mixture model as the target Gaussian mixture model.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the following steps:
obtaining text training data, the text training data including first text data, sentiment labels of the first text data, and second text data;
obtaining the word vector corresponding to each word in the first text data and the second text data, inputting the word vectors of each of the first text data and the second text data into a pre-trained text feature extraction model, and obtaining the latent feature vector of each of the first text data and the second text data;
performing semi-supervised training of a pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as a target Gaussian mixture model; and
generating a text sentiment classification model from the text feature extraction model and the target Gaussian mixture model.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the following steps:
obtaining text training data, the text training data including first text data, sentiment labels of the first text data, and second text data;
obtaining the word vector corresponding to each word in the first text data and the second text data, inputting the word vectors of each of the first text data and the second text data into a pre-trained text feature extraction model, and obtaining the latent feature vector of each of the first text data and the second text data;
performing semi-supervised training of a pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as a target Gaussian mixture model; and
generating a text sentiment classification model from the text feature extraction model and the target Gaussian mixture model.
With the above construction method, apparatus, computer equipment and storage medium for a text sentiment classification model, the latent feature vectors of both the labeled text and the unlabeled text are obtained through a pre-trained text feature extraction model, and semi-supervised training of the Gaussian mixture model is performed with these latent feature vectors to determine the parameters of the Gaussian mixture model; finally, the text feature extraction model and the trained Gaussian mixture model together serve as the text sentiment classification model. Because this scheme completes the training of the Gaussian mixture model with a semi-supervised algorithm from both the labeled data and the unlabeled data, it effectively reduces the dependence of the text sentiment classification model on labeled data resources, lowers the modeling cost, gives the model strong generalization, and effectively improves the accuracy of recognizing the sentiment category of comment text data.
Brief description of the drawings
Fig. 1 is a diagram of the application environment of the construction method of the text sentiment classification model in one embodiment;
Fig. 2 is a schematic flowchart of the construction method of the text sentiment classification model in one embodiment;
Fig. 3 is a schematic flowchart of the training steps of the Gaussian mixture model in one embodiment;
Fig. 4 is a schematic flowchart of the training steps of the text feature extraction model in another embodiment;
Fig. 5 is a structural block diagram of the construction apparatus of the text sentiment classification model in one embodiment.
Specific embodiments
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
The construction method of the text sentiment classification model provided by the present application can be applied in the application environment shown in Fig. 1. Fig. 1 provides a computer device, which may be a server; the computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as training data and model parameters. The network interface of the computer device is used to communicate with external terminals through a network connection. When executed by the processor, the computer program implements a construction method of a text sentiment classification model, so as to construct the text sentiment classification model.
Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the part of the structure relevant to the solution of the present application, and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, as shown in Fig. 2, a construction method of a text sentiment classification model is provided. Taking the method as applied to the server in Fig. 1 as an example, the method comprises the following steps:
Step S210: obtain text training data, the text training data including first text data, sentiment labels of the first text data, and second text data.
In this step, the server obtains the text training data, wherein the text training data includes first text data labeled with sentiment labels and second text data without sentiment labels.
Step S220: obtain the word vector corresponding to each word in the first text data and the second text data, input the word vectors of each of the first text data and the second text data into the pre-trained text feature extraction model, and obtain the latent feature vector of each of the first text data and the second text data.
In this step, the text data includes the first text data labeled with sentiment labels and the second text data without sentiment labels. For the pre-trained text feature extraction model, the word vectors corresponding to a text data item are set as the input, and the latent feature vector of the text data is set as the output of the model. The server obtains the word vector of each word in the text data and inputs the obtained word vectors into the text feature extraction model; the text feature extraction model analyzes the data features of the input word vectors and finally outputs the latent feature vector corresponding to the text data. Extracting the latent feature vectors of the training data with the text feature extraction model provides more text-related feature information for the subsequent modeling with those vectors, effectively reduces the dependence of constructing the text sentiment classification model on labeled data, and effectively avoids the drop in sentiment classification accuracy that short texts would otherwise cause by providing insufficient text information.
Specifically, the server performs operations such as stop-word removal and word segmentation on the text to obtain the words in the text, and trains on the obtained words with Word2Vec to obtain the corresponding word vectors.
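The preprocessing just described can be sketched as follows. Training Word2Vec itself requires a dedicated library (e.g. gensim), so a small embedding lookup table stands in for its output here; the stop-word list is likewise a hypothetical English one, and whitespace splitting stands in for the word segmenter the patent would apply to Chinese text.

```python
STOP_WORDS = {"the", "a", "of", "is"}   # hypothetical stop-word list

def preprocess(text):
    """Tokenize and drop stop words, as in the patent's preprocessing
    step (whitespace splitting stands in for word segmentation)."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

# Stand-in for Word2Vec output: a toy word -> vector lookup table.
EMBEDDINGS = {"good": [0.9, 0.1], "bad": [0.1, 0.9], "movie": [0.5, 0.5]}

def text_vectors(text):
    """Word vectors of a text, ready to feed the feature extractor."""
    return [EMBEDDINGS[w] for w in preprocess(text) if w in EMBEDDINGS]
```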
Step S230: perform semi-supervised training of the pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determine the trained Gaussian mixture model as the target Gaussian mixture model.
In this step, the server performs semi-supervised training of the Gaussian mixture model using the first text data labeled with sentiment labels and the second text data without sentiment labels, improving the accuracy of the Gaussian mixture model in classifying text sentiment while effectively reducing the dependence of constructing the text sentiment classification model on labeled data.
Step S240: generate the text sentiment classification model from the text feature extraction model and the target Gaussian mixture model.
In this step, the server combines the text feature extraction model and the target Gaussian mixture model to generate the text sentiment classification model; the model has strong generalization and effectively improves the accuracy of recognizing the sentiment category of comment text data. In subsequent text sentiment classification, the latent feature vector of a text is first obtained with the text feature extraction model, and the latent feature vector of the text is then input into the target Gaussian mixture model to obtain the sentiment classification of the text.
In the above construction method of the text sentiment classification model, the latent feature vectors of the labeled text and the unlabeled text are obtained through the pre-trained text feature extraction model, and semi-supervised training of the Gaussian mixture model is performed with these latent feature vectors to determine the parameters of the Gaussian mixture model; finally, the text feature extraction model and the trained Gaussian mixture model serve as the text sentiment classification model. This scheme effectively reduces the dependence of the text sentiment classification model on labeled data resources, lowers the modeling cost, gives the text sentiment classification model strong generalization, and effectively improves the accuracy of recognizing the sentiment category of comment text data.
In one embodiment, the step of performing semi-supervised training of the pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as the target Gaussian mixture model, comprises: performing supervised training of the Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data, to obtain a first Gaussian mixture model; inputting the latent feature vectors of the second text data into the first Gaussian mixture model, to obtain first sentiment labels of the second text data; performing supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain a second Gaussian mixture model; inputting the latent feature vectors of the second text data into the second Gaussian mixture model, to obtain second sentiment labels of the second text data; and obtaining the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data, and, when the ratio is less than a preset threshold, determining the second Gaussian mixture model as the target Gaussian mixture model.
In this embodiment, after obtaining the latent feature vectors corresponding to the first text data and the second text data, the server performs supervised training of the Gaussian mixture model with the first text data labeled with sentiment labels, solves for the initial parameters of the Gaussian mixture model, and obtains the first Gaussian mixture model. The server predicts the first sentiment labels of the second text data with the first Gaussian mixture model, and then, combining the first text data labeled with sentiment labels and the second text data carrying the first sentiment labels, again performs supervised training of the Gaussian mixture model, thereby updating the parameters of the Gaussian mixture model and obtaining the second Gaussian mixture model. After obtaining the second Gaussian mixture model, the server predicts the sentiment types of the second text data again with the second Gaussian mixture model, obtaining the second sentiment labels of the second text data. By counting the second text data whose first and second sentiment labels are inconsistent, the server determines the ratio of that count to the total number of second text data; when the ratio is less than the preset threshold, the second Gaussian mixture model is considered to have converged and is determined as the target Gaussian mixture model.
Further, in one embodiment, after the step of obtaining the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data, the method further comprises:
when the ratio is greater than or equal to the preset threshold, determining the second Gaussian mixture model as the first Gaussian mixture model and the second sentiment labels as the first sentiment labels, and jumping back to the step of performing supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain the second Gaussian mixture model.
When the ratio of the number of second text data whose first and second sentiment labels are inconsistent to the total number of second text data is greater than or equal to the preset threshold, the second Gaussian mixture model is considered not to have converged. The server determines the second Gaussian mixture model as the first Gaussian mixture model, determines the second sentiment labels of the second text data as the first sentiment labels of the second text data, and then again combines the first texts labeled with sentiment labels and the second texts carrying the first sentiment labels to perform supervised training of the Gaussian mixture model, updating the parameters of the Gaussian mixture model and obtaining the second Gaussian mixture model. In this embodiment, when the number of second texts whose sentiment labels differ between two consecutive rounds is greater than the preset value, the server again performs supervised training of the Gaussian mixture model with the first texts labeled with sentiment labels and the second texts carrying the sentiment labels predicted in the previous round, until the proportion of second texts whose predicted sentiment labels change between two consecutive rounds is less than the preset threshold.
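The retrain-until-agreement procedure above can be sketched as a generic self-training loop. The `fit` and `predict` callables are hypothetical stand-ins for the supervised Gaussian-mixture training and label-prediction steps of the embodiment.

```python
def self_train(fit, predict, labeled, labels, unlabeled,
               threshold=0.05, max_rounds=50):
    """Iterative semi-supervised loop: retrain on the labeled data plus
    the previous round's predicted (pseudo) labels until the fraction of
    unlabeled items whose label changed drops below the threshold."""
    model = fit(labeled, labels)                         # first, supervised, model
    prev = predict(model, unlabeled)                     # first sentiment labels
    for _ in range(max_rounds):
        model = fit(labeled + unlabeled, labels + prev)  # retrain with pseudo-labels
        curr = predict(model, unlabeled)                 # second sentiment labels
        changed = sum(a != b for a, b in zip(prev, curr)) / len(unlabeled)
        if changed < threshold:                          # converged
            return model
        prev = curr                                      # relabel and repeat
    return model
```

For example, with a toy nearest-class-mean classifier over scalar features, the loop converges in one round on well-separated data.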
In one embodiment, as shown in Fig. 3, the training steps of the Gaussian mixture model are provided. The step of performing semi-supervised training of the pre-constructed Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data and the latent feature vectors of the second text data, and determining the trained Gaussian mixture model as the target Gaussian mixture model, comprises the following steps:
Step S310: perform supervised training of the Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data, to obtain the first Gaussian mixture model.
In the text training data, the first text data comprises l items of sentiment-labeled text data and the second text data comprises u items of text data without sentiment labels, so that D = {(x_1, y_1), (x_2, y_2), …, (x_l, y_l), x_{l+1}, x_{l+2}, …, x_{l+u}}, where x_1, x_2, …, x_l denote the latent feature vectors corresponding to the l items of first text data, y_1, y_2, …, y_l denote the labels corresponding to those l items, and x_{l+1}, x_{l+2}, …, x_{l+u} denote the latent feature vectors corresponding to the u items of second text data. The server first establishes the model with the l items of first text data x_1, x_2, …, x_l to obtain the parameters of the Gaussian mixture model.
Assume the sentiment types of the text data comprise m classes, and the first text data and the second text data total n items. Then γ_ij denotes the probability that text data x_j belongs to the i-th class (the i-th Gaussian component, i.e. the sentiment class corresponding to the text data); for a sentiment-labeled sample, γ_ij is 1 for the class indicated by its sentiment label and 0 for the remaining classes.
The probability distribution of the Gaussian mixture model is shown in the following formula (1):
p(x) = Σ_{i=1}^{m} π_i · N(x | μ_i, Σ_i)    (1)
In the formula, π is the mixing coefficient. Since the latent features are multidimensional, the problem is a multivariate Gaussian distribution; in the Gaussian mixture model, x is the latent feature vector of a text data item, μ is the mean vector of x, and Σ is the covariance matrix.
The initial parameters π_i, μ_i, Σ_i of the Gaussian mixture model are solved from the sentiment-labeled first text data; the initial values of these three parameters are calculated as shown in the following formulas (2)-(4), where l_i denotes the number of first text data items labeled with the i-th class:
π_i = l_i / l    (2)
μ_i = (1 / l_i) · Σ_{y_j = i} x_j    (3)
Σ_i = (1 / l_i) · Σ_{y_j = i} (x_j − μ_i)(x_j − μ_i)^T    (4)
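The class-wise initialisation of π_i, μ_i, Σ_i from the labeled data can be sketched as follows. Scalar features are used for brevity, so each covariance matrix reduces to a single variance; the patent uses multivariate latent feature vectors.

```python
def init_gmm_params(features, labels, num_classes):
    """Class-wise maximum-likelihood initialisation of a Gaussian
    mixture from labeled data, as in step S310 (1-D sketch)."""
    pi, mu, var = [], [], []
    n = len(features)
    for i in range(num_classes):
        xs = [x for x, y in zip(features, labels) if y == i]
        pi.append(len(xs) / n)                               # mixing coefficient
        m = sum(xs) / len(xs)                                # class mean
        mu.append(m)
        var.append(sum((x - m) ** 2 for x in xs) / len(xs))  # class variance
    return pi, mu, var
```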
Step S320: input the latent feature vectors of the second text data into the first Gaussian mixture model, to obtain the first sentiment labels of the second text data.
In this step, after the initial parameter calculation is complete, the server estimates the first sentiment labels of the second text data with the first Gaussian mixture model constructed from the initial parameters. Specifically, the server estimates with the following formula (5) the probability γ_i that a second text data item belongs to the i-th sentiment class, and determines the corresponding first sentiment label from the value of γ_i:
γ_i = π_i · N(x | μ_i, Σ_i) / Σ_{k=1}^{m} π_k · N(x | μ_k, Σ_k)    (5)
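The label-estimation step of formula (5) is the E-step of EM: compute the posterior probability γ_i of each Gaussian component for a data item and pick the largest. The sketch below uses 1-D Gaussians for brevity; the patent works with multivariate Gaussians over latent feature vectors.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a 1-D Gaussian N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, pi, mu, var):
    """Formula (5): posterior probability that x belongs to each
    Gaussian component (1-D sketch)."""
    weighted = [p * normal_pdf(x, m, v) for p, m, v in zip(pi, mu, var)]
    total = sum(weighted)
    return [w / total for w in weighted]

def predict_label(x, pi, mu, var):
    """Sentiment label = component with the largest responsibility."""
    g = responsibilities(x, pi, mu, var)
    return max(range(len(g)), key=g.__getitem__)
```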
Step S330: perform supervised training of the first Gaussian mixture model using the latent feature vectors and sentiment labels of the first text data together with the latent feature vectors and first sentiment labels of the second text data, to obtain the second Gaussian mixture model.
In this step, the server again solves for the updated parameters π_i', μ_i', Σ_i' of the Gaussian mixture model from the sentiment-labeled first text data and the second text data, and uses the parameters π_i', μ_i', Σ_i' to update the first Gaussian mixture model into the second Gaussian mixture model.
Step S340: input the latent feature vectors of the second text data into the second Gaussian mixture model, to obtain the second sentiment labels of the second text data.
In this step, the second mixed Gauss model constructed by server by utilizing undated parameter estimates the second text data
Second affective tag.Specifically, server by utilizing following formula estimates that the second text data belongs to the probability value of the i-th class affective style
γi, thus according to γiValue determines corresponding first affective tag.
Step S350: obtain the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data. When the ratio is less than a preset threshold, jump to step S360; when the ratio is greater than or equal to the preset threshold, determine the second mixed Gauss model as the first mixed Gauss model, determine the second affective tag of the second text data as the first affective tag of the second text data, and return to step S330.
Step S360: the second mixed Gauss model is determined as the target mixed Gauss model.
In the present embodiment, the server first builds a preliminary mixed Gauss model from the first text data, which carries affective tags, and uses it to predict affective tags for the second text data. It then retrains the mixed Gauss model jointly on the first text data with its affective tags and the second text data with its predicted affective tags (the M step), and predicts the affective tags of the second text data again with the updated mixed Gauss model (the E step). The E step and M step are repeated until the proportion of affective tags of the second text data that change between two consecutive predictions is less than the preset threshold, at which point training of the mixed Gauss model is complete and the final mixed Gauss model is determined as the target mixed Gauss model. In this way, training of the mixed Gauss model on both the labeled data and the unlabeled data is completed using the expectation-maximization (EM) algorithm, which effectively improves the accuracy of the mixed Gauss model in classifying the sentiment of comment text data, effectively reduces the dependence of the text emotion classification model on labeled-data resources, and lowers modeling cost.
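The E/M loop just described amounts to hard-label (classification) EM: fit on labeled data, pseudo-label the unlabeled data, refit on the union, and stop once the pseudo-labels stabilise. A self-contained sketch under that reading follows; the helper names, regularisation term and iteration cap are illustrative, not the patent's exact update rules:

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_semi_supervised_gmm(X_lab, y_lab, X_unl, n_classes,
                            threshold=0.01, max_iter=50, reg=1e-6):
    def fit(X, y):                                # M step with hard labels
        D = X.shape[1]
        pi, mu, sig = [], [], []
        for i in range(n_classes):
            Xi = X[y == i]
            pi.append(len(Xi) / len(X))
            mu.append(Xi.mean(axis=0))
            d = Xi - mu[-1]
            sig.append(d.T @ d / len(Xi) + reg * np.eye(D))
        return np.array(pi), np.array(mu), np.array(sig)

    def predict(X, pi, mu, sig):                  # E step: hard labels
        logp = np.stack([np.log(pi[i]) +
                         multivariate_normal.logpdf(X, mu[i], sig[i])
                         for i in range(n_classes)], axis=1)
        return logp.argmax(axis=1)

    pi, mu, sig = fit(X_lab, y_lab)               # initial model (step S310)
    labels = predict(X_unl, pi, mu, sig)          # first affective tags (S320)
    for _ in range(max_iter):
        X_all = np.vstack([X_lab, X_unl])
        y_all = np.concatenate([y_lab, labels])
        pi, mu, sig = fit(X_all, y_all)           # joint retraining (S330)
        new_labels = predict(X_unl, pi, mu, sig)  # re-predict tags (S340)
        changed = np.mean(new_labels != labels)   # change ratio (S350)
        labels = new_labels
        if changed < threshold:                   # below threshold: stop (S360)
            break
    return pi, mu, sig
```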
In one embodiment, as shown in Fig. 4, training steps of the Text character extraction model are provided. Before the step of inputting the text in the text training set into the Text character extraction model, the method further includes:
Step S410: obtain news corpus data, the news corpus data including multiple news corpus samples and the type label of each news corpus sample.
In this step, the server can obtain multi-class news corpus data as training samples, each training sample having a news corpus sample and its corresponding type label. For example, the news corpus data may include finance news corpus data, law news corpus data, culture news corpus data and sports news corpus data.
Step S420: obtain the term vector of each vocabulary in each news corpus sample.
Specifically, the server can apply processing operations such as stop-word removal and Chinese word segmentation to the news corpus samples to obtain the vocabulary in each news corpus sample, and use Word2Vec to obtain the term vector corresponding to each vocabulary in the news corpus samples. Optionally, the dimension of the term vector corresponding to each vocabulary in the news corpus samples is set to 300.
Step S430: supervised training is performed on a pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample.
The server takes the term vectors of the news corpus samples as the input data of the convolutional neural network model and the type labels of the news corpus samples as the output data, and inputs them to the convolutional neural network model for supervised training. Optionally, the convolutional neural network model may be configured with one-dimensional convolutional layers of lengths 2, 3 and 4, each with 128 channels.
Step S440: extract the characteristic parameters of the convolutional layers in the trained convolutional neural network model, and generate the Text character extraction model according to the characteristic parameters of the convolutional layers.
After the convolutional neural network model is trained, the server extracts the characteristic parameters of the convolutional layers to generate the Text character extraction model. Specifically, the server can replace the output layer in the convolutional neural network model with a fully connected layer to generate the Text character extraction model; that is, only the characteristic parameters of the convolutional layers are retained, and the text hidden feature vector obtained by the trained convolutional layers is output through the fully connected layer.
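The feature-extraction part described above (convolutional layers kept, output layer removed so the latent vector is exposed) can be sketched in NumPy. The kernel widths 2/3/4 with 128 channels follow the optional configuration mentioned earlier; the random weights, ReLU activation and max-over-time pooling are illustrative assumptions rather than details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

class TextCNNFeatures:
    """Hidden-feature extractor: 1-D convolutions of widths 2, 3 and 4
    (128 channels each), max-pooled over time and concatenated into one
    384-dimensional vector. Weights here are random; in the described
    method they come from supervised training on news-corpus labels."""

    def __init__(self, emb_dim=300, widths=(2, 3, 4), channels=128):
        self.filters = [rng.normal(0, 0.1, (channels, w, emb_dim))
                        for w in widths]

    def __call__(self, word_vectors):
        # word_vectors: (seq_len, emb_dim) matrix of a text's term vectors
        feats = []
        for W in self.filters:
            C, w, _ = W.shape
            L = word_vectors.shape[0] - w + 1
            conv = np.stack([np.tensordot(word_vectors[t:t + w], W,
                                          axes=([0, 1], [1, 2]))
                             for t in range(L)])           # (L, C)
            feats.append(np.maximum(conv, 0).max(axis=0))  # ReLU + max pool
        return np.concatenate(feats)                       # (3 * channels,)
```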
The present embodiment describes the construction process of the Text character extraction model: a convolutional neural network model is trained using multi-class news corpus data, and the characteristic parameters of the convolutional layers in the convolutional neural network model are extracted to generate the Text character extraction model. The Text character extraction model can obtain the hidden feature vector of a text corpus, and this hidden feature vector provides more text-relevant features for the subsequent construction of the text emotion classification model and for text sentiment classification. This effectively reduces the dependence of sentiment-classification-model construction on labeled-data resources, lowers modeling cost, improves the generalization of the text emotion classification model, and effectively improves the accuracy of sentiment classification of comment text data.
In one embodiment, the step of performing supervised training on the pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample comprises: inputting the term vector of each news corpus sample into the convolutional neural network model to obtain the classification result of each news corpus sample; and adjusting the parameters of the convolutional neural network model by back-propagation and gradient descent according to the type label and classification result corresponding to each news corpus sample.
In this implementation, the server takes the term vectors of the news corpus samples as the input data of the convolutional neural network model; the convolutional neural network model analyzes and learns the data characteristics of the term vectors and predicts the classification results of the news corpus samples. The server can compute the loss between the classification result of a news corpus sample and its original type label, and adjust the parameters of the neurons in each layer of the convolutional neural network model in reverse by gradient descent, completing the supervised training. By adjusting the parameters of the convolutional neural network model, the accuracy of the convolutional neural network model in classifying news corpus text is improved, and at the same time the accuracy of the hidden feature vectors obtained by the convolutional layers is effectively improved. The text-data hidden feature vectors subsequently obtained from the convolutional layers of the convolutional neural network model can then provide more text-relevant features, which effectively reduces the dependence of sentiment-classification-model construction on labeled-data resources, lowers modeling cost, improves the generalization of the text emotion classification model, and improves the accuracy of sentiment classification of comment text data.
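The loss-and-adjust cycle described here can be illustrated on a single softmax output layer; this is a deliberate simplification of the full convolutional network, and the cross-entropy loss, learning rate and epoch count are standard illustrative choices rather than settings taken from the patent:

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Gradient-descent training of a softmax layer: forward pass,
    cross-entropy loss against the true type labels, backward pass,
    parameter update."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, (X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                  # one-hot type labels
    for _ in range(epochs):
        logits = X @ W + b                    # forward pass
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)     # softmax probabilities
        G = (P - Y) / len(X)                  # gradient of cross-entropy
        W -= lr * (X.T @ G)                   # gradient-descent update
        b -= lr * G.sum(axis=0)
    return W, b
```

A full CNN applies the same rule, with the gradient back-propagated through the convolutional layers as well.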
In one embodiment, after the step of generating the text emotion classification model according to the Text character extraction model and the target mixed Gauss model, the method further includes: obtaining a text to be predicted, and obtaining the term vector of each vocabulary in the text to be predicted; inputting the term vectors of the text to be predicted into the Text character extraction model to obtain the hidden feature vector of the text to be predicted; and inputting the hidden feature vector into the target mixed Gauss model to obtain the affective style of the text to be predicted.
In the present embodiment, the hidden feature vector of a text is obtained with the Text character extraction model, and the hidden feature vector of the text is then input to the target mixed Gauss model to obtain the sentiment classification of the text. This effectively avoids the loss of sentiment-classification accuracy caused by short texts and improves the accuracy of sentiment classification of comment text data.
It should be understood that although the steps in the flowcharts of Fig. 2 to Fig. 4 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2 to Fig. 4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 5, a construction device of a text emotion classification model is provided, comprising: a training data acquisition module, a hidden feature acquisition module, a model training module and a classification model generation module, in which:
the training data acquisition module 510 is used for obtaining text training data, the text training data including first text data, the affective tag of the first text data, and second text data;
the hidden feature acquisition module 520 is used for obtaining the term vector corresponding to each vocabulary in the first text data and the second text data, and inputting the term vectors of each first text data and each second text data into a pre-trained Text character extraction model to obtain the hidden feature vector of each first text data and each second text data;
the model training module 530 is used for performing semi-supervised learning training on a pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model;
the classification model generation module 540 is used for generating the text emotion classification model according to the Text character extraction model and the target mixed Gauss model.
In one embodiment, the model training module 530 is used for: performing supervised training on the mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data to obtain a first mixed Gauss model; inputting the hidden feature vector of the second text data to the first mixed Gauss model to obtain the first affective tag of the second text data; performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain a second mixed Gauss model; inputting the hidden feature vector of the second text data to the second mixed Gauss model to obtain the second affective tag of the second text data; and obtaining the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data, and determining the second mixed Gauss model as the target mixed Gauss model when the ratio is less than the preset threshold.
In one embodiment, the model training module 530 is further used for determining, when the ratio is greater than or equal to the preset threshold, the second mixed Gauss model as the first mixed Gauss model and the second affective tag as the first affective tag, so that the model training module 530 again performs supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain the second mixed Gauss model.
In one embodiment, the device further includes a Text character extraction model acquisition module, used for: obtaining news corpus data, the news corpus data including multiple news corpus samples and the type label of each news corpus sample; obtaining the term vector of each vocabulary in each news corpus sample; performing supervised training on a pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample; and extracting the characteristic parameters of the convolutional layers in the trained convolutional neural network model and generating the Text character extraction model according to the characteristic parameters of the convolutional layers.
In one embodiment, the Text character extraction model acquisition module is used for inputting the term vector of each news corpus sample into the convolutional neural network model to obtain the classification result of each news corpus sample, and adjusting the parameters of the convolutional neural network model by back-propagation and gradient descent according to the type label and classification result corresponding to each news corpus sample.
In one embodiment, the device further includes a text emotion classification module, used for obtaining a text to be predicted and obtaining the term vector of each vocabulary in the text to be predicted; inputting the term vectors of the text to be predicted into the Text character extraction model to obtain the hidden feature vector of the text to be predicted; and inputting the hidden feature vector into the target mixed Gauss model to obtain the affective style of the text to be predicted.
For specific limitations on the construction device of the text emotion classification model, reference may be made to the limitations on the construction method of the text emotion classification model above, which are not repeated here. Each module in the above construction device of the text emotion classification model may be implemented in whole or in part by software, hardware or a combination thereof. Each of the above modules may be embedded in hardware form in, or independent of, the processor in the computer equipment, or may be stored in software form in the memory of the computer equipment, so that the processor can call and execute the operations corresponding to each of the above modules.
In one embodiment, a computer equipment is provided, including a memory and a processor. The memory stores a computer program, and the processor implements the following steps when executing the computer program:
obtaining text training data, the text training data including first text data, the affective tag of the first text data, and second text data;
obtaining the term vector corresponding to each vocabulary in the first text data and the second text data, and inputting the term vectors of each first text data and each second text data into a pre-trained Text character extraction model to obtain the hidden feature vector of each first text data and each second text data;
performing semi-supervised learning training on a pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model; and
generating the text emotion classification model according to the Text character extraction model and the target mixed Gauss model.
In one embodiment, when the processor executes the computer program to implement the step of performing semi-supervised learning training on the pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model, the following steps are implemented: performing supervised training on the mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data to obtain a first mixed Gauss model; inputting the hidden feature vector of the second text data to the first mixed Gauss model to obtain the first affective tag of the second text data; performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data to obtain a second mixed Gauss model; inputting the hidden feature vector of the second text data to the second mixed Gauss model to obtain the second affective tag of the second text data; and obtaining the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data, and determining the second mixed Gauss model as the target mixed Gauss model when the ratio is less than the preset threshold.
In one embodiment, the processor further implements the following steps when executing the computer program: when the ratio is greater than or equal to the preset threshold, determining the second mixed Gauss model as the first mixed Gauss model and the second affective tag as the first affective tag, and jumping to the step of performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain the second mixed Gauss model.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining news corpus data, the news corpus data including multiple news corpus samples and the type label of each news corpus sample; obtaining the term vector of each vocabulary in each news corpus sample; performing supervised training on a pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample; and extracting the characteristic parameters of the convolutional layers in the trained convolutional neural network model and generating the Text character extraction model according to the characteristic parameters of the convolutional layers.
In one embodiment, when the processor executes the computer program to implement the step of performing supervised training on the pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample, the following steps are implemented: inputting the term vector of each news corpus sample into the convolutional neural network model to obtain the classification result of each news corpus sample; and adjusting the parameters of the convolutional neural network model by back-propagation and gradient descent according to the type label and classification result corresponding to each news corpus sample.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining a text to be predicted, and obtaining the term vector of each vocabulary in the text to be predicted; inputting the term vectors of the text to be predicted into the Text character extraction model to obtain the hidden feature vector of the text to be predicted; and inputting the hidden feature vector into the target mixed Gauss model to obtain the affective style of the text to be predicted.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored. The computer program implements the following steps when executed by a processor:
obtaining text training data, the text training data including first text data, the affective tag of the first text data, and second text data;
obtaining the term vector corresponding to each vocabulary in the first text data and the second text data, and inputting the term vectors of each first text data and each second text data into a pre-trained Text character extraction model to obtain the hidden feature vector of each first text data and each second text data;
performing semi-supervised learning training on a pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model; and
generating the text emotion classification model according to the Text character extraction model and the target mixed Gauss model.
In one embodiment, when the computer program is executed by a processor to implement the step of performing semi-supervised learning training on the pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model, the following steps are implemented: performing supervised training on the mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data to obtain a first mixed Gauss model; inputting the hidden feature vector of the second text data to the first mixed Gauss model to obtain the first affective tag of the second text data; performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data to obtain a second mixed Gauss model; inputting the hidden feature vector of the second text data to the second mixed Gauss model to obtain the second affective tag of the second text data; and obtaining the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data, and determining the second mixed Gauss model as the target mixed Gauss model when the ratio is less than the preset threshold.
In one embodiment, the computer program further implements the following steps when executed by a processor: when the ratio is greater than or equal to the preset threshold, determining the second mixed Gauss model as the first mixed Gauss model and the second affective tag as the first affective tag, and jumping to the step of performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain the second mixed Gauss model.
In one embodiment, the computer program further implements the following steps when executed by a processor: obtaining news corpus data, the news corpus data including multiple news corpus samples and the type label of each news corpus sample; obtaining the term vector of each vocabulary in each news corpus sample; performing supervised training on a pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample; and extracting the characteristic parameters of the convolutional layers in the trained convolutional neural network model and generating the Text character extraction model according to the characteristic parameters of the convolutional layers.
In one embodiment, when the computer program is executed by a processor to implement the step of performing supervised training on the pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample, the following steps are implemented: inputting the term vector of each news corpus sample into the convolutional neural network model to obtain the classification result of each news corpus sample; and adjusting the parameters of the convolutional neural network model by back-propagation and gradient descent according to the type label and classification result corresponding to each news corpus sample.
In one embodiment, the computer program further implements the following steps when executed by a processor: obtaining a text to be predicted, and obtaining the term vector of each vocabulary in the text to be predicted; inputting the term vectors of the text to be predicted into the Text character extraction model to obtain the hidden feature vector of the text to be predicted; and inputting the hidden feature vector into the target mixed Gauss model to obtain the affective style of the text to be predicted.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the processes of the embodiments of each of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms,
such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (10)
1. A construction method of a text emotion classification model, the method comprising:
obtaining text training data, the text training data including first text data, the affective tag of the first text data and second text data;
obtaining the term vector corresponding to each vocabulary in the first text data and the second text data, and inputting the term vectors of each first text data and the second text data into a pre-trained Text character extraction model to obtain the hidden feature vector of each first text data and the second text data;
performing semi-supervised learning training on a pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as a target mixed Gauss model; and
generating the text emotion classification model according to the Text character extraction model and the target mixed Gauss model.
2. The method according to claim 1, wherein the step of performing semi-supervised learning training on the pre-constructed mixed Gauss model using the hidden feature vector of the first text data together with the affective tag of the first text data and the hidden feature vector of the second text data, and determining the trained mixed Gauss model as the target mixed Gauss model, comprises:
performing supervised training on the mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data to obtain a first mixed Gauss model;
inputting the hidden feature vector of the second text data to the first mixed Gauss model to obtain the first affective tag of the second text data;
performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain a second mixed Gauss model;
inputting the hidden feature vector of the second text data to the second mixed Gauss model to obtain the second affective tag of the second text data; and
obtaining the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data, and determining the second mixed Gauss model as the target mixed Gauss model when the ratio is less than a preset threshold.
3. The method according to claim 2, wherein after the step of obtaining the ratio of the number of second text data whose first affective tag and second affective tag are inconsistent to the total number of second text data, the method further comprises:
when the ratio is greater than or equal to the preset threshold, determining the second mixed Gauss model as the first mixed Gauss model and the second affective tag as the first affective tag, and jumping to the step of performing supervised training on the first mixed Gauss model using the hidden feature vector of the first text data and the affective tag of the first text data together with the hidden feature vector of the second text data and the first affective tag of the second text data, to obtain the second mixed Gauss model.
4. The method according to claim 1, wherein before the step of inputting the term vectors of each text data into the pre-trained Text character extraction model, the method further comprises:
obtaining news corpus data, the news corpus data including multiple news corpus samples and the type label of each news corpus sample;
obtaining the term vector of each vocabulary in each news corpus sample;
performing supervised training on a pre-constructed convolutional neural network model using the term vector and type label of each news corpus sample; and
extracting the characteristic parameters of the convolutional layers in the trained convolutional neural network model, and generating the Text character extraction model according to the characteristic parameters of the convolutional layers.
5. The method according to claim 4, wherein the step of performing supervised training on the pre-constructed convolutional neural network model using the word vectors and type labels of the news corpus samples comprises:
inputting the word vectors of each news corpus sample into the convolutional neural network model to obtain a classification result for each news corpus sample;
adjusting the parameters of the convolutional neural network model by back propagation and gradient descent according to the type label and classification result of each news corpus sample.
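Claim 5's training step, reduced to its smallest runnable form: a forward pass produces classification results, the mismatch with the type labels is propagated back, and gradient descent adjusts the parameters. A single-layer softmax classifier stands in for the convolutional network here, an illustrative simplification with invented data, not the claimed model:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gradient_step(W, X, y_onehot, lr=0.1):
    """One back-propagation / gradient-descent update: compare predictions
    with labels, propagate the error back to the weights, step downhill."""
    probs = softmax(X @ W)                      # forward pass: class results
    grad = X.T @ (probs - y_onehot) / len(X)    # error propagated back
    return W - lr * grad                        # descend the gradient

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 4))            # 16 samples, 4 features
y = rng.integers(0, 3, size=16)         # 3 type labels
Y = np.eye(3)[y]
W = np.zeros((4, 3))

loss_before = -np.mean(np.log(softmax(X @ W)[np.arange(16), y]))
for _ in range(50):
    W = gradient_step(W, X, Y)
loss_after = -np.mean(np.log(softmax(X @ W)[np.arange(16), y]))
```

In the real CNN the same `probs - labels` error signal is propagated through the pooling and convolutional layers as well; the update rule at each layer is identical in form.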
6. The method according to claim 1, wherein after the step of generating the text emotion classification model from the text feature extraction model and the target Gaussian mixture model, the method further comprises:
obtaining a text to be predicted, and obtaining the word vector of each word in the text to be predicted;
inputting the word vectors of the text to be predicted into the text feature extraction model to obtain the latent feature vector of the text to be predicted;
inputting the latent feature vector into the target Gaussian mixture model to obtain the emotion type of the text to be predicted.
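Claim 6's prediction step amounts to asking which Gaussian component of the target mixture model gives the latent feature vector the highest likelihood. A diagonal-covariance sketch with invented 2-D component means for three hypothetical emotion classes:

```python
import numpy as np

def gmm_predict(latent_vec, class_means, class_vars):
    """Assign the latent feature vector to the emotion class whose
    Gaussian component yields the highest log-likelihood (diagonal
    covariance) -- the prediction step of the claim in miniature."""
    log_liks = []
    for mu, var in zip(class_means, class_vars):
        ll = -0.5 * np.sum((latent_vec - mu) ** 2 / var
                           + np.log(2 * np.pi * var))
        log_liks.append(ll)
    return int(np.argmax(log_liks))

# Hypothetical 3-emotion model over 2-D latent features.
means = [np.array([0.0, 0.0]),    # class 0
         np.array([3.0, 3.0]),    # class 1
         np.array([-3.0, 3.0])]   # class 2
variances = [np.ones(2), np.ones(2), np.ones(2)]

emotion = gmm_predict(np.array([2.8, 3.2]), means, variances)  # near class 1
```

With equal covariances this reduces to nearest-mean assignment; the full mixture model additionally weighs each component's prior and covariance when scoring the vector.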
7. A text emotion classification device, wherein the device comprises:
a training data obtaining module, configured to obtain text training data, the text training data comprising first text data, emotion labels of the first text data, and second text data;
a latent feature obtaining module, configured to obtain the word vector corresponding to each word in the first text data and the second text data, and to input the word vectors of the first text data and the second text data into a pre-trained text feature extraction model to obtain the latent feature vectors of the first text data and the second text data;
a model training module, configured to perform semi-supervised training on a pre-constructed Gaussian mixture model using the latent feature vectors and emotion labels of the first text data together with the latent feature vectors of the second text data, and to determine the trained Gaussian mixture model as the target Gaussian mixture model;
a classification model generation module, configured to generate a text emotion classification model from the text feature extraction model and the target Gaussian mixture model.
8. The device according to claim 7, wherein the model training module is configured to:
perform supervised training on the Gaussian mixture model using the latent feature vectors and emotion labels of the first text data, to obtain a first Gaussian mixture model;
input the latent feature vectors of the second text data into the first Gaussian mixture model, to obtain first emotion labels of the second text data;
perform supervised training on the first Gaussian mixture model using the latent feature vectors and emotion labels of the first text data together with the latent feature vectors and first emotion labels of the second text data, to obtain a second Gaussian mixture model;
input the latent feature vectors of the second text data into the second Gaussian mixture model, to obtain second emotion labels of the second text data;
calculate the ratio of the number of items of second text data whose first and second emotion labels are inconsistent to the total number of items of second text data, and, when the ratio is less than a preset threshold, determine the second Gaussian mixture model as the target Gaussian mixture model.
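Putting claim 8 together: a supervised warm start on the labeled (first) text data, pseudo-labelling of the unlabeled (second) text data, retraining on both, relabelling, and stopping once the labels stabilise. This numpy sketch replaces the full Gaussian mixture with unit-variance components so the whole loop stays short; the data, cluster centres, and 0.05 threshold are all illustrative assumptions:

```python
import numpy as np

def fit_means(X, y, n_classes):
    """Per-class component means (unit-variance Gaussians keep it small)."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(X, means):
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)   # nearest component = highest likelihood

def semi_supervised_fit(X_lab, y_lab, X_unlab,
                        n_classes=2, threshold=0.05, max_iter=20):
    means = fit_means(X_lab, y_lab, n_classes)      # supervised warm start
    first = predict(X_unlab, means)                 # first pseudo-labels
    for _ in range(max_iter):
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, first])
        means = fit_means(X_all, y_all, n_classes)  # retrain on both sets
        second = predict(X_unlab, means)            # second pseudo-labels
        if np.mean(first != second) < threshold:    # labels stabilised
            return means                            # target model
        first = second                              # iterate again
    return means

# Two well-separated synthetic clusters standing in for latent features.
rng = np.random.default_rng(2)
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(5, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
target_means = semi_supervised_fit(X_lab, y_lab, X_unlab)
```

The unlabeled data refines the component estimates beyond what the ten labeled samples alone could support, which is the point of the semi-supervised step in the claim.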
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910012242.5A CN109815331A (en) | 2019-01-07 | 2019-01-07 | Construction method, device and the computer equipment of text emotion disaggregated model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910012242.5A CN109815331A (en) | 2019-01-07 | 2019-01-07 | Construction method, device and the computer equipment of text emotion disaggregated model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109815331A true CN109815331A (en) | 2019-05-28 |
Family
ID=66604040
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910012242.5A Pending CN109815331A (en) | 2019-01-07 | 2019-01-07 | Construction method, device and the computer equipment of text emotion disaggregated model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815331A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144451A (en) * | 2019-12-10 | 2020-05-12 | 东软集团股份有限公司 | Training method, device and equipment of image classification model |
CN111753197A (en) * | 2020-06-18 | 2020-10-09 | 达而观信息科技(上海)有限公司 | News element extraction method and device, computer equipment and storage medium |
CN112256369A (en) * | 2020-10-20 | 2021-01-22 | 北京达佳互联信息技术有限公司 | Content display method, device and system and storage medium |
CN112329436A (en) * | 2019-07-30 | 2021-02-05 | 北京国双科技有限公司 | Legal document element analysis method and system |
CN112948575A (en) * | 2019-12-11 | 2021-06-11 | 京东数字科技控股有限公司 | Text data processing method, text data processing device and computer-readable storage medium |
CN113139051A (en) * | 2021-03-29 | 2021-07-20 | 广东外语外贸大学 | Text classification model training method, text classification method, device and medium |
CN113360654A (en) * | 2021-06-23 | 2021-09-07 | 深圳平安综合金融服务有限公司 | Text classification method and device, electronic equipment and readable storage medium |
WO2023103308A1 (en) * | 2021-12-07 | 2023-06-15 | 苏州浪潮智能科技有限公司 | Model training method and apparatus, text prediction method and apparatus, and electronic device and medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318242A (en) * | 2014-10-08 | 2015-01-28 | 中国人民解放军空军工程大学 | High-efficiency SVM active half-supervision learning algorithm |
US20170052946A1 (en) * | 2014-06-06 | 2017-02-23 | Siyu Gu | Semantic understanding based emoji input method and device |
US20170308790A1 (en) * | 2016-04-21 | 2017-10-26 | International Business Machines Corporation | Text classification by ranking with convolutional neural networks |
- 2019-01-07: application CN201910012242.5A filed in CN as CN109815331A (status: Pending)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170052946A1 (en) * | 2014-06-06 | 2017-02-23 | Siyu Gu | Semantic understanding based emoji input method and device |
CN104318242A (en) * | 2014-10-08 | 2015-01-28 | 中国人民解放军空军工程大学 | High-efficiency SVM active half-supervision learning algorithm |
US20170308790A1 (en) * | 2016-04-21 | 2017-10-26 | International Business Machines Corporation | Text classification by ranking with convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
LI YUQING; LI XIN; HAN XU; SONG DANDAN; LIAO LEJIAN: "A bilingual-dictionary-based approach to multi-class sentiment analysis of microblogs", Acta Electronica Sinica, vol. 44, no. 09, pages 2068 - 2073 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329436A (en) * | 2019-07-30 | 2021-02-05 | 北京国双科技有限公司 | Legal document element analysis method and system |
CN111144451A (en) * | 2019-12-10 | 2020-05-12 | 东软集团股份有限公司 | Training method, device and equipment of image classification model |
CN111144451B (en) * | 2019-12-10 | 2023-08-25 | 东软集团股份有限公司 | Training method, device and equipment for image classification model |
CN112948575A (en) * | 2019-12-11 | 2021-06-11 | 京东数字科技控股有限公司 | Text data processing method, text data processing device and computer-readable storage medium |
CN112948575B (en) * | 2019-12-11 | 2023-09-26 | 京东科技控股股份有限公司 | Text data processing method, apparatus and computer readable storage medium |
CN111753197A (en) * | 2020-06-18 | 2020-10-09 | 达而观信息科技(上海)有限公司 | News element extraction method and device, computer equipment and storage medium |
CN111753197B (en) * | 2020-06-18 | 2024-04-05 | 达观数据有限公司 | News element extraction method, device, computer equipment and storage medium |
CN112256369A (en) * | 2020-10-20 | 2021-01-22 | 北京达佳互联信息技术有限公司 | Content display method, device and system and storage medium |
CN113139051A (en) * | 2021-03-29 | 2021-07-20 | 广东外语外贸大学 | Text classification model training method, text classification method, device and medium |
CN113360654A (en) * | 2021-06-23 | 2021-09-07 | 深圳平安综合金融服务有限公司 | Text classification method and device, electronic equipment and readable storage medium |
CN113360654B (en) * | 2021-06-23 | 2024-04-05 | 深圳平安综合金融服务有限公司 | Text classification method, apparatus, electronic device and readable storage medium |
WO2023103308A1 (en) * | 2021-12-07 | 2023-06-15 | 苏州浪潮智能科技有限公司 | Model training method and apparatus, text prediction method and apparatus, and electronic device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109815331A (en) | Construction method, device and the computer equipment of text emotion disaggregated model | |
Kukačka et al. | Regularization for deep learning: A taxonomy | |
CN110232183B (en) | Keyword extraction model training method, keyword extraction device and storage medium | |
US11544474B2 (en) | Generation of text from structured data | |
Deng et al. | Length-controllable image captioning | |
US20210342371A1 (en) | Method and Apparatus for Processing Knowledge Graph | |
CN109446514A (en) | Construction method, device and the computer equipment of news property identification model | |
KR20190085098A (en) | Keyword extraction method, computer device, and storage medium | |
CN108986908A (en) | Interrogation data processing method, device, computer equipment and storage medium | |
CN109543031A (en) | A kind of file classification method based on multitask confrontation study | |
Hou et al. | Bottom-up top-down cues for weakly-supervised semantic segmentation | |
CN109492215A (en) | News property recognition methods, device, computer equipment and storage medium | |
CN108491406B (en) | Information classification method and device, computer equipment and storage medium | |
CN111783993A (en) | Intelligent labeling method and device, intelligent platform and storage medium | |
US20170116521A1 (en) | Tag processing method and device | |
US11238050B2 (en) | Method and apparatus for determining response for user input data, and medium | |
CN108959305A (en) | A kind of event extraction method and system based on internet big data | |
CN110968725B (en) | Image content description information generation method, electronic device and storage medium | |
CN110472049B (en) | Disease screening text classification method, computer device and readable storage medium | |
CN109977394A (en) | Text model training method, text analyzing method, apparatus, equipment and medium | |
CN112395412B (en) | Text classification method, apparatus and computer readable medium | |
Jindal et al. | Offline handwritten Gurumukhi character recognition system using deep learning | |
CN113051914A (en) | Enterprise hidden label extraction method and device based on multi-feature dynamic portrait | |
CN110414005A (en) | Intention recognition method, electronic device, and storage medium | |
Huang et al. | ORDNet: Capturing omni-range dependencies for scene parsing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||