CN114281993B - Text co-emotion prediction system and method - Google Patents
- Publication number
- CN114281993B (application CN202111592897A, filed as CN202111592897.8A)
- Authority
- CN
- China
- Prior art keywords
- polarity
- private
- public
- feature
- emotion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a text co-emotion prediction system and method. The system comprises: a co-emotion private feature encoder, which encodes the private features of co-emotion data into co-emotion private features; a polarity private feature encoder, which encodes the private features of polarity data into polarity private features; a public feature encoder, which encodes the public features of the co-emotion data and of the polarity data into co-emotion public features and polarity public features, respectively; a co-emotion public-private feature fusion module, which fuses the co-emotion public and private features by weighting into a final co-emotion prediction feature expression; a polarity public-private feature fusion module, which fuses the polarity public and private features by weighting into a final polarity classification feature expression; a co-emotion predictor, which predicts from the final co-emotion prediction feature expression to obtain a co-emotion label; and a polarity classifier, which predicts from the final polarity classification feature expression to obtain a polarity label. Based on transfer learning, the system and method use large-scale text emotion classification data to assist co-emotion prediction on small-scale text co-emotion data and improve prediction accuracy.
Description
Technical Field
The invention relates to the field of natural language processing, and in particular to a text co-emotion prediction system and method.
Background
Co-emotion (empathy) is an important component of emotion: it reflects the emotion people produce when they encounter or witness the circumstances of others. The field of co-emotion analysis is therefore closely related to human-computer interaction, sentiment analysis, and other fields, and identifying the co-emotion factors contained in text is highly necessary and of strong research value.
However, the sample sizes of existing text co-emotion datasets are too small. The currently accepted open-source text co-emotion datasets each contain only about one thousand samples, and the existing mainstream co-emotion prediction methods train on these co-emotion datasets alone. With so little data, the trained models generalize poorly and prediction accuracy is low. In sharp contrast, some other sentiment analysis tasks possess abundant training data: for the emotion polarity classification task, for example, there are many open-source datasets containing tens or even hundreds of thousands of training samples. The poor accuracy of text co-emotion prediction caused by the small size of the open-source text co-emotion datasets is therefore a problem to be solved.
In view of this, the present invention has been made.
Disclosure of Invention
The invention aims to provide a text co-emotion prediction system and method that use text emotion classification data, which is available in large sample volumes, to assist text co-emotion prediction and improve its accuracy, thereby solving the technical problems in the prior art.
The aim of the invention is achieved by the following technical scheme:
the embodiment of the invention provides a text co-emotion prediction system, which comprises:
a co-emotion private feature encoder, a polarity private feature encoder, a public feature encoder, a co-emotion public-private feature fusion module, a polarity public-private feature fusion module, a co-emotion predictor, and a polarity classifier; wherein,
the input of the co-emotion private feature encoder is the co-emotion data in a co-emotion dataset; it encodes the private features of the input co-emotion data to obtain co-emotion private features;
the input of the polarity private feature encoder is the polarity data in a polarity dataset; it encodes the private features of the input polarity data to obtain polarity private features;
the public feature encoder receives both the co-emotion data in the co-emotion dataset and the polarity data in the polarity dataset; it encodes the public features of the co-emotion data to obtain co-emotion public features and encodes the public features of the polarity data to obtain polarity public features;
the co-emotion public-private feature fusion module is connected to the output of the co-emotion private feature encoder and the output of the public feature encoder, and fuses, by weighting, the co-emotion private feature output by the co-emotion private feature encoder and the co-emotion public feature output by the public feature encoder into a final co-emotion prediction feature expression;
the polarity public-private feature fusion module is connected to the output of the polarity private feature encoder and the output of the public feature encoder, and fuses, by weighting, the polarity private feature output by the polarity private feature encoder and the polarity public feature output by the public feature encoder into a final polarity classification feature expression;
the co-emotion predictor is connected to the output of the co-emotion public-private feature fusion module and predicts from the final co-emotion prediction feature expression to obtain the corresponding co-emotion label;
the polarity classifier is connected to the output of the polarity public-private feature fusion module and predicts from the final polarity classification feature expression to obtain the corresponding polarity label.
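To make the wiring of the components above concrete, the following is a minimal pure-Python sketch of the co-emotion branch of the pipeline (the polarity branch is symmetric). All function bodies here are illustrative stand-ins, not the patent's implementation: the patent specifies Bi-LSTM/BERT encoders, an attention fusion module, and fully connected predictors.

```python
def private_encode(text):
    # stand-in for the co-emotion private feature encoder (Bi-LSTM/BERT in the patent)
    return [float(len(text)), float(text.count(" "))]

def public_encode(text):
    # stand-in for the shared public feature encoder, applied to both datasets
    return [float(sum(map(ord, text)) % 10), 1.0]

def fuse(private_feat, public_feat, w=0.5):
    # stand-in for the co-emotion public-private fusion module
    # (the patent uses an attention network; a fixed weight w is used here)
    return [w * p + (1.0 - w) * s for p, s in zip(private_feat, public_feat)]

def predict(feat, threshold=10.0):
    # stand-in for the co-emotion predictor (a fully connected network in the patent)
    return 1 if sum(feat) > threshold else 0

def co_emotion_branch(sample):
    private = private_encode(sample)   # co-emotion private features
    public = public_encode(sample)     # co-emotion public features
    fused = fuse(private, public)      # final co-emotion prediction feature expression
    return predict(fused)              # predicted co-emotion label

label = co_emotion_branch("I truly feel for the victims of the flood.")
```

The polarity branch replaces the private encoder, fusion module, and predictor with their polarity counterparts while sharing the same public encoder, which is what enables the transfer between the two datasets.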
The embodiment of the invention also provides a text co-emotion prediction method, which uses the text co-emotion prediction system described above as the text co-emotion prediction model and comprises the following steps:
Step 1, encoding the private features of the co-emotion data in an input co-emotion dataset to obtain co-emotion private features, and encoding the private features of the polarity data in an input polarity dataset to obtain polarity private features;
encoding the public features of the co-emotion data in the input co-emotion dataset to obtain co-emotion public features, and encoding the public features of the polarity data in the input polarity dataset to obtain polarity public features;
Step 2, fusing, by weighting, the co-emotion public features and co-emotion private features obtained in step 1 into a final co-emotion prediction feature expression; and fusing, by weighting, the polarity private features and polarity public features obtained in step 1 into a final polarity classification feature expression;
Step 3, performing co-emotion prediction on the final co-emotion prediction feature expression obtained in step 2 to obtain the corresponding co-emotion label as a prediction result; and performing polarity classification on the final polarity classification feature expression to obtain the corresponding polarity label as a prediction result.
Compared with the prior art, the text co-emotion prediction system and method provided by the invention have the following beneficial effects:
An emotion polarity dataset containing a large amount of sample data is used to assist co-emotion prediction on a text co-emotion dataset with a small amount of sample data, and the interference caused by the different dataset domains and the different prediction labels is eliminated at the same time through transfer learning between tasks whose prediction labels and dataset domains both differ. The method introduces transfer learning into text co-emotion prediction for the first time, performs better than other current methods in the field of text co-emotion prediction, and achieves a better co-emotion prediction effect by using the large-scale emotion polarity dataset to assist the co-emotion dataset.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a text co-emotion prediction system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another text co-emotion prediction system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a further text co-emotion prediction system (with hinge loss modules) according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a text co-emotion prediction system with both adversarial classification and hinge losses according to an embodiment of the present invention.
Detailed Description
The technical scheme in the embodiment of the invention is clearly and completely described below in combination with the specific content of the invention; it will be apparent that the described embodiments are only some embodiments of the invention, but not all embodiments, which do not constitute limitations of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The terms that may be used in the present invention will be described first as follows:
the term "and/or" is intended to mean that either or both may be implemented, e.g., X and/or Y are intended to include both the cases of "X" or "Y" and the cases of "X and Y".
The terms "comprises," "comprising," "includes," "including," "has," "having" or other similar referents are to be construed to cover a non-exclusive inclusion. For example: including a particular feature (e.g., a starting material, component, ingredient, carrier, formulation, material, dimension, part, means, mechanism, apparatus, step, procedure, method, reaction condition, processing condition, parameter, algorithm, signal, data, product or article of manufacture, etc.), should be construed as including not only a particular feature but also other features known in the art that are not explicitly recited.
The text co-emotion prediction system and method provided by the invention are described in detail below. What is not described in detail in the embodiments of the present invention belongs to the prior art known to those skilled in the art. Where specific conditions are not noted in the examples of the present invention, the conditions conventional in the art or suggested by the manufacturer are followed. Reagents or apparatus used in the examples for which no manufacturer is noted are conventional products available commercially.
As shown in fig. 1, an embodiment of the present invention provides a text co-emotion prediction system, including:
a co-emotion private feature encoder, a polarity private feature encoder, a public feature encoder, a co-emotion public-private feature fusion module, a polarity public-private feature fusion module, a co-emotion predictor, and a polarity classifier; wherein,
the input of the co-emotion private feature encoder is the co-emotion data in a co-emotion dataset; it encodes the private features of the input co-emotion data to obtain co-emotion private features;
the input of the polarity private feature encoder is the polarity data in a polarity dataset; it encodes the private features of the input polarity data to obtain polarity private features;
the public feature encoder receives both the co-emotion data in the co-emotion dataset and the polarity data in the polarity dataset; it encodes the public features of the co-emotion data to obtain co-emotion public features and encodes the public features of the polarity data to obtain polarity public features;
the co-emotion public-private feature fusion module is connected to the output of the co-emotion private feature encoder and the output of the public feature encoder, and fuses, by weighting, the co-emotion private feature output by the co-emotion private feature encoder and the co-emotion public feature output by the public feature encoder into a final co-emotion prediction feature expression;
the polarity public-private feature fusion module is connected to the output of the polarity private feature encoder and the output of the public feature encoder, and fuses, by weighting, the polarity private feature output by the polarity private feature encoder and the polarity public feature output by the public feature encoder into a final polarity classification feature expression;
the co-emotion predictor is connected to the output of the co-emotion public-private feature fusion module and predicts from the final co-emotion prediction feature expression to obtain the corresponding co-emotion label;
the polarity classifier is connected to the output of the polarity public-private feature fusion module and predicts from the final polarity classification feature expression to obtain the corresponding polarity label.
As shown in fig. 2, in the above-described text co-emotion prediction system,
the co-emotion private feature encoder also receives the polarity data in the polarity dataset and encodes the private features of the input polarity data to obtain co-emotion polarity private features;
the polarity private feature encoder also receives the co-emotion data in the co-emotion dataset and encodes the private features of the input co-emotion data to obtain polarity co-emotion private features;
the system further comprises: a domain binary classifier, whose input is connected to the outputs of the co-emotion private feature encoder, the polarity private feature encoder and the public feature encoder, and which classifies the source domains of these features, applying an adversarial classification loss to the public features output by the public feature encoder;
as shown in fig. 3, in the above-described text co-emotion prediction system,
the co-emotion private feature encoder also receives the polarity data in the polarity dataset and encodes the private features of the input polarity data to obtain co-emotion polarity private features;
the polarity private feature encoder also receives the co-emotion data in the co-emotion dataset and encodes the private features of the input co-emotion data to obtain polarity co-emotion private features;
the system further comprises: a co-emotion hinge loss module and a polarity hinge loss module; wherein,
the co-emotion hinge loss module is connected to the output of the co-emotion predictor and applies a hinge penalty whenever the difference between the co-emotion prediction loss L_em corresponding to the correct splicing result and the loss L_em' corresponding to the wrong splicing result is smaller than a preset margin, where the correct splicing result is the concatenation of the co-emotion private feature and the co-emotion public feature, and the wrong splicing result is the concatenation of the polarity private feature and the co-emotion private feature;
the polarity hinge loss module is connected to the output of the polarity classifier and applies a hinge penalty whenever the difference between the polarity prediction loss L_po corresponding to the correct splicing result and the loss L_po' corresponding to the wrong splicing result is smaller than a preset margin, where the correct splicing result is the concatenation of the polarity private feature and the polarity public feature, and the wrong splicing result is the concatenation of the co-emotion private feature and the polarity private feature.
As shown in fig. 4, in the above-described text co-emotion prediction system,
the co-emotion private feature encoder also receives the polarity data in the polarity dataset and encodes the private features of the input polarity data to obtain co-emotion polarity private features;
the polarity private feature encoder also receives the co-emotion data in the co-emotion dataset and encodes the private features of the input co-emotion data to obtain polarity co-emotion private features;
the system further comprises: a domain binary classifier, whose input is connected to the outputs of the co-emotion private feature encoder, the polarity private feature encoder and the public feature encoder, and which classifies the source domains of these features, applying an adversarial classification loss to the public features output by the public feature encoder;
and/or a co-emotion hinge loss module and a polarity hinge loss module; wherein,
the co-emotion hinge loss module is connected to the output of the co-emotion predictor and applies a hinge penalty whenever the difference between the co-emotion prediction loss L_em corresponding to the correct co-emotion splicing result and the loss L_em' corresponding to the wrong co-emotion splicing result is smaller than a preset margin, where the correct co-emotion splicing result is the concatenation of the co-emotion private feature and the co-emotion public feature, and the wrong co-emotion splicing result is the concatenation of the polarity private feature and the co-emotion private feature; the co-emotion hinge loss module thereby enlarges the difference between L_em and L_em';
the polarity hinge loss module is connected to the output of the polarity classifier and applies a hinge penalty whenever the difference between the polarity prediction loss L_po corresponding to the correct polarity splicing result and the loss L_po' corresponding to the wrong polarity splicing result is smaller than a preset margin, where the correct polarity splicing result is the concatenation of the polarity private feature and the polarity public feature, and the wrong polarity splicing result is the concatenation of the co-emotion private feature and the polarity private feature; the polarity hinge loss module thereby enlarges the difference between L_po and L_po'.
In the text co-emotion prediction system, the domain binary classifier is a fully connected network.
In the text co-emotion prediction system, the co-emotion private feature encoder is a Bi-LSTM or BERT feature encoder;
the polarity private feature encoder is a Bi-LSTM or BERT feature encoder;
the public feature encoder is a Bi-LSTM or BERT feature encoder;
the co-emotion public-private feature fusion module and the polarity public-private feature fusion module are attention network modules;
the co-emotion predictor is a fully connected network;
the polarity classifier is a fully connected network.
The systems shown in figs. 2, 3 and 4 reduce the interference caused by dataset-domain differences through the adversarial classification loss, and reduce the interference caused by the different labels through the co-emotion and polarity hinge loss modules.
The embodiment of the invention also provides a text co-emotion prediction method, which uses the text co-emotion prediction system described above as the text co-emotion prediction model and comprises the following steps:
Step 1, encoding the private features of the co-emotion data in an input co-emotion dataset to obtain co-emotion private features, and encoding the private features of the polarity data in an input polarity dataset to obtain polarity private features;
encoding the public features of the co-emotion data in the input co-emotion dataset to obtain co-emotion public features, and encoding the public features of the polarity data in the input polarity dataset to obtain polarity public features;
Step 2, fusing, by weighting, the co-emotion public features and co-emotion private features obtained in step 1 into a final co-emotion prediction feature expression; and fusing, by weighting, the polarity private features and polarity public features obtained in step 1 into a final polarity classification feature expression;
Step 3, performing co-emotion prediction on the final co-emotion prediction feature expression obtained in step 2 to obtain the corresponding co-emotion label as a prediction result; and performing polarity classification on the final polarity classification feature expression to obtain the corresponding polarity label as a prediction result.
In step 2 of the above method, the final co-emotion prediction feature expression f_em and the final polarity classification feature expression f_po obtained by weighted fusion are given by formulas (1) and (2), respectively:
In formulas (1) and (2): q is q_k·U(s_em)^T, where q_k is a hyper-parameter vector of dimension R^(1×d) whose initial value is obtained by random initialization; T denotes the transpose, i.e. U(s_em)^T is the transpose of U(s_em), for convenience of the matrix multiplication; d is the dimension of the feature vector; U(s_em) is the concatenation of the co-emotion private feature e_em(s_em) and the co-emotion public feature e_c(s_em), with U(s_em) ∈ R^(2×d); s_em is the co-emotion data in the co-emotion dataset; V(s_po) is the concatenation of the polarity private feature e_po(s_po) and the polarity public feature e_c(s_po), with V(s_po) ∈ R^(2×d); s_po is the polarity data in the polarity dataset;
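The formula images for (1) and (2) are not reproduced in this text, but the parameters described above (q = q_k·U(s_em)^T over the two stacked feature rows) admit the following plausible NumPy sketch of an attention-weighted fusion. The softmax read-out over the two rows is an assumption; the exact form in the patent may differ.

```python
import numpy as np

d = 4                                   # feature dimension
rng = np.random.default_rng(0)

q_k = rng.normal(size=(1, d))           # hyper-parameter vector q_k in R^(1xd), randomly initialised
e_em = rng.normal(size=d)               # co-emotion private feature e_em(s_em)
e_c = rng.normal(size=d)                # co-emotion public feature e_c(s_em)

U = np.stack([e_em, e_c])               # U(s_em) in R^(2xd): private and public rows stacked

scores = q_k @ U.T                      # q = q_k . U(s_em)^T, shape (1, 2)
weights = np.exp(scores) / np.exp(scores).sum()   # assumed softmax over the two rows
f_em = (weights @ U).ravel()            # fused co-emotion prediction feature, shape (d,)
```

The polarity expression f_po would be obtained identically from V(s_po), the stack of e_po(s_po) and e_c(s_po).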
in the step 3, the final co-condition prediction characteristic expression f is obtained em Input to the co-emotion predictor g em () The co-condition label is obtained by prediction according to the formula (3)The formula (3) is:
Expressing the final polarity prediction characteristic f po Input to a polarity classifier g po (.) predicted polarity tag according to equation (4)The formula (4) is:
In step 3 of the above method, if the co-emotion task is a regression task, the training loss function of the co-emotion predictor is formula (5):
In formula (5): θ_e_em, θ_e_c and θ_g_em are the parameters of the co-emotion private feature encoder e_em(·), the public feature encoder e_c(·) and the co-emotion predictor g_em(·), respectively; the loss compares the predicted co-emotion label with the true co-emotion label em*.
θ_e_em, θ_e_c and θ_g_em are determined by the specific model framework chosen for the co-emotion private feature encoder e_em(·), the public feature encoder e_c(·) and the co-emotion predictor g_em(·); for example, using a Bi-LSTM network for an encoder corresponds to one set of θ_e_em, θ_e_c, θ_g_em parameters, while using another network model corresponds to a different set.
If the co-emotion task is a classification task, the training loss function of the co-emotion predictor is formula (6):
In formula (6): θ_e_em, θ_e_c and θ_g_em are the parameters of the co-emotion private feature encoder e_em(·), the public feature encoder e_c(·) and the co-emotion predictor g_em(·), respectively; the loss compares the predicted co-emotion label with the true co-emotion label em*; N is the number of co-emotion label categories when the co-emotion task is a classification task.
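The formula images for (5) and (6) are missing from this text. Assuming the standard forms for the two task types (squared error for regression, cross-entropy over the N categories for classification), the two training losses can be sketched as:

```python
import math

def co_emotion_regression_loss(pred, true):
    # assumed form of Eq. (5): squared error between the predicted and true
    # co-emotion values for the regression variant of the task
    return (pred - true) ** 2

def co_emotion_classification_loss(probs, true_idx):
    # assumed form of Eq. (6): cross-entropy over the N co-emotion label
    # categories, where probs is the predictor's distribution over categories
    return -math.log(probs[true_idx])

reg = co_emotion_regression_loss(0.8, 0.5)          # small error -> small loss
cls = co_emotion_classification_loss([0.7, 0.2, 0.1], 0)
```

In both cases the loss is back-propagated through g_em(·), e_c(·) and e_em(·), which is how θ_g_em, θ_e_c and θ_e_em are jointly trained.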
in step 3 of the above method, the training loss function of the polarity classifier is:
In the above formula (7), θ epo 、θ ec 、θ gpo Respectively polar private feature encoder e po () Common feature encoder e c () Polarity classifier g po () Parameters of (2);a polarity tag that is predicted; po (po) * Is a true polarity tag.
Said theta epo 、θ ec 、θ gpo Respectively polar private feature encoder e po () Common feature encoder e c () Polarity ofClassifier g po () Is equal to the above-mentioned theta eem 、θ ec 、θ gem The manner in which the parameters are determined is similar and will not be repeated here.
The method further comprises using a domain binary classifier g_l(·) to judge the source domain of the co-emotion private features obtained by the co-emotion private feature encoder e_em(·) and of the polarity private features obtained by the polarity private feature encoder e_po(·); the corresponding training loss function L_cop of the domain binary classifier g_l(·) is formula (8);
and using the domain binary classifier g_l(·), in an adversarial-loss manner, to judge the source domain of the co-emotion public features and polarity public features obtained by the public feature encoder e_c(·); the corresponding training loss function L_adv is formula (9);
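Formulas (8) and (9) are likewise not reproduced here. The intent described above — the domain binary classifier should separate the two private feature streams (L_cop), while the public feature encoder is trained adversarially so the classifier cannot tell the domains apart on the public stream (L_adv) — can be sketched as follows; the concrete loss forms are assumptions:

```python
import math

def domain_separation_loss(p_domain, is_co_emotion):
    # assumed form of L_cop (Eq. (8)): binary cross-entropy pushing the domain
    # classifier g_l to identify which dataset a *private* feature came from
    y = 1.0 if is_co_emotion else 0.0
    return -(y * math.log(p_domain) + (1.0 - y) * math.log(1.0 - p_domain))

def adversarial_domain_loss(p_domain):
    # assumed form of L_adv (Eq. (9)): the *public* feature encoder is rewarded
    # when the domain classifier is maximally uncertain (p_domain -> 0.5)
    return -0.5 * (math.log(p_domain) + math.log(1.0 - p_domain))
```

adversarial_domain_loss is minimised at p_domain = 0.5, i.e. when the public features carry no domain information; in practice this min-max objective is commonly implemented with a gradient reversal layer between e_c(·) and g_l(·).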
and/or performing co-emotion prediction through the co-emotion hinge loss module, which applies a hinge penalty whenever the difference between the co-emotion prediction loss L_em corresponding to the correct co-emotion splicing result and the loss L_em' corresponding to the wrong co-emotion splicing result is smaller than a preset margin, where the correct co-emotion splicing result is the concatenation of the co-emotion private feature and the co-emotion public feature, and the wrong co-emotion splicing result is the concatenation of the polarity private feature and the co-emotion private feature; and performing polarity prediction through the polarity hinge loss module, which applies a hinge penalty whenever the difference between the polarity prediction loss L_po corresponding to the correct polarity splicing result and the loss L_po' corresponding to the wrong polarity splicing result is smaller than a preset margin, where the correct polarity splicing result is the concatenation of the polarity private feature and the polarity public feature, and the wrong polarity splicing result is the concatenation of the co-emotion private feature and the polarity private feature. The training loss function of the co-emotion hinge loss module is formula (10):
In formula (10): L_em is the co-emotion loss value corresponding to the correct co-emotion splicing result; L_em' is the co-emotion loss value corresponding to the wrong co-emotion splicing result; Δ1 is the preset margin required between the two different co-emotion splicing results;
the training loss function of the polarity hinge loss module is formula (11):
In formula (11): L_po is the polarity loss value corresponding to the correct polarity splicing result; L_po' is the polarity loss value corresponding to the wrong polarity splicing result; Δ2 is the preset margin required between the two different polarity splicing results.
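With the formula images for (10) and (11) missing, the margin behaviour described above suggests the standard hinge form below (an assumption): the penalty is zero once the loss on the wrongly spliced features exceeds the loss on the correctly spliced features by at least the margin.

```python
def hinge_loss(loss_correct, loss_wrong, delta):
    # assumed form of L_hin_1 / L_hin_2 (Eqs. (10)-(11)): penalise the model
    # until the loss on the wrong splice exceeds the loss on the correct
    # splice by at least the margin delta
    return max(0.0, delta - (loss_wrong - loss_correct))

# co-emotion hinge (Eq. (10)): margin delta_1 between L_em and L_em'
l_hin_1 = hinge_loss(loss_correct=0.2, loss_wrong=0.5, delta=1.0)   # margin not yet met
# polarity hinge (Eq. (11)): margin delta_2 between L_po and L_po'
l_hin_2 = hinge_loss(loss_correct=0.2, loss_wrong=1.5, delta=1.0)   # margin satisfied
```

Minimising this term pushes the wrong-splice loss upward relative to the correct-splice loss, which is exactly the "enlarge the difference" behaviour attributed to the hinge modules above.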
The final hinge loss function (12) of the text co-emotion prediction model is:
L_hin = L_hin_1 + L_hin_2   (12);
The final loss function L_o of the text co-emotion prediction model is:
L_o = λ_1*L_em + λ_2*L_po + λ_3*L_cop + λ_4*L_adv + λ_5*L_hin   (13);
In the above formula (13), λ_1, λ_2, λ_3, λ_4 and λ_5 are the weights of the loss functions L_em, L_po, L_cop, L_adv and L_hin respectively; in a specific experiment, the weight values are 1, 0.5, 2 and 1.5 respectively.
The final loss function L_o described above is thus obtained by a weighted combination of L_em, L_po, L_cop, L_adv and L_hin, i.e., the final loss functions of the individual modules.
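The weighted combination of formula (13) can be sketched in plain Python. Note that the text lists four weight values (1, 0.5, 2, 1.5) for five weights, so the default tuple below, which assumes λ_1 = λ_2 = 1, is an illustrative assumption, as are the function and argument names:

```python
def combine_losses(l_em, l_po, l_cop, l_adv, l_hin,
                   weights=(1.0, 1.0, 0.5, 2.0, 1.5)):
    """Weighted sum L_o = λ1*L_em + λ2*L_po + λ3*L_cop + λ4*L_adv + λ5*L_hin.

    The default weights follow the experiment described in the text, which
    lists only four values for five lambdas; the split chosen here (repeating
    1 for the first two) is an assumption, not the patent's stated mapping.
    """
    losses = (l_em, l_po, l_cop, l_adv, l_hin)
    return sum(w * l for w, l in zip(weights, losses))
```

The weights control the balance among the component objectives; any subset can be disabled by setting its weight to zero.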
The method reduces the interference caused by dataset-domain differences through the adversarial classification loss, and reduces the interference caused by different labels through the hinge losses. The reduction of dataset-domain differences via the adversarial classification loss in step 5 corresponds to loss functions (8) and (9); the hinge-loss scheme for reducing the interference caused by different labels is reflected in loss functions (10), (11) and (12).
Because polarity and co-emotion are both emotion sub-attributes, and the magnitudes of their label values largely depend on the emotion words appearing in the text, there is a relatively strong correlation between text polarity data and text co-emotion data. Datasets in the text polarity domain often contain large amounts of data, reaching tens or hundreds of thousands of samples; compared with text co-emotion datasets containing only about a thousand samples, they give the text co-emotion prediction model and method based on text emotion classification assistance the advantage of data volume.
In summary, the text co-emotion prediction system and method based on text emotion classification assistance in the embodiment of the invention use text emotion classification to assist text co-emotion analysis, considering not only the dataset-domain difference between the two tasks but also the label difference between them, so that a more accurate text co-emotion prediction result can be obtained.
In order to clearly demonstrate the technical scheme and the technical effects provided by the invention, the text co-emotion prediction model and the method based on text emotion classification assistance provided by the embodiment of the invention are described in detail below by using specific embodiments.
Examples
As shown in FIG. 1, the embodiment of the invention provides a text co-emotion prediction system, which is a model capable of assisting small-scale text co-emotion prediction with large-scale text emotion classification data based on transfer learning. Specifically, because the amount of co-emotion data in a co-emotion dataset is very small and insufficient to train a reliable co-emotion predictor on its own, co-emotion prediction is assisted by learning transferable public features from a polarity classification task that has a large amount of polarity data. The method for performing co-emotion prediction with the text co-emotion prediction model comprises the following steps:
The data sets and associated definitions to which it relates:
the co-emotion dataset is set as D_em, D_em = {(s_em_1, em_1), ..., (s_em_n, em_n), ..., (s_em_N, em_N)}; where s_em_n (1 <= n <= N) and em_n respectively represent the input text and the co-emotion label of the nth co-emotion data sample;
the polarity dataset is set as D_po, D_po = {(s_po_1, po_1), ..., (s_po_m, po_m), ..., (s_po_M, po_M)}; where s_po_m (1 <= m <= M) and po_m ∈ {0, 1} respectively represent the input text and the polarity label of the mth polarity data sample;
the total number M of polarity data in the polarity dataset is much larger than the total number N of co-emotion data in the co-emotion dataset.
The main network framework diagram of the text co-emotion prediction model of the invention is shown in FIG. 1. The encoder part mainly comprises three feature encoders, namely e_em(), e_po() and e_c(), where e_em() and e_po() are the private feature encoders of the two tasks (co-emotion prediction and polarity classification), used to encode the private features of the two tasks, and e_c() is the common feature encoder, used to encode the common features between the two tasks. Assume the co-emotion data in an input sample pair is denoted s_em and the polarity data is denoted s_po; the final encoding results are e_em(s_em), e_po(s_em), e_c(s_em), e_em(s_po), e_po(s_po) and e_c(s_po).
In the subsequent network architecture, the contributions of public and private features to co-emotion prediction may be inconsistent, so the public and private features are weighted by two attention-based public-private feature fusion modules; the public and private features are thus fused into the final feature expressions by dynamic weighting, and the corresponding labels are predicted. Meanwhile, to address the interference caused by the dataset-domain difference and the label evaluation difference between the co-emotion prediction and polarity classification tasks, the invention further reduces the interference caused by the dataset-domain difference through an adversarial classification loss, and reduces the interference caused by different labels by providing hinge loss modules (i.e., Hinge-loss modules, comprising a co-emotion hinge loss module and a polarity hinge loss module). In this way, the transferable features learned by the feature encoders are applicable to different domains and labels, further disentangling the public and private features between the two tasks.
The loss function of a hinge loss module generally has the form loss_final = max(0, δ − loss), meaning: when the original loss gap is small, loss_final is nonzero; when the gap is large, loss_final is 0. In the present invention, when the difference between the correct splicing result and the incorrect combination is small, feature separation is not thorough, so loss_final is nonzero and the hinge loss module takes effect; if the difference between the correct and incorrect splicing results is large, the feature separation effect is good, loss_final is 0, and the hinge loss module does not take effect.
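The gating behaviour described above can be sketched in plain Python (a minimal illustration of max(0, δ − gap); the function name and the example values are illustrative only):

```python
def hinge_gate(loss_correct, loss_wrong, delta):
    """Hinge loss of the form loss_final = max(0, delta - gap), where gap is
    how much worse the incorrect splicing result is than the correct one.
    The result is nonzero (module active) while the gap is below delta, and
    zero (module inactive) once the disentanglement is good enough."""
    gap = loss_wrong - loss_correct
    return max(0.0, delta - gap)
```

For example, with δ = 1.0, a small gap of 0.1 yields a penalty of 0.9 that pushes the encoders apart, while a gap of 1.8 already exceeds δ and yields zero penalty.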
Specifically, the specific description of each module is as follows:
(1) For the input co-emotion data s_em and polarity data s_po: the co-emotion private feature obtained from s_em by the co-emotion private feature encoder and the co-emotion public feature obtained by the public feature encoder are e_em(s_em) and e_c(s_em), respectively; the polarity private feature obtained from s_po by the polarity private feature encoder and the polarity public feature obtained by the public feature encoder are e_po(s_po) and e_c(s_po), respectively. Because the benefits of the domain private features and the public features for the final label prediction may differ, the invention dynamically weights the public and private features through the two attention-based public-private feature fusion modules, thereby obtaining the final feature expressions f_em and f_po after fusing the public and private features. The specific flow is shown in formulas (1) and (2):
In formulas (1) and (2), d is the dimension of the feature vector; U(s_em) ∈ R^(2×d) is the splicing result of e_em(s_em) and e_c(s_em); and V(s_po) ∈ R^(2×d) is the splicing result of e_po(s_po) and e_c(s_po);
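As an illustration of the dynamic weighting in formulas (1) and (2), the sketch below fuses one private and one public vector via a scaled-dot-product-style attention over the 2×d splicing result. The patent only partially specifies the attention form (see the description of q_k in claim 6), so the query construction here is an assumption, and the names are illustrative:

```python
import math

def fuse(private_vec, public_vec, query):
    """Attention-weighted fusion of one private and one public feature vector.

    rows plays the role of the 2 x d splicing result U(s); each row gets a
    scaled dot-product score against a learned query vector, the two scores
    are softmax-normalised, and the rows are summed with those weights into
    the final d-dimensional feature expression."""
    d = len(query)
    rows = [private_vec, public_vec]
    scores = [sum(q * u for q, u in zip(query, row)) / math.sqrt(d)
              for row in rows]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    alphas = [e / z for e in exps]
    return [alphas[0] * a + alphas[1] * b
            for a, b in zip(private_vec, public_vec)]
```

With a zero query the two attention weights are equal and the fusion reduces to the plain average of the private and public vectors, which makes the gating effect of a trained query easy to see by comparison.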
After the final feature expressions of the co-emotion prediction task and the polarity classification task are obtained, they are respectively fed into the co-emotion predictor and the polarity classifier to obtain the corresponding predicted labels, i.e., the co-emotion label and the polarity label; f_em is input to the co-emotion predictor g_em() and f_po is input to the polarity classifier g_po(). The specific processing formulas are (3) and (4):
êm = g_em(f_em)   (3);
pô = g_po(f_po)   (4);
In formulas (3) and (4), êm is the predicted co-emotion label; pô is the predicted polarity label; em* is the true co-emotion label and po* is the true polarity label.
If the co-emotion task is a regression task, its training loss function is shown in formula (5); if it is a classification task, its training loss function is shown in formula (6):
The training loss function of the polarity classification is shown in formula (7):
In formulas (5), (6) and (7), θ_eem, θ_epo, θ_ec, θ_gem and θ_gpo are respectively the parameters of e_em(), e_po(), e_c(), g_em() and g_po(); N is the number of co-emotion label categories when the co-emotion task is a classification task.
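The task losses referred to above can be sketched with the standard forms implied by the named task types. Formulas (5)-(7) themselves are not reproduced in this text, so the exact bodies below (mean squared error, categorical cross-entropy over N co-emotion categories, binary cross-entropy for polarity) are assumptions consistent with the surrounding description, not the patent's verbatim formulas:

```python
import math

def regression_loss(preds, targets):
    """Formula (5)-style loss: mean squared error between predicted and true
    co-emotion values when the co-emotion task is a regression task."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def classification_loss(prob_rows, label_indices):
    """Formula (6)-style loss: cross-entropy over N co-emotion categories;
    prob_rows holds predicted class probabilities, label_indices true classes."""
    return -sum(math.log(row[y])
                for row, y in zip(prob_rows, label_indices)) / len(prob_rows)

def polarity_loss(probs, labels):
    """Formula (7)-style loss: binary cross-entropy for the polarity
    classifier, with labels po* in {0, 1}."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(probs)
```

Each function averages over the batch; in practice these would act on the predictor outputs g_em(f_em) and g_po(f_po).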
(2) The invention separates the learned features into domain public features and domain private features by means of adversarial learning, so that the public features encoded by the public feature encoder are improved to be beneficial and general for any dataset domain. Specifically, the invention uses a domain binary classifier g_l() to discriminate the source domain of the features encoded by e_em(), e_po() and e_c(). For the features encoded by the two private feature encoders e_em() and e_po(), g_l() should clearly determine whether these features originate from the co-emotion domain or the polarity domain, since these private features are unique to the two domains. The corresponding training process is shown in formula (8), with loss function L_cop:
For the features encoded by the common feature encoder e_c(), whether the source domain is the co-emotion domain or the polarity domain, they are common features possessed by both tasks, so g_l() should not be able to determine (i.e., should be confused about) the source of these features. The specific training loss function is shown in formula (9), with loss function L_adv:
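The two objectives above (discriminate the source domain of private features; be confused on public features) are commonly realised with a domain classifier trained by cross-entropy plus a confusion term for the shared encoder. Formulas (8) and (9) are not reproduced in this text, so the sketch below, which uses an entropy-based confusion term, is an assumed realisation rather than the patent's exact losses, and all names are illustrative:

```python
import math

def domain_ce(p_co_emotion, true_domain):
    """L_cop-style term: cross-entropy for the domain binary classifier on a
    private feature, where p_co_emotion is its predicted probability that the
    feature comes from the co-emotion domain (true_domain 1) vs polarity (0)."""
    p = p_co_emotion if true_domain == 1 else 1.0 - p_co_emotion
    return -math.log(p)

def confusion_loss(p_co_emotion):
    """L_adv-style term for public features: negative entropy of the domain
    classifier's output, minimised when the classifier is maximally confused
    (p_co_emotion = 0.5), i.e. cannot tell the source domain apart."""
    p = p_co_emotion
    return p * math.log(p) + (1 - p) * math.log(1 - p)
```

Training the private encoders with `domain_ce` sharpens the classifier's decision, while training the common encoder with `confusion_loss` (or, equivalently in many implementations, a gradient-reversal layer) drives its features toward domain-indistinguishability.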
(3) The invention proposes a label training strategy to eliminate the label differences between the co-emotion task and the polarity task. Taking co-emotion prediction as an example, some of the transferable features are useful for co-emotion prediction while others are not. The invention extracts the useful features with the public feature encoder, while the useless features are extracted by the private feature extractor of the other task. Based on this, the invention proposes using hinge loss modules (i.e., Hinge-loss modules, divided into a co-emotion hinge loss module and a polarity hinge loss module) to expand as much as possible the difference between the results corresponding to the public feature encoder combined with the co-emotion private feature encoder, and the results corresponding to the polarity private feature encoder combined with the co-emotion private feature encoder.
Specifically, in formulas (1) and (2), f_em is the final fused feature, corresponding to loss L_em, which is the result of the correct splicing; in addition, the invention defines f_em′ = e_em(s_em) + e_po(s_em), with predicted label em′ = g_em(f_em′), thereby obtaining a new co-emotion loss L_em′, which is the result of the incorrect splicing. The co-emotion hinge loss module expands the difference between L_em and L_em′ as much as possible, so that features beneficial to co-emotion prediction are concentrated in the public feature encoder, while useless features are concentrated in the private feature encoder of the other task. The polarity hinge loss module of the polarity classification task is implemented in the same way as for co-emotion prediction. Therefore, the training objective and loss function of the final co-emotion hinge loss module are shown in formula (10):
L_hin_1 = max(0, δ_1 − (L_em′ − L_em))   (10);
In the above formula (10), the meaning of each parameter is: L_em is the co-emotion loss value corresponding to the co-emotion correct splicing result; L_em′ is the co-emotion loss value corresponding to the co-emotion incorrect splicing result; δ_1 is the set difference required between the two different co-emotion splicing results;
the training loss function (11) of the polarity hinge loss module is as follows:
L_hin_2 = max(0, δ_2 − (L_po′ − L_po))   (11);
In the above formula (11), the meaning of each parameter is: L_po is the polarity loss value corresponding to the polarity correct splicing result; L_po′ is the polarity loss value corresponding to the polarity incorrect splicing result; δ_2 is the set difference required between the two different polarity splicing results.
The final hinge loss function (12) of the text co-emotion prediction model is:
L_hin = L_hin_1 + L_hin_2   (12);
The final loss function L_o of the text co-emotion prediction model is the weighted combination of L_em, L_po, L_cop, L_adv and L_hin, as shown in formula (13):
L_o = λ_1*L_em + λ_2*L_po + λ_3*L_cop + λ_4*L_adv + λ_5*L_hin   (13);
where λ_1, λ_2, λ_3, λ_4 and λ_5 are the weights corresponding to the respective loss functions, used to control the balance among the multiple losses; in a specific experiment, the weight values are 1, 0.5, 2 and 1.5 respectively.
Experiments were carried out with the model and method of the invention on two datasets, and preliminary experimental results were obtained; these results demonstrate the effectiveness of the method of the invention.
For the co-emotion dataset part: the dataset provided by Buchel comprises 1860 annotated samples, mainly derived from annotators' responses after reading various news articles. Each data sample includes two co-emotion labels, EC and PD, whose values range from 1 to 7. The co-emotion dataset proposed by Zhou includes 1000 annotated samples from the Reddit forum; the main content is users' posts on the forum and the replies to the corresponding posts. Each data sample is annotated with one co-emotion label ranging from 1 to 5.
Of the two polarity classification datasets, the former mainly comprises various tweets posted by users on Twitter, and the latter mainly comprises users' opinions and comments on various movies. The former includes 7061 positive samples and 3240 negative samples; the latter includes 25000 positive samples and 25000 negative samples.
To ensure uniform evaluation standards, for the co-emotion dataset proposed by Buchel, the Pearson correlation coefficient (PCC) is adopted as the evaluation metric according to the original evaluation standard of the dataset; since this dataset has the two labels EC and PD, the evaluation metrics are PCC-EC and PCC-PD. For the co-emotion dataset proposed by Zhou, MSE loss and R2 are adopted as the evaluation metrics according to its original evaluation standard. Since this work is a transfer learning work, only the experimental results on the co-emotion prediction datasets need to be considered, and the results on the polarity classification datasets are not discussed in the following analysis.
For the experimental parameters: first, two encoders, Bi-LSTM and BERT, are adopted for the encoder part. When Bi-LSTM is used as the encoder, the hidden layer dimensions of both the forward LSTM and the backward LSTM are set to 200. For consistency, the BERT-base-uncased model is directly adopted as the reference BERT encoding model. When Bi-LSTM is used as the encoder, the learning rate is set to 0.001; when BERT is used as the encoder, the learning rate is set to 0.00002; the decay coefficient is uniformly set to 0.95. The dropout rate is set to 0.3, the training batch size is 16, and the regularization method is L2 regularization. During training, samples from the polarity dataset are combined with samples from the co-emotion dataset and input into the network architecture in pairs. The experimental framework is PyTorch, and the optimizer is Adam.
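The settings above can be collected into a configuration sketch (the values are taken from the text; the dictionary key names are illustrative only and not part of the patent):

```python
# Experimental settings as described above; key names are illustrative.
CONFIG = {
    "encoders": ["Bi-LSTM", "bert-base-uncased"],
    "bilstm_hidden_dim": 200,   # per direction (forward and backward LSTM)
    "lr_bilstm": 1e-3,          # learning rate with the Bi-LSTM encoder
    "lr_bert": 2e-5,            # learning rate with the BERT encoder
    "lr_decay": 0.95,           # decay coefficient, same for both encoders
    "dropout": 0.3,
    "batch_size": 16,
    "regularization": "L2",
    "optimizer": "Adam",
    "framework": "PyTorch",
    "paired_input": True,       # polarity and co-emotion samples fed in pairs
}
```

Such a dictionary makes it straightforward to swap the encoder-dependent learning rate when switching between the Bi-LSTM and BERT variants.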
The related-work comparison is mainly divided into two types: the first type is work without transfer learning, i.e., methods that perform co-emotion prediction using only co-emotion data, such as FNN, CNN, RoBERTa, etc.; the second type is the remaining related work that uses the co-emotion data for transfer learning, such as DATNet, ADV-SA, etc., against which comparison is also performed. The experimental results are shown in the table.
Table 4 Experimental results of the comparison with related work
From Table 4 it can be found that the model and method of the invention obtain the best experimental results currently known in this field, both on the Buchel co-emotion dataset and on the Zhou co-emotion dataset. Specifically, the performance of the proposed model is significantly better than that of work without transfer learning, such as CNN, FNN, RoBERTa, BERT, Random Forest, etc. This is because the proposed transfer learning model can help the small-scale co-emotion analysis dataset learn better common feature expressions through the large-scale polarity classification dataset, thus improving co-emotion prediction. In addition, compared with transfer learning work such as DATNet and ADV-SA, the experimental results of the proposed model are also better. This is because the proposed model not only reduces the interference caused by the domain difference between the two tasks by means of adversarial learning, but also reduces the interference caused by the label difference between the two tasks through the designed hinge-loss scheme, so that the learned transferable public features are effective for different domains and different labels.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims. The information disclosed in the background section herein is only for enhancement of understanding of the general background of the invention and is not to be taken as an admission or any form of suggestion that this information forms the prior art already known to those of ordinary skill in the art.
Claims (9)
1. A text co-emotion prediction system, comprising:
a co-emotion private feature encoder, a polarity private feature encoder, a public feature encoder, a co-emotion public-private feature fusion module, a polarity public-private feature fusion module, a co-emotion predictor and a polarity classifier; wherein,
the input of the co-emotion private feature encoder is the co-emotion data in a co-emotion dataset, and it can encode the private features of the input co-emotion data to obtain co-emotion private features;
the input of the polarity private feature encoder is the polarity data in a polarity dataset, and it can encode the private features of the input polarity data to obtain polarity private features;
the public feature encoder is used for respectively receiving the co-emotion data in the co-emotion dataset and the polarity data in the polarity dataset, encoding the public features of the co-emotion data to obtain co-emotion public features and encoding the public features of the polarity data to obtain polarity public features;
the co-emotion public-private feature fusion module is respectively connected with the output end of the co-emotion private feature encoder and the output end of the public feature encoder, and can weight and fuse the co-emotion private features output by the co-emotion private feature encoder and the co-emotion public features output by the public feature encoder into a final co-emotion prediction feature expression;
the polarity public-private feature fusion module is respectively connected with the output end of the polarity private feature encoder and the output end of the public feature encoder, and can weight and fuse the polarity private features output by the polarity private feature encoder and the polarity public features output by the public feature encoder into a final polarity classification feature expression;
the co-emotion predictor is connected with the output end of the co-emotion public-private feature fusion module and can perform prediction on the final co-emotion prediction feature expression to obtain the corresponding co-emotion label;
the polarity classifier is connected with the output end of the polarity public-private feature fusion module and can perform prediction on the final polarity classification feature expression to obtain the corresponding polarity label.
2. The text co-emotion prediction system of claim 1,
the co-emotion private feature encoder also receives the polarity data in the polarity dataset as input, and can encode the private features of the input polarity data to obtain co-emotion polarity private features;
the polarity private feature encoder also receives the co-emotion data in the co-emotion dataset as input, and can encode the private features of the input co-emotion data to obtain polarity co-emotion private features;
the system further comprises: a domain binary classifier, the input end of which is respectively connected with the output ends of the co-emotion private feature encoder, the polarity private feature encoder and the public feature encoder, and which can perform classification processing, in an adversarial classification loss manner, on the public feature codes output by the public feature encoder;
and/or, a co-emotion hinge loss module and a polarity hinge loss module; wherein,
the co-emotion hinge loss module is connected with the output end of the co-emotion predictor and can perform co-emotion prediction when the difference between the co-emotion prediction result L_em corresponding to the correct splicing result and the co-emotion prediction result L_em′ corresponding to the incorrect splicing result is smaller than a preset difference, wherein the correct splicing result refers to the splicing result of the co-emotion private features and the co-emotion public features, and the incorrect splicing result refers to the splicing result of the polarity private features and the co-emotion private features;
the polarity hinge loss module is connected with the output end of the polarity classifier and can perform polarity prediction when the difference between the polarity prediction result L_po corresponding to the correct splicing result and the polarity prediction result L_po′ corresponding to the incorrect splicing result is smaller than a preset difference, wherein the correct splicing result refers to the splicing result of the polarity private features and the polarity public features, and the incorrect splicing result refers to the splicing result of the co-emotion private features and the polarity private features.
3. The text co-emotion prediction system of claim 2,
the co-emotion private feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the polarity private feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the public feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the co-emotion public-private feature fusion module and the polarity public-private feature fusion module adopt attention network modules;
the co-emotion predictor adopts a fully-connected network;
the polarity classifier adopts a fully-connected network;
the domain binary classifier adopts a fully-connected network.
4. The text co-emotion prediction system according to claim 1 or 2, characterized in that the co-emotion private feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the polarity private feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the public feature encoder adopts a Bi-LSTM type or BERT type feature encoder;
the co-emotion public-private feature fusion module and the polarity public-private feature fusion module adopt attention network modules;
the co-emotion predictor adopts a fully-connected network;
the polarity classifier adopts a fully-connected network.
5. A text co-emotion prediction method, characterized in that the text co-emotion prediction system according to any one of claims 1 to 4 is employed as a text co-emotion prediction model, the method comprising:
step 1, encoding the private features of the co-emotion data in an input co-emotion dataset to obtain co-emotion private features, and encoding the private features of the polarity data in an input polarity dataset to obtain polarity private features;
encoding the public features of the co-emotion data in the input co-emotion dataset to obtain co-emotion public features, and encoding the public features of the polarity data in the input polarity dataset to obtain polarity public features;
step 2, weighting and fusing the co-emotion public features and co-emotion private features obtained in step 1 into a final co-emotion prediction feature expression; and weighting and fusing the polarity private features and polarity public features obtained in step 1 into a final polarity classification feature expression;
step 3, performing co-emotion prediction on the final co-emotion prediction feature expression obtained in step 2 to obtain the corresponding co-emotion label as a prediction result; and performing polarity classification on the final polarity classification feature expression to obtain the corresponding polarity label as a prediction result.
6. The text co-emotion prediction method according to claim 5, wherein in step 2, the final co-emotion prediction feature expression f_em and the final polarity classification feature expression f_po obtained by weighted fusion are respectively:
in the above formulas (1) and (2), q is q_k * U(s_em), where q_k is a hyperparameter vector of dimension R^(1×d), and q_k obtains its initial value Q by random initialization; T denotes the transposed matrix, i.e., U(s_em)^T is the transpose of U(s_em); d is the dimension of the feature vector; U(s_em) is the splicing result of the co-emotion private feature e_em(s_em) and the co-emotion public feature e_c(s_em), with U(s_em) ∈ R^(2×d); s_em is the co-emotion data in the co-emotion dataset; V(s_po) is the splicing result of the polarity private feature e_po(s_po) and the polarity public feature e_c(s_po), with V(s_po) ∈ R^(2×d); s_po is the polarity data in the polarity dataset;
in step 3, the final co-emotion prediction feature expression f_em is input to the co-emotion predictor g_em(), and the co-emotion label êm is obtained by prediction according to formula (3), which is:
êm = g_em(f_em)   (3);
the final polarity classification feature expression f_po is input to the polarity classifier g_po(), and the polarity label pô is obtained by prediction according to formula (4), which is:
pô = g_po(f_po)   (4).
7. The text co-emotion prediction method according to claim 6, wherein in step 3, if the co-emotion task is a regression task, the training loss function of the co-emotion predictor is formula (5):
in the above formula (5), θ_eem, θ_ec and θ_gem are respectively the parameters of the co-emotion private feature encoder e_em(), the common feature encoder e_c() and the co-emotion predictor g_em(); êm is the predicted co-emotion label; em* is the true co-emotion label;
if the co-emotion task is a classification task, the training loss function of the co-emotion predictor is formula (6):
in the above formula (6), θ_eem, θ_ec and θ_gem are respectively the parameters of the co-emotion private feature encoder e_em(), the common feature encoder e_c() and the co-emotion predictor g_em(); êm is the predicted co-emotion label; em* is the true co-emotion label; N is the number of co-emotion label categories when the co-emotion task is a classification task.
8. The text co-emotion prediction method according to claim 6, wherein in step 3, the training loss function of the polarity classifier is:
in the above formula (7), θ_epo, θ_ec and θ_gpo are respectively the parameters of the polarity private feature encoder e_po(), the common feature encoder e_c() and the polarity classifier g_po(); pô is the predicted polarity label; po* is the true polarity label.
9. The text co-emotion prediction method according to claim 6, further comprising employing a domain binary classifier g_l() to discriminate, in an adversarial-loss manner, the source domain of the co-emotion private features obtained by the co-emotion private feature encoder e_em() and the polarity private features obtained by the polarity private feature encoder e_po(), the training loss function of the domain binary classifier g_l() being L_cop:
and employing the domain binary classifier g_l() to discriminate, in an adversarial-loss manner, the source domain of the co-emotion public features and polarity public features obtained by the common feature encoder e_c(), the corresponding training loss function of the domain binary classifier g_l() being L_adv:
and/or, outputting a prediction result L corresponding to the correct joint result of the common-case prediction output through the common-case hinge loss module em Prediction result L corresponding to common-case error splicing result em Carrying out co-condition prediction when the difference value between 'and' is smaller than a preset difference value, wherein the co-condition correct splicing result refers to the splicing result of the co-condition private feature and the co-condition public feature, and the co-condition incorrect splicing result refers to the splicing result of the polarity private feature and the co-condition private feature; and a predicted result L corresponding to the correct polarity splicing result outputted by the co-situation prediction through a polarity hinge loss module em Prediction result L corresponding to polarity error splicing result em When the difference value between the's is smaller than a preset difference value, polarity prediction is carried out, the polarity correct splicing result refers to the splicing result of the polarity private feature and the polarity public feature, and the polarity incorrect splicing result refers to the splicing result of the public condition private feature and the polarity private feature; the training loss function (10) of the co-condition hinge loss module is as follows:
in the above formula (10), the meaning of each parameter is: l (L) em A common-case loss value corresponding to a common-case correct splicing result; l (L) em ' is the result of co-emotion error splicingCorresponding co-occurrence loss values; delta 1 The difference value required to be achieved between the two different splicing results of the set co-condition is obtained;
the training loss function (11) of the polar hinge loss module is as follows:
in the above formula (11), the meaning of each parameter is: l (L) po The polarity loss value corresponding to the correct polarity splicing result is obtained; l (L) po ' is the polarity loss value corresponding to the polarity error splicing result; delta 2 The difference value required to be achieved between two different splicing results of the set polarity;
The final hinge loss function (12) of the text co-emotion prediction model is:

L_hin = L_hin_1 + L_hin_2 (12);
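The two hinge terms above can be sketched as a standard margin penalty: the loss is zero once the incorrect-splice loss exceeds the correct-splice loss by at least the margin δ. A minimal sketch in plain Python, assuming the conventional hinge form max(0, δ − (L' − L)) and using hypothetical loss values:

```python
def hinge_margin(correct_loss: float, wrong_loss: float, delta: float) -> float:
    """Margin penalty: zero once wrong_loss exceeds correct_loss by at least delta."""
    return max(0.0, delta - (wrong_loss - correct_loss))

# Hypothetical values for illustration only
L_em, L_em_wrong, delta_1 = 0.4, 1.2, 0.5   # margin met -> penalty 0
L_po, L_po_wrong, delta_2 = 0.6, 0.8, 0.5   # margin violated -> penalty ~0.3

# Formula (12): the final hinge loss is the sum of the two terms
L_hin = hinge_margin(L_em, L_em_wrong, delta_1) + hinge_margin(L_po, L_po_wrong, delta_2)
print(round(L_hin, 6))  # 0.3
```

During training this term pushes the model to keep correct feature splices at least δ "easier" (lower-loss) than incorrect ones, which encourages the separation of private and public feature spaces.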
The final loss function L_o of the text co-emotion prediction model is:

L_o = λ_1*L_em + λ_2*L_po + λ_3*L_cop + λ_4*L_adv + λ_5*L_hin (13);
In the above formula (13), λ_1, λ_2, λ_3, λ_4 and λ_5 are the weights of the loss functions L_em, L_po, L_cop, L_adv and L_hin, taking the values 1, 1, 0.5, 2 and 1.5 respectively.
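Formula (13) is a plain weighted sum of the five loss terms. A minimal sketch, using the λ weights as listed in the patent and hypothetical per-term loss values:

```python
# Lambda weights for formula (13), as listed in the patent description
WEIGHTS = {"em": 1.0, "po": 1.0, "cop": 0.5, "adv": 2.0, "hin": 1.5}

def total_loss(losses: dict) -> float:
    """L_o = sum of lambda_k * L_k over the five loss terms."""
    return sum(WEIGHTS[k] * losses[k] for k in WEIGHTS)

# Hypothetical per-term loss values for illustration only
losses = {"em": 0.4, "po": 0.6, "cop": 0.2, "adv": 0.1, "hin": 0.3}
print(round(total_loss(losses), 6))  # 1.75
```

The relative weights reflect the design: the adversarial term L_adv (weight 2) and hinge term L_hin (weight 1.5) are emphasized to enforce private/public feature separation, while the shared-feature term L_cop is down-weighted.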
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111592897.8A CN114281993B (en) | 2021-12-23 | 2021-12-23 | Text co-emotion prediction system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111592897.8A CN114281993B (en) | 2021-12-23 | 2021-12-23 | Text co-emotion prediction system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114281993A CN114281993A (en) | 2022-04-05 |
CN114281993B true CN114281993B (en) | 2024-03-29 |
Family
ID=80874968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111592897.8A Active CN114281993B (en) | 2021-12-23 | 2021-12-23 | Text co-emotion prediction system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114281993B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114722207B (en) * | 2022-06-07 | 2022-08-12 | 广东海洋大学 | Information classification method and system for microblogs |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414476A (en) * | 2020-03-06 | 2020-07-14 | 哈尔滨工业大学 | Attribute-level emotion analysis method based on multi-task learning |
US10878505B1 (en) * | 2020-07-31 | 2020-12-29 | Agblox, Inc. | Curated sentiment analysis in multi-layer, machine learning-based forecasting model using customized, commodity-specific neural networks |
CN113643046A (en) * | 2021-08-17 | 2021-11-12 | 中国平安人寿保险股份有限公司 | Common situation strategy recommendation method, device, equipment and medium suitable for virtual reality |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11763093B2 (en) * | 2020-04-30 | 2023-09-19 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for a privacy preserving text representation learning framework |
Non-Patent Citations (2)
Title |
---|
Cross-lingual sentiment classification based on a shared space; Zhang Mengmeng; Information Technology and Informatization; 2020-05-28 (No. 05); full text *
Cross-domain text sentiment classification based on a stepwise-optimized classification model; Zhang Jun; Wang Suge; Computer Science; 2016-07-15 (No. 07); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114281993A (en) | 2022-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI754033B (en) | Generating document for a point of interest | |
CN107391623B (en) | Knowledge graph embedding method fusing multi-background knowledge | |
CN110348968B (en) | Recommendation system and method based on user and project coupling relation analysis | |
CN112818676A (en) | Medical entity relationship joint extraction method | |
CN113722439B (en) | Cross-domain emotion classification method and system based on antagonism class alignment network | |
CN112256859A (en) | Recommendation method based on bidirectional long-short term memory network explicit information coupling analysis | |
CN112966503A (en) | Aspect level emotion analysis method | |
CN114281993B (en) | Text co-emotion prediction system and method | |
CN114218928A (en) | Abstract text summarization method based on graph knowledge and theme perception | |
CN113420212A (en) | Deep feature learning-based recommendation method, device, equipment and storage medium | |
CN113254616A (en) | Intelligent question-answering system-oriented sentence vector generation method and system | |
CN113901208A (en) | Method for analyzing emotion tendentiousness of intermediate-crossing language comments blended with theme characteristics | |
CN110309360A (en) | A kind of the topic label personalized recommendation method and system of short-sighted frequency | |
CN115129807A (en) | Fine-grained classification method and system for social media topic comments based on self-attention | |
Wang et al. | Saliencybert: Recurrent attention network for target-oriented multimodal sentiment classification | |
Firdaus et al. | Sentiment guided aspect conditioned dialogue generation in a multimodal system | |
CN117688185A (en) | User information enhanced long text fine granularity emotion analysis method | |
CN111737591B (en) | Product recommendation method based on heterogeneous heavy side information network translation model | |
Rauf et al. | BCE4ZSR: Bi-encoder empowered by teacher cross-encoder for zero-shot cold-start news recommendation | |
CN110968675A (en) | Recommendation method and system based on multi-field semantic fusion | |
CN115309894A (en) | Text emotion classification method and device based on confrontation training and TF-IDF | |
CN114239548A (en) | Triple extraction method for merging dependency syntax and pointer generation network | |
Lai et al. | Cross-domain sentiment classification using topic attention and dual-task adversarial training | |
CN116052291A (en) | Multi-mode emotion recognition method based on non-aligned sequence | |
Wang et al. | Deep learning-based sentiment analysis for social media |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||