CN113204645A - Knowledge-guided aspect-level emotion analysis model training method - Google Patents


Info

Publication number
CN113204645A
Authority
CN
China
Prior art keywords
model
training
emotion analysis
level emotion
knowledge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110353985.6A
Other languages
Chinese (zh)
Other versions
CN113204645B (en
Inventor
刘菊华 (Liu Juhua)
钟起煌 (Zhong Qihuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110353985.6A priority Critical patent/CN113204645B/en
Publication of CN113204645A publication Critical patent/CN113204645A/en
Application granted granted Critical
Publication of CN113204645B publication Critical patent/CN113204645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 16/367 — Information retrieval of unstructured textual data; creation of semantic tools, e.g. ontology or thesauri; ontology
    • G06F 16/35 — Information retrieval of unstructured textual data; clustering; classification
    • G06F 40/211 — Natural language analysis; parsing; syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F 40/289 — Natural language analysis; recognition of textual entities; phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30 — Natural language analysis; semantic analysis
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a knowledge-guided aspect-level emotion analysis model training method. First, an aspect-level emotion analysis model is pre-trained on a sentence-level emotion analysis dataset to obtain pre-training model M1. Then, using a knowledge-guided training strategy, the pre-trained model M1 is trained again on the aspect-level emotion analysis dataset: a navigator model with a fast learning speed guides a learner model with a slow learning speed, so that the learner model (i.e. model M2) can learn the domain-invariant semantic knowledge shared between the pre-training dataset and the target task dataset. Finally, a final aspect-level emotion analysis model is constructed, initialized with model M2, and fine-tuned on the aspect-level emotion analysis dataset to obtain the final high-performance aspect-level emotion analysis model Mfinal. The invention achieves state-of-the-art results on multiple public aspect-level emotion analysis datasets.

Description

Knowledge-guided aspect-level emotion analysis model training method
Technical Field
The invention belongs to the technical field of fine-grained emotion analysis, and particularly relates to a knowledge-guided aspect-level emotion analysis model training method.
Background
With the introduction and rapid development of deep learning, aspect-level emotion analysis model training based on deep learning has made significant progress. However, because training data for aspect-level emotion analysis is difficult to label, current aspect-level emotion analysis datasets commonly suffer from an insufficient number of samples, so training such models still faces great challenges. At present, the field mainly uses knowledge-transfer training methods to address this problem. Specifically, such a method first pre-trains on a sentence-level emotion analysis dataset (the pre-training dataset) with a sufficient number of samples to obtain a pre-training model that has learned rich semantic knowledge; it then fine-tunes the pre-training model on an aspect-level emotion analysis dataset (the target task dataset) with few samples, transferring the semantic knowledge of the pre-training model to the target task model to obtain the final aspect-level emotion analysis model. This training method can relieve the shortage of training samples to a certain extent, but it is difficult to achieve an ideal training effect: because there is often a large domain difference between the pre-training dataset and the target task dataset, directly fine-tuning the pre-training model on the target task dataset causes the semantic knowledge obtained by pre-training to be catastrophically forgotten, which greatly harms the training effect of the aspect-level emotion analysis model.
To solve the above problems, a few inventions further introduce domain adaptation techniques into the knowledge-transfer training method to reduce the domain difference between the pre-training dataset and the target task dataset. Specifically, these methods learn domain-invariant semantic knowledge by aligning the knowledge spaces of the pre-training dataset and the target task dataset, thereby reducing the domain difference and alleviating the catastrophic forgetting of semantic knowledge during model fine-tuning. Although these inventions can alleviate the knowledge-forgetting problem, they are only suitable for specific network structures, such as recurrent neural networks and attention-mechanism networks, and adapt poorly to other network structures.
Disclosure of Invention
In view of the above, the present invention provides a knowledge-guided aspect-level emotion analysis model training method. The method creatively provides a model training framework which can adapt to any aspect level emotion analysis network structure, solves the problem of insufficient training samples by using a knowledge migration training strategy, and can effectively relieve the problem of field difference between a pre-training data set and a target task data set, thereby effectively improving the semantic knowledge migration effect.
In order to achieve the purpose of the invention, the technical scheme adopted by the invention is as follows: a knowledge-guided aspect-level emotion analysis model training method is characterized by comprising the following steps:
(1) pre-train the aspect-level emotion analysis model on a sentence-level emotion analysis dataset with a sufficient number of samples to obtain the aspect-level emotion analysis pre-training model M1; the model learns rich semantic knowledge from the sentence-level emotion analysis dataset;
(2) using a knowledge-guided training strategy, train the pre-trained model M1 obtained in step (1) again on the aspect-level emotion analysis dataset to obtain the aspect-level emotion analysis pre-training model M2. Specifically, a navigator model and a learner model are introduced into the knowledge-guidance strategy, where the navigator model has a high learning rate and the learner model has a low learning rate. The training and updating of the learner model are guided by the navigator model, so that the learner model retains the previously learned domain knowledge of the pre-training dataset while learning the domain knowledge of the target task dataset, and finally learns domain-invariant knowledge under the constraint of a knowledge-guided loss function. The learner model obtained by this training is the pre-training model M2;
(3) finally, fine-tune the model M2 obtained in step (2) on the aspect-level emotion analysis dataset, transferring the learned domain-invariant knowledge to the aspect-level emotion analysis model, thereby obtaining the final high-performance aspect-level emotion analysis model Mfinal.
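The three stages above can be sketched as the following skeleton. The function names and the dictionary representation of a model are illustrative assumptions; the patent does not prescribe a concrete API, and the stage bodies are placeholders for real training.

```python
def pretrain(init_params, sentence_level_data):
    """Stage (1): pre-train on the (pseudo-aspect-level) sentence-level
    dataset to obtain model M1. The body is a placeholder for real training."""
    m1 = dict(init_params)
    m1["stage"] = "M1"
    return m1

def knowledge_guided_retrain(m1, aspect_level_data):
    """Stage (2): a fast navigator guides a slow learner on the aspect-level
    dataset; the trained learner is model M2."""
    navigator, learner = dict(m1), dict(m1)   # both initialized from M1
    # navigator: back-propagation steps under the knowledge-guided loss LG
    # learner:   moving-average updates toward the navigator
    learner["stage"] = "M2"
    return learner

def finetune(m2, aspect_level_data):
    """Stage (3): fine-tune M2 on the aspect-level dataset to obtain Mfinal."""
    m_final = dict(m2)
    m_final["stage"] = "Mfinal"
    return m_final

m1 = pretrain({"params": None}, sentence_level_data=[])
m2 = knowledge_guided_retrain(m1, aspect_level_data=[])
m_final = finetune(m2, aspect_level_data=[])
```

Each stage consumes the model produced by the previous one, which is the essential structure of the method: M1 carries sentence-level semantic knowledge, M2 carries domain-invariant knowledge, and Mfinal is the deployed classifier.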
Moreover, the specific implementation of training the aspect-level emotion analysis pre-training model on the sentence-level emotion analysis dataset in step (1) is as follows:
11) using a method for extracting the aspect words based on the grammar rules to extract the aspect words from the samples in the sentence-level emotion analysis data set to obtain pseudo aspect words, namely converting the sentence-level emotion analysis data set into a pseudo aspect-level emotion analysis data set;
12) select the network of any aspect-level emotion analysis model as the pre-training network and, considering that the position information of the aspect words and keywords in the extracted pseudo-aspect-level emotion analysis dataset is very noisy, remove the position information processing module from the pre-training network (if the model does not have such a module, nothing is removed). The main principle of the position information processing module is that words closer to the aspect word are more likely to be the related emotion words: words near the aspect word are given high weights and distant words low weights, helping the model extract the key emotional features. For example, in the sentence "the restaurant tastes good but the service is not very good", the "good" adjacent to the aspect "taste" is given a larger weight, alleviating the problem of the model being misled by the second "good". Almost all aspect-level emotion analysis models of the last two years contain such a position information processing module; it is a common component;
13) inputting the text in the pseudo-aspect-level emotion analysis data set into a pre-training network, and training to obtain an aspect-level emotion analysis pre-training model M1
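The position information processing module described in step 12) can be sketched as a simple distance-based token weighting. The linear decay below is an illustrative choice of ours, not the patent's exact scheme:

```python
def position_weights(tokens, aspect_index, decay=0.1):
    """Weight each token by its distance to the aspect word: nearby words
    (likely the relevant opinion words) get weights close to 1, distant
    words get small weights. The linear-decay form is illustrative."""
    return [max(0.0, 1.0 - decay * abs(i - aspect_index))
            for i in range(len(tokens))]

# The example sentence from the description; "tastes" is the aspect word.
tokens = "the restaurant tastes good but the service is not very good".split()
weights = position_weights(tokens, aspect_index=2)
```

Here the "good" adjacent to "tastes" receives weight 0.9, while the distant second "good" receives only about 0.2, which is exactly the misleading word the module is meant to down-weight.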
Furthermore, the specific implementation of retraining the aspect-level emotion analysis pre-training model M1 using the knowledge-guided training strategy in step (2) is as follows:
21) construct a navigator model and a learner model with the same network structure, which is essentially the same as that of the pre-training model M1. Specifically, considering that the label categories of the pre-training dataset and the target task dataset differ, the last classification layer of the pre-training model M1's network is modified to obtain the new network structure of the navigator and learner models;
22) use the pre-training model M1 to initialize the parameters of the navigator model and the learner model, i.e. keep their parameters consistent; because the classification layers of the navigator and learner networks differ from the pre-training network, the parameters of these classification layers are obtained by random initialization;
23) train the navigator and learner models on the aspect-level emotion analysis dataset. Specifically, the navigator model updates its parameters by the back-propagation algorithm, constrained by the knowledge-guided loss function LG. The loss function consists of two parts, a cross-entropy loss function Lc and a consistency loss function Lr, computed as in formula (1):
Lc = -Σi Σj yij * log(pg,ij)
Lr = Σi Σj (pg,ij - pl,ij)^2
LG = α*Lc + (1-α)*Lr        (1)
wherein y, pg and pl respectively denote the true labels of the aspect-level emotion analysis dataset, the prediction of the navigator model and the prediction of the learner model; i and j respectively denote the sample index and the label category index in the dataset; α is a balance parameter controlling the weight of the loss terms, set to 0.7 in the implementation. The classification loss guides the navigator model to learn the semantic knowledge of the target task domain, and the consistency loss alleviates catastrophic forgetting of the previously learned semantic knowledge.
The learner model does not update its parameters through back propagation; instead, it is updated from the parameters of the navigator model by a moving-average method, shown in formula (2):
θl(t) = β*θl(t-1) + (1-β)*θg(t)        (2)
wherein θl and θg respectively denote the parameters of the learner model and the navigator model; t denotes the t-th training iteration; β is a control parameter governing the update speed of the learner model, set to 0.99 in the implementation.
Through the constraint of the knowledge-guided loss function, the learner model can finally learn the semantic knowledge common to the domain of the pre-training dataset and the domain of the target task dataset. In addition, because the learner model is trained under the guidance of the navigator model rather than directly by back propagation, catastrophic forgetting of the semantic knowledge in the pre-training model is effectively avoided while training the learner model.
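The knowledge-guided loss of formula (1) and the moving-average update of formula (2) can be sketched with NumPy as follows. The mean-squared form of the consistency term is an assumption of ours; the patent only names Lr a consistency loss between the two models' predictions.

```python
import numpy as np

def knowledge_guided_loss(y, p_g, p_l, alpha=0.7):
    """LG = alpha*Lc + (1-alpha)*Lr, as in formula (1).
    y: one-hot true labels; p_g / p_l: navigator / learner predictions,
    all of shape (num_samples, num_classes). The mean-squared form of the
    consistency term Lr is an assumption."""
    eps = 1e-12
    l_c = -np.sum(y * np.log(p_g + eps)) / len(y)   # cross-entropy term Lc
    l_r = np.sum((p_g - p_l) ** 2) / len(y)         # consistency term Lr
    return alpha * l_c + (1 - alpha) * l_r

def ema_update(theta_l, theta_g, beta=0.99):
    """Learner update of formula (2): the learner slowly tracks the navigator."""
    return beta * theta_l + (1 - beta) * theta_g
```

When the navigator and learner both agree perfectly with the true labels the loss vanishes, and each `ema_update` call moves the learner only 1% of the way toward the navigator, which is what keeps the learner's pre-trained knowledge from being overwritten abruptly.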
Furthermore, the specific implementation of fine-tuning model M2 on the aspect-level emotion analysis dataset in step (3) to obtain the final high-performance aspect-level emotion analysis model Mfinal is as follows:
31) construct the final aspect-level emotion analysis model; compared with the network structure of model M2, all structures remain consistent except that the position information processing module is reintroduced (only when the originally selected aspect-level emotion analysis model has such a module);
32) use model M2 to initialize the parameters of the constructed aspect-level emotion analysis model, then fine-tune the model on the aspect-level emotion analysis target dataset (i.e. the aspect-level emotion analysis dataset), finally obtaining the high-performance aspect-level emotion analysis model Mfinal.
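Parameter initialization in steps 22) and 32) amounts to copying every layer from the source model and randomly re-initializing the classification layer whenever the number of label categories differs. A minimal sketch, with hypothetical layer names ("encoder.*", "classifier.*"):

```python
import numpy as np

def init_with_new_classifier(source_params, num_classes, hidden_dim, seed=0):
    """Copy all non-classifier parameters from a source model (M1 or M2) and
    randomly initialize a fresh classification layer with the required output
    dimension. Layer names are hypothetical, not the patent's."""
    rng = np.random.default_rng(seed)
    params = {k: v.copy() for k, v in source_params.items()
              if not k.startswith("classifier.")}
    params["classifier.weight"] = rng.normal(0.0, 0.02, size=(num_classes, hidden_dim))
    params["classifier.bias"] = np.zeros(num_classes)
    return params

m1 = {"encoder.weight": np.ones((8, 8)),
      "classifier.weight": np.ones((2, 8)),   # pre-training task: 2 label categories
      "classifier.bias": np.zeros(2)}
navigator = init_with_new_classifier(m1, num_classes=3, hidden_dim=8)  # target task: 3
```

The encoder weights survive the copy unchanged (preserving the learned semantic knowledge), while the classifier is rebuilt to match the target task's label space.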
Compared with the prior art, the invention has the following advantages and beneficial effects:
1) the method transfers the rich semantic knowledge learned from the sentence-level emotion analysis dataset to the target dataset, successfully alleviates the domain difference between the sentence-level emotion analysis dataset and the target dataset through the knowledge-guidance strategy, and effectively avoids catastrophic forgetting of knowledge, thereby achieving a better knowledge transfer effect.
2) Compared with the prior art, the model training method provided by the invention has the advantages of simplicity in implementation and wide applicability. The method is suitable for any aspect level emotion analysis model, and can obviously improve the training effect of the model on the premise of not greatly changing the structure of the model.
3) Compared with the prior art, the proposed training method effectively improves the robustness and classification performance of the model, and the trained model achieves state-of-the-art results on multiple public aspect-level emotion analysis datasets.
Drawings
FIG. 1 is a diagram illustrating a model training method according to an embodiment of the present invention.
FIG. 2 is a diagram showing the test results of the emotion analysis model according to the embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
As shown in FIG. 1, the technical scheme adopted by the invention is a knowledge-guided aspect-level emotion analysis model training method, which comprises the following steps:
(1) extracting aspect words of each sample in the sentence-level emotion analysis data set by using an aspect word extraction method based on grammar rules, and converting the data set into a pseudo-aspect-level emotion analysis data set;
(2) selecting a network of any aspect level emotion analysis model as a pre-training network, removing a position information processing module in the pre-training network, and then training on a pseudo aspect level emotion analysis data set to obtain an aspect level emotion analysis pre-training model M1
(3) Construct a navigator model and a learner model with the same network structure; compared with the network structure of the pre-training model M1 in (2), it is consistent except for the final classification layer. Use the pre-trained model M1 to initialize the navigator model and the learner model respectively;
(4) Train the navigator model and the learner model on the aspect-level emotion analysis dataset; the training result of the learner model is the second-stage pre-training model M2. Specifically, the navigator model updates its parameters by the back-propagation algorithm, constrained by the knowledge-guided loss function LG. The loss function consists of two parts, a cross-entropy loss function Lc and a consistency loss function Lr, computed as in formula (1):
Lc = -Σi Σj yij * log(pg,ij)
Lr = Σi Σj (pg,ij - pl,ij)^2
LG = α*Lc + (1-α)*Lr        (1)
wherein y, pg and pl respectively denote the true labels of the aspect-level emotion analysis dataset, the prediction of the navigator model and the prediction of the learner model; i and j respectively denote the sample index and the label category index in the dataset; α is the balance parameter controlling the weight of the loss terms, taken to be 0.7.
It is noted that the learner model does not update its parameters through back propagation; instead, it is updated from the parameters of the navigator model by a moving-average method, shown in formula (2):
θl(t) = β*θl(t-1) + (1-β)*θg(t)        (2)
wherein θl and θg respectively denote the parameters of the learner model and the navigator model; t denotes the t-th training iteration; β is a control parameter governing the update speed of the learner model, taken to be 0.99.
(5) Construct the final aspect-level emotion analysis model; compared with the network structure of model M2, the other structures remain consistent except for the reintroduction of the position information processing module. Use model M2 from (4) to initialize the emotion analysis model.
(6) Fine-tune the emotion analysis model on the aspect-level emotion analysis target dataset to obtain the final high-performance aspect-level emotion analysis model Mfinal.
The aspect-level emotion analysis model training method provided by the invention is suitable for any aspect-level emotion analysis model. In this embodiment, GCAE, a classical convolutional-neural-network-based model for the aspect-level emotion analysis task, is selected to illustrate the specific implementation process of applying the training method to different models. The implementation details are as follows:
First, the GCAE network is used as the pre-training network and trained on the sentence-level emotion analysis dataset to obtain the pre-training model M1 (since the GCAE model does not contain a position information processing module, none is removed in this embodiment). Then, considering that the numbers of label categories in the pre-training dataset and the target task dataset are inconsistent, the output dimension of the classification layer in the GCAE network is modified to obtain a new GCAE network, which serves as the network structure of the navigator model and the learner model. Next, the pre-training model M1 is used to initialize the parameters of the navigator model and the learner model, which are trained on the target task dataset with the knowledge-guided training strategy to obtain a learner model that has learned rich domain-invariant knowledge, i.e. the second-stage pre-training model M2. Finally, the GCAE network with the modified classification-layer output dimension is used as the final emotion analysis network, its parameters are initialized with model M2, and it is fine-tuned on the target task dataset to obtain the final aspect-level emotion analysis GCAE model.
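A runnable toy version of the knowledge-guided stage is sketched below. A plain softmax classifier on synthetic 2-D data stands in for GCAE (a simplification of ours, not the patent's setup), a finite-difference gradient stands in for back propagation, and the learning rate and β are scaled for the toy: the navigator descends on LG while the learner follows it by moving average, exactly the interaction described above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y_lab = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy 2-class "sentiment" labels
Y = np.eye(2)[y_lab]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(theta):
    # theta packs a 2x2 weight matrix and a length-2 bias
    W, b = theta[:4].reshape(2, 2), theta[4:]
    return softmax(X @ W.T + b)

def loss_G(theta_g, theta_l, alpha=0.7):
    p_g, p_l = predict(theta_g), predict(theta_l)
    l_c = -np.mean(np.sum(Y * np.log(p_g + 1e-12), axis=1))   # cross-entropy Lc
    l_r = np.mean(np.sum((p_g - p_l) ** 2, axis=1))           # consistency Lr (MSE form assumed)
    return alpha * l_c + (1 - alpha) * l_r

def num_grad(f, theta, eps=1e-5):
    # finite-difference gradient; fine for a 6-parameter toy
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return g

theta_g = rng.normal(scale=0.1, size=6)   # navigator: fast, gradient updates
theta_l = theta_g.copy()                  # learner: slow, moving-average updates
beta, lr = 0.9, 0.5                       # beta lowered from 0.99 so the toy converges quickly

start = loss_G(theta_g, theta_l)
for _ in range(200):
    theta_g = theta_g - lr * num_grad(lambda th: loss_G(th, theta_l), theta_g)
    theta_l = beta * theta_l + (1 - beta) * theta_g   # formula (2)
end = loss_G(theta_g, theta_l)
```

After training, the loss has dropped and the learner, despite never receiving a gradient, classifies the toy data correctly because it smoothly tracks the navigator.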
FIG. 2 shows a visualization of the prediction results of the aspect-level emotion analysis model. Specifically, the figure shows the improvement that the proposed training method brings to the GCAE model: the first row uses the conventional model training method, and the second row uses the knowledge-guided training method proposed by the invention. The proposed method effectively helps the aspect-level emotion analysis model learn rich semantic knowledge and extract key features, and finally effectively improves the classification performance of the emotion analysis model.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A knowledge-guided aspect-level emotion analysis model training method is characterized by comprising the following steps:
(1) pre-train the aspect-level emotion analysis model on a sentence-level emotion analysis dataset with a sufficient number of samples to obtain the aspect-level emotion analysis pre-training model M1; the model learns rich semantic knowledge from the sentence-level emotion analysis dataset;
(2) using a knowledge-guided training strategy, train the pre-trained model M1 obtained in step (1) again on the aspect-level emotion analysis dataset to obtain the aspect-level emotion analysis pre-training model M2; specifically, a navigator model and a learner model are introduced into the knowledge-guidance strategy, wherein the navigator model has a high learning rate and the learner model has a low learning rate; the training and updating of the learner model are guided by the navigator model, so that the learner model retains the previously learned domain knowledge of the pre-training dataset while learning the domain knowledge of the target task dataset, finally learning domain-invariant knowledge under the constraint of a knowledge-guided loss function; the trained learner model is the pre-training model M2;
(3) finally, fine-tune the model M2 obtained in step (2) on the aspect-level emotion analysis dataset, transferring the learned domain-invariant knowledge to the aspect-level emotion analysis model, thereby obtaining the final high-performance aspect-level emotion analysis model Mfinal.
2. The knowledge-guided aspect-level emotion analysis model training method of claim 1, wherein: the specific implementation of training the aspect-level emotion analysis pre-training model on the sentence-level emotion analysis dataset in step (1) is as follows,
11) using a method for extracting the aspect words based on the grammar rules to extract the aspect words from the samples in the sentence-level emotion analysis data set to obtain pseudo aspect words, namely converting the sentence-level emotion analysis data set into a pseudo aspect-level emotion analysis data set;
12) selecting a network of any aspect level emotion analysis model as a pre-training network, removing a position information processing module in the pre-training network, and if the pre-training network does not have the module, not removing the module;
13) inputting the text in the pseudo-aspect-level emotion analysis data set into a pre-training network, and training to obtain an aspect-level emotion analysis pre-training model M1
3. The knowledge-guided aspect-level emotion analysis model training method of claim 1, wherein: the specific implementation of retraining the pre-training model M1 using the knowledge-guided training strategy in step (2) is as follows,
21) respectively construct a navigator model and a learner model with the same network structure; the last classification layer of the pre-training model M1's network is modified to obtain the new network structure of the navigator and learner models;
22) use the pre-trained model M1 to initialize the parameters of the navigator model and the learner model; because the classification layers of the navigator and learner model networks differ from that of the pre-training model, the parameters of these classification layers are obtained by random initialization;
23) train the navigator model and the learner model on the aspect-level emotion analysis dataset; specifically, the navigator model updates its parameters according to the back-propagation algorithm, constrained by the knowledge-guided loss function LG; the loss function comprises two parts, a cross-entropy loss function Lc and a consistency loss function Lr, computed as in formula (1),
Lc = -Σi Σj yij * log(pg,ij)
Lr = Σi Σj (pg,ij - pl,ij)^2
LG = α*Lc + (1-α)*Lr        (1)
wherein y, pg and pl respectively denote the true labels of the aspect-level emotion analysis dataset, the prediction of the navigator model and the prediction of the learner model; i and j respectively denote the sample index and the label category index in the dataset; α is a balance parameter controlling the weight of the loss terms;
the learner model is updated from the parameters of the navigator model by a moving-average method, shown in formula (2),
θl(t) = β*θl(t-1) + (1-β)*θg(t)        (2)
wherein θl and θg respectively denote the parameters of the learner model and the navigator model; t denotes the t-th training iteration and β is a control parameter.
4. The knowledge-guided aspect-level emotion analysis model training method of claim 2, wherein: the specific implementation of fine-tuning model M2 on the aspect-level emotion analysis dataset in step (3) to obtain the final high-performance aspect-level emotion analysis model Mfinal is as follows,
31) construct the final aspect-level emotion analysis model; when the pre-training network has a position information processing module, reintroduce that module into the aspect-level emotion analysis model, keeping the other structures consistent with the pre-training model M2;
32) use the pre-trained model M2 to initialize the parameters of the constructed aspect-level emotion analysis model, then fine-tune the model on the aspect-level emotion analysis dataset, finally obtaining the high-performance aspect-level emotion analysis model Mfinal.
5. The knowledge-guided aspect-level emotion analysis model training method of claim 3, wherein: α is taken to be 0.7 and β is taken to be 0.99.
6. The knowledge-guided aspect-level emotion analysis model training method of claim 2, wherein: in step 12), a GCAE network is selected as the pre-training network.
CN202110353985.6A 2021-04-01 2021-04-01 Knowledge-guided aspect-level emotion analysis model training method Active CN113204645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110353985.6A CN113204645B (en) 2021-04-01 2021-04-01 Knowledge-guided aspect-level emotion analysis model training method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110353985.6A CN113204645B (en) 2021-04-01 2021-04-01 Knowledge-guided aspect-level emotion analysis model training method

Publications (2)

Publication Number Publication Date
CN113204645A true CN113204645A (en) 2021-08-03
CN113204645B CN113204645B (en) 2023-05-16

Family

ID=77026093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110353985.6A Active CN113204645B (en) 2021-04-01 2021-04-01 Knowledge-guided aspect-level emotion analysis model training method

Country Status (1)

Country Link
CN (1) CN113204645B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032646A (en) * 2019-05-08 2019-07-19 山西财经大学 Cross-domain text sentiment classification method based on multi-source domain adaptive joint learning
CN110516245A (en) * 2019-08-27 2019-11-29 蓝盾信息安全技术股份有限公司 Fine granularity sentiment analysis method, apparatus, computer equipment and storage medium
US20200167419A1 (en) * 2018-11-27 2020-05-28 Sap Se Exploiting document knowledge for aspect-level sentiment classification
CN111680160A (en) * 2020-06-16 2020-09-18 西北师范大学 Deep migration learning method for text emotion classification
CN112163091A (en) * 2020-09-25 2021-01-01 大连民族大学 CNN-based aspect-level cross-domain emotion analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU JUHUA et al.: "SemiText: Scene text detection with semi-supervised learning", Neurocomputing *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113836892A (en) * 2021-09-08 2021-12-24 灵犀量子(北京)医疗科技有限公司 Sample size data extraction method and device, electronic equipment and storage medium
CN113836892B (en) * 2021-09-08 2023-08-08 灵犀量子(北京)医疗科技有限公司 Sample size data extraction method and device, electronic equipment and storage medium
CN115062611A (en) * 2022-05-23 2022-09-16 广东外语外贸大学 Training method, device, equipment and storage medium of grammar error correction model
CN117540725A (en) * 2024-01-05 2024-02-09 摩尔线程智能科技(北京)有限责任公司 Aspect-level emotion analysis method and device, electronic equipment and storage medium
CN117540725B (en) * 2024-01-05 2024-03-22 摩尔线程智能科技(北京)有限责任公司 Aspect-level emotion analysis method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113204645B (en) Knowledge-guided aspect-level emotion analysis model training method
CN113326731B (en) Cross-domain pedestrian re-identification method based on momentum network guidance
CN108399428B Triplet loss function design method based on the trace ratio criterion
CN111177376B (en) Chinese text classification method based on BERT and CNN hierarchical connection
CN109003601A A cross-language end-to-end speech recognition method for the low-resource Tujia language
CN110046252B (en) Medical text grading method based on attention mechanism neural network and knowledge graph
CN110070855B (en) Voice recognition system and method based on migrating neural network acoustic model
CN111259987A Method for extracting event subjects based on BERT multi-model fusion
CN110059191A Text sentiment classification method and device
CN114757182A (en) BERT short text sentiment analysis method for improving training mode
CN113887480B Burmese image text recognition method and device based on multi-decoder joint learning
CN115391563B (en) Knowledge graph link prediction method based on multi-source heterogeneous data fusion
CN113822054A (en) Chinese grammar error correction method and device based on data enhancement
CN115496072A Relation extraction method based on contrastive learning
CN116522165B (en) Public opinion text matching system and method based on twin structure
CN114357166B (en) Text classification method based on deep learning
CN111708896B (en) Entity relationship extraction method applied to biomedical literature
CN115080736A (en) Model adjusting method and device of discriminant language model
CN112598065A (en) Memory-based gated convolutional neural network semantic processing system and method
CN117574258B (en) Text classification method based on text noise labels and collaborative training strategies
CN110263352A Method and device for training a deep neural machine translation model
CN114444506B (en) Relation triplet extraction method for fusing entity types
CN114996424B (en) Weak supervision cross-domain question-answer pair generation method based on deep learning
CN113379068B (en) Deep learning architecture searching method based on structured data
CN114462380B (en) Story ending generation method based on emotion pre-training model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant