CN117540725A - Aspect-level emotion analysis method and device, electronic equipment and storage medium
- Publication number: CN117540725A
- Application number: CN202410023911.XA
- Authority: CN (China)
- Prior art keywords: training, segment, vector, layer, token
- Legal status: Granted
Classifications
- G06F40/211 — Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
- G06F40/30 — Semantic analysis
- G06N3/09 — Supervised learning
Abstract
The disclosure relates to the technical field of electric digital data processing, in particular to an aspect-level emotion analysis method and device, electronic equipment and a storage medium. The method comprises the following steps: training a sentence-level emotion analysis model, where the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, and the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer; in response to the sentence-level emotion analysis model being trained until a first preset training condition is met, initializing parameters of the second input layer according to parameters of the first input layer, and initializing parameters of the second capsule layer according to parameters of the first capsule layer; and training the aspect-level emotion analysis model with a second training text set, where the second training text set includes a plurality of second training texts, each carrying an aspect-level emotion tag. The method and device can improve the accuracy of aspect-level emotion analysis while reducing the acquisition cost of supervision data.
Description
Technical Field
The disclosure relates to the technical field of electric digital data processing, in particular to a training method of an aspect emotion analysis model, an aspect emotion analysis method, a training device of the aspect emotion analysis model, an aspect emotion analysis device, electronic equipment and a storage medium.
Background
With the rapid development of information technology, commenting on the Internet has become an important way for people to express viewpoints and share experiences. Meanwhile, Internet comment texts have become an important source of decision-reference information. However, the explosive growth of information has made it more difficult to extract useful information from it. Emotion analysis technology automatically acquires the views expressed in comment text: by computing the views, emotions, evaluations and attitudes in the text, it realizes automatic emotion recognition and makes it convenient for users to obtain viewpoint information.
Text emotion analysis studies the computational treatment of people's views, evaluations, attitudes and emotions toward entities (including products, services, organizations, individuals, subjects, events, topics and their attributes, etc.). It involves many subdivided directions, such as viewpoint extraction, emotion mining, subjectivity analysis, emotion calculation, comment mining, etc. Although these tasks differ slightly, they all fall into the category of emotion analysis, which has evolved into an important research direction in the field of natural language processing.
Text emotion analysis can be classified into chapter-level, sentence-level and aspect-level emotion analysis according to analysis granularity. Chapter-level and sentence-level emotion analysis tasks assume that a piece of text carries only one emotion: they analyze a given text and determine whether its overall emotion polarity is positive, negative, neutral, etc. However, the overall emotion of a text masks its details and does not reflect people's fine-grained emotional expression toward opinion targets. If only the overall emotion is considered and specific details are ignored, erroneous results may be produced in real-world applications such as recommendation systems and question-answering systems. Thus, to perform a more complete emotion analysis, a system needs to discover the various aspect targets of a text comment and determine the emotion information the text expresses for each aspect; this is the aspect-level emotion analysis technique. Performing aspect-level emotion analysis on text is therefore of great significance.
Disclosure of Invention
The present disclosure provides an aspect-level emotion analysis technique.
According to an aspect of the present disclosure, there is provided a training method of an aspect-level emotion analysis model, including:
training a sentence-level emotion analysis model by adopting a first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer;
Responding to the sentence-level emotion analysis model training until a first preset training condition is met, initializing parameters of the second input layer according to the parameters of the first input layer, and initializing parameters of the second capsule layer according to the parameters of the first capsule layer;
training the aspect emotion analysis model by adopting a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
In one possible implementation of the present invention,
the aspect emotion analysis model further comprises a second pooling layer, and the second pooling layer is arranged behind the second capsule layer and before the second classification layer;
the sentence-level emotion analysis model further comprises a first pooling layer, and the first pooling layer is arranged behind the first capsule layer and in front of the first classification layer;
the network structure of the second pooling layer is the same as the network structure of the first pooling layer.
In one possible implementation of the present invention,
the training sentence-level emotion analysis model by adopting the first training text set comprises the following steps: extracting a first input vector corresponding to any first training text in the first training text set through the first input layer; extracting features of the first input vector through the first capsule layer to obtain a first feature vector corresponding to the first training text; outputting sentence-level emotion classification prediction results corresponding to the first feature vectors through the first classification layer; training the sentence-level emotion analysis model according to the sentence-level emotion classification prediction result and the sentence-level emotion label corresponding to the first training text;
Training the aspect emotion analysis model by using a second training text set, wherein the training comprises the following steps: extracting a second input vector corresponding to any second training text in the second training text set through the second input layer; extracting features of the second input vector through the second capsule layer to obtain a second feature vector corresponding to the second training text; outputting a first-aspect emotion classification prediction result corresponding to the second feature vector through the second classification layer; and training the aspect emotion analysis model according to the aspect emotion classification prediction result of the first aspect and the aspect emotion label corresponding to the second training text.
In one possible implementation of the present invention,
the extracting the first input vector corresponding to the first training text includes: obtaining a first word vector corresponding to the first training text; determining a first input vector corresponding to the first training text according to the first word vector;
the extracting the second input vector corresponding to the second training text includes: obtaining a second word vector corresponding to the second training text; and determining a second input vector corresponding to the second training text according to the second word vector.
In one possible implementation of the present invention,
the determining, according to the first word vector, a first input vector corresponding to the first training text includes: obtaining a first position vector corresponding to the first training text, wherein the first position vector is used for representing the position of a preset aspect word in the first training text; determining a first input vector corresponding to the first training text according to the first word vector and the first position vector;
the determining, according to the second word vector, a second input vector corresponding to the second training text includes: obtaining a second position vector corresponding to the second training text, wherein the second position vector is used for representing the position of the preset aspect word in the second training text; and determining a second input vector corresponding to the second training text according to the second word vector and the second position vector.
In one possible implementation of the present invention,
the first training text comprises at least one first token, and the elements of the first input vector comprise at least one first token input vector corresponding to the at least one first token; the feature extraction is performed on the first input vector to obtain a first feature vector corresponding to the first training text, including: extracting at least one first token input vector segment from the first input vector, wherein any first token input vector segment comprises N continuous first token input vectors, and N is an integer greater than or equal to 1; extracting at least one first segment feature vector corresponding to the at least one first token input vector segment; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector;
The second training text comprises at least one second token, and the elements of the second input vector comprise at least one second token input vector corresponding to the at least one second token; the feature extraction is performed on the second input vector to obtain a second feature vector corresponding to the second training text, including: extracting at least one second token input vector segment from the second input vectors, wherein any second token input vector segment comprises N continuous second token input vectors, and N is an integer greater than or equal to 1; extracting at least one second segment feature vector corresponding to the at least one second token input vector segment; and obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector.
In one possible implementation of the present invention,
the extracting at least one first token input vector segment from the first input vector comprises: extracting at least one first token input vector segment from the first input vector through a sliding window with a window size of N and a sliding step length of 1;
the extracting at least one second token input vector segment from the second input vector comprises: and extracting at least one second token input vector segment from the second input vector through a sliding window with a window size of N and a sliding step length of 1.
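As an illustration of the sliding-window segment extraction described above, the following is a minimal PyTorch sketch; the function name and tensor layout are assumptions, not part of the disclosure. It slides a window of size N with step 1 over the stacked token input vectors.

```python
import torch

def extract_segments(token_vectors: torch.Tensor, n: int) -> torch.Tensor:
    # token_vectors: (seq_len, dim), one input vector per token.
    # Returns (seq_len - n + 1, n, dim): every contiguous run of n
    # token input vectors, obtained with window size n and stride 1.
    return token_vectors.unfold(0, n, 1).permute(0, 2, 1)

segments = extract_segments(torch.randn(10, 300), n=3)  # -> shape (8, 3, 300)
```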
In one possible implementation of the present invention,
the obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector includes: determining at least one first weight corresponding to the at least one first segment feature vector; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector and the at least one first weight; wherein, the first weight corresponding to the first token segment containing the preset aspect word is different from the first weight corresponding to the first token segment not containing the preset aspect word;
the obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector includes: determining at least one second weight corresponding to the at least one second segment feature vector; obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector and the at least one second weight; wherein, the second weight corresponding to the second token segment containing the preset aspect word is different from the second weight corresponding to the second token segment not containing the preset aspect word.
In one possible implementation of the present invention,
the determining at least one first weight corresponding to the at least one first segment feature vector includes: for any first segment feature vector, responding to the fact that a first token segment corresponding to the first segment feature vector comprises the preset aspect word, and determining a first weight corresponding to the first segment feature vector based on a preset activation function; determining a first weight corresponding to the first segment feature vector as a preset value in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word;
the determining at least one second weight corresponding to the at least one second segment feature vector includes: for any second segment feature vector, responding to the second token segment corresponding to the second segment feature vector to comprise the preset aspect word, and determining a second weight corresponding to the second segment feature vector based on a preset activation function; determining a second weight corresponding to the second segment feature vector as the preset value in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word;
Wherein the preset value is outside the value range of the preset activation function.
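A minimal sketch of this weighting scheme, assuming a sigmoid as the preset activation function and a learned scoring vector (the scoring function itself is not specified here): segments containing the preset aspect word receive a weight in the sigmoid's open range (0, 1), while all other segments receive the preset value 0.0, which lies outside that range.

```python
import torch

def segment_weights(seg_feats: torch.Tensor,
                    contains_aspect: torch.Tensor,
                    w: torch.Tensor,
                    preset: float = 0.0) -> torch.Tensor:
    # seg_feats: (num_segs, dim) segment feature vectors.
    # contains_aspect: (num_segs,) bool, True if the segment's tokens
    # include the preset aspect word.
    # w: (dim,) scoring vector (an illustrative assumption).
    scores = torch.sigmoid(seg_feats @ w)  # in (0, 1) for aspect segments
    return torch.where(contains_aspect,
                       scores,
                       torch.full_like(scores, preset))  # 0.0 lies outside (0, 1)
```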
According to an aspect of the present disclosure, there is provided an aspect-level emotion analysis method including:
acquiring an aspect emotion analysis model obtained by training the training method of the aspect emotion analysis model;
inputting the text to be processed into the aspect emotion analysis model, and outputting a second aspect emotion classification prediction result corresponding to the text to be processed through the aspect emotion analysis model.
According to an aspect of the present disclosure, there is provided a training apparatus of an aspect-level emotion analysis model, including:
the first training module is used for training a sentence-level emotion analysis model by adopting a first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer;
The initialization module is used for responding to training of the sentence-level emotion analysis model until a first preset training condition is met, initializing parameters of the second input layer according to the parameters of the first input layer, and initializing parameters of the second capsule layer according to the parameters of the first capsule layer;
the second training module is used for training the aspect emotion analysis model by adopting a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
In one possible implementation of the present invention,
the aspect emotion analysis model further comprises a second pooling layer, and the second pooling layer is arranged behind the second capsule layer and before the second classification layer;
the sentence-level emotion analysis model further comprises a first pooling layer, and the first pooling layer is arranged behind the first capsule layer and in front of the first classification layer;
the network structure of the second pooling layer is the same as the network structure of the first pooling layer.
In one possible implementation of the present invention,
the first training module is used for: extracting a first input vector corresponding to any first training text in the first training text set through the first input layer; extracting features of the first input vector through the first capsule layer to obtain a first feature vector corresponding to the first training text; outputting sentence-level emotion classification prediction results corresponding to the first feature vectors through the first classification layer; training the sentence-level emotion analysis model according to the sentence-level emotion classification prediction result and the sentence-level emotion label corresponding to the first training text;
The second training module is used for: extracting a second input vector corresponding to any second training text in the second training text set through the second input layer; extracting features of the second input vector through the second capsule layer to obtain a second feature vector corresponding to the second training text; outputting a first-aspect emotion classification prediction result corresponding to the second feature vector through the second classification layer; and training the aspect emotion analysis model according to the aspect emotion classification prediction result of the first aspect and the aspect emotion label corresponding to the second training text.
In one possible implementation of the present invention,
the first training module is used for: obtaining a first word vector corresponding to the first training text; determining a first input vector corresponding to the first training text according to the first word vector;
the second training module is used for: obtaining a second word vector corresponding to the second training text; and determining a second input vector corresponding to the second training text according to the second word vector.
In one possible implementation of the present invention,
the first training module is used for: obtaining a first position vector corresponding to the first training text, wherein the first position vector is used for representing the position of a preset aspect word in the first training text; determining a first input vector corresponding to the first training text according to the first word vector and the first position vector;
The second training module is used for: obtaining a second position vector corresponding to the second training text, wherein the second position vector is used for representing the position of the preset aspect word in the second training text; and determining a second input vector corresponding to the second training text according to the second word vector and the second position vector.
In one possible implementation of the present invention,
the first training text comprises at least one first token, and the elements of the first input vector comprise at least one first token input vector corresponding to the at least one first token; the first training module is used for: extracting at least one first token input vector segment from the first input vector, wherein any first token input vector segment comprises N continuous first token input vectors, and N is an integer greater than or equal to 1; extracting at least one first segment feature vector corresponding to the at least one first token input vector segment; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector;
the second training text comprises at least one second token, and the elements of the second input vector comprise at least one second token input vector corresponding to the at least one second token; the second training module is used for: extracting at least one second token input vector segment from the second input vectors, wherein any second token input vector segment comprises N continuous second token input vectors, and N is an integer greater than or equal to 1; extracting at least one second segment feature vector corresponding to the at least one second token input vector segment; and obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector.
In one possible implementation of the present invention,
the first training module is used for: extracting at least one first token input vector segment from the first input vector through a sliding window with a window size of N and a sliding step length of 1;
the second training module is used for: and extracting at least one second token input vector segment from the second input vector through a sliding window with a window size of N and a sliding step length of 1.
In one possible implementation of the present invention,
the first training module is used for: determining at least one first weight corresponding to the at least one first segment feature vector; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector and the at least one first weight; wherein, the first weight corresponding to the first token segment containing the preset aspect word is different from the first weight corresponding to the first token segment not containing the preset aspect word;
the second training module is used for: determining at least one second weight corresponding to the at least one second segment feature vector; obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector and the at least one second weight; wherein, the second weight corresponding to the second token segment containing the preset aspect word is different from the second weight corresponding to the second token segment not containing the preset aspect word.
In one possible implementation of the present invention,
the first training module is used for: for any first segment feature vector, responding to the fact that a first token segment corresponding to the first segment feature vector comprises the preset aspect word, and determining a first weight corresponding to the first segment feature vector based on a preset activation function; determining a first weight corresponding to the first segment feature vector as a preset value in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word;
the second training module is used for: for any second segment feature vector, responding to the second token segment corresponding to the second segment feature vector to comprise the preset aspect word, and determining a second weight corresponding to the second segment feature vector based on a preset activation function; determining a second weight corresponding to the second segment feature vector as the preset value in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word;
wherein the preset value is outside the value range of the preset activation function.
According to an aspect of the present disclosure, there is provided an aspect-level emotion analysis apparatus including:
The obtaining module is used for obtaining the aspect emotion analysis model obtained by training of the training device of the aspect emotion analysis model;
the aspect emotion analysis module is used for inputting the text to be processed into the aspect emotion analysis model, and outputting a second aspect emotion classification prediction result corresponding to the text to be processed through the aspect emotion analysis model.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the above method.
In an embodiment of the disclosure, a sentence-level emotion analysis model is trained by using a first training text set, wherein the first training text set includes a plurality of first training texts and the first training texts include sentence-level emotion labels; the sentence-level emotion analysis model includes a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model includes a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is the same as that of the second input layer, and the network structure of the first capsule layer is the same as that of the second capsule layer. In response to the sentence-level emotion analysis model being trained until a first preset training condition is satisfied, the parameters of the second input layer are initialized according to the parameters of the first input layer and the parameters of the second capsule layer are initialized according to the parameters of the first capsule layer, and the aspect-level emotion analysis model is then trained by using a second training text set, wherein the second training text set includes a plurality of second training texts that include aspect-level emotion labels. In this way, the knowledge learned by the sentence-level emotion analysis model from a large amount of sentence-level supervision data is transferred to the aspect-level emotion analysis model, which is then fine-tuned by using a small amount of aspect-level supervision data. Because sentence-level supervision data is easy to acquire, the sentence-level emotion analysis model can be fully trained with a large amount of sentence-level supervision data, so that the aspect-level emotion analysis model also learns sufficiently. By adopting the embodiment of the disclosure, the accuracy of the aspect-level emotion analysis performed by the aspect-level emotion analysis model can be improved while reducing the acquisition cost of supervision data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
FIG. 1 illustrates a flow chart of a training method for an aspect-level emotion analysis model provided by an embodiment of the present disclosure.
Fig. 2 illustrates an architecture diagram of a sentence-level emotion analysis model in a training method of an aspect-level emotion analysis model provided in an embodiment of the present disclosure.
Fig. 3 illustrates an architectural diagram of an aspect emotion analysis model in a training method of the aspect emotion analysis model provided in an embodiment of the present disclosure.
FIG. 4 illustrates a block diagram of a training apparatus for an aspect-level emotion analysis model provided by an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, the aspect emotion analysis mainly includes three steps: aspect word extraction, viewpoint word extraction and aspect-level emotion classification.
The extraction of aspect words and viewpoint words can adopt a supervised or an unsupervised extraction method. Supervised extraction methods include NER (Named Entity Recognition) model-based extraction, dictionary-based extraction, etc. Unsupervised extraction methods include LDA (Latent Dirichlet Allocation) model-based extraction, TF-IDF (Term Frequency-Inverse Document Frequency) based extraction, dependency syntax analysis based extraction, and the like.
There are mainly two approaches to aspect-level emotion classification. The first approach is to employ feature engineering plus a machine learning model, for example a feature engineering + GBDT (Gradient Boosting Decision Tree) model. The second approach is to employ a deep learning model, for example an LSTM (Long Short-Term Memory) model, a BERT (Bidirectional Encoder Representations from Transformers) model, and the like.
The related art requires a large amount of aspect-level supervision data when training an aspect-level emotion analysis model, and the acquisition cost of the aspect-level supervision data is extremely high.
The embodiment of the disclosure provides a training method for an aspect-level emotion analysis model. A first training text set is adopted to train a sentence-level emotion analysis model, wherein the first training text set comprises a plurality of first training texts and the first training texts comprise sentence-level emotion labels; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is the same as that of the second input layer, and the network structure of the first capsule layer is the same as that of the second capsule layer. In response to the sentence-level emotion analysis model being trained until a first preset training condition is met, the parameters of the second input layer are initialized according to the parameters of the first input layer and the parameters of the second capsule layer are initialized according to the parameters of the first capsule layer; the aspect-level emotion analysis model is then trained by adopting a second training text set, wherein the second training text set comprises a plurality of second training texts that comprise aspect-level emotion labels. In this way, the knowledge learned by the sentence-level emotion analysis model from a large amount of sentence-level supervision data is transferred to the aspect-level emotion analysis model, and finally the aspect-level emotion analysis model is fine-tuned by using a small amount of aspect-level supervision data. Because sentence-level supervision data is easy to acquire, the sentence-level emotion analysis model can be fully trained with a large amount of sentence-level supervision data, so that the aspect-level emotion analysis model also learns sufficiently. By adopting the embodiment of the disclosure, the accuracy of the aspect-level emotion analysis performed by the aspect-level emotion analysis model can be improved while reducing the acquisition cost of supervision data.
The following describes in detail the training method of the aspect emotion analysis model provided by the embodiment of the present disclosure with reference to the accompanying drawings.
FIG. 1 illustrates a flow chart of a training method for an aspect-level emotion analysis model provided by an embodiment of the present disclosure. In a possible implementation manner, the execution subject of the training method of the aspect-level emotion analysis model may be a training device of the aspect-level emotion analysis model; for example, the training method may be executed by a terminal device, a server or another electronic device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the training method of the aspect-level emotion analysis model may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 1, the training method of the aspect-level emotion analysis model includes steps S11 to S13.
In step S11, training a sentence-level emotion analysis model by adopting a first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer.
In step S12, in response to the sentence-level emotion analysis model training until a first preset training condition is satisfied, initializing parameters of the second input layer according to the parameters of the first input layer, and initializing parameters of the second capsule layer according to the parameters of the first capsule layer.
In step S13, training the aspect emotion analysis model by using a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
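A runnable sketch of steps S11 to S13 follows. It is illustrative only: the capsule layer is reduced to a single linear map for brevity, and all class and variable names are assumptions rather than the disclosed implementation.

```python
import torch
import torch.nn as nn

class EmotionModel(nn.Module):
    # Shared skeleton: input layer -> capsule layer -> classification layer.
    # The real capsule layer is stubbed out with a linear map for brevity.
    def __init__(self, vocab_size=1000, dim=64, num_classes=3):
        super().__init__()
        self.input_layer = nn.Embedding(vocab_size, dim)
        self.capsule_layer = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, token_ids):
        x = self.input_layer(token_ids)         # (batch, seq, dim)
        h = torch.relu(self.capsule_layer(x))   # (batch, seq, dim)
        return self.classifier(h.mean(dim=1))   # pool over tokens, classify

sentence_model = EmotionModel()  # S11: train this on sentence-level labels
aspect_model = EmotionModel()    # same network structure for the shared layers

# S12: once the first preset training condition is met, copy the shared layers;
# the second classification layer keeps its own (e.g. random) initialization.
aspect_model.input_layer.load_state_dict(sentence_model.input_layer.state_dict())
aspect_model.capsule_layer.load_state_dict(sentence_model.capsule_layer.state_dict())

# S13: fine-tune aspect_model on the (smaller) aspect-level training text set.
```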
In the embodiment of the disclosure, the sentence-level emotion analysis model may be used for performing sentence-level emotion analysis on the input text to obtain sentence-level emotion classification results corresponding to the input text. The aspect-level emotion analysis model can be used for carrying out aspect-level emotion analysis on the input text to obtain an aspect-level emotion classification result corresponding to the input text.
The sentence-level emotion analysis model may at least include a first input layer, a first capsule layer, and a first classification layer, where the first capsule layer is disposed after the first input layer, and the first classification layer is disposed after the first capsule layer. The aspect-level emotion analysis model may include at least a second input layer, a second capsule layer, and a second classification layer, wherein the second capsule layer is disposed after the second input layer, and the second classification layer is disposed after the second capsule layer.
In one possible implementation, the aspect emotion analysis model further includes a second pooling layer, and the second pooling layer is disposed after the second capsule layer and before the second classification layer; the sentence-level emotion analysis model further comprises a first pooling layer, and the first pooling layer is arranged behind the first capsule layer and in front of the first classification layer; the network structure of the second pooling layer is the same as the network structure of the first pooling layer.
Fig. 2 illustrates an architecture diagram of a sentence-level emotion analysis model in a training method of an aspect-level emotion analysis model provided in an embodiment of the present disclosure. As shown in fig. 2, the sentence-level emotion analysis model may include a first input layer, a first capsule layer, a first pooling layer, and a first classification layer.
Fig. 3 illustrates an architectural diagram of an aspect emotion analysis model in a training method of the aspect emotion analysis model provided in an embodiment of the present disclosure. As shown in fig. 3, the aspect emotion analysis model may include a second input layer, a second capsule layer, a second pooling layer, and a second classification layer.
In the implementation manner, the first pooling layer is arranged in the sentence-level emotion analysis model, so that the operation speed of the sentence-level emotion analysis model can be increased. By providing the second pooling layer in the aspect emotion analysis model, the operation speed of the aspect emotion analysis model can be increased.
As an example of this implementation, a first pooling layer may be used to average pooling of feature vectors output by a first capsule layer and a second pooling layer may be used to average pooling of feature vectors output by a second capsule layer.
As another example of this implementation, a first pooling layer may be used to maximize the pooling of feature vectors output by the first capsule layer and a second pooling layer may be used to maximize the pooling of feature vectors output by the second capsule layer.
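For instance, assuming the capsule layer outputs a (batch, num_vectors, dim) tensor, the two pooling variants mentioned above reduce it as follows (a sketch, not the disclosed implementation):

```python
import torch

feats = torch.randn(4, 8, 64)          # (batch, capsule outputs, dim), toy data
avg_pooled = feats.mean(dim=1)         # average pooling -> (batch, dim)
max_pooled = feats.max(dim=1).values   # max pooling     -> (batch, dim)
```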
In another possible implementation, neither the sentence-level emotion analysis model nor the aspect-level emotion analysis model may include a pooling layer.
In an embodiment of the present disclosure, the first training text set may represent a training text set for training the sentence-level emotion analysis model. The first training text set may include a plurality of first training texts; for example, it may include a large number of first training texts. The first training text may represent a training text in the first training text set, and each first training text may include a sentence-level emotion label. The sentence-level emotion label corresponding to any first training text can represent a true value of the sentence-level emotion category of the first training text. For example, a sentence-level emotion category may be positive or negative; as another example, positive, neutral, or negative; as another example, positive, sub-positive, neutral, sub-negative, or negative; etc. For example, a certain first training text is "the weather is good today", and the sentence-level emotion label corresponding to this first training text is "positive".
In one possible implementation, the first training text set may be derived based on open source emotion classification data.
In another possible implementation manner, comment data of an e-commerce platform can be obtained on the premise of obtaining authorization, and the first training text set is obtained based on the comment data. For example, if the score of any comment data is 4 or 5, the comment text in the comment data may be used as a first training text and its sentence-level emotion label may be determined as positive; comment data with a score of 3 may be discarded; and if the score of any comment data is 1 or 2, the comment text may be used as a first training text and its sentence-level emotion label may be determined as negative.
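The score-to-label rule just described can be expressed as a small helper (names are illustrative):

```python
def review_to_sample(comment_text: str, score: int):
    # Map an e-commerce review to a sentence-level training sample;
    # scores 4-5 -> positive, 1-2 -> negative, 3 -> discarded (None).
    if score in (4, 5):
        return comment_text, "positive"
    if score in (1, 2):
        return comment_text, "negative"
    return None
```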
In the embodiment of the disclosure, the sentence-level emotion analysis model may be trained by using the first training text set until the sentence-level emotion analysis model is trained to meet a first preset training condition. The first preset training condition may be parameter convergence of the sentence-level emotion analysis model, a training time period number (epoch) of the sentence-level emotion analysis model reaching a first preset time period number, an iteration number (iteration) of the sentence-level emotion analysis model reaching a first preset iteration number, and so on.
In embodiments of the present disclosure, the second training text set may represent a training text set for training the aspect-level emotion analysis model. The second training text set may include a plurality of second training texts; for example, it may include only a small amount of second training texts. The second training text may represent a training text in the second training text set, and each second training text may include aspect-level emotion labels. An aspect-level emotion label corresponding to any second training text can represent a true value of an aspect-level emotion category of the second training text. For example, an aspect-level emotion category may be positive or negative; as another example, positive, neutral, or negative; as another example, positive, sub-positive, neutral, sub-negative, or negative; etc. For example, a certain second training text is "During the May Day holiday, I took my family to XY Hotel; the hotel environment is good and the traffic is convenient, but the service attitude of the service personnel needs to be improved", and the aspect-level emotion labels corresponding to this second training text include (aspect word: environment, category: positive), (aspect word: traffic, category: positive), (aspect word: service attitude, category: negative).
In an embodiment of the present disclosure, the aspect level emotion tags of the second training text in the second training text set may be obtained by manual labeling.
In one possible implementation, some or all of the second training text in the second training text set may be identical to the first training text in the first training text set.
In the embodiment of the disclosure, the aspect-level emotion analysis model may be trained using the second training text set until the aspect-level emotion analysis model is trained to satisfy a second preset training condition. The second preset training condition may be parameter convergence of the aspect emotion analysis model, a training time period of the aspect emotion analysis model reaching a second preset time period, an iteration number of the aspect emotion analysis model reaching a second preset iteration number, and so on.
In one possible implementation, the same aspect level emotion analysis model may be trained for different aspects. In this implementation, the aspect-level emotion analysis model may be used to output emotion classification results corresponding to different aspects, and the aspect-level emotion analysis model may output emotion classification results corresponding to one aspect at a time. For example, the same aspect level emotion analysis model may be trained for three aspects of environment, traffic, and service attitudes.
In another possible implementation, different aspect-level emotion analysis models may be trained separately for different aspects. For example, for the three aspects of environment, traffic and service attitude, an aspect-level emotion analysis model corresponding to each of environment, traffic and service attitude can be trained respectively.

In one possible implementation, in the case of training the same aspect-level emotion analysis model for different aspects, the trained sentence-level emotion analysis model may be used directly as the initialized aspect-level emotion analysis model, or may be used as the initialized aspect-level emotion analysis model after the parameters of its first classification layer are randomly re-initialized. In this implementation, the initialized aspect-level emotion analysis model can be derived from the trained sentence-level emotion analysis model without constructing an additional aspect-level emotion analysis model.
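Continuing the illustrative names from the sketch after step S13, deriving the initialized aspect-level model from the trained sentence-level model could look like this (re-randomizing the classification layer is the optional second variant described above):

```python
import copy
import torch.nn as nn

# Variant 1: reuse the trained sentence-level model as-is.
aspect_model = copy.deepcopy(sentence_model)

# Variant 2: additionally re-randomize the classification layer's parameters.
nn.init.xavier_uniform_(aspect_model.classifier.weight)
nn.init.zeros_(aspect_model.classifier.bias)
```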
In one possible implementation, the first input layer may input the first training text.
In another possible implementation, the first input layer may input a first word segmentation result corresponding to the first training text. The first word segmentation result may represent a word segmentation result of the first training text. The first word segmentation result may include at least one first token. Any of the first tokens may include at least one character.
In one possible implementation, the second input layer may input the second training text.
In another possible implementation, the second input layer may input a second word segmentation result corresponding to the second training text. Wherein the second word segmentation result may represent a word segmentation result of the second training text. The second word result may include at least one second token. Any of the second tokens may comprise at least one character.
A token may also be referred to as a word element or a word; this is not limited herein.
In one possible implementation manner, training a sentence-level emotion analysis model by using the first training text set includes: extracting a first input vector corresponding to any first training text in the first training text set through the first input layer; extracting features of the first input vector through the first capsule layer to obtain a first feature vector corresponding to the first training text; outputting sentence-level emotion classification prediction results corresponding to the first feature vectors through the first classification layer; training the sentence-level emotion analysis model according to the sentence-level emotion classification prediction result and the sentence-level emotion label corresponding to the first training text; training the aspect emotion analysis model by using a second training text set, wherein the training comprises the following steps: extracting a second input vector corresponding to any second training text in the second training text set through the second input layer; extracting features of the second input vector through the second capsule layer to obtain a second feature vector corresponding to the second training text; outputting a first-aspect emotion classification prediction result corresponding to the second feature vector through the second classification layer; and training the aspect emotion analysis model according to the aspect emotion classification prediction result of the first aspect and the aspect emotion label corresponding to the second training text.
In this implementation, the first input vector may represent an input vector corresponding to the first training text extracted by the first input layer, and the first feature vector may represent a feature vector corresponding to the first training text extracted by the first capsule layer; the second input vector may represent an input vector corresponding to a second training text extracted by the second input layer, the second feature vector may represent a feature vector corresponding to a second training text extracted by the second capsule layer, and the first aspect emotion classification prediction result may represent an aspect emotion classification prediction result corresponding to the second training text.
In this implementation, the loss function corresponding to the sentence-level emotion analysis model and the loss function corresponding to the aspect-level emotion analysis model may be cross entropy loss functions, and the like, which are not limited herein.
By adopting the implementation mode, the sentence-level emotion analysis model can learn the capability of identifying the sentence-level emotion type in the input text, and the aspect-level emotion analysis model can learn the capability of identifying the aspect-level emotion type in the input text.
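A minimal training step consistent with the procedure above, using the cross-entropy loss mentioned as one possible loss function (function and argument names are assumptions):

```python
import torch.nn.functional as F

def training_step(model, token_ids, labels, optimizer):
    # token_ids: (batch, seq) inputs; labels: (batch,) emotion class indices.
    logits = model(token_ids)               # classification prediction result
    loss = F.cross_entropy(logits, labels)  # compare against the emotion label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```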
In one possible implementation, the parameters of the second classification layer are randomly initialized.
In another possible implementation, the parameters of the second classification layer may be initialized based on empirical values.
In another possible implementation, the parameters of the second classification layer may be initialized according to the parameters of the first classification layer in response to training of the sentence-level emotion analysis model to meet a first preset training condition.
In a possible implementation manner, the extracting a first input vector corresponding to the first training text includes: obtaining a first word vector corresponding to the first training text; determining a first input vector corresponding to the first training text according to the first word vector; the extracting the second input vector corresponding to the second training text includes: obtaining a second word vector corresponding to the second training text; and determining a second input vector corresponding to the second training text according to the second word vector.
In this implementation, the first input layer may map the n first tokens in the first training text to n first token vectors, where a token vector represents the word vector corresponding to a token. The n first token vectors corresponding to the n first tokens may be obtained by using word2vec, a word vector lookup table, and the like. For example, the n first token vectors may be denoted as e1, e2, e3, …, en, respectively, and may constitute the first word vector E = {e1, e2, e3, …, en}.
In this implementation, the first input vector may be determined from the first word vector. For example, the first input vector may be represented by X.
As an example of this implementation, the first word vector may be processed to obtain a first input vector. As another example of this implementation, the first word vector may be directly taken as the first input vector.
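As an illustrative sketch only, a word vector lookup table of the kind mentioned above might be applied as follows; the vocabulary, dimension, and randomly filled table are hypothetical stand-ins for a trained word2vec table.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                            # word vector dimension (hypothetical)
vocab = {"today": 0, "weather": 1, "really nice": 2, "oh": 3}
lookup_table = rng.standard_normal((len(vocab), d))

def first_word_vector(tokens):
    # E = {e1, e2, ..., en}: one d-dimensional row per first token.
    return np.stack([lookup_table[vocab[t]] for t in tokens])

E = first_word_vector(["today", "weather", "really nice", "oh"])  # shape (n, d)
X = E  # in this example the word vector is taken directly as the input vector
```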
The second token vector corresponding to the second token in the second training text is obtained in a similar way to the first token vector corresponding to the first token in the first training text; the second word vector is obtained in a similar manner to the first word vector; the second input vector is obtained in a similar manner to the first input vector; and will not be described in detail herein.
In this implementation manner, the first word vector corresponding to the first training text is obtained, and the first input vector corresponding to the first training text is determined according to the first word vector, so that the first input vector output by the first input layer can express the semantics of the first training text while reducing dimensionality, thereby improving the processing speed of the sentence-level emotion analysis model. Likewise, the second word vector corresponding to the second training text is obtained, and the second input vector corresponding to the second training text is determined according to the second word vector, so that the second input vector output by the second input layer can express the semantics of the second training text while reducing dimensionality, thereby improving the processing speed of the aspect-level emotion analysis model.
As an example of this implementation manner, the determining, according to the first word vector, a first input vector corresponding to the first training text includes: obtaining a first position vector corresponding to the first training text, wherein the first position vector is used for representing the position of a preset aspect word in the first training text; determining a first input vector corresponding to the first training text according to the first word vector and the first position vector; the determining, according to the second word vector, a second input vector corresponding to the second training text includes: obtaining a second position vector corresponding to the second training text, wherein the second position vector is used for representing the position of the preset aspect word in the second training text; and determining a second input vector corresponding to the second training text according to the second word vector and the second position vector.
In this example, the first input layer may determine a first position vector corresponding to the first training text according to the position of the preset aspect word in the first training text. For example, the first position vector P = {p1, p2, p3, …, pn}, where pi = 1 if the i-th first token in the first training text is the preset aspect word, and pi = 0 if it is not, with 1 ≤ i ≤ n. In this example, at most one element of the first position vector corresponding to any first training text is 1.
For example, a certain first training text is "The weather is really nice today, oh", and this first training text includes 4 first tokens, respectively: today, weather, really nice, oh. None of the 4 first tokens is a preset aspect word, so the first position vector corresponding to this first training text may be P = {0, 0, 0, 0}.
In this example, only one preset aspect word may be set for the first input layer at a time. In the case where any first training text includes a plurality of aspect words, the first training text may be used to train the sentence-level emotion analysis model multiple times, with only one preset aspect word employed at a time.
In one example, the first input vector may be determined according to X = E + P, where E represents the first word vector and P represents the first position vector. X = {x1, x2, x3, …, xn}, where xi represents the first token input vector corresponding to the i-th first token in the first training text, and xi = ei + pi. For example, if the i-th first token in the first training text is the preset aspect word, xi = ei + 1; if it is not, xi = ei.
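Continuing the sketch above, and purely as an assumed illustration, the position vector and the input vector X = E + P might be formed as follows; broadcasting adds pi to every dimension of ei.

```python
import numpy as np

def position_vector(tokens, aspect_word):
    # pi = 1 where the i-th token is the preset aspect word, otherwise pi = 0.
    return np.array([1.0 if t == aspect_word else 0.0 for t in tokens])

def input_vector(E, tokens, aspect_word):
    # E is the (n, d) word vector; xi = ei + pi via broadcasting.
    P = position_vector(tokens, aspect_word)
    return E + P[:, None]
```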
Of course, in some application scenarios, the first word vector may be multiplied by the first position vector to obtain the first input vector, which is not limited herein.
In this example, the second position vector is obtained in a similar manner to that of the first position vector, and the second input vector is obtained in a similar manner to that of the first input vector, which is not described here again.
For example, a certain second training text is "During the May Day holiday, I checked into the XY Hotel with my family; the hotel environment is very good and the transportation is convenient; the only drawback is that the staff's service attitude still needs to be improved". The second training text includes 17 second tokens, respectively: May Day, holiday, with, family, check in, XY Hotel, hotel, environment, very good, transportation, convenient, the drawback is, service staff, service attitude, still, needs, to improve. If the preset aspect word is "environment", the second position vector corresponding to the second training text is P = {0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0}. If the preset aspect word is "transportation", the second position vector corresponding to the second training text is P = {0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0}. If the preset aspect word is "service attitude", the second position vector corresponding to the second training text is P = {0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0}.
In this example, the first position vector is used to represent the position of the preset aspect word in the first training text, and the first input vector corresponding to the first training text is determined according to the first word vector and the first position vector; likewise, the second position vector is used to represent the position of the preset aspect word in the second training text, and the second input vector corresponding to the second training text is determined according to the second word vector and the second position vector. In this way, the preset aspect word can be highlighted by the first input vector and by the second input vector, which helps the aspect-level emotion analysis model learn to perform aspect-level emotion analysis more accurately.
In one possible implementation, the first training text includes at least one first token, and the elements of the first input vector include at least one first token input vector corresponding to the at least one first token; the feature extraction performed on the first input vector to obtain a first feature vector corresponding to the first training text includes: extracting at least one first token input vector segment from the first input vector, wherein any first token input vector segment includes N consecutive first token input vectors, and N is an integer greater than or equal to 1; extracting at least one first segment feature vector corresponding to the at least one first token input vector segment; and obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector. The second training text includes at least one second token, and the elements of the second input vector include at least one second token input vector corresponding to the at least one second token; the feature extraction performed on the second input vector to obtain a second feature vector corresponding to the second training text includes: extracting at least one second token input vector segment from the second input vector, wherein any second token input vector segment includes N consecutive second token input vectors; extracting at least one second segment feature vector corresponding to the at least one second token input vector segment; and obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector.
In this implementation, the first token input vector segment may represent a segment of N consecutive first token input vectors, and the first segment feature vector may represent a feature vector extracted from the first token input vector segment. The second token input vector segment may represent a segment of N consecutive second token input vectors, and the second segment feature vector may represent a feature vector extracted from the second token input vector segment.
In this implementation manner, at least one first token input vector segment is extracted from the first input vector, where any first token input vector segment includes N consecutive first token input vectors and N is an integer greater than or equal to 1; at least one first segment feature vector corresponding to the at least one first token input vector segment is extracted; and a first feature vector corresponding to the first training text is obtained according to the at least one first segment feature vector. Similarly, at least one second token input vector segment is extracted from the second input vector, at least one second segment feature vector corresponding to the at least one second token input vector segment is extracted, and a second feature vector corresponding to the second training text is obtained according to the at least one second segment feature vector. In this way, the extracted feature vectors help to better reflect the information of the preset aspect word, which helps to improve the accuracy of aspect-level emotion analysis.
As an example of this implementation, the extracting at least one first token input vector segment from the first input vector includes: extracting at least one first token input vector segment from the first input vector through a sliding window with a window size of N and a sliding step length of 1; the extracting at least one second token input vector segment from the second input vector comprises: and extracting at least one second token input vector segment from the second input vector through a sliding window with a window size of N and a sliding step length of 1.
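A minimal sketch of the sliding-window extraction follows, under the assumption that X is an (n, d) array of token input vectors; the function name is hypothetical.

```python
import numpy as np

def token_input_vector_segments(X, N=3, step=1):
    # Slide a window of size N over the n token input vectors; with a
    # sliding step of 1 this yields n - N + 1 segments, each of shape (N, d).
    n = X.shape[0]
    return [X[i:i + N] for i in range(0, n - N + 1, step)]
```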
The following examples illustrate training text, tokens, token input vector fragments, fragment feature vectors. The training text in the following examples may be a first training text or a second training text, and the token may be a first token or a second token, the token input vector segment may be a first token input vector segment or a second token input vector segment, and the segment feature vector may be a first segment feature vector or a second segment feature vector, respectively.
In this example, the training text is "During the May Day holiday, I checked into the XY Hotel with my family; the hotel environment is very good and the transportation is convenient; the only drawback is that the staff's service attitude still needs to be improved". The training text includes 17 tokens, respectively: May Day, holiday, with, family, check in, XY Hotel, hotel, environment, very good, transportation, convenient, the drawback is, service staff, service attitude, still, needs, to improve. The window size is 3 and the sliding step size is 1. Then the token input vector segments corresponding to the following token segments may be extracted: [May Day, holiday, with], [holiday, with, family], [with, family, check in], [family, check in, XY Hotel], [check in, XY Hotel, hotel], [XY Hotel, hotel, environment], [hotel, environment, very good], [environment, very good, transportation], [very good, transportation, convenient], [transportation, convenient, the drawback is], [convenient, the drawback is, service staff], [the drawback is, service staff, service attitude], [service staff, service attitude, still], [service attitude, still, needs], [still, needs, to improve].
For example, the token segment [May Day, holiday, with] corresponds to the segment feature vector r(1), [holiday, with, family] corresponds to r(2), [with, family, check in] corresponds to r(3), [family, check in, XY Hotel] corresponds to r(4), [check in, XY Hotel, hotel] corresponds to r(5), [XY Hotel, hotel, environment] corresponds to r(6), [hotel, environment, very good] corresponds to r(7), [environment, very good, transportation] corresponds to r(8), [very good, transportation, convenient] corresponds to r(9), [transportation, convenient, the drawback is] corresponds to r(10), [convenient, the drawback is, service staff] corresponds to r(11), [the drawback is, service staff, service attitude] corresponds to r(12), [service staff, service attitude, still] corresponds to r(13), [service attitude, still, needs] corresponds to r(14), and [still, needs, to improve] corresponds to r(15).
For example, the capsule layer may calculate the segment feature vector according to r(i) = C(xi, x(i+N-1)) × F1 + b1, where C(xi, x(i+N-1)) represents the concatenation of the i-th through (i+N-1)-th token input vectors in the input vector, F1 may be a convolution kernel matrix, and b1 may be a bias matrix. F1 and b1 may be randomly initialized and updated with the training of the sentence-level emotion analysis model and the aspect-level emotion analysis model.
For example, c token input vector segments are extracted in total, and accordingly the number of segment feature vectors is also c, where 1 ≤ c ≤ n and n represents the number of tokens in the training text. The c segment feature vectors may be combined into a segment feature matrix R = {r1, r2, …, rc} corresponding to the training text. For example, the training text above corresponds to the segment feature matrix R = {r1, r2, …, r15}.
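As an assumed illustration of the formula above, with C(·) realized as concatenation of the N token input vectors in a segment, and F1 and b1 as randomly initialized trainable parameters, the segment feature matrix might be computed as follows; the shapes of F1 (N·d, d_out) and b1 (d_out,) are hypothetical.

```python
import numpy as np

def segment_feature_vectors(X, F1, b1, N=3):
    # r(i) = C(xi, x(i+N-1)) x F1 + b1, where C concatenates the N
    # consecutive token input vectors into one row vector of length N*d.
    n, d = X.shape
    R = [X[i:i + N].reshape(-1) @ F1 + b1 for i in range(n - N + 1)]
    return np.stack(R)   # segment feature matrix R = {r1, ..., rc}
```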
In this example, at least one first token input vector segment is extracted from the first input vector through a sliding window with a window size of N and a sliding step size of 1, and at least one second token input vector segment is extracted from the second input vector in the same way, so that the information in the token input vectors is sufficiently captured, which helps to improve the accuracy of aspect-level emotion analysis.
In other examples, the step size of the sliding window may be greater than 1 and less than or equal to N, without limitation.
In a possible implementation manner, the obtaining, according to the at least one first segment feature vector, a first feature vector corresponding to the first training text includes: determining at least one first weight corresponding to the at least one first segment feature vector; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector and the at least one first weight; wherein, the first weight corresponding to the first token segment containing the preset aspect word is different from the first weight corresponding to the first token segment not containing the preset aspect word; the obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector includes: determining at least one second weight corresponding to the at least one second segment feature vector; obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector and the at least one second weight; wherein, the second weight corresponding to the second token segment containing the preset aspect word is different from the second weight corresponding to the second token segment not containing the preset aspect word.
In this implementation, the first weight corresponding to any first segment feature vector may represent the weight corresponding to the first segment feature vector. In this implementation, the first weights corresponding to the respective first segment feature vectors may be determined separately.
In this implementation manner, after determining the at least one first segment feature vector and the at least one first weight, the at least one first segment feature vector may be weighted according to the at least one first weight, so as to obtain a first feature vector corresponding to the first training text.
In this implementation, the second weight corresponding to any second segment feature vector may represent the weight corresponding to the second segment feature vector. In this implementation, the second weights corresponding to the respective second segment feature vectors may be determined separately.
In this implementation manner, after determining the at least one second segment feature vector and the at least one second weight, the at least one second segment feature vector may be weighted according to the at least one second weight, to obtain a second feature vector corresponding to the second training text.
In one example, the weight may be denoted g(i). The weights corresponding to the c segment feature vectors may be noted as G = {g1, g2, …, gc}. For example, the weights corresponding to the training text above may be noted as G = {g1, g2, …, g15}.
In one example, the feature vector may be calculated according to P = RG. For example, the feature vector corresponding to the training text above may be P = RG = {p1, p2, …, p15}.
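A sketch of the weighting step P = RG, under the assumption that G holds one scalar weight per segment feature vector, might look as follows.

```python
import numpy as np

def weighted_feature_vector(R, G):
    # P = RG: each segment feature vector r(i) is scaled by its weight g(i).
    return R * G[:, None]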
In this implementation manner, the first weight corresponding to a first token segment containing the preset aspect word is set to differ from the first weight corresponding to a first token segment not containing the preset aspect word, and the second weight corresponding to a second token segment containing the preset aspect word is set to differ from the second weight corresponding to a second token segment not containing the preset aspect word. In this way, token segments containing the preset aspect word can be distinguished from token segments not containing it by the magnitude of the feature vector, which helps to improve the accuracy of aspect-level emotion analysis performed by the aspect-level emotion analysis model.
As an example of this implementation, the determining at least one first weight corresponding to the at least one first segment feature vector includes: for any first segment feature vector, responding to the fact that a first token segment corresponding to the first segment feature vector comprises the preset aspect word, and determining a first weight corresponding to the first segment feature vector based on a preset activation function; determining a first weight corresponding to the first segment feature vector as a preset value in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word; the determining at least one second weight corresponding to the at least one second segment feature vector includes: for any second segment feature vector, responding to the second token segment corresponding to the second segment feature vector to comprise the preset aspect word, and determining a second weight corresponding to the second segment feature vector based on a preset activation function; determining a second weight corresponding to the second segment feature vector as the preset value in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word; wherein the preset value is outside the value range of the preset activation function.
In one example, the preset activation function may be a(i) = sigmoid[C(xi, x(i+N-1)) × F2 + T × e + b2], where F2 may be a convolution kernel matrix, T may be an adaptation matrix, e may represent the feature vector of the preset aspect word, and b2 may be a bias matrix. F2, T, and b2 may be randomly initialized and updated with the training of the sentence-level emotion analysis model and the aspect-level emotion analysis model.
Of course, the preset activation function can be flexibly set according to the actual application scene requirement, which is not limited herein.
In one example, the preset value may be 1 or 0.01, etc., which is not limited herein.
For example, for any segment feature vector, in response to the token segment corresponding to the segment feature vector including the preset aspect word, the weight corresponding to the segment feature vector is determined as g(i) = a(i); in response to the token segment corresponding to the segment feature vector not including the preset aspect word, the weight corresponding to the segment feature vector is determined as g(i) = 1.
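The following sketch illustrates one way the weights in this example might be computed; the parameter shapes (F2 of shape (N·d,), T of shape (d,), scalar b2) and the helper flag `contains_aspect` are hypothetical assumptions chosen so that a(i) is a scalar.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def segment_weights(X, contains_aspect, F2, T, e_a, b2, N=3, preset_value=1.0):
    # g(i) = a(i) = sigmoid(C(xi, x(i+N-1)) @ F2 + T @ e_a + b2) when the
    # i-th token segment contains the preset aspect word; otherwise g(i)
    # is a preset value outside the (0, 1) range of the sigmoid.
    n, d = X.shape
    G = []
    for i in range(n - N + 1):
        if contains_aspect[i]:
            c = X[i:i + N].reshape(-1)   # concatenated segment, length N*d
            G.append(sigmoid(c @ F2 + T @ e_a + b2))
        else:
            G.append(preset_value)
    return np.array(G)
```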
In this example, for any first segment feature vector, in response to the first token segment corresponding to the first segment feature vector including the preset aspect word, the first weight corresponding to the first segment feature vector is determined based on the preset activation function; in response to the first token segment not including the preset aspect word, the first weight is determined as a preset value. Likewise, for any second segment feature vector, in response to the second token segment corresponding to the second segment feature vector including the preset aspect word, the second weight corresponding to the second segment feature vector is determined based on the preset activation function; in response to the second token segment not including the preset aspect word, the second weight is determined as the preset value. Because the preset value lies outside the value range of the preset activation function, the accuracy of aspect-level emotion analysis performed by the aspect-level emotion analysis model can be further improved.
As another example of this implementation, the determining at least one first weight corresponding to the at least one first segment feature vector includes: for any first segment feature vector, determining a first weight corresponding to the first segment feature vector as a first preset value in response to the first token segment corresponding to the first segment feature vector comprising the preset aspect word; determining a first weight corresponding to the first segment feature vector as a second preset value in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word; the determining at least one second weight corresponding to the at least one second segment feature vector includes: for any second segment feature vector, determining a second weight corresponding to the second segment feature vector as a first preset value in response to the second token segment corresponding to the second segment feature vector comprising the preset aspect word; determining a second weight corresponding to the second segment feature vector as a second preset value in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word; wherein the first preset value is not equal to the second preset value.
The training method of the aspect emotion analysis model provided by the embodiment of the disclosure is described below through a specific application scenario.
In the application scenario, the sentence-level emotion analysis model may include a first input layer, a first capsule layer, a first pooling layer, and a first classification layer; the aspect-level emotion analysis model may include a second input layer, a second capsule layer, a second pooling layer, and a second classification layer.
The sentence-level emotion analysis model may be trained using the first training text set until a first preset training condition is satisfied.
During training of the sentence-level emotion analysis model, a first training text may be input to the first input layer. The first input layer may obtain the first word vector E and the first position vector P corresponding to the first training text, and may determine the first input vector X according to X = E + P.
The first capsule layer may extract first token input vector segments from the first input vector X through a sliding window with a window size of N and a sliding step size of 1, and may calculate the first segment feature vector corresponding to each first token input vector segment according to r(i) = C(xi, x(i+N-1)) × F1 + b1. The first capsule layer may determine a first segment feature matrix R = {r1, r2, …, rc} from the first segment feature vectors, may determine the weights G = {g1, g2, …, gc} corresponding to the first training text according to whether each first token segment includes the preset aspect word, and may calculate the first feature vector corresponding to the first training text according to P = RG.
The first pooling layer may perform an average pooling operation on the first feature vector to obtain a first pooling result.
The first classification layer may obtain a sentence-level emotion classification prediction result corresponding to the first training text based on the first pooling result.
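A minimal sketch of these pooling and classification steps follows, with W and b as hypothetical classification layer parameters; the softmax at the end is one assumed way to turn the classification layer's output into class probabilities.

```python
import numpy as np

def classify(P, W, b):
    pooled = P.mean(axis=0)          # average pooling over segment features
    logits = pooled @ W + b          # classification layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()           # emotion class probabilities (softmax)
```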
The sentence-level emotion analysis model may then be trained according to the sentence-level emotion classification prediction result corresponding to the first training text and the sentence-level emotion label corresponding to the first training text.
In response to the training of the sentence-level emotion analysis model satisfying the first preset training condition, the parameters of the second input layer may be initialized according to the parameters of the first input layer, the parameters of the second capsule layer may be initialized according to the parameters of the first capsule layer, and the parameters of the second classification layer may be randomly initialized.
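Assuming PyTorch-style modules with hypothetical attribute names (input_layer, capsule_layer, classification_layer), the initialization described above might look like this sketch.

```python
import torch

# Copy the trained sentence-level parameters into the aspect-level model.
aspect_model.input_layer.load_state_dict(sentence_model.input_layer.state_dict())
aspect_model.capsule_layer.load_state_dict(sentence_model.capsule_layer.state_dict())

# The second classification layer is randomly (re)initialized.
for p in aspect_model.classification_layer.parameters():
    torch.nn.init.normal_(p, mean=0.0, std=0.02)
```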
The aspect-level emotion analysis model may then be trained using the second training text set until a second preset training condition is met.
During training of the aspect-level emotion analysis model, a second training text may be input to the second input layer. The second input layer may obtain the second word vector E and the second position vector P corresponding to the second training text, and may determine the second input vector X according to X = E + P.
The second capsule layer may extract second token input vector segments from the second input vector X through a sliding window with a window size of N and a sliding step size of 1, and may calculate the second segment feature vector corresponding to each second token input vector segment according to r(i) = C(xi, x(i+N-1)) × F1 + b1. The second capsule layer may determine a second segment feature matrix R = {r1, r2, …, rc} from the second segment feature vectors, may determine the weights G = {g1, g2, …, gc} corresponding to the second training text according to whether each second token segment includes the preset aspect word, and may calculate the second feature vector corresponding to the second training text according to P = RG.
The second pooling layer may perform an average pooling operation on the second feature vector to obtain a second pooling result.
The second classification layer may obtain a first-aspect emotion classification prediction result corresponding to the second training text based on the second pooling result.
The aspect-level emotion analysis model may then be trained according to the first-aspect emotion classification prediction result corresponding to the second training text and the aspect-level emotion label corresponding to the second training text.
The embodiment of the disclosure also provides an aspect-level emotion analysis method, which includes: acquiring an aspect-level emotion analysis model trained according to the above training method of the aspect-level emotion analysis model; and inputting a text to be processed into the aspect-level emotion analysis model, and outputting, through the aspect-level emotion analysis model, a second-aspect emotion classification prediction result corresponding to the text to be processed.
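Continuing the same hypothetical PyTorch-style sketch, inference on a text to be processed might be performed as follows.

```python
import torch

aspect_model.eval()
with torch.no_grad():
    # Second-aspect emotion classification prediction for the text to be processed.
    prediction = aspect_model(text_to_process)
```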
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic, which, due to space limitations, are not described in detail in this disclosure. It will be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the disclosure further provides a training device for the aspect-level emotion analysis model, an aspect-level emotion analysis device, an electronic apparatus, a computer readable storage medium, and a computer program product, all of which can be used to implement any training method of the aspect-level emotion analysis model or aspect-level emotion analysis method provided by the disclosure; the corresponding technical solutions and technical effects may refer to the corresponding descriptions in the method section and are not repeated here.
FIG. 4 illustrates a block diagram of a training apparatus for an aspect-level emotion analysis model provided by an embodiment of the present disclosure. As shown in fig. 4, the training device of the aspect emotion analysis model includes:
a first training module 41 for training a sentence-level emotion analysis model using the first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer;
An initialization module 42, configured to initialize parameters of the second input layer according to parameters of the first input layer and initialize parameters of the second capsule layer according to parameters of the first capsule layer in response to training of the sentence-level emotion analysis model to meet a first preset training condition;
a second training module 43 for training the aspect emotion analysis model using a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
In one possible implementation of the present disclosure,
the aspect emotion analysis model further comprises a second pooling layer, and the second pooling layer is arranged behind the second capsule layer and before the second classification layer;
the sentence-level emotion analysis model further comprises a first pooling layer, and the first pooling layer is arranged behind the first capsule layer and in front of the first classification layer;
the network structure of the second pooling layer is the same as the network structure of the first pooling layer.
In one possible implementation of the present disclosure,
the first training module 41 is configured to: extracting a first input vector corresponding to any first training text in the first training text set through the first input layer; extracting features of the first input vector through the first capsule layer to obtain a first feature vector corresponding to the first training text; outputting sentence-level emotion classification prediction results corresponding to the first feature vectors through the first classification layer; training the sentence-level emotion analysis model according to the sentence-level emotion classification prediction result and the sentence-level emotion label corresponding to the first training text;
The second training module 43 is configured to: extracting a second input vector corresponding to any second training text in the second training text set through the second input layer; extracting features of the second input vector through the second capsule layer to obtain a second feature vector corresponding to the second training text; outputting a first-aspect emotion classification prediction result corresponding to the second feature vector through the second classification layer; and training the aspect emotion analysis model according to the aspect emotion classification prediction result of the first aspect and the aspect emotion label corresponding to the second training text.
In one possible implementation of the present disclosure,
the first training module 41 is configured to: obtaining a first word vector corresponding to the first training text; determining a first input vector corresponding to the first training text according to the first word vector;
the second training module 43 is configured to: obtaining a second word vector corresponding to the second training text; and determining a second input vector corresponding to the second training text according to the second word vector.
In one possible implementation of the present disclosure,
The first training module 41 is configured to: obtaining a first position vector corresponding to the first training text, wherein the first position vector is used for representing the position of a preset aspect word in the first training text; determining a first input vector corresponding to the first training text according to the first word vector and the first position vector;
the second training module 43 is configured to: obtaining a second position vector corresponding to the second training text, wherein the second position vector is used for representing the position of the preset aspect word in the second training text; and determining a second input vector corresponding to the second training text according to the second word vector and the second position vector.
In one possible implementation of the present disclosure,
the first training text comprises at least one first token, and the elements of the first input vector comprise at least one first token input vector corresponding to the at least one first token; the first training module 41 is configured to: extracting at least one first token input vector segment from the first input vector, wherein any first token input vector segment comprises N consecutive first token input vectors, and N is an integer greater than or equal to 1; extracting at least one first segment feature vector corresponding to the at least one first token input vector segment; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector;
The second training text comprises at least one second token, and the elements of the second input vector comprise at least one second token input vector corresponding to the at least one second token; the second training module 43 is configured to: extracting at least one second token input vector segment from the second input vectors, wherein any second token input vector segment comprises N continuous second token input vectors, and N is an integer greater than or equal to 1; extracting at least one second segment feature vector corresponding to the at least one second token input vector segment; and obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector.
In one possible implementation of the present disclosure,
the first training module 41 is configured to: extracting at least one first token input vector segment from the first input vector through a sliding window with a window size of N and a sliding step length of 1;
the second training module 43 is configured to: and extracting at least one second token input vector segment from the second input vector through a sliding window with a window size of N and a sliding step length of 1.
In one possible implementation of the present disclosure,
The first training module 41 is configured to: determining at least one first weight corresponding to the at least one first segment feature vector; obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector and the at least one first weight; wherein, the first weight corresponding to the first token segment containing the preset aspect word is different from the first weight corresponding to the first token segment not containing the preset aspect word;
the second training module 43 is configured to: determining at least one second weight corresponding to the at least one second segment feature vector; obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector and the at least one second weight; wherein, the second weight corresponding to the second token segment containing the preset aspect word is different from the second weight corresponding to the second token segment not containing the preset aspect word.
In one possible implementation of the present disclosure,
the first training module 41 is configured to: for any first segment feature vector, responding to the fact that a first token segment corresponding to the first segment feature vector comprises the preset aspect word, and determining a first weight corresponding to the first segment feature vector based on a preset activation function; determining a first weight corresponding to the first segment feature vector as a preset value in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word;
The second training module 43 is configured to: for any second segment feature vector, responding to the second token segment corresponding to the second segment feature vector to comprise the preset aspect word, and determining a second weight corresponding to the second segment feature vector based on a preset activation function; determining a second weight corresponding to the second segment feature vector as the preset value in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word;
wherein the preset value is outside the value range of the preset activation function.
The embodiment of the disclosure also provides an aspect-level emotion analysis device, which comprises:
the obtaining module is used for obtaining the aspect emotion analysis model obtained by training of the training device of the aspect emotion analysis model;
the aspect emotion analysis module is used for inputting the text to be processed into the aspect emotion analysis model, and outputting a second aspect emotion classification prediction result corresponding to the text to be processed through the aspect emotion analysis model.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation and technical effects of the functions or modules may refer to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. Wherein the computer readable storage medium may be a non-volatile computer readable storage medium or may be a volatile computer readable storage medium.
The disclosed embodiments also propose a computer program comprising computer readable code which, when run in an electronic device, causes a processor in the electronic device to carry out the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the above method.
The embodiment of the disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 5 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server or a terminal. Referring to FIG. 5, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate an operating system stored in memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system developed by Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of computer readable program instructions, and the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the various embodiments tends to emphasize the differences between them; for their identical or similar parts, the embodiments may refer to one another, and these parts are not repeated herein for brevity.
If the technical solution of the embodiments of the disclosure involves personal information, a product applying the technical solution shall clearly inform users of the personal information processing rules and obtain their separate consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it shall obtain separate consent before processing the sensitive personal information and shall also satisfy the requirement of "explicit consent". For example, a clear and conspicuous sign may be set at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if a person voluntarily enters the collection range, it is deemed that the person consents to the collection of his or her personal information. Alternatively, on a device that processes personal information, personal authorization may be obtained by pop-up information or by requesting the person to upload personal information, under the condition that conspicuous signs/information are used to inform the personal information processing rules. The personal information processing rules may include information such as the personal information processor, the purpose of personal information processing, the processing method, and the types of personal information to be processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (14)
1. A method of training an aspect-level emotion analysis model, comprising:
training a sentence-level emotion analysis model by adopting a first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer;
Responding to the sentence-level emotion analysis model training until a first preset training condition is met, initializing parameters of the second input layer according to the parameters of the first input layer, and initializing parameters of the second capsule layer according to the parameters of the first capsule layer;
training the aspect-level emotion analysis model by adopting a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
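By way of non-limiting illustration, the two-stage transfer of claim 1 can be sketched in PyTorch-style Python. The `Embedding`/`Linear` stand-ins for the input and capsule layers, the dimensions, and the three emotion classes are assumptions made only so the sketch is runnable; they are not the claimed architecture. Only the parameter hand-off from the first layers to the second mirrors the claim.

```python
import torch
import torch.nn as nn

class EmotionModel(nn.Module):
    """Shared skeleton: input layer -> capsule layer -> classification layer.
    The Embedding/Linear internals are illustrative stand-ins, not the
    patented layer designs."""
    def __init__(self, vocab_size=30000, dim=128, num_classes=3):
        super().__init__()
        self.input_layer = nn.Embedding(vocab_size, dim)   # first / second input layer
        self.capsule_layer = nn.Linear(dim, dim)           # first / second capsule layer
        self.classifier = nn.Linear(dim, num_classes)      # first / second classification layer

    def forward(self, token_ids):
        x = self.input_layer(token_ids)           # (batch, seq, dim)
        x = torch.relu(self.capsule_layer(x))     # feature extraction
        return self.classifier(x.mean(dim=1))     # pooled logits

sentence_model = EmotionModel()
# ... train sentence_model on the first training text set until the first
# preset training condition (e.g. loss convergence) is met ...

aspect_model = EmotionModel()
# Transfer: initialize the second input/capsule layers from the first ones.
# The second classification layer keeps its fresh initialization.
aspect_model.input_layer.load_state_dict(sentence_model.input_layer.state_dict())
aspect_model.capsule_layer.load_state_dict(sentence_model.capsule_layer.state_dict())
# ... then train aspect_model on the second training text set ...
```

This hand-off is valid precisely because claim 1 requires the first and second input layers (and capsule layers) to share the same network structure, so their parameter tensors have matching shapes.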
2. The method of claim 1, wherein:
the aspect-level emotion analysis model further comprises a second pooling layer, the second pooling layer being arranged after the second capsule layer and before the second classification layer;
the sentence-level emotion analysis model further comprises a first pooling layer, the first pooling layer being arranged after the first capsule layer and before the first classification layer; and
the network structure of the second pooling layer is the same as the network structure of the first pooling layer.
3. The method of claim 1 or 2, wherein:
the training a sentence-level emotion analysis model by adopting a first training text set comprises: extracting, through the first input layer, a first input vector corresponding to any first training text in the first training text set; performing feature extraction on the first input vector through the first capsule layer to obtain a first feature vector corresponding to the first training text; outputting, through the first classification layer, a sentence-level emotion classification prediction result corresponding to the first feature vector; and training the sentence-level emotion analysis model according to the sentence-level emotion classification prediction result and the sentence-level emotion tag corresponding to the first training text; and
the training the aspect-level emotion analysis model by adopting a second training text set comprises: extracting, through the second input layer, a second input vector corresponding to any second training text in the second training text set; performing feature extraction on the second input vector through the second capsule layer to obtain a second feature vector corresponding to the second training text; outputting, through the second classification layer, a first aspect-level emotion classification prediction result corresponding to the second feature vector; and training the aspect-level emotion analysis model according to the first aspect-level emotion classification prediction result and the aspect-level emotion tag corresponding to the second training text.
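A hedged sketch of one training step shared by both phases of claim 3, reusing the `EmotionModel` skeleton above. The cross-entropy criterion and the optimizer-driven update are assumptions for illustration; the claim only requires training against the corresponding emotion tag.

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, token_ids, labels):
    """One step: input layer -> capsule layer -> classification layer -> update.

    token_ids: (batch, seq) token indices for a batch of (first or second)
               training texts.
    labels:    (batch,) sentence-level or aspect-level emotion tags as class ids.
    """
    logits = model(token_ids)                           # emotion classification prediction result
    loss = nn.functional.cross_entropy(logits, labels)  # assumed training criterion
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. model = EmotionModel(); optimizer = torch.optim.Adam(model.parameters())
```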
4. The method of claim 3, wherein:
the extracting the first input vector corresponding to the first training text includes: obtaining a first word vector corresponding to the first training text; determining a first input vector corresponding to the first training text according to the first word vector;
the extracting the second input vector corresponding to the second training text includes: obtaining a second word vector corresponding to the second training text; and determining a second input vector corresponding to the second training text according to the second word vector.
5. The method of claim 4, wherein:
the determining, according to the first word vector, a first input vector corresponding to the first training text includes: obtaining a first position vector corresponding to the first training text, wherein the first position vector is used for representing the position of a preset aspect word in the first training text; determining a first input vector corresponding to the first training text according to the first word vector and the first position vector;
the determining, according to the second word vector, a second input vector corresponding to the second training text includes: obtaining a second position vector corresponding to the second training text, wherein the second position vector is used for representing the position of the preset aspect word in the second training text; and determining a second input vector corresponding to the second training text according to the second word vector and the second position vector.
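One assumed realization of claims 4–5: concatenate each word vector with a 0/1 position indicator that marks the tokens occupied by the preset aspect word. The binary encoding and the concatenation are illustrative choices; the claims only require that the input vector be determined from the word vector and the position vector.

```python
import torch

def build_input_vectors(word_vecs, aspect_positions):
    """word_vecs:        (seq, dim) word vectors for one training text.
    aspect_positions: token indices occupied by the preset aspect word.
    Returns (seq, dim + 1): each word vector concatenated with a 0/1 flag.
    """
    pos = torch.zeros(word_vecs.size(0), 1)
    pos[list(aspect_positions)] = 1.0   # position vector: 1 at the aspect word
    return torch.cat([word_vecs, pos], dim=-1)

# e.g. a 6-token text whose aspect word spans tokens 2-3:
x = build_input_vectors(torch.randn(6, 128), aspect_positions=[2, 3])
print(x.shape)  # torch.Size([6, 129])
```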
6. The method of claim 3, wherein:
the first training text comprises at least one first token, and elements of the first input vector comprise at least one first token input vector corresponding to the at least one first token; the performing feature extraction on the first input vector to obtain a first feature vector corresponding to the first training text comprises: extracting at least one first token input vector segment from the first input vector, wherein any first token input vector segment comprises N consecutive first token input vectors, N being an integer greater than or equal to 1; extracting at least one first segment feature vector corresponding to the at least one first token input vector segment; and obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector; and
the second training text comprises at least one second token, and elements of the second input vector comprise at least one second token input vector corresponding to the at least one second token; the performing feature extraction on the second input vector to obtain a second feature vector corresponding to the second training text comprises: extracting at least one second token input vector segment from the second input vector, wherein any second token input vector segment comprises N consecutive second token input vectors, N being an integer greater than or equal to 1; extracting at least one second segment feature vector corresponding to the at least one second token input vector segment; and obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector.
7. The method of claim 6, wherein:
the extracting at least one first token input vector segment from the first input vector comprises: extracting the at least one first token input vector segment from the first input vector through a sliding window with a window size of N and a stride of 1; and
the extracting at least one second token input vector segment from the second input vector comprises: extracting the at least one second token input vector segment from the second input vector through a sliding window with a window size of N and a stride of 1.
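The sliding-window extraction of claim 7 maps directly onto `Tensor.unfold`; the sketch below assumes the token input vectors are stacked row-wise into one matrix.

```python
import torch

def extract_segments(token_vectors, n):
    """Slide a window of size n with stride 1 over the token input vectors.

    token_vectors: (seq, dim). Returns (seq - n + 1, n, dim): one row per
    token input vector segment of n consecutive token input vectors.
    """
    # unfold yields (seq - n + 1, dim, n); move the window axis forward
    return token_vectors.unfold(0, n, 1).transpose(1, 2)

segments = extract_segments(torch.randn(10, 128), n=3)
print(segments.shape)  # torch.Size([8, 3, 128])
```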
8. The method of claim 6, wherein:
the obtaining a first feature vector corresponding to the first training text according to the at least one first segment feature vector comprises: determining at least one first weight corresponding to the at least one first segment feature vector; and obtaining the first feature vector corresponding to the first training text according to the at least one first segment feature vector and the at least one first weight; wherein the first weight corresponding to a first token segment containing the preset aspect word differs from the first weight corresponding to a first token segment not containing the preset aspect word; and
the obtaining a second feature vector corresponding to the second training text according to the at least one second segment feature vector comprises: determining at least one second weight corresponding to the at least one second segment feature vector; and obtaining the second feature vector corresponding to the second training text according to the at least one second segment feature vector and the at least one second weight; wherein the second weight corresponding to a second token segment containing the preset aspect word differs from the second weight corresponding to a second token segment not containing the preset aspect word.
9. The method of claim 8, wherein:
the determining at least one first weight corresponding to the at least one first segment feature vector comprises: for any first segment feature vector, in response to the first token segment corresponding to the first segment feature vector including the preset aspect word, determining the first weight corresponding to the first segment feature vector based on a preset activation function; and in response to the first token segment corresponding to the first segment feature vector not including the preset aspect word, determining the first weight corresponding to the first segment feature vector as a preset value;
the determining at least one second weight corresponding to the at least one second segment feature vector comprises: for any second segment feature vector, in response to the second token segment corresponding to the second segment feature vector including the preset aspect word, determining the second weight corresponding to the second segment feature vector based on the preset activation function; and in response to the second token segment corresponding to the second segment feature vector not including the preset aspect word, determining the second weight corresponding to the second segment feature vector as the preset value;
wherein the preset value is outside the value range of the preset activation function.
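A sketch of the claim-9 weighting rule. The sigmoid activation, the per-segment score, and the preset value of -1 are all assumptions chosen only to keep the example self-consistent: sigmoid's range is the open interval (0, 1), so -1 necessarily lies outside it, as the claim requires for non-aspect segments.

```python
import torch

def segment_weights(seg_feats, contains_aspect, preset_value=-1.0):
    """seg_feats:       (num_segments, dim) segment feature vectors.
    contains_aspect: (num_segments,) bool, True where the token segment
                     includes the preset aspect word.
    """
    scores = seg_feats.mean(dim=-1)   # assumed scoring of each segment
    weights = torch.sigmoid(scores)   # activation-based weight in (0, 1)
    # Non-aspect segments get the preset value, outside sigmoid's range:
    return torch.where(contains_aspect, weights,
                       torch.full_like(weights, preset_value))

feats = torch.randn(8, 128)
mask = torch.tensor([False, True, True] + [False] * 5)
w = segment_weights(feats, mask)
pooled = (w.unsqueeze(-1) * feats).sum(dim=0)  # one assumed way to combine
```

Keeping the preset value outside the activation's range guarantees that downstream layers can always distinguish aspect-bearing segments from the rest, whatever weight the activation produces.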
10. An aspect-level emotion analysis method, comprising:
obtaining an aspect-level emotion analysis model trained by the method of training an aspect-level emotion analysis model according to any one of claims 1 to 9; and
inputting a text to be processed into the aspect-level emotion analysis model, and outputting, through the aspect-level emotion analysis model, a second aspect-level emotion classification prediction result corresponding to the text to be processed.
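At inference time (claim 10) the trained model is applied directly; in the hedged sketch below, `tokenize` is a hypothetical stand-in for whatever tokenizer pairs with the deployed model.

```python
import torch

def predict_aspect_emotion(model, tokenize, text):
    """Return the second aspect-level emotion classification prediction
    (as a class id) for one text to be processed."""
    model.eval()
    with torch.no_grad():
        token_ids = tokenize(text).unsqueeze(0)  # (1, seq) batch of one
        logits = model(token_ids)
        return logits.argmax(dim=-1).item()

# e.g. label = predict_aspect_emotion(aspect_model, tokenize, "The food was great")
```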
11. A training device for an aspect-level emotion analysis model, comprising:
the first training module is used for training a sentence-level emotion analysis model by adopting a first training text set; wherein the first training text set comprises a plurality of first training texts, and the first training texts comprise sentence-level emotion tags; the sentence-level emotion analysis model comprises a first input layer, a first capsule layer and a first classification layer, the aspect-level emotion analysis model comprises a second input layer, a second capsule layer and a second classification layer, the network structure of the first input layer is identical to that of the second input layer, and the network structure of the first capsule layer is identical to that of the second capsule layer;
the initialization module is used for, in response to the sentence-level emotion analysis model being trained until a first preset training condition is met, initializing parameters of the second input layer according to the parameters of the first input layer and initializing parameters of the second capsule layer according to the parameters of the first capsule layer; and
the second training module is used for training the aspect-level emotion analysis model by adopting a second training text set; wherein the second training text set comprises a plurality of second training texts, and the second training texts comprise aspect-level emotion tags.
12. An aspect-level emotion analysis device, comprising:
an obtaining module, configured to obtain the aspect-level emotion analysis model trained by the training device for an aspect-level emotion analysis model according to claim 11; and
an aspect-level emotion analysis module, used for inputting a text to be processed into the aspect-level emotion analysis model and outputting, through the aspect-level emotion analysis model, a second aspect-level emotion classification prediction result corresponding to the text to be processed.
13. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method of any one of claims 1 to 10.
14. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410023911.XA CN117540725B (en) | 2024-01-05 | 2024-01-05 | Aspect-level emotion analysis method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117540725A | 2024-02-09
CN117540725B | 2024-03-22
Family
ID=89790366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410023911.XA | Aspect-level emotion analysis method and device, electronic equipment and storage medium | 2024-01-05 | 2024-01-05
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117540725B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200167419A1 (en) * | 2018-11-27 | 2020-05-28 | Sap Se | Exploiting document knowledge for aspect-level sentiment classification |
CN113204645A (en) * | 2021-04-01 | 2021-08-03 | 武汉大学 | Knowledge-guided aspect-level emotion analysis model training method |
CN114048288A (en) * | 2021-11-10 | 2022-02-15 | 北京明略软件系统有限公司 | Fine-grained emotion analysis method and system, computer equipment and storage medium |
CN114692604A (en) * | 2022-04-16 | 2022-07-01 | 东南大学 | Deep learning-based aspect-level emotion classification method |
CN115357711A (en) * | 2022-07-06 | 2022-11-18 | 华南师范大学 | Aspect level emotion analysis method and device, electronic equipment and storage medium |
CN115169345A (en) * | 2022-07-22 | 2022-10-11 | 深圳零时科技有限公司 | Training method, device and equipment for text emotion analysis model and storage medium |
Non-Patent Citations (1)
Title |
---|
YU CHUANMING: "Cross-domain text sentiment analysis based on deep recurrent neural networks", Library and Information Service, no. 11, 5 June 2018 (2018-06-05) *
Also Published As
Publication number | Publication date |
---|---|
CN117540725B (en) | 2024-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113627447B (en) | Label identification method, label identification device, computer equipment, storage medium and program product | |
CN112015859A (en) | Text knowledge hierarchy extraction method and device, computer equipment and readable medium | |
CN112164391A (en) | Statement processing method and device, electronic equipment and storage medium | |
CN113254785B (en) | Recommendation model training method, recommendation method and related equipment | |
CN111709240A (en) | Entity relationship extraction method, device, equipment and storage medium thereof | |
CN113704460B (en) | Text classification method and device, electronic equipment and storage medium | |
CN110795944A (en) | Recommended content processing method and device, and emotion attribute determining method and device | |
CN111046275A (en) | User label determining method and device based on artificial intelligence and storage medium | |
CN111241285A (en) | Method, device, equipment and storage medium for identifying question answer types | |
CN113868519B (en) | Information searching method, device, electronic equipment and storage medium | |
CN113392253A (en) | Visual question-answering model training and visual question-answering method, device, equipment and medium | |
CN112528658A (en) | Hierarchical classification method and device, electronic equipment and storage medium | |
CN113836268A (en) | Document understanding method and device, electronic equipment and medium | |
CN110222333A (en) | A kind of voice interactive method, device and relevant device | |
CN116245097A (en) | Method for training entity recognition model, entity recognition method and corresponding device | |
CN111639234B (en) | Method and device for mining core entity attention points | |
CN114201516A (en) | User portrait construction method, information recommendation method and related device | |
CN111444335B (en) | Method and device for extracting central word | |
CN112507705B (en) | Position code generation method and device and electronic equipment | |
CN112528146B (en) | Content resource recommendation method and device, electronic equipment and storage medium | |
CN112131884B (en) | Method and device for entity classification, method and device for entity presentation | |
CN117540725B (en) | Aspect-level emotion analysis method and device, electronic equipment and storage medium | |
CN115510860A (en) | Text sentiment analysis method and device, electronic equipment and storage medium | |
CN113128225B (en) | Named entity identification method and device, electronic equipment and computer storage medium | |
CN114398482A (en) | Dictionary construction method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||