CN115098675A - Emotion triple generation method based on multi-class table filling - Google Patents

Emotion triple generation method based on multi-class table filling Download PDF

Info

Publication number
CN115098675A
CN115098675A CN202210700536.9A CN202210700536A CN115098675A CN 115098675 A CN115098675 A CN 115098675A CN 202210700536 A CN202210700536 A CN 202210700536A CN 115098675 A CN115098675 A CN 115098675A
Authority
CN
China
Prior art keywords
comment
emotion
word
category
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210700536.9A
Other languages
Chinese (zh)
Inventor
葛继科
程文俊
向月
陈祖琴
武承志
胡庭恺
杨照旭
刘浩因
刘苏
陈超
胥纪超
余文成
董焱
郑育�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Science and Technology filed Critical Chongqing University of Science and Technology
Priority to CN202210700536.9A priority Critical patent/CN115098675A/en
Publication of CN115098675A publication Critical patent/CN115098675A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides an emotion triple generation method based on multi-class table filling, which comprises the following steps: analyzing the original comment text and unifying labels of the aspect words, comment viewpoints and emotion polarities of the comment text by using a joint labeling frame; extracting semantic features of text information by using a Bert pre-training language model; utilizing a multi-class multi-head attention mechanism to learn the relevance class enhancement vector representation of the aspect words and the comment viewpoints; dividing and filtering information of the aspect word recognition and comment viewpoint detection task; filling cell scores and realizing symmetry constraint and implicit constraint of a table structure by using an emotion triple unified mark space; unified tag search and structured decoding are carried out by utilizing the characteristics that the aspect words, the evaluation viewpoints and the emotion polarities are all rectangular frames in the unified labeling space; and constructing a multifunctional comment text aspect word emotion triple. The invention improves the accuracy of the aspect word recognition and comment viewpoint detection and eliminates the problem of emotion triple overlapping.

Description

Emotion triple generation method based on multi-class table filling
Technical Field
The invention relates to the technical field of natural language processing information extraction, in particular to an emotion triple generation method based on multi-class table filling.
Background
With the rapid development of internet platforms such as web society and electronic commerce, more and more users share their own opinions on the web platforms. A large number of user comments comprise comment viewpoints and sentiment tendency, more valuable information can be obtained by performing fine-grained comment mining and sentiment analysis on comment texts, and the method has important significance for consumers, merchants, governments and the like. For example, user reviews of events on a social platform may show the user's position in relation to the events, while reviews on an e-commerce platform may show the user's satisfaction with goods and services. At present, recognizing an Aspect word from a comment text and extracting corresponding emotion polarity become research hotspots of Aspect word emotion triple Extraction (ASTE).
The extraction of the Aspect word emotion triple aims to extract an Aspect word-comment viewpoint triple, namely (Aspect Term, Opinion Term, Sentiment, AOS), from a user comment text. Aspect Term is a Term also called an opinion target, and is a physical word or phrase representing product or service characteristics in a comment text; the Opinion Term is a comment viewpoint and is a word or phrase for expressing the attitude or the viewpoint of the user; sentiment is the emotional polarity (positive, negative, neutral) of the user to the opinion objective. For example, "make-up removal is clean and mild without stimulation, and is smooth and comfortable when the face is used up," make-up removal "is Aspect Term," very clean "is Opinion Term, and is sentime actively, then the terms emotion triple of the sentence are (" make-up removal "," very clean "," active ").
The current extraction of the aspect word emotion triple mainly comprises two modes: pipeline and joint decimation. For the assembly line type emotion triple extraction method, firstly, two relatively independent sequence mark models are used for extracting the aspect words and the comment viewpoints, secondly, the extracted aspect words and the comment viewpoints are paired, secondly, the classification model is used for judging whether the generated word pairs are effective or not, and finally, the effective word pair information is used for judging the emotion polarity of the aspect words, so that emotion triples are generated. However, the method has a limitation on the accuracy of the extraction of the triples, and an extraction mode of a pipeline can cause an accumulated error, that is, the extraction accuracy of the aspect word-comment viewpoint in the previous step can influence the judgment of the emotion polarity of the aspect word in the next step. The method has the advantages that the effect of error accumulation can be effectively relieved by adopting a combined extraction mode, a multi-task frame is utilized to jointly detect the aspect words, the comment viewpoints and the emotion dependence, but the two independent sequences are still used in the frame to label, identify and extract the aspect words and the comment viewpoints, the information interaction between the aspect words and the comment viewpoints is ignored, and the emotion consistency between word pairs cannot be guaranteed.
In summary, although there is a certain research result in the conventional emotion triple extraction, the following disadvantages still exist: 1. the extraction mode of the assembly line can cause error propagation and influence the accuracy of the extraction of the triples; 2. the joint extraction mode ignores information interaction between the aspect word extraction and the comment viewpoint extraction, so that the emotion polarities of word pairs are inconsistent; 3. in the process of joint extraction, the influence of the types of the aspect words or the comment opinions is ignored by the aspect word recognition and comment opinion extraction; 4. the problem of overlapping of emotion triples cannot be solved, so that the identification efficiency is low; 5. when the aspect word-comment viewpoint pair is extracted and the word pair emotion polarity is predicted, the mark spaces between the aspect word-comment viewpoint pair and the comment viewpoint pair are still separated, and information interaction between the aspect word-comment viewpoint pair and the comment viewpoint pair is hindered; 6. the lack of systematic analysis on the overall evaluation results of the user comments makes the user commodity comments in a large scale as references, but cannot assist the user in making decisions intuitively and quickly.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides an emotion triple generation method based on multi-class form filling, which provides data support for the aspect word emotion triple facing to the commodity comment of a user by extracting detailed triple information in the comment text, thereby achieving the purpose of assisting the user to make a decision quickly and accurately.
In order to solve the technical problems, the invention adopts the following technical scheme:
an emotion triple generation method based on multi-class table filling comprises the following steps:
s1, firstly, cleaning the comment text information data obtained by the crawler tool; secondly, uniformly labeling comment viewpoints, evaluation objects, namely, aspect words and emotion types in the data, and constructing an emotion triple uniform marking space; and finally, the marked data is divided into 8: 1: 1, dividing the ratio into a training set, a verification set and a test set;
s2, carrying out feature coding on the comment text by utilizing a Bert pre-training language model, and extracting deep semantic information H of the text;
s3, according to the emotion triple unified mark space, respectively learning the category enhancement vector representation H of the category of the comment and the related aspect words by using a multi-category multi-head attention mechanism A And associated category enhancement vector representation H with comment views O
S4, representing H by a category enhanced vector A 、H O Based on the two-way association of the aspect word recognition task and the comment viewpoint detection task by using a partition filtering mechanism, firstly, an aspect gate similar to an LSTM neural network is realized by using a linear layer neural network
Figure BDA0003703810950000031
Harmony view point door
Figure BDA0003703810950000032
Then, each time step unit is divided into aspect word recognition task partitions rho by using a gating mechanism A Comment viewpoint detection task partition ρ O And shared task partition ρ S Finally, filtering the information irrelevant to the task by using a filtering mechanism to obtain partition filtering information H p
S5, calculating probability distribution score vectors among each word pair by using a double affine depth attention mechanism, and filling the probability distribution score vectors into each word pair cell of an emotion triple unified mark space two-dimensional table;
s6, adding symmetry constraint L to unified tags in the emotion triple unified tag space two-dimensional table sym And implicit constraints L imp
S7, traversing a square representing the aspect words and the comment viewpoints and a rectangle representing emotion polarity in a two-dimensional search table by using an emotion triple unified mark space combined decoding frame, determining the boundary of information of the aspect words and the comment viewpoints by using the property that adjacent rows or columns of the aspect words or the comment objects in the two-dimensional search table are marked to be consistent, decoding the aspect words or the comment viewpoints by using the property that the square is symmetrical about a diagonal line, and traversing the emotion polarity of an aligned rectangular frame structure between the search aspect words and the comment viewpoints by using the detected aspect words and the comment viewpoints;
s8, constructing comment text aspect word emotion triples, aggregating the advantages and disadvantages of aspect word emotion evaluations under various categories and reasons for generation, summarizing the overall comment text emotion triples to reflect overall evaluation results, and automatically generating feedback information according to query conditions of users.
Further, the construction of the emotion triple unified mark space in step S1 includes the following steps:
s11, acquiring the starting position and the ending position of the aspect word A and the comment viewpoint word O in the comment text and the emotion polarity Y of the corresponding aspect word sent ={Pos,Neg,Neu};
S12, obtaining category information describing various aspects words and comment viewpoints in the comment text, and obtaining m category information through statistical analysis, wherein the m category information is defined as Y c ={y 1 ,y 2 ,…,y m };
S13, marking the Chinese character, comment view label and emotion polarity based on the obtained m kinds of information, and defining the marking mode of the Chinese character as Y A ={y 1 ,…,y i ,None},y i ∈Y c The comment is marked in the form of Y O ={y 1 ,…,y i ,None},y i ∈Y c The joint marking mode of emotional polarity is Y P ={y 1 +p 1 ,…,y i +p i ,None},y i ∈Y c ,p i ∈Y sent None indicates no association between word pairs;
s14, filling the obtained aspect word marks, comment viewpoint marks and emotion polarity combined marks into a table T respectively n×n In each cell of (2) to represent a word pair w i,j And the information category relationship between the comment texts is obtained, so that an emotion triple unified mark space is constructed, wherein n represents the length of the comment text S.
Further, in the step S3, the category enhancement vector representations H of the categories to which the comments belong and the aspect words are associated are learned respectively by using a multi-category multi-head attention mechanism A And associated classes with review viewsEnhancement vector representation H O The method specifically comprises the following steps:
s31, further obtaining text context deep semantic information of each time step by using LSTM neural network model
Figure BDA0003703810950000041
The detailed calculation is as follows:
Figure BDA0003703810950000042
Figure BDA0003703810950000043
Figure BDA0003703810950000051
wherein W, b is a trainable parameter, σ represents sigmoid activation function, i t 、o t 、f t Respectively showing an input gate, an output gate and a forgetting gate, c t Cell state representing the current time step; c. C t-1 Cell state representing the previous time step;
Figure BDA0003703810950000052
representing a cell state update value;
s32, representing the Bert output vector
Figure BDA0003703810950000053
Hidden layer vector h output at last time step t-1 As the input part of the multi-class multi-head attention mechanism module, firstly, the module will be
Figure BDA0003703810950000054
And K (t) Point multiplication is carried out to obtain semantic similarity a between each category and each aspect word or comment viewpoint (t) Then V is added (t) And a (t) Click-by-click to find a facet word category or a comment opinion categoryEnhanced vector representation
Figure BDA0003703810950000055
Finally, the hidden layer output vector of the LSTM neural network model is spliced with the category enhancement vector representation to form a final vector representation h of the unit time step t Specifically, the formula is shown as follows:
Figure BDA0003703810950000056
Figure BDA0003703810950000057
Figure BDA0003703810950000058
wherein softmax denotes the activation function, d e Representing the word vector dimension of the Bert output, attention representing the manner in which attention is computed,
Figure BDA0003703810950000059
Figure BDA00037038109500000510
m represents the category to which the text description term or comment opinion belongs,
Figure BDA00037038109500000511
the key-value pair representing the association of the ith category is specifically shown as follows:
Figure BDA00037038109500000512
wherein σ represents a sigmoid activation function;
s33, acquiring category enhanced vector representations of the whole text sequence about the aspect words and the comment viewpoints, wherein the category enhanced vector representations are respectively
Figure BDA0003703810950000061
Wherein
Figure BDA0003703810950000062
Figure BDA0003703810950000063
And splicing the category enhanced vectors of the aspect words and the comment viewpoints to obtain the final overall category enhanced vector representation
Figure BDA0003703810950000064
Further, the step S4 of performing bidirectional association between the aspect word recognition task and the comment opinion detection task by using a partition filtering mechanism specifically includes the following steps:
s41, utilization door
Figure BDA0003703810950000065
Harmony view point door
Figure BDA0003703810950000066
Respectively controlling information distribution of the aspect word recognition task and the comment viewpoint detection task, and dividing the neural unit of each time step into aspect word recognition task partitions rho A Comment viewpoint detection task partition ρ O And shared task partition ρ S The door of the above aspect
Figure BDA0003703810950000067
Harmony view point door
Figure BDA0003703810950000068
Is calculated as follows:
Figure BDA0003703810950000069
Figure BDA00037038109500000610
wherein cummax represents the calculation mode of the accumulated maximum value, Linear represents the standard Linear transformation,
Figure BDA00037038109500000611
represents the t-th overall class enhancement vector representation, h t-1 A hidden layer vector representation representing a previous time step;
s42, further calculating the current time step information representation
Figure BDA00037038109500000612
Figure BDA00037038109500000613
Wherein, tanh represents an activation function;
s43, calculating the aspect word recognition task partition corresponding to the historical time step
Figure BDA00037038109500000614
Review opinion detection task partitioning
Figure BDA00037038109500000615
And shared task partitioning
Figure BDA00037038109500000616
The specific calculation mode is shown as the following formula:
Figure BDA00037038109500000617
Figure BDA00037038109500000618
Figure BDA0003703810950000071
wherein,
Figure BDA00037038109500000714
the multiplication operator of the corresponding elements is represented,
Figure BDA0003703810950000072
and the calculation method of (2) and the step S41
Figure BDA0003703810950000073
Harmony view point door
Figure BDA0003703810950000074
The calculation modes are consistent;
s44, calculating the corresponding aspect word recognition task partition of the current time step
Figure BDA0003703810950000075
Review opinion detection task partitioning
Figure BDA0003703810950000076
And shared task partitioning
Figure BDA0003703810950000077
The specific calculation mode is shown as the following formula:
Figure BDA0003703810950000078
Figure BDA0003703810950000079
Figure BDA00037038109500000710
s45, adding the partition vector of the current time step and the partition vector of the previous time step, and integrating the sum into the total partition information expression rho of the current time step A 、ρ O And ρ S The specific calculation method is shown as the following formula:
Figure BDA00037038109500000711
Figure BDA00037038109500000712
Figure BDA00037038109500000713
s46, identifying task partition rho by using divided aspect words A Comment viewpoint detection task partition ρ O And shared task partition ρ S Different superposition modes between the two groups of the information are realized to realize the retention and the filtration of the information, and for this purpose, the memory storage units of the aspect word recognition task partition, the comment viewpoint detection task partition and the shared task partition are respectively defined as mu A 、μ O And mu S The specific calculation method is shown as the following formula:
μ A =ρ As ,μ O =ρ Os ,μ S =ρ s
s47, splicing the three partition memory storage units to obtain the unit state vector representation c of the next time step t And hidden layer vector representation h t The specific calculation method is shown as the following formula:
c t =Linear([μ A,t ;μ O,t ;μ s,t ])
h t =tanh(c t )
wherein Linear represents a standard Linear transformation;
s48, finally, representing h by each time step vector t Splicing is carried out on the basis of the information, so that the partitioned filtering information H of the recognition of the aspect words and the detection of the comment viewpoints is generated p =[h 1 ,h 2 ,…,h n ]。
Further, the step S5 of calculating a probability distribution score vector between each word pair by using a double affine depth attention mechanism, and filling the probability distribution score vector into each word pair cell of the emotion triple unified tag space two-dimensional table specifically includes the following steps:
s51, predicting the head and tail parts of each word by adopting two multilayer MLP neural network models, wherein the specific calculation mode is shown as the following formula:
Figure BDA0003703810950000081
s52, calculating the fractional vector representation g of each word pair according to the following formula by using a double affine depth attention mechanism model i,j
Figure BDA0003703810950000082
Figure BDA0003703810950000083
Wherein Biaff represents a double affine transformation, U 1 And U 2 All model weights, b represents an offset;
s53, first, the score vector is expressed as g i,j The input as a softmax function predicts the probability distribution of the labels in a word pair as follows:
P(y i,j ∣s)=softmax(dropout(g i,j ))
then, filling the probability distribution of each word pair into an n multiplied by n two-dimensional table T;
and finally, calculating the overall loss value by using the probability distribution condition of the predicted label and the real label according to the following formula:
Figure BDA0003703810950000084
wherein, Y i,j Is a real label.
Further, in the step S6, a symmetry constraint L is added to the unified tag in the emotion triple unified tag space two-dimensional table sym And implicit constraints L imp The steps are as follows:
s61, the labeled structures of the aspect words and the comment viewpoint words are all squares symmetrical about a diagonal, and the loss function for defining the symmetrical constraint is L sym
Figure BDA0003703810950000091
Wherein,
Figure BDA0003703810950000092
p (y) representing all possible occurrences of tags for each word pair in a sentence i,j | s) stacking;
s62, in the emotion triple, the emotion polarity of the aspect word is definitely closely related to the aspect word and the comment viewpoint, and the loss function for defining the implicit constraint is L imp
Figure BDA0003703810950000093
Wherein,
Figure BDA0003703810950000094
p (y) representing all possible occurrences of tags per word pair in a sentence i,j | s) stack.
Compared with the prior art, the method for generating the emotion triple based on the multi-class table filling has the following beneficial effects that:
1. the multi-class multi-head attention mechanism model can consider the influence of the class to which the comment text belongs on the aspect word recognition and comment viewpoint detection triplets, and is beneficial to improving the accuracy of the aspect word recognition and comment viewpoint detection;
2. the partition filtering neural network model ensures that two subtasks of aspect word recognition and comment viewpoint detection are not isolated any more, but fuses information between the two subtasks, and divides the two subtasks into three partitions on the basis of the information, namely an aspect word recognition task partition, a comment viewpoint detection task partition and a shared task partition, so that the bidirectional interaction between the subtasks is improved, the common information between the two subtasks is stored, and irrelevant information is abandoned;
3. according to the table filling-based combined extraction framework, the unified marking space of the comment viewpoints, the aspect words and the emotion triples of emotion polarity is constructed, and the sequence marking and decoding mode of the aspect words and the comment viewpoints is converted into a mode of finding rectangles in a two-dimensional table, so that the problems of information obstruction among different subtasks and emotion triples overlapping are effectively eliminated;
4. the comment text aspect word emotion triple disclosed by the invention can realize quick extraction of the comment text emotion triple, so that the quality and the cause of aspect word emotion under each category can be aggregated, the overall comment text emotion triple can be summarized to reflect the overall evaluation result, and feedback information can be automatically generated according to the query condition of a user.
Drawings
FIG. 1 is a schematic flow diagram of an emotion triple generation method based on multi-class table filling according to the present invention.
FIG. 2 is an emotion triple extraction overall model architecture diagram provided by the embodiment of the invention.
Fig. 3 is a multi-class multi-head attention mechanism detailed schematic diagram of an overall model architecture diagram according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of a partition filter showing an overall model architecture diagram according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram of a unified markup space of emotion triples according to an embodiment of the present invention.
Detailed Description
In order to make the technical means, the creation characteristics, the achievement purposes and the effects of the invention easy to understand, the invention is further explained below by combining the specific drawings.
Referring to fig. 1, the present invention provides an emotion triple generation method based on multi-class table filling, including the following steps:
s1, firstly, cleaning comment text information data obtained by a crawler tool, for example, filtering and cleaning original user commodity comment text information data on a webpage crawled by an automatic Selenium crawler tool by using a regular expression, manual review and other data preprocessing modes, wherein the data preprocessing modes comprise invalid character emoticons removal, comment text filtration and other data filtration; secondly, uniformly labeling comment viewpoints, evaluation objects, namely, aspect words and emotion types in the data, and constructing an emotion triple uniform marking space; and finally, the marked data is divided into 8: 1: 1, dividing the ratio into a training set, a verification set and a test set; subsequently, an emotion triple extraction model based on multi-class table filling is further utilized for triple extraction, and the overall structure of the model is shown in the attached figure 2;
s2, carrying out feature coding on the comment text by utilizing a Bert pre-training language model, and extracting deep semantic information H of the text;
s3, respectively learning category enhancement vector expression H associated with the category of the comment and the aspect word by utilizing a multi-category multi-head attention mechanism according to the emotion triple unified mark space A And associated category enhancement vector representation H with comment views O
S4, representing H by a category enhanced vector A 、H O Based on the two-way association of the aspect word recognition task and the comment viewpoint detection task by using a partition filtering mechanism, firstly, an aspect gate similar to an LSTM neural network is realized by using a linear layer neural network
Figure BDA0003703810950000111
Harmony view point door
Figure BDA0003703810950000112
Then, each time step unit is divided into aspect word recognition task partitions rho by using a gating mechanism A Comment viewpoint detection task partition ρ O And shared task partition ρ S Finally, a filtering mechanism is utilizedFiltering the information irrelevant to the task to obtain partition filtering information H p The information bidirectional communication between the tasks is realized, and the problem of negative migration of other tasks to the task is avoided;
s5, calculating probability distribution score vectors among each word pair by using a double affine depth attention mechanism, and filling the probability distribution score vectors into each word pair cell of an emotion triple unified mark space two-dimensional table;
s6, adding symmetry constraint L to the unified label in the emotion triple unified label space two-dimensional table sym And implicit constraints L imp
S7, traversing a square representing the aspect words and the comment viewpoints and a rectangle representing emotion polarity in a two-dimensional search table by using an emotion triple unified mark space combined decoding frame, determining the boundary of information of the aspect words and the comment viewpoints by using the property that adjacent rows or columns of the aspect words or the comment objects in the two-dimensional search table are marked to be consistent, decoding the aspect words or the comment viewpoints by using the property that the square is symmetrical about a diagonal line, and traversing the emotion polarity of an aligned rectangular frame structure between the search aspect words and the comment viewpoints by using the detected aspect words and the comment viewpoints;
s8, constructing comment text aspect word emotion triples, realizing rapid extraction of comment text emotion triples, aggregating aspect word emotion evaluation advantages and disadvantages under various categories and reasons for generation, summarizing the overall comment text emotion triples to reflect overall evaluation results, and automatically generating feedback information according to user query conditions.
As a specific embodiment, the construction of the emotion triple uniform mark space in step S1 includes the following steps:
s11, analyzing the preprocessed comment text information data to obtain the starting position and the ending position of the aspect word A and the comment viewpoint word O in the comment text and the emotion polarity Y of the corresponding aspect word sent The three emotion polarity categories Pos, Neg and Neu respectively represent positive, negative and neutral;
s12, obtaining the words and comments describing each aspect in the comment textThe statistical analysis obtains m pieces of category information, which is defined as Y c ={y 1 ,y 2 ,…,y m }; for example, through the analysis of the comment text data set, 16 belonging categories of the comment text are obtained, and are defined as Y c ={y 1 ,y 2 ,…,y 16 Such as "logistics", "efficacy", "experience with use", "price", "overall", "size", "smell", "authenticity", "packaging", "service", "composition", "freshness", "hardware performance", "usage scenario", "appearance", "software performance";
s13, marking the facet words, comment viewpoint labels and emotion polarities on the basis of the obtained m types of category information, namely constructing a unified marking space by using the emotion polarity labels and the categories to which comment texts belong, and defining the marking mode of the facet words as Y A ={y 1 ,…,y i ,None},y i ∈Y c The marking mode of the comment viewpoint is Y O ={y 1 ,…,y i ,None},y i ∈Y c The joint marking mode of emotional polarity is Y P ={y 1 +p 1 ,…,y i +p i ,None},y i ∈Y c ,p i ∈Y sent None indicates no association between word pairs; if the makeup removal is clean, mild and not irritating, and the face is smooth and comfortable after use, the makeup removal is used as an aspect word, the clean is used as a comment viewpoint, the emotion polarity is positive, the category to which the evaluation object belongs is efficacy, the labels of the aspect word and the comment viewpoint are efficacy or ECT, the emotion polarity category is jointly labeled ECT-POS, and other words are labeled None;
s14, filling the obtained aspect word mark, comment viewpoint mark and emotion polarity combined mark into a table T respectively n×n In each cell of (a), to represent a word pair w i,j The information category relationship between the comment texts is obtained, so that an emotion triple unified mark space is constructed, wherein n represents the length of the comment text S, and the specific emotion triple unified mark space is shown in fig. 5.
As a specific embodiment, the step S2 of performing feature coding on the comment text by using the Bert pre-training language model, so as to extract the deep semantic information H of the text specifically includes the following steps:
s21, carrying out statistical analysis on the preprocessed comment text information data to obtain that the longest sentence length in the comment text is 108, if the comment text length is lower than 108 characters, a 0 completion sequence is needed, otherwise, if the comment text length exceeds 108 characters, the comment text is cut off, and meanwhile, a [ CLS ] character and a [ SEP ] character are respectively spliced and added at the initial position and the tail position of the comment text;
s22, as shown in FIG. 2, sentence coding is carried out by using the Bert pre-training language model; the Bert pre-training language model consists of an Encoder part in a 12-layer Transformer module, wherein each layer of Encoder consists of Multi-Head Attention, LayerNormalization and feed Forward. The Multi-Head Attention consists of 12 heads, firstly, the word embedding expression of comment texts and 3 weight matrixes are utilized to carry out dot product operation to obtain a Query matrix, a Key matrix and a Value matrix, as shown in the following formula (1); secondly, calculating the semantic similarity alpha between the Query and the Key by using the Query and the Key, as shown in the following formula (2); then, performing dot multiplication by using the semantic similarity alpha and the Value matrix to obtain a single-head result, which is shown in the following formula (3); finally, the results of the 12 heads are combined to obtain the final output, as shown in the following formula (4):
Query,Key,Value=X e (W Q ,W K ,W V ) Formula (1)
Figure BDA0003703810950000131
head i =Attention(Query,Key,Value)=α i V type (3)
MultiHead(Query,Key,Value)=Concat(head 1 ,…,head i )W O Formula (4)
In addition, the output result layer of the MultiHead is normalized by using a LayNormalization module, the final output of the Encoder in the current layer is obtained through analysis and processing of a feedforward neural network module, and the steps are continuously repeated for each subsequent layer of Encoder until the last layer of Encoder outputs the deep semantic information H of the whole comment text, as shown in the following formula 5:
Figure BDA0003703810950000141
the output result H of the Bert pre-training language model of the formula (5) is used as data input of a multi-class multi-head Attention mechanism, and the multi-class multi-head Attention mechanism model is applied to the Aspect word recognition and comment viewpoint extraction module respectively, so that interactive information of comment categories on Aspect words and comment viewpoints is obtained, as shown in the Aspect Type attachment and Opinion Type attachment parts in the attached drawing 2. Each multi-class multi-head Attention mechanism module consists of a long short-term neural network unit (LSTM) and a multi-class Attention unit (Type-Attention). Therefore, as a specific embodiment, the category enhancement vector representation H of the category to which the comment belongs and the related aspect word are learned separately in step S3 using a multi-category multi-head attention mechanism A And an associated category enhancement vector representation H with comment views O The method specifically comprises the following steps:
s31, further obtaining text context deep semantic information of each time step by using LSTM neural network model
Figure BDA0003703810950000142
The detailed calculation is as follows:
Figure BDA0003703810950000143
Figure BDA0003703810950000144
Figure BDA0003703810950000145
wherein W, b is trainableParameter, σ denotes sigmoid activation function, i t 、o t 、f t Respectively showing an input gate, an output gate and a forgetting gate, c t Cell state representing the current time step; c. C t-1 Cell state representing the previous time step;
Figure BDA0003703810950000146
representing a cell state update value;
s32, representing the Bert output vector of the step S31
Figure BDA0003703810950000147
And the hidden layer vector h output at the last time step t-1 As the input part of the multi-category multi-head attention mechanism module to highlight the influence of a plurality of comment categories on the aspect words and comment viewpoints, firstly, the multi-category multi-head attention mechanism module will
Figure BDA0003703810950000148
And K (t) Point multiplication is carried out to obtain semantic similarity a between each category and each aspect word or comment viewpoint (t) Then V is added (t) And a (t) Enhanced vector representation of aspect word category or comment viewpoint category obtained by point multiplication
Figure BDA0003703810950000151
Finally, the hidden layer output vector of the LSTM neural network model is spliced with the category enhancement vector representation to form a final vector representation h of the unit time step t Specifically, the formula is shown as follows:
Figure BDA0003703810950000152
Figure BDA0003703810950000153
Figure BDA0003703810950000154
wherein softmax denotes the activation function, d e Representing the word vector dimension of the Bert output, attention representing the manner in which attention is computed,
Figure BDA0003703810950000155
Figure BDA0003703810950000156
m represents the category to which the text description term or comment opinion belongs,
Figure BDA0003703810950000157
the key-value pair representing the association of the ith category is specifically shown as follows:
Figure BDA0003703810950000158
wherein σ represents a sigmoid activation function;
s33, since the aspect word and the comment viewpoint have a common comment text description category, the category enhancement vector representation of the whole text sequence about the aspect word and the comment viewpoint, respectively, can be obtained through the above-mentioned steps S31 and S32
Figure BDA0003703810950000159
Wherein
Figure BDA00037038109500001510
And splicing the aspect words and the category enhancement vectors of the comment viewpoints to obtain the final overall category enhancement vector representation
Figure BDA00037038109500001511
The detailed module structure is shown in fig. 3.
The final overall class enhancement vector representation H is obtained at the above step S33 type On the basis, a task Partition filtering neural network model, namely a Partition filtering mechanism, is constructed, as shown in a Partition Filter part of fig. 2. The network model is composed of a partition encoder and a partition filtering encoderThe device comprises the following components: the partition encoder divides each neuron into three regions by using a gating mechanism, namely an aspect word recognition task partition, a comment viewpoint detection task partition and a shared task partition; the partition filtering encoder is used for eliminating information which is counterproductive between different tasks and avoiding error message propagation, and the detailed module structure is shown in figure 4. Therefore, as a specific embodiment, the step S4 of bi-directionally associating the aspect word recognition task and the comment viewpoint detection task by using the partition filtering mechanism specifically includes the following steps:
s41, utilization side door
Figure BDA0003703810950000161
Harmony view point door
Figure BDA0003703810950000162
Respectively controlling information distribution of the aspect word recognition task and the comment viewpoint detection task, dividing a neural unit into two parts by a gating mechanism of any specific task, wherein one part is an information part related to the specific task, the other part is information distribution unrelated to the task, then forming a shared task partition by combining partition results from the two specific tasks, and further defining the two gating mechanisms as aspect gates
Figure BDA0003703810950000163
Harmony view point door
Figure BDA0003703810950000164
The square door
Figure BDA0003703810950000165
And viewpoint door
Figure BDA0003703810950000166
Is calculated as follows:
Figure BDA0003703810950000167
Figure BDA0003703810950000168
wherein cummax represents the calculation mode of the accumulated maximum value, Linear represents the standard Linear transformation,
Figure BDA0003703810950000169
representing the t-th overall class enhancement vector representation, h, in step S33 t-1 A hidden layer vector representation representing a previous time step;
using the gated neural units described above, the neural units at each time step can be further divided into three partitions, namely the facet recognition task partition ρ A Comment viewpoint detection task partition ρ O And shared task partition ρ S
S42, for further calculating the information representation of the three partitions, the information representation of the current time step when the partitions are not divided needs to be relied on
Figure BDA00037038109500001610
And history information c of the last time step t-1 . Calculating information representation of candidate unit, using LSTM neural network to calculate memory storage unit of current time step, and further calculating information representation of current time step
Figure BDA0003703810950000171
In the following manner:
Figure BDA0003703810950000172
wherein, tanh represents an activation function;
s43, utilizing the historical time step information c obtained in the step S42 t-1 Calculating the aspect word recognition task partition corresponding to the historical time step
Figure BDA0003703810950000173
Review opinion detection task partitioning
Figure BDA0003703810950000174
And shared task partitioning
Figure BDA0003703810950000175
The specific calculation mode is shown as the following formula:
Figure BDA0003703810950000176
Figure BDA0003703810950000177
Figure BDA0003703810950000178
wherein,
Figure BDA00037038109500001721
the multiplication operator of the corresponding elements is represented,
Figure BDA0003703810950000179
and the calculation method of (2) and the step S41
Figure BDA00037038109500001710
Harmony view point door
Figure BDA00037038109500001711
The calculation modes are consistent;
s44, the information of the current time step obtained in the step S42 is used to show
Figure BDA00037038109500001712
Calculating the aspect word recognition task partition corresponding to the current time step
Figure BDA00037038109500001713
Review opinion detection task partitioning
Figure BDA00037038109500001714
And shared task partitioning
Figure BDA00037038109500001715
The specific calculation mode is shown as the following formula:
Figure BDA00037038109500001716
Figure BDA00037038109500001717
Figure BDA00037038109500001718
wherein,
Figure BDA00037038109500001722
representing the corresponding element multiplication operator;
s45, adding the partition vector of the current time step and the partition vector of the previous time step, and integrating the sum into the total partition information expression rho of the current time step A 、ρ O And ρ S The specific calculation method is shown as the following formula:
Figure BDA00037038109500001719
Figure BDA00037038109500001720
Figure BDA0003703810950000181
wherein,
Figure BDA0003703810950000182
representing a corresponding element multiplication operator;
s46, further constructing memory storage units of the three partition information to ensure high uniformity of the related information, abandoning the unrelated information, and identifying task partitions rho by using the divided aspect words A Comment viewpoint detection task partition ρ O And shared task partition ρ S Different superposition modes between the two groups of the information are adopted to realize the retention and the filtration of the information, and the memory storage units of the aspect word recognition task partition, the comment viewpoint detection task partition and the shared task partition are respectively defined as mu A 、μ O And mu S The specific calculation method is shown as the following formula:
μ A =ρ As ,μ O =ρ Os ,μ S =ρ s formula (25)
From the above equation, it can be seen that for μ A In other words, the main information of the method is derived from the aspect word recognition task partition and the sharing task partition; mu.s O The information of (2) is derived from the comment viewpoint detection task partition and the sharing task partition; mu.s S The information sources of (1) are mainly concentrated on the shared task partitions;
s47, splicing the partition characteristic vector representation output by the partition encoder part with the memory storage unit obtained by the partition filtering encoder, namely splicing the three partition memory storage units to obtain the unit state vector representation c of the next time step t And hidden layer vector representation h t The specific calculation method is shown as the following formula:
c t =Linear([μ A,t ;μ O,t ;μ s,t ]) Formula (26)
h t =tanh(c t ) Formula (27)
Wherein Linear represents a standard Linear transformation;
s48, finally, representing h by each time step vector t Splicing is carried out on the basis of the information, so that the partitioned filtering information H of the recognition of the aspect words and the detection of the comment viewpoints is generated p =[h 1 ,h 2 ,…,h n ]。
Further at H p On the basis of the method, a joint extraction framework based on table filling is constructed: firstly, the information H of the previous layer is processed by using a double affine depth attention mechanism p Mapping into head and tail vectors of words in each word pair, and calculating scores or scores g between each word pair i,j And mixing g i,j Filling the scores into an nxn emotion triple unified mark space two-dimensional table T; then adding symmetry constraint and implicit constraint to the unified labels in the table through structural regularization; and finally, identifying the square and the rectangle in the two-dimensional table by using the emotion triple unified decoding frame.
As a specific embodiment, in the step S5, a probability distribution score vector between each word pair is calculated by using a double affine depth attention mechanism, and the probability distribution score vector is filled into each word pair cell of the emotion triple unified markup space two-dimensional table, as shown in the Biaffine Model part of FIG. 2. The method specifically comprises the following steps:
s51, predicting the head and tail parts of each word by adopting two multi-layer MLP neural network models, wherein the specific calculation mode is shown as the following formula:
Figure BDA0003703810950000191
s52, calculating the fractional vector expression g of each word pair by using a double affine depth attention mechanism model according to the following formula i,j
Figure BDA0003703810950000192
Figure BDA0003703810950000193
Wherein Biaff represents a double affine transformation, U 1 And U 2 All model weights, b represents an offset;
s53, obtaining the scoreAmount represents g i,j Then, the fractional vector is first represented as g i,j The input as the softmax function predicts the tags in a word pair as follows, giving a classification probability distribution over the tag space γ:
P(y i,j ∣s)=softmax(dropout(g i,j ) Formula (31)
Then, filling the probability distribution of each word pair into an n multiplied by n two-dimensional table T;
and finally, calculating the overall loss value by using the probability distribution condition of the predicted label and the real label according to the following formula:
Figure BDA0003703810950000194
wherein, Y i,j Is a real label.
The sample emotion triple unified mark space in fig. 5 is symmetrical about the two-dimensional table, and emotion polarity is certainly dependent on the aspect word and comment viewpoint, so that the symmetry and the implication of the two-dimensional table are used for limiting the detection of aspect word recognition and comment viewpoint. On the basis, symmetrical constraint and implication constraint are added to the two-dimensional table T through structural regularization. Therefore, as a specific embodiment, in the step S6, a symmetry constraint L is added to the unified tag in the emotion triple unified tag space two-dimensional table sym And implicit constraints L imp The steps are as follows:
s61, the corresponding squares of the aspect words in the two-dimensional table T are necessarily symmetrical about the diagonal line, and then are symmetrical about the comment viewpoint mark, namely, the labeling structures of the aspect words and the comment viewpoint words are the squares and the emotion triples (A) 1 ,O 1 Set) and (O) 1 ,A 1 Nt) are equivalent, so that a symmetry constraint can be used to improve the recognition result, for which a loss function defining the symmetry constraint is L sym
Figure BDA0003703810950000201
Wherein,
Figure BDA0003703810950000202
p (y) representing all possible occurrences of tags per word pair in a sentence i,j | s) stacking;
s62, in the emotion triple, the emotion polarity of the aspect word is certain to be closely related to the aspect word and the comment viewpoint, if the emotion polarity of the aspect word exists, the aspect word and the comment viewpoint must exist, namely the probability of the emotion polarity of the aspect word is not larger than the probability of the aspect word and the comment viewpoint, so that the implicit constraint can be easily added into the emotion triple extraction task, and the loss function for defining the implicit constraint is L imp
Figure BDA0003703810950000203
Wherein,
Figure BDA0003703810950000204
p (y) representing all possible occurrences of tags per word pair in a sentence i,j | s) stacking.
Further, the overall loss function L is jointly trained and minimized by whole Symmetric constraint loss function L sym And an implicit constraint loss function of L imp
L=L whole +L sym +L imp
By observing fig. 5, it is found that the aspect words and the comment views are squares symmetrical about a diagonal line, and the emotion polarity between the two is a rectangle about the diagonal line, so that the mining of emotion triples is converted into the search of rectangular boxes. On this basis, as a specific embodiment, in step S7, the detection of the square and the rectangle of the two-dimensional table T is implemented by using an emotion triple unified tag space joint decoding frame: firstly, decoding span range prediction of the aspect words and the comment viewpoints, namely determining the boundary of the aspect words and the comment viewpoints by using the consistent property of the marks of adjacent rows or columns of the aspect words or the evaluation objects in a two-dimensional table; then, the types of the aspect words and the comment opinions are decoded, namely, the aspect words or the comment opinions are decoded by using the symmetrical property of the square about the diagonal; and finally, traversing the emotion polarity of the aligned rectangular frame structure between the search aspect word and the comment viewpoint and decoding the emotion polarity between the aspect word and the comment viewpoint by using the detected aspect word and the comment viewpoint so as to generate an emotion triple. The method specifically comprises the following steps:
s71, determining the boundaries of the aspect words and the comment viewpoints according to the fact that the corresponding rows and columns under the same labels in the emotion triple two-dimensional table T are necessarily the same:
firstly, the probability matrix p is expanded according to rows to obtain
Figure BDA0003703810950000211
Calculating the Euclidean distance of adjacent rows;
then, the probability matrix p is developed according to columns to obtain
Figure BDA0003703810950000212
And calculating the Euclidean distance of the adjacent columns;
finally, the average value of the distance between the two positions is calculated and compared with a default distance threshold value, and if the distance is exceeded, the position is represented as a segmentation position;
s72, utilizing the character that the aspect words and comment opinions in the emotion triple two-dimensional table T are squares symmetrical about the diagonal line, identifying and detecting the category of the two
Figure BDA0003703810950000213
Figure BDA0003703810950000214
If it is not
Figure BDA0003703810950000215
Decoding into an aspect word or comment view;
s73, in the known aspect the word a 1 Span range of (i, j) and comment term o 1 In the case of span range of (m, n), the two-dimensional table is used to determine the correlation between the twoAssociative nature, decoding rectangular emotion types between the two
Figure BDA0003703810950000221
If it is
Figure BDA0003703810950000222
Then decode to the emotion polarity between the two to form the final emotion triple
Figure BDA0003703810950000223
As a specific embodiment, the construction of the emotion triple model is realized by using a pytorech deep learning framework, training is carried out on a training set and verification is carried out on a verification set, the training parameters of the model are stored, and the deployment of the front end and the rear end of the emotion triple is further realized by using a flash framework, which comprises the following steps:
firstly, realizing the front-end page layout of the system by using an HTML language;
secondly, writing a back-end program to realize loading, prediction and analysis of the emotion triple model parameters;
and finally, utilizing a flash frame to realize the connection between the page data and the rear-end interface so as to achieve the visual effect.
In addition, the comment text aspect word emotion triple constructed by the invention comprises the following functions: and identifying and generating emotion triples of the comment text, aggregating the quality and the cause of the emotion evaluation of the aspect words under each category, summarizing the emotion triples of the overall comment text to reflect the overall evaluation result, and automatically generating feedback information meeting the query condition of the user.
Compared with the prior art, the emotion triple generation method based on multi-class table filling provided by the invention has the following beneficial effects:
1. the multi-class multi-head attention mechanism model can consider the influence of the class of the comment text on the aspect word recognition and comment viewpoint detection triples, and is beneficial to improving the accuracy of the aspect word recognition and comment viewpoint detection;
2. the partition filtering neural network model ensures that two subtasks of aspect word recognition and comment viewpoint detection are not isolated any more, but fuses information between the two subtasks, and divides the two subtasks into three partitions on the basis of the information, namely an aspect word recognition task partition, a comment viewpoint detection task partition and a shared task partition, so that the bidirectional interaction between the subtasks is improved, the common information between the two subtasks is stored, and irrelevant information is abandoned;
3. according to the table filling-based combined extraction framework, the unified marking space of the comment viewpoints, the aspect words and the emotion triples of emotion polarity is constructed, and the sequence marking and decoding mode of the aspect words and the comment viewpoints is converted into a mode of finding rectangles in a two-dimensional table, so that the problems of information obstruction among different subtasks and emotion triples overlapping are effectively eliminated;
4. the comment text aspect word emotion triple disclosed by the invention can realize quick extraction of the comment text emotion triple, so that the quality and the cause of aspect word emotion under each category can be aggregated, the overall comment text emotion triple can be summarized to reflect the overall evaluation result, and feedback information can be automatically generated according to the query condition of a user.
Finally, the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, and all of them should be covered in the claims of the present invention.

Claims (6)

1. An emotion triple generation method based on multi-class table filling is characterized by comprising the following steps:
s1, firstly, cleaning the comment text information data obtained by the crawler tool; secondly, uniformly labeling comment viewpoints, evaluation objects, namely, aspect words and emotion types in the data, and constructing an emotion triple uniform marking space; and finally, the marked data is divided into 8: 1: 1, dividing the ratio into a training set, a verification set and a test set;
s2, carrying out feature coding on the comment text by utilizing a Bert pre-training language model, and extracting deep semantic information H of the text;
s3, respectively learning category enhancement vector expression H associated with the category of the comment and the aspect word by utilizing a multi-category multi-head attention mechanism according to the emotion triple unified mark space A And an associated category enhancement vector representation H with comment views O
S4, representing H by category enhanced vector A 、H O Based on the two-way association of the aspect word recognition task and the comment viewpoint detection task by using a partition filtering mechanism, firstly, an aspect gate similar to an LSTM neural network is realized by using a linear layer neural network
Figure FDA0003703810940000011
Harmony view point door
Figure FDA0003703810940000012
Then, each time step unit is divided into aspect word recognition task partitions rho by using a gating mechanism A Comment viewpoint detection task partition ρ O And shared task partition ρ S Finally, filtering the information irrelevant to the task by using a filtering mechanism to obtain partition filtering information H p
S5, calculating probability distribution score vectors among each word pair by using a double affine depth attention mechanism, and filling the probability distribution score vectors into each word pair cell of an emotion triple unified mark space two-dimensional table;
s6, adding symmetry constraint L to unified tags in the emotion triple unified tag space two-dimensional table sym And implicit constraints L imp
S7, traversing a square representing the aspect words and the comment viewpoints and a rectangle representing emotion polarity in a two-dimensional search table by using an emotion triple unified mark space combined decoding frame, determining the boundary of information of the aspect words and the comment viewpoints by using the property that adjacent rows or columns of the aspect words or the comment objects in the two-dimensional search table are marked to be consistent, decoding the aspect words or the comment viewpoints by using the property that the square is symmetrical about a diagonal line, and traversing the emotion polarity of an aligned rectangular frame structure between the search aspect words and the comment viewpoints by using the detected aspect words and the comment viewpoints;
s8, constructing comment text aspect word emotion triples, aggregating the advantages and disadvantages of aspect word emotion evaluations under various categories and reasons for generation, summarizing the overall comment text emotion triples to reflect overall evaluation results, and automatically generating feedback information according to query conditions of users.
2. The method for generating emotion triples based on multi-class table padding according to claim 1, wherein the constructing of the emotion triple unified tag space in step S1 includes the following steps:
s11, acquiring the starting position and the ending position of the aspect word A and the comment viewpoint word O in the comment text and the emotion polarity Y of the corresponding aspect word sent ={Pos,Neg,Neu};
S12, obtaining category information describing various aspects words and comment viewpoints in the comment text, and obtaining m category information through statistical analysis, wherein the m category information is defined as Y c ={y 1 ,y 2 ,…,y m };
S13, marking the Chinese character, comment view label and emotion polarity based on the obtained m kinds of information, and defining the marking mode of the Chinese character as Y A ={y 1 ,…,y i ,None},y i ∈Y c The marking mode of the comment viewpoint is Y O ={y 1 ,…,y i ,None},y i ∈Y c The joint marking mode of emotional polarity is Y P ={y 1 +p 1 ,…,y i +p i ,None},y i ∈Y c ,p i ∈Y sent None indicates no association between word pairs;
s14, filling the obtained aspect word mark, comment viewpoint mark and emotion polarity combined mark into a table T respectively n×n In each cell ofTo represent word pairs w i,j And the information category relationship between the comment texts is obtained, so that an emotion triple unified mark space is constructed, wherein n represents the length of the comment text S.
3. The method for generating emotion triples based on multi-class table filling as claimed in claim 1, wherein said step S3 utilizes multi-class multi-head attention mechanism to learn the class enhancement vector representation H of the class to which the comment belongs and the related aspect word respectively A And an associated category enhancement vector representation H with comment views O The method specifically comprises the following steps:
s31, further obtaining text context deep semantic information of each time step by using LSTM neural network model
Figure FDA0003703810940000031
The detailed calculation is as follows:
Figure FDA0003703810940000032
Figure FDA0003703810940000033
Figure FDA0003703810940000034
wherein W, b is a trainable parameter, σ represents sigmoid activation function, i t 、o t 、f t Respectively showing an input gate, an output gate and a forgetting gate, c t A cell state representing a current time step; c. C t-1 Cell state representing the previous time step;
Figure FDA0003703810940000035
representing a cell state update value;
s32, outputting the Bert to the vector tableDisplay device
Figure FDA0003703810940000036
Hidden layer vector h output at last time step t-1 As the input part of the multi-class multi-head attention mechanism module, firstly, the multi-class multi-head attention mechanism module is used
Figure FDA0003703810940000037
And K (t) Point multiplication is carried out to obtain semantic similarity a between each category and each aspect word or comment viewpoint (t) Then V is added (t) And a (t) Enhanced vector representation of aspect word category or comment viewpoint category obtained by point multiplication
Figure FDA0003703810940000038
Finally, the hidden layer output vector of the LSTM neural network model is spliced with the category enhancement vector representation to form a final vector representation h of the unit time step t Specifically, the following formula is shown:
Figure FDA0003703810940000039
Figure FDA00037038109400000310
Figure FDA00037038109400000311
wherein softmax denotes the activation function, d e Representing the dimension of the word vector output by Bert, attention representing the way in which attention is computed,
Figure FDA00037038109400000312
Figure FDA0003703810940000041
m representsThe text describes the category to which the facet word or comment opinion belongs,
Figure FDA0003703810940000042
the key-value pair representing the association of the ith category is specifically shown as follows:
Figure FDA0003703810940000043
wherein σ represents a sigmoid activation function;
s33, acquiring category enhanced vector representations of the whole text sequence about the aspect words and the comment viewpoints, wherein the category enhanced vector representations are respectively
Figure FDA0003703810940000044
Wherein
Figure FDA0003703810940000045
Figure FDA0003703810940000046
And splicing the aspect words and the category enhancement vectors of the comment viewpoints to obtain the final overall category enhancement vector representation
Figure FDA0003703810940000047
4. The method for generating emotion triples based on multi-category table filling as claimed in claim 1, wherein said step S4 of bi-directionally associating the aspect word recognition task with the comment viewpoint detection task by using a partition filtering mechanism specifically includes the following steps:
s41, utilization door
Figure FDA0003703810940000048
Harmony view point door
Figure FDA0003703810940000049
Respectively controlling information distribution of the aspect word recognition task and the comment viewpoint detection task, and dividing the neural unit of each time step into aspect word recognition task partitions rho A Comment viewpoint detection task partition ρ O And shared task partition ρ S The door of the above aspect
Figure FDA00037038109400000410
Harmony view point door
Figure FDA00037038109400000411
Is calculated as follows:
Figure FDA00037038109400000412
Figure FDA00037038109400000413
wherein cummax represents the calculation mode of the accumulated maximum value, Linear represents the standard Linear transformation,
Figure FDA00037038109400000414
represents the t-th overall class enhancement vector representation, h t-1 A hidden layer vector representation representing a previous time step;
s42, further calculating the current time step information representation
Figure FDA00037038109400000415
Figure FDA0003703810940000051
Wherein, tanh represents an activation function;
s43, calculating the aspect word recognition task partition corresponding to the historical time step
Figure FDA0003703810940000052
Review opinion detection task partitioning
Figure FDA0003703810940000053
And shared task partitioning
Figure FDA0003703810940000054
The specific calculation mode is shown as the following formula:
Figure FDA0003703810940000055
Figure FDA0003703810940000056
Figure FDA0003703810940000057
wherein,
Figure FDA00037038109400000520
the multiplication operator of the corresponding elements is represented,
Figure FDA0003703810940000058
and the calculation method of (S41)
Figure FDA0003703810940000059
Harmony view point door
Figure FDA00037038109400000510
The calculation modes of (2) are consistent;
s44, calculating the aspect word recognition task partition corresponding to the current time step
Figure FDA00037038109400000511
Review opinion detection task partitioning
Figure FDA00037038109400000512
And shared task partitioning
Figure FDA00037038109400000513
The specific calculation mode is shown as the following formula:
Figure FDA00037038109400000514
Figure FDA00037038109400000515
Figure FDA00037038109400000516
s45, adding the partition vector of the current time step and the partition vector of the previous time step, and integrating the sum into the total partition information expression rho of the current time step A 、ρ O And ρ S The specific calculation method is shown as the following formula:
Figure FDA00037038109400000517
Figure FDA00037038109400000518
Figure FDA00037038109400000519
s46, identifying task partition rho by using divided aspect words A Comment viewpoint detection task partition ρ O And shared task partition ρ S Different superposition modes between the two groups of the information are realized to realize the retention and the filtration of the information, and for this purpose, the memory storage units of the aspect word recognition task partition, the comment viewpoint detection task partition and the shared task partition are respectively defined as mu A 、μ O And mu S The specific calculation method is shown as the following formula:
μ A =ρ As ,μ O =ρ Os ,μ S =ρ s
s47, splicing the three partition memory storage units to obtain the unit state vector representation c of the next time step t And hidden layer vector representation h t The specific calculation method is shown as the following formula:
c t =Linear([μ A,t ;μ O,t ;μ s,t ])
h t =tanh(c t )
wherein Linear represents a standard Linear transformation;
s48, finally, representing h by each time step vector t Splicing is carried out on the basis of the information, so that the partitioned filtering information H of the recognition of the aspect words and the detection of the comment viewpoints is generated p =[h 1 ,h 2 ,…,h n ]。
5. The method of claim 1, wherein the step S5 of calculating a probability distribution score vector between each word pair using a double affine depth attention mechanism, and filling the probability distribution score vector into each word pair cell of the unified markup space two-dimensional table of emotion triples specifically comprises the steps of:
s51, predicting the head and tail parts of each word by adopting two multilayer MLP neural network models, wherein the specific calculation mode is shown as the following formula:
Figure FDA0003703810940000061
s52, calculating the fractional vector representation g of each word pair according to the following formula by using a double affine depth attention mechanism model i,j
Figure FDA0003703810940000062
Figure FDA0003703810940000063
Wherein Biaff represents a double affine transformation, U 1 And U 2 All model weights, b represents an offset;
s53, first, the score vector is expressed as g i,j The input as a softmax function predicts the probability distribution of the labels in a word pair as follows:
P(y i,j ∣s)=softmax(dropout(g i,j ))
then, filling the probability distribution of each word pair into an n multiplied by n two-dimensional table T;
and finally, calculating the overall loss value by using the probability distribution condition of the predicted label and the real label according to the following formula:
Figure FDA0003703810940000071
wherein, Y i,j Is a real label.
6. The method of claim 1, wherein adding symmetry constraint L to the unified tags in the unified tag space two-dimensional table of emotion triples sym And implicit constraints L imp Comprises the following steps:
s61, the labeled structures of the aspect words and the comment viewpoint words are all squares symmetrical about a diagonal, and the loss function for defining the symmetrical constraint is L sym
Figure FDA0003703810940000072
Wherein,
Figure FDA0003703810940000073
p (y) representing all possible occurrences of tags per word pair in a sentence i,j | s) stacking;
s62, in the emotion triple, the emotion polarity of the aspect word is definitely closely related to the aspect word and the comment viewpoint, and the loss function for defining the implicit constraint is L imp
Figure FDA0003703810940000074
Wherein,
Figure FDA0003703810940000075
p (y) representing all possible occurrences of tags per word pair in a sentence i,j | s) stack.
CN202210700536.9A 2022-06-20 2022-06-20 Emotion triple generation method based on multi-class table filling Pending CN115098675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210700536.9A CN115098675A (en) 2022-06-20 2022-06-20 Emotion triple generation method based on multi-class table filling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210700536.9A CN115098675A (en) 2022-06-20 2022-06-20 Emotion triple generation method based on multi-class table filling

Publications (1)

Publication Number Publication Date
CN115098675A true CN115098675A (en) 2022-09-23

Family

ID=83292028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210700536.9A Pending CN115098675A (en) 2022-06-20 2022-06-20 Emotion triple generation method based on multi-class table filling

Country Status (1)

Country Link
CN (1) CN115098675A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115391570A (en) * 2022-10-28 2022-11-25 聊城大学 Method and device for constructing emotion knowledge graph based on aspects
CN115563972A (en) * 2022-10-17 2023-01-03 北京中科智加科技有限公司 Training method of structured six-linkage emotion analysis model
CN116562305A (en) * 2023-07-10 2023-08-08 江西财经大学 Aspect emotion four-tuple prediction method and system
CN116757195A (en) * 2023-06-25 2023-09-15 哈尔滨工业大学 Implicit emotion recognition method based on prompt learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115563972A (en) * 2022-10-17 2023-01-03 北京中科智加科技有限公司 Training method of structured six-linkage emotion analysis model
CN115563972B (en) * 2022-10-17 2023-07-04 北京中科智加科技有限公司 Training method of structured six-linked emotion analysis model
CN115391570A (en) * 2022-10-28 2022-11-25 聊城大学 Method and device for constructing emotion knowledge graph based on aspects
CN116757195A (en) * 2023-06-25 2023-09-15 哈尔滨工业大学 Implicit emotion recognition method based on prompt learning
CN116562305A (en) * 2023-07-10 2023-08-08 江西财经大学 Aspect emotion four-tuple prediction method and system
CN116562305B (en) * 2023-07-10 2023-09-12 江西财经大学 Aspect emotion four-tuple prediction method and system

Similar Documents

Publication Publication Date Title
CN110222140B (en) Cross-modal retrieval method based on counterstudy and asymmetric hash
CN115098675A (en) Emotion triple generation method based on multi-class table filling
CN112884551B (en) Commodity recommendation method based on neighbor users and comment information
Sharma et al. Visual question answering model based on graph neural network and contextual attention
Sonkar et al. qdkt: Question-centric deep knowledge tracing
Sharma et al. A survey of methods, datasets and evaluation metrics for visual question answering
Li et al. UD_BBC: Named entity recognition in social network combined BERT-BiLSTM-CRF with active learning
CN111666406A (en) Short text classification prediction method based on word and label combination of self-attention
CN113378919B (en) Image description generation method for fusing visual sense and enhancing multilayer global features
CN111311364B (en) Commodity recommendation method and system based on multi-mode commodity comment analysis
CN111368197A (en) Deep learning-based comment recommendation system and method
CN115391570A (en) Method and device for constructing emotion knowledge graph based on aspects
CN114357167B (en) Bi-LSTM-GCN-based multi-label text classification method and system
CN114648031A (en) Text aspect level emotion recognition method based on bidirectional LSTM and multi-head attention mechanism
CN116187349A (en) Visual question-answering method based on scene graph relation information enhancement
CN112036189A (en) Method and system for recognizing gold semantic
CN113240033B (en) Visual relation detection method and device based on scene graph high-order semantic structure
CN115018941A (en) Text-to-image generation algorithm based on improved version text parser
US11948387B2 (en) Optimized policy-based active learning for content detection
CN113268592B (en) Short text object emotion classification method based on multi-level interactive attention mechanism
Ren et al. A co-attention based multi-modal fusion network for review helpfulness prediction
CN117408735A (en) Client management method and system based on Internet of things
Jasim et al. Analyzing Social Media Sentiment: Twitter as a Case Study
CN112612900A (en) Knowledge graph guided multi-scene image generation method
Shi et al. Product feature extraction from Chinese online reviews: Application to product improvement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination