CN115905524A - Emotion analysis method, device and equipment integrating syntactic and semantic information - Google Patents


Info

Publication number
CN115905524A
CN115905524A
Authority
CN
China
Prior art keywords
sentence, semantic, representation, module, syntactic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211383395.9A
Other languages
Chinese (zh)
Other versions
CN115905524B (en)
Inventor
冯锦辉
蔡倩华
李坤桃
陈一帆
薛云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202211383395.9A priority Critical patent/CN115905524B/en
Publication of CN115905524A publication Critical patent/CN115905524A/en
Application granted granted Critical
Publication of CN115905524B publication Critical patent/CN115905524B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to the field of emotion analysis, in particular to an emotion analysis method, device, equipment and storage medium fusing syntactic and semantic information.

Description

Emotion analysis method, device and equipment integrating syntactic and semantic information
Technical Field
The invention relates to the field of emotion analysis, in particular to an emotion analysis method, device, equipment and storage medium integrating syntax and semantic information.
Background
Aspect-based sentiment analysis (ABSA) is an important task in fine-grained sentiment analysis; it aims to automatically infer the sentiment polarity of the aspect words of a sentence from their context.
The existing technical schemes are semantic-based methods: the attention weights of the context words are generally learned automatically by a neural network, and the aspect words of a sentence are analyzed by combining these attention weights. However, such methods still ignore the interaction between syntactic and semantic effects, that is, their common information, and introduce unnecessary noise into the ABSA task, so that the emotion recognition result is inaccurate, the efficiency is low, and sentences cannot be analyzed accurately.
Disclosure of Invention
Based on this, the invention aims to provide an emotion analysis method, device, equipment and storage medium fusing syntactic and semantic information. By constructing an information fusion module and an information sharing module, the model's attention to the aspect words of a sentence is increased and the relevance between the aspect words and the context words is strengthened, so that the syntactic information and semantic information of the sentence are attended to effectively, and the accuracy of the fine-grained emotion analysis task is further improved.
In a first aspect, an embodiment of the present application provides an emotion analysis method fusing syntax and semantic information, including the following steps:
obtaining a sentence to be detected and a preset emotion analysis model, wherein the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are connected in cascade, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same;
inputting the sentence to be detected into the sentence coding module for coding to obtain the sentence characteristic representation of the sentence to be detected; constructing a syntax adjacency matrix of the statement to be tested;
inputting the sentence characteristic representation of the sentence to be detected and the syntax adjacency matrix into the first syntax information extraction module to obtain a first syntax characteristic representation of the sentence to be detected; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence;
inputting the sentence characteristic representation of the sentence to be tested and the syntactic adjacency matrix into the second syntax information extraction module to obtain a second syntactic characteristic representation of the sentence to be tested; inputting the sentence characteristic representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic characteristic representation of the sentence to be tested, and performing fusion processing on the second syntactic characteristic representation and the second semantic characteristic representation to obtain a second fusion characteristic representation of the sentence to be tested;
and inputting the first fusion characteristic representation and the second fusion characteristic representation of the to-be-detected sentence into the emotion analysis module to obtain an emotion analysis result of the to-be-detected sentence.
In a second aspect, an embodiment of the present application provides an emotion analysis apparatus that fuses syntax and semantic information, including:
an obtaining module, used for obtaining a sentence to be tested and a preset emotion analysis model, wherein the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are connected in cascade, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same;
the sentence coding module is used for inputting the sentence to be detected into the sentence coding module for coding processing to obtain the sentence characteristic representation of the sentence to be detected; constructing a syntax adjacency matrix of the statement to be tested;
the first feature representation calculation module is used for inputting the sentence feature representation of the sentence to be detected and the syntactic adjacency matrix into the first syntactic information extraction module to obtain a first syntactic feature representation of the sentence to be detected; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence;
the second feature representation calculation module is used for inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the second syntax information extraction module to obtain a second syntactic feature representation of the sentence to be tested; inputting the sentence feature representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic feature representation of the sentence to be tested, and performing fusion processing on the second syntactic feature representation and the second semantic feature representation to obtain a second fusion feature representation of the sentence to be tested;
and the emotion analysis module is used for inputting the first fusion characteristic representation and the second fusion characteristic representation of the statement to be detected into the emotion analysis module to obtain an emotion analysis result of the statement to be detected.
In a third aspect, the present application provides a computer device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the emotion analysis method for merging syntactic and semantic information according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a storage medium storing a computer program, which when executed by a processor implements the steps of the emotion analysis method fusing syntactic and semantic information according to the first aspect.
In the embodiment of the application, an emotion analysis method, device, equipment and storage medium fusing syntactic and semantic information are provided. By constructing an information fusion module and an information sharing module, the model's attention to the aspect words of a sentence is increased and the relevance between the aspect words and the context words is strengthened, so that the syntactic information and semantic information of the sentence are attended to effectively, and the accuracy of the fine-grained emotion analysis task is further improved.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flowchart of an emotion analysis method fusing syntactic and semantic information according to an embodiment of the present application;
fig. 2 is a schematic diagram of S2 in a process of an emotion analysis method for merging syntax and semantic information according to an embodiment of the present application;
FIG. 3 is a schematic diagram of S3 in the process of the emotion analysis method for merging syntactic and semantic information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a step S3 in the emotion analysis method for merging syntactic and semantic information according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a step S3 in the emotion analysis method for merging syntactic and semantic information according to an embodiment of the present application;
FIG. 6 is a diagram of a step S4 of a sentiment analysis method for merging syntactic and semantic information according to another embodiment of the present application;
FIG. 7 is a diagram illustrating a process S5 of a sentiment analysis method for merging syntactic and semantic information according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an emotion analysis apparatus fusing syntactic and semantic information according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an emotion analyzing method for merging syntax and semantic information according to an embodiment of the present application, where the method includes the following steps:
s1: and obtaining the sentence to be tested and a preset emotion analysis model.
The emotion analysis method fusing syntactic and semantic information is executed by an analysis device (hereinafter referred to as the analysis device). In an optional embodiment, the analysis device may be a computer device, a server, or a server cluster formed by combining a plurality of computer devices.
In this embodiment, the analysis device obtains the sentence to be detected and the preset emotion analysis model, and specifically, the analysis device may obtain the sentence to be detected input by the user, and also may obtain the corresponding sentence to be detected from the preset database, where the sentence to be detected includes a plurality of words and an aspect word, the aspect word is composed of a plurality of words, and the sentence expression of the sentence to be detected is:
x = {w_1, w_2, ..., w_i, ..., w_{τ+1}, ..., w_{τ+m}, ..., w_n}

where x is the sentence expression of the sentence to be detected, i is the position index of a word, n is the number of words in the sentence to be detected, and w_i is the vector representation of the i-th word of the sentence to be detected; A = {w_{τ+1}, ..., w_{τ+m}} is the aspect word, τ is the starting position of the aspect word, and m is the number of words corresponding to the aspect word.
The emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module, wherein the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are cascaded in tandem, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same.
S2: inputting the sentence to be detected into the sentence coding module for coding processing to obtain the sentence characteristic representation of the sentence to be detected; and constructing a syntax adjacency matrix of the statement to be tested.
The sentence coding module adopts a BERT (Bidirectional Encoder Representations from Transformers) word embedding model, which converts the vectors of the words into corresponding word embedding vectors.
In this embodiment, the analysis device inputs the sentence to be tested into the sentence coding module for coding, so as to obtain the sentence characteristic representation of the sentence to be tested. Specifically, the analysis equipment inputs the sentence to be detected into a preset BERT word embedding model respectively, maps each word in the sentence to be detected into a low-dimensional vector space, and obtains word embedding vectors of a plurality of words of the sentence to be detected, which are output by the BERT word embedding model, through inquiring a pretrained BERT matrix.
In an optional embodiment, the analysis device adjusts the positions of the words in the sentence to be tested, and moves the aspect words to the end of the sentence to be tested, so that the sentence to be tested is constructed into a sentence-aspect word pair, so as to improve the explicit interaction between the sentence to be tested and the aspect words, and thus the obtained word embedding vectors are focused on the aspect words.
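The sentence-aspect pair construction above can be sketched as follows. The exact concatenation format, and whether the aspect words are also removed from their original position, is not fixed by this excerpt, so the helper below is a hypothetical illustration that simply appends the aspect words to the end of the sentence:

```python
def build_sentence_aspect_pair(words, aspect_start, aspect_len):
    """Move the aspect words to the end of the sentence, forming a
    sentence-aspect pair (hypothetical illustration of this step)."""
    aspect = words[aspect_start:aspect_start + aspect_len]
    # original sentence followed by its aspect words
    return words + aspect

words = ["the", "food", "was", "great"]
pair = build_sentence_aspect_pair(words, aspect_start=1, aspect_len=1)
# pair == ["the", "food", "was", "great", "food"]
```

Encoding the pair (rather than the bare sentence) lets the encoder attend explicitly to the aspect at the end of the input.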
And the analysis equipment constructs a syntactic adjacency matrix of the sentence to be tested so as to represent the dependency relationship among words in the sentence to be tested, wherein the syntactic adjacency matrix comprises a plurality of nodes, and the syntactic adjacency matrix comprises dependency relationship vectors among the words corresponding to the nodes.
Referring to fig. 2, fig. 2 is a schematic diagram of S2 in a process of an emotion analysis method for merging syntax and semantic information according to an embodiment of the present application, including steps S21 to S22, which are specifically as follows:
s21: and acquiring an initial dependency syntax tree which comprises a plurality of nodes, respectively arranging a plurality of words of the sentence to be tested on the nodes of the initial dependency syntax tree, and constructing a syntax adjacency matrix of the sentence to be tested.
In this embodiment, the analysis device obtains an initial dependency syntax tree, where the initial dependency syntax tree includes a plurality of nodes, and sets a plurality of words of the sentence to be tested on the nodes of the initial dependency syntax tree, respectively, to construct an initial adjacency matrix of the sentence to be tested.
S22: the method comprises the steps of obtaining dependency relationship information of a sentence to be detected, wherein the dependency relationship information is used for indicating the connection relationship between words in the sentence to be detected, converting an initial adjacent matrix of the sentence to be detected into an initial syntactic adjacent matrix according to the dependency relationship information of the sentence to be detected, and carrying out standardization processing on the initial syntactic adjacent matrix according to a preset first standardization algorithm to construct the syntactic adjacent matrix of the sentence to be detected.
In this embodiment, the analysis device obtains dependency relationship information of the sentence to be tested, where the dependency relationship information is used to indicate a connection relationship between words in the sentence to be tested.
The analysis equipment converts the initial adjacency matrix of the sentence to be tested into an initial syntactic adjacency matrix according to the dependency relationship information of the sentence to be tested, wherein the initial adjacency matrix A is an n×n matrix and A_ij is the dependency vector of the initial adjacency matrix: A_ij = 1 indicates that word i is connected to word j, whereas A_ij = 0 indicates that word i is not connected to word j.
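A minimal sketch of building the initial adjacency matrix from dependency edges (the helper name is hypothetical; dependency edges are treated as undirected, a common choice for graph convolution over parses):

```python
import numpy as np

def dependency_adjacency(n, edges):
    """Build the n x n initial adjacency matrix: A[i, j] = 1 when
    words i and j are linked in the dependency parse, else 0.
    Edges are symmetrized so the matrix is undirected."""
    A = np.zeros((n, n), dtype=float)
    for i, j in edges:
        A[i, j] = 1.0
        A[j, i] = 1.0
    return A

# "the food was great": hypothetical parse edges
A = dependency_adjacency(4, [(0, 1), (1, 3), (2, 3)])
```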
The analysis equipment normalizes the initial syntactic adjacency matrix according to a preset first normalization algorithm to construct the syntactic adjacency matrix of the sentence to be tested, where the first normalization algorithm is:

Ã_syn = D̃^(-1/2) · (A_syn + I_f) · D̃^(-1/2)

where Ã_syn is the syntactic adjacency matrix, A_syn is the initial syntactic adjacency matrix, D̃ is the degree matrix of the initial syntactic adjacency matrix A_syn, and I_f is an identity matrix.
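The first normalization algorithm, read as the standard symmetric normalization D^(-1/2)(A + I)D^(-1/2) with D taken as the degree matrix of A + I (an assumption consistent with common graph-convolution practice), can be sketched as:

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, where D is
    the degree matrix of A + I (self-loops added before normalizing)."""
    A_hat = A + np.eye(A.shape[0])      # add identity (self-loops)
    d = A_hat.sum(axis=1)               # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

A = np.array([[0.0, 1.0], [1.0, 0.0]])
A_norm = normalize_adjacency(A)
```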
S3: inputting the sentence characteristic representation of the sentence to be detected and the syntax adjacency matrix into the first syntax information extraction module to obtain a first syntax characteristic representation of the sentence to be detected; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence.
And the analysis equipment inputs the sentence characteristic representation and the syntactic adjacency matrix of the sentence to be tested into the first syntactic information extraction module to obtain the first syntactic characteristic representation of the sentence to be tested.
And the analysis equipment inputs the first syntactic characteristic representation to the multi-head self-attention module, and constructs a semantic adjacency matrix of the sentence to be tested so as to represent the attention weight relationship among all words in the sentence to be tested.
And the analysis equipment inputs the first syntactic feature representation and the semantic adjacency matrix into the first semantic information extraction module for semantic feature extraction, so as to obtain a first fusion feature representation of the statement to be detected.
The first syntax information extraction module is a multi-layer graph convolutional network structure, please refer to fig. 3, and fig. 3 is a schematic diagram of S3 in a flow of the emotion analysis method for merging syntax and semantic information provided in an embodiment of the present application, and includes step S31, which is specifically as follows:
s31: and obtaining a plurality of layers of sentence hidden representations of the first syntax information extraction module according to the standardized syntax adjacency matrix and a preset first hidden feature calculation algorithm, extracting a plurality of target layers of sentence hidden representations from the plurality of layers of sentence hidden representations, and splicing the plurality of target layers of sentence hidden representations to obtain the spliced sentence hidden representations which are used as the first syntax feature representation of the sentence to be detected.
In this embodiment, the analysis device obtains several layers of sentence hidden representations of the first syntax information extraction module according to the normalized syntax adjacency matrix and a preset first hidden feature calculation algorithm, where the first hidden feature calculation algorithm is:

H_syn^(l+1) = ReLU( Ã_syn · H_syn^(l) · W^(l+1) + b^(l+1) )

where H_syn^(l+1) is the sentence hidden representation of layer l+1 of the first syntax information extraction module, H_syn^(0) = H_c, H_c is the sentence feature representation of the sentence to be tested, W^(l+1) is the weight parameter of layer l+1 of the first syntax information extraction module, and b^(l+1) is the bias parameter of layer l+1 of the first syntax information extraction module.
The analysis equipment extracts the sentence hidden representations of a plurality of target layers from the sentence hidden representations of the plurality of layers, and splices them to obtain the spliced sentence hidden representation as the first syntactic feature representation of the sentence to be tested.

In an optional embodiment, because more noise information is merged in as the number of layers grows when the first syntax information extraction module extracts information, in order to merge accurate syntactic information while reducing the negative effect of noise as much as possible, the analysis device extracts the sentence hidden representations H_syn^(1) and H_syn^(2) of the first and second layers of the first syntax information extraction module and splices them to obtain the spliced sentence hidden representation H_sem, which serves as the first syntactic feature representation of the sentence to be tested.
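A minimal sketch of the graph-convolution layers and the splicing of the first two layers' hidden representations. The weights are random stand-ins and the identity matrix stands in for the normalized syntactic adjacency; only the shapes and the layer/splice structure are meant to be illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A_norm, H, W, b):
    """One graph-convolution layer: ReLU(A_norm @ H @ W + b)."""
    return np.maximum(A_norm @ H @ W + b, 0.0)

n, d = 4, 8
A_norm = np.eye(n)                      # stand-in for the normalized syntactic adjacency
H_c = rng.standard_normal((n, d))       # sentence feature representation
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
b1, b2 = np.zeros(d), np.zeros(d)

H1 = gcn_layer(A_norm, H_c, W1, b1)     # layer-1 sentence hidden representation
H2 = gcn_layer(A_norm, H1, W2, b2)      # layer-2 sentence hidden representation
H_syn = np.concatenate([H1, H2], axis=-1)  # spliced first syntactic feature representation
```

Splicing only the first two layers follows the optional embodiment above: deeper layers would mix in more noise.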
Referring to fig. 4, fig. 4 is a schematic diagram of S3 in a process of an emotion analysis method for merging syntax and semantic information according to an embodiment of the present application, including steps S32 to S35, which are specifically as follows:
s32: and constructing a plurality of initial semantic adjacency matrixes according to the first syntactic feature representation of the statement to be tested and a preset multi-head self-attention calculation algorithm.
The multi-head self-attention calculation algorithm is:

A_sem,j = softmax( (H_sem · W_sem,q)(H_sem · W_sem,k)^T / √d_head )

where A_sem,j is the j-th initial semantic adjacency matrix, H_sem is the first syntactic feature representation, W_sem,k is a first weight parameter of the multi-head self-attention module, W_sem,q is a second weight parameter of the multi-head self-attention module, and d_head is the dimension parameter of the multi-head self-attention;
in this embodiment, the analysis device constructs a plurality of initial semantic adjacency matrixes according to the first syntactic feature representation of the sentence to be tested and a preset multi-head self-attention calculation algorithm.
S33: and obtaining the probability vectors of the plurality of initial semantic adjacency matrixes according to the plurality of initial semantic adjacency matrixes and a preset matrix probability calculation algorithm, and extracting the initial semantic adjacency matrix with the maximum probability vector from the plurality of initial semantic adjacency matrixes.
The matrix probability calculation algorithm is:

A_sem = argmax[ softmax(A_sem,1, ..., A_sem,K) ]

where A_sem is the initial semantic adjacency matrix with the largest probability vector, K is the number of initial semantic adjacency matrices, softmax() is a normalization function, and argmax() is the maximum-selection function;
in this embodiment, the analysis device obtains probability vectors of the plurality of initial semantic adjacency matrices according to the plurality of initial semantic adjacency matrices and a preset matrix probability calculation algorithm, and extracts an initial semantic adjacency matrix with a maximum probability vector from the plurality of initial semantic adjacency matrices.
S34: and initializing the initial semantic adjacency matrix with the maximum probability vector according to a preset quick selection algorithm, and constructing a target semantic adjacency matrix of the statement to be tested.
The quick selection algorithm is:

A'_sem = top-k(A_sem)

where A'_sem is the target semantic adjacency matrix and top-k() is a quick selection function;
in this embodiment, the analysis device performs initialization processing on the initial semantic adjacency matrix with the largest probability vector according to a preset quick selection algorithm, and constructs a target semantic adjacency matrix of the sentence to be tested.
S35: and carrying out standardization processing on the target semantic adjacency matrix according to a preset second standardization algorithm to construct the semantic adjacency matrix of the statement to be detected.
The second normalization algorithm is:

Ã_sem = D̃'^(-1/2) · (A'_sem + I_f) · D̃'^(-1/2)

where Ã_sem is the normalized semantic adjacency matrix, A'_sem is the target semantic adjacency matrix, D̃' is the degree matrix of the target semantic adjacency matrix A'_sem, and I_f is an identity matrix.
In this embodiment, the analysis device performs normalization processing on the target semantic adjacency matrix according to a preset second normalization algorithm to construct the semantic adjacency matrix of the sentence to be tested.
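Steps S32 to S35 can be sketched end to end as below. The head-selection criterion and the exact top-k semantics are assumptions (the patent's softmax/argmax selection and top-k function are only stated loosely), and all weights are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_adjacency(H, W_q, W_k, k):
    n = H.shape[0]
    d_head = W_k.shape[-1]
    # S32: one attention-score matrix per head
    heads = [softmax((H @ Wq) @ (H @ Wk).T / np.sqrt(d_head))
             for Wq, Wk in zip(W_q, W_k)]
    # S33: pick one head -- here the head containing the single largest
    # score (an assumed reading of the argmax/softmax selection)
    A = max(heads, key=lambda a: a.max())
    # S34: top-k per row -- keep each row's k largest entries, zero the rest
    thresh = np.sort(A, axis=1)[:, -k][:, None]
    A = np.where(A >= thresh, A, 0.0)
    # S35: symmetric normalization D^{-1/2}(A + I)D^{-1/2}
    A_hat = A + np.eye(n)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

n, d, K, d_head = 4, 8, 3, 8
H_sem = rng.standard_normal((n, d))          # first syntactic feature representation
W_q = rng.standard_normal((K, d, d_head))
W_k = rng.standard_normal((K, d, d_head))
A_sem = semantic_adjacency(H_sem, W_q, W_k, k=2)
```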
Referring to fig. 5, fig. 5 is a schematic diagram of S3 in a flow of an emotion analysis method for merging syntax and semantic information according to an embodiment of the present application, which further includes step S36, specifically as follows:
s36: and taking the first syntactic feature representation of the sentence to be detected as the first-layer input data of the convolution module, and obtaining the sentence hiding representation of the last layer of the convolution module according to the semantic adjacency matrix and a preset second hidden feature calculation algorithm, wherein the sentence hiding representation of the last layer of the convolution module is taken as the first fusion feature representation of the sentence to be detected.
The second hidden feature calculation algorithm is:

H^(l+1) = ReLU( Ã_sem · H^(l) · W^(l) + b^(l) )

where H^(l) is the sentence hidden representation of the l-th layer of the convolution module, W^(l) is the weight parameter of the l-th layer of the convolution module, and b^(l) is the bias parameter of the l-th layer of the convolution module.
In this embodiment, the analysis device takes the first syntactic feature representation of the sentence to be tested as the first-layer input data of the convolution module, and obtains the last-layer sentence hidden representation of the convolution module according to the semantic adjacency matrix and the preset second hidden feature calculation algorithm, as the first fusion feature representation of the sentence to be tested. The first fusion feature representation of the sentence to be tested comprises first hidden vectors of a plurality of words and first hidden vectors of a plurality of aspect words.
S4: inputting the sentence characteristic representation of the sentence to be tested and the syntactic adjacency matrix into the second syntax information extraction module to obtain a second syntactic characteristic representation of the sentence to be tested; inputting the sentence characteristic representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic characteristic representation of the sentence to be tested, and performing fusion processing on the second syntactic characteristic representation and the second semantic characteristic representation to obtain a second fusion characteristic representation of the sentence to be tested.
In this embodiment, the analysis device inputs the sentence characteristic representation of the sentence to be tested and the syntactic adjacency matrix into the second syntactic information extraction module, and obtains a second syntactic characteristic representation of the sentence to be tested; and inputting the sentence characteristic representation of the sentence to be detected and the semantic adjacency matrix into the second syntax information extraction module to obtain a second syntax characteristic representation of the sentence to be detected, and performing fusion processing on the second syntax characteristic representation and the second semantic characteristic representation to obtain a second fusion characteristic representation of the sentence to be detected.
The second syntax information extraction module and the second semantic information extraction module are both multilayer graph convolutional network structures, please refer to fig. 6, where fig. 6 is a schematic diagram of S4 in a flow of the emotion analysis method for merging syntax and semantic information provided in an embodiment of the present application, and includes steps S41 to S43, which are specifically as follows:
S41: and taking the sentence characteristic representation of the sentence to be detected as the first-layer input data of the second syntax information extraction module, and obtaining the last-layer sentence hidden representation of the second syntax information extraction module according to the syntax adjacency matrix and a preset third hidden feature calculation algorithm, wherein the last-layer sentence hidden representation is taken as the second syntactic feature representation of the sentence to be detected.
The third hidden feature calculation algorithm is as follows:

H_syn^(l+1) = ReLU(Ã_syn · H_syn^(l) · W^(l+1) + b^(l))

in the formula, H_syn^(l+1) is the sentence hidden representation of layer l+1 of the second syntax information extraction module, H_syn^(0) = H_c is the sentence feature representation of the sentence to be tested used as the first-layer input, ReLU() is a nonlinear function, Ã_syn is the syntactic adjacency matrix, W^(l+1) is the weight parameter of layer l+1 of the second syntax information extraction module, and b^(l) is the bias parameter of layer l of the second syntax information extraction module.
In this embodiment, the analysis device uses the sentence characteristic representation of the sentence to be tested as the first-layer input data of the second syntax information extraction module, and obtains the sentence hidden representation of the last layer of the second syntax information extraction module according to the syntax adjacency matrix and a preset third hidden feature calculation algorithm, which is used as the second syntax characteristic representation of the sentence to be tested.
S42: and taking the sentence characteristic representation of the sentence to be detected as the first-layer input data of the second semantic information extraction module, and obtaining the last-layer sentence hidden representation of the second semantic information extraction module as the second semantic characteristic representation of the sentence to be detected according to the semantic adjacency matrix and a preset fourth hidden characteristic calculation algorithm.
The fourth hidden feature calculation algorithm is as follows:

H'_sem^(l+1) = ReLU(Ã_sem · H'_sem^(l) · W^(l+1) + b^(l))

in the formula, H'_sem^(l+1) is the sentence hidden representation of layer l+1 of the second semantic information extraction module, H'_sem^(0) = H_c is the sentence feature representation of the sentence to be tested used as the first-layer input, Ã_sem is the semantic adjacency matrix, W^(l+1) is the weight parameter of layer l+1 of the second semantic information extraction module, and b^(l) is the bias parameter of layer l of the second semantic information extraction module.
In this embodiment, the analysis device uses the sentence characteristic representation of the to-be-detected sentence as the first-layer input data of the second semantic information extraction module, and obtains the last-layer sentence hidden representation of the second semantic information extraction module as the second semantic characteristic representation of the to-be-detected sentence according to the semantic adjacency matrix and a preset fourth hidden characteristic calculation algorithm.
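The key property of the information sharing module, per the source, is that the second syntactic and second semantic branches run in parallel over different adjacency matrices while sharing the same weight parameters. A minimal NumPy sketch of that sharing (the adjacency matrices, sizes, and initializations are placeholder assumptions, not the patented implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn(h, adj, Ws, bs):
    """Multi-layer graph convolution: H^(l+1) = ReLU(A · H^(l) · W^(l+1) + b^(l))."""
    for W, b in zip(Ws, bs):
        h = relu(adj @ h @ W + b)
    return h

rng = np.random.default_rng(1)
n, d = 5, 6
hc = rng.standard_normal((n, d))     # sentence feature representation H_c
adj_syn = np.eye(n)                  # stand-in for the syntactic adjacency matrix
adj_sem = np.full((n, n), 1.0 / n)   # stand-in for the semantic adjacency matrix
# A single weight/bias set used by BOTH branches, as in the information sharing module.
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]
bs = [np.zeros(d) for _ in range(2)]
h_syn2 = gcn(hc, adj_syn, Ws, bs)    # second syntactic feature representation
h_sem2 = gcn(hc, adj_sem, Ws, bs)    # second semantic feature representation
print(h_syn2.shape, h_sem2.shape)
```

Sharing one parameter set lets the two branches exchange information implicitly: the same transformation must work for both graph views, so neither branch can overfit to purely syntactic or purely semantic structure.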
S43: and obtaining a second fusion characteristic representation of the statement to be tested according to the second syntactic characteristic representation, the second semantic characteristic representation and a preset fusion characteristic calculation algorithm of the statement to be tested.
The fusion feature calculation algorithm combines the second syntactic feature representation and the second semantic feature representation as follows:

H_com = Fuse(H_syn, H'_sem)

in the formula, H_com is the second fusion feature representation of the sentence to be tested, H_syn is the second syntactic feature representation, H'_sem is the second semantic feature representation, and Fuse() denotes the fusion processing.
In this embodiment, the analysis device obtains a second fused feature representation of the sentence to be tested according to the second syntactic feature representation, the second semantic feature representation, and a preset fused feature calculation algorithm of the sentence to be tested, where the second fused feature representation of the sentence to be tested includes second hidden vectors of a plurality of words and second hidden vectors of aspect words.
S5: and inputting the first fusion characteristic representation and the second fusion characteristic representation of the statement to be detected to the emotion analysis module to obtain an emotion analysis result of the statement to be detected.
In this embodiment, the analysis device inputs the first fusion feature representation and the second fusion feature representation of the sentence to be tested to the emotion analysis module, and obtains an emotion analysis result of the sentence to be tested.
Referring to fig. 7, fig. 7 is a schematic diagram of S5 in a process of an emotion analysis method for merging syntax and semantic information according to an embodiment of the present application, including steps S51 to S52, which are specifically as follows:
s51: and performing mask processing on a first hidden vector of a word of a non-aspect word in the first fusion feature representation of the to-be-detected sentence and a second hidden vector of a word of a non-aspect word in the second fusion feature representation to obtain a first fusion feature representation and a second fusion feature representation of the to-be-detected sentence after mask processing, and performing splicing processing on the first fusion feature representation and the second fusion feature representation of the to-be-detected sentence after mask processing to obtain a spliced fusion feature representation.
In this embodiment, the analysis device performs mask processing on the first hidden vector of the word of the non-aspect word in the first fused feature representation of the sentence to be detected and the second hidden vector of the word of the non-aspect word in the second fused feature representation to obtain the first fused feature representation and the second fused feature representation of the sentence to be detected after the mask processing, which are specifically as follows:
h_sem = f(mask(H_sem))

h_com = f(mask(H_com))

where f() is the average pooling function, mask() is the mask processing function that zeroes the hidden vectors of the non-aspect words, h_sem is the first fusion feature representation of the sentence to be tested after mask processing, and h_com is the second fusion feature representation of the sentence to be tested after mask processing.
The analysis device performs splicing processing on the first fusion characteristic representation and the second fusion characteristic representation of the statement to be tested after the mask processing to obtain a fusion characteristic representation after the splicing processing, wherein the fusion characteristic representation after the splicing processing is as follows:
h a =[h sem ;h com ]
in the formula, h_a is the fusion feature representation after the splicing processing, and [ ; ] denotes the splicing (concatenation) operation.
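The mask processing, average pooling, and splicing steps above can be sketched as follows (the toy feature matrices and aspect-word positions are assumptions for illustration only):

```python
import numpy as np

def aspect_pool(h, aspect_idx):
    """Zero the hidden vectors of non-aspect words, then average-pool over the aspect words."""
    mask = np.zeros((h.shape[0], 1))
    mask[aspect_idx] = 1.0                           # mask(): keep only aspect-word rows
    return (h * mask).sum(axis=0) / len(aspect_idx)  # f(): average pooling

h1 = np.arange(12, dtype=float).reshape(4, 3)        # first fusion feature representation
h2 = np.arange(12, dtype=float)[::-1].reshape(4, 3)  # second fusion feature representation
aspect = [1, 2]                                      # assumed positions of the aspect words
h_sem = aspect_pool(h1, aspect)
h_com = aspect_pool(h2, aspect)
h_a = np.concatenate([h_sem, h_com])                 # splicing: h_a = [h_sem ; h_com]
print(h_a)
```

The spliced vector h_a has twice the hidden size, carrying the aspect-focused view of both fusion feature representations into the classifier.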
S52: and acquiring an emotion classification polarity probability distribution vector of the to-be-detected sentence according to the fusion feature representation after the splicing processing and a preset emotion analysis algorithm, and acquiring an emotion polarity corresponding to the dimension with the maximum probability according to the emotion classification polarity probability distribution vector as an emotion analysis result of the to-be-detected sentence.
The emotion analysis algorithm is as follows:

ŷ = softmax(W_1 · h_a + b_1)

in the formula, ŷ is the emotion classification polarity probability distribution vector, h_a is the fusion feature representation after the splicing processing, softmax() is a normalization function, W_1 is the weight parameter of the emotion analysis module, and b_1 is the bias parameter of the emotion analysis module.
In this embodiment, the analysis device obtains the emotion classification polarity probability distribution vector of the sentence to be tested from the fusion feature representation after the splicing processing, using the emotion analysis algorithm constructed from a softmax function and a single-layer perceptron, and takes the emotion polarity corresponding to the dimension with the maximum probability as the emotion analysis result of the sentence to be tested. Specifically, if the dimension of ŷ with the maximum probability corresponds to the negative polarity, the emotion analysis result of the sentence to be tested is negative.
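The final classification step, a single-layer perceptron followed by softmax and an argmax over the polarity dimensions, can be sketched as below (the weights, bias, input vector, and three-way polarity label set are assumptions for illustration):

```python
import numpy as np

def classify(h_a, W, b, labels):
    """Compute the polarity probability distribution and return the max-probability polarity."""
    logits = h_a @ W + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax: polarity probability distribution vector
    return labels[int(np.argmax(p))], p

labels = ["positive", "neutral", "negative"]   # assumed polarity set
h_a = np.array([1.0, -0.5])                    # spliced fusion feature representation
W = np.array([[0.2, 0.1, -0.3],
              [0.0, 0.4, 0.5]])
b = np.zeros(3)
polarity, p = classify(h_a, W, b, labels)
print(polarity)
```

Subtracting the maximum logit before exponentiation is the usual numerically stable softmax; it does not change the resulting distribution.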
Referring to fig. 8, fig. 8 is a schematic structural diagram of an emotion analyzing apparatus for merging syntax and semantic information according to an embodiment of the present application, where the apparatus may implement all or a part of the emotion analyzing apparatus for merging syntax and semantic information through software, hardware, or a combination of the two, and the apparatus 8 includes:
the obtaining module 81 is used for obtaining a sentence to be tested and a preset emotion analysis model, wherein the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are cascaded in tandem, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same;
a sentence coding module 82, configured to input the sentence to be detected into the sentence coding module for coding, so as to obtain a sentence characteristic representation of the sentence to be detected; constructing a syntax adjacency matrix of the statement to be tested;
the first feature representation calculation module is used for inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the first syntactic information extraction module to obtain a first syntactic feature representation of the sentence to be tested; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence;
a second feature representation calculation module 83, configured to input the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the second syntactic information extraction module to obtain a second syntactic feature representation of the sentence to be tested; input the sentence feature representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic feature representation of the sentence to be tested; and perform fusion processing on the second syntactic feature representation and the second semantic feature representation to obtain a second fusion feature representation of the sentence to be tested;
and the emotion analysis module 84 is configured to input the first fusion feature representation and the second fusion feature representation of the to-be-detected sentence to the emotion analysis module, so as to obtain an emotion analysis result of the to-be-detected sentence.
In this embodiment, a sentence to be tested and a preset emotion analysis model are obtained through the obtaining module, wherein the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are cascaded in tandem, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same; the sentence to be tested is input into the sentence coding module for coding through the sentence coding module to obtain the sentence feature representation of the sentence to be tested, and the syntactic adjacency matrix of the sentence to be tested is constructed; the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix are input into the first syntax information extraction module through the first feature representation calculation module to obtain the first syntactic feature representation of the sentence to be tested; the first syntactic feature representation is input into the multi-head self-attention module to construct the semantic adjacency matrix of the sentence to be tested, and the first syntactic feature representation and the semantic adjacency matrix are input into the first semantic information extraction module for semantic feature extraction to obtain the first fusion feature representation of the sentence to be tested; the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix are input into the second syntax information extraction module through the second feature representation calculation module to obtain the second syntactic feature representation of the sentence to be tested; the sentence feature representation of the sentence to be tested and the semantic adjacency matrix are input into the second semantic information extraction module to obtain the second semantic feature representation of the sentence to be tested, and fusion processing is performed on the second syntactic feature representation and the second semantic feature representation to obtain the second fusion feature representation of the sentence to be tested; and the first fusion feature representation and the second fusion feature representation of the sentence to be tested are input into the emotion analysis module to obtain the emotion analysis result of the sentence to be tested.
By constructing the information fusion module and the information sharing module, the attention degree of the model to the aspect words of the sentence is improved, and the relevance of the aspect words to the context words is enhanced, so that the syntactic information and the semantic information of the sentence are effectively concerned, and the accuracy of a fine-grained emotion analysis task is further improved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application, where the computer device 10 includes: a processor 91, a memory 92, and a computer program 93 stored on the memory 92 and executable on the processor 91; the computer device may store a plurality of instructions, where the instructions are suitable for being loaded by the processor 91 and executing the method steps in the embodiments shown in fig. 1 to 7, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1 to 7, which are not described herein again.
Processor 91 may include one or more processing cores. The processor 91 is connected to various parts of the server by various interfaces and lines, and performs the various functions of the emotion analysis apparatus 8 fusing syntactic and semantic information and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 92 and calling the data in the memory 92. Optionally, the processor 91 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 91 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, the application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the touch display screen; and the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 91 but be implemented by a single chip.
The memory 92 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 92 includes a non-transitory computer-readable medium. The memory 92 may be used to store instructions, programs, code, code sets or instruction sets. The memory 92 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions), instructions for implementing the above method embodiments, and the like; and the data storage area may store the data involved in the above method embodiments. Optionally, the memory 92 may also be at least one storage device located remotely from the processor 91.
The embodiment of the present application further provides a storage medium, where the storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and being executed in the method steps of the first to fourth embodiments, and a specific execution process may refer to specific descriptions of the first to fourth embodiments, which are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (10)

1. A sentiment analysis method fusing syntax and semantic information is characterized by comprising the following steps:
obtaining a sentence to be detected and a preset emotion analysis model, wherein the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are cascaded in tandem, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same;
inputting the sentence to be detected into the sentence coding module for coding to obtain the sentence characteristic representation of the sentence to be detected; constructing a syntax adjacency matrix of the statement to be tested;
inputting the sentence characteristic representation of the sentence to be tested and the syntactic adjacency matrix into the first syntactic information extraction module to obtain a first syntactic characteristic representation of the sentence to be tested; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence;
inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the second syntactic information extraction module to obtain a second syntactic feature representation of the sentence to be tested; inputting the sentence feature representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic feature representation of the sentence to be tested, and performing fusion processing on the second syntactic feature representation and the second semantic feature representation to obtain a second fusion feature representation of the sentence to be tested;
and inputting the first fusion characteristic representation and the second fusion characteristic representation of the statement to be detected to the emotion analysis module to obtain an emotion analysis result of the statement to be detected.
2. The emotion analysis method fusing syntactic and semantic information according to claim 1, wherein: the sentence to be detected comprises a plurality of words;
the construction of the syntactic adjacency matrix of the statement to be tested comprises the following steps:
acquiring an initial dependency syntax tree, wherein the initial dependency syntax tree comprises a plurality of nodes, and a plurality of words of the sentence to be tested are respectively arranged on the nodes of the initial dependency syntax tree to construct an initial adjacency matrix of the sentence to be tested;
obtaining dependency relationship information of the sentence to be tested, wherein the dependency relationship information is used for indicating a connection relationship between words in the sentence to be tested, converting an initial adjacent matrix of the sentence to be tested into an initial syntactic adjacent matrix according to the dependency relationship information of the sentence to be tested, and normalizing the initial syntactic adjacent matrix according to a preset first normalization algorithm to construct the syntactic adjacent matrix of the sentence to be tested, wherein the first normalization algorithm is as follows:
Ã_syn = D^(-1/2) (A_syn + I_f) D^(-1/2)

in the formula, Ã_syn is the syntactic adjacency matrix, A_syn is the initial syntactic adjacency matrix, D is the degree matrix of the initial syntactic adjacency matrix A_syn, and I_f is an identity matrix.
3. The emotion analysis method fusing syntactic and semantic information according to claim 2, wherein: the first syntax information extraction module is of a multilayer graph convolution network structure;
the method for obtaining the first syntactic characteristic representation of the sentence to be detected by inputting the sentence characteristic representation and the syntactic adjacency matrix of the sentence to be detected into the first syntactic information extraction module comprises the following steps:
obtaining a plurality of layers of sentence hidden representations of the first syntax information extraction module according to the standardized syntax adjacency matrix and a preset first hidden feature calculation algorithm, extracting a plurality of target layers of sentence hidden representations from the plurality of layers of sentence hidden representations, and performing splicing processing on the plurality of target layers of sentence hidden representations to obtain spliced sentence hidden representations as the first syntax feature representation of the sentence to be detected, wherein the first hidden feature calculation algorithm is as follows:
H^(l+1) = ReLU(Ã_syn · H^(l) · W^(l+1) + b^(l))

in the formula, H^(l+1) is the sentence hidden representation of layer l+1 of the first syntax information extraction module, H^(0) = H_c is the sentence feature representation of the sentence to be tested, Ã_syn is the normalized syntactic adjacency matrix, W^(l+1) is the weight parameter of layer l+1 of the first syntax information extraction module, and b^(l) is the bias parameter of layer l of the first syntax information extraction module.
4. The emotion analysis method fusing syntactic and semantic information according to claim 1, wherein: the step of inputting the first syntactic characteristic representation to the multi-head self-attention module to construct a semantic adjacency matrix of the sentence to be tested comprises the following steps:
constructing a plurality of initial semantic adjacency matrixes according to the first syntactic feature representation of the sentence to be tested and a preset multi-head self-attention computing algorithm, wherein the multi-head self-attention computing algorithm is as follows:
A_sem,j = (H_sem · W_sem,q)(H_sem · W_sem,k)^T / √(d_head)

in the formula, A_sem,j is the jth initial semantic adjacency matrix, H_sem is the first syntactic feature representation, W_sem,k is the first weight parameter of the multi-head self-attention module, W_sem,q is the second weight parameter of the multi-head self-attention module, and d_head is the dimension parameter of the multi-head self-attention;
obtaining probability vectors of the initial semantic adjacency matrixes according to the initial semantic adjacency matrixes and a preset matrix probability calculation algorithm, and extracting an initial semantic adjacency matrix with the maximum probability vector from the initial semantic adjacency matrixes, wherein the matrix probability calculation algorithm is as follows:
$$A_{sem} = \arg\max\left[\mathrm{softmax}\left(A_{sem,1}, \ldots, A_{sem,K}\right)\right]$$

where $A_{sem}$ is the initial semantic adjacency matrix with the maximum probability vector, K is the number of initial semantic adjacency matrices, softmax() is a normalization function, and argmax() selects the element with the maximum value;
initializing the initial semantic adjacency matrix with the maximum probability vector according to a preset quick selection algorithm, and constructing a target semantic adjacency matrix of the statement to be detected, wherein the quick selection algorithm is as follows:
$$A'_{sem} = \mathrm{top}\text{-}k\left(A_{sem}\right)$$

where $A'_{sem}$ is the target semantic adjacency matrix and top-k() is a quick selection function;
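One common reading of the top-k selection is per-row pruning of the attention-derived adjacency matrix: each token keeps only its k strongest semantic neighbours. A small sketch under that assumption (the toy matrix and k value are illustrative):

```python
import numpy as np

def top_k_rows(a, k):
    """Keep the k largest entries in each row of a, zeroing the rest."""
    out = np.zeros_like(a)
    idx = np.argsort(a, axis=1)[:, -k:]      # column indices of the k largest per row
    rows = np.arange(a.shape[0])[:, None]
    out[rows, idx] = a[rows, idx]
    return out

# toy attention-derived semantic adjacency
a_sem = np.array([[0.1, 0.5, 0.4],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])
a_pruned = top_k_rows(a_sem, k=2)  # each token keeps its 2 strongest neighbours
```

Pruning weak edges sparsifies the semantic graph before it is fed to the graph convolution, which limits noise from low-attention token pairs.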
according to a preset second standardization algorithm, carrying out standardization processing on the target semantic adjacency matrix to construct the semantic adjacency matrix of the statement to be tested, wherein the second standardization algorithm is as follows:
$$\tilde{A}_{sem} = \hat{D}^{-\frac{1}{2}}\left(A'_{sem} + I_f\right)\hat{D}^{-\frac{1}{2}}$$

where $\tilde{A}_{sem}$ is the standardized semantic adjacency matrix, $A'_{sem}$ is the target semantic adjacency matrix, $\hat{D}$ is the degree matrix of the target semantic adjacency matrix $A'_{sem}$, and $I_f$ is an identity matrix.
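The symmetric normalization above is the standard GCN preprocessing step; a minimal sketch on a toy graph (the 3-node path graph is an illustrative assumption):

```python
import numpy as np

def normalize_adj(a):
    # \tilde{A} = D^{-1/2} (A + I) D^{-1/2}, with D the degree matrix of A + I
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

# 3-node path graph: 0 - 1 - 2
a = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
a_norm = normalize_adj(a)
```

Adding the identity (self-loops) keeps each node's own features in every propagation step, and the degree scaling prevents high-degree nodes from dominating the aggregation.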
5. The emotion analysis method fusing syntax and semantic information according to claim 4, wherein the step of inputting the first syntax feature representation and the semantic adjacency matrix into the first semantic information extraction module for semantic feature extraction to obtain the first fused feature representation of the sentence to be tested comprises the steps of:
taking the first syntactic feature representation of the sentence to be detected as the first-layer input data of the convolution module, obtaining the sentence hiding representation of the last layer of the convolution module according to the semantic adjacency matrix and a preset second hidden feature calculation algorithm, and taking the sentence hiding representation of the last layer of the convolution module as the first fusion feature representation of the sentence to be detected, wherein the second hidden feature calculation algorithm is as follows:
$$H^{l} = \mathrm{ReLU}\left(\tilde{A}_{sem}\, H^{l-1}\, W^{l} + b^{l}\right), \qquad H^{0} = H_{sem}$$

where $H^{l}$ is the sentence hidden representation of the l-th layer of the convolution module, $\tilde{A}_{sem}$ is the standardized semantic adjacency matrix, $H_{sem}$ is the first syntactic feature representation serving as the first-layer input, $W^{l}$ is the weight parameter of the l-th layer of the convolution module, and $b^{l}$ is the bias parameter of the l-th layer of the convolution module.
6. The emotion analysis method fusing syntactic and semantic information according to claim 3, wherein: the second syntax information extraction module and the second semantic information extraction module are both of a multilayer graph convolution network structure;
inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the second syntax information extraction module to obtain a second syntactic feature representation of the sentence to be tested; inputting the sentence feature representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic feature representation of the sentence to be tested, and performing fusion processing on the second syntactic feature representation and the second semantic feature representation to obtain a second fused feature representation of the sentence to be tested, comprising the steps of:
taking the sentence characteristic representation of the sentence to be detected as the first-layer input data of the second syntax information extraction module, and obtaining the sentence hidden representation of the last layer of the second syntax information extraction module as the second syntax characteristic representation of the sentence to be detected according to the syntax adjacency matrix and a preset third hidden characteristic calculation algorithm, wherein the third hidden characteristic calculation algorithm is as follows:
$$H_{syn}^{l+1} = \mathrm{RELU}\left(\tilde{A}_{syn}\, H_{syn}^{l}\, W^{l+1} + b^{l+1}\right), \qquad H_{syn}^{0} = H_c$$

where $H_{syn}^{l+1}$ is the sentence hidden representation of layer l+1 of the second syntax information extraction module, $H_c$ is the sentence feature representation of the sentence to be tested, RELU() is a non-linear function, $\tilde{A}_{syn}$ is the syntactic adjacency matrix, $W^{l+1}$ is the weight parameter of layer l+1 of the second syntax information extraction module, and $b^{l+1}$ is the bias parameter of layer l+1 of the second syntax information extraction module;
taking the sentence characteristic representation of the sentence to be detected as the first-layer input data of the second semantic information extraction module, obtaining the last-layer sentence hidden representation of the second semantic information extraction module according to the semantic adjacency matrix and a preset fourth hidden characteristic calculation algorithm, and taking the last-layer sentence hidden representation as the second semantic characteristic representation of the sentence to be detected, wherein the fourth hidden characteristic calculation algorithm is as follows:
$$H_{sem}^{l+1} = \mathrm{RELU}\left(\tilde{A}_{sem}\, H_{sem}^{l}\, W^{l+1} + b^{l+1}\right), \qquad H_{sem}^{0} = H_c$$

where $H_{sem}^{l+1}$ is the sentence hidden representation of layer l+1 of the second semantic information extraction module, $\tilde{A}_{sem}$ is the semantic adjacency matrix, $W^{l+1}$ is the weight parameter of layer l+1 of the second semantic information extraction module, and $b^{l+1}$ is the bias parameter of layer l+1 of the second semantic information extraction module;
obtaining a second fusion feature representation of the sentence to be tested according to the second syntactic feature representation, the second semantic feature representation and a preset fusion feature calculation algorithm of the sentence to be tested, wherein the fusion feature calculation algorithm is as follows:
[Equation image in the original publication: the fusion feature calculation producing the second fused feature representation from the second syntactic feature representation and the second semantic feature representation.]
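A sketch of the information-sharing module described in this claim: two parallel graph convolutions, one over the syntactic adjacency and one over the semantic adjacency, driven by the same weight parameters, followed by a fusion step. The elementwise average used for fusion here is an assumption (the patent's exact fusion formula is not reproduced in the text), as are all sizes and random inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(0.0, x)

def row_norm(a):
    return a / np.maximum(a.sum(axis=1, keepdims=True), 1.0)

n, d, n_layers = 5, 8, 2
h_c = rng.normal(size=(n, d))                          # sentence feature representation
a_syn = row_norm(np.eye(n) + (rng.random((n, n)) < 0.4))
a_sem = row_norm(np.eye(n) + (rng.random((n, n)) < 0.4))

# one set of weights drives BOTH branches (the shared-parameter constraint)
ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_layers)]
bs = [np.zeros(d) for _ in range(n_layers)]

h_syn, h_sem = h_c, h_c
for w, b in zip(ws, bs):
    h_syn = relu(a_syn @ h_syn @ w + b)  # syntactic branch
    h_sem = relu(a_sem @ h_sem @ w + b)  # semantic branch

h_fused = 0.5 * (h_syn + h_sem)          # assumed average fusion
```

Sharing weights across the two branches forces a single set of parameters to be useful for both graph views, which is one way to encourage information exchange between syntax and semantics.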
7. The emotion analysis method fusing syntactic and semantic information according to claim 3, wherein: the sentence to be detected also comprises an aspect word consisting of a plurality of words;
the first fusion characteristic of the sentence to be detected represents a first hidden vector comprising a plurality of words and a first hidden vector comprising an aspect word, and the second fusion characteristic of the sentence to be detected represents a second hidden vector comprising a plurality of words and a second hidden vector comprising an aspect word;
the step of inputting the first fusion characteristic representation and the second fusion characteristic representation of the to-be-detected sentence into the emotion analysis module to obtain an emotion analysis result of the to-be-detected sentence comprises the following steps:
masking the first hidden vectors of the non-aspect words in the first fused feature representation of the sentence to be tested and the second hidden vectors of the non-aspect words in the second fused feature representation, obtaining the masked first fused feature representation and masked second fused feature representation of the sentence to be tested, and splicing the masked first fused feature representation and masked second fused feature representation to obtain a spliced fused feature representation;
obtaining an emotion classification polarity probability distribution vector of the statement to be tested according to the fusion feature representation after the splicing processing and a preset emotion analysis algorithm, obtaining an emotion polarity corresponding to the dimension with the maximum probability according to the emotion classification polarity probability distribution vector, and using the emotion polarity as an emotion analysis result of the statement to be tested, wherein the emotion analysis algorithm is as follows:
$$\hat{y} = \mathrm{softmax}\left(W_1 h_a + b_1\right)$$

where $\hat{y}$ is the emotion classification polarity probability distribution vector, $h_a$ is the fused feature representation after the splicing processing, softmax() is a normalization function, $W_1$ is the weight parameter of the emotion analysis module, and $b_1$ is the bias parameter of the emotion analysis module.
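The masking-pooling-classification pipeline of this claim can be sketched as follows. Mean-pooling the aspect span after masking is an assumption for illustration, as are the token count, aspect position, and class count.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n, d, n_classes = 6, 8, 3
h1 = rng.normal(size=(n, d))   # first fused feature representation (per token)
h2 = rng.normal(size=(n, d))   # second fused feature representation (per token)

aspect_mask = np.zeros((n, 1))
aspect_mask[2:4] = 1.0         # assume tokens 2-3 form the aspect term

# zero non-aspect tokens, mean-pool the aspect span, then splice the two views
pool = lambda h: (h * aspect_mask).sum(axis=0) / aspect_mask.sum()
h_a = np.concatenate([pool(h1), pool(h2)])

w1 = rng.normal(size=(n_classes, 2 * d)) * 0.1
b1 = np.zeros(n_classes)
probs = softmax(w1 @ h_a + b1)        # \hat{y} = softmax(W1 h_a + b1)
polarity = int(np.argmax(probs))      # index of the predicted polarity
```

The dimension with the largest probability is reported as the sentiment analysis result, matching the argmax selection in the claim.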
8. An emotion analysis device that fuses syntactic and semantic information, comprising:
the system comprises an obtaining module, a judging module and a judging module, wherein the obtaining module is used for obtaining a sentence to be tested and a preset emotion analysis model, and the emotion analysis model comprises a sentence coding module, a multi-head self-attention module, an information fusion module, an information sharing module and an emotion analysis module; the information fusion module comprises a first syntax information extraction module and a first semantic information extraction module which are cascaded in tandem, the information sharing module comprises a second syntax information extraction module and a second semantic information extraction module which are connected in parallel, and the weight parameters of the second syntax information extraction module and the second semantic information extraction module are the same;
the sentence coding module is used for inputting the sentence to be detected into the sentence coding module for coding processing to obtain the sentence characteristic representation of the sentence to be detected; constructing a syntax adjacency matrix of the statement to be tested;
the first feature representation calculation module is used for inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the first syntactic information extraction module to obtain a first syntactic feature representation of the sentence to be tested; inputting the first syntactic feature representation to the multi-head self-attention module, constructing a semantic adjacency matrix of the to-be-detected sentence, and inputting the first syntactic feature representation and the semantic adjacency matrix to the first semantic information extraction module for semantic feature extraction to obtain a first fusion feature representation of the to-be-detected sentence;
the second feature representation calculation module is used for inputting the sentence feature representation of the sentence to be tested and the syntactic adjacency matrix into the second syntax information extraction module to obtain a second syntactic feature representation of the sentence to be tested; inputting the sentence feature representation of the sentence to be tested and the semantic adjacency matrix into the second semantic information extraction module to obtain a second semantic feature representation of the sentence to be tested, and performing fusion processing on the second syntactic feature representation and the second semantic feature representation to obtain a second fused feature representation of the sentence to be tested;
and the emotion analysis module is used for inputting the first fusion characteristic representation and the second fusion characteristic representation of the statement to be detected into the emotion analysis module to obtain an emotion analysis result of the statement to be detected.
9. A computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of sentiment analysis fusing syntactic and semantic information according to any one of claims 1 to 7 when executing the computer program.
10. A storage medium, characterized by: the storage medium stores a computer program which, when executed by a processor, implements the steps of the method for sentiment analysis fusing syntactic and semantic information according to any one of claims 1 to 7.
CN202211383395.9A 2022-11-07 2022-11-07 Emotion analysis method, device and equipment integrating syntax and semantic information Active CN115905524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211383395.9A CN115905524B (en) 2022-11-07 2022-11-07 Emotion analysis method, device and equipment integrating syntax and semantic information


Publications (2)

Publication Number Publication Date
CN115905524A true CN115905524A (en) 2023-04-04
CN115905524B CN115905524B (en) 2023-10-03

Family

ID=86490676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211383395.9A Active CN115905524B (en) 2022-11-07 2022-11-07 Emotion analysis method, device and equipment integrating syntax and semantic information

Country Status (1)

Country Link
CN (1) CN115905524B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116822522A (en) * 2023-06-13 2023-09-29 连连银通电子支付有限公司 Semantic analysis method, semantic analysis device, semantic analysis equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5966686A (en) * 1996-06-28 1999-10-12 Microsoft Corporation Method and system for computing semantic logical forms from syntax trees
CN112966074A (en) * 2021-05-17 2021-06-15 华南师范大学 Emotion analysis method and device, electronic equipment and storage medium
CN115204183A (en) * 2022-09-19 2022-10-18 华南师范大学 Knowledge enhancement based dual-channel emotion analysis method, device and equipment


Also Published As

Publication number Publication date
CN115905524B (en) 2023-10-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant