CN112966074A - Emotion analysis method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN112966074A (application CN202110535102.3A)
- Authority
- CN
- China
- Prior art keywords
- information
- semantic
- graph
- neural network
- syntax
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/3344 — Information retrieval; querying; query execution using natural language analysis
- G06F40/30 — Handling natural language data; semantic analysis
- G06N3/044 — Neural network architectures; recurrent networks, e.g. Hopfield networks
- G06N3/045 — Neural network architectures; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The invention provides an emotion analysis method and device, an electronic device, and a storage medium. The method comprises the following steps: obtaining, through a bidirectional LSTM network, a hidden state vector corresponding to a sentence to be subjected to emotion analysis; obtaining syntax information of the sentence through a first graph convolution neural network model; obtaining semantic information of the sentence through a multi-head self-attention mechanism model and a second graph convolution neural network model; obtaining common information between the syntax graph and the semantic graph through a shared graph convolution neural network model; splicing and fusing the syntax information, the semantic information, and the common information to obtain a feature expression of a specific target; and inputting the feature expression into a fully connected network for probability calculation to obtain an emotion analysis result of the specific target. The feature expression fully extracts the semantic information in the semantic graph and takes into account the common information between the semantic information and the syntax information, thereby improving the accuracy of emotion analysis.
Description
Technical Field
The present invention relates to the field of natural language processing technologies, and in particular, to an emotion analysis method and apparatus, an electronic device, and a storage medium.
Background
With the explosive growth of user-generated text on the Internet, automatically extracting useful information from this abundance of documents has drawn attention in the field of Natural Language Processing (NLP). Emotion analysis is one of the important problems in this field; its purpose is to analyze subjective text with emotional coloring. Generally, emotion analysis for a specific target involves several steps, such as obtaining word embeddings, modeling syntactic information, and extracting semantic information, among which mining the most relevant opinion words plays a pivotal role.
In some technologies, an attention mechanism is applied to connect aspect words and opinion words, and semantic information is extracted to perform emotion analysis on a specific target. However, limited by co-occurrence frequency or long-range word dependencies, attention mechanisms may assign incorrect weights to unrelated words. In other techniques, a graph-based neural network extracts syntax information from a dependency syntax tree. Although this achieves a dramatic improvement over attention-based models, it also has disadvantages: sentences differ in their sensitivity to syntactic and semantic information. In particular, for sentences without a clear syntactic structure, syntactic information cannot in some cases help the model determine the emotional polarity of the sentence. In still other techniques, syntax information is extracted from a dependency syntax tree by a graph neural network and semantic information is extracted in conjunction with an attention mechanism; however, not all information on the dependency tree is meaningful, so noise will be encoded by the graph neural network, and applying an attention mechanism on this basis causes secondary noise.
In the process of implementing the invention, the inventors found that these techniques have at least the following problem: whether semantic information is extracted through an attention mechanism, or syntax information is extracted from a dependency syntax tree by a graph neural network, the common information between the semantic information and the syntax information is not extracted, which reduces emotion analysis accuracy.
Disclosure of Invention
In order to solve the problems in the related art, embodiments of the present application provide an emotion analysis method, apparatus, electronic device, and storage medium, which have the advantage of improving emotion analysis accuracy.
According to a first aspect of embodiments of the present application, there is provided an emotion analysis method, including the steps of:
acquiring a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into a bidirectional LSTM neural network to obtain a hidden state vector corresponding to the word vector;
obtaining a dependency syntax tree corresponding to the sentence, and converting the dependency syntax tree into a syntax graph;
inputting the hidden state vector and the syntactic graph into a first graph convolution neural network model to obtain syntactic information of the sentence;
inputting the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and inputting the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain semantic information of the sentence;
inputting the hidden state vector, the syntactic graph and the semantic graph into a shared graph convolution neural network model to obtain common information between the syntactic graph and the semantic graph;
inputting the syntax information, the semantic information and the common information into a mask model and performing average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain a feature expression of a specific target;
and inputting the characteristic expression into a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
Further, the step of obtaining a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector to a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector includes: converting each word in the sentence to be subjected to emotion analysis into a word vector according to the GloVe word embedding model; inputting the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector; wherein the hidden state vector is represented as follows:
$$s = \{w_1^{s}, w_2^{s}, \ldots, w_{\tau+1}^{s}, \ldots, w_{\tau+m}^{s}, \ldots, w_n^{s}\}$$
$$h_i^{s} = [\overrightarrow{h_i^{s}}\,;\,\overleftarrow{h_i^{s}}], \quad i \in [1, n]$$
wherein $n$ represents the number of word vectors corresponding to the sentence to be subjected to emotion analysis, $m$ represents the number of word vectors corresponding to the specific target in the sentence to be subjected to emotion analysis, $h_i^{s}$ represents the hidden state vector, $\overrightarrow{h_i^{s}}$ represents the hidden state vector encoded in the forward direction for each word vector, $\overleftarrow{h_i^{s}}$ represents the hidden state vector encoded in the backward direction for each word vector, the superscript $s$ denotes the sentence to be subjected to emotion analysis, the subscript $\tau$ indexes the specific target in the sentence, $\tau+1$ is the subscript of the 1st word of the specific target, and $\tau+m$ is the subscript of the $m$-th word of the specific target.
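As a non-limiting illustration of the splicing of forward and backward hidden states described above, the following numpy sketch uses random stand-ins for the Bi-LSTM outputs; the dimensions and values are assumptions for illustration, not part of the claimed method:

```python
import numpy as np

# Illustrative sketch (not the patent's implementation): for a sentence of
# n words with d-dimensional per-direction hidden states, a bidirectional
# LSTM yields a forward and a backward state per word; the hidden state
# vector h_i is their splice, giving dimension 2*d.
rng = np.random.default_rng(0)

n, d = 6, 4                         # n words, hidden size d per direction
forward = rng.normal(size=(n, d))   # stand-ins for the forward LSTM states
backward = rng.normal(size=(n, d))  # stand-ins for the backward LSTM states

H = np.concatenate([forward, backward], axis=-1)  # h_i = [->h_i ; <-h_i]
print(H.shape)  # (6, 8): each word is represented by a 2*d hidden vector
```

In a real system the `forward` and `backward` arrays would come from the two directions of the Bi-LSTM rather than a random generator.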
Further, the step of inputting the hidden state vector and the syntax graph into a first graph convolution neural network model to obtain syntax information of the sentence comprises: obtaining a syntactic adjacency matrix of the syntax graph, wherein the syntactic adjacency matrix represents the adjacency relationships of the words in the syntax graph; and inputting the hidden state vector and the syntactic adjacency matrix into the first graph convolution neural network model to obtain the syntax information of the sentence; wherein the formulas for obtaining the syntax information are as follows:
$$H^{0} = [\,h_1^{s}; h_2^{s}; \ldots; h_n^{s}\,]$$
$$\tilde{A} = D^{-1}\left(A^{syn} + I\right)$$
$$H^{l+1} = \sigma\left(\tilde{A}\, H^{l}\, W^{l}\right)$$
wherein $h_i^{s}$ represents the hidden state vector, $H^{0}$ is the first-layer input of the first graph convolution neural network model, formed by splicing $h_1^{s}, \ldots, h_n^{s}$ (";" denotes splicing), $H^{l+1}$ represents the output of the $(l+1)$-th layer of the first graph convolution neural network model, $\tilde{A}$ is the normalized adjacency matrix, $A^{syn}$ is the syntactic adjacency matrix, $I$ is a unit matrix, $D$ is the degree matrix of $A^{syn} + I$, $W^{l}$ is the learnable parameter matrix of the $l$-th layer of the first graph convolution neural network model, $\sigma$ denotes the activation function, and the final-layer output $H_{syn}$ represents the syntax information of the sentence.
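The syntactic graph convolution step can be sketched in numpy as follows; the dependency edges, dimensions, the $D^{-1}(A+I)$ normalization, and the choice of ReLU as the activation function are illustrative assumptions consistent with the degree, unit, and adjacency matrices named above, not the patent's fixed implementation:

```python
import numpy as np

# Minimal sketch of one graph-convolution layer over a syntactic graph.
n, d = 5, 8
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]    # illustrative dependency arcs

A = np.zeros((n, n))
for i, j in edges:                          # syntactic adjacency matrix
    A[i, j] = A[j, i] = 1.0

A_hat = A + np.eye(n)                       # add self-loops (unit matrix I)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # inverse degree matrix
A_norm = D_inv @ A_hat                      # normalized adjacency (rows sum to 1)

rng = np.random.default_rng(1)
H = rng.normal(size=(n, d))                 # hidden state vectors from the BiLSTM
W = rng.normal(size=(d, d))                 # learnable parameter matrix of the layer

H_syn = np.maximum(0.0, A_norm @ H @ W)     # sigma = ReLU activation
print(H_syn.shape)  # (5, 8)
```

Stacking this layer L times, with the final output taken as the syntax information, mirrors the layered formula above.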
Further, the step of inputting the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and inputting the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain semantic information of the sentence includes: inputting the hidden state vector into the multi-head self-attention mechanism model to obtain an initial semantic adjacency matrix of the semantic graph; inputting the hidden state vector and the initial semantic adjacency matrix into the operational formula of the second graph convolution neural network model to obtain the output result of the initial layer of the second graph convolution neural network model; inputting that output result and the initial semantic adjacency matrix into the multi-head self-attention mechanism model updating formula to obtain an updated semantic adjacency matrix; and repeating these two operations layer by layer until the output result of the output layer of the second graph convolution neural network model is obtained, which yields the semantic information of the sentence; wherein the formula for obtaining the initial semantic adjacency matrix of the semantic graph is as follows:
$$A_t = \frac{\left(H^{0} W_t^{Q}\right)\left(H^{0} W_t^{K}\right)^{\top}}{\sqrt{d_{head}}}, \quad t = 1, \ldots, k$$
$$A^{sem} = \operatorname{top\text{-}k}\!\left(\frac{1}{k}\sum_{t=1}^{k} A_t\right)$$
wherein $H^{0}$, formed from the hidden state vectors $h^{s}$, is the first-layer input of the second graph convolution neural network model, $k$ is the number of heads of the multi-head self-attention, $d$ is the hidden state vector dimension of each direction of the bidirectional LSTM network, $d_{head}$ is the dimension of each head of the multi-head self-attention, $A_t$ is the $t$-th self-attention matrix of the initial layer, $W_t^{Q}$ is the first trainable parameter matrix corresponding to the $t$-th self-attention matrix in the initial layer of the multi-head self-attention mechanism model, $W_t^{K}$ is the second trainable parameter matrix corresponding to the $t$-th self-attention matrix, $\top$ represents the transpose of a matrix, $\operatorname{top\text{-}k}(\cdot)$ keeps the largest $k$ elements of each row of the sorted matrix, and $A^{sem}$ is the initial semantic adjacency matrix;
wherein, the operation formula of the second graph convolution neural network model is as follows:
$$\tilde{A} = D^{-1}\left(A^{sem} + I\right), \qquad H^{1} = \sigma\left(\tilde{A}\, H^{0}\, W^{0}\right)$$
wherein $\tilde{A}$ is the normalized adjacency matrix, $I$ is a unit matrix, $D$ is the degree matrix of $A^{sem} + I$, $W^{0}$ is the learnable parameter matrix of the first layer of the second graph convolution neural network model, $\sigma$ denotes the activation function, and $H^{1}$ represents the output result of the initial layer of the second graph convolution neural network model;
wherein, the multi-head self-attention mechanism model updating formula is as follows:
$$H^{l} = [\,h_1^{l}; h_2^{l}; \ldots; h_n^{l}\,]$$
$$A_t^{l} = \operatorname{softmax}\!\left(\frac{\left(H^{l} W_t^{Q,l}\right)\left(H^{l} W_t^{K,l}\right)^{\top}}{\sqrt{d_{head}}}\right)$$
$$\bar{A}^{l} = \frac{1}{k}\sum_{t=1}^{k} A_t^{l}, \qquad A^{sem,l} = \operatorname{top\text{-}k}\left(\bar{A}^{l}\right)$$
wherein $H^{l}$, formed by splicing $h_1^{l}, \ldots, h_n^{l}$ (";" denotes splicing), represents the output of the $l$-th layer of the second graph convolution neural network model, $A_t^{l}$ is the $t$-th self-attention matrix of the $l$-th layer, $W_t^{Q,l}$ is the first trainable parameter matrix corresponding to the $t$-th self-attention matrix in the $l$-th layer of the multi-head self-attention mechanism model, $W_t^{K,l}$ is the second trainable parameter matrix corresponding to the $t$-th self-attention matrix in the $l$-th layer, $\top$ represents the transpose of a matrix, $d_{head}$ is the dimension of each head of the multi-head self-attention, $\operatorname{softmax}$ denotes the softmax activation function, $k$ is the number of heads of the multi-head self-attention, $\bar{A}^{l}$, obtained by averaging over the heads, is the intermediate result of the semantic adjacency matrix update, $\operatorname{top\text{-}k}(\cdot)$ keeps the largest $k$ elements of each row of the sorted matrix, and $A^{sem,l}$ is the updated semantic adjacency matrix;
wherein, the formula for obtaining the output result of the output layer of the second graph convolution neural network model is as follows:
$$\tilde{A}^{l} = D^{-1}\left(A^{sem,l} + I\right), \qquad H^{l+1} = \sigma\left(\tilde{A}^{l}\, H^{l}\, W^{l}\right)$$
wherein $\tilde{A}^{l}$ is the normalized adjacency matrix, $A^{sem,l}$ is the updated semantic adjacency matrix, $I$ is a unit matrix, $D$ is the degree matrix of $A^{sem,l} + I$, $W^{l}$ is the learnable parameter matrix of the $l$-th layer of the second graph convolution neural network model, $\sigma$ denotes the activation function, and the output-layer result $H_{sem}$ represents the semantic information of the sentence.
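The construction of the semantic adjacency matrix can be sketched as follows; the head count, the dimensions, and the row-wise top-k pruning are illustrative assumptions consistent with the multi-head self-attention and top-k operations named above:

```python
import numpy as np

# Hedged sketch: build a semantic adjacency matrix by averaging multi-head
# self-attention scores, then keep only the k strongest links per row.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_rows(M, k):
    """Keep the k largest entries of each row, zeroing out the rest."""
    out = np.zeros_like(M)
    idx = np.argsort(M, axis=1)[:, -k:]
    np.put_along_axis(out, idx, np.take_along_axis(M, idx, axis=1), axis=1)
    return out

rng = np.random.default_rng(2)
n, d, heads = 5, 8, 2
d_head = d // heads

H = rng.normal(size=(n, d))                  # hidden states (stand-ins)
att = np.zeros((n, n))
for t in range(heads):                       # accumulate per-head scores
    Wq = rng.normal(size=(d, d_head))        # first trainable matrix (Q)
    Wk = rng.normal(size=(d, d_head))        # second trainable matrix (K)
    att += softmax((H @ Wq) @ (H @ Wk).T / np.sqrt(d_head))
A_sem = topk_rows(att / heads, k=3)          # semantic adjacency matrix

print(A_sem.shape)  # (5, 5)
```

Since the softmax scores are strictly positive, each row of `A_sem` retains exactly three nonzero links under this pruning.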
Further, the step of inputting the hidden state vector, the syntax graph and the semantic graph into a shared graph convolution neural network model to obtain common information between the syntax graph and the semantic graph comprises: inputting the hidden state vector and the syntax graph into the shared graph convolution neural network model to obtain the common information of the syntax graph; inputting the hidden state vector and the semantic graph into the shared graph convolution neural network model to obtain the common information of the semantic graph; and inputting the common information of the syntax graph and the common information of the semantic graph into a combined operation formula to obtain the common information between the syntax graph and the semantic graph; wherein the formula for obtaining the common information of the syntax graph is as follows:
$$H_{com}^{syn} = f\left(A^{syn}, H^{s}, W_{com}\right)$$
wherein $A^{syn}$ represents the adjacency matrix of the syntax graph, $H^{s}$ represents the hidden state vector, $W_{com}$ represents the learnable parameter matrix of the shared graph convolution neural network model, $H_{com}^{syn}$ represents the common information of the syntax graph, and $f(\cdot)$ represents the syntax graph convolution module that obtains the common information of the syntax graph from the adjacency matrix of the syntax graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolution neural network model;
wherein, the public information formula for obtaining the semantic graph is as follows:
$$H_{com}^{sem} = f\left(A^{sem}, H^{s}, W_{com}\right)$$
wherein $A^{sem}$ represents the adjacency matrix of the semantic graph, $H^{s}$ represents the hidden state vector, $H_{com}^{sem}$ represents the common information of the semantic graph, and $f(\cdot)$ represents the semantic graph convolution module that obtains the common information of the semantic graph from the adjacency matrix of the semantic graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolution neural network model;
wherein, the combined operation formula is as follows:
$$H_{com} = W_1\, H_{com}^{syn} + W_2\, H_{com}^{sem}$$
wherein $W_1$ and $W_2$ are learnable parameter matrices, and $H_{com}$ represents the common information between the syntax graph and the semantic graph.
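The shared graph convolution can be sketched as follows: one parameter matrix is applied to both graphs, and the two outputs are combined with learnable matrices. The random adjacency matrices, the normalization, and the linear combination are illustrative assumptions, not the patent's fixed implementation:

```python
import numpy as np

# Sketch of a shared graph convolution: the same weight matrix W_share is
# used for both the syntactic and the semantic graph, so both branches are
# encouraged to express information the two graphs have in common.
def gcn_shared(A, H, W):
    A_hat = A + np.eye(A.shape[0])              # self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # inverse degree matrix
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

rng = np.random.default_rng(3)
n, d = 5, 8
H = rng.normal(size=(n, d))                     # hidden states (stand-ins)
A_syn = (rng.random((n, n)) > 0.6).astype(float)  # stand-in syntax graph
A_sem = rng.random((n, n))                        # stand-in semantic graph
W_share = rng.normal(size=(d, d))                 # shared across both graphs

Z_syn = gcn_shared(A_syn, H, W_share)   # common info of the syntax graph
Z_sem = gcn_shared(A_sem, H, W_share)   # common info of the semantic graph

W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
H_com = Z_syn @ W1 + Z_sem @ W2         # combined common information
print(H_com.shape)  # (5, 8)
```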
Further, the step of inputting the syntax information, the semantic information and the common information into a mask model, performing average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain a feature expression of a specific target includes: inputting the syntax information, the semantic information and the common information into a mask model and performing average pooling to obtain syntax-specific target information, semantic-specific target information and common-specific target information; splicing the syntax-specific target information, the semantic-specific target information and the common-specific target information to obtain a specific target representation; and inputting the specific target representation into a multilayer neural network fusion formula to obtain the feature expression of the specific target; wherein the syntax-specific target information, the semantic-specific target information and the common-specific target information are obtained as follows:
$$\operatorname{mask}(H)_i = \begin{cases} h_i, & \tau+1 \le i \le \tau+m \\ 0, & \text{otherwise} \end{cases}$$
$$h_{syn}^{a} = f\left(\operatorname{mask}\left(H_{syn}\right)\right), \quad h_{sem}^{a} = f\left(\operatorname{mask}\left(H_{sem}\right)\right), \quad h_{com}^{a} = f\left(\operatorname{mask}\left(H_{com}\right)\right)$$
wherein $\operatorname{mask}(\cdot)$ is the mask function of the mask model, $\tau$ is the subscript of the specific target in the sentence to be subjected to emotion analysis, $i$ is the index of the specific target, $m$ indicates the number of words of the specific target, $f(\cdot)$ is the average pooling function, $H_{syn}$ represents the syntax information, $H_{com}$ represents the common information between the syntax graph and the semantic graph, $H_{sem}$ represents the semantic information, $h_{syn}^{a}$ represents the syntax-specific target information, $h_{sem}^{a}$ represents the semantic-specific target information, and $h_{com}^{a}$ represents the common-specific target information;
wherein the specific target representation is obtained as follows:
$$r = [\,h_{syn}^{a}; h_{com}^{a}; h_{sem}^{a}\,]$$
wherein the multilayer neural network fusion formula is as follows:
$$u = \sigma\left(W_f\, r + b_f\right)$$
wherein $r$ is the specific target representation obtained by splicing, $W_f$ represents a learnable weight matrix, $b_f$ represents a bias term, $\sigma$ denotes the activation function, and $u$ represents the feature expression of the specific target.
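The mask, average pooling, and splicing steps can be sketched as follows; the span indices and dimensions are illustrative assumptions. Slicing out the target rows and averaging them is equivalent to zeroing the non-target positions and then averaging over the m target positions:

```python
import numpy as np

# Sketch of the aspect mask plus average pooling: keep only the specific
# target's positions (tau+1 .. tau+m in the patent's 1-based notation),
# average over the target span, then splice the three pooled vectors.
rng = np.random.default_rng(4)
n, d = 7, 6
tau, m = 2, 2                    # target occupies rows 2 and 3 (0-based)

H_syn = rng.normal(size=(n, d))  # syntax information (stand-in)
H_sem = rng.normal(size=(n, d))  # semantic information (stand-in)
H_com = rng.normal(size=(n, d))  # common information (stand-in)

def mask_avg_pool(H):
    # equivalent to masking non-target rows to zero, then averaging over m
    return H[tau:tau + m].mean(axis=0)

r = np.concatenate([mask_avg_pool(H_syn),
                    mask_avg_pool(H_com),
                    mask_avg_pool(H_sem)])  # specific target representation
print(r.shape)  # (18,): three d-dimensional pooled vectors spliced
```

A subsequent fusion layer (a learnable weight matrix, bias, and activation) would map `r` to the final feature expression.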
Further, the step of inputting the feature expression of the specific target into a fully connected network for probability calculation to obtain the emotion analysis result of the specific target includes: inputting the feature expression of the specific target into the softmax layer operation formula of the fully connected network for probability calculation to obtain the emotion analysis result of the specific target; wherein the softmax layer operation formula is as follows:
$$p = \operatorname{softmax}\left(W_p\, u + b_p\right)$$
wherein $u$ represents the feature expression of the specific target, $W_p$ represents a learnable weight matrix, $b_p$ represents a bias term, $\operatorname{softmax}$ denotes the softmax activation function, and $p$ represents the emotion analysis result.
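The final probability calculation can be sketched as follows; the three emotion polarities (positive, neutral, negative) and the dimensions are illustrative assumptions:

```python
import numpy as np

# Sketch of the final fully connected layer followed by softmax over the
# emotion polarities; the class with the largest probability is the result.
rng = np.random.default_rng(5)
d, num_classes = 18, 3

u = rng.normal(size=(d,))                 # feature expression of the target
W = rng.normal(size=(num_classes, d))     # learnable weight matrix
b = rng.normal(size=(num_classes,))       # bias term

logits = W @ u + b
p = np.exp(logits - logits.max())
p = p / p.sum()                           # softmax: class probabilities

print(round(float(p.sum()), 6))  # 1.0 — probabilities sum to one
print(int(p.argmax()))           # index of the predicted emotion polarity
```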
According to a second aspect of embodiments of the present application, there is provided an emotion analysis apparatus including:
the hidden state acquisition module is used for acquiring a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into the bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector;
the dependency syntax tree conversion module is used for acquiring a dependency syntax tree corresponding to the sentence and converting the dependency syntax tree into a syntax graph;
a syntax information obtaining module, configured to input the hidden state vector and the syntax map into a first map convolution neural network model, so as to obtain syntax information of the sentence;
a semantic information obtaining module, configured to input the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and input the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain semantic information of the sentence;
a public information obtaining module, configured to input the hidden state vector, the syntax diagram, and the semantic diagram into a shared diagram convolutional neural network model, and obtain public information between the syntax diagram and the semantic diagram;
the feature expression obtaining module is used for inputting the syntax information, the semantic information and the common information into a mask model and performing average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain the feature expression of a specific target;
and the emotion analysis module is used for inputting the characteristic expression to a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and executed to implement the emotion analysis method as described in any one of the above.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the emotion analysis method as described in any one of the above.
In the embodiments of the application, a word vector of a sentence to be subjected to emotion analysis is obtained and input into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector. A dependency syntax tree corresponding to the sentence is obtained and converted into a syntax graph. The hidden state vector and the syntax graph are input into a first graph convolution neural network model to obtain syntax information of the sentence. The hidden state vector is input into a multi-head self-attention mechanism model to obtain a semantic graph, and the hidden state vector and the semantic graph are input into a second graph convolution neural network model to obtain semantic information of the sentence. The hidden state vector, the syntax graph and the semantic graph are input into a shared graph convolution neural network model to obtain the common information between the syntax graph and the semantic graph. The syntax information, the semantic information and the common information are input into a mask model and average pooling is performed to obtain specific target information, which is spliced and fused to obtain a feature expression of the specific target. The feature expression is input into a fully connected network for probability calculation to obtain the emotion analysis result of the specific target. The feature expression fully extracts the semantic information in the semantic graph and takes into account the common information between the semantic information and the syntax information, improving the accuracy of emotion analysis.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow diagram of a sentiment analysis method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a step S110 of the emotion analyzing method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a step S130 of the emotion analysis method according to an embodiment of the present application;
FIG. 4 is a diagram of a syntactic dependency tree, provided in accordance with an embodiment of the present application;
FIG. 5 is a diagram of a syntactic adjacency matrix provided by one embodiment of the present application;
FIG. 6 is a flowchart illustrating a step S140 of the emotion analysis method according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a step S150 of the emotion analysis method according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a step S160 of the emotion analysis method according to an embodiment of the present application;
FIG. 9 is a schematic overall structure diagram of an emotion analysis model provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of a second convolutional neural network model provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a shared graph convolution neural network model provided in accordance with an embodiment of the present application;
FIG. 12 is a block diagram schematically illustrating a structure of an emotion analyzing apparatus according to an embodiment of the present application;
fig. 13 is a block diagram illustrating a schematic structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one element from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining". Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Example 1
Referring to fig. 1, an emotion analysis method provided in the embodiment of the present application includes the following steps:
s110: obtaining a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector.
Word embedding is a numerical representation of words: each word is mapped to a high-dimensional real-valued vector, and this vector is called a word vector. In the embodiment of the application, a word vector of a sentence to be subjected to emotion analysis is acquired, and the word vector is input into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector. The bidirectional LSTM network, namely the bidirectional Long Short-Term Memory network (Bi-LSTM), is a kind of recurrent neural network; it comprises a forward recurrent neural network and a backward recurrent neural network and is well suited to modeling sequential data.
Referring to fig. 2, in an embodiment of the present application, the step S110 includes steps S111 to S112, which are as follows:
s111: and converting each word in the sentence to be subjected to emotion analysis into a word vector according to the GloVe word embedding model.
Global Vectors for Word Representation (GloVe) is a word representation tool based on global word-frequency statistics; it represents a word as a vector of real numbers that captures some semantic properties between words. In the embodiment of the application, the sentence to be subjected to emotion analysis s = {w_1, w_2, …, w_n} is acquired, where w_1, …, w_n respectively represent the words in the sentence, comprising n words in total, and the sentence contains a specific target a = {w_{τ+1}, w_{τ+2}, …, w_{τ+m}}, where w_{τ+1}, …, w_{τ+m} respectively represent the words of the specific target, comprising m words in total; τ+1 is the subscript of the 1st word of the specific target in the sentence to be subjected to emotion analysis, and τ+m is the subscript of the m-th word of the specific target. By looking up a pre-trained word embedding matrix E ∈ R^{|V|×d_e}, where |V| is the size of the vocabulary and d_e is the word vector dimension, the sentence is initialized and each word in the sentence to be subjected to emotion analysis is converted into a word vector.
S112: and inputting the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector.
In the embodiment of the application, the word vector is input into the bidirectional LSTM network to obtain the hidden state vector corresponding to the word vector. The hidden state vector is represented as follows:

H^c = {h^c_1, …, h^c_{τ+1}, …, h^c_{τ+m}, …, h^c_n}, with h^c_i = [→h^c_i ; ←h^c_i]

where n represents the number of word vectors corresponding to the sentence to be subjected to emotion analysis, m represents the number of word vectors corresponding to the specific target in the sentence, h^c_i represents the hidden state vector, →h^c_i represents the hidden state vector encoded in the forward direction corresponding to each word vector, ←h^c_i represents the hidden state vector encoded in the backward direction corresponding to each word vector, c is a superscript denoting the sentence to be subjected to emotion analysis, τ+1 is the subscript of the 1st word of the specific target in the sentence, and τ+m is the subscript of the m-th word of the specific target; each hidden state vector h^c_i is the splicing of the forward-direction and backward-direction hidden state vectors of the i-th word.
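As a toy illustration (not part of the patent), the splicing of the forward- and backward-encoded hidden states into one bidirectional hidden state vector can be sketched in plain Python; the vectors below are made-up placeholders rather than real LSTM outputs:

```python
# Sketch: each bidirectional hidden state h_i = [forward_i ; backward_i].
# The numeric values are illustrative placeholders, not real LSTM outputs.
def concat_states(forward, backward):
    """Concatenate per-word forward and backward hidden vectors."""
    return [f + b for f, b in zip(forward, backward)]

H = concat_states([[0.1, 0.2], [0.3, 0.4]],   # forward-encoded states
                  [[0.5, 0.6], [0.7, 0.8]])   # backward-encoded states
```

Each resulting vector has twice the per-direction dimension, matching the Bi-LSTM output described above.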
S120: and acquiring a dependency syntax tree corresponding to the sentence, and converting the dependency syntax tree into a syntax graph.
The dependency syntax tree represents the dependency relationships between the words in a sentence. In the embodiment of the application, a Stanford parser is used to perform syntactic analysis on the sentence, generate the dependency syntax tree, and convert the dependency syntax tree into a syntax graph, where A^syn ∈ R^{n×n} is the adjacency matrix of the syntax graph and H^c is the hidden state vector.
S130: and inputting the hidden state vector and the syntactic graph into a first graph convolution neural network model to obtain syntactic information of the sentence.
A Graph Convolutional Network (GCN) is a convolutional neural network capable of deep learning on graph data and is used for processing data with a graph structure; the graph structure is a topological structure, which can also be called a non-Euclidean structure. In the embodiment of the application, the hidden state vector H^c and the syntax graph A^syn are input into a first graph convolution neural network model, the syntax in the sentence is integrated, and the syntax information of the sentence is obtained.
Referring to fig. 3, in an embodiment of the present application, the step S130 includes steps S131 to S132 as follows:
s131: obtaining a syntax adjacency matrix of the syntax diagram; wherein the syntactic adjacency matrix represents adjacency relationships of words in the syntactic graph.
An adjacency matrix corresponding to the words in the sentence to be subjected to emotion analysis is obtained according to the syntactic graph; the syntactic adjacency matrix represents the adjacency relationships of the words in the syntactic graph. Referring to fig. 4 and 5, fig. 4 is a schematic diagram of a syntactic dependency tree provided in an embodiment of the present application, and fig. 5 is a schematic diagram of a syntactic adjacency matrix provided in an embodiment of the present application. The syntactic dependency tree shown in FIG. 4 shows the dependency relationships of the words in the target sentence "it has a bad memory but a good battery life", and FIG. 5 is the syntactic adjacency matrix corresponding to the syntactic dependency tree shown in FIG. 4. For two words with a dependency relationship, the corresponding value in the syntactic adjacency matrix is 1; for two words without a dependency relationship, the corresponding value in the syntactic adjacency matrix is 0; and each word has a dependency relationship with itself by default, so the diagonal entries are 1.
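The adjacency-matrix construction described above (value 1 for a dependency pair, 0 otherwise, with each word adjacent to itself by default) can be sketched in plain Python; the function name and the toy edge list are illustrative, not the parser's actual output:

```python
# Sketch: building a syntactic adjacency matrix from dependency edges.
# The edge list is illustrative, not the output of a real parser.
def build_adjacency(n, edges):
    """Symmetric 0/1 adjacency matrix with self-loops, as in Fig. 5."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1                  # each word depends on itself by default
    for head, dep in edges:
        A[head][dep] = 1             # words with a dependency relationship
        A[dep][head] = 1             # adjacency is treated as symmetric here
    return A

# Toy fragment: 0=it 1=has 2=a 3=bad 4=memory
A = build_adjacency(5, [(1, 0), (1, 4), (4, 2), (4, 3)])
```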
S132: and inputting the hidden state vector and the syntactic adjacency matrix into the first graph convolution neural network model to obtain syntactic information of the sentence.
In this embodiment of the present application, the hidden state vector and the syntactic adjacency matrix are input into the first graph convolution neural network model to obtain the output result of the current layer, and the input operation is repeatedly performed until the output result of the output layer of the first graph convolution neural network model is obtained, yielding the syntactic representation of the sentence. The formula for obtaining the syntax information is as follows:

H^0 = [h^c_1 ; h^c_2 ; … ; h^c_n]
H^l = σ(Ã H^{l-1} W^l), Ã = D^{-1/2}(A^syn + I)D^{-1/2}

where H^c represents the hidden state vector, H^0 represents the first-layer input of the first graph convolution neural network model, formed by splicing h^c_1, …, h^c_n (";" denotes splicing), H^l represents the output of the l-th layer of the first graph convolution neural network model, Ã is the normalized adjacency matrix, A^syn is the syntactic adjacency matrix, I is an identity matrix, D is the degree matrix of A^syn + I, W^l is the learnable parameter matrix of the l-th layer of the first graph convolution neural network model, σ denotes the activation function, and the output H^syn of the last layer represents the syntax information of the sentence.
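A minimal plain-Python sketch of one graph convolution layer of the form H^l = σ(Ã H^{l-1} W^l) with Ã = D^{-1/2}(A + I)D^{-1/2}; all names and the tiny example are illustrative assumptions, and a real implementation would use a tensor library:

```python
import math

# Sketch of one GCN layer: normalize (A + I), propagate, apply ReLU.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def gcn_layer(A, H, W):
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]                       # A + I
    deg = [sum(row) for row in A_hat]                 # degree matrix D
    A_norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j])
               for j in range(n)] for i in range(n)]  # D^-1/2 (A+I) D^-1/2
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, v) for v in row] for row in Z]  # ReLU activation
```

Stacking this layer and feeding the last layer's output forward reproduces the repeated input operation described above.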
S140: and inputting the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and inputting the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain semantic information of the sentence.
The nature of the attention mechanism comes from the human visual attention mechanism; it is applied to emotion analysis so that more attention can be assigned to key words during classification. Specifically, a piece of text can be thought of as being composed of a series of <Key, Value> data pairs. Given an element Query, the similarity or correlation between the Query and each Key is calculated to obtain the weight coefficient of the Value corresponding to each Key; after the weight coefficients are normalized by a softmax function, a weighted summation of the coefficients and the corresponding Values is performed to obtain the attention result. In current research, Key and Value are often equal, i.e. Key = Value. In the embodiment of the application, the hidden state vector is input into a multi-head self-attention mechanism model to obtain a semantic graph, and the hidden state vector and the semantic graph are input into a second graph convolution neural network model to obtain the semantic information of the sentence.
Referring to fig. 6, in an embodiment of the present application, the step S140 includes steps S141 to S144, which are as follows:
s141: and inputting the hidden state vector into a multi-head self-attention mechanism model to obtain an initial semantic adjacency matrix of the semantic graph.
In the embodiment of the application, the hidden state vector is input into the multi-head self-attention mechanism model to generate h attention matrices; to enhance the robustness of the model, the h attention matrices are summed in the initialization phase, and top-k is then used to choose the largest k elements, thereby obtaining the initial semantic adjacency matrix of the semantic graph. The formula for obtaining the initial semantic adjacency matrix of the semantic graph is as follows:

A^t = softmax((H^0 W^t_1)(H^0 W^t_2)^T / √d_head), t = 1, …, h
A^sem_0 = top-k(Σ_{t=1}^{h} A^t)

where H^c represents the hidden state vector, H^0 = H^c is the first-layer input of the second graph convolution neural network model, h is the number of heads of the multi-head self-attention, d is the hidden state vector dimension of each direction of the bidirectional LSTM network, d_head is the per-head dimension of the multi-head self-attention, A^t is the t-th self-attention matrix of the initial layer, W^t_1 is the first trainable parameter matrix corresponding to the t-th self-attention matrix in the initial layer of the multi-head self-attention mechanism model, W^t_2 is the second trainable parameter matrix corresponding to the t-th self-attention matrix, ^T denotes the matrix transpose, top-k keeps the largest k elements of the summed matrix, and A^sem_0 is the initial semantic adjacency matrix.
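The attention-matrix and top-k steps can be sketched in plain Python; the projection matrices W_1 and W_2 are omitted (treated as identity) purely for brevity, so this is an illustrative simplification rather than the patented model:

```python
import math

# Sketch: one self-attention head over hidden states, then top-k
# sparsification of the score matrix into a semantic adjacency.
def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention_matrix(H, d_head):
    """Scaled dot-product scores, row-wise softmax (projections omitted)."""
    n = len(H)
    scores = [[sum(q * k for q, k in zip(H[i], H[j])) / math.sqrt(d_head)
               for j in range(n)] for i in range(n)]
    return [softmax(row) for row in scores]

def top_k(A, k):
    """Keep the k largest entries of each row, zero the rest."""
    out = []
    for row in A:
        thresh = sorted(row, reverse=True)[k - 1]
        out.append([v if v >= thresh else 0.0 for v in row])
    return out
```

In the full model, several such heads would be summed before the top-k step, as the formula above describes.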
S142: and inputting the hidden state vector and the initial semantic adjacency matrix into an operation formula of the second graph convolution neural network model to obtain an output result of the initial layer of the second graph convolution neural network model.
In the embodiment of the application, the hidden state vector H^0 and the initial semantic adjacency matrix A^sem_0 are input into the operation formula of the second graph convolution neural network model to obtain the output result of the initial layer of the second graph convolution neural network model. The operation formula of the second graph convolution neural network model is as follows:

H^1_sem = σ(Ã H^0 W^1), Ã = D^{-1/2}(A^sem_0 + I)D^{-1/2}

where Ã is the normalized adjacency matrix, I is an identity matrix, D is the degree matrix of A^sem_0 + I, W^1 is the learnable parameter matrix of the first layer of the second graph convolution neural network model, σ denotes the activation function, and H^1_sem represents the output result of the initial layer of the second graph convolution neural network model.
S143: and inputting the initial semantic adjacency matrix and the output result of the initial layer of the second graph convolution neural network model into a multi-head self-attention mechanism model updating formula to obtain an updated semantic adjacency matrix.
In the embodiment of the application, the initial semantic adjacency matrix A^sem_0 and the output result H^1_sem of the initial layer of the second graph convolution neural network model are input into the multi-head self-attention mechanism model updating formula to obtain the updated semantic adjacency matrix. The multi-head self-attention mechanism model updating formula is as follows:

A^{t,l} = softmax((H^l_sem W^{t,l}_1)(H^l_sem W^{t,l}_2)^T / √d_head), t = 1, …, h
Â^l = Σ_{t=1}^{h} A^{t,l}
A^sem_l = top-k(Â^l)

where H^l_sem is formed by splicing the hidden vectors output by the l-th layer of the second graph convolution neural network model (";" denotes splicing), A^{t,l} denotes the t-th self-attention matrix of the l-th layer, W^{t,l}_1 is the first trainable parameter matrix corresponding to the t-th self-attention matrix in the l-th layer of the multi-head self-attention mechanism model, W^{t,l}_2 is the second trainable parameter matrix corresponding to the t-th self-attention matrix, ^T denotes the matrix transpose, d_head is the per-head dimension of the multi-head self-attention, softmax denotes the softmax function, h is the number of heads of the multi-head self-attention, Â^l is an intermediate result of the semantic adjacency matrix update, top-k keeps the largest k elements of the summed matrix, and A^sem_l is the updated semantic adjacency matrix.
S144: and repeatedly executing input operation on the updated semantic adjacency matrix and the output result of the initial layer of the second graph convolution neural network model until the output result of the output layer of the second graph convolution neural network model is obtained, and obtaining the semantic information of the sentence.
In this embodiment of the application, the input operation is repeatedly executed on the updated semantic adjacency matrix and the output result of the initial layer of the second graph convolution neural network model until the output result of the output layer of the second graph convolution neural network model is obtained, yielding the semantic information of the sentence. The formula for obtaining the output result of the output layer of the second graph convolution neural network model is as follows:

H^l_sem = σ(Ã H^{l-1}_sem W^l), Ã = D^{-1/2}(A^sem + I)D^{-1/2}

where Ã is the normalized adjacency matrix, A^sem is the updated semantic adjacency matrix, I is an identity matrix, D is the degree matrix, W^l is the learnable parameter matrix of the l-th layer of the second graph convolution neural network model, σ denotes the activation function, and the output H^sem of the last layer represents the semantic information of the sentence.
S150: and inputting the hidden state vector, the syntactic graph and the semantic graph into a shared graph convolution neural network model to obtain common information between the syntactic graph and the semantic graph.
Considering that the syntax information of the syntax diagram and the semantic information of the semantic diagram are not completely separated, the syntax and the semantics affect each other, and the semantics change as the syntax structure of the sentence changes. Therefore, extracting common information shared by the syntax diagram and the semantic diagram is advantageous for understanding the sentence. In the embodiment of the application, the hidden state vector, the syntactic graph and the semantic graph are input into a shared graph convolution neural network model, and common information between the syntactic graph and the semantic graph is obtained.
Referring to fig. 7, in an embodiment of the present application, the step S150 includes steps S151 to S153 as follows:
s151: and inputting the hidden state vector and the syntactic graph into the shared graph convolution neural network model to obtain the public information of the syntactic graph.
In this embodiment of the present application, the hidden state vector and the syntax map are input into the shared graph convolutional neural network model to obtain the public information of the syntax graph. The formula for obtaining the public information of the syntax graph is as follows:

H^syn_com = f_syn(A^syn, H^c, W_com)

where A^syn represents the adjacency matrix of the syntax graph, H^c represents the hidden state vector, W_com represents the learnable parameter matrix of the shared graph convolutional neural network model, H^syn_com represents the public information of the syntax graph, and f_syn is the syntax graph convolution module that derives the public information of the syntax graph from the adjacency matrix of the syntax graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolutional neural network model. The syntax graph convolution module uses the formula of the syntax information obtained in step S132, with the learnable parameter matrix W^l of the l-th layer of the first graph convolution neural network model replaced by the learnable parameter matrix W_com of the shared graph convolutional neural network model.
S152: and inputting the hidden state vector and the semantic graph into the shared graph convolution neural network model to obtain the public information of the semantic graph.
In the embodiment of the application, the hidden state vector and the semantic graph are input into the shared graph convolution neural network model to obtain the public information of the semantic graph. The formula for obtaining the public information of the semantic graph is as follows:

H^sem_com = f_sem(A^sem, H^c, W_com)

where A^sem represents the adjacency matrix of the semantic graph, H^c represents the hidden state vector, H^sem_com represents the public information of the semantic graph, and f_sem is the semantic graph convolution module that derives the public information of the semantic graph from the adjacency matrix of the semantic graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolution neural network model. The semantic graph convolution module uses the formulas of steps S141 to S144, with the learnable parameter matrix W^l of the l-th layer of the second graph convolution neural network model replaced by the learnable parameter matrix W_com of the shared graph convolutional neural network model.
S153: and inputting the public information of the syntactic graph and the public information of the semantic graph into a combined operation formula to obtain the public information between the syntactic graph and the semantic graph.
In the embodiment of the application, the public information of the syntactic graph and the public information of the semantic graph are input into a combination formula to obtain the public information between the syntactic graph and the semantic graph. The combination formula is as follows:

H^com = W_1 H^syn_com + W_2 H^sem_com

where W_1 and W_2 are learnable parameter matrices and H^com represents the public information between the syntax graph and the semantic graph.
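A minimal sketch of the combination step, with the learnable parameter matrices reduced to illustrative scalar weights for brevity:

```python
# Sketch: combine the two shared-module outputs into one public
# representation via a weighted sum. In the model the weights are
# learnable parameter matrices; scalars are used here for brevity.
def combine(H_syn_com, H_sem_com, w1=0.5, w2=0.5):
    return [[w1 * a + w2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(H_syn_com, H_sem_com)]
```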
S160: and inputting the syntactic information, the semantic information and the public information into a mask model, averaging and pooling to obtain specific target information, and splicing and fusing the specific target information to obtain the characteristic expression of the specific target.
In the embodiment of the application, the syntactic information, the semantic information and the public information are input into a mask model, and are averaged and pooled to obtain specific target information, and the specific target information is spliced and fused to obtain the feature expression of a specific target, so that the syntactic information, the semantic information and the public information thereof are adaptively fused, and further deep specific target information is obtained for next emotion analysis.
In one embodiment of the present application, since H^syn_com and H^sem_com are both learned from the same sentence, in order to let the shared module capture more public information, the loss between H^syn and H^syn_com is defined as L^syn_diff; likewise, the loss between H^sem and H^sem_com is defined as L^sem_diff. The total specificity error L_diff is:

L_diff = L^syn_diff + L^sem_diff
In addition to this, H^syn_com and H^sem_com, as outputs of the shared graph convolution neural network, use the following constraint L_sim to enhance the similarity between them:

L_sim = ||H^syn_com − H^sem_com||²
the loss function of the final model is
WhereinIs the number of emotion categories, including positive, negative and neutral,representing the probability of the jth sample of the jth real emotion class,representing the probability of the jth predicted emotion class for the ith sample,is the regularization weight parameter that is,andis a hyper-parameter which is the parameter,a trainable parameter matrix, including、、Andand the like,to representThe square of the norm.
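The total loss can be sketched in plain Python as cross-entropy plus the weighted auxiliary terms; the hyper-parameter values and function names are illustrative assumptions:

```python
import math

# Sketch: cross-entropy over the three emotion classes plus weighted
# specificity, similarity, and L2 terms. Hyper-parameters are illustrative.
def cross_entropy(y_true, y_pred):
    """y_true: one-hot rows; y_pred: predicted probability rows."""
    return -sum(t * math.log(p)
                for row_t, row_p in zip(y_true, y_pred)
                for t, p in zip(row_t, row_p) if t > 0)

def total_loss(y_true, y_pred, l_diff, l_sim, l2_norm_sq,
               lam1=0.1, lam2=0.1, lam3=1e-5):
    return (cross_entropy(y_true, y_pred)
            + lam1 * l_diff + lam2 * l_sim + lam3 * l2_norm_sq)
```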
Referring to fig. 8, in an embodiment of the present application, the step S160 includes steps S161 to S163, which are as follows:
s161: and inputting the syntax information, the semantic information and the public information into a mask model to perform average pooling to obtain syntax specific target information, semantic specific target information and public specific target information.
In the embodiment of the application, the syntax information, the semantic information and the public information are input into a mask model and then average-pooled, obtaining the syntax-specific target information, the semantic-specific target information and the public-specific target information. The formulas for obtaining the syntax-specific target information, the semantic-specific target information and the public-specific target information are as follows:

mask(H): h_t is kept for τ+1 ≤ t ≤ τ+m and set to zero otherwise
h^syn_a = f(mask(H^syn)), h^com_a = f(mask(H^com)), h^sem_a = f(mask(H^sem))

where mask is the output function of the mask model, τ+1 is the subscript of the 1st word of the specific target in the sentence to be subjected to emotion analysis, τ+m is the subscript of the m-th word of the specific target, t is the index of a specific target word, m is the number of specific target words, f is the average pooling function f(X) = (1/m) Σ_{t=τ+1}^{τ+m} x_t, H^syn represents the syntax information, H^com represents the public information between the syntax graph and the semantic graph, H^sem represents the semantic information, h^syn_a represents the syntax-specific target information, h^sem_a represents the semantic-specific target information, and h^com_a represents the public-specific target information.
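The mask-then-average-pool step can be sketched in plain Python: positions outside the specific-target span contribute nothing, and the span itself is averaged (the indices are illustrative):

```python
# Sketch: mask out non-target positions, then average-pool the
# hidden vectors of the target span H[start:end]. Indices illustrative.
def masked_average(H, start, end):
    span = H[start:end]          # only target-word vectors survive the mask
    m = len(span)
    dim = len(H[0])
    return [sum(row[d] for row in span) / m for d in range(dim)]
```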
S162: and splicing the syntax specific target information, the semantic specific target information and the public specific target information to obtain specific target representation.
In the embodiment of the application, the syntax-specific target information, the semantic-specific target information and the public-specific target information are spliced to obtain the specific target representation. The formula for obtaining the specific target representation is as follows:

r = [h^syn_a ; h^com_a ; h^sem_a]

where ";" denotes splicing and r is the specific target representation.
S163: and inputting the specific target representation into a multilayer neural network fusion formula to obtain the characteristic expression of the specific target.
A Multi-Layer Perceptron (MLP) is an artificial neural network with a feed-forward structure that maps a set of input vectors to a set of output vectors. The multi-layer neural network can be regarded as a directed graph composed of several node layers, each fully connected to the next; every node except the input nodes is a neuron (or processing unit) with a nonlinear activation function. In the embodiment of the application, the specific target representation is input into the multi-layer neural network fusion formula to obtain the feature expression of the specific target. The multi-layer neural network fusion formula is as follows:

z = σ(W_f r + b_f)

where W_f represents a learnable weight matrix, b_f represents a bias term, σ denotes the activation function, and z represents the feature expression of the specific target.
S170: and inputting the characteristic expression into a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
In the embodiment of the application, the feature expression is input to a full-connection network for probability calculation, and the emotion analysis result of the specific target is obtained.
In an embodiment of the present application, step S170 includes S171, which is as follows:
S171: inputting the feature expression of the specific target into the softmax layer operation formula of the fully connected network for probability calculation to obtain the emotion analysis result of the specific target.
In the embodiment of the application, the feature expression of the specific target is input into the softmax layer operation formula of the fully connected network for probability calculation, obtaining the emotion analysis result of the specific target. The softmax layer operation formula is as follows:

y = softmax(W_p z + b_p)

where z represents the feature expression of the specific target, W_p represents a learnable weight matrix, b_p represents a bias term, softmax denotes the softmax activation function, and y represents the emotion analysis result.
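A plain-Python sketch of the final softmax classification over the three emotion classes; the weights and dimensions are illustrative placeholders:

```python
import math

# Sketch: final classification y = softmax(W·z + b) over the three
# emotion classes (positive, negative, neutral). Weights illustrative.
def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def classify(feature, W, b):
    """Linear layer followed by softmax; returns class probabilities."""
    logits = [sum(w * x for w, x in zip(row, feature)) + bi
              for row, bi in zip(W, b)]
    return softmax(logits)
```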
Referring to fig. 9, fig. 10 and fig. 11, fig. 9 is a schematic overall structure diagram of an emotion analysis model according to an embodiment of the present application, fig. 10 is a schematic diagram of a second graph convolutional neural network model according to an embodiment of the present application, and fig. 11 is a schematic diagram of a shared graph convolution neural network model according to an embodiment of the present application. The emotion analysis model corresponds to the emotion analysis method proposed in the embodiment of the present application, for example, steps S110 to S170. Specifically, the model obtains a word vector of a sentence to be subjected to emotion analysis and inputs the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector; obtains a dependency syntax tree corresponding to the sentence and converts the dependency syntax tree into a syntax graph; inputs the hidden state vector and the syntax graph into a first graph convolution neural network model to obtain the syntax information of the sentence; inputs the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and inputs the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain the semantic information of the sentence; inputs the hidden state vector, the syntax graph and the semantic graph into a shared graph convolution neural network model to obtain the public information between the syntax graph and the semantic graph; inputs the syntax information, the semantic information and the public information into a mask model followed by average pooling to obtain specific target information, and splices and fuses the specific target information to obtain the feature expression of the specific target; and inputs the feature expression into a fully connected network for probability calculation to obtain the emotion analysis result of the specific target. The feature expression fully extracts the semantic information in the semantic graph and takes into account the public information between the semantic information and the syntax information, improving the accuracy of emotion analysis.
Example 2
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Please refer to fig. 12, which shows a schematic structural diagram of an emotion analyzing apparatus provided in an embodiment of the present application. The emotion analysis device 200 provided in the embodiment of the present application includes:
a hidden state obtaining module 210, configured to obtain a word vector of a sentence to be subjected to emotion analysis, and input the word vector to a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector;
a dependency syntax tree transformation module 220, configured to obtain a dependency syntax tree corresponding to the sentence, and transform the dependency syntax tree into a syntax diagram;
a syntax information obtaining module 230, configured to input the hidden state vector and the syntax map into a first map convolution neural network model, and obtain syntax information of the sentence;
a semantic information obtaining module 240, configured to input the hidden state vector to a multi-head self-attention mechanism model to obtain a semantic graph, and input the hidden state vector and the semantic graph to a second graph convolution neural network model to obtain semantic information of the sentence;
a public information obtaining module 250, configured to input the hidden state vector, the syntax diagram, and the semantic diagram into a shared diagram convolutional neural network model, and obtain public information between the syntax diagram and the semantic diagram;
a feature expression obtaining module 260, configured to input the syntax information, the semantic information, and the public information into a mask model to perform averaging and pooling to obtain specific target information, and perform splicing and fusion on the specific target information to obtain a feature expression of a specific target;
and the emotion analysis module 270 is configured to input the feature expression to a full-connection network for probability calculation, so as to obtain an emotion analysis result of the specific target.
The embodiment of the application obtains a word vector of a sentence to be subjected to emotion analysis, inputs the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector, obtains a dependency syntax tree corresponding to the sentence, converts the dependency syntax tree into a syntax diagram, inputs the hidden state vector and the syntax diagram into a first diagram convolution neural network model to obtain syntax information of the sentence, inputs the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic diagram, inputs the hidden state vector and the semantic diagram into a second diagram convolution neural network model to obtain semantic information of the sentence, inputs the hidden state vector, the syntax diagram and the semantic diagram into a shared diagram convolution neural network model to obtain common information between the syntax diagram and the semantic diagram, inputting the syntactic information, the semantic information and the public information into a mask model, averaging and pooling to obtain specific target information, splicing and fusing the specific target information to obtain a characteristic expression of the specific target, inputting the characteristic expression into a full-connection network for probability calculation to obtain an emotion analysis result of the specific target, fully extracting the semantic information in a semantic graph by the characteristic expression, considering the public information between the semantic information and the syntactic information, and improving the accuracy of emotion analysis.
Example 3
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 13, the present application further provides an electronic device 8 comprising a processor 80, a memory 81, and a computer program 82, such as an emotion analysis program, stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the above-described emotion analysis method embodiments, such as the steps S110 to S170 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 210 to 270 shown in fig. 8.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the electronic device 8. For example, the computer program 82 may be divided into a hidden state acquisition module, a dependency syntax tree conversion module, a syntax information acquisition module, a semantic information acquisition module, a public information acquisition module, a feature expression acquisition module, and an emotion analysis module, whose functions are as follows:
the hidden state acquisition module is used for acquiring a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into the bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector;
the dependency syntax tree conversion module is used for acquiring a dependency syntax tree corresponding to the sentence and converting the dependency syntax tree into a syntax graph;
a syntax information obtaining module, configured to input the hidden state vector and the syntax map into a first map convolution neural network model, so as to obtain syntax information of the sentence;
a semantic information obtaining module, configured to input the hidden state vector to a multi-head self-attention mechanism model to obtain a semantic graph, and input the hidden state vector and the semantic graph to a second graph convolution neural network model to obtain semantic information of the sentence;
a public information obtaining module, configured to input the hidden state vector, the syntax diagram, and the semantic diagram into a shared diagram convolutional neural network model, and obtain public information between the syntax diagram and the semantic diagram;
the feature expression obtaining module is used for inputting the syntactic information, the semantic information, and the public information into a mask model for average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain the feature expression of a specific target;
and the emotion analysis module is used for inputting the characteristic expression to a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
The emotion analysis device 8 may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that FIG. 8 is merely an example of the emotion analysis device 8 and does not constitute a limitation on the emotion analysis device 8, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the emotion analysis device 8 may also include input/output devices, network access devices, buses, and the like.
The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the emotion analysis device 8, such as a hard disk or an internal memory of the emotion analysis device 8. The memory 81 may also be an external storage device of the emotion analysis device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the emotion analysis device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the emotion analysis device 8. The memory 81 is used to store the computer program and other programs and data required by the emotion analysis device 8, and may also be used to temporarily store data that has been output or is to be output.
Example 4
The present application further provides a computer-readable storage medium on which a computer program is stored. The program is adapted to be loaded by a processor to execute the method steps of the foregoing embodiments; for the specific execution process, reference may be made to the description of embodiment 1, which is not repeated here. The device in which the storage medium is located may be an electronic device such as a personal computer, a notebook computer, a smart phone, or a tablet computer.
For the apparatus embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described device embodiments are merely illustrative, wherein the components described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. An emotion analysis method, comprising:
acquiring a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector;
obtaining a dependency syntax tree corresponding to the sentence, and converting the dependency syntax tree into a syntax graph;
inputting the hidden state vector and the syntactic graph into a first graph convolution neural network model to obtain syntactic information of the sentence;
inputting the hidden state vector into a multi-head self-attention mechanism model to obtain a semantic graph, and inputting the hidden state vector and the semantic graph into a second graph convolution neural network model to obtain semantic information of the sentence;
inputting the hidden state vector, the syntactic graph and the semantic graph into a shared graph convolution neural network model to obtain common information between the syntactic graph and the semantic graph;
inputting the syntactic information, the semantic information, and the public information into a mask model for average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain a feature expression of a specific target;
and inputting the characteristic expression into a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
2. The emotion analysis method of claim 1, wherein the step of obtaining a word vector of a sentence to be subjected to emotion analysis, inputting the word vector to a bidirectional LSTM network, and obtaining a hidden state vector corresponding to the word vector comprises:
converting each word in the sentence to be subjected to emotion analysis into a word vector according to the GloVe word embedding model;
inputting the word vector into a bidirectional LSTM network to obtain a hidden state vector corresponding to the word vector; wherein the hidden state vector is represented as follows:
H^s = {h^s_1, ..., h^s_{a_1}, ..., h^s_{a_m}, ..., h^s_n},  h^s_i = [→h^s_i ; ←h^s_i]
wherein n represents the number of word vectors corresponding to the sentence to be subjected to emotion analysis, m represents the number of word vectors corresponding to a specific target in the sentence to be subjected to emotion analysis, H^s represents the hidden state vector, →h^s_i represents the forward-encoded hidden state vector and ←h^s_i the backward-encoded hidden state vector corresponding to each word vector, s is the superscript denoting the sentence to be subjected to emotion analysis, a is the subscript denoting a specific target in the sentence to be subjected to emotion analysis, a_1 is the subscript denoting the 1st word of the specific target, and a_m is the subscript denoting the m-th word of the specific target.
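A minimal sketch of the bidirectional hidden-state construction in claim 2: each word's hidden state is the splice of its forward- and backward-encoded vectors, h_i = [h_i(fwd) ; h_i(bwd)]. The toy values are assumptions; a real implementation would obtain the forward and backward states from a recurrent encoder such as torch.nn.LSTM with bidirectional=True.

```python
def splice_bilstm_states(forward, backward):
    """Concatenate per-position forward and backward states into 2d-dim vectors."""
    return [f + b for f, b in zip(forward, backward)]

fwd = [[0.1, 0.2], [0.3, 0.4]]   # toy forward-encoded states, d = 2
bwd = [[0.5, 0.6], [0.7, 0.8]]   # toy backward-encoded states, d = 2
H = splice_bilstm_states(fwd, bwd)
```

Each resulting vector has dimension 2d, matching the representation H^s fed to the downstream graph convolution models.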
3. The emotion analysis method of claim 1, wherein the step of inputting the hidden state vector and the syntax map into a first graph convolution neural network model to obtain syntactic information of the sentence comprises:
obtaining a syntax adjacency matrix of the syntax diagram; wherein the syntactic adjacency matrix represents adjacency relationships of words in the syntactic graph;
inputting the hidden state vector and the syntactic adjacency matrix into the first graph convolution neural network model to obtain syntactic information of the sentence; wherein the formula for obtaining the syntax information is as follows:
H^(0) = [h^s_1; ...; h^s_n],  Â = D̃^(-1/2)(A_syn + I)D̃^(-1/2),  H^(l)_syn = σ(Â H^(l-1)_syn W^(l))
wherein H^s represents the hidden state vector, H^(0) represents the first-layer input of the first graph convolutional neural network model, formed by splicing h^s_1, ..., h^s_n ("[;]" denotes splicing), H^(l)_syn represents the l-th layer output of the first graph convolutional neural network model, Â is the normalized adjacency matrix, A_syn is the syntactic adjacency matrix, I is a unit matrix, D̃ is a degree matrix, W^(l) is the learnable parameter matrix of the l-th layer of the first graph convolutional neural network model, σ denotes the activation function, and the final-layer output H_syn represents the syntactic information of the sentence.
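The graph convolution of claim 3 can be sketched as follows. This is an illustrative, untrained toy (identity weights, ReLU assumed as the activation), not the patent's implementation: it adds self-loops, applies the symmetric normalization D^(-1/2)(A+I)D^(-1/2), multiplies by the hidden states and a weight matrix, and applies the activation.

```python
import math

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^{-1/2}(A + I)D^{-1/2} H W)."""
    n = len(A)
    A_t = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_t]                    # degree matrix diagonal
    A_norm = [[A_t[i][j] / math.sqrt(deg[i] * deg[j])  # symmetric normalization
               for j in range(n)] for i in range(n)]
    Z = matmul(matmul(A_norm, H), W)
    return [[max(v, 0.0) for v in row] for row in Z]   # ReLU activation

A = [[0.0, 1.0], [1.0, 0.0]]   # two connected words
H = [[1.0, 0.0], [0.0, 1.0]]   # toy hidden states
W = [[1.0, 0.0], [0.0, 1.0]]   # identity weights for illustration
out = gcn_layer(A, H, W)
```

Stacking several such layers over the syntax adjacency matrix yields the syntactic information H_syn.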
4. The emotion analysis method of claim 1, wherein the step of inputting the hidden state vector to a multi-head self-attention mechanism model to obtain a semantic map, and inputting the hidden state vector and the semantic map to a second map convolution neural network model to obtain semantic information of the sentence comprises:
inputting the hidden state vector into a multi-head self-attention mechanism model to obtain an initial semantic adjacency matrix of the semantic graph;
inputting the hidden state vector and the initial semantic adjacency matrix into an operational formula of the second graph convolution neural network model to obtain an output result of an initial layer of the second graph convolution neural network model;
inputting the initial semantic adjacency matrix and the output result of the initial layer of the second graph convolution neural network model into a multi-head self-attention mechanism model updating formula to obtain an updated semantic adjacency matrix;
repeatedly executing input operation on the updated semantic adjacency matrix and the output result of the initial layer of the second graph convolution neural network model until the output result of the output layer of the second graph convolution neural network model is obtained, and obtaining the semantic information of the sentence; wherein the formula for obtaining the initial semantic adjacency matrix of the semantic graph is as follows:
A^(0,t) = softmax( (H^(0) W^t_Q)(H^(0) W^t_K)^⊤ / √(d_head) ),  A^(0)_sem = top-k( (1/h) Σ_{t=1}^{h} A^(0,t) )
wherein H^s represents the hidden state vector, H^(0) is the first-layer input of the second graph convolutional neural network model, h is the number of heads of the multi-head self-attention, d is the hidden state vector dimension of each direction of the bidirectional LSTM network, d_head is the dimension of each self-attention head, A^(0,t) is the t-th self-attention matrix of the initial layer, W^t_Q is the first trainable parameter matrix and W^t_K the second trainable parameter matrix corresponding to the t-th self-attention matrix in the initial layer of the multi-head self-attention mechanism model, ⊤ denotes the matrix transpose, top-k keeps the largest k elements of the sorted matrix, and A^(0)_sem is the initial semantic adjacency matrix;
wherein, the operation formula of the second graph convolution neural network model is as follows:
Â_sem = D̃^(-1/2)(A^(0)_sem + I)D̃^(-1/2),  H^(1)_sem = σ(Â_sem H^(0) W^(1))
wherein Â_sem is the normalized adjacency matrix, I is a unit matrix, D̃ is a degree matrix, W^(1) is the learnable parameter matrix of the first layer of the second graph convolutional neural network model, σ denotes the activation function, and H^(1)_sem represents the output result of the initial layer of the second graph convolutional neural network model;
wherein, the multi-head self-attention mechanism model updating formula is as follows:
A^(l,t) = softmax( ReLU( (H^(l)_sem W^(l,t)_Q)(H^(l)_sem W^(l,t)_K)^⊤ ) / √(d_head) ),  Ã^(l) = (1/h)(A^(l,1) + ... + A^(l,h)),  A^(l)_sem = top-k(Ã^(l))
wherein Ã^(l) is formed from the h per-head self-attention matrices A^(l,1), ..., A^(l,h) ("[;]" denotes splicing), H^(l)_sem represents the l-th layer output of the second graph convolutional neural network model, A^(l,t) is the t-th self-attention matrix of the l-th layer, W^(l,t)_Q is the first trainable parameter matrix and W^(l,t)_K the second trainable parameter matrix corresponding to the t-th self-attention matrix in the l-th layer of the multi-head self-attention mechanism model, ⊤ denotes the matrix transpose, d_head is the dimension of each self-attention head, softmax and ReLU denote the activation functions, h is the number of heads of the multi-head self-attention, Ã^(l) is the intermediate result of the semantic adjacency matrix update, top-k keeps the largest k elements of the sorted matrix, and A^(l)_sem is the updated semantic adjacency matrix;
wherein, the formula for obtaining the output result of the output layer of the second graph convolution neural network model is as follows:
Â^(l)_sem = D̃^(-1/2)(A^(l)_sem + I)D̃^(-1/2),  H^(l+1)_sem = σ(Â^(l)_sem H^(l)_sem W^(l+1))
wherein Â^(l)_sem is the normalized adjacency matrix, A^(l)_sem is the updated semantic adjacency matrix, I is a unit matrix, D̃ is a degree matrix, W^(l+1) is the learnable parameter matrix of the (l+1)-th layer of the second graph convolutional neural network model, σ denotes the activation function, and the output-layer result H_sem represents the semantic information of the sentence.
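The semantic-graph construction of claim 4 can be sketched for a single self-attention head as below. This is a toy assumption-laden illustration (identity W_Q/W_K, one head instead of h averaged heads, no ReLU inside the scores); it shows the core mechanism: scaled dot-product attention scores, a row-wise softmax, then top-k pruning per row to sparsify the semantic adjacency matrix. Note that ties at the k-th value may keep more than k entries in this simple version.

```python
import math

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def attention_adjacency(H, Wq, Wk, k):
    """One-head semantic adjacency: row-softmax(QK^T / sqrt(d)), keep top-k per row."""
    Q, K = matmul(H, Wq), matmul(H, Wk)
    d = len(Q[0])
    scores = [[sum(q * t for q, t in zip(qr, kr)) / math.sqrt(d) for kr in K] for qr in Q]
    A = []
    for row in scores:                       # numerically stable row-wise softmax
        mx = max(row)
        exps = [math.exp(v - mx) for v in row]
        s = sum(exps)
        A.append([e / s for e in exps])
    for row in A:                            # top-k selection per row
        thresh = sorted(row, reverse=True)[:k][-1]
        for j, v in enumerate(row):
            if v < thresh:
                row[j] = 0.0
    return A

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # toy hidden states for 3 words
I2 = [[1.0, 0.0], [0.0, 1.0]]                # identity stand-ins for W_Q, W_K
A_sem = attention_adjacency(H, I2, I2, k=2)
```

The pruned matrix A_sem then plays the same role for the second graph convolutional network that the syntactic adjacency matrix plays for the first.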
5. The emotion analysis method of claim 1, wherein the step of inputting the hidden state vector, the syntax map and the semantic map into a shared map convolutional neural network model, and obtaining common information between the syntax map and the semantic map comprises:
inputting the hidden state vector and the syntactic graph into a shared graph convolution neural network model to obtain public information of the syntactic graph;
inputting the hidden state vector and the semantic graph into a shared graph convolution neural network model to obtain public information of the semantic graph;
inputting the public information of the syntactic graph and the public information of the semantic graph into a combined operation formula to obtain the public information between the syntactic graph and the semantic graph;
wherein the public information formula for obtaining the syntax diagram is as follows:
H^syn_com = GCN_com(A_syn, H^s; W_com)
wherein A_syn represents the adjacency matrix of the syntax graph, H^s represents the hidden state vector, W_com represents the learnable parameter matrix of the shared graph convolutional neural network model, H^syn_com represents the common information of the syntax graph, and GCN_com denotes the shared graph convolution module that obtains the common information of the syntax graph from the adjacency matrix of the syntax graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolutional neural network model;
wherein, the public information formula for obtaining the semantic graph is as follows:
H^sem_com = GCN_com(A_sem, H^s; W_com)
wherein A_sem represents the adjacency matrix of the semantic graph, H^s represents the hidden state vector, H^sem_com represents the common information of the semantic graph, and GCN_com denotes the shared graph convolution module that obtains the common information of the semantic graph from the adjacency matrix of the semantic graph, the hidden state vector, and the learnable parameter matrix of the shared graph convolutional neural network model;
wherein the combined operation formula is as follows:
H_com = H^syn_com + H^sem_com
wherein H_com represents the common information between the syntax graph and the semantic graph.
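The shared graph convolution of claim 5 can be sketched as below. The defining property is that the same parameter matrix W_com is applied over both graphs; the element-wise sum used to combine the two outputs is an assumption (the patent's combination formula is not reproduced here), and the inner convolution is a deliberately minimal unnormalized ReLU(A H W).

```python
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def shared_gcn(A, H, W):
    """A minimal (unnormalized) graph convolution: ReLU(A H W)."""
    Z = matmul(matmul(A, H), W)
    return [[max(v, 0.0) for v in row] for row in Z]

def common_information(A_syn, A_sem, H, W_com):
    Z_syn = shared_gcn(A_syn, H, W_com)   # common info seen through the syntax graph
    Z_sem = shared_gcn(A_sem, H, W_com)   # common info seen through the semantic graph
    # combine the two views (element-wise sum assumed)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(Z_syn, Z_sem)]

A_syn = [[1.0, 1.0], [1.0, 1.0]]
A_sem = [[1.0, 0.0], [0.0, 1.0]]
H = [[1.0, 0.0], [0.0, 1.0]]
W_com = [[1.0, 0.0], [0.0, 1.0]]
H_com = common_information(A_syn, A_sem, H, W_com)
```

Because W_com is shared, gradients from both graphs update the same parameters, which is what pushes the module toward information common to the two views.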
6. The emotion analysis method according to claim 1, wherein the step of inputting the syntactic information, the semantic information, and the public information into a mask model to perform averaging and pooling to obtain specific target information, and performing splicing and fusion on the specific target information to obtain the feature expression of a specific target includes:
inputting the syntax information, the semantic information and the public information into a mask model to perform average pooling to obtain syntax specific target information, semantic specific target information and public specific target information;
splicing the syntax specific target information, the semantic specific target information and the public specific target information to obtain specific target representation;
inputting the specific target representation into a multilayer neural network fusion formula to obtain the characteristic expression of the specific target;
the obtaining of the syntax specific target information, the semantic specific target information and the common specific target information is formulated as follows:
h^a_syn = f(M(H_syn)),  h^a_sem = f(M(H_sem)),  h^a_com = f(M(H_com)),  f(H) = (1/m) Σ_{i=a_1}^{a_m} h_i
wherein M is the output function of the mask model, which retains the hidden vectors of the specific target words and masks the rest, a is the subscript denoting a specific target in the sentence to be subjected to emotion analysis, a_1 is the subscript denoting the 1st word of the specific target, a_m is the subscript denoting the m-th word of the specific target, i is the index of the specific target, m indicates the number of words of the specific target, f is the average pooling function, H_syn represents the syntactic information, H_com represents the common information between the syntax graph and the semantic graph, H_sem represents the semantic information, h^a_syn represents the syntax-specific target information, h^a_sem represents the semantic-specific target information, and h^a_com represents the common-specific target information;
wherein the feature expression of the specific target is obtained by the splicing formula as follows:
r = [h^a_syn; h^a_com; h^a_sem]
wherein the multilayer neural network fusion formula is as follows:
r' = σ(W_f r + b_f)
wherein "[;]" denotes splicing, W_f and b_f are the learnable parameters of the multilayer fusion network, σ denotes the activation function, and r' is the feature expression of the specific target.
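The mask, average-pooling, and splicing steps of claim 6 can be sketched as below (names and toy values are illustrative). Only the hidden vectors at the aspect-word positions survive the mask; each of the three representations is average-pooled over those positions, and the three pooled vectors are spliced into one feature vector.

```python
def aspect_feature(H_syn, H_sem, H_com, aspect_positions):
    """Mask to the aspect words, average-pool each representation, splice the three."""
    def masked_avg_pool(H):
        rows = [H[i] for i in aspect_positions]      # mask: keep aspect-word rows only
        m = len(rows)
        return [sum(col) / m for col in zip(*rows)]  # average pooling over the aspect
    return masked_avg_pool(H_syn) + masked_avg_pool(H_sem) + masked_avg_pool(H_com)

H_syn = [[1.0, 1.0], [3.0, 3.0], [9.0, 9.0]]
H_sem = [[2.0, 0.0], [4.0, 2.0], [9.0, 9.0]]
H_com = [[0.0, 0.0], [2.0, 2.0], [9.0, 9.0]]
# words 0 and 1 form the specific target; word 2 is masked out
r = aspect_feature(H_syn, H_sem, H_com, aspect_positions=[0, 1])
```

The spliced vector r is what the fusion network and the final fully-connected layer consume.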
7. The emotion analysis method of claim 1, wherein the step of inputting the feature expression of the specific target into a fully-connected network for probability calculation to obtain the emotion analysis result of the specific target comprises:
inputting the feature expression of the specific target into the softmax layer of the fully-connected network for probability calculation to obtain the emotion analysis result of the specific target; wherein the softmax layer operation formula is as follows:
ŷ = softmax(W_p r' + b_p)
wherein W_p is the learnable weight matrix and b_p the bias of the fully-connected layer, and ŷ is the predicted sentiment probability distribution of the specific target.
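The final probability calculation of claim 7 can be sketched as a fully-connected layer followed by a softmax over the sentiment classes. The weights, bias, and class labels below are toy assumptions for illustration only.

```python
import math

def softmax_classify(r, W, b):
    """Fully-connected layer then softmax: y = softmax(W r + b)."""
    logits = [sum(w * x for w, x in zip(row, r)) + bi for row, bi in zip(W, b)]
    mx = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(v - mx) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

labels = ["negative", "neutral", "positive"]
W = [[0.1, -0.2], [0.0, 0.0], [0.3, 0.4]]     # 3 classes x 2 feature dims (toy)
b = [0.0, 0.0, 0.1]
probs = softmax_classify([1.0, 2.0], W, b)
pred = labels[probs.index(max(probs))]
```

The predicted class is the argmax of the probability distribution, which is the emotion analysis result of the specific target.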
8. An emotion analysis device, comprising:
the hidden state acquisition module is used for acquiring a word vector of a sentence to be subjected to emotion analysis, and inputting the word vector into the bidirectional LSTM neural network to obtain a hidden state vector corresponding to the word vector;
the dependency syntax tree conversion module is used for acquiring a dependency syntax tree corresponding to the sentence and converting the dependency syntax tree into a syntax graph;
a syntax information obtaining module, configured to input the hidden state vector and the syntax map into a first map convolution neural network model, so as to obtain syntax information of the sentence;
a semantic information obtaining module, configured to input the hidden state vector to a multi-head self-attention mechanism model to obtain a semantic graph, and input the hidden state vector and the semantic graph to a second graph convolution neural network model to obtain semantic information of the sentence;
a public information obtaining module, configured to input the hidden state vector, the syntax diagram, and the semantic diagram into a shared diagram convolutional neural network model, and obtain public information between the syntax diagram and the semantic diagram;
the feature expression obtaining module is used for inputting the syntactic information, the semantic information, and the public information into a mask model for average pooling to obtain specific target information, and splicing and fusing the specific target information to obtain the feature expression of a specific target;
and the emotion analysis module is used for inputting the characteristic expression to a full-connection network for probability calculation to obtain an emotion analysis result of the specific target.
9. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the sentiment analysis method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a sentiment analysis method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110535102.3A CN112966074B (en) | 2021-05-17 | 2021-05-17 | Emotion analysis method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112966074A true CN112966074A (en) | 2021-06-15 |
CN112966074B CN112966074B (en) | 2021-08-03 |
Family
ID=76279804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110535102.3A Active CN112966074B (en) | 2021-05-17 | 2021-05-17 | Emotion analysis method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112966074B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113449110A (en) * | 2021-08-30 | 2021-09-28 | 华南师范大学 | Emotion classification method and device, storage medium and computer equipment |
CN113571097A (en) * | 2021-09-28 | 2021-10-29 | 之江实验室 | Speaker self-adaptive multi-view dialogue emotion recognition method and system |
CN113674767A (en) * | 2021-10-09 | 2021-11-19 | 复旦大学 | Depression state identification method based on multi-modal fusion |
CN113765928A (en) * | 2021-09-10 | 2021-12-07 | 湖南工商大学 | Internet of things intrusion detection method, system, equipment and medium |
CN113761941A (en) * | 2021-11-09 | 2021-12-07 | 华南师范大学 | Text emotion analysis method |
CN114048730A (en) * | 2021-11-05 | 2022-02-15 | 光大科技有限公司 | Word segmentation and entity combined recognition model training method and device |
CN115510226A (en) * | 2022-09-02 | 2022-12-23 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Emotion classification method based on graph neural network |
CN115659951A (en) * | 2022-12-26 | 2023-01-31 | 华南师范大学 | Statement emotion analysis method, device and equipment based on label embedding |
CN115712726A (en) * | 2022-11-08 | 2023-02-24 | 华南师范大学 | Emotion analysis method, device and equipment based on bigram embedding |
CN115905524A (en) * | 2022-11-07 | 2023-04-04 | 华南师范大学 | Emotion analysis method, device and equipment integrating syntactic and semantic information |
CN116029294A (en) * | 2023-03-30 | 2023-04-28 | 华南师范大学 | Term pairing method, device and equipment |
CN116304748A (en) * | 2023-05-17 | 2023-06-23 | 成都工业学院 | Text similarity calculation method, system, equipment and medium |
CN117171610A (en) * | 2023-08-03 | 2023-12-05 | 江南大学 | Knowledge enhancement-based aspect emotion triplet extraction method and system |
CN117911161A (en) * | 2024-01-25 | 2024-04-19 | 广东顺银产融投资有限公司 | Project investment decision method, device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140056715A (en) * | 2012-10-31 | 2014-05-12 | 에스케이플래닛 주식회사 | An apparatus for opinion mining based on hierarchical categories and a method thereof |
CN111259142A (en) * | 2020-01-14 | 2020-06-09 | 华南师范大学 | Specific target emotion classification method based on attention coding and graph convolution network |
CN112528672A (en) * | 2020-12-14 | 2021-03-19 | 北京邮电大学 | Aspect-level emotion analysis method and device based on graph convolution neural network |
2021-05-17: application CN202110535102.3A filed (granted as CN112966074B, status: active).
Non-Patent Citations (1)
Title |
---|
ZHANG, ZUFAN et al.: "Textual sentiment analysis via three different attention convolutional neural networks and cross-modality consistent regression", Elsevier * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113449110A (en) * | 2021-08-30 | 2021-09-28 | 华南师范大学 | Emotion classification method and device, storage medium and computer equipment |
CN113765928B (en) * | 2021-09-10 | 2023-03-24 | 湖南工商大学 | Internet of things intrusion detection method, equipment and medium |
CN113765928A (en) * | 2021-09-10 | 2021-12-07 | 湖南工商大学 | Internet of things intrusion detection method, system, equipment and medium |
CN113571097A (en) * | 2021-09-28 | 2021-10-29 | 之江实验室 | Speaker self-adaptive multi-view dialogue emotion recognition method and system |
CN113674767A (en) * | 2021-10-09 | 2021-11-19 | 复旦大学 | Depression state identification method based on multi-modal fusion |
CN114048730A (en) * | 2021-11-05 | 2022-02-15 | 光大科技有限公司 | Word segmentation and entity combined recognition model training method and device |
CN113761941A (en) * | 2021-11-09 | 2021-12-07 | 华南师范大学 | Text emotion analysis method |
CN115510226A (en) * | 2022-09-02 | 2022-12-23 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Emotion classification method based on graph neural network |
CN115510226B (en) * | 2022-09-02 | 2023-11-10 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Emotion classification method based on graph neural network |
CN115905524A (en) * | 2022-11-07 | 2023-04-04 | 华南师范大学 | Emotion analysis method, device and equipment integrating syntactic and semantic information |
CN115905524B (en) * | 2022-11-07 | 2023-10-03 | 华南师范大学 | Emotion analysis method, device and equipment integrating syntactic and semantic information |
CN115712726A (en) * | 2022-11-08 | 2023-02-24 | 华南师范大学 | Emotion analysis method, device and equipment based on bigram embedding |
CN115712726B (en) * | 2022-11-08 | 2023-09-12 | 华南师范大学 | Emotion analysis method, device and equipment based on bigram embedding |
CN115659951A (en) * | 2022-12-26 | 2023-01-31 | 华南师范大学 | Statement emotion analysis method, device and equipment based on label embedding |
CN116029294A (en) * | 2023-03-30 | 2023-04-28 | 华南师范大学 | Term pairing method, device and equipment |
CN116304748A (en) * | 2023-05-17 | 2023-06-23 | 成都工业学院 | Text similarity calculation method, system, equipment and medium |
CN116304748B (en) * | 2023-05-17 | 2023-07-28 | 成都工业学院 | Text similarity calculation method, system, equipment and medium |
CN117171610A (en) * | 2023-08-03 | 2023-12-05 | 江南大学 | Knowledge enhancement-based aspect emotion triplet extraction method and system |
CN117171610B (en) * | 2023-08-03 | 2024-05-03 | 江南大学 | Knowledge enhancement-based aspect emotion triplet extraction method and system |
CN117911161A (en) * | 2024-01-25 | 2024-04-19 | 广东顺银产融投资有限公司 | Project investment decision method, device and storage medium |
CN117911161B (en) * | 2024-01-25 | 2024-06-21 | 广东顺银产融投资有限公司 | Project investment decision method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112966074B (en) | 2021-08-03 |
Similar Documents
Publication | Title |
---|---|
CN112966074B (en) | Emotion analysis method and device, electronic equipment and storage medium |
CN111259142B (en) | Specific target emotion classification method based on attention coding and graph convolution network |
CN112084331B (en) | Text processing and model training method and device, computer equipment and storage medium |
CN107516110B (en) | Medical question-answer semantic clustering method based on integrated convolutional coding |
CN110532353B (en) | Text entity matching method, system and device based on deep learning |
CN111061843A (en) | Knowledge graph guided false news detection method |
CN112749274B (en) | Chinese text classification method based on attention mechanism and interference word deletion |
CN109214006B (en) | Natural language reasoning method for image enhanced hierarchical semantic representation |
CN110659742A (en) | Method and device for acquiring sequence representation vector of user behavior sequence |
CN110619044A (en) | Emotion analysis method, system, storage medium and equipment |
CN112861522B (en) | Aspect-level emotion analysis method, system and model based on dual-attention mechanism |
CN111460783B (en) | Data processing method and device, computer equipment and storage medium |
CN114443899A (en) | Video classification method, device, equipment and medium |
CN118113855B (en) | Ship test training scene question answering method, system, equipment and medium |
Ciaburro et al. | Python Machine Learning Cookbook: Over 100 recipes to progress from smart data analytics to deep learning using real-world datasets |
CN116975350A (en) | Image-text retrieval method, device, equipment and storage medium |
CN112749737A (en) | Image classification method and device, electronic equipment and storage medium |
CN116150367A (en) | Aspect-based emotion analysis method and system |
CN116561272A (en) | Open domain visual language question-answering method and device, electronic equipment and storage medium |
CN111611796A (en) | Hypernym determination method and device for hyponym, electronic device and storage medium |
CN115186085A (en) | Reply content processing method and interaction method of media content interaction content |
CN115758159B (en) | Zero sample text position detection method based on mixed contrast learning and generation type data enhancement |
Kalangi et al. | Sentiment Analysis using Machine Learning |
CN117009516A (en) | Converter station fault strategy model training method, pushing method and device |
Wakchaure et al. | A scheme of answer selection in community question answering using machine learning techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||