CN113449110A - Emotion classification method and device, storage medium and computer equipment - Google Patents

Emotion classification method and device, storage medium and computer equipment Download PDF

Info

Publication number
CN113449110A
CN113449110A CN202111000019.2A CN202111000019A CN113449110A CN 113449110 A CN113449110 A CN 113449110A CN 202111000019 A CN202111000019 A CN 202111000019A CN 113449110 A CN113449110 A CN 113449110A
Authority
CN
China
Prior art keywords
semantic
information
syntactic
graph
syntax
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111000019.2A
Other languages
Chinese (zh)
Other versions
CN113449110B (en
Inventor
燕泽昊
庞士冠
薛云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202111000019.2A priority Critical patent/CN113449110B/en
Publication of CN113449110A publication Critical patent/CN113449110A/en
Application granted granted Critical
Publication of CN113449110B publication Critical patent/CN113449110B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention relates to an emotion classification method, device, storage medium and computer equipment, which is characterized in that a vector representation of an input text is obtained by encoding the input text by utilizing a pre-trained BERT model, syntax information is extracted from a syntax graph generated based on syntax dependence analysis by utilizing a syntax graph convolution network, semantic information is extracted from a weighted semantic similarity graph generated based on a self-attention mechanism by utilizing a semantic graph convolution network, the syntax information and the semantic information are interactively fused by utilizing an exchange module, and the fused syntax information and semantic information are obtained; extracting the syntactic characteristics and semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism; the combined features obtained after the syntactic features and the semantic features are weighted and summed are input into the full-connection layer for emotion classification, emotion polarity information is obtained, and compared with the prior art, the accuracy of a specific emotion target classification task is improved.

Description

Emotion classification method and device, storage medium and computer equipment
Technical Field
The invention relates to the field of natural language processing, in particular to an emotion classification method, device, storage medium and computer equipment.
Background
Emotion classification refers to predicting the emotion polarity (positive, negative, and neutral) to which a particular target word of a sentence corresponds. In recent years, the graph neural network is widely applied to aspect-level emotion analysis with strong performance. The graph convolution-based method can effectively extract syntactic information from the dependency tree,
however, for some sentences lacking obvious syntactic features, the accuracy of the dependency parsing may not be satisfactory. For example, a dependency tree extracted from "has Halloween all put away and fall deco up, partitioning my new PSP." may contain a lot of noise. Second, syntax and semantics interact, both being related and distinct. Therefore, the method based on graph convolution cannot sufficiently analyze the internal rules of sentences to obtain accurate emotion classification information.
Disclosure of Invention
The embodiment of the application provides an emotion classification method, an emotion classification device, a storage medium and computer equipment, which can improve the accuracy of emotion classification. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an emotion classification method, including the following steps:
coding an input text by utilizing a pre-trained BERT model to obtain vector representation of each word of the input text;
performing syntactic dependency analysis on the input text to acquire a syntactic dependency relationship of the input text; generating a syntactic graph represented by each vector by taking the vector representation as a graph node and taking the corresponding syntactic dependency represented by the vector representation as an edge;
based on a self-attention mechanism, obtaining a semantic adjacency matrix represented by a vector, and generating a weighted semantic similarity graph;
extracting syntactic information from the syntactic graph by using a syntactic graph convolution network, and extracting semantic information from the weighted semantic similarity graph by using a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
extracting the syntactic characteristics and the semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism;
carrying out weighted summation on the syntactic characteristics and the semantic characteristics to obtain joint characteristics;
and inputting the combined features into a full-connection layer for emotion classification to acquire emotion polarity information.
In a second aspect, an embodiment of the present application provides an emotion classification apparatus, including:
the vector representation acquisition module is used for encoding the input text by utilizing a pre-trained BERT model and acquiring the vector representation of each word of the input text;
the syntactic graph obtaining module is used for carrying out syntactic dependency analysis on the input text and obtaining syntactic dependency of the input text; taking vector representation as a graph node, taking the corresponding syntactic dependency relationship represented by the vector as an edge, and acquiring a syntactic graph represented by each vector;
the similarity graph acquisition module is used for acquiring a semantic adjacency matrix represented by the vector based on a self-attention mechanism and generating a weighted semantic similarity graph;
the information acquisition module is used for extracting syntactic information from the syntactic graph by utilizing a syntactic graph convolution network and extracting semantic information from the weighted semantic similarity graph by utilizing a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
the feature extraction module is used for extracting the syntactic features and the semantic features of the target words in the syntactic information and the semantic information based on an attention mechanism;
the joint characteristic acquisition module is used for carrying out weighted summation on the syntactic characteristics and the semantic characteristics to acquire joint characteristics;
and the emotion classification acquisition module is used for inputting the combined features into the full-connection layer for emotion classification to acquire emotion polarity information.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the emotion classification method as described in any one of the above.
In a fourth aspect, the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable by the processor, and the processor implements the steps of the emotion classification method as described in any one of the above when executing the computer program.
In the embodiment of the application, vector representation of an input text is obtained by coding the input text by using a pre-trained BERT model, syntax information is extracted from a syntax graph generated based on syntax dependence analysis by using a syntax graph convolution network, semantic information is extracted from a weighted semantic similarity graph generated based on a self-attention mechanism by using a semantic graph convolution network, the syntax information and the semantic information are interactively fused by using an exchange module, and the fused syntax information and semantic information are obtained; extracting the syntactic characteristics and semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism; compared with the prior art, the method and the device have the advantages that the syntactic features can be supplemented by the semantic information, the syntactic information and the semantic information are flexibly combined in a dynamic communication mode, and the accuracy of specific emotion target classification tasks is improved.
For a better understanding of location and implementation, the present invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of a sentiment classification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an emotion classification apparatus according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the masking experiment results using the emotion classification method of the present invention and a conventional emotion classification model in one embodiment;
FIG. 4 is a diagram illustrating the masking test results using the emotion classification method of the present invention and the existing emotion classification model in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as utilized herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used merely for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order, nor are the terms "positions" to indicate or imply relative importance. To those of ordinary skill in the art, the above terms may be used in the present application in any particular sense depending on the particular situation.
In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
As shown in fig. 1, an embodiment of the present application provides an emotion classification method, including the following steps:
step S1: coding an input text by utilizing a pre-trained BERT model to obtain vector representation of each word of the input text;
the input text can be text content input by a user at a terminal such as a mobile phone, a computer and the like or other equipment with a text input function.
In one embodiment, the input text may be a social media comment, wherein the social media comment may be collected from a network by using a data capture technology such as a crawler, and the emotion classification method of the present application is used to perform target-specific emotion classification on the input text, so as to obtain emotion polarity information of a target word by a user, and the emotion polarity information feeds back viewpoint tendency and emotion information of the user, and has a wide application prospect in the fields of topic tracking discovery, public opinion tracking, opinion polling, targeted advertisement delivery, after-sale service evaluation, and the like.
The BERT model (Bidirectional Encoder reproduction from transforms) encodes through an input text and outputs vector Representation of each character/word in the text after semantic information is fused. The vector representation may include a text vector characterizing global semantic information of the text and a position vector determining position information of each input word in the text.
In the embodiment of the application, the input text is a text containing target words
Figure 486379DEST_PATH_IMAGE001
A sentence of length n
Figure 211758DEST_PATH_IMAGE003
Inputting the input text into a BERT model for word coding to obtain a vector representation of each word in the sentence
Figure 143942DEST_PATH_IMAGE004
Wherein the vector of the output represents
Figure 753915DEST_PATH_IMAGE006
Also includes a target word subsequence
Figure 409150DEST_PATH_IMAGE007
Step S2: performing syntactic dependency analysis on the input text to acquire a syntactic dependency relationship of the input text; generating a syntactic graph represented by each vector by taking the vector representation as a graph node and taking the corresponding syntactic dependency represented by the vector representation as an edge;
in one embodiment, the input text is subjected to syntactic dependency parsing by using a Stanford parser, wherein the Stanford parser is an open source syntactic parser based on probability statistics syntactic analysis developed by the Natural language processing group of Stanford university. Specifically, the step of generating a syntax map for each vector representation includes:
a syntax diagram is generated in the following manner:
Figure 446376DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure 69118DEST_PATH_IMAGE009
in order to be a syntactic graph,
Figure 482782DEST_PATH_IMAGE011
for the purpose of a vector representation of the input text,
Figure 225479DEST_PATH_IMAGE012
is represented by a vector
Figure 699185DEST_PATH_IMAGE014
The adjacency matrix of (a);
Figure 543645DEST_PATH_IMAGE015
wherein the content of the first and second substances,
Figure 495420DEST_PATH_IMAGE016
the adjacency matrix for graph node i and graph node j,
Figure 919055DEST_PATH_IMAGE018
indicating that a dependency exists between graph node i and graph node j.
Step S3: based on a self-attention mechanism, obtaining a semantic adjacency matrix represented by a vector, and generating a weighted semantic similarity graph;
in some cases, the syntactic dependency analysis can introduce a lot of noise, thereby affecting the accuracy of emotion classification. In addition, in many cases, it is difficult to explain the internal rules of the input text only by syntactic dependency analysis, so in the embodiment of the present application, semantic connections between words are constructed by calculating semantic similarities in the input text, a semantic adjacency matrix is supplemented by using an attention mechanism to obtain a weighted semantic similarity map, and syntactic information is supplemented by extracting semantic relationships contained in sentences to improve the accuracy of emotion classification.
The attention mechanism can increase the accuracy of classification by increasing the weight coefficient of important information to focus the model on more important parts. Specifically, the step of generating the weighted semantic similarity graph includes:
mapping the vector representation to K d-dimensional semantic spaces in the following manner;
Figure 173450DEST_PATH_IMAGE019
wherein the content of the first and second substances,
Figure 364260DEST_PATH_IMAGE021
for the purpose of a vector representation of the input text,
Figure 244360DEST_PATH_IMAGE022
is a vector representation of the kth semantic space,
Figure 102595DEST_PATH_IMAGE023
Figure 793470DEST_PATH_IMAGE025
the corresponding bias vector is represented for the vector of the kth semantic space,
Figure 737155DEST_PATH_IMAGE027
the corresponding mapping matrix is represented for the vector of the kth semantic space,
Figure 656832DEST_PATH_IMAGE029
is a non-linear activation function;
obtaining semantic adjacency matrixes corresponding to K semantic spaces according to the following modes:
Figure 369573DEST_PATH_IMAGE030
wherein the content of the first and second substances,
Figure 28087DEST_PATH_IMAGE032
a semantic adjacency matrix for node i and node j,
Figure 583703DEST_PATH_IMAGE034
in order to be a preset threshold value, the threshold value is set,
Figure 680972DEST_PATH_IMAGE036
for the vector representation of node i in the kth semantic space,
Figure 123585DEST_PATH_IMAGE038
vector representation in the kth semantic space for node j;
acquiring a weighted similarity adjacency matrix according to the following modes:
Figure 280897DEST_PATH_IMAGE039
wherein the content of the first and second substances,
Figure 822344DEST_PATH_IMAGE041
for the weighted similarity adjacency matrix for node i and node j,
Figure 457724DEST_PATH_IMAGE042
the attention weight coefficients for node i and node j are obtained as follows:
Figure 20424DEST_PATH_IMAGE043
wherein the content of the first and second substances,
Figure 348637DEST_PATH_IMAGE045
for the attention weight matrix of the node i,
Figure 347686DEST_PATH_IMAGE046
is the attention weight matrix for node j, d is the dimension of the semantic space,
Figure 317916DEST_PATH_IMAGE047
is composed of
Figure 469543DEST_PATH_IMAGE048
The transposed matrix of (2);
acquiring a weighted semantic similarity map according to the following modes:
Figure 968657DEST_PATH_IMAGE049
wherein the content of the first and second substances,
Figure 222046DEST_PATH_IMAGE050
is a weighted semantic similarity graph.
Mapping the vector representation of each word in the step S1 to K d-dimensional semantic spaces to capture semantic representations in different forms, automatically learning the semantic connection strength between word pairs by using an attention mechanism to obtain semantic adjacency matrixes corresponding to the K semantic spaces, and averaging and summing the semantic adjacency matrixes of the K semantic spaces to obtain a weighted semantic similarity adjacency matrix
Figure 199229DEST_PATH_IMAGE052
And obtaining a weighted semantic similarity map according to the weighted semantic similarity map
Figure 736521DEST_PATH_IMAGE053
Step S4: extracting syntactic information from the syntactic graph by using a syntactic graph convolution network, and extracting semantic information from the weighted semantic similarity graph by using a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
in one embodiment, the syntactic graph convolution network comprises L layers of convolution layers, and each layer outputs syntax information of L layers respectively
Figure 406537DEST_PATH_IMAGE054
Said step of extracting syntax information from said syntax map using a syntax map convolutional network, comprising:
syntax information is extracted in the following manner:
Figure 645757DEST_PATH_IMAGE055
wherein the content of the first and second substances,
Figure 426631DEST_PATH_IMAGE057
for the non-linear activation function, in the present embodiment,
Figure 818430DEST_PATH_IMAGE059
it may be a function of the ReLU,
Figure 659347DEST_PATH_IMAGE060
convolution of network for syntax diagrams
Figure 681136DEST_PATH_IMAGE062
The syntax information extracted by the layer is,
Figure 468963DEST_PATH_IMAGE064
convolution of network for syntax diagrams
Figure 512006DEST_PATH_IMAGE066
The syntax information extracted by the layer is,
Figure 648458DEST_PATH_IMAGE068
is that
Figure 472057DEST_PATH_IMAGE070
The symmetric normalized adjacency matrix of (a) is,
Figure 470100DEST_PATH_IMAGE072
is a syntactic graph convolution network
Figure 695545DEST_PATH_IMAGE074
The weight matrix of the layer or layers is,
Figure 504364DEST_PATH_IMAGE076
is an identity matrix;
the step of extracting semantic information from the weighted semantic similarity graph by using a semantic graph convolution network comprises the following steps:
semantic information is extracted in the following way:
Figure 346418DEST_PATH_IMAGE077
wherein the content of the first and second substances,
Figure 148152DEST_PATH_IMAGE078
convolution of network for semantic graph
Figure 962524DEST_PATH_IMAGE080
The semantic information extracted by the layer(s),
Figure 706358DEST_PATH_IMAGE081
convolution of network for semantic graph
Figure 504550DEST_PATH_IMAGE083
The semantic information extracted by the layer(s),
Figure 844395DEST_PATH_IMAGE084
convolution of network for semantic graph
Figure 44432DEST_PATH_IMAGE080
The weight matrix of the layer or layers is,
Figure 254440DEST_PATH_IMAGE085
is that
Figure 149715DEST_PATH_IMAGE086
The symmetric normalized adjacency matrix of (a).
The exchange module is used for interactively fusing the syntax information extracted by the syntax graph convolution network and the semantic information extracted by the semantic graph convolution network, and the syntax information and the semantic information after exchange and fusion are combined with the mutual influence of syntax and semantics, so that the emotion polarity of a user can be reflected more accurately, and the emotion classification accuracy is improved.
Specifically, the step of acquiring the merged syntax information and semantic information includes:
acquiring the merged syntax information according to the following modes:
Figure 417886DEST_PATH_IMAGE087
wherein the content of the first and second substances,
Figure 128221DEST_PATH_IMAGE089
in order to be the syntax information after the merging,
Figure 495749DEST_PATH_IMAGE090
convolution of network for semantic graph
Figure 629052DEST_PATH_IMAGE092
The semantic information extracted by the layer(s),
Figure 700913DEST_PATH_IMAGE094
for syntactic fusion coefficients, the following is obtained:
Figure 203439DEST_PATH_IMAGE095
wherein the content of the first and second substances,
Figure 210709DEST_PATH_IMAGE096
convolution of network for syntax diagrams
Figure 205210DEST_PATH_IMAGE098
The syntax information extracted by the layer is,
Figure 524106DEST_PATH_IMAGE099
is composed of
Figure 163029DEST_PATH_IMAGE100
The transpose matrix of (a) is,
Figure 200255DEST_PATH_IMAGE102
in order to syntactically fuse the weight matrices,
Figure 869003DEST_PATH_IMAGE104
fusing the bias parameters for syntax;
acquiring the fused semantic information according to the following modes:
Figure 892454DEST_PATH_IMAGE105
wherein the content of the first and second substances,
Figure 776096DEST_PATH_IMAGE107
in order to obtain the fused semantic information,
Figure 610322DEST_PATH_IMAGE108
for semantic fusion weight coefficients, the following method is used for obtaining:
Figure 579415DEST_PATH_IMAGE109
wherein the content of the first and second substances,
Figure 468874DEST_PATH_IMAGE110
is composed of
Figure 66077DEST_PATH_IMAGE111
The transpose matrix of (a) is,
Figure 710685DEST_PATH_IMAGE113
in order to fuse the weight matrix for the semantics,
Figure 42440DEST_PATH_IMAGE115
bias parameters are fused for semantics.
Step S5: extracting the syntactic characteristics and the semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism;
the target word can be input into a target in the text to be subjected to emotion analysis, for example, when the input text is a comment, the target word can be food, and emotion polarity information of the target word is obtained by performing emotion analysis on words related to the food in the input text. The syntactic characteristics and the semantic characteristics are fully extracted based on an attention mechanism, the representation of target words and contexts of each level can be fused, and the coding capability of the network is enhanced.
In one embodiment, the step of extracting the syntactic characteristics and semantic characteristics of the target word in the syntactic information and the semantic information based on the attention mechanism specifically includes:
obtaining the output syntactic characteristic weight of each layer of the syntactic graph convolution network according to the following modes:
Figure 797907DEST_PATH_IMAGE116
wherein the content of the first and second substances,
Figure 76048DEST_PATH_IMAGE118
convolution of network for syntax diagrams
Figure 766924DEST_PATH_IMAGE080
Syntactic characteristic weight of ith node of layer output; wherein the content of the first and second substances,
Figure 648292DEST_PATH_IMAGE120
the larger the value is
Figure 128821DEST_PATH_IMAGE080
The characteristics of the layer are more important.
Figure 841562DEST_PATH_IMAGE122
For the intermediate parameter, the following method is adopted:
Figure 703339DEST_PATH_IMAGE123
wherein the content of the first and second substances,
Figure 868741DEST_PATH_IMAGE124
convolution of network for syntax diagrams
Figure 592108DEST_PATH_IMAGE080
Syntactic characteristics of the ith node of the layer output,
Figure 424935DEST_PATH_IMAGE125
convolution of network for syntax diagrams
Figure 457613DEST_PATH_IMAGE127
Syntactic characteristics of ith node of layer output
Figure 844732DEST_PATH_IMAGE129
The transposed matrix of (2);
the syntactic characteristics are obtained in the following way:
Figure 870326DEST_PATH_IMAGE130
wherein the content of the first and second substances,
Figure 823238DEST_PATH_IMAGE132
for syntactic characteristics of the ith node of a syntactic graph convolutional network,
Figure 26818DEST_PATH_IMAGE133
convolution of network for syntax diagrams
Figure 166812DEST_PATH_IMAGE127
Syntactic characteristics of the ith node of the layer output. Obtaining the syntactic characteristics of the n nodes of the syntactic graph convolution network according to the mode
Figure 963473DEST_PATH_IMAGE134
In one embodiment, the syntactic characteristics include a word vector representation of a syntactic target word
Figure 505313DEST_PATH_IMAGE135
Word vector representation in syntactic context
Figure DEST_PATH_IMAGE137
. Specifically, a word vector representation of a syntactic target word is obtained in the following manner
Figure 410952DEST_PATH_IMAGE138
Wherein the content of the first and second substances,
Figure 162876DEST_PATH_IMAGE140
for a word vector representation of a syntactic target word,
Figure DEST_PATH_IMAGE141
for syntactic characteristics of the ith node of a syntactic graph convolutional network,
Figure DEST_PATH_IMAGE143
is the average of the pooling functions,
the word vector representation of the syntactic context is obtained in the following way:
Figure 766158DEST_PATH_IMAGE144
wherein the content of the first and second substances,
Figure 428084DEST_PATH_IMAGE146
a word vector representation for a syntactic context,
Figure 973466DEST_PATH_IMAGE148
,
Figure DEST_PATH_IMAGE149
for syntactic weight vectors, we obtain as follows:
Figure 212686DEST_PATH_IMAGE150
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE151
in order to be a syntactic weight matrix,
Figure 462402DEST_PATH_IMAGE152
is composed of
Figure DEST_PATH_IMAGE153
The transposed matrix of (2).
The semantic features include word vector representations of semantic target words
Figure DEST_PATH_IMAGE155
Word vector representation in semantic context
Figure DEST_PATH_IMAGE157
Word vector representation of the semantic target word
Figure 70844DEST_PATH_IMAGE155
Word vector representation in semantic context
Figure 115024DEST_PATH_IMAGE157
The above syntactic feature extraction formula extraction may be referred to, and details are not repeated here.
Step S6: carrying out weighted summation on the syntactic characteristics and the semantic characteristics to obtain joint characteristics;
specifically, the joint features are obtained in the following manner:
Figure 575961DEST_PATH_IMAGE158
wherein the content of the first and second substances,
Figure 832630DEST_PATH_IMAGE160
for the sake of the combined features,
Figure 203568DEST_PATH_IMAGE162
in order to be a non-linear activation function,
Figure 841485DEST_PATH_IMAGE164
in order to be a weight matrix, the weight matrix,
Figure 930664DEST_PATH_IMAGE166
as a function of the offset parameter(s),
Figure 928707DEST_PATH_IMAGE168
for the first feature, the acquisition is made in the following manner
Figure DEST_PATH_IMAGE169
Wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE171
in order to learn the parameters, the user may,
Figure DEST_PATH_IMAGE173
for the purpose of the syntactic characteristics,
Figure DEST_PATH_IMAGE175
is a semantic feature.
Step S7: and inputting the combined features into a full-connection layer for emotion classification to acquire emotion polarity information.
And inputting the combined features into the full-connection layer to calculate the probability of different emotion polarities so as to acquire emotion polarity information. Specifically, emotion polarity information is acquired in the following manner:
Figure 544365DEST_PATH_IMAGE176
wherein the content of the first and second substances,
Figure 84674DEST_PATH_IMAGE160
for the sake of the combined features,
Figure 661149DEST_PATH_IMAGE178
is a weight matrix of the fully-connected layer,
Figure 462883DEST_PATH_IMAGE180
as a function of the offset parameter(s),
Figure 808414DEST_PATH_IMAGE182
is emotion polarity information.
In the embodiment of the application, vector representation of an input text is obtained by coding the input text by using a pre-trained BERT model, syntax information is extracted from a syntax graph generated based on syntax dependence analysis by using a syntax graph convolution network, semantic information is extracted from a weighted semantic similarity graph generated based on a self-attention mechanism by using a semantic graph convolution network, the syntax information and the semantic information are interactively fused by using an exchange module, and the fused syntax information and semantic information are obtained; extracting the syntactic characteristics and semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism; compared with the prior art, the method and the device have the advantages that the syntactic features can be supplemented by the semantic information, the syntactic information and the semantic information are flexibly combined in a dynamic communication mode, the accuracy rate of a specific emotion target classification task is improved, and the method and the device can be suitable for most specific target emotion classification data sets.
As shown in fig. 2, an embodiment of the present application further provides an emotion classification apparatus, including:
the vector representation acquisition module 1 is used for encoding an input text by using a pre-trained BERT model and acquiring the vector representation of each word of the input text;
the syntactic graph obtaining module 2 is used for performing syntactic dependency analysis on the input text to obtain a syntactic dependency relationship of the input text; taking vector representation as a graph node, taking the corresponding syntactic dependency relationship represented by the vector as an edge, and acquiring a syntactic graph represented by each vector;
the similarity graph acquisition module 3 is used for acquiring a semantic adjacency matrix represented by a vector based on a self-attention mechanism and generating a weighted semantic similarity graph;
the information acquisition module 4 is used for extracting syntactic information from the syntactic graph by utilizing a syntactic graph convolution network and extracting semantic information from the weighted semantic similarity graph by utilizing a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
the feature extraction module 5 is used for extracting the syntactic features and the semantic features of the target words in the syntactic information and the semantic information based on an attention mechanism;
a joint feature obtaining module 6, configured to perform weighted summation on the syntactic features and the semantic features to obtain joint features;
and the emotion classification acquisition module 7 is used for inputting the combined features into the full-connection layer for emotion classification to acquire emotion polarity information.
It should be noted that, when the emotion classification apparatus provided in the above embodiment executes the emotion classification method, only the division of each function module is illustrated, and in practical applications, the function distribution may be completed by different function modules according to needs, that is, the internal structure of the device is divided into different function modules, so as to complete all or part of the functions described above. In addition, the emotion classification device and the emotion classification method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is characterized in that: the computer program when executed by a processor performs the steps of the emotion classification method as described in any of the above.
Embodiments of the present application may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, in which program code is embodied. Computer readable storage media, which include both non-transitory and non-transitory, removable and non-removable media, may implement any method or technology for storage of information. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
As shown in Table 1, the emotion classification data is obtained by performing emotion classification on data sets such as restaurant, notebook computer, twitter and the like by using the emotion classification method of the present invention and existing emotion classification methods based on semantics (ATAE-LSTM, RAM, MGAN and GCAE) and syntax (LSTM + SynATT, ASGCN, CDT, TD-GAT, BiGCN, R-GA, RepWalk and DGEDT); wherein the accuracy and markov F1 are used as the evaluation indices of the table.
The emotion classification method uses an Adam optimizer optimization network with a learning rate of 1e-3 or 1e-4, sets the learning rate of a BERT model to be 5e-5 or 2e-5, and sets a regularized coefficient of L2 to be 1 e-5. The batch size was set to 32 or 8 and the random discard rate was between 0.1 and 0.6.
Table 1: sentiment classification results
Figure DEST_PATH_IMAGE183
As can be seen from the table, the emotion classification results with higher accuracy can be obtained in the data set. In addition, the invention can well learn the syntactic information and the semantic information on a data set with richer syntactic and semantic information, and the improvement effect is more obvious, such as a data set of a notebook computer.
As shown in fig. 3-4, in an embodiment, a masking experiment is performed by using the emotion classification method of the present invention and an existing emotion classification model (CDT) to obtain a contribution of each word w in a sentence s, wherein the masking experiment calculation method is as follows:
Figure 817827DEST_PATH_IMAGE184
wherein
Figure DEST_PATH_IMAGE185
Representing the combined features generated by the sentence s (the word w masked),
Figure DEST_PATH_IMAGE187
representing the combined features generated by the sentence s (the word w is not masked),
Figure DEST_PATH_IMAGE189
if, if
Figure DEST_PATH_IMAGE191
Then the expression w is used to generate the union feature
Figure DEST_PATH_IMAGE193
There was no impact.
As shown in fig. 3, the conventional emotion classification model cannot recognize the viewpoint word 'Great' of 'food' well, but focuses on 'dreadful' erroneously. Similarly, in fig. 4, although the conventional emotion classification model can focus on the opinion word 'loving' of 'psp', it is not sufficiently understood. In the two examples, the emotion classification method can accurately judge which viewpoint word is the most relevant to the aspect word and is less influenced by irrelevant words. Therefore, the method can supplement syntactic characteristics by utilizing semantic information, flexibly combine with the learning syntax and the semantic combined representation through an internal dynamic communication mechanism, can better enhance the analysis capability of the model on sentences compared with the prior work, and improves the accuracy of emotion classification.
The embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable by the processor, and when the processor executes the computer program, the processor implements the steps of the emotion classification method according to any one of the above items.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (10)

1. An emotion classification method, characterized by comprising the steps of:
coding an input text by utilizing a pre-trained BERT model to obtain vector representation of each word of the input text;
performing syntactic dependency analysis on the input text to acquire a syntactic dependency relationship of the input text; generating a syntactic graph represented by each vector by taking the vector representation as a graph node and taking the corresponding syntactic dependency represented by the vector representation as an edge;
based on a self-attention mechanism, obtaining a semantic adjacency matrix represented by a vector, and generating a weighted semantic similarity graph;
extracting syntactic information from the syntactic graph by using a syntactic graph convolution network, and extracting semantic information from the weighted semantic similarity graph by using a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
extracting the syntactic characteristics and the semantic characteristics of the target words in the syntactic information and the semantic information based on an attention mechanism;
carrying out weighted summation on the syntactic characteristics and the semantic characteristics to obtain joint characteristics;
and inputting the combined features into a full-connection layer for emotion classification to acquire emotion polarity information.
2. The emotion classification method of claim 1, wherein the step of generating a syntax map for each vector representation comprises:
a syntax diagram is generated in the following manner:
Figure 339589DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 390591DEST_PATH_IMAGE002
in order to be a syntactic graph,
Figure 673805DEST_PATH_IMAGE004
for the purpose of a vector representation of the input text,
Figure 653524DEST_PATH_IMAGE005
is represented by a vector
Figure 451716DEST_PATH_IMAGE007
The adjacency matrix of (a);
Figure 853878DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure 178549DEST_PATH_IMAGE009
the adjacency matrix for graph node i and graph node j,
Figure 703072DEST_PATH_IMAGE011
indicating that a dependency exists between graph node i and graph node j.
3. The emotion classification method of claim 1, wherein the step of generating a weighted semantic similarity map comprises:
mapping the vector representation to K d-dimensional semantic spaces in the following manner;
Figure 863926DEST_PATH_IMAGE012
wherein the content of the first and second substances,
Figure 132096DEST_PATH_IMAGE014
for the purpose of a vector representation of the input text,
Figure 544229DEST_PATH_IMAGE015
is a vector representation of the kth semantic space,
Figure 505232DEST_PATH_IMAGE017
the corresponding bias vector is represented for the vector of the kth semantic space,
Figure 153382DEST_PATH_IMAGE019
the corresponding mapping matrix is represented for the vector of the kth semantic space,
Figure 959664DEST_PATH_IMAGE021
is a non-linear activation function;
obtaining semantic adjacency matrixes corresponding to K semantic spaces according to the following modes:
Figure 258928DEST_PATH_IMAGE022
wherein the content of the first and second substances,
Figure 797356DEST_PATH_IMAGE024
a semantic adjacency matrix for node i and node j,
Figure 57436DEST_PATH_IMAGE026
in order to be a preset threshold value, the threshold value is set,
Figure 293508DEST_PATH_IMAGE028
for the vector representation of node i in the kth semantic space,
Figure 322644DEST_PATH_IMAGE030
vector representation in the kth semantic space for node j;
acquiring a weighted similarity adjacency matrix according to the following modes:
Figure 500815DEST_PATH_IMAGE031
wherein the content of the first and second substances,
Figure 717033DEST_PATH_IMAGE033
for the weighted similarity adjacency matrix for node i and node j,
Figure 255331DEST_PATH_IMAGE034
the attention weight coefficients for node i and node j are obtained as follows:
Figure 811077DEST_PATH_IMAGE035
wherein the content of the first and second substances,
Figure 284783DEST_PATH_IMAGE037
for the attention weight matrix of the node i,
Figure 877045DEST_PATH_IMAGE038
is the attention weight matrix for node j, d is the dimension of the semantic space,
Figure 828821DEST_PATH_IMAGE039
is composed of
Figure 442336DEST_PATH_IMAGE040
The transposed matrix of (2);
acquiring a weighted semantic similarity map according to the following modes:
Figure 86944DEST_PATH_IMAGE041
wherein the content of the first and second substances,
Figure 667967DEST_PATH_IMAGE042
in order to weight the semantic similarity graph,
Figure 95537DEST_PATH_IMAGE044
is a weighted similarity adjacency matrix.
4. The emotion classification method of claim 1, wherein the step of extracting syntax information from the syntax map using a syntax map convolutional network comprises:
syntax information is extracted in the following manner:
Figure 688193DEST_PATH_IMAGE045
wherein the content of the first and second substances,
Figure 864221DEST_PATH_IMAGE047
in order to be a non-linear activation function,
Figure 807906DEST_PATH_IMAGE048
convolution of network for syntax diagrams
Figure 242430DEST_PATH_IMAGE050
The syntax information extracted by the layer is,
Figure 955171DEST_PATH_IMAGE052
convolution of network for syntax diagrams
Figure 597374DEST_PATH_IMAGE054
The syntax information extracted by the layer is,
Figure 903721DEST_PATH_IMAGE056
is that
Figure 990DEST_PATH_IMAGE058
The symmetric normalized adjacency matrix of (a) is,
Figure 480424DEST_PATH_IMAGE060
is a syntactic graph convolution network
Figure 372156DEST_PATH_IMAGE062
The weight matrix of the layer or layers is,
Figure 165800DEST_PATH_IMAGE064
is a matrix of the units,
Figure 66760DEST_PATH_IMAGE005
is represented by a vector
Figure 691776DEST_PATH_IMAGE007
The adjacency matrix of (a);
the step of extracting semantic information from the weighted semantic similarity graph by using a semantic graph convolution network comprises the following steps:
semantic information is extracted in the following way:
Figure 144623DEST_PATH_IMAGE065
wherein the content of the first and second substances,
Figure 284617DEST_PATH_IMAGE066
convolution of network for semantic graph
Figure 864635DEST_PATH_IMAGE068
The semantic information extracted by the layer(s),
Figure 406474DEST_PATH_IMAGE069
convolution of network for semantic graph
Figure 531687DEST_PATH_IMAGE071
The semantic information extracted by the layer(s),
Figure 158978DEST_PATH_IMAGE072
convolution of network for semantic graph
Figure 277107DEST_PATH_IMAGE068
The weight matrix of the layer or layers is,
Figure 673453DEST_PATH_IMAGE073
is that
Figure 468102DEST_PATH_IMAGE074
The symmetric normalized adjacency matrix of (a) is,
Figure 582689DEST_PATH_IMAGE044
is a weighted similarity adjacency matrix.
5. The emotion classification method of claim 4, wherein the step of obtaining the fused syntactic and semantic information comprises:
acquiring the merged syntax information according to the following modes:
Figure 504508DEST_PATH_IMAGE075
wherein the content of the first and second substances,
Figure 20940DEST_PATH_IMAGE077
in order to be the syntax information after the merging,
Figure 485026DEST_PATH_IMAGE078
convolution of network for semantic graph
Figure 821330DEST_PATH_IMAGE080
The semantic information extracted by the layer(s),
Figure 281261DEST_PATH_IMAGE082
for syntactic fusion coefficients, the following is obtained:
Figure 652200DEST_PATH_IMAGE083
wherein the content of the first and second substances,
Figure 788652DEST_PATH_IMAGE084
convolution of network for syntax diagrams
Figure 612251DEST_PATH_IMAGE086
The syntax information extracted by the layer is,
Figure 875874DEST_PATH_IMAGE087
is composed of
Figure 101319DEST_PATH_IMAGE088
The transpose matrix of (a) is,
Figure 910137DEST_PATH_IMAGE090
in order to syntactically fuse the weight matrices,
Figure 486612DEST_PATH_IMAGE092
fusing the bias parameters for syntax;
acquiring the fused semantic information according to the following modes:
Figure DEST_PATH_IMAGE093
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE095
in order to obtain the fused semantic information,
Figure 475296DEST_PATH_IMAGE096
for semantic fusion weight coefficients, the following method is used for obtaining:
Figure DEST_PATH_IMAGE097
wherein the content of the first and second substances,
Figure 961772DEST_PATH_IMAGE098
is composed of
Figure DEST_PATH_IMAGE099
The transpose matrix of (a) is,
Figure DEST_PATH_IMAGE101
in order to fuse the weight matrix for the semantics,
Figure DEST_PATH_IMAGE103
bias parameters are fused for semantics.
6. The emotion classification method of claim 1, wherein the step of obtaining the joint features comprises:
the joint features are obtained in the following manner:
Figure 469721DEST_PATH_IMAGE104
wherein the content of the first and second substances,
Figure 471175DEST_PATH_IMAGE106
for the sake of the combined features,
Figure 325867DEST_PATH_IMAGE108
in order to be a non-linear activation function,
Figure 932429DEST_PATH_IMAGE110
in order to be a weight matrix, the weight matrix,
Figure 722530DEST_PATH_IMAGE112
as a function of the offset parameter(s),
Figure 634117DEST_PATH_IMAGE114
for the first feature, the acquisition is made in the following manner
Figure DEST_PATH_IMAGE115
Wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE117
in order to learn the parameters, the user may,
Figure DEST_PATH_IMAGE119
for the purpose of the syntactic characteristics,
Figure DEST_PATH_IMAGE121
is a semantic feature.
7. The emotion classification method of claim 1, wherein the step of obtaining emotion polarity information comprises:
obtaining emotion polarity information according to the following modes:
Figure 26921DEST_PATH_IMAGE122
wherein the content of the first and second substances,
Figure 956831DEST_PATH_IMAGE106
for the sake of the combined features,
Figure 121096DEST_PATH_IMAGE124
is a weight matrix of the fully-connected layer,
Figure 628301DEST_PATH_IMAGE126
as a function of the offset parameter(s),
Figure 323331DEST_PATH_IMAGE128
is emotion polarity information.
8. An emotion classification apparatus, comprising:
the vector representation acquisition module is used for encoding the input text by utilizing a pre-trained BERT model and acquiring the vector representation of each word of the input text;
the syntactic graph obtaining module is used for carrying out syntactic dependency analysis on the input text and obtaining syntactic dependency of the input text; taking vector representation as a graph node, taking the corresponding syntactic dependency relationship represented by the vector as an edge, and acquiring a syntactic graph represented by each vector;
the similarity graph acquisition module is used for acquiring a semantic adjacency matrix represented by the vector based on a self-attention mechanism and generating a weighted semantic similarity graph;
the information acquisition module is used for extracting syntactic information from the syntactic graph by utilizing a syntactic graph convolution network and extracting semantic information from the weighted semantic similarity graph by utilizing a semantic graph convolution network; interactively fusing the syntax information and the semantic information by using an exchange module to obtain the fused syntax information and semantic information;
the feature extraction module is used for extracting the syntactic features and the semantic features of the target words in the syntactic information and the semantic information based on an attention mechanism;
the joint characteristic acquisition module is used for carrying out weighted summation on the syntactic characteristics and the semantic characteristics to acquire joint characteristics;
and the emotion classification acquisition module is used for inputting the combined features into the full-connection layer for emotion classification to acquire emotion polarity information.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when being executed by a processor realizes the steps of the sentiment classification method according to any one of claims 1 to 7.
10. A computer device, characterized by: comprising a memory, a processor and a computer program stored in the memory and executable by the processor, the processor implementing the steps of the sentiment classification method according to any one of claims 1 to 7 when executing the computer program.
CN202111000019.2A 2021-08-30 2021-08-30 Emotion classification method and device, storage medium and computer equipment Active CN113449110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111000019.2A CN113449110B (en) 2021-08-30 2021-08-30 Emotion classification method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111000019.2A CN113449110B (en) 2021-08-30 2021-08-30 Emotion classification method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN113449110A true CN113449110A (en) 2021-09-28
CN113449110B CN113449110B (en) 2021-12-07

Family

ID=77818967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111000019.2A Active CN113449110B (en) 2021-08-30 2021-08-30 Emotion classification method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113449110B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115510226A (en) * 2022-09-02 2022-12-23 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Emotion classification method based on graph neural network
CN115712726A (en) * 2022-11-08 2023-02-24 华南师范大学 Emotion analysis method, device and equipment based on bigram embedding
CN115827878A (en) * 2023-02-13 2023-03-21 华南师范大学 Statement emotion analysis method, device and equipment
CN116089619A (en) * 2023-04-06 2023-05-09 华南师范大学 Emotion classification method, apparatus, device and storage medium
CN116304748A (en) * 2023-05-17 2023-06-23 成都工业学院 Text similarity calculation method, system, equipment and medium
WO2024000956A1 (en) * 2022-06-30 2024-01-04 苏州思萃人工智能研究所有限公司 Aspect sentiment analysis method and model, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8700404B1 (en) * 2005-08-27 2014-04-15 At&T Intellectual Property Ii, L.P. System and method for using semantic and syntactic graphs for utterance classification
CN111651973A (en) * 2020-06-03 2020-09-11 拾音智能科技有限公司 Text matching method based on syntax perception
CN112528672A (en) * 2020-12-14 2021-03-19 北京邮电大学 Aspect-level emotion analysis method and device based on graph convolution neural network
CN112668319A (en) * 2020-12-18 2021-04-16 昆明理工大学 Vietnamese news event detection method based on Chinese information and Vietnamese statement method guidance
CN112686056A (en) * 2021-03-22 2021-04-20 华南师范大学 Emotion classification method
CN112966074A (en) * 2021-05-17 2021-06-15 华南师范大学 Emotion analysis method and device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8700404B1 (en) * 2005-08-27 2014-04-15 At&T Intellectual Property Ii, L.P. System and method for using semantic and syntactic graphs for utterance classification
US20160086601A1 (en) * 2005-08-27 2016-03-24 At&T Intellectual Property Ii, L.P. System and method for using semantic and syntactic graphs for utterance classification
CN111651973A (en) * 2020-06-03 2020-09-11 拾音智能科技有限公司 Text matching method based on syntax perception
CN112528672A (en) * 2020-12-14 2021-03-19 北京邮电大学 Aspect-level emotion analysis method and device based on graph convolution neural network
CN112668319A (en) * 2020-12-18 2021-04-16 昆明理工大学 Vietnamese news event detection method based on Chinese information and Vietnamese statement method guidance
CN112686056A (en) * 2021-03-22 2021-04-20 华南师范大学 Emotion classification method
CN112966074A (en) * 2021-05-17 2021-06-15 华南师范大学 Emotion analysis method and device, electronic equipment and storage medium

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024000956A1 (en) * 2022-06-30 2024-01-04 苏州思萃人工智能研究所有限公司 Aspect sentiment analysis method and model, and medium
CN115510226A (en) * 2022-09-02 2022-12-23 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Emotion classification method based on graph neural network
CN115510226B (en) * 2022-09-02 2023-11-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Emotion classification method based on graph neural network
CN115712726A (en) * 2022-11-08 2023-02-24 华南师范大学 Emotion analysis method, device and equipment based on bigram embedding
CN115712726B (en) * 2022-11-08 2023-09-12 华南师范大学 Emotion analysis method, device and equipment based on double word embedding
CN115827878A (en) * 2023-02-13 2023-03-21 华南师范大学 Statement emotion analysis method, device and equipment
CN115827878B (en) * 2023-02-13 2023-06-06 华南师范大学 Sentence emotion analysis method, sentence emotion analysis device and sentence emotion analysis equipment
CN116089619A (en) * 2023-04-06 2023-05-09 华南师范大学 Emotion classification method, apparatus, device and storage medium
CN116089619B (en) * 2023-04-06 2023-06-06 华南师范大学 Emotion classification method, apparatus, device and storage medium
CN116304748A (en) * 2023-05-17 2023-06-23 成都工业学院 Text similarity calculation method, system, equipment and medium
CN116304748B (en) * 2023-05-17 2023-07-28 成都工业学院 Text similarity calculation method, system, equipment and medium

Also Published As

Publication number Publication date
CN113449110B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN113449110B (en) Emotion classification method and device, storage medium and computer equipment
CN108536681B (en) Intelligent question-answering method, device, equipment and storage medium based on emotion analysis
CN112084331A (en) Text processing method, text processing device, model training method, model training device, computer equipment and storage medium
CN109902301B (en) Deep neural network-based relationship reasoning method, device and equipment
CN113627447B (en) Label identification method, label identification device, computer equipment, storage medium and program product
Udandarao et al. Cobra: Contrastive bi-modal representation algorithm
CN112580328A (en) Event information extraction method and device, storage medium and electronic equipment
CN114419642A (en) Method, device and system for extracting key value pair information in document image
CN111259130A (en) Method and apparatus for providing reply sentence in dialog
CN111783903A (en) Text processing method, text model processing method and device and computer equipment
CN113032525A (en) False news detection method and device, electronic equipment and storage medium
CN111597580B (en) Robot hearing privacy information monitoring processing method
CN110633410A (en) Information processing method and device, storage medium, and electronic device
CN115827878B (en) Sentence emotion analysis method, sentence emotion analysis device and sentence emotion analysis equipment
CN114461943B (en) Deep learning-based multi-source POI semantic matching method and device and storage medium thereof
CN116501877A (en) Multi-mode attention rumor detection method based on causal graph
CN113741759B (en) Comment information display method and device, computer equipment and storage medium
CN115357711A (en) Aspect level emotion analysis method and device, electronic equipment and storage medium
CN116150353A (en) Training method for intention feature extraction model, intention recognition method and related device
CN113886547A (en) Client real-time conversation switching method and device based on artificial intelligence and electronic equipment
CN113010772A (en) Data processing method, related equipment and computer readable storage medium
CN115525740A (en) Method and device for generating dialogue response sentence, electronic equipment and storage medium
Yuan et al. Point-of-Interest Oriented Question Answering with Joint Inference of Semantic Matching and Distance Correlation
CN116860952B (en) RPA intelligent response processing method and system based on artificial intelligence
CN111177526B (en) Network opinion leader identification method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant