CN115033700A - Cross-domain emotion analysis method, device and equipment based on mutual learning network - Google Patents

Cross-domain emotion analysis method, device and equipment based on mutual learning network

Info

Publication number
CN115033700A
CN115033700A
Authority
CN
China
Prior art keywords
text data
mutual learning
target
feature
representations corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210954299.9A
Other languages
Chinese (zh)
Inventor
陆子豪
杨驰
薛云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202210954299.9A priority Critical patent/CN115033700A/en
Publication of CN115033700A publication Critical patent/CN115033700A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention relates to the field of emotion analysis, and in particular to a cross-domain emotion analysis method, device, equipment and storage medium based on a mutual learning network. A training text data set is obtained and input into a preset word embedding model to obtain a word embedding vector set; the word embedding vector set is input into a preset mutual learning network, a loss function corresponding to the mutual learning network is constructed, and optimization training is performed to obtain a target mutual learning network; in response to an analysis instruction, text data to be analyzed are obtained and input into the target mutual learning network, and the emotion analysis result output by the target mutual learning network is obtained. Compared with the prior art, the method performs emotion analysis on the text data to be analyzed more comprehensively, improving both the accuracy and the efficiency of emotion analysis.

Description

Cross-domain emotion analysis method, device and equipment based on mutual learning network
Technical Field
The invention relates to the field of emotion analysis, and in particular to a cross-domain emotion analysis method, device, equipment and storage medium based on a mutual learning network.
Background
Cross-domain emotion classification is one of the important tasks in natural language processing. Its purpose is to learn knowledge from source-domain data with abundant labels to guide the training of a target domain with few or no labels, which is of great value for research in domains where labels are scarce. Most existing models involve a feature extractor and an emotion classifier: the feature extractor is responsible for learning domain-invariant features from the two domains, while the emotion classifier is trained only on the source domain to guide the learning of the feature extractor. Such models often ignore the emotional polarity of the target domain and fail to exploit it. In addition, existing cross-domain emotion classification methods neglect the complex syntactic structure and diverse semantic information of texts in different domains.
In order to give the feature extractor the capability of learning a domain-invariant representation, the features of the source domain and the target domain need to be aligned. A mainstream approach at present is to use an adversarial network as a domain discriminator and use a gradient reversal layer to deceive the feature extractor, so that it cannot distinguish which domain the data comes from. However, in actual training the adversarial network has no explicit loss to indicate the training progress, and one can only rely on experience to judge whether the generator and the discriminator have reached dynamic balance. This makes the training of the adversarial network unstable and easily causes the problem of vanishing gradients.
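A minimal sketch of the gradient reversal trick described above may help clarify the instability: the layer is an identity in the forward pass but flips the sign of the gradient in the backward pass, so the feature extractor and the domain discriminator are pushed in opposite directions with no explicit loss tracking their balance. This illustrates the prior-art adversarial setup that the patent replaces, not the patent's own method; the class name and the `lam` coefficient are illustrative assumptions.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies incoming gradients by
    -lam in the backward pass, so the feature extractor upstream is
    trained to *fool* the domain discriminator downstream."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                       # forward pass: identity

    def backward(self, grad_out):
        return -self.lam * grad_out    # backward pass: reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                     # unchanged features
g = grl.backward(np.ones_like(x))      # gradient seen by the feature extractor
```

Because the only training signal is this sign flip, whether generator and discriminator have reached equilibrium cannot be read off any single loss value, which is the instability the description points out.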
Disclosure of Invention
In view of this, the present invention provides a cross-domain emotion analysis method, device, equipment and storage medium based on a mutual learning network. A mutual learning network is constructed using a deep mutual learning method; each group of mutual learning migration channels in the mutual learning network associates the source domain with the target domain, and the mutual learning network is optimized and trained by minimizing the difference between the two domains, which improves the accuracy and efficiency of the optimization training and enables a more comprehensive emotion analysis of text data.
The technical solution comprises the following steps:
in a first aspect, an embodiment of the present application provides a cross-domain emotion analysis method based on a mutual learning network, including the following steps:
acquiring a training text data set, wherein the training text data set comprises a text data set of a source field and a text data set of a target field, the text data set comprises a plurality of text data, the text data comprises a plurality of sentences, and the sentences comprise a plurality of words;
inputting the training text data set into a preset word embedding model to obtain a word embedding vector set, wherein the word embedding vector set comprises word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields;
inputting the word embedding vector set into a preset mutual learning network, constructing a loss function corresponding to the mutual learning network, performing optimization training, and obtaining a target mutual learning network, wherein the mutual learning network comprises two groups of mutual learning migration channels;
and responding to an analysis instruction, acquiring text data to be analyzed, inputting the text data to be analyzed into the target mutual learning network, and acquiring an emotion analysis result output by the target mutual learning network.
In a second aspect, an embodiment of the present application provides a cross-domain emotion analysis apparatus based on a mutual learning network, including:
the training text data set acquisition module is used for acquiring a training text data set, wherein the training text data set comprises text data of a plurality of source fields and text data of a plurality of target fields, the text data comprises a plurality of sentences, and the sentences comprise a plurality of words;
a word embedding vector representation obtaining module, configured to input the training text data set into a preset word embedding model, and obtain a word embedding vector set, where the word embedding vector set includes word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields;
the network training module is used for inputting the word embedding vector set into a preset mutual learning network, constructing a loss function corresponding to the mutual learning network, performing optimization training and obtaining a target mutual learning network, wherein the mutual learning network comprises two groups of mutual learning migration channels;
and the text data analysis module is used for responding to an analysis instruction, acquiring text data to be analyzed, inputting the text data to be analyzed into the target mutual learning network, and acquiring an emotion analysis result output by the target mutual learning network.
In a third aspect, an embodiment of the present application provides a computer device, including: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the cross-domain emotion analysis method based on a mutual learning network according to the first aspect.
In a fourth aspect, the present application provides a storage medium storing a computer program, which when executed by a processor implements the steps of the cross-domain emotion analysis method based on mutual learning network according to the first aspect.
This embodiment provides a cross-domain emotion analysis method, device, equipment and storage medium based on a mutual learning network. A mutual learning network is constructed using a deep mutual learning method; each group of mutual learning migration channels in the mutual learning network associates the source domain with the target domain, and the mutual learning network is optimized and trained by minimizing the difference between the two domains, which improves the accuracy and efficiency of the optimization training and enables a more comprehensive emotion analysis of text data.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic flowchart of a cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of S3 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of S31 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of S311 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of S312 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of S32 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of S33 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of S34 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of S4 in the cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a cross-domain emotion analysis device based on a mutual learning network according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to a determination", depending on the context.
Referring to fig. 1, fig. 1 is a schematic flowchart of a cross-domain emotion analysis method based on a mutual learning network according to an embodiment of the present application, including the following steps:
s1: a training text data set is obtained.
The main execution body of the cross-domain emotion analysis method based on the mutual learning network is the analysis equipment that performs the method (hereinafter referred to as the analysis equipment).
In an alternative embodiment, the analysis device may be a computer device, a server, or a server cluster formed by combining a plurality of computer devices.
In this embodiment, the analysis device may obtain a training text data set input by a user, and also obtain a corresponding training text data set from a preset database, where the training text data set includes a text data set of the source field and a text data set of the target field, the text data set includes a plurality of text data, the text data includes a plurality of sentences, and the sentences include a plurality of words.
S2: and inputting the training text data set into a preset word embedding model to obtain a word embedding vector set.
The word embedding model is a Word2vec word embedding model, which is used to obtain the word embedding vector corresponding to each text data.
In this embodiment, the analysis device inputs the training text data set into a preset Word embedding model, maps each Word in the text data into a low-dimensional vector space, and obtains a Word embedding vector set by querying a pre-trained Word2Vec matrix, where the Word embedding vector set includes Word embedding vector representations corresponding to text data of a plurality of source fields and Word embedding vector representations corresponding to text data of a plurality of target fields.
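As an illustration of the lookup described above, the following sketch maps each word to a low-dimensional vector by querying a pre-trained embedding matrix. The toy vocabulary and the randomly initialized matrix are assumptions standing in for a real pre-trained Word2Vec matrix:

```python
import numpy as np

# Hypothetical toy vocabulary; in the patent, a pre-trained Word2Vec
# matrix is queried in the same table-lookup fashion.
vocab = {"good": 0, "bad": 1, "movie": 2}
emb_matrix = np.random.default_rng(0).normal(size=(len(vocab), 4))

def embed(sentence):
    """Map each word of a sentence to its embedding vector by lookup,
    producing the word embedding vector representation of the sentence."""
    return np.stack([emb_matrix[vocab[w]] for w in sentence])

vectors = embed(["good", "movie"])  # one 4-dimensional vector per word
```

Applying `embed` to every sentence of every text data in the source and target domains yields the word embedding vector set that is fed into the mutual learning network in S3.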
S3: and inputting the word embedding vector set into a preset mutual learning network, constructing a loss function corresponding to the mutual learning network, performing optimization training, and acquiring a target mutual learning network.
The mutual learning network comprises two groups of mutual learning migration channels, and the two groups of mutual learning migration channels comprise a feature extraction module, an emotion classification module, a field difference learning module and a label detection module.
In this embodiment, the analysis device inputs the word embedded vector set to a preset mutual learning network, constructs a loss function corresponding to the mutual learning network, performs optimization training, and obtains a target mutual learning network.
Referring to fig. 2, fig. 2 is a schematic flowchart of S3 in the cross-domain emotion analysis method based on mutual learning network according to an embodiment of the present application, including steps S31 to S35, which are as follows:
s31: and respectively inputting the word embedding vector set to the feature extraction modules in the two groups of mutual learning migration channels, and acquiring text feature representations corresponding to the text data of the plurality of source fields and the text feature representations corresponding to the text data of the plurality of target fields, which are output by the feature extraction modules of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the word embedding vector set to the feature extraction modules in the two groups of mutual learning migration channels, and obtains text feature representations corresponding to the text data of the plurality of source fields and text feature representations corresponding to the text data of the plurality of target fields, which are output by the feature extraction modules in the two groups of mutual learning migration channels.
The feature extraction module includes a semantic feature extraction module and a syntactic feature extraction module which are connected in sequence, please refer to fig. 3, and fig. 3 is a schematic flow diagram of S31 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, including steps S311 to S313, which are as follows:
s311: and respectively inputting the word embedding vector set to a semantic feature extraction module in the corresponding feature extraction module, and respectively acquiring semantic feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the semantic feature extraction modules of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the word embedding vector set to the semantic feature extraction modules in the corresponding feature extraction modules, and obtains semantic feature representations corresponding to the text data of the plurality of source domains and the text data of the plurality of target domains output by the semantic feature extraction modules of the two groups of mutual learning migration channels, respectively.
The semantic feature extraction module includes a first bidirectional gated loop unit, a soft attention unit, a second bidirectional gated loop unit, and a convolution attention unit, where the convolution attention unit includes a plurality of convolution layers, please refer to fig. 4, and fig. 4 is a schematic flow diagram of S311 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, including steps S3111 to S3114, which are as follows:
s3111: and inputting the word embedding vector set into a first bidirectional gating circulating unit in the corresponding semantic feature extraction module, respectively coding the word embedding vector set, and acquiring hidden layer feature representations corresponding to the text data of the plurality of source fields and the hidden layer feature representations corresponding to the text data of the plurality of target fields, which are output by the bidirectional gating circulating units of the semantic feature extraction modules of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the word embedding vector set into the corresponding first bidirectional gating cyclic unit in the semantic feature extraction module for encoding processing, and obtains two sub-hidden layer feature representations output by two GRUs in different directions, which are specifically as follows:
$\overrightarrow{h}_{ij}=\overrightarrow{\mathrm{GRU}}(e_{ij}),\quad \overleftarrow{h}_{ij}=\overleftarrow{\mathrm{GRU}}(e_{ij})$
in the formula, $\overrightarrow{h}_{ij}$ and $\overleftarrow{h}_{ij}$ are the sub-hidden layer feature representations in the two different directions, $n$ is the number of words in the sentence, $i$ indexes the $i$-th word in a sentence of the text data, $j$ indexes the $j$-th sentence of the text data, $\mathrm{GRU}(\cdot)$ is the coding function of the bidirectional gated cyclic unit, and $e_{ij}$ is the word embedding vector representation;
and combining the sub-hidden layer feature representations respectively to obtain hidden layer feature representations corresponding to the text data of the plurality of source fields and the hidden layer feature representations corresponding to the text data of the plurality of target fields, which are output by the bidirectional gating circulation units of the semantic feature extraction modules of the two groups of mutual learning migration channels, as follows:
$h_{ij}=[\overrightarrow{h}_{ij};\overleftarrow{h}_{ij}]$
in the formula, $h_{ij}$ is the hidden layer feature representation.
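The encoding of S3111 can be sketched with a minimal GRU cell in NumPy: the word sequence is encoded once in each direction and the two per-word hidden states are concatenated into the hidden layer feature representation. Weight shapes, initialization, and hidden size are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal gated recurrent unit with random weights (a sketch of one
    direction of the bidirectional gating circulating unit)."""
    def __init__(self, d_in, d_h, seed=0):
        rng = np.random.default_rng(seed)
        self.Wz = rng.normal(scale=0.1, size=(d_h, d_in + d_h))  # update gate
        self.Wr = rng.normal(scale=0.1, size=(d_h, d_in + d_h))  # reset gate
        self.Wh = rng.normal(scale=0.1, size=(d_h, d_in + d_h))  # candidate

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def bigru(seq, d_h=3):
    """Encode the sequence forward and backward and concatenate the
    per-step hidden states: h_ij = [h_fwd ; h_bwd]."""
    fwd = GRUCell(seq.shape[1], d_h, seed=0)
    bwd = GRUCell(seq.shape[1], d_h, seed=1)
    hf, hb, fs, bs = np.zeros(d_h), np.zeros(d_h), [], []
    for x in seq:                       # left-to-right pass
        hf = fwd.step(x, hf); fs.append(hf)
    for x in seq[::-1]:                 # right-to-left pass
        hb = bwd.step(x, hb); bs.append(hb)
    return np.stack([np.concatenate([f, b]) for f, b in zip(fs, bs[::-1])])

H = bigru(np.random.default_rng(2).normal(size=(5, 4)))  # 5 words, dim 4
```

Each row of `H` is one word's hidden layer feature representation, twice the per-direction hidden size because of the concatenation.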
S3112: inputting hidden layer feature representations corresponding to the text data of the source fields and hidden layer feature representations corresponding to the text data of the target fields into corresponding soft attention units in the semantic feature extraction module, acquiring corresponding attention weight parameters and attention weight parameters corresponding to the text data of the target fields according to a preset attention weight parameter calculation algorithm, and respectively acquiring sentence feature representations corresponding to the text data of the source fields and sentence feature representations corresponding to the text data of the target fields, which are output by the soft attention units of the two groups of mutual learning migration channels according to a preset sentence feature representation calculation algorithm.
In order to measure the contribution of each word in each sentence in the text data to the emotional information of the sentence, in this embodiment, the analyzing device inputs the hidden layer feature representations corresponding to the text data of the plurality of source fields and the hidden layer feature representations corresponding to the text data of the plurality of target fields to the corresponding Soft Attention unit (Soft-Attention) in the semantic feature extraction module, and obtains the corresponding Attention weight parameter and the Attention weight parameter corresponding to the text data of the plurality of target fields according to a preset Attention weight parameter calculation algorithm, where the Attention weight parameter calculation algorithm is:
$\alpha_{ij}=\mathrm{softmax}\big(u^{\top}\tanh(W h_{ij}+b)\big)$
in the formula, $\alpha_{ij}$ is the attention weight parameter, $W$ is a preset first trainable network parameter, $b$ is a preset second trainable network parameter, and $u$ is a preset third trainable network parameter;
and according to a preset sentence characteristic representation calculation algorithm, respectively acquiring sentence characteristic representations corresponding to the text data of the source fields output by the soft attention units of the two groups of mutual learning migration channels and sentence characteristic representations corresponding to the text data of the target fields, so as to improve the accuracy of the acquired sentence characteristic representations and further improve the effect of optimization training of the mutual learning network, wherein the sentence characteristic representation calculation algorithm is as follows:
$s_{j}=\sum_{i=1}^{n}\alpha_{ij}h_{ij}$
in the formula, $s_{j}$ is the sentence feature representation.
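The soft attention computation of S3112 can be sketched as follows, assuming the formulation $\alpha_{i}=\mathrm{softmax}(u^{\top}\tanh(Wh_{i}+b))$ with $W$, $b$, $u$ playing the roles of the first, second, and third trainable network parameters; all dimensions are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def soft_attention(H, W, b, u):
    """Score each word hidden state, normalize the scores into attention
    weights, and return the weighted sum as the sentence vector."""
    scores = np.array([u @ np.tanh(W @ h + b) for h in H])
    alpha = softmax(scores)
    return alpha, alpha @ H   # sentence feature = weighted sum of rows of H

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 6))   # 4 word hidden states of dimension 6
W = rng.normal(size=(6, 6))   # first trainable parameter
b = rng.normal(size=6)        # second trainable parameter
u = rng.normal(size=6)        # third trainable parameter
alpha, s = soft_attention(H, W, b, u)
```

The weights `alpha` sum to one, so `s` measures each word's contribution to the emotional information of the sentence, as the description requires.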
S3113: and respectively inputting sentence characteristic representations corresponding to the text data of the source fields and sentence characteristic representations corresponding to the text data of the target fields into corresponding second bidirectional gating circulation units in the semantic characteristic extraction module for coding, and respectively acquiring the sentence characteristic representations corresponding to the text data of the source fields and the sentence characteristic representations corresponding to the text data of the target fields after coding and processing, which are output by the second bidirectional gating circulation units of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the sentence feature representations corresponding to the text data of the plurality of source domains and the sentence feature representations corresponding to the text data of the plurality of target domains into the corresponding second bidirectional gating circulation units in the semantic feature extraction module respectively for encoding, and obtains the sentence feature representations corresponding to the text data of the plurality of source domains and the sentence feature representations corresponding to the text data of the plurality of target domains after encoding, which are output by the second bidirectional gating circulation units of the two groups of mutual learning migration channels, respectively, wherein the sentence feature representations after encoding are:
$v_{j}=\mathrm{BiGRU}(s_{j})$
in the formula, $v_{j}$ is the sentence feature representation after the encoding processing, and $\mathrm{BiGRU}(\cdot)$ is the encoding function of the second bidirectional gating circulation unit.
S3114: and inputting sentence characteristic representations corresponding to the text data of the plurality of source fields after the coding processing and sentence characteristic representations corresponding to the text data of the plurality of target fields after the coding processing into corresponding convolution attention units in the semantic characteristic extraction module, respectively performing weighting processing on the sentence characteristic representations corresponding to the text data of the plurality of source fields after the coding processing and the sentence characteristic representations corresponding to the text data of the plurality of target fields after the coding processing according to a preset attention algorithm, and acquiring semantic characteristic representations corresponding to the text data of the plurality of source fields and semantic characteristic representations corresponding to the text data of the plurality of target fields.
The convolution Attention unit (CNN-Attention) is used for giving different weights to each input sentence feature representation, extracting more key and important information and enabling the network to make more accurate judgment.
In this embodiment, the analysis device inputs the sentence feature representations corresponding to the text data of the plurality of source fields after the encoding processing and the sentence feature representations corresponding to the text data of the plurality of target fields after the encoding processing into the convolution attention unit in the semantic feature extraction module, respectively performs weighting processing on them according to a preset attention algorithm, and acquires the semantic feature representations $H^{sem,s}$ corresponding to the text data of the plurality of source fields and the semantic feature representations $H^{sem,t}$ corresponding to the text data of the plurality of target fields. Taking the semantic feature representation corresponding to the text data of the target field as an example, the operation process of the attention algorithm is:
$c_{t}=f\big(\mathrm{Conv}_{t}*[v_{1},\dots,v_{m}]+b\big),\quad t=1,\dots,q$
$H^{sem,t}_{k}=[c_{1};c_{2};\dots;c_{q}]$
in the formula, $q$ is the number of windows in the convolutional layer, $m$ is the number of sentences in the text data, $t$ indexes the $t$-th convolution filter in the convolutional layer, $c_{t}$ is the convolution feature representation, $f$ is a nonlinear activation function, $\mathrm{Conv}_{t}$ is the convolution function of the $t$-th convolution filter, $*$ represents the convolution operation, $b$ is a preset convolution coefficient, and $H^{sem,t}_{k}$ is the semantic feature representation corresponding to the text data of the $k$-th target field.
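A rough sketch of the CNN-Attention weighting of S3114: convolution filters slide over the sequence of encoded sentence vectors, the resulting convolution features are turned into softmax weights, and the weighted sentence windows are pooled into one document-level semantic vector. The exact pooling and scoring are assumptions, since the patent excerpt only names the components ($q$ filters, $m$ sentences, nonlinearity $f$, convolution coefficient $b$):

```python
import numpy as np

def conv_attention(V, filters, b):
    """Apply q convolution filters over windows of sentence vectors,
    average the filter responses into one score per window position,
    softmax the scores, and return the weighted pooled document vector."""
    m, d = V.shape
    q, win, _ = filters.shape            # q filters with window size `win`
    feats = []
    for t in range(q):                   # c_t = f(Conv_t * V + b)
        c = [np.tanh(np.sum(filters[t] * V[j:j + win]) + b)
             for j in range(m - win + 1)]
        feats.append(np.array(c))
    scores = np.mean(feats, axis=0)      # one score per window position
    e = np.exp(scores - scores.max())
    w = e / e.sum()                      # attention weights over windows
    # weight each sentence window and sum into one document-level vector
    return sum(w[j] * V[j:j + win].mean(axis=0) for j in range(len(w)))

rng = np.random.default_rng(1)
V = rng.normal(size=(6, 5))                       # m=6 sentence vectors, dim 5
filters = rng.normal(scale=0.1, size=(3, 2, 5))   # q=3 filters, window=2
H_sem = conv_attention(V, filters, 0.0)
```

The attention weights emphasize the sentence windows whose convolution responses are strongest, which matches the stated purpose of extracting the more key and important information.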
S312: and respectively inputting the word embedding vector set to a syntactic feature extraction module in the corresponding feature extraction modules, and respectively acquiring syntactic feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the word embedding vector set to the syntactic feature extraction modules in the corresponding feature extraction modules, and obtains syntactic feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels, respectively.
The syntactic feature extraction module comprises a bidirectional gating circulation unit, a multilayer graph attention network unit and a convolution unit, wherein the multilayer graph attention network unit comprises a plurality of graph attention network layers; referring to fig. 5, fig. 5 is a schematic flowchart of S312 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, including steps S3121 to S3123, which are specifically as follows:
s3121: and inputting the word embedding vector set to the corresponding bidirectional gated recurrent unit in the syntactic feature extraction module, and respectively acquiring the context hidden-layer feature representations corresponding to the text data of the plurality of source domains and the context hidden-layer feature representations corresponding to the text data of the plurality of target domains, which are output by the bidirectional gated recurrent units of the syntactic feature extraction modules of the two groups of mutual learning migration channels.

The syntactic feature extraction module is designed on the consideration that syntactic dependencies also exist among words of different sentences, so the analysis device treats the whole text data as one long sentence, each text data containing $n_d$ words, where $n_d = m \times n$.

In this embodiment, the analysis device inputs the word embedding vector set to the corresponding bidirectional gated recurrent unit in the syntactic feature extraction module, and respectively obtains the context hidden-layer feature representations corresponding to the text data of the plurality of source domains and to the text data of the plurality of target domains, output by the bidirectional gated recurrent units of the syntactic feature extraction modules of the two groups of mutual learning migration channels, as follows:
$h = \mathrm{BiGRU}(e)$

where $h$ is the context hidden-layer feature representation, $e$ is the word embedding vector set, and $\mathrm{BiGRU}(\cdot)$ is the encoding function of the bidirectional gated recurrent unit in the syntactic feature extraction module.
S3122: and inputting the context hidden-layer feature representations corresponding to the text data of the plurality of source domains and the context hidden-layer feature representations corresponding to the text data of the plurality of target domains into the corresponding multi-layer graph attention network unit in the syntactic feature extraction module, taking the context hidden-layer feature representation as the state vector of the first graph attention network layer of the multi-layer graph attention network unit, and respectively acquiring the state vectors corresponding to the nodes of the plurality of graph attention network layers of the two groups of mutual learning migration channels according to a preset state vector calculation algorithm.
The multi-layer graph attention network unit is a GAT (Graph Attention Network), a multi-layer graph attention network constructed by stacking graph attention network layers.
In this embodiment, the analysis device inputs the context hidden layer feature representations corresponding to the text data of the source fields and the context hidden layer feature representations corresponding to the text data of the target fields into corresponding multilayer graph attention network units in the syntax feature extraction module, uses the context hidden layer feature representations as state vectors of a first graph attention network layer of the multilayer graph attention network units, and respectively obtains state vectors corresponding to nodes of the graph attention network layers of the two groups of mutual learning migration channels according to a preset state vector calculation algorithm, where the state vector calculation algorithm is:
$e_{xy}^{(l)} = \mathrm{LeakyReLU}\left(a^{\top}\left[W h_x^{(l)} \,\|\, W h_y^{(l)}\right]\right)$

$\alpha_{xy}^{(l)} = \dfrac{\exp\left(e_{xy}^{(l)}\right)}{\sum_{y' \in \mathcal{N}_x} \exp\left(e_{xy'}^{(l)}\right)}$

$h_x^{(l+1)} = \sigma\left(\sum_{y \in \mathcal{N}_x} \alpha_{xy}^{(l)}\, W h_y^{(l)}\right)$

where $\sigma(\cdot)$ is a nonlinear activation function; $e_{xy}^{(l)}$ is the raw attention score data between the $x$-th node and the $y$-th node at the $l$-th graph attention network layer; $a$ is a preset trainable weight vector, and $\|$ denotes the concatenation operation; $W$ is a preset trainable weight matrix; $h_x^{(l)}$ is the state vector corresponding to the $x$-th node of the $l$-th layer; $\alpha_{xy}^{(l)}$ is, after normalization processing, the attention weight parameter between the $x$-th node and the $y$-th node at the $l$-th graph attention network layer; $\mathrm{LeakyReLU}(\cdot)$ is a nonlinear activation function; and $\mathcal{N}_x$ is the set of all nodes adjacent to node $x$.
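As an illustrative sketch of the state vector calculation algorithm, a single graph attention layer can be written in NumPy as follows; the LeakyReLU slope, the tanh output activation, and the toy adjacency matrix are assumptions rather than the patented settings:

```python
import numpy as np

def softmax_rows(z):
    # row-wise softmax, shifted for numerical stability
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(H, A, W, a, leak=0.2):
    # H: node state vectors (n x d); A: adjacency matrix (1 = syntactic edge);
    # W: trainable weight matrix (d x d2); a: trainable weight vector (2*d2,).
    Wh = H @ W
    n = Wh.shape[0]
    e = np.empty((n, n))
    for x in range(n):
        for y in range(n):
            z = float(a @ np.concatenate([Wh[x], Wh[y]]))  # a^T [Wh_x || Wh_y]
            e[x, y] = z if z > 0 else leak * z             # LeakyReLU
    e = np.where(A > 0, e, -1e9)      # attend only over the neighbour set N_x
    alpha = softmax_rows(e)           # normalised attention weight parameters
    return np.tanh(alpha @ Wh)        # next-layer state vectors (tanh assumed)

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
W = rng.normal(size=(3, 3))
a = rng.normal(size=(6,))
H_next = gat_layer(H, A, W, a)
```

Stacking several such calls (feeding `H_next` back in as `H`) yields the multi-layer graph attention network unit described above.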
S3123: and inputting the state vectors corresponding to the nodes of the plurality of graph attention network layers of the two groups of mutual learning migration channels into the corresponding convolution units in the syntactic feature extraction module for feature extraction, and acquiring syntactic feature representations corresponding to the text data of the plurality of source fields and syntactic feature representations corresponding to the text data of the plurality of target fields.
In this embodiment, the analysis device inputs the state vectors corresponding to the nodes of the plurality of graph attention network layers of the two groups of mutual learning migration channels into the corresponding convolution units in the syntactic feature extraction module for feature extraction, and obtains the syntactic feature representations corresponding to the text data of the plurality of source domains and the syntactic feature representations corresponding to the text data of the plurality of target domains.
S313: and respectively splicing the semantic feature representation and the syntactic feature representation corresponding to the text data of the source fields and the semantic feature representation and the syntactic feature representation corresponding to the text data of the target fields, and acquiring the text feature representation corresponding to the text data of the source fields and the text feature representation corresponding to the text data of the target fields, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels.
In this embodiment, the analysis device respectively splices the semantic feature representations and syntactic feature representations corresponding to the text data of the plurality of source domains, and the semantic feature representations and syntactic feature representations corresponding to the text data of the plurality of target domains, and obtains the text feature representations $v_s$ corresponding to the text data of the plurality of source domains and the text feature representations $v_t$ corresponding to the text data of the plurality of target domains, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels.
S32: and respectively inputting the text feature representations corresponding to the text data of the source fields and the text feature representations corresponding to the text data of the target fields into emotion classification modules of the two groups of mutual learning migration channels, obtaining emotion feature representations corresponding to the text data of the source fields of the two groups of mutual learning migration channels and emotion feature representations corresponding to the text data of the target fields, and constructing a first loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the source fields.
In this embodiment, the analysis device inputs the text feature representations corresponding to the text data of the source fields and the text feature representations corresponding to the text data of the target fields to the emotion classification modules of the two groups of mutual learning migration channels, obtains emotion feature representations corresponding to the text data of the source fields of the two groups of mutual learning migration channels and emotion feature representations corresponding to the text data of the target fields, and constructs a first loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the source fields.
Referring to fig. 6, fig. 6 is a schematic flowchart of S32 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, including steps S321 to S322, which are as follows:
s321: and obtaining emotion feature representations corresponding to the text data of the source fields of the two groups of mutual learning migration channels and emotion feature representations corresponding to the text data of the target fields respectively according to the text feature representations corresponding to the text data of the source fields, the text feature representations corresponding to the text data of the target fields and an emotion classification module of the two groups of mutual learning migration channels by using a preset emotion feature representation calculation algorithm.
The emotion feature representation calculation algorithm is:

$\hat{p}_s = \mathrm{softmax}\left(W_4 v_s + b_5\right), \qquad \hat{p}_t = \mathrm{softmax}\left(W_4 v_t + b_5\right)$

where $\hat{p}_s$ is the emotion feature representation corresponding to the text data of the plurality of source domains, $\hat{p}_t$ is the emotion feature representation corresponding to the text data of the plurality of target domains, $\mathrm{softmax}(\cdot)$ is the normalization function, $W_4$ is a preset fourth trainable network parameter, and $b_5$ is a preset fifth trainable network parameter;
in this embodiment, the analysis device obtains the emotion feature representations corresponding to the text data of the plurality of source domains and the emotion feature representations corresponding to the text data of the plurality of target domains of the two groups of mutual learning migration channels, according to the text feature representations corresponding to the text data of the plurality of source domains, the text feature representations corresponding to the text data of the plurality of target domains, and the emotion feature representation calculation algorithm preset in the emotion classification modules of the two groups of mutual learning migration channels.
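A minimal sketch of the emotion classification head described by the calculation algorithm, assuming `softmax` as the normalization function and toy values for the fourth and fifth trainable parameters:

```python
import numpy as np

def softmax(z):
    # normalization function: shift for stability, exponentiate, renormalise
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def emotion_feature(v, W4, b5):
    # hat_p = softmax(W4 @ v + b5): maps a text feature representation v
    # to a probability distribution over sentiment classes
    return softmax(W4 @ v + b5)

v = np.array([0.5, -1.0, 2.0])   # toy text feature representation
W4 = np.eye(2, 3)                # toy fourth trainable parameter (2 classes)
b5 = np.zeros(2)                 # toy fifth trainable parameter
p_hat = emotion_feature(v, W4, b5)
```

The label detection module of S331 has the same functional form with its own (sixth and seventh) trainable parameters.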
S322: and obtaining label data corresponding to the text data of the plurality of source fields, and constructing a first loss function of the two groups of mutual learning migration channels according to the emotional feature representation and the label data corresponding to the text data of the plurality of source fields.
In this embodiment, the analysis device obtains the label data corresponding to the text data of the plurality of source domains, and constructs the first loss function of the two groups of mutual learning migration channels by calculating the cross entropy between the emotion feature representations and the label data corresponding to the text data of the plurality of source domains, where the first loss function is:

$\mathcal{L}_c = -\dfrac{1}{n_s} \sum_{k=1}^{n_s} y_k \log \hat{p}_s^{(k)}$

where $\mathcal{L}_c$ is the first loss function, $n_s$ is the number of text data of the source domain in the text data set of the source domain, $\hat{p}_s^{(k)}$ is the emotion feature representation corresponding to the text data of the $k$-th source domain, and $y_k$ is the label data corresponding to the text data of the $k$-th source domain.
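The first loss can be sketched as a mean cross entropy over the source-domain predictions; the class counts and example values below are illustrative:

```python
import numpy as np

def first_loss(p_hat, labels):
    # Mean cross entropy over the n_s source-domain samples:
    # L_c = -(1/n_s) * sum_k log p_hat[k, y_k]
    n_s = len(labels)
    return float(-np.mean(np.log(p_hat[np.arange(n_s), labels])))

p_hat = np.array([[0.7, 0.2, 0.1],   # predicted sentiment distributions
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])            # gold labels of the source texts
L_c = first_loss(p_hat, labels)
```

Only the probability assigned to the gold class enters the loss, so confident correct predictions drive `L_c` toward zero.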
S33: respectively inputting the text feature representations corresponding to the text data of the target domains into the label detection modules of the two groups of mutual learning migration channels, acquiring the label feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels, inputting the label feature representations corresponding to the text data of the target domains of the mutual learning migration channels into the emotion classification module of the other group of mutual learning migration channels, and constructing a second loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the source domains and the label feature representations corresponding to the text data of the target domains.
In this embodiment, the analysis device respectively inputs text feature representations corresponding to the text data of the target domains into the tag detection modules of the two groups of mutual learning migration channels, obtains tag feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels, inputs tag feature representations corresponding to the text data of the target domains of the group of mutual learning migration channels into the emotion classification module of the other group of mutual learning migration channels, and constructs the second loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the source domains and the tag feature representations corresponding to the text data of the target domains.
Referring to fig. 7, fig. 7 is a schematic flowchart of S33 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, including steps S331 to S332, which are as follows:
s331: and acquiring label feature representations corresponding to the text data of the plurality of target domains of the two groups of mutual learning migration channels according to the text feature representations corresponding to the text data of the plurality of target domains and a preset label feature representation calculation algorithm in the label detection modules of the two groups of mutual learning migration channels.
The label feature representation calculation algorithm is:

$\tilde{p}_t = \mathrm{softmax}\left(W_6 v_t + b_7\right)$

where $\tilde{p}_t$ is the label feature representation corresponding to the text data of the plurality of target domains, $W_6$ is a preset sixth trainable network parameter, and $b_7$ is a preset seventh trainable network parameter;
s332: and inputting label feature representations corresponding to the text data of a plurality of target domains of one group of mutual learning migration channels into an emotion classification module of another group of mutual learning migration channels, and constructing a second loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the plurality of source domains and the label feature representations corresponding to the text data of the plurality of target domains.
The second loss function is:

$\mathcal{L}_{kl}^{(1)} = \beta\, D\!\left(\tilde{p}_t^{(2)} \,\big\|\, \hat{p}_t^{(1)}\right), \qquad \mathcal{L}_{kl}^{(2)} = \beta\, D\!\left(\tilde{p}_t^{(1)} \,\big\|\, \hat{p}_t^{(2)}\right)$

where $\mathcal{L}_{kl}^{(1)}$ and $\mathcal{L}_{kl}^{(2)}$ are the second loss functions corresponding to the two groups of mutual learning migration channels respectively; $\hat{p}_t^{(1)}$ and $\hat{p}_t^{(2)}$ are the emotion feature representations corresponding to the text data of the target domains for the two groups of mutual learning migration channels respectively; $\tilde{p}_t^{(1)}$ and $\tilde{p}_t^{(2)}$ are the label feature representations corresponding to the text data of the target domains for the two groups of mutual learning migration channels respectively; $D(\cdot\,\|\,\cdot)$ measures the divergence between the two distributions; and $\beta$ is a preset divergence coefficient.
In this embodiment, the analysis device inputs label feature representations corresponding to text data of a plurality of target domains of one group of mutual learning migration channels into an emotion classification module of another group of mutual learning migration channels, and constructs a second loss function of the two groups of mutual learning migration channels according to emotion feature representations corresponding to text data of a plurality of source domains and label feature representations corresponding to text data of a plurality of target domains.
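A hedged sketch of the cross-channel second loss, assuming the divergence is the KL divergence commonly used in deep mutual learning (the text itself only names a divergence coefficient):

```python
import numpy as np

def kl_div(p, q):
    # KL(p || q) for discrete probability distributions
    return float(np.sum(p * np.log(p / q)))

def second_loss(label_feat_other, emotion_feat_own, beta=1.0):
    # One channel's emotion classifier is pulled toward the pseudo-label
    # distribution produced by the *other* channel's label detection module.
    return beta * kl_div(label_feat_other, emotion_feat_own)

p_tilde_2 = np.array([0.6, 0.3, 0.1])  # channel 2's label feature representation
p_hat_1   = np.array([0.5, 0.3, 0.2])  # channel 1's emotion feature representation
L_kl_1 = second_loss(p_tilde_2, p_hat_1, beta=0.5)
```

The symmetric loss for the other channel swaps the two channels' roles, which is what makes the two migration channels learn from each other.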
S34: and inputting the text feature representations corresponding to the text data of the plurality of source domains and the text feature representations corresponding to the text data of the plurality of target domains into the domain difference learning modules of the two groups of mutual learning migration channels, and constructing a third loss function of the two groups of mutual learning migration channels.
In this embodiment, the analysis device inputs the text feature representations corresponding to the text data of the plurality of source domains and the text feature representations corresponding to the text data of the plurality of target domains into the domain difference learning module of the two groups of mutual learning migration channels, and constructs a third loss function of the two groups of mutual learning migration channels.
Referring to fig. 8, fig. 8 is a schematic flowchart of S34 in the cross-domain emotion analysis method based on the mutual learning network according to an embodiment of the present application, which includes steps S341 to S343 as follows:
s341: and combining the text characteristic representations corresponding to the text data of the source fields and the text characteristic representations corresponding to the text data of the target fields respectively to obtain the text characteristic representation corresponding to the text data set of the source fields and the text characteristic representation corresponding to the text data set of the target fields.
In this embodiment, the analysis device combines the text feature representations corresponding to the text data of the plurality of source fields and the text feature representations corresponding to the text data of the plurality of target fields, respectively, to obtain the text feature representations corresponding to the text data sets of the source fields, which are used as the representation distribution of the source fields; and the text characteristic representation corresponding to the text data set of the target domain is used as the representation distribution of the target domain.
S342: and inputting the text feature representation corresponding to the text data set of the source domain and the text feature representation corresponding to the text data set of the target domain into the domain difference learning modules of the two groups of mutual learning migration channels to construct a bulldozer distance function.
The bulldozer distance function is:

$\mathcal{L}_{wd} = \dfrac{1}{n_s}\sum_{v \in V_s} f_w(v) - \dfrac{1}{n_t}\sum_{v \in V_t} f_w(v)$

where $\mathcal{L}_{wd}$ is the bulldozer distance function; $n_s$ is the number of text data of the source domain in the text data set of the source domain; $n_t$ is the number of text data of the target domain in the text data set of the target domain; $D_s$ denotes the source domain and $D_t$ denotes the target domain; $f_w(\cdot)$ is the learning function in the domain difference learning module; $V_s$ is the text feature representation corresponding to the text data set of the source domain $D_s$; and $V_t$ is the text feature representation corresponding to the text data set of the target domain $D_t$.
In this embodiment, the analysis device inputs the text feature representation corresponding to the text data set of the source domain and the text feature representation corresponding to the text data set of the target domain into the domain difference learning modules of the two groups of mutual learning migration channels, and constructs a bulldozer distance function. The difference between the representation distribution of the source domain and that of the target domain is estimated by computing the bulldozer (Wasserstein) distance, so that the difference between the two domains can be minimized, domain-invariant features can be extracted, and the accuracy of network training is improved.
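The bulldozer distance estimate reduces to a difference of means of the learning function over the two feature sets; the fixed linear critic below is an illustrative stand-in for the trainable learning function:

```python
import numpy as np

def wasserstein_estimate(f_w, V_s, V_t):
    # L_wd = mean of f_w over source features minus mean over target features;
    # f_w plays the role of the learning function in the domain difference module
    return float(np.mean([f_w(v) for v in V_s]) - np.mean([f_w(v) for v in V_t]))

# toy critic: a fixed linear function (trainable in the described network)
w = np.array([1.0, -0.5])
f_w = lambda v: float(w @ v)

V_s = np.array([[1.0, 0.0], [2.0, 1.0]])   # source text feature representations
V_t = np.array([[0.0, 0.0], [1.0, 1.0]])   # target text feature representations
L_wd = wasserstein_estimate(f_w, V_s, V_t)
```

Training maximizes this estimate over the critic's parameters while the feature extractor minimizes it, shrinking the gap between the two representation distributions.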
S343: and constructing a gradient penalty function according to preset learning training parameters and a balance coefficient, and acquiring a third loss function of the domain difference learning modules of the two mutual learning migration channels according to the bulldozer distance function and the gradient penalty function.

In order to improve the accuracy of network training, the analysis device could enforce the Lipschitz constraint after each gradient update by clipping the weights to a preset clipping range; however, weight clipping can cause insufficient capacity and gradient vanishing or explosion. Therefore, in this embodiment, the analysis device instead constructs a gradient penalty function according to preset learning training parameters and a balance coefficient to avoid these problems, subtracts the gradient penalty function (weighted by the balance coefficient) from the bulldozer distance function, and takes the subtraction result as the third loss function of the domain difference learning modules of the two mutual learning migration channels.
Wherein the gradient penalty function is:

$\mathcal{L}_{grad} = \left(\left\| \nabla_{\hat{v}}\, f_w(\hat{v}) \right\|_2 - 1\right)^2$

where $\hat{v}$ is the gradient penalty feature representation, sampled at random points along the lines connecting $V_s$ and $V_t$ in the feature space, and $\lambda$ is the balance coefficient.

The third loss function is:

$\mathcal{L}_3 = \mathcal{L}_{wd} - \lambda\, \mathcal{L}_{grad}$
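A sketch of the gradient penalty under the simplifying assumption of a linear critic, whose gradient with respect to its input is the weight vector itself; a real implementation would obtain the gradient by automatic differentiation:

```python
import numpy as np

def gradient_penalty_linear(w, V_s, V_t, rng):
    # For a linear critic f_w(v) = w @ v, the gradient w.r.t. v is w, so the
    # penalty (||grad f_w(v_hat)||_2 - 1)^2 is identical at every interpolated
    # point v_hat sampled on the lines connecting source and target features.
    eps = rng.uniform(size=(len(V_s), 1))
    v_hat = eps * V_s + (1 - eps) * V_t   # random points on the connecting lines
    grad_norm = np.linalg.norm(w)         # gradient of w @ v is w
    return float((grad_norm - 1.0) ** 2), v_hat

w = np.array([0.6, 0.8])                  # ||w|| = 1, so the penalty vanishes
rng = np.random.default_rng(2)
V_s = np.array([[1.0, 0.0], [2.0, 1.0]])
V_t = np.array([[0.0, 0.0], [1.0, 1.0]])
L_grad, v_hat = gradient_penalty_linear(w, V_s, V_t, rng)
# third loss sketch: L3 = L_wd - lambda * L_grad
```

The penalty is zero exactly when the critic is 1-Lipschitz along the sampled points, which is the constraint weight clipping was trying to approximate.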
s35: and respectively constructing the total loss functions corresponding to the two groups of mutual learning migration channels according to the first loss function, the second loss function and the third loss function of the two groups of mutual learning migration channels, and performing optimization training according to the total loss functions corresponding to the two groups of mutual learning migration channels to obtain the target mutual learning network.

In this embodiment, the analysis device respectively constructs the total loss function corresponding to the two groups of mutual learning migration channels according to the first loss function, the second loss function and the third loss function of the two groups of mutual learning migration channels, where the total loss function is:
$\mathcal{L}^{(1)} = \mathcal{L}_c^{(1)} + \gamma\, \mathcal{L}_{kl}^{(1)} + \eta\, \mathcal{L}_3^{(1)}, \qquad \mathcal{L}^{(2)} = \mathcal{L}_c^{(2)} + \gamma\, \mathcal{L}_{kl}^{(2)} + \eta\, \mathcal{L}_3^{(2)}$

where $\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$ are the total loss functions corresponding to the two groups of mutual learning migration channels respectively; $\mathcal{L}_c^{(1)}$ and $\mathcal{L}_c^{(2)}$ are the first loss functions corresponding to the two groups of mutual learning migration channels respectively; $\mathcal{L}_{kl}^{(1)}$ and $\mathcal{L}_{kl}^{(2)}$ are the second loss functions corresponding to the two groups of mutual learning migration channels respectively; $\mathcal{L}_3^{(1)}$ and $\mathcal{L}_3^{(2)}$ are the third loss functions corresponding to the two groups of mutual learning migration channels respectively; $\gamma$ is a preset first hyper-parameter; and $\eta$ is a preset second hyper-parameter.
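The total loss combination itself is a straightforward weighted sum per channel; the numeric values below are illustrative:

```python
def total_loss(L_c, L_kl, L_3, gamma, eta):
    # L = L_c + gamma * L_kl + eta * L_3 for one mutual learning channel,
    # with gamma and eta the preset first and second hyper-parameters
    return L_c + gamma * L_kl + eta * L_3

L1_total = total_loss(L_c=0.29, L_kl=0.04, L_3=0.8, gamma=0.5, eta=0.1)
```

Each channel is optimized against its own weighted sum; only the second loss couples the two channels.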
The analysis device performs optimization training according to the total loss functions corresponding to the two groups of mutual learning migration channels to obtain the target mutual learning network. Specifically, the analysis device solves the minimum-maximum problem

$\min_{\theta} \max_{\theta_w} \left( \mathcal{L}_{wd} - \lambda\, \mathcal{L}_{grad} \right)$

where $\theta_w$ denotes the preset second function learning training parameters used to estimate the bulldozer distance. Since the bulldozer distance is continuously differentiable almost everywhere, the analysis device first optimizes the feature representation through iterative learning; after this optimization is completed, the analysis device fixes the second function learning training parameters and minimizes the bulldozer distance, so as to extract the features common to both domains by solving the minimum-maximum problem above. In a preferred embodiment, the analysis device sets the balance coefficient $\lambda$ to 0 when performing the minimization operation, since the gradient penalty should not guide the training process.
S4: and responding to an analysis instruction, acquiring text data to be analyzed, inputting the text data to be analyzed into the target mutual learning network, and acquiring an emotion analysis result output by the target mutual learning network.
The analysis instruction is sent by a user and received by the analysis equipment.
In this embodiment, the analysis device responds to an analysis instruction, obtains text data to be analyzed, inputs the text data to be analyzed into the target mutual learning network, and obtains an emotion analysis result output by the target mutual learning network.
Referring to fig. 9, fig. 9 is a schematic flowchart of S4 in the cross-domain emotion analysis method based on mutual learning network according to an embodiment of the present application, including steps S41 to S42, as follows:
s41: inputting the text data to be analyzed into the target mutual learning network to obtain text feature representation of the text data to be analyzed, and obtaining the emotion polarity vector of the text data to be analyzed according to the text feature representation of the text data to be analyzed and a preset emotion polarity vector calculation algorithm.
The emotion polarity vector calculation algorithm is:

$p = \mathrm{softmax}\left(W_p v + b_p\right)$

where $p$ is the emotion polarity vector, $\mathrm{softmax}(\cdot)$ is the normalization function, $W_p$ is the weight update parameter, $b_p$ is the weight update bias term, and $v$ is the text feature representation of the text data to be analyzed;
in this embodiment, the analysis device inputs the text data to be analyzed into the target mutual learning network, obtains the text feature representation of the text data to be analyzed, and obtains the emotion polarity vector of the text data to be analyzed according to the text feature representation of the text data to be analyzed and the preset emotion polarity vector calculation algorithm.
S42: and acquiring the emotion polarity corresponding to the dimensionality with the maximum probability according to the emotion classification polarity probability distribution vector, and using the emotion polarity as an emotion analysis result output by the target mutual learning network.
In this embodiment, the analysis device obtains, according to the emotion classification polarity probability distribution vector, the emotion polarity corresponding to the dimension with the highest probability as the emotion analysis result output by the target mutual learning network. Specifically, when the calculation yields $p = [p_1, p_2, p_3, p_4, p_5] = [0.1, 0.5, 0.1, 0.2, 0.1]$, the maximum probability is $p_2 = 0.5$, so the emotion polarity score corresponding to the dimension with the highest probability is 2, which is taken as the emotion analysis result output by the target mutual learning network; the higher the score, the more positive the text, and the lower the score, the more negative the text.
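The final argmax step, using the example probability vector from the text:

```python
import numpy as np

p = np.array([0.1, 0.5, 0.1, 0.2, 0.1])  # emotion polarity probability vector
score = int(np.argmax(p)) + 1            # dimensions are 1-indexed polarity scores
```

Here the second dimension has the highest probability, so the returned polarity score is 2.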
Referring to fig. 10, fig. 10 is a schematic structural diagram of a cross-domain emotion analyzing apparatus based on mutual learning network according to an embodiment of the present application, where the apparatus may implement all or a part of the cross-domain emotion analyzing method based on mutual learning network through software, hardware or a combination of the two, and the apparatus 10 includes:
a training text data set obtaining module 101, configured to obtain a training text data set, where the training text data set includes text data of a plurality of source fields and text data of a plurality of target fields, the text data includes a plurality of sentences, and the sentences include a plurality of words;
a word embedding vector representation obtaining module 102, configured to input the training text data set into a preset word embedding model, and obtain a word embedding vector set, where the word embedding vector set includes word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields;
the network training module 103 is configured to input the word embedding vector set into a preset mutual learning network, construct a loss function corresponding to the mutual learning network, perform optimization training, and obtain a target mutual learning network, where the mutual learning network includes two groups of mutual learning migration channels;
the text data analysis module 104 is configured to, in response to an analysis instruction, acquire text data to be analyzed, input the text data to be analyzed into the target mutual learning network, and acquire an emotion analysis result output by the target mutual learning network.
In an embodiment of the application, a training text data set is obtained through a training text data set obtaining module, wherein the training text data set comprises text data of a plurality of source fields and text data of a plurality of target fields, the text data comprises a plurality of sentences, and the sentences comprise a plurality of words; the word embedding vector representation obtaining module is used for inputting the training text data set into a preset word embedding model to obtain a word embedding vector set, wherein the word embedding vector set comprises word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields; inputting the word embedding vector set into a preset mutual learning network through a network training module, constructing a loss function corresponding to the mutual learning network, performing optimization training, and obtaining a target mutual learning network, wherein the mutual learning network comprises two groups of mutual learning migration channels; the text data analysis module responds to an analysis instruction to obtain text data to be analyzed, the text data to be analyzed is input into the target mutual learning network, and emotion analysis results output by the target mutual learning network are obtained. By utilizing a deep mutual learning method, a mutual learning network is constructed, each group of mutual learning migration channels in the mutual learning network is associated with a source field and a target field, and the mutual learning network is optimally trained by minimizing the difference between the two fields, so that the accuracy and efficiency of the optimal training are improved, and the emotion analysis can be more comprehensively carried out on the text data.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application, where the computer device 11 includes: a processor 111, a memory 112, and a computer program 113 stored on the memory 112 and operable on the processor 111; the computer device may store a plurality of instructions, where the instructions are suitable for being loaded by the processor 111 and executing the method steps in the embodiments described in fig. 1 to 9, and a specific execution process may refer to specific descriptions of the embodiments described in fig. 1 to 9, which is not described herein again.
Processor 111 may include one or more processing cores. The processor 111 connects various parts of the server through various interfaces and lines, and performs the functions of the cross-domain emotion analyzing apparatus 10 based on the mutual learning network and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 112 and calling data in the memory 112. Optionally, the processor 111 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 111 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen; and the modem handles wireless communications. It is understood that the modem may also not be integrated into the processor 111 and may instead be implemented by a separate chip.
The Memory 112 may include a Random Access Memory (RAM) and may also include a Read-Only Memory (ROM). Optionally, the memory 112 includes a non-transitory computer-readable medium. The memory 112 may be used to store instructions, programs, code sets, or instruction sets. The memory 112 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions), instructions for implementing the various method embodiments described above, and the like; and the data storage area may store the data referred to in the above method embodiments. The memory 112 may optionally be at least one storage device located remotely from the processor 111.
The embodiment of the present application further provides a storage medium, where the storage medium may store a plurality of instructions suitable for being loaded by a processor to execute the method steps of the first to third embodiments; a specific execution process may refer to the specific descriptions of the embodiments shown in fig. 1 to fig. 9, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (10)

1. A cross-domain emotion analysis method based on a mutual learning network is characterized by comprising the following steps:
acquiring a training text data set, wherein the training text data set comprises a text data set of a source field and a text data set of a target field, the text data set comprises a plurality of text data, the text data comprises a plurality of sentences, and the sentences comprise a plurality of words;
inputting the training text data set into a preset word embedding model to obtain a word embedding vector set, wherein the word embedding vector set comprises word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields;
inputting the word embedding vector set into a preset mutual learning network, constructing a loss function corresponding to the mutual learning network, performing optimization training, and obtaining a target mutual learning network, wherein the mutual learning network comprises two groups of mutual learning migration channels;
and responding to an analysis instruction, acquiring text data to be analyzed, inputting the text data to be analyzed into the target mutual learning network, and acquiring an emotion analysis result output by the target mutual learning network.
2. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 1, wherein:
the two groups of mutual learning migration channels respectively comprise a feature extraction module, an emotion classification module, a field difference learning module and a label detection module;
inputting the word embedding vector set into a preset mutual learning network, and constructing a loss function corresponding to the mutual learning network, including the steps of:
respectively inputting the word embedding vector set to the feature extraction modules in the two groups of mutual learning migration channels, and acquiring text feature representations corresponding to the text data of the plurality of source fields and the text feature representations corresponding to the text data of the plurality of target fields, which are output by the feature extraction modules of the two groups of mutual learning migration channels;
respectively inputting text feature representations corresponding to the text data of the source fields and text feature representations corresponding to the text data of the target fields into emotion classification modules of the two groups of mutual learning migration channels, obtaining emotion feature representations corresponding to the text data of the source fields and emotion feature representations corresponding to the text data of the target fields of the two groups of mutual learning migration channels, and constructing first loss functions of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the source fields;
respectively inputting text feature representations corresponding to the text data of the target domains into label detection modules of the two groups of mutual learning migration channels, acquiring label feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels, inputting label feature representations corresponding to the text data of the target domains of the mutual learning migration channels into an emotion classification module of the other group of mutual learning migration channels, and constructing a second loss function of the two groups of mutual learning migration channels according to emotion feature representations corresponding to the text data of the source domains and label feature representations corresponding to the text data of the target domains;
inputting the text feature representations corresponding to the text data of the plurality of source domains and the text feature representations corresponding to the text data of the plurality of target domains into the domain difference learning modules of the two groups of mutual learning migration channels, and constructing a third loss function of the two groups of mutual learning migration channels;
respectively constructing total loss functions corresponding to the two groups of mutual learning migration channels according to the first loss function, the second loss function and the third loss function of the two groups of mutual learning migration channels, and performing optimization training according to the total loss functions corresponding to the two groups of mutual learning migration channels to obtain the target mutual learning network.
3. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 2, wherein:
the feature extraction module comprises a semantic feature extraction module and a syntactic feature extraction module which are connected in sequence;
the step of inputting the word embedding vector set into the feature extraction modules in the two groups of mutual learning migration channels respectively to obtain text feature representations corresponding to the text data of the plurality of source fields and the text feature representations corresponding to the text data of the plurality of target fields, which are output by the feature extraction modules in the two groups of mutual learning migration channels, includes the steps of:
respectively inputting the word embedding vector set to semantic feature extraction modules in the corresponding feature extraction modules, and respectively acquiring semantic feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the semantic feature extraction modules of the two groups of mutual learning migration channels;
respectively inputting the word embedding vector set to a syntactic feature extraction module in the corresponding feature extraction module, and respectively acquiring syntactic feature representations corresponding to the text data of the plurality of source fields and the syntactic feature representations corresponding to the text data of the plurality of target domains, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels;
and respectively splicing the semantic feature representation and the syntactic feature representation corresponding to the text data of the source fields and the semantic feature representation and the syntactic feature representation corresponding to the text data of the target fields, and acquiring the text feature representation corresponding to the text data of the source fields and the text feature representation corresponding to the text data of the target fields, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels.
4. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 3, wherein:
the semantic feature extraction module comprises a first bidirectional gated loop unit, a soft attention unit, a second bidirectional gated loop unit and a convolution attention unit, wherein the convolution attention unit comprises a plurality of convolution layers;
the method comprises the following steps of respectively inputting the word embedding vector set into semantic feature extraction modules in the corresponding feature extraction modules, respectively obtaining semantic feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the semantic feature extraction modules of the two groups of mutual learning migration channels, and respectively:
inputting the word embedding vector set into a first bidirectional gating circulating unit in the corresponding semantic feature extraction module, respectively encoding the word embedding vector set, and acquiring hidden layer feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the first bidirectional gating circulating unit of the semantic feature extraction modules of the two groups of mutual learning migration channels;
inputting hidden layer feature representations corresponding to the text data of the source domains and hidden layer feature representations corresponding to the text data of the target domains into corresponding soft attention units in the semantic feature extraction module, acquiring corresponding attention weight parameters and attention weight parameters corresponding to the text data of the target domains according to a preset attention weight parameter calculation algorithm, and respectively acquiring sentence feature representations corresponding to the text data of the source domains and sentence feature representations corresponding to the text data of the target domains, which are output by the soft attention units of the two groups of mutual learning migration channels, according to a preset sentence feature representation calculation algorithm, wherein the attention weight parameter calculation algorithm is as follows:
\alpha_i = \mathrm{softmax}\left( v^{\top} \tanh\left( W_1 h_i + b_1 \right) \right)

in the formula, \alpha_i is the attention weight parameter, h_i is the hidden layer feature representation of the i-th word, W_1 is a preset first trainable network parameter, b_1 is a preset second trainable network parameter, and v is a preset third trainable network parameter;
the sentence characteristic representation calculation algorithm is as follows:
s = \sum_{i} \alpha_i h_i

in the formula, s is the sentence feature representation;
respectively inputting sentence characteristic representations corresponding to the text data of the source fields and sentence characteristic representations corresponding to the text data of the target fields into second bidirectional gating circulation units in the corresponding semantic characteristic extraction modules for coding, and respectively acquiring the sentence characteristic representations corresponding to the text data of the source fields and the sentence characteristic representations corresponding to the text data of the target fields after coding, which are output by the second bidirectional gating circulation units of the two groups of mutual learning migration channels;
inputting the sentence characteristic representations corresponding to the text data of the plurality of source fields after the encoding processing and the sentence characteristic representations corresponding to the text data of the plurality of target fields after the encoding processing into corresponding convolution attention units in the semantic characteristic extraction module, respectively performing weighting processing on the sentence characteristic representations corresponding to the text data of the plurality of source fields after the encoding processing and the sentence characteristic representations corresponding to the text data of the plurality of target fields after the encoding processing according to a preset attention algorithm, and acquiring the semantic characteristic representations corresponding to the text data of the plurality of source fields and the semantic characteristic representations corresponding to the text data of the plurality of target fields, wherein the attention algorithm is as follows:
c_t = f\left( g_t * S + W_c \right), \quad t = 1, \dots, q; \qquad v_k = \left[ \max(c_1); \dots; \max(c_q) \right]

in the formula, q is the number of windows in the convolution attention unit, m is the number of sentences in the text data, t indexes the t-th convolution filter in the convolution attention unit, c_t is the convolution feature representation, f is a nonlinear activation function, g_t denotes the function operation of the t-th convolution filter, * denotes the convolution operation, W_c is the preset convolution coefficient, S = [s_1, \dots, s_m] stacks the encoded sentence feature representations, and v_k is the semantic feature representation corresponding to the text data of the k-th target domain.
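A runnable sketch of the semantic path in claim 4 — soft attention over encoder hidden states followed by convolutional pooling over sentence vectors. This is a hedged reconstruction: the bidirectional GRU encoders are replaced by random hidden states, and the window sizes and max-pooling choice are assumptions, since the text only names the components.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_attention(H, W1, b1, v):
    """H: (m, d) hidden-layer feature representations; returns (sentence vector, weights)."""
    u = np.tanh(H @ W1 + b1)                   # project with trainable parameters
    scores = u @ v
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # attention weights sum to 1
    return alpha @ H, alpha                    # s = sum_i alpha_i * h_i

def conv_attention(S, filters):
    """S: (m, d) encoded sentence vectors; filters: (q, w, d) convolution kernels."""
    m, _ = S.shape
    q, w, _ = filters.shape
    feats = []
    for t in range(q):                         # t-th convolution filter
        fmap = [np.maximum((S[i:i + w] * filters[t]).sum(), 0.0)
                for i in range(m - w + 1)]     # slide over sentence windows
        feats.append(max(fmap))                # max-pool each filter's feature map
    return np.array(feats)                     # semantic feature, one value per filter

d, k, m, q, w = 8, 6, 5, 4, 2
H = rng.normal(size=(m, d))                    # stand-in first-BiGRU hidden states
s, alpha = soft_attention(H, rng.normal(size=(d, k)), np.zeros(k), rng.normal(size=k))
S = rng.normal(size=(m, d))                    # stand-in second-BiGRU sentence encodings
sem = conv_attention(S, rng.normal(size=(q, w, d)))
```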
5. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 3, wherein:
the syntactic characteristic extraction module comprises a bidirectional gating circulation unit, a multilayer drawing attention network unit and a convolution unit, wherein the multilayer drawing attention network unit comprises a plurality of drawing attention network layers;
the method comprises the following steps of inputting the word embedding vector set into corresponding syntactic feature extraction modules in the feature extraction modules respectively, and acquiring syntactic feature representations corresponding to the text data of the source fields and the text data of the target fields, which are output by the syntactic feature extraction modules of the two groups of mutual learning migration channels, respectively, and the syntactic feature representations corresponding to the text data of the target fields, and comprises the following steps:
inputting the word embedding vector set to a bidirectional gating circulating unit in the corresponding syntactic feature extraction module, and respectively acquiring context hidden layer feature representations corresponding to the text data of the plurality of source fields and the text data of the plurality of target fields, which are output by the bidirectional gating circulating unit of the syntactic feature extraction module of the two groups of mutual learning migration channels;
inputting the context hidden layer feature representations corresponding to the text data of the source fields and the context hidden layer feature representations corresponding to the text data of the target fields into corresponding multilayer drawing attention network units in the syntactic feature extraction module, taking the context hidden layer feature representations as state vectors of a first drawing attention network layer of the multilayer drawing attention network units, and respectively acquiring the state vectors corresponding to nodes of the drawing attention network layers of the two groups of mutual learning migration channels according to a preset state vector calculation algorithm, wherein the state vector calculation algorithm is as follows:
e_{xy}^{(l)} = \mathrm{LeakyReLU}\left( a^{\top} \left[ W h_x^{(l)} \,\|\, W h_y^{(l)} \right] \right), \qquad \alpha_{xy}^{(l)} = \frac{\exp\left(e_{xy}^{(l)}\right)}{\sum_{y' \in N_x} \exp\left(e_{xy'}^{(l)}\right)}, \qquad h_x^{(l+1)} = \sigma\left( \sum_{y \in N_x} \alpha_{xy}^{(l)} W h_y^{(l)} \right)

in the formula, \sigma is a nonlinear activation function; e_{xy}^{(l)} is the raw attention score data between the x-th node and the y-th node at the l-th graph attention network layer; a is a preset trainable weight vector; \| denotes the concatenation operation; W is a preset trainable weight matrix; h_x^{(l)} is the state vector corresponding to the x-th node of the l-th layer; \alpha_{xy}^{(l)} is the attention weight parameter between the x-th node and the y-th node at the l-th graph attention network layer after normalization processing; \mathrm{LeakyReLU} is a nonlinear activation function; and N_x is the set of all nodes adjacent to node x;
and inputting the state vectors corresponding to the nodes of the plurality of graph attention network layers of the two groups of mutual learning migration channels into the corresponding convolution units in the syntactic feature extraction module for feature extraction, and acquiring syntactic feature representations corresponding to the text data of the plurality of source fields and syntactic feature representations corresponding to the text data of the plurality of target fields.
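The graph attention propagation of claim 5 can be sketched as follows. This is a hedged reconstruction: the adjacency matrix would come from a syntactic (dependency) structure of the sentence, tanh is assumed for the output activation, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def gat_layer(H, A, W, a):
    """One graph attention network layer.
    H: (n, d) node state vectors, A: (n, n) adjacency with self-loops,
    W: (d, k) trainable weight matrix, a: (2k,) trainable attention vector."""
    Z = H @ W
    H_new = np.zeros_like(Z)
    for x in range(Z.shape[0]):
        nbrs = np.flatnonzero(A[x])                        # N_x, nodes adjacent to x
        e = np.array([leaky_relu(a @ np.concatenate([Z[x], Z[y]])) for y in nbrs])
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                               # normalized attention weights
        H_new[x] = np.tanh(alpha @ Z[nbrs])                # sigma(sum_y alpha_xy * W h_y)
    return H_new

n, d, k = 4, 5, 5
A = np.eye(n)                                              # self-loops
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = A[2, 3] = A[3, 2] = 1  # chain-shaped graph
H = rng.normal(size=(n, d))
H_out = gat_layer(H, A, rng.normal(size=(d, k)), rng.normal(size=2 * k))
```

Stacking several such layers and feeding the stacked node states to a convolution unit yields the syntactic feature representations.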
6. The cross-domain emotion analysis method based on mutual learning network of claim 2, wherein the text feature representations corresponding to the text data of the source domains and the text feature representations corresponding to the text data of the target domains are respectively input to emotion classification modules of the two groups of mutual learning migration channels, emotion feature representations corresponding to the text data of the source domains of the two groups of mutual learning migration channels and emotion feature representations corresponding to the text data of the target domains are obtained, and the first loss functions of the two groups of mutual learning migration channels are constructed according to the emotion feature representations corresponding to the text data of the source domains, including:
according to the text feature representations corresponding to the text data of the source fields, the text feature representations corresponding to the text data of the target fields and the emotion classification modules of the two groups of mutual learning migration channels, a preset emotion feature representation calculation algorithm is used for respectively obtaining emotion feature representations corresponding to the text data of the source fields of the two groups of mutual learning migration channels and emotion feature representations corresponding to the text data of the target fields, wherein the emotion feature representation calculation algorithm is as follows:
\hat{y}^{s} = \mathrm{softmax}\left( W_4 r^{s} + b_4 \right), \qquad \hat{y}^{t} = \mathrm{softmax}\left( W_4 r^{t} + b_4 \right)

in the formula, \hat{y}^{s} is the emotion feature representation corresponding to the text data of the source domain, \hat{y}^{t} is the emotion feature representation corresponding to the text data of the target domain, r^{s} and r^{t} are the corresponding text feature representations, \mathrm{softmax} is the normalization function, W_4 is a preset fourth trainable network parameter, and b_4 is a preset fifth trainable network parameter;
obtaining label data corresponding to the text data of the source fields, and constructing a first loss function of the two groups of mutual learning migration channels according to emotion feature representations and the label data corresponding to the text data of the source fields, wherein the first loss function is as follows:
L_1 = -\frac{1}{n_s} \sum_{k=1}^{n_s} y_k \log \hat{y}_k^{s}

in the formula, L_1 is the first loss function, n_s is the number of text data of the source domain in the text data set of the source domain, and y_k is the label data corresponding to the k-th text data of the source domain.
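The softmax classifier and cross-entropy first loss of claim 6 can be sketched as below. The parameter names `W4` and `b4` stand in for the fourth and fifth trainable network parameters; the feature matrix here is synthetic.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def emotion_probs(R, W4, b4):
    """Emotion feature representations: normalized class probabilities."""
    return softmax(R @ W4 + b4)

def first_loss(R_src, Y_src, W4, b4):
    """Cross-entropy over labelled source-domain data (Y_src: one-hot, shape (n_s, c))."""
    Y_hat = emotion_probs(R_src, W4, b4)
    return -np.mean(np.sum(Y_src * np.log(Y_hat + 1e-12), axis=1))

# With zero parameters the classifier is uniform, so the loss equals log(c).
n_s, d, c = 6, 4, 2
R = np.ones((n_s, d))
Y = np.tile(np.array([[1.0, 0.0]]), (n_s, 1))
loss = first_loss(R, Y, np.zeros((d, c)), np.zeros(c))
```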
7. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 6, wherein the step of respectively inputting the text feature representations corresponding to the text data of the plurality of target domains into the label detection modules of the two groups of mutual learning migration channels, acquiring the label feature representations corresponding to the text data of the plurality of target domains of the two groups of mutual learning migration channels, inputting the label feature representations corresponding to the text data of the plurality of target domains of one group of mutual learning migration channels into the emotion classification module of the other group of mutual learning migration channels, and constructing a second loss function of the two groups of mutual learning migration channels according to the emotion feature representations corresponding to the text data of the plurality of source domains and the label feature representations corresponding to the text data of the plurality of target domains comprises the following steps:
according to the text feature representation corresponding to the text data of the target domains and a preset label feature representation calculation algorithm in the label detection modules of the two groups of mutual learning migration channels, obtaining label feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels, wherein the label feature representation calculation algorithm is as follows:
\tilde{q}^{t} = \mathrm{softmax}\left( W_6 r^{t} + b_6 \right)

in the formula, \tilde{q}^{t} is the label feature representation corresponding to the text data of the target domain, W_6 is a preset sixth trainable network parameter, and b_6 is a preset seventh trainable network parameter;
inputting label feature representations corresponding to text data of a plurality of target domains of one group of mutual learning migration channels into an emotion classification module of another group of mutual learning migration channels, and constructing second loss functions of the two groups of mutual learning migration channels according to emotion feature representations corresponding to the text data of the plurality of source domains and label feature representations corresponding to the text data of the plurality of target domains, wherein the second loss functions are as follows:
L_2^{(1)} = \beta \, \mathrm{KL}\left( \tilde{q}^{t}_{(2)} \,\middle\|\, \hat{y}^{t}_{(1)} \right), \qquad L_2^{(2)} = \beta \, \mathrm{KL}\left( \tilde{q}^{t}_{(1)} \,\middle\|\, \hat{y}^{t}_{(2)} \right)

in the formula, L_2^{(1)} and L_2^{(2)} are the second loss functions corresponding to the two groups of mutual learning migration channels respectively; \hat{y}^{t}_{(1)} and \hat{y}^{t}_{(2)} are the emotion feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels respectively; \tilde{q}^{t}_{(1)} and \tilde{q}^{t}_{(2)} are the label feature representations corresponding to the text data of the target domains of the two groups of mutual learning migration channels respectively; and \beta is a preset divergence coefficient.
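A sketch of the cross-channel second loss of claim 7, with each channel's target-domain predictions pulled toward the other channel's label feature representations. The scaling by a divergence coefficient follows the claim; the function names and sample distributions are our own.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL(p || q) averaged over the batch; p, q: (n, c) rows of probabilities."""
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

def second_losses(y1_t, y2_t, q1_t, q2_t, beta):
    """Cross-channel supervision: each channel's classifier output on the target
    domain is matched against the other channel's label feature representation."""
    L2_ch1 = beta * kl_div(q2_t, y1_t)   # channel 2's labels supervise channel 1
    L2_ch2 = beta * kl_div(q1_t, y2_t)   # channel 1's labels supervise channel 2
    return L2_ch1, L2_ch2

y1 = np.array([[0.7, 0.3], [0.4, 0.6]])   # channel 1 target-domain predictions
y2 = np.array([[0.6, 0.4], [0.5, 0.5]])   # channel 2 target-domain predictions
q1 = np.array([[0.8, 0.2], [0.3, 0.7]])   # channel 1 label feature representations
q2 = np.array([[0.9, 0.1], [0.2, 0.8]])   # channel 2 label feature representations
L2_ch1, L2_ch2 = second_losses(y1, y2, q1, q2, beta=0.5)
```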
8. The cross-domain emotion analysis method based on mutual learning network as claimed in claim 6, wherein the step of inputting the text feature representations corresponding to the text data of the plurality of source domains and the text feature representations corresponding to the text data of the plurality of target domains into the domain difference learning modules of the two groups of mutual learning migration channels to construct a third loss function of the two groups of mutual learning migration channels comprises the steps of:
combining the text characteristic representations corresponding to the text data of the source fields and the text characteristic representations corresponding to the text data of the target fields respectively to obtain the text characteristic representation corresponding to the text data set of the source fields and the text characteristic representation corresponding to the text data set of the target fields;
inputting the text feature representation corresponding to the text data set of the source domain and the text feature representation corresponding to the text data set of the target domain into the domain difference learning modules of the two groups of mutual learning migration channels, and constructing a bulldozer (earth mover's) distance function, wherein the bulldozer distance function is as follows:
W(S, T) = \sup_{\|f\|_L \le 1} \left( \frac{1}{n_s} \sum_{h \in H^{s}} f(h) - \frac{1}{n_t} \sum_{h \in H^{t}} f(h) \right)

in the formula, W(S, T) is the bulldozer (earth mover's) distance function, n_s is the number of text data of the source domain in the text data set of the source domain, n_t is the number of text data of the target domain in the text data set of the target domain, S denotes the source domain, T denotes the target domain, f is the learning function in the domain difference learning module (constrained to be 1-Lipschitz), H^{s} is the text feature representation corresponding to the text data set of the source domain, and H^{t} is the text feature representation corresponding to the text data set of the target domain;
constructing a gradient penalty function according to preset learning training parameters and a balance coefficient, and acquiring the third loss function of the domain difference learning modules of the two groups of mutual learning migration channels according to the bulldozer distance function and the gradient penalty function, wherein the gradient penalty function is:

$L_{grad} = \left( \left\| \nabla_{\hat h} f_w(\hat h) \right\|_2 - 1 \right)^2$

in the formula, $\hat h$ is the gradient penalty feature representation, obtained by sampling random points on the line connecting the text feature representation of the source-domain text data set and the text feature representation of the target-domain text data set in the feature space, and $\rho$ is the balance coefficient.
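Claim 8's bulldozer (earth-mover) distance and gradient penalty can be sketched as follows. To keep the example self-contained, a linear critic f_w(h) = w·h stands in for the domain difference learning module, so its gradient with respect to h is simply w and the WGAN-GP-style penalty has a closed form; the linear critic, the function names, and the loss sign convention (Wasserstein term minus a weighted penalty, as in WDGRL) are illustrative assumptions, not the patent's implementation:

```python
import math
import random

def critic(w, h):
    """Linear stand-in for the learning function f_w: f_w(h) = w . h."""
    return sum(wi * hi for wi, hi in zip(w, h))

def wasserstein_distance(w, source_feats, target_feats):
    """Empirical bulldozer distance: mean critic score over the
    source features minus mean critic score over the target features."""
    ls = sum(critic(w, h) for h in source_feats) / len(source_feats)
    lt = sum(critic(w, h) for h in target_feats) / len(target_feats)
    return ls - lt

def gradient_penalty(w, source_feats, target_feats):
    """(||grad_h f_w(h_hat)||_2 - 1)^2 at a random interpolation h_hat
    on the line between a source and a target feature. For the linear
    critic the gradient is w regardless of h_hat, so the penalty
    reduces to (||w|| - 1)^2."""
    hs, ht = random.choice(source_feats), random.choice(target_feats)
    eps = random.random()
    h_hat = [eps * a + (1 - eps) * b for a, b in zip(hs, ht)]  # interpolate
    grad = w  # d(w . h)/dh = w for the linear critic
    norm = math.sqrt(sum(g * g for g in grad))
    return (norm - 1.0) ** 2

def third_loss(w, source_feats, target_feats, rho=10.0):
    """Domain difference loss: bulldozer distance minus the balance
    coefficient rho times the gradient penalty."""
    return (wasserstein_distance(w, source_feats, target_feats)
            - rho * gradient_penalty(w, source_feats, target_feats))

w = [0.6, 0.8]                    # unit-norm critic weights: zero penalty
src = [[1.0, 2.0], [2.0, 1.0]]    # source-domain text feature representations
tgt = [[0.0, 1.0], [1.0, 0.0]]    # target-domain text feature representations
loss = third_loss(w, src, tgt)
```

With a real critic network the gradient would come from automatic differentiation rather than this closed form, but the structure of the loss is the same.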
9. The method for cross-domain emotion analysis based on mutual learning network as claimed in claim 2, wherein the step of inputting the text data to be analyzed into the target mutual learning network and obtaining emotion analysis results output by the target mutual learning network comprises the steps of:
inputting the text data to be analyzed into the target mutual learning network, obtaining the text feature representation of the text data to be analyzed, and obtaining the emotion polarity vector of the text data to be analyzed according to the text feature representation of the text data to be analyzed and a preset emotion polarity vector calculation algorithm, wherein the emotion polarity vector calculation algorithm is:

$p = \mathrm{softmax}(W_p h + b_p)$

in the formula, $p$ is the emotion polarity vector, $\mathrm{softmax}$ is the normalization function, $W_p$ is the weight update parameter, $b_p$ is the weight update bias term, and $h$ is the text feature representation of the text data to be analyzed;
and acquiring the emotion polarity corresponding to the dimension with the maximum probability in the emotion polarity vector as the emotion analysis result output by the target mutual learning network.
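The classification step in claim 9 is a standard softmax layer followed by an argmax over the polarity dimensions. A minimal sketch, where the weights `W`, bias `b`, and polarity labels are hypothetical illustrations rather than trained parameters from the patent:

```python
import math

def softmax(z):
    """Normalization function: exponentiate and normalize to sum to 1."""
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def predict_polarity(W, b, h, labels):
    """p = softmax(W h + b); return the label of the dimension with the
    maximum probability, together with the emotion polarity vector p."""
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + bi
              for row, bi in zip(W, b)]
    p = softmax(logits)
    return labels[p.index(max(p))], p

# Hypothetical parameters for a 3-way polarity head over 2-d features
W = [[1.0, -1.0], [0.0, 0.5], [-1.0, 1.0]]
b = [0.1, 0.0, -0.1]
h = [2.0, -1.0]  # text feature representation of the text to analyze
label, p = predict_polarity(W, b, h, ["positive", "neutral", "negative"])
```

The returned label corresponds to the dimension of maximum probability in `p`, matching the final acquisition step of claim 9.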
10. A cross-domain emotion analysis device based on a mutual learning network is characterized by comprising:
the training text data set acquisition module is used for acquiring a training text data set, wherein the training text data set comprises text data of a plurality of source fields and text data of a plurality of target fields, the text data comprises a plurality of sentences, and the sentences comprise a plurality of words;
a word embedding vector representation obtaining module, configured to input the training text data set into a preset word embedding model, and obtain a word embedding vector set, where the word embedding vector set includes word embedding vector representations corresponding to text data of a plurality of source fields and word embedding vector representations corresponding to text data of a plurality of target fields;
the network training module is used for inputting the word embedding vector set into a preset mutual learning network, constructing a loss function corresponding to the mutual learning network, performing optimization training and obtaining a target mutual learning network, wherein the mutual learning network comprises two groups of mutual learning migration channels;
and the text data analysis module is used for responding to an analysis instruction, acquiring text data to be analyzed, inputting the text data to be analyzed into the target mutual learning network, and acquiring an emotion analysis result output by the target mutual learning network.
CN202210954299.9A 2022-08-10 2022-08-10 Cross-domain emotion analysis method, device and equipment based on mutual learning network Pending CN115033700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210954299.9A CN115033700A (en) 2022-08-10 2022-08-10 Cross-domain emotion analysis method, device and equipment based on mutual learning network


Publications (1)

Publication Number Publication Date
CN115033700A true CN115033700A (en) 2022-09-09

Family

ID=83130758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210954299.9A Pending CN115033700A (en) 2022-08-10 2022-08-10 Cross-domain emotion analysis method, device and equipment based on mutual learning network

Country Status (1)

Country Link
CN (1) CN115033700A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065344A (en) * 2021-03-24 2021-07-02 大连理工大学 Cross-corpus emotion recognition method based on transfer learning and attention mechanism
CN114331123A (en) * 2021-12-28 2022-04-12 重庆邮电大学 Teaching evaluation emotion analysis method integrating cognitive migration


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHI YANG ET AL.: "Dual-Channel Domain Adaptation Model", 《IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY》 *
JIAN SHEN ET AL.: "Wasserstein Distance Guided Representation Learning for Domain Adaptation", 《THE THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE》 *
QIANMING XUE ET AL.: "Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning", 《THE THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115712726A (en) * 2022-11-08 2023-02-24 华南师范大学 Emotion analysis method, device and equipment based on bigram embedding
CN115712726B (en) * 2022-11-08 2023-09-12 华南师范大学 Emotion analysis method, device and equipment based on double word embedding

Similar Documents

Publication Publication Date Title
CN112819023B (en) Sample set acquisition method, device, computer equipment and storage medium
CN109559300A (en) Image processing method, electronic equipment and computer readable storage medium
CN111950596A (en) Training method for neural network and related equipment
CN114676704B (en) Sentence emotion analysis method, device and equipment and storage medium
JP2018022496A (en) Method and equipment for creating training data to be used for natural language processing device
CN115587597B (en) Sentiment analysis method and device of aspect words based on clause-level relational graph
CN111368656A (en) Video content description method and video content description device
McCormack et al. Understanding aesthetic evaluation using deep learning
CN116151263B (en) Multi-mode named entity recognition method, device, equipment and storage medium
CN115168592B (en) Statement emotion analysis method, device and equipment based on aspect categories
Park et al. Neurocartography: Scalable automatic visual summarization of concepts in deep neural networks
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
CN115879508A (en) Data processing method and related device
CN115033700A (en) Cross-domain emotion analysis method, device and equipment based on mutual learning network
Qayyum et al. Ios mobile application for food and location image prediction using convolutional neural networks
CN111445545B (en) Text transfer mapping method and device, storage medium and electronic equipment
CN115905518B (en) Emotion classification method, device, equipment and storage medium based on knowledge graph
CN115906863B (en) Emotion analysis method, device, equipment and storage medium based on contrast learning
CN114547312B (en) Emotional analysis method, device and equipment based on common sense knowledge graph
CN115659951A (en) Statement emotion analysis method, device and equipment based on label embedding
CN112132269B (en) Model processing method, device, equipment and storage medium
CN115906861A (en) Statement emotion analysis method and device based on interaction aspect information fusion
CN115204171A (en) Document-level event extraction method and system based on hypergraph neural network
CN113821610A (en) Information matching method, device, equipment and storage medium
CN113627522A (en) Image classification method, device and equipment based on relational network and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220909