CN116541579A - Aspect-level emotion analysis based on local context focus mechanism and conversational attention - Google Patents


Info

Publication number
CN116541579A
CN116541579A
Authority
CN
China
Prior art keywords
context
local context
local
layer
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310548728.7A
Other languages
Chinese (zh)
Inventor
李弼程
林煌
林正超
康智勇
王华珍
皮慧娟
王成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN202310548728.7A priority Critical patent/CN116541579A/en
Publication of CN116541579A publication Critical patent/CN116541579A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides an aspect-level emotion analysis method based on a local context focus mechanism and conversational attention, comprising: step S1, constructing an analysis model; step S2, the BERT pre-training layer models the words in the local context form sequence and the global context form sequence respectively, obtaining preliminary local context features and preliminary global context features; step S3, at the feature extraction layer, using the local context focus mechanism, further extracting local context features by combining a context feature dynamic masking technique with the conversational attention mechanism, and extracting global context features with the conversational attention mechanism; step S4, at the feature learning layer, fusing the local context features and the global context features to obtain a fusion vector, and extracting the features of the fusion vector with the conversational attention mechanism; step S5, at the output layer, obtaining the result of aspect-level emotion analysis according to the features of the fusion vector. The invention can better capture the emotions expressed toward different aspects.

Description

Aspect-level emotion analysis based on local context focus mechanism and conversational attention
Technical Field
The present invention relates to the field of view mining, and more particularly to aspect-level emotion analysis based on local context focus mechanisms and conversational attention.
Background
With the rapid development of the internet, online platforms, from news sites and blogs to forums, keep multiplying, and more and more users express their views and attitudes on events by browsing trending information online. Various products and entertainment are likewise presented to users through the internet, and after purchasing and experiencing them, users publish large numbers of comments expressing their opinions of the products and services. Such opinionated text is a very important data resource, and analyzing it is highly valuable. For example, merchants can learn users' preferences and a product's shortcomings by analyzing these data, thereby finding directions for improvement and raising sales; after an emergency occurs, analyzing the comments people publish about it helps guide the resulting trend of public opinion; and when the government issues a new policy, analyzing netizens' viewpoints shows whether the policy has its intended effect, so that adjustments can be made.
With the continued growth of social media, emotion analysis has high theoretical significance and application value in the field of natural language processing. Emotion analysis classifies the different emotional expressions in a text. Most previous emotion analysis research is coarse-grained and cannot meet finer and more accurate analysis needs, for example, analyzing from different aspects which advantages and which disadvantages a product has. Aspect-level emotion analysis differs from earlier coarse-grained studies in that it can analyze the emotion polarity of each aspect mentioned in a sentence, and it has therefore become an important research direction in the emotion analysis field. For example, in the comment "The price of this house is good, but the location is terrible", there are two aspect terms, "price" and "location", whose corresponding opinion words are "good" and "terrible" respectively, where "good" expresses positive emotion and "terrible" expresses negative emotion. In such cases, aspect-level emotion analysis can capture the emotions expressed toward different aspects more completely. Research on aspect-level emotion analysis therefore has very important significance and value.
Since pre-trained models such as BERT were proposed, they have attracted wide attention, and more and more researchers apply them to aspect-level emotion analysis, proving that BERT pre-trained models are feasible for this task. However, prior aspect-level emotion analysis research predicts the emotion polarities of different aspects in a sentence without considering the relation between emotion polarity and the local context. In addition, most studies build on a single-head or multi-head attention mechanism, but the heads in multi-head attention operate independently of one another, so the aspect-level emotion analysis methods of the prior art leave room for improvement in the language processing model.
Disclosure of Invention
The invention aims to provide an aspect-level emotion analysis method based on a local context focus mechanism and conversational attention that establishes the relation between emotion polarity and local context and links the otherwise independent heads of the multi-head attention mechanism, thereby obtaining a stronger attention design, a better language processing model, and better capture of the emotions expressed toward different aspects.
The invention is realized by the following technical scheme:
aspect-level emotion analysis based on local context focus mechanism and conversational attention, comprising the steps of:
s1, constructing an analysis model comprising a BERT pre-training layer, a feature extraction layer, a feature learning layer and an output layer;
S2, the BERT pre-training layer processes the corpus to be analyzed into a local context form sequence and a global context form sequence, models the words in each sequence respectively, and obtains preliminary local context features B^l and preliminary global context features B^g;
Step S3, at the feature extraction layer, using the local context focus mechanism, further extracting local context features by combining a context feature dynamic masking technique with the conversational attention mechanism, and extracting global context features with the conversational attention mechanism, specifically comprising:
Step S31, calculating the semantic relative distance of the local context form sequence according to the formula D_i = |i - F_a| - n/2, wherein i represents the position of a word in the local context form sequence, F_a represents the position of the aspect word in the local context form sequence, and n represents the length of the aspect word in the local context form sequence;
Step S32, helping the model capture local context features through the context feature dynamic masking technique, obtaining the local context features O^l_CDM = B^l · M, wherein M = [V_1, V_2, ..., V_n] is a masking matrix for masking non-local context features, V_i is the mask vector of each context word in the local context form sequence, V_i = E when D_i ≤ α and V_i = O otherwise, i = 1, 2, ..., n, α is the semantic relative distance threshold, E is an all-ones vector of length n, and O is an all-zeros vector of length n;
Step S33, further extracting the local context features O^l = THA(O^l_CDM) with the conversational attention mechanism, and extracting the global context features O^g = THA(B^g);
Step S4, at the feature learning layer, fusing the local context features O^l and the global context features O^g to obtain a fusion vector, and extracting the features O^lg_THA of the fusion vector with the conversational attention mechanism; S5, at the output layer, obtaining the result of aspect-level emotion analysis according to the features O^lg_THA of the fusion vector.
Further, in step S2, preprocessing the corpus to be analyzed into a local context form sequence and a global context form sequence specifically comprises: processing the corpus to be analyzed into a local context form sequence X^l = [CLS] + context + [SEP] and a global context form sequence X^g = [CLS] + context + [SEP] + aspect words + [SEP], wherein [CLS] may be regarded as a semantic representation of the entire sentence.
Further, in step S2, the BERT pre-training layer adopts a first BERT training model BERT^l to model X^l and obtain B^l = BERT^l(X^l), and a second BERT training model BERT^g to model X^g and obtain B^g = BERT^g(X^g), wherein the first BERT training model BERT^l and the second BERT training model BERT^g are independent of each other.
Further, in step S4, the features of the fusion vector are O^lg_THA = THA(O^lg_dense), wherein O^lg_dense = dense(O^lg) = W^lg · O^lg + b^lg, O^lg = [O^l ; O^g], W^lg represents a weight coefficient matrix, b^lg represents a bias vector, and dense(·) is a fully connected layer.
Further, in step S5, the output layer is a nonlinear layer; the features O^lg_THA of the fusion vector are input into the nonlinear layer and predicted with a softmax function: y = softmax(W_o · O^lg_THA + b_o), wherein y is the analysis result, W_o is a weight matrix, and b_o is a bias vector.
The invention has the following beneficial effects:
The method fully considers the importance of the local context to emotion polarity: it extracts local and global context features with mutually independent first and second BERT training models, further captures the local context features by combining the context feature dynamic mask layer of the local context focus mechanism with the conversational attention mechanism, fuses the local context features with the global information, and inputs the result into a nonlinear layer for emotion analysis, thereby realizing a stronger attention design, obtaining a better language processing model, and better capturing the emotions expressed toward different aspects.
Drawings
The invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a diagram of an analytical model of the present invention.
Fig. 3 is a dynamic mask diagram of the context feature of the present invention.
FIG. 4 is a graph showing the impact of semantic relative distance on analytical accuracy according to the present invention.
FIG. 5 is a graph showing the effect of semantic relative distance on MF1 values according to the present invention.
FIG. 6 is a graph showing the effect of the present invention on analysis accuracy.
FIG. 7 is a diagram of MF1 according to the present invention.
Detailed Description
As shown in fig. 1 and 2, the aspect-level emotion analysis based on the local context focus mechanism and the conversational attention includes the steps of:
step S1, constructing an analysis model (namely an LCFTHA model) comprising a BERT pre-training layer, a feature extraction layer, a feature learning layer and an output layer;
S2, the BERT pre-training layer processes the corpus to be analyzed into a local context form sequence and a global context form sequence, models the words in each sequence respectively, and obtains preliminary local context features B^l and preliminary global context features B^g.
Specifically, the corpus to be analyzed is processed into a local context form sequence X^l = [CLS] + context + [SEP] and a global context form sequence X^g = [CLS] + context + [SEP] + aspect words + [SEP], wherein [CLS] may be regarded as a semantic representation of the entire sentence and [SEP] is the sentence delimiter used by the BERT pre-training layer;
A first BERT training model BERT^l models X^l to obtain B^l = BERT^l(X^l), and a second BERT training model BERT^g models X^g to obtain B^g = BERT^g(X^g), wherein the first BERT training model BERT^l and the second BERT training model BERT^g are independent of each other;
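The construction of the two input sequences can be sketched as follows. This is a minimal illustration only: whitespace tokenization stands in for BERT's WordPiece tokenizer, and the example sentence is illustrative, not from the patent's data sets.

```python
def build_sequences(context_tokens, aspect_tokens):
    """Build the local- and global-context input sequences of step S2.

    X_l = [CLS] + context + [SEP]
    X_g = [CLS] + context + [SEP] + aspect + [SEP]
    """
    x_l = ["[CLS]"] + context_tokens + ["[SEP]"]
    x_g = ["[CLS]"] + context_tokens + ["[SEP]"] + aspect_tokens + ["[SEP]"]
    return x_l, x_g

context = "The price of this house is good".split()
aspect = ["price"]
x_l, x_g = build_sequences(context, aspect)
```

In a full implementation each sequence would be fed to its own BERT encoder (BERT^l and BERT^g) to produce B^l and B^g.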
step S3, at the feature extraction layer, utilizing a local context focus mechanism, further extracting local context features by combining a context feature dynamic masking technology with a conversation attention mechanism, and extracting global context features by using the conversation attention mechanism, wherein the method specifically comprises the following steps:
Step S31, calculating the semantic relative distance of the local context form sequence according to the formula D_i = |i - F_a| - n/2 (the semantic relative distance is based on the concept of Token-Aspect pairs, i.e. the positions of a word and the aspect word in the sentence; it describes the distance between the token and the aspect, and can be understood as how many words separate them), wherein i represents the position of a word in the local context form sequence, F_a represents the position of the aspect word in the local context form sequence, n represents the length of the aspect word in the local context form sequence, and D_i represents the distance between the position of the i-th word and the target aspect;
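The semantic relative distance of step S31 can be sketched as a small helper; the example positions are illustrative.

```python
def semantic_relative_distance(i, f_a, n):
    """D_i = |i - F_a| - n/2: how far token i sits from the aspect span."""
    return abs(i - f_a) - n / 2

# Aspect word at position 3 with length 1, in a 7-token local-context sequence.
d = [semantic_relative_distance(i, 3, 1) for i in range(7)]
# d == [2.5, 1.5, 0.5, -0.5, 0.5, 1.5, 2.5] -- symmetric around the aspect.
```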
Step S32, helping the model capture local context features through the context feature dynamic masking technique, obtaining the local context features O^l_CDM = B^l · M, wherein M = [V_1, V_2, ..., V_n] is a masking matrix for masking non-local context features, V_i is the mask vector of each context word in the local context form sequence, V_i = E when D_i ≤ α (V_i then marks a local context word) and V_i = O otherwise, i = 1, 2, ..., n, α is the semantic relative distance threshold, E is an all-ones vector of length n, and O is an all-zeros vector of length n;
As shown in fig. 3, besides the local context features, the context feature dynamic mask layer masks the non-local context features learned by the layer: the features of the output positions pointed to by dotted arrows are masked, the features of the output positions pointed to by solid arrows are preserved, POS denotes an output position, and the context dynamic mask sets the features of all non-local-context positions to zero vectors;
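The dynamic masking of step S32 can be sketched as follows, using plain Python lists for the feature vectors; the feature values and threshold are illustrative.

```python
def cdm_mask(features, srd, alpha):
    """Context-feature dynamic masking: zero out the feature vector of every
    token whose semantic relative distance exceeds the threshold alpha.

    features: list of n feature vectors (lists of floats)
    srd:      list of n semantic relative distances D_i
    """
    masked = []
    for vec, d in zip(features, srd):
        if d <= alpha:
            masked.append(vec[:])            # V_i = E: keep local-context features
        else:
            masked.append([0.0] * len(vec))  # V_i = O: mask non-local features
    return masked

feats = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
out = cdm_mask(feats, srd=[-0.5, 0.5, 1.5], alpha=1)
# Only the third token (D_i = 1.5 > 1) is zeroed out.
```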
Step S33, further extracting the local context features O^l = THA(O^l_CDM) with the conversational attention mechanism, and extracting the global context features O^g = THA(B^g);
By linking the individual attention heads of the multi-head attention mechanism, i.e. re-mixing the several attention heads with a parameter matrix, multiple mixed attentions are formed, each newly obtained mixed attention merging the attentions of the original individual heads, namely: O_THA = concat[{O^(1), O^(2), ..., O^(h)}] · W^WH, wherein concat[] concatenates two or more arrays, O^(1) = P^(1)V^(1), O^(2) = P^(2)V^(2), ..., O^(h) = P^(h)V^(h), P^(1) = softmax(J^(1)), P^(2) = softmax(J^(2)), ..., P^(h) = softmax(J^(h)), J^(h) represents the linear mapping between the attention heads before the softmax operation, O^(h) represents Attention(Q^(h), K^(h), V^(h)), √d_k represents the scaling factor, h represents the number of attention heads, W^WH represents a weight matrix, and λ_hh represents a trainable head-mixing parameter matrix;
The conversational attention mechanism re-balances the masked local context features and avoids the unbalanced feature distribution left after context feature dynamic masking; THA(·) in the formulas O^l = THA(O^l_CDM) and O^g = THA(B^g) is computed with O_THA = concat[{O^(1), O^(2), ..., O^(h)}] · W^WH, and the detailed calculation process is prior art;
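A minimal NumPy sketch of the head-linking idea behind the conversational (talking-heads) attention: head-mixing matrices are applied to the attention logits before the softmax and to the attention weights after it, so the otherwise independent heads share information. The function and parameter names (`lam_logits`, `lam_weights`) are illustrative, not the patent's exact implementation.

```python
import numpy as np

def talking_heads_attention(q, k, v, lam_logits, lam_weights):
    """Sketch of talking-heads attention over h heads.

    q, k, v: (h, n, d_k) per-head queries/keys/values
    lam_logits, lam_weights: (h, h) trainable head-mixing matrices
    """
    h, n, d_k = q.shape
    j = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (h, n, n) scaled logits
    j = np.einsum("hij,hg->gij", j, lam_logits)   # mix heads before softmax
    p = np.exp(j - j.max(-1, keepdims=True))
    p = p / p.sum(-1, keepdims=True)              # softmax over the keys
    p = np.einsum("hij,hg->gij", p, lam_weights)  # mix heads after softmax
    o = p @ v                                     # (h, n, d_k) per-head outputs
    return np.concatenate([o[i] for i in range(h)], axis=-1)  # concat heads

rng = np.random.default_rng(0)
h, n, d = 2, 4, 3
out = talking_heads_attention(rng.normal(size=(h, n, d)),
                              rng.normal(size=(h, n, d)),
                              rng.normal(size=(h, n, d)),
                              np.eye(h), np.eye(h))  # identity mixing = plain MHA
```

With identity mixing matrices this reduces to ordinary multi-head attention; training the mixing matrices is what links the heads.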
Step S4, at the feature learning layer, fusing the local context features O^l and the global context features O^g to obtain a fusion vector, and extracting the features O^lg_THA of the fusion vector with the conversational attention mechanism.
Specifically, the features of the fusion vector are O^lg_THA = THA(O^lg_dense), wherein O^lg_dense = dense(O^lg) = W^lg · O^lg + b^lg, O^lg = [O^l ; O^g], W^lg represents a weight coefficient matrix, b^lg represents a bias vector, d_n and d_h denote the dimensions of O^l and O^g respectively, and dense(·) is a fully connected layer;
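The fusion step O^lg = [O^l ; O^g] followed by the dense projection can be sketched as follows; shapes and values are illustrative.

```python
import numpy as np

def fuse(o_l, o_g, w_lg, b_lg):
    """Concatenate local and global features and project them:
    O_lg = [O_l ; O_g], O_dense = W_lg . O_lg + b_lg (names follow the patent)."""
    o_lg = np.concatenate([o_l, o_g], axis=-1)  # (n, d_l + d_g)
    return o_lg @ w_lg + b_lg                   # (n, d_out) dense-layer output

n, d = 4, 3
o_l = np.ones((n, d))
o_g = np.zeros((n, d))
w = np.ones((2 * d, d))   # illustrative weights
b = np.zeros(d)
fused = fuse(o_l, o_g, w, b)
# Each output element sums one concatenated row: 1+1+1+0+0+0 = 3.
```

In the full model the result would then pass through the conversational attention encoder to give O^lg_THA.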
s5, at an output layer, according to the characteristics of the fusion vectorObtaining a result of aspect-level emotion analysis;
the method comprises the following steps: the output layer is a nonlinear layer, which fuses the characteristics of vectorsThe nonlinear layer is input and predicted by using a softmax function: />Wherein (1)>For analysis result, W o As a weight matrix, b o Is a bias vector.
The present invention experimentally verifies the validity of the model on the Restaurant14 and Laptop14 data sets as well as the Twitter data set. In the comment information of these three public data sets, the aspect words in each sentence correspond to three different emotion polarities; the data details are shown in table 1.
Table 1 data set distribution
Partial hyperparameter settings of the LCFTHA model: BERT layer learning rate 0.00002, 20 iterations, batch size 32, maximum sentence length 85, 12 conversational attention heads, L2 regularization weight 0.01; the BERT model is bert-base-uncased (an open-source BERT pre-training model).
Seven baseline models are selected for comparison experiments to verify the effectiveness of the LCFTHA model. Two evaluation indexes commonly used in aspect-level emotion analysis verify the effect of the model. The first is Accuracy, the proportion of correctly predicted samples (whether positive or negative) among all samples, namely: Accuracy = (TP + TN) / (TP + TN + FP + FN), wherein TP denotes correctly predicted positive samples, TN correctly predicted negative samples, FP incorrectly predicted positive samples, and FN incorrectly predicted negative samples.
The second is the macro-averaged F1 value: MF1 = (1/C) Σ_{c=1..C} 2 P_c R_c / (P_c + R_c), wherein P denotes the precision within an emotion category, R the recall within that category, and C the total number of emotion categories.
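The two evaluation indexes can be sketched in plain Python; the label values are illustrative.

```python
def accuracy(y_true, y_pred):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN) = fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, classes):
    """MF1 = (1/C) * sum_c 2 * P_c * R_c / (P_c + R_c)."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["pos", "neg", "neu", "pos"]
y_pred = ["pos", "neg", "pos", "pos"]
acc = accuracy(y_true, y_pred)                       # 3 of 4 correct
mf1 = macro_f1(y_true, y_pred, ["pos", "neg", "neu"])
```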
Table 2 shows the results of comparing the seven baseline models in experiments on the three public data sets Restaurant14, Laptop14 and Twitter.
Table 2 model accuracy and macro average (%)
The LCFTHA model outperforms the baseline models in both accuracy and MF1 value on the three public data sets Restaurant14, Laptop14 and Twitter, and its accuracy is greatly improved compared with the other models.
In the LCFTHA model, the semantic relative distance is an important factor affecting the extraction of local context features, so the invention analyzes its influence on the three data sets through experiments. As can be seen from fig. 4 and 5, the analysis accuracy and MF1 value (MF1 denotes the average performance of the model over the different emotion categories) of the Restaurant14 data set are optimal when the semantic relative distance threshold is 2. For the Laptop14 data set, the analysis accuracy and MF1 value are optimal when the threshold is 7. For the Twitter data set, the optimal threshold is the same as for the Laptop14 data set, namely 7.
Finally, the invention performed ablative experiments on three published data sets to verify the importance of each module in the LCFTHA model, where w/o is an abbreviation for without and FLL is an abbreviation for feature learning layer, and the results are shown in table 3.
Table 3 ablation experiment results (%)
From an inspection of table 3, it can be seen that combining the global and local modules provides a better effect than a single local or global module. On this basis, adding the FLL markedly improves model performance, and the experimental results show that the FLL layer is important to the model design. The importance of the individual model components is shown more intuitively in the bar graphs of fig. 6 and 7.
In summary, the present invention proposes LCFTHA, an aspect-level emotion analysis model based on a local context focus mechanism and a conversational attention mechanism. First, two BERT pre-training models extract local and global context features; then the preliminary local features extracted by BERT are passed through the CDM layer of the local context focus mechanism combined with the conversational attention mechanism to further capture local context features, while a conversational attention encoder is deployed to learn the global context features; finally, the feature learning layer fuses the local and global context features and inputs them into the nonlinear layer for emotion analysis. Experiments on the three public data sets Restaurant14, Laptop14 and Twitter show that the LCFTHA model performs better than the baseline models on the aspect-level emotion analysis task.
The foregoing description covers only preferred embodiments of the present invention and is not to be construed as limiting its scope; all equivalent modifications within the scope of the claims and the description are covered by the invention.

Claims (5)

1. Aspect-level emotion analysis based on local context focus mechanism and conversational attention, characterized by: the method comprises the following steps:
s1, constructing an analysis model comprising a BERT pre-training layer, a feature extraction layer, a feature learning layer and an output layer;
S2, the BERT pre-training layer processes the corpus to be analyzed into a local context form sequence and a global context form sequence, models the words in each sequence respectively, and obtains preliminary local context features B^l and preliminary global context features B^g;
Step S3, at the feature extraction layer, utilizing a local context focus mechanism, further extracting local context features by combining a context feature dynamic masking technology with a conversation attention mechanism, and extracting global context features by using the conversation attention mechanism, wherein the method specifically comprises the following steps:
Step S31, calculating the semantic relative distance of the local context form sequence according to the formula D_i = |i - F_a| - n/2, wherein i represents the position of a word in the local context form sequence, F_a represents the position of the aspect word in the local context form sequence, and n represents the length of the aspect word in the local context form sequence;
Step S32, helping the model capture local context features through the context feature dynamic masking technique, obtaining the local context features O^l_CDM = B^l · M, wherein M = [V_1, V_2, ..., V_n] is a masking matrix for masking non-local context features, V_i is the mask vector of each context word in the local context form sequence, V_i = E when D_i ≤ α and V_i = O otherwise, i = 1, 2, ..., n, α is the semantic relative distance threshold, E is an all-ones vector of length n, and O is an all-zeros vector of length n;
Step S33, further extracting the local context features O^l = THA(O^l_CDM) with the conversational attention mechanism, and extracting the global context features O^g = THA(B^g);
Step S4, at the feature learning layer, fusing the local context features O^l and the global context features O^g to obtain a fusion vector, and extracting the features O^lg_THA of the fusion vector with the conversational attention mechanism;
S5, at the output layer, obtaining the result of aspect-level emotion analysis according to the features O^lg_THA of the fusion vector.
2. The local context focus mechanism and conversational attention based aspect-level emotion analysis of claim 1, wherein: in step S2, preprocessing the corpus to be analyzed into a local context form sequence and a global context form sequence specifically comprises: processing the corpus to be analyzed into a local context form sequence X^l = [CLS] + context + [SEP] and a global context form sequence X^g = [CLS] + context + [SEP] + aspect words + [SEP], wherein [CLS] may be regarded as a semantic representation of the entire sentence.
3. The local context focus mechanism and conversational attention based aspect-level emotion analysis of claim 2, wherein: in step S2, the BERT pre-training layer adopts a first BERT training model BERT^l to model X^l and obtain B^l = BERT^l(X^l), and a second BERT training model BERT^g to model X^g and obtain B^g = BERT^g(X^g), wherein the first BERT training model BERT^l and the second BERT training model BERT^g are independent of each other.
4. The local context focus mechanism and conversational attention based aspect-level emotion analysis according to claim 1, 2 or 3, wherein: in step S4, the features of the fusion vector are O^lg_THA = THA(O^lg_dense), wherein O^lg_dense = dense(O^lg) = W^lg · O^lg + b^lg, O^lg = [O^l ; O^g], W^lg represents a weight coefficient matrix, b^lg represents a bias vector, and dense(·) is a fully connected layer.
5. The local context focus mechanism and conversational attention based aspect-level emotion analysis according to claim 1, 2 or 3, wherein: in step S5, the output layer is a nonlinear layer; the features O^lg_THA of the fusion vector are input into the nonlinear layer and predicted with a softmax function: y = softmax(W_o · O^lg_THA + b_o), wherein y is the analysis result, W_o is a weight matrix, and b_o is a bias vector.
CN202310548728.7A 2023-05-16 2023-05-16 Aspect-level emotion analysis based on local context focus mechanism and conversational attention Pending CN116541579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310548728.7A CN116541579A (en) 2023-05-16 2023-05-16 Aspect-level emotion analysis based on local context focus mechanism and conversational attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310548728.7A CN116541579A (en) 2023-05-16 2023-05-16 Aspect-level emotion analysis based on local context focus mechanism and conversational attention

Publications (1)

Publication Number Publication Date
CN116541579A true CN116541579A (en) 2023-08-04

Family

ID=87450334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310548728.7A Pending CN116541579A (en) 2023-05-16 2023-05-16 Aspect-level emotion analysis based on local context focus mechanism and conversational attention

Country Status (1)

Country Link
CN (1) CN116541579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117891900A (en) * 2024-03-18 2024-04-16 腾讯科技(深圳)有限公司 Text processing method and text processing model training method based on artificial intelligence


Similar Documents

Publication Publication Date Title
Yang et al. Image-text multimodal emotion classification via multi-view attentional network
Bandi et al. The power of generative ai: A review of requirements, models, input–output formats, evaluation metrics, and challenges
Wang et al. Putting humans in the natural language processing loop: A survey
Ishaq et al. Aspect-based sentiment analysis using a hybridized approach based on CNN and GA
Chen et al. A thorough examination of the cnn/daily mail reading comprehension task
Wen et al. Dynamic interactive multiview memory network for emotion recognition in conversation
Sang et al. Context-dependent propagating-based video recommendation in multimodal heterogeneous information networks
CN111368074A (en) Link prediction method based on network structure and text information
Ye et al. Sentiment-aware multimodal pre-training for multimodal sentiment analysis
Cheng et al. Aspect-based sentiment analysis with component focusing multi-head co-attention networks
Zhou et al. Interpretable duplicate question detection models based on attention mechanism
Kawintiranon et al. PoliBERTweet: a pre-trained language model for analyzing political content on Twitter
Lian et al. A survey of deep learning-based multimodal emotion recognition: Speech, text, and face
Singh et al. Towards improving e-commerce customer review analysis for sentiment detection
Sun et al. Transformer based multi-grained attention network for aspect-based sentiment analysis
Lin et al. PS-mixer: A polar-vector and strength-vector mixer model for multimodal sentiment analysis
Sivakumar et al. Context-aware sentiment analysis with attention-enhanced features from bidirectional transformers
Shao et al. Automated comparative analysis of visual and textual representations of logographic writing systems in large language models
CN116541579A (en) Aspect-level emotion analysis based on local context focus mechanism and conversational attention
Gandhi et al. Multimodal sentiment analysis: review, application domains and future directions
CN115630145A (en) Multi-granularity emotion-based conversation recommendation method and system
Chou et al. Rating prediction based on merge-CNN and concise attention review mining
Lucy et al. Words as gatekeepers: Measuring discipline-specific terms and meanings in scholarly publications
CN114417823A (en) Aspect level emotion analysis method and device based on syntax and graph convolution network
CN114429135A (en) CNN-BilSTM aspect emotion analysis method based on confrontation training and multi-attention

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination