CN113221537A - Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution - Google Patents

Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution

Info

Publication number
CN113221537A
Authority
CN
China
Prior art keywords
neural network
layer
information
proximity
weighted convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110389025.5A
Other languages
Chinese (zh)
Inventor
陈思溢
杜鑫浩
周佳聆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangtan University
Original Assignee
Xiangtan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangtan University filed Critical Xiangtan University
Priority to CN202110389025.5A priority Critical patent/CN113221537A/en
Publication of CN113221537A publication Critical patent/CN113221537A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/205: Parsing
    • G06F40/211: Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution, which improves the accuracy of aspect-level emotion analysis. The method comprises the following steps: S1, representing the input sentence with pre-trained word vectors; S2, feeding the word vectors obtained in step S1 into a truncated recurrent neural network layer to extract local features and long-distance dependency information from the sequence; S3, feeding the semantic information obtained in step S2 into a proximity weighted convolution layer to capture n-gram information; S4, feeding the n-gram information produced by the proximity weighted convolution into a pooling layer for a max pooling operation that extracts the most important features; and S5, classifying the output of the max pooling operation through a softmax classification layer to obtain the final result.

Description

Aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution
Technical Field
The invention relates to the technical field of emotion analysis in natural language processing, and in particular to an aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution.
Background
Emotion analysis (sentiment analysis) is a central topic in natural language processing (NLP): the computational study of opinions, emotions, and subjectivity in text. Emotion analysis has three levels of granularity, namely document level, sentence level, and aspect level. When a document or a sentence involves several emotional expressions, analysis at the first two levels cannot accurately extract the deeper emotions inside the text. Aspect-level emotion analysis, by contrast, is the task of performing fine-grained emotion analysis on a target word in a given context. For example, in the sentence "price is reasonable enough and service is bad", both "price" and "service" are aspects: the attitude toward "price" is positive while the attitude toward "service" is negative.
Unlike the other granularities of emotion analysis, aspect-level emotion analysis must determine the emotion polarity of each aspect in a sentence, which depends not only on the context information but also on the emotion information attached to each aspect. Furthermore, different aspects in the same sentence may have completely opposite emotion polarities, so analyzing the polarity of each aspect individually helps people understand users' emotional expressions more effectively, and the task has therefore attracted increasing attention in the field. Early work on aspect-level emotion analysis mainly extracted hand-defined features from a statistical perspective and applied machine learning methods such as the support vector machine and the conditional random field. Feature quality largely determines the performance of these models, and feature engineering is labor intensive.
In recent years, researchers have compared deep learning with traditional machine learning on aspect-level emotion analysis tasks, and the former shows clear advantages. Early models adopted the recurrent neural network (RNN) or its variants, the long short-term memory network (LSTM) and the gated recurrent unit (GRU), and most recent high-performing aspect-level emotion analysis models are likewise built on recurrent neural networks and their variants. An RNN can model an entire sequence and capture long-term dependencies, but it does not extract local features well. Some researchers have therefore tried to combine the advantages of CNNs and RNNs. Zhou proposed a Chinese product review analysis method combining CNN and BiLSTM models. Xue reported a more accurate and efficient model that combines convolutional neural networks with a gating mechanism. Dong used an adaptive recursive neural network for target-dependent emotion classification on Twitter. Vo applied sentiment lexicons, distributed word representations, and neural pooling to improve emotion analysis. Ma constructed a neural framework for targeted aspect-based emotion analysis that can incorporate important common-sense knowledge. However, the number of convolution filter parameters grows with the window size; Wang proposed a truncated gated recurrent unit (DGRU) based method that captures long-distance dependency information, extracts key phrase information well, and alleviates the overfitting caused by the growth in parameters. These neural models outperform traditional machine learning, but they capture context information only implicitly, and this lack of explicit modeling can miss important contextual clues for an aspect.
More recently, attention mechanisms and memory networks have matured, and more and more such methods are used in natural language processing with good results, for example in machine translation, where performance improves over earlier approaches. In this setting, the interaction between the target and its context can shape the generated representations. For example, Wang applied an attention-based network to aspect-level emotion classification. Long proposed a multi-head attention mechanism based on BiLSTM and integrated it into a hybrid model for text emotion analysis. Lin established a new aspect-level emotion classification framework, a deep mask memory network based on semantic dependency and context moment. Jiang designed an aspect-based LSTM-CNN attention model for the same task. Ma developed an interactive attention network (IAN) model built on recurrent networks and the attention mechanism. In these studies, however, the syntactic relationships between an aspect and its context words are often ignored, which can weaken aspect-based context representations. Furthermore, the emotion polarity of an aspect typically depends on a key phrase. Zhang proposed a proximity weighted convolutional network to provide a syntax-aware, aspect-specific representation of the context. However, this network only considers long-distance dependencies in text sequences, so its ability to capture local features is limited.
In a compound sentence, each aspect may be related only to its neighboring context, so the scope of influence of each aspect needs to be estimated before identifying its emotion polarity. Better language representation models are therefore needed to generate more accurate semantic representations. Word2Vec and GloVe have been widely used to convert words into real-valued vectors, but both share a problem: a word may have different meanings in different contexts, yet its vector representation stays the same regardless of context. ELMo improves on them, but it is not perfect because it applies LSTMs in its language model, and the LSTM has two major problems. The first is that it is unidirectional, processing the sequence through ordered inference; even the bidirectional BiLSTM is only a shallow combination of two unidirectional models, so neither direction can condition on the other. The second is that it is a sequential model: a step cannot begin until the previous step is complete, which results in poor parallelism.
All of the above problems affect the accuracy of aspect-level emotion analysis.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution that improves the accuracy of aspect-level emotion analysis. The method comprises the following steps:
S1, representing the input sentence with pre-trained word vectors;
S2, feeding the word vectors obtained in step S1 into a truncated recurrent neural network layer to extract local features and long-distance dependency information from the sequence;
S3, feeding the semantic information obtained in step S2 into a proximity weighted convolution layer to capture n-gram information;
S4, feeding the n-gram information produced by the proximity weighted convolution into a pooling layer for a max pooling operation that extracts the most important features;
S5, classifying the output of the max pooling operation through a softmax classification layer to obtain the final result.
The pre-trained word vectors of step S1 come from the BERT pre-training model proposed by Google, which produces context-dependent bidirectional feature representations and captures word-level distinctions such as ambiguity. These context-sensitive word embeddings also carry other forms of information, which helps produce more accurate feature representations and improves model performance.
Further, in step S1, let the input data be denoted x and let H be the embedding generated after x is processed by BERT:
H = BERT(x)
Further, the truncated recurrent neural network layer in step S2 extracts local features and long-distance dependency information from the sequence; its output is:
h_t = RNN(x_t, x_{t-1}, x_{t-2}, ..., x_{t-k+1})
where x_t is the input vector at the current time step and k is a window-size hyperparameter to be set. The RNN may be a naive RNN, an LSTM, a GRU, or any other type of recurrent unit; the GRU is selected here.
Further, the proximity weighted convolution operation in step S3 applies proximity weights, i.e., the syntactic proximity between context words and aspect words, to calculate each word's importance in the sentence, and then feeds the result into a convolutional neural network to obtain n-gram information.
Further, the proximity between context words and aspect words is non-linear; treating it as linear can lead to erroneous weights and information loss. The curve of a Gaussian distribution is bell-shaped: values grow toward the center and shrink away from it. This shape effectively suppresses interference noise in the information and matches the non-linear character of position information, so the Gaussian function is an ideal weight distribution pattern. The formula is:
p_i = exp(-(d_i)^2 / (2σ^2))
where d_i is the syntactic distance between the i-th context word and the aspect word, and σ is the standard deviation controlling the spread of the weights.
further, the proximity weighted convolution is a one-dimensional convolution with a kernel length of 1, and its proximity weight is pre-assigned. R for representing the weight of the ith word in sentence representationiCan be retrieved as:
ri=pihi
The convolution is performed as follows:
q_i = ReLU(W_c r_i + b_c)
Q = [q_1, q_2, ..., q_n]
where q_i is the output of the convolutional layer at position i, and W_c and b_c are the convolution weight matrix and bias.
Further, the max pooling layer described in step S4 helps extract key information and position-invariant features, expressed as:
z_j = max(q_{1,j}, q_{2,j}, ..., q_{n,j})
where q_{i,j} denotes the j-th component of q_i.
Further, in step S5 the softmax classification layer yields the conditional probability distribution of the emotion polarity y:
y = softmax(W_f z + b_f)
where W_f and b_f are the weight matrix and bias of the fully connected layer, and z is the pooled feature vector.
The invention has the following beneficial effects:
the invention provides a novel aspect level emotion classification model which is mainly extracted by a truncated cyclic neural network and is further enhanced by adjacent weighted convolution. The overfitting problem caused by parameter increase is relieved to a certain extent, and the aspect-level emotion classification result is effectively improved.
And secondly, introducing a Gaussian function to replace the position proximity, thereby better evaluating the proximity weight. The proximity of the context words to the aspects is better described, further improving the performance of the model.
Third, representing context using pre-trained BERT embedding, which can obtain context-dependent bi-directional feature representations and capture significant word differences, such as ambiguity. In addition, these context-sensitive word-embedding also retrieves other forms of information, which may help to produce more accurate feature representations and improve model performance.
Drawings
FIG. 1 is a model diagram of the aspect-level emotion analysis algorithm of the present invention, based on a truncated recurrent neural network and proximity weighted convolution.
Detailed Description
While the following description details certain exemplary embodiments that embody features and advantages of the invention, it will be understood that various changes may be made to these embodiments without departing from the scope of the invention, and that the description and drawings are to be regarded as illustrative in nature rather than restrictive.
An aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution comprises the following steps:
S1, representing the input sentence with pre-trained word vectors;
S2, feeding the word vectors obtained in step S1 into a truncated recurrent neural network layer to extract local features and long-distance dependency information from the sequence;
S3, feeding the semantic information obtained in step S2 into a proximity weighted convolution layer to capture n-gram information;
S4, feeding the n-gram information produced by the proximity weighted convolution into a pooling layer for a max pooling operation that extracts the most important features;
S5, classifying the output of the max pooling operation through a softmax classification layer to obtain the final result.
The pre-trained word vectors of step S1 come from the BERT pre-training model proposed by Google, which produces context-dependent bidirectional feature representations and captures word-level distinctions such as ambiguity. These context-sensitive word embeddings also carry other forms of information, which helps produce more accurate feature representations and improves model performance.
Further, in step S1, let the input data be denoted x and let H be the embedding generated after x is processed by BERT:
H = BERT(x)
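As an illustration of step S1, the following minimal sketch shows how such an embedding H could be obtained. The use of PyTorch, the Hugging Face transformers library, and the bert-base-uncased checkpoint are assumptions made for illustration; the patent only specifies that a pre-trained BERT model is used.

```python
# Sketch of step S1 (assumed tooling: PyTorch + Hugging Face transformers).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

sentence = "price is reasonable enough and service is bad"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # H has shape (1, seq_len, 768): one context-dependent vector per token.
    H = bert(**inputs).last_hidden_state
```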
Further, the truncated recurrent neural network layer in step S2 extracts local features and long-distance dependency information from the sequence; its output is:
h_t = RNN(x_t, x_{t-1}, x_{t-2}, ..., x_{t-k+1})
where x_t is the input vector at the current time step and k is a window-size hyperparameter to be set. The RNN may be a naive RNN, an LSTM, a GRU, or any other type of recurrent unit; this patent selects the GRU.
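The patent does not give an implementation of the truncated recurrent layer. The PyTorch sketch below is one plausible reading of the formula above: at every position t, a GRU is rerun over only the k most recent inputs, so the recurrence is cut off outside a local window. The class name, the zero left-padding, and the batch-first layout are assumptions.

```python
import torch
import torch.nn as nn

class TruncatedGRU(nn.Module):
    """Sketch: h_t = GRU(x_t, ..., x_{t-k+1}), i.e. the hidden state at each
    position is computed from a window of the k most recent inputs only."""

    def __init__(self, input_dim: int, hidden_dim: int, k: int):
        super().__init__()
        self.k = k
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim)
        batch, seq_len, dim = x.shape
        # Zero left-padding so every position sees a full window of k inputs.
        pad = x.new_zeros(batch, self.k - 1, dim)
        padded = torch.cat([pad, x], dim=1)
        # One window of length k per position: (batch, seq_len, k, input_dim).
        windows = padded.unfold(1, self.k, 1).permute(0, 1, 3, 2)
        flat = windows.reshape(batch * seq_len, self.k, dim)
        # Run the GRU on each window independently; keep the final hidden state.
        _, h_last = self.gru(flat)            # (1, batch*seq_len, hidden_dim)
        return h_last.squeeze(0).reshape(batch, seq_len, -1)
```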
Further, the proximity weighted convolution operation in step S3 applies proximity weights, i.e., the syntactic proximity between context words and aspect words, to calculate each word's importance in the sentence, and then feeds the result into a convolutional neural network to obtain n-gram information.
Further, the proximity between context words and aspect words is non-linear; treating it as linear can lead to erroneous weights and information loss. The curve of a Gaussian distribution is bell-shaped: values grow toward the center and shrink away from it. This shape effectively suppresses interference noise in the information and matches the non-linear character of position information, so the Gaussian function is an ideal weight distribution pattern. The formula is:
p_i = exp(-(d_i)^2 / (2σ^2))
where d_i is the syntactic distance between the i-th context word and the aspect word, and σ is the standard deviation controlling the spread of the weights.
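A small sketch of the weighting as reconstructed above; the function name, the default σ, and the assumption that the distances d_i come from a dependency parse are illustrative, not specified by the patent.

```python
import torch

def gaussian_proximity_weights(distances: torch.Tensor, sigma: float = 3.0) -> torch.Tensor:
    """p_i = exp(-d_i^2 / (2*sigma^2)): weights peak at the aspect (d_i = 0)
    and decay smoothly on both sides (a sketch under assumed notation)."""
    return torch.exp(-distances.float() ** 2 / (2 * sigma ** 2))

# Example: syntactic distances of six context words to the aspect word.
d = torch.tensor([3, 2, 1, 0, 1, 2])
p = gaussian_proximity_weights(d)  # bell-shaped weights, maximum at d = 0
```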
further, the proximity weighted convolution is a one-dimensional convolution with a kernel length of 1, and its proximity weight is pre-assigned. R for representing the weight of the ith word in sentence representationiCan be retrieved as:
ri=pihi
The convolution is performed as follows:
q_i = ReLU(W_c r_i + b_c)
Q = [q_1, q_2, ..., q_n]
where q_i is the output of the convolutional layer at position i, and W_c and b_c are the convolution weight matrix and bias.
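Under the reconstruction above, the proximity weighted convolution can be sketched in PyTorch as follows; the class name, the dimensions, and the ReLU activation are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProximityWeightedConv(nn.Module):
    """Step S3 sketch: scale each hidden vector by its proximity weight
    (r_i = p_i * h_i), then apply a 1-D convolution with kernel length 1."""

    def __init__(self, hidden_dim: int, out_dim: int):
        super().__init__()
        self.conv = nn.Conv1d(hidden_dim, out_dim, kernel_size=1)

    def forward(self, h: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim); p: (batch, seq_len)
        r = p.unsqueeze(-1) * h                   # r_i = p_i * h_i
        q = F.relu(self.conv(r.transpose(1, 2)))  # (batch, out_dim, seq_len)
        return q
```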
Further, the max pooling layer described in step S4 helps extract key information and position-invariant features, expressed as:
z_j = max(q_{1,j}, q_{2,j}, ..., q_{n,j})
where q_{i,j} denotes the j-th component of q_i.
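With the shapes assumed in the sketch above, the max pooling of step S4 reduces to a single reduction over positions:

```python
# q: (batch, out_dim, seq_len) from the proximity weighted convolution sketch.
# For each feature channel, keep its strongest response over all positions,
# yielding a fixed-size, position-invariant feature vector z.
z = q.max(dim=2).values  # shape: (batch, out_dim)
```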
Further, in step S5 the softmax classification layer yields the conditional probability distribution of the emotion polarity y:
y = softmax(W_f z + b_f)
where W_f and b_f are the weight matrix and bias of the fully connected layer, and z is the pooled feature vector.
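For completeness, the sketch below assembles steps S2 through S5 on top of the BERT embeddings from step S1, reusing the TruncatedGRU and ProximityWeightedConv classes sketched above. The layer sizes, the window size k, and the three-way polarity output (positive, neutral, negative) are illustrative assumptions, not values specified by the patent.

```python
import torch
import torch.nn as nn

class AspectSentimentModel(nn.Module):
    """End-to-end sketch of steps S2-S5 under the assumptions stated above."""

    def __init__(self, bert_dim: int = 768, gru_dim: int = 256,
                 conv_dim: int = 128, num_classes: int = 3, k: int = 4):
        super().__init__()
        self.trunc_gru = TruncatedGRU(bert_dim, gru_dim, k)
        self.pw_conv = ProximityWeightedConv(gru_dim, conv_dim)
        self.fc = nn.Linear(conv_dim, num_classes)

    def forward(self, H: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # H: BERT embeddings (batch, seq_len, bert_dim); p: proximity weights.
        h = self.trunc_gru(H)                 # S2: truncated recurrent layer
        q = self.pw_conv(h, p)                # S3: proximity weighted convolution
        z = q.max(dim=2).values               # S4: max pooling over positions
        logits = self.fc(z)                   # S5: fully connected layer
        return torch.softmax(logits, dim=-1)  # distribution over polarity y
```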
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions, and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. An aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution, characterized in that the method comprises the following steps:
S1, representing the input sentence with pre-trained word vectors;
S2, feeding the word vectors obtained in step S1 into a truncated recurrent neural network layer to extract local features and long-distance dependency information from the sequence;
S3, feeding the semantic information obtained in step S2 into a proximity weighted convolution layer to capture n-gram information;
S4, feeding the n-gram information produced by the proximity weighted convolution into a pooling layer for a max pooling operation that extracts the most important features;
S5, classifying the output of the max pooling operation through a softmax classification layer to obtain the final result.
2. The aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution of claim 1, wherein the pre-trained word vectors of step S1 come from the BERT pre-training model proposed by Google, which produces context-dependent bidirectional feature representations and captures word-level distinctions such as ambiguity; these context-sensitive word embeddings also carry other forms of information, which helps produce more accurate feature representations and improves model performance;
the input data is denoted by x, and H is the embedding generated after x is processed by BERT:
H = BERT(x)
3. The aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution of claim 1, wherein the truncated recurrent neural network layer in step S2 extracts local features and long-distance dependency information from the sequence; its output is:
h_t = RNN(x_t, x_{t-1}, x_{t-2}, ..., x_{t-k+1})
where x_t is the input vector at the current time step and k is a window-size hyperparameter to be set; the RNN may be a naive RNN, an LSTM, a GRU, or any other type of recurrent unit, and the GRU is selected.
4. The aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution of claim 1, wherein the proximity weighted convolution operation in step S3 applies proximity weights, i.e., the syntactic proximity between context words and aspect words, to calculate each word's importance in the sentence, and then feeds the result into a convolutional neural network to obtain n-gram information.
The proximity between context words and aspect words is non-linear; treating it as linear leads to erroneous weights and information loss. The curve of a Gaussian distribution is bell-shaped: values grow toward the center and shrink away from it. This shape effectively suppresses interference noise in the information and matches the non-linear character of position information, so the Gaussian function is an ideal weight distribution pattern. The formula is:
p_i = exp(-(d_i)^2 / (2σ^2))
where d_i is the syntactic distance between the i-th context word and the aspect word, and σ is the standard deviation controlling the spread of the weights.
The proximity weighted convolution is a one-dimensional convolution with a kernel length of 1, and its proximity weights are assigned in advance. The weighted representation r_i of the i-th word in the sentence is obtained as:
r_i = p_i h_i
The convolution is performed as follows:
q_i = ReLU(W_c r_i + b_c)
Q = [q_1, q_2, ..., q_n]
where q_i is the output of the convolutional layer at position i, and W_c and b_c are the convolution weight matrix and bias.
5. The aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution of claim 1, wherein the max pooling layer of step S4 helps extract key information and position-invariant features, expressed as:
z_j = max(q_{1,j}, q_{2,j}, ..., q_{n,j})
where q_{i,j} denotes the j-th component of q_i.
6. The aspect-level emotion analysis method based on a truncated recurrent neural network and proximity weighted convolution of claim 1, wherein in step S5 the softmax classification layer yields the conditional probability distribution of the emotion polarity y:
y = softmax(W_f z + b_f)
where W_f and b_f are the weight matrix and bias of the fully connected layer, and z is the pooled feature vector.
CN202110389025.5A 2021-04-12 2021-04-12 Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution Withdrawn CN113221537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110389025.5A CN113221537A (en) 2021-04-12 2021-04-12 Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110389025.5A CN113221537A (en) 2021-04-12 2021-04-12 Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution

Publications (1)

Publication Number Publication Date
CN113221537A true CN113221537A (en) 2021-08-06

Family

ID=77086983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110389025.5A Withdrawn CN113221537A (en) 2021-04-12 2021-04-12 Aspect-level emotion analysis method based on truncated cyclic neural network and proximity weighted convolution

Country Status (1)

Country Link
CN (1) CN113221537A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200105589A (en) * 2019-02-28 2020-09-08 전남대학교산학협력단 Voice emotion recognition method and system
WO2020244066A1 (en) * 2019-06-04 2020-12-10 平安科技(深圳)有限公司 Text classification method, apparatus, device, and storage medium
CN112131886A (en) * 2020-08-05 2020-12-25 浙江工业大学 Method for analyzing aspect level emotion of text
CN112560440A (en) * 2020-12-03 2021-03-26 湘潭大学 Deep learning-based syntax dependence method for aspect-level emotion analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiang Jinyong; Liu Xiaolong; Ding Mingyang; Li Huan; Cao Wenting: "Sentence-level text sentiment classification based on a convolutional recursive deep learning model", Journal of Northeast Normal University (Natural Science Edition), no. 02
Yao Ni; Gao Zhengyuan; Lou Kun; Zhu Fubao: "Research on sentiment classification of online review text based on BERT and BiGRU", Journal of Light Industry, no. 05

Similar Documents

Publication Publication Date Title
CN108875807B (en) Image description method based on multiple attention and multiple scales
Gu et al. Stack-captioning: Coarse-to-fine learning for image captioning
CN109284506B (en) User comment emotion analysis system and method based on attention convolution neural network
CN110609891B (en) Visual dialog generation method based on context awareness graph neural network
Sharma et al. Era of deep neural networks: A review
CN109934261B (en) Knowledge-driven parameter propagation model and few-sample learning method thereof
CN110502753A (en) A kind of deep learning sentiment analysis model and its analysis method based on semantically enhancement
CN112232087B (en) Specific aspect emotion analysis method of multi-granularity attention model based on Transformer
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN111522908A (en) Multi-label text classification method based on BiGRU and attention mechanism
CN113987179A (en) Knowledge enhancement and backtracking loss-based conversational emotion recognition network model, construction method, electronic device and storage medium
WO2022218139A1 (en) Personalized search method and search system combined with attention mechanism
Mai et al. Analyzing unaligned multimodal sequence via graph convolution and graph pooling fusion
CN113255366B (en) Aspect-level text emotion analysis method based on heterogeneous graph neural network
Liu et al. A multi-label text classification model based on ELMo and attention
Shen et al. Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description.
Sadr et al. Convolutional neural network equipped with attention mechanism and transfer learning for enhancing performance of sentiment analysis
CN114417851A (en) Emotion analysis method based on keyword weighted information
CN112560440B (en) Syntax dependency method for aspect-level emotion analysis based on deep learning
CN109308316A (en) A kind of adaptive dialog generation system based on Subject Clustering
Xu et al. Convolutional neural network using a threshold predictor for multi-label speech act classification
CN110889505A (en) Cross-media comprehensive reasoning method and system for matching image-text sequences
Liu et al. Hybrid neural network text classification combining TCN and GRU
JPH0934863A (en) Information integral processing method by neural network
Liu et al. Interpretable charge prediction for legal cases based on interdependent legal information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210806)