CN111259141A - Social media corpus emotion analysis method based on multi-model fusion - Google Patents
Social media corpus emotion analysis method based on multi-model fusion
- Publication number
- CN111259141A (application CN202010030785.2A)
- Authority
- CN
- China
- Prior art keywords
- text
- data
- image
- emotion
- corpus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses a social media corpus emotion analysis method based on multi-model fusion. Social media data are collected with the pyspider crawler framework, and the crawled data set is divided into three types: text only, image only, and combined text and image. The corpus is processed with a cross-media approach. For the text in the corpus, an emotion dictionary is constructed with the SO-PMI (semantic orientation pointwise mutual information) algorithm, and positive, neutral and negative orientation is analyzed; because plain PMI handles Chinese words poorly, the PMI between words is replaced by a similarity distance and a new formula is constructed. For image or video corpora, the meaning of the image is obtained and analyzed with a visual-text joint modeling method. The plain-text analysis result and the visual analysis result are then fused by weighting to obtain the final emotion analysis result.
Description
Technical Field
The invention belongs to the field of emotion analysis, and relates to a social media corpus emotion analysis method based on multi-model fusion.
Background
In recent years, a large number of social platforms and applications, such as Weibo, WeChat and QQ, have emerged and greatly enriched people's lives. More and more people share information, opinions and feelings on social platforms, so each platform gradually accumulates a large amount of corpus information: images, text, video and so on. Analyzing the emotion hidden in such information is useful for online marketing, crisis management, public-opinion monitoring, detecting illegal activity, finding early signs of potential depression, and the like. Emotion analysis classifies user corpus information into positive, negative and neutral tendencies. Until now, various methods have been used for single-modality recognition of images or text, but single-feature emotion analysis has many limitations: platforms with large user bases, such as Weibo, Facebook and Twitter, all support publishing images and text together, and most existing methods cannot comprehensively analyze the multiple corpora a user publishes, which causes judgment errors. For the diverse corpus information of social platforms, the accuracy and comprehensiveness of emotion analysis need to be improved.
The social media corpus emotion analysis method based on multi-model fusion avoids the drawback of single-feature emotion analysis: it analyzes emotion by combining images and text, making it more accurate and more widely applicable. Performing semantic analysis of social media information over both modalities improves the accuracy and comprehensiveness of emotion analysis.
Disclosure of Invention
The invention aims to provide a social media corpus emotion analysis method based on multi-model fusion. Experimental data are acquired from social media with the pyspider crawler framework, and the crawled data set is divided into three types: text only, image only, and combined text and image. The invention focuses on the combined text-and-image case; corpora of the other two types can be used to verify its robustness. First, the information in the corpus is identified and assigned to one of the three categories above; whichever category applies, the corpus is processed by the pipeline for image-text corpora, so emotion analysis can be performed reasonably regardless of the form of the user's corpus, which ensures the robustness of the model. For the text in the corpus, an emotion dictionary is constructed with the SO-PMI algorithm (semantic orientation pointwise mutual information) and positive, neutral and negative orientation is analyzed; because SO-PMI cannot handle Chinese words and phrases flexibly, similarity distances replace the PMI between words and a new formula is constructed. Second, for images (including collections of pictures and video), the meaning of the image is analyzed with a visual-text joint modeling algorithm to obtain the emotional tendency of the image. Finally, the text-corpus analysis result and the image-corpus analysis result are fused by weighting to obtain the final emotion analysis result.
In order to achieve the purpose, the technical scheme adopted by the invention is a social media corpus emotion analysis method based on multi-model fusion, which comprises the following steps:
step 1, data preprocessing:
the data are obtained by crawler from social platforms such as Sina Weibo. Irrelevant data such as advertisements are filtered out, and only posts with user subjectivity are kept. The filtered text is segmented with the jieba tokenizer; because the segmented data contain many meaningless tokens that would make later model training harder, they are filtered with a stop-word list (a comprehensive, manually curated list), yielding the preprocessed text. To simplify processing of picture data, each picture is normalized to 256 × 256 pixels.
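A minimal sketch of the text side of this preprocessing step (not part of the patent): `segment` stands in for the jieba tokenizer and `STOPWORDS` for a full stop-word list, so the flow is self-contained and runnable.

```python
# Stand-ins: `segment` mimics jieba.lcut and STOPWORDS a full stop-word list.
STOPWORDS = {"的", "了", "，", "。", "！"}

def segment(text):
    # Placeholder for jieba.lcut(text); splitting on spaces keeps this runnable.
    return text.split()

def preprocess(text):
    """Segment, then drop stop words and punctuation tokens."""
    return [w for w in segment(text) if w not in STOPWORDS]

print(preprocess("今天 天气 很 好 ， 心情 不错 。"))
# -> ['今天', '天气', '很', '好', '心情', '不错']
```

In the real pipeline, advertisement filtering would happen before this function and the image branch would run in parallel.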
Step 2, performing SO-PMI model training on the text corpus:
words in the text obtained in step 1 are emotion-labelled and likewise divided into three categories: positive, negative and neutral. Text data used for model training account for 70% of the total, and test/validation data for 30%. First, from the segmented, stop-word-filtered data, 70% of the processed emotion vocabulary is fed to the Word2vec tool to obtain an extended emotion dictionary. The SO-PMI algorithm, based on semantic orientation, then judges which category a word belongs to from its distance to the emotion dictionary. Finally, the influence of negation words, degree adverbs, exclamations, adversatives and emoticons is considered, all factors are balanced, and the emotional tendency of the text content is computed to obtain the classification result.
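The SO-PMI idea with the patent's modification — replacing pairwise PMI with a similarity distance to seed words — can be sketched as follows; the 2-d vectors and the seed words are toy stand-ins for Word2vec embeddings and a real emotion dictionary built from the labelled 70% split.

```python
import math

# Toy 2-d "embeddings"; in the method these would be Word2vec vectors
# and a full seed dictionary, not single hand-picked words.
POS_SEEDS = {"good": (1.0, 0.1)}
NEG_SEEDS = {"bad": (-1.0, 0.1)}

def cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def so_score(vec):
    # Similarity to positive seeds minus similarity to negative seeds,
    # in place of the summed PMI terms of classic SO-PMI.
    return (sum(cos(vec, s) for s in POS_SEEDS.values())
            - sum(cos(vec, s) for s in NEG_SEEDS.values()))

def polarity(vec, eps=0.1):
    s = so_score(vec)
    return "positive" if s > eps else "negative" if s < -eps else "neutral"

print(polarity((0.9, 0.2)))   # close to the positive seed -> "positive"
```

The `eps` dead-band around zero yields the neutral class; negation words, degree adverbs and emoticons would adjust the score before thresholding.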
Step 3, CNN + LSTM model training is carried out on the picture data:
on the basis of the picture data set, emotion-describing text is added for each picture. Data from these two modalities provide high-precision fine-grained classification for image classification; CNN + LSTM performs the text classification, and the two classification results are combined into an emotional interpretation of the image. Image classification uses a CNN model consisting of convolutional layers and fully connected layers; on the text side, a deep structured joint embedding method jointly embeds images and fine-grained visual descriptions. The method learns a compatibility function between image and text and can be regarded as an extension of multi-modal structured joint embedding. Instead of a bilinear compatibility function, the inner product of features produced by deep neural encoders is used to maximize the compatibility between a description and its matching image while minimizing compatibility with images of other classes. Given data D = {(v_n, t_n, y_n), n = 1, …, N}, where v ∈ V denotes visual information, t ∈ T denotes text, and y ∈ Y denotes the class label, the image and text classifier functions f_v: V → Y and f_t: T → Y are learned by minimizing the empirical risk under the 0-1 loss. The compatibility function F: V × T → ℝ is then defined from the features of the learnable encoders θ(v) for images and Φ(t) for text, where N is the number of samples, V the image set, T the text set, and Y the label space.
The following three formulas describe the multi-model-fusion social media corpus emotion analysis method mathematically. Formula (1.3) is the image-text compatibility (fusion) function: F(v, t) is the fusion result, θ(v) the image encoder output and Φ(t) the text encoder output. Formula (1.1) classifies an image v as the label y ∈ Y whose text descriptions t ~ T(y) maximize the expected compatibility; formula (1.2) symmetrically classifies a text t over images v ~ V(y).
f_v(v) = argmax_{y∈Y} E_{t~T(y)}[F(v, t)] (1.1)
f_t(t) = argmax_{y∈Y} E_{v~V(y)}[F(v, t)] (1.2)
F(v, t) = θ(v)^T Φ(t) (1.3)
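A hedged numerical sketch of equations (1.1) and (1.3): the encoders θ and Φ are fixed random projections standing in for trained deep encoders, and an image is classified by its mean compatibility with the text descriptions of each class.

```python
import numpy as np

# W_img/W_txt play the role of the learnable encoders θ and Φ;
# in the patent these are deep neural encoders, not random matrices.
rng = np.random.default_rng(0)
W_img = rng.normal(size=(4, 8))   # θ: raw image features -> joint space
W_txt = rng.normal(size=(4, 6))   # Φ: raw text features  -> joint space

def theta(v): return W_img @ v
def phi(t): return W_txt @ t

def F(v, t):                      # eq. (1.3): compatibility θ(v)ᵀΦ(t)
    return float(theta(v) @ phi(t))

def classify_image(v, texts_by_class):
    # eq. (1.1): argmax over labels y of E_{t~T(y)}[F(v, t)],
    # with the expectation estimated by the sample mean.
    return max(texts_by_class,
               key=lambda y: np.mean([F(v, t) for t in texts_by_class[y]]))

v = rng.normal(size=8)
texts_by_class = {"positive": [rng.normal(size=6) for _ in range(3)],
                  "negative": [rng.normal(size=6) for _ in range(3)]}
print(classify_image(v, texts_by_class))
```

Equation (1.2) would be the mirror image: classify a text by averaging F over sample images of each class.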
Step 4, multi-model fusion:
the classification results for text and image emotion are obtained through steps 2 and 3, and the two parts are combined by weighting to decide the final classification: y = a·m + b·n, where m is the category similarity determined from the plain text and n is the category similarity determined from the text derived from the image. The weights a and b are then solved for with the genetic algorithm toolbox of MATLAB.
Step 5, final emotion analysis results:
with the values of a and b from step 4 in y = a·m + b·n, the text category similarity and the image-text similarity are input, and the image-text classification value y is output as 1, -1 or 0, where 1 is positive, -1 is negative, and 0 is the neutral classification result.
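The step-4/step-5 fusion can be sketched as follows; the weights a, b and the thresholds separating 1, -1 and 0 are illustrative values, not the ones the method would learn.

```python
# y = a*m + b*n thresholded into the three classes of step 5.
# a, b and the thresholds are illustrative, not learned values.
def fuse(m, n, a=0.6, b=0.4, t_pos=0.3, t_neg=-0.3):
    y = a * m + b * n
    if y > t_pos:
        return 1        # positive
    if y < t_neg:
        return -1       # negative
    return 0            # neutral

print(fuse(0.8, 0.5))   # both modalities lean positive -> 1
```

When the two modalities disagree, the learned weights decide which one dominates; near-zero combined scores fall into the neutral band.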
Compared with the prior art, the invention has the technical advantages that:
(1) the invention processes corpora with a cross-media method. For the text in the corpus, an emotion dictionary is constructed with the SO-PMI algorithm and positive, neutral and negative orientation is analyzed by pointwise mutual information; because this method cannot handle Chinese words and phrases flexibly, similarity distances replace the PMI between words and a new formula is constructed.
(2) Second, for image or video corpora (a video can be regarded as a collection of images), the meaning of the image is obtained and analyzed with a visual-text joint modeling method.
(3) Finally, the plain-text analysis result and the visual analysis result are fused by weighting to obtain the final emotion analysis result.
Drawings
FIG. 1 is a sample drawing of the material used in the present invention.
FIG. 2 is a general structure diagram of social media corpus emotion analysis based on multi-model fusion.
Fig. 3 is a diagram showing the results of word segmentation in the present invention.
FIG. 4 is a stop vocabulary diagram.
FIG. 5 is a diagram of the sample processed in step 1.
FIG. 6 is a diagram of a SO-PMI model training process.
FIG. 7 is a subgraph of the inventive training of the CNN + LSTM model.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
The technical scheme adopted by the invention is a social media corpus emotion analysis method based on multi-model fusion; its specific analysis process is as follows.
(1) Chinese word segmentation
Chinese word segmentation is the process of recombining a continuous character sequence into a word sequence according to a given standard; the sequence is split into individual words following Chinese usage. In the implementation, the jieba segmentation tool is used to segment the text; a segmented sentence is shown in FIG. 3, where the sentence can be seen split into individual words.
(2) Stop word
A normal Chinese sentence usually contains special symbols such as commas, periods and semicolons; these punctuation marks need not survive segmentation. Sentences also contain words that contribute little to their meaning, such as common function words, so these are deleted during preprocessing.
(3) Constructing word vectors
Word vectors are extracted from the large amount of data processed in steps (1) and (2) with the Word2Vec tool, reducing the data dimensionality and yielding an extended data dictionary.
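An illustrative stand-in for this step: the patent uses Word2Vec (e.g. via the gensim tool), while the sketch below builds count-based co-occurrence vectors and reduces them with SVD, conveying the same idea of dense, dimensionality-reduced word vectors without training a model.

```python
import numpy as np

# Toy corpus of segmented posts (stand-in for the step (1)/(2) output).
docs = [["天气", "好", "开心"], ["天气", "差", "难过"], ["开心", "好"]]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within each post.
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for w in d:
        for u in d:
            if w != u:
                C[idx[w], idx[u]] += 1

# SVD compresses the counts into dense 2-d word vectors.
U, S, _ = np.linalg.svd(C)
vectors = {w: U[idx[w], :2] * S[:2] for w in vocab}
print(len(vectors), vectors["好"].shape)   # 5 words, 2-d vectors
```

The resulting vectors support the word-to-dictionary distances the SO-PMI model needs in step (4).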
(4) Training SO-PMI model
The text data processed through steps (1), (2) and (3) yield an extended emotion dictionary; the SO-PMI algorithm determines which category a text belongs to from the distances between words, constructing the SO-PMI model.
(5) Image normalization processing
Image data obtained by the crawler vary in size, which complicates processing, so the sizes are normalized for the selected algorithm: each picture is resized to 256 × 256 pixels.
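A sketch of the normalization itself; in practice a library routine (e.g. a bilinear resize) would be used, while the nearest-neighbour resize below only illustrates mapping any input size to 256 × 256.

```python
import numpy as np

# Nearest-neighbour resize of an H x W (x C) image array to size x size.
def normalize(img, size=256):
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

img = np.zeros((480, 640, 3), dtype=np.uint8)   # a crawled image of arbitrary size
print(normalize(img).shape)                     # (256, 256, 3)
```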
(6) Training CNN + LSTM model
The image data processed in step (5), together with their labels, are used to train the CNN + LSTM model.
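A highly simplified numpy sketch of the two branches (not the patent's trained model): a convolution-plus-fully-connected image branch and a recurrent, LSTM-style text summary; all shapes and random weights are illustrative, and real training would use a deep learning framework.

```python
import numpy as np

# Toy forward passes only; backprop, pooling and LSTM gates are omitted.
rng = np.random.default_rng(1)

def conv2d_valid(x, k):
    """Plain 'valid' 2-d convolution (cross-correlation) with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cnn_branch(img):
    # Convolution + ReLU, then a fully connected layer, as in the CNN of step 3.
    feat = np.maximum(conv2d_valid(img, rng.normal(size=(3, 3))), 0)
    return np.tanh(rng.normal(size=(8, feat.size)) @ feat.ravel())

def recurrent_branch(seq):
    # LSTM-style sequential summary of the description text (no gates here).
    h = np.zeros(8)
    W = rng.normal(size=(8, 8))
    for x_t in seq:
        h = np.tanh(W @ h + x_t)
    return h

img_vec = cnn_branch(rng.normal(size=(16, 16)))
txt_vec = recurrent_branch(rng.normal(size=(5, 8)))
score = float(img_vec @ txt_vec)   # combined image-text compatibility score
print(img_vec.shape, txt_vec.shape)
```

The inner product at the end mirrors the compatibility function of formula (1.3): both branches map into the same joint space.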
(7) Multi-model fusion
The trained SO-PMI and CNN + LSTM models receive image-text data and produce two results, which are combined by weighting to decide the final classification; experiments with the multi-model-fusion social media corpus emotion analysis method verify its effectiveness and accuracy. Compared with a single model and with text-only emotion analysis, the method's accuracy is markedly improved, and the results show higher accuracy when analyzing the emotion of Weibo posts.
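The MATLAB genetic-algorithm weight search of step (7) can be approximated by a toy genetic search in Python; the labelled fusion scores and the fitness function below are made up for illustration.

```python
import random

random.seed(0)
# Made-up labelled samples: (text score m, image score n, true label ±1).
data = [(0.9, 0.7, 1), (0.2, 0.8, 1), (-0.6, -0.3, -1), (-0.8, 0.1, -1)]

def fitness(ab):
    """Number of samples whose sign(a*m + b*n) matches the label."""
    a, b = ab
    return sum((1 if a * m + b * n > 0 else -1) == y for m, n, y in data)

# Initial population of candidate (a, b) weight pairs.
pop = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(20)]
for _ in range(30):                       # selection + Gaussian mutation
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # keep the fittest half
    pop = parents + [(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                     for a, b in parents]

best = max(pop, key=fitness)
print(best, fitness(best))
```

A production search would add crossover and a richer fitness (e.g. validation accuracy with the neutral band of step 5), but the select-mutate loop is the core of the idea.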
Claims (2)
1. A social media corpus emotion analysis method based on multi-model fusion, characterized by comprising the following steps:
step 1, data preprocessing:
the data are obtained from a social platform through a crawler; irrelevant advertisement data are filtered out and only posts with user subjectivity are kept; the filtered text is segmented with the jieba tokenizer; because the segmented data contain many meaningless tokens, they are filtered with a stop-word list (a comprehensive, manually curated list) to obtain the preprocessed text; to simplify processing of picture data, each picture is normalized to 256 × 256 pixels;
step 2, performing SO-PMI model training on the text corpus:
words in the text obtained in step 1 are emotion-labelled and divided into positive, negative and neutral categories; text data used for model training account for 70% of the total, and test/validation data for 30%; first, from the segmented, stop-word-filtered data, 70% of the processed emotion vocabulary is fed to the Word2vec tool to obtain an extended emotion dictionary; the SO-PMI algorithm, based on semantic orientation, judges which category a word belongs to from its distance to the emotion dictionary; then the influence of negation words, degree adverbs, exclamations, adversatives and emoticons is considered, all factors are balanced, and the emotional tendency of the text content is computed to obtain the classification result;
step 3, CNN + LSTM model training is carried out on the picture data:
emotion-describing text is added for each picture on the basis of the picture data set; data from these two modalities provide high-precision fine-grained classification for image classification, CNN + LSTM performs the text classification, and the two classification results are combined into an emotional interpretation of the image; image classification uses a CNN model consisting of convolutional layers and fully connected layers; on the text side, a deep structured joint embedding method jointly embeds images and fine-grained visual descriptions; the method learns a compatibility function between image and text and can be regarded as an extension of multi-modal structured joint embedding; instead of a bilinear compatibility function, the inner product of features produced by deep neural encoders is used to maximize the compatibility between a description and its matching image while minimizing compatibility with images of other classes;
step 4, multi-model fusion:
the classification results of text and image emotion from steps 2 and 3 are combined by weighting to decide the final classification; the final classification result is y = a·m + b·n, where m is the category similarity determined from the plain text and n is the category similarity determined from the text derived from the image; the weights a and b are then solved for with the genetic algorithm toolbox of MATLAB;
step 5, final emotion analysis results:
with the values of a and b from step 4 in y = a·m + b·n, the text category similarity and the image-text similarity are input, and the image-text classification value y is output as 1, -1 or 0, where 1 is positive, -1 is negative, and 0 is the neutral classification result.
2. The method for analyzing social media corpus emotion based on multi-model fusion as claimed in claim 1, wherein:
given data D = {(v_n, t_n, y_n), n = 1, …, N}, where v ∈ V denotes visual information, t ∈ T denotes text, and y ∈ Y denotes the class label, the image and text classifier functions f_v: V → Y and f_t: T → Y are learned by minimizing the empirical risk under the 0-1 loss; the compatibility function F: V × T → ℝ is then defined from the features of the learnable encoders θ(v) for images and Φ(t) for text, where N is the number of samples, V the image set, T the text set, and Y the label space; the following three formulas describe the multi-model-fusion social media corpus emotion analysis method mathematically: formula (1.3) is the image-text compatibility (fusion) function, with F(v, t) the fusion result, θ(v) the image encoder output and Φ(t) the text encoder output; formula (1.1) classifies an image v as the label y ∈ Y whose text descriptions t ~ T(y) maximize the expected compatibility, and formula (1.2) symmetrically classifies a text t over images v ~ V(y);
f_v(v) = argmax_{y∈Y} E_{t~T(y)}[F(v, t)] (1.1)
f_t(t) = argmax_{y∈Y} E_{v~V(y)}[F(v, t)] (1.2)
F(v, t) = θ(v)^T Φ(t) (1.3).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010030785.2A CN111259141A (en) | 2020-01-13 | 2020-01-13 | Social media corpus emotion analysis method based on multi-model fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111259141A true CN111259141A (en) | 2020-06-09 |
Family
ID=70952992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010030785.2A Pending CN111259141A (en) | 2020-01-13 | 2020-01-13 | Social media corpus emotion analysis method based on multi-model fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259141A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105320960A (en) * | 2015-10-14 | 2016-02-10 | 北京航空航天大学 | Voting based classification method for cross-language subjective and objective sentiments |
CN106886580A (en) * | 2017-01-23 | 2017-06-23 | 北京工业大学 | A kind of picture feeling polarities analysis method based on deep learning |
CN107818084A (en) * | 2017-10-11 | 2018-03-20 | 北京众荟信息技术股份有限公司 | A kind of sentiment analysis method for merging comment figure |
CN108388544A (en) * | 2018-02-10 | 2018-08-10 | 桂林电子科技大学 | A kind of picture and text fusion microblog emotional analysis method based on deep learning |
CN108764268A (en) * | 2018-04-02 | 2018-11-06 | 华南理工大学 | A kind of multi-modal emotion identification method of picture and text based on deep learning |
Non-Patent Citations (3)
Title |
---|
DIONYSIS GOULARAS et al.: "Evaluation of Deep Learning Techniques in Sentiment Analysis from Twitter Data", 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML) |
FEN YANG et al.: "A Multi-model Fusion Framework based on Deep Learning for Sentiment Classification", 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD) |
NAN CHEN et al.: "Advanced Combined LSTM-CNN Model for Twitter Sentiment Analysis", 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881667B (en) * | 2020-07-24 | 2023-09-29 | 上海烽烁科技有限公司 | Sensitive text auditing method |
CN111881667A (en) * | 2020-07-24 | 2020-11-03 | 南京烽火星空通信发展有限公司 | Sensitive text auditing method |
CN112133406B (en) * | 2020-08-25 | 2022-11-04 | 合肥工业大学 | Multi-mode emotion guidance method and system based on emotion maps and storage medium |
CN112133406A (en) * | 2020-08-25 | 2020-12-25 | 合肥工业大学 | Multi-mode emotion guidance method and system based on emotion maps and storage medium |
CN112070093A (en) * | 2020-09-22 | 2020-12-11 | 网易(杭州)网络有限公司 | Method for generating image classification model, image classification method, device and equipment |
CN112231535B (en) * | 2020-10-23 | 2022-11-15 | 山东科技大学 | Method for making multi-modal data set in field of agricultural diseases and insect pests, processing device and storage medium |
CN112396091A (en) * | 2020-10-23 | 2021-02-23 | 西安电子科技大学 | Social media image popularity prediction method, system, storage medium and application |
CN112231535A (en) * | 2020-10-23 | 2021-01-15 | 山东科技大学 | Method for making multi-modal data set in field of agricultural diseases and insect pests, processing device and storage medium |
CN112396091B (en) * | 2020-10-23 | 2024-02-09 | 西安电子科技大学 | Social media image popularity prediction method, system, storage medium and application |
CN112214603A (en) * | 2020-10-26 | 2021-01-12 | Oppo广东移动通信有限公司 | Image-text resource classification method, device, terminal and storage medium |
CN112651448A (en) * | 2020-12-29 | 2021-04-13 | 中山大学 | Multi-modal emotion analysis method for social platform expression package |
CN112651448B (en) * | 2020-12-29 | 2023-09-15 | 中山大学 | Multi-mode emotion analysis method for social platform expression package |
CN112669936A (en) * | 2021-01-04 | 2021-04-16 | 上海海事大学 | Social network depression detection method based on texts and images |
CN113157998A (en) * | 2021-02-28 | 2021-07-23 | 江苏匠算天诚信息科技有限公司 | Method, system, device and medium for polling website and judging website type through IP |
CN113222772A (en) * | 2021-04-08 | 2021-08-06 | 合肥工业大学 | Native personality dictionary construction method, system, storage medium and electronic device |
CN113222772B (en) * | 2021-04-08 | 2023-10-31 | 合肥工业大学 | Native personality dictionary construction method, native personality dictionary construction system, storage medium and electronic equipment |
CN114169450A (en) * | 2021-12-10 | 2022-03-11 | 同济大学 | Social media data multi-modal attitude analysis method |
CN115827880A (en) * | 2023-02-10 | 2023-03-21 | 之江实验室 | Service execution method and device based on emotion classification |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111259141A (en) | Social media corpus emotion analysis method based on multi-model fusion | |
Liu et al. | Visual listening in: Extracting brand image portrayed on social media | |
Kaur et al. | Multimodal sentiment analysis: A survey and comparison | |
CN109933664B (en) | Fine-grained emotion analysis improvement method based on emotion word embedding | |
CN111966917B (en) | Event detection and summarization method based on pre-training language model | |
CN111488931B (en) | Article quality evaluation method, article recommendation method and corresponding devices | |
CN107239444B (en) | A kind of term vector training method and system merging part of speech and location information | |
Azpiazu et al. | Multiattentive recurrent neural network architecture for multilingual readability assessment | |
CN109033433B (en) | Comment data emotion classification method and system based on convolutional neural network | |
CN108563638B (en) | Microblog emotion analysis method based on topic identification and integrated learning | |
CN107862087A (en) | Sentiment analysis method, apparatus and storage medium based on big data and deep learning | |
CN107704996B (en) | Teacher evaluation system based on emotion analysis | |
CN109492105B (en) | Text emotion classification method based on multi-feature ensemble learning | |
KR20120109943A (en) | Emotion classification method for analysis of emotion immanent in sentence | |
CN112287197B (en) | Method for detecting sarcasm of case-related microblog comments described by dynamic memory cases | |
CN107818173B (en) | Vector space model-based Chinese false comment filtering method | |
CN114170411A (en) | Picture emotion recognition method integrating multi-scale information | |
CN109101490A (en) | The fact that one kind is based on the fusion feature expression implicit emotion identification method of type and system | |
Wagle et al. | Explainable ai for multimodal credibility analysis: Case study of online beauty health (mis)-information | |
CN112800184A (en) | Short text comment emotion analysis method based on Target-Aspect-Opinion joint extraction | |
CN117115505A (en) | Emotion enhancement continuous training method combining knowledge distillation and contrast learning | |
CN115600605A (en) | Method, system, equipment and storage medium for jointly extracting Chinese entity relationship | |
Mazhar et al. | Movie reviews classification through facial image recognition and emotion detection using machine learning methods | |
CN107291686B (en) | Method and system for identifying emotion identification | |
CN116910294A (en) | Image filter generation method based on emotion analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200609 |