CN109376775A - Multi-modal sentiment analysis method for online news - Google Patents


Info

Publication number
CN109376775A
CN109376775A (application CN201811181032.0A)
Authority
CN
China
Prior art keywords
news
image
feature
text
modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811181032.0A
Other languages
Chinese (zh)
Other versions
CN109376775B (en)
Inventor
张莹
郭文雅
蔡祥睿
赵雪
袁晓洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority to CN201811181032.0A priority Critical patent/CN109376775B/en
Publication of CN109376775A publication Critical patent/CN109376775A/en
Application granted granted Critical
Publication of CN109376775B publication Critical patent/CN109376775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06F18/24: Pattern recognition; analysing; classification techniques (G Physics; G06 Computing; G06F Electric digital data processing)
    • G06F18/2155: Generating training patterns; bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F18/253: Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention proposes a multi-modal sentiment analysis method for online news. The disclosed method makes comprehensive use of the text and image content of online news, constructs a multi-modal deep-learning model that fully fuses the features of the different modalities, and on that basis analyzes and predicts the emotion readers feel after reading the news. On real news data containing images, the method is significantly more effective than sentiment analysis models that do not consider image information, demonstrating that the images in news contribute to the narration of the whole news event and influence the reading emotion of the reader.

Description

Multi-modal sentiment analysis method for online news
Technical field
The invention belongs to the field of artificial intelligence, and in particular relates to a method that uses multiple modalities of data, such as the text content and image content of online news, to analyze and predict the emotion of readers after reading the news.
Background technique
As a mainstream form of social media, online news services have become a novel channel of information dissemination, attracting hundreds of millions of readers every day. After reading a news article, a reader usually develops a positive, negative, or neutral emotional orientation. Effectively analyzing users' emotions helps online news providers deliver better services to users, and also helps governments understand public opinion in a timely manner and supervise Internet content effectively. As an important research direction in natural language processing, this kind of emotional-orientation analysis has attracted the attention of more and more scholars.
With the development of Internet technology, more and more online news articles contain one or more images, and these images play a very important role in the narration of the news event. On the one hand, the images in a news article are more intuitive than a verbal description and can present the news event to the reader more vividly; on the other hand, the choice of news images influences the emotional keynote of the whole article, affects the reader's understanding of the news to some extent, and deepens or changes the reader's emotional orientation.
Existing research methods focus only on the textual content of the news, using text features to analyze the semantics and emotion information contained in the text and to predict the reader's emotional orientation with them. However, a large body of research shows that images and audio data also carry affective features.
Multi-modal sentiment analysis extends text sentiment analysis to text, image, and audio data, analyzing the connections between data of different modalities and obtaining the emotional orientation that the different data jointly produce. In recent years, multi-modal analysis methods have developed considerably. Existing multi-modal sentiment analysis methods mainly focus on video files: using the text, image, and sound data in a video, they extract features such as a character's lines, posture changes, and tone of voice to comprehensively analyze the emotional orientation of the people in the video. In a video file, the data of the different modalities have a one-to-one correspondence.
The data composition of online news differs from that of video files. The text and images in a news article have definite positional relationships: an image at the head of the article is usually related to the most salient content of the news and is an expression of the entire article, while an image interspersed in the body is more closely tied to the semantics of its surrounding text. The positional relationships between images are consistent with the semantics and logical structure of the article text, and the positional relationship between images and text strongly affects the expression of the whole news event. Existing multi-modal analysis methods cannot model this special data structure of online news, and therefore find it difficult to achieve satisfactory sentiment analysis results.
In summary, multi-modal sentiment analysis of online news is an innovative research question with important research significance and application value.
Summary of the invention
The purpose of the invention is to solve the problem that existing online news sentiment analysis methods do not take image information into account. A multi-modal sentiment analysis method for online news is proposed which, within a multi-modal sentiment analysis framework, considers both the images and the text of online news and analyzes and predicts the emotional orientation readers develop after reading the news.
In view of the fact that more and more online news contains image information, the invention uses deep-learning-based methods to innovatively propose a multi-modal sentiment analysis method for online news, improving the effectiveness of online news sentiment analysis.
The detailed content of the proposed multi-modal sentiment analysis method for online news is as follows:
1st, data preprocessing
Collect online news data and assign each article to a positive, negative, or neutral emotional category. Process the original image-bearing online news into a unified format to ensure the following steps proceed smoothly. Store the news text and images separately, retaining information such as each image's original position in the news and its image identifier.
2nd, multi-modal feature extraction
Extract the features of the news text and images, and use feature-fusion methods to obtain features that can represent both the textual and the visual information of the news. The invention proposes deep neural network models that progress from shallow to deep, gradually incorporating the news image content and step by step improving the effect of news reader sentiment analysis.
2.1st, multi-modal methods that do not consider image positions in the news text
(1) Multimodal concatenation model (MC): for the text and images in the news, a classical LSTM is used to extract the semantic features of the individual sentences and images; the text features and image features are then concatenated, and the resulting vector serves as the multi-modal feature of the whole article.
(2) Multimodal LSTM-based model (MLSTM): the positional relationship among the news images is retained, and the same LSTM model processes both the text and the image feature sequences; the output of the last hidden layer serves as the multi-modal feature of the news.
2.2nd, methods based on location indices
Each image is regarded as a special sentence. The news is split into sentences and, following the preprocessed data format, the sentences and images of the text are numbered uniformly; this location index is appended as the last dimension of each image and sentence feature. The features carrying this location information are then used as the feature input of the MC and MLSTM models in step 2.1, finally yielding the feature vector of the whole article. The location-index method helps the deep-learning model take the positional information between the text and images of the news into account in a fairly simple way.
2.3rd, ordered multi-modal feature fusion
The news text and image features are re-ordered according to their positional relationships in the original online news, and an LSTM extracts this ordered feature set; the output of the last hidden layer serves as the feature representation of the whole article. This ordered processing retains the interaction between each image and its context and simulates the behavior of a reader reading the news sequentially from top to bottom. The resulting news feature representation contains not only the semantic features of the text and images in the news but also the effect of the positional relationships between them, and can express the semantics and structure of the whole article more completely.
3rd, emotion inference
Definition 1: the emotional category label set is E_e:
E_e = {e_pos, e_neg, e_neu}
where e_pos, e_neg, and e_neu denote the positive, negative, and neutral emotional categories respectively.
According to the extracted news features, the news is classified into one of the emotional categories in the label set.
Advantages and positive effects of the invention:
The invention creatively proposes a multi-modal sentiment analysis method for online news with images. It considers both the content and the positions of the news images, builds deep-learning models that use the positional relationships between text and images to model the semantic relations between them, and extracts fully fused multi-modal features, realizing a synergistic fusion of the semantics of multiple data modalities so as to analyze the reader's emotional orientation. The invention is the first to pay attention to the influence of news images on readers' emotional orientation, and effectively improves the effect of online news sentiment analysis.
Description of the drawings
Fig. 1 is a schematic diagram of the multi-modal sentiment analysis process for online news.
Fig. 2 is a schematic diagram of the multi-modal sentiment analysis framework.
Fig. 3 shows the per-category statistics of the image-bearing news data set.
Fig. 4 shows the statistics of the number of images in the online news.
Fig. 5 is a schematic diagram of the multimodal concatenation model.
Fig. 6 is a schematic diagram of the LSTM-based multi-modal sentiment analysis model.
Fig. 7 is a schematic diagram of the ordered multi-modal sentiment analysis model.
Specific embodiments
The invention proposes a multi-modal sentiment analysis method for online news; the main process of the method is shown in Fig. 1.
The specific implementation of the invention is divided into three phases, as shown in Fig. 2. The three phases of the implementation process are explained in detail below.
Step 1, data preprocessing
Prepare an online news data set with images, for example 4485 online news articles from Daily Mail news, each containing at least one image. The emotion-label and image-count statistics of the data set are shown in Fig. 3 and Fig. 4. Each article is labeled with a positive, negative, or neutral emotional category.
In the data preprocessing phase, each original image-bearing article needs to be processed into a unified format of one text plus several images. The specific preprocessing steps are as follows: for a given online news article, store the text and image data separately; assign each news image a timestamp as its unique identifier; save the text of each article in a separate text file, preserving the original order of the sentences and marking the identifiers of the images between the sentences; maintain an image-position dictionary in this way to record each image's position in the article. An online news article with images is thus expressed as a text with image markers plus several images with unique identifiers.
Take one online news article in the data set as an example. The article contains four images. In the data preprocessing phase, four distinct timestamps are used as the image file names, and the sentences of the article are saved into a text file in the order of the news text; at each position where an image occurs in the news, a line beginning with "##" followed by the image's timestamp marks the image in the text. In subsequent processing, the correct image can then be indexed from the text content.
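The file format described above can be parsed with a few lines of code. This is a hypothetical illustration: the "##" marker convention comes from the description, while the function name and the sample sentences are invented:

```python
def parse_news_file(lines):
    """Split a preprocessed news file into an ordered list of
    ("sentence", text) and ("image", timestamp) items, plus a
    position dictionary mapping each image id to its index."""
    items, positions = [], {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("##"):          # image marker: "##" + timestamp id
            image_id = line[2:].strip()
            positions[image_id] = len(items)
            items.append(("image", image_id))
        else:                              # ordinary sentence of the article
            items.append(("sentence", line))
    return items, positions

# Example article with two images, in reading order.
raw = [
    "The storm hit the coast on Monday.",
    "##20181011-0001",
    "Residents were evacuated overnight.",
    "##20181011-0002",
]
items, positions = parse_news_file(raw)
```

The position dictionary plays the role of the image-position dictionary maintained during preprocessing; the ordered item list is what the later fusion models consume.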
Step 2, multi-modal feature extraction:
News images can help the narration of a news event and thereby influence the reader's emotion after reading, so taking news image information into account is essential when performing sentiment analysis on online news. The feature extraction stage is the core of the whole framework: it extracts and fuses text and image features to obtain the feature representation of the entire article. A series of LSTM-based deep neural network models is proposed here to accomplish this task.
Before feature fusion, the news text and image features are first extracted with classical neural network models. Following the hierarchical "word-sentence-document" structure of text, words are taken as input to extract sentence-level semantic representations, and sentences then serve as the basic units of the news text in the subsequent fusion process. Sentence-level semantic features are extracted by a classical LSTM model. A sentence in the news is denoted s = {ω_1, ω_2, ..., ω_l}, where l is the number of words in the sentence and ω_t is the feature vector of the t-th word, obtained by pre-training a word-embedding method on the data set. The LSTM extracts the sentence feature, and the output h_l of its last hidden layer serves as the feature vector of sentence s in subsequent steps, denoted h_s. The sentence feature sequence of an online news article can then be written h = {h_s1, h_s2, ..., h_sns}, where ns is the number of sentences in the news text.
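As a rough, self-contained sketch of the sentence encoder (not the authors' implementation; the gate layout, dimensions, and random initialization are illustrative assumptions), a minimal LSTM can be run over a sentence's word vectors and the last hidden state kept as the sentence feature h_s:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_hidden(words, W, U, b, d):
    """Run a minimal LSTM over a sequence of word vectors and
    return the last hidden state as the sentence feature h_s.
    W: (4d, input_dim), U: (4d, d), b: (4d,) hold the stacked
    input/forget/cell/output gate parameters."""
    h = np.zeros(d)
    c = np.zeros(d)
    for w in words:
        z = W @ w + U @ h + b           # all four gate pre-activations at once
        i = sigmoid(z[0:d])             # input gate
        f = sigmoid(z[d:2*d])           # forget gate
        g = np.tanh(z[2*d:3*d])         # candidate cell state
        o = sigmoid(z[3*d:4*d])         # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(0)
input_dim, d, l = 8, 4, 5               # word dim, hidden dim, sentence length
W = rng.normal(scale=0.1, size=(4*d, input_dim))
U = rng.normal(scale=0.1, size=(4*d, d))
b = np.zeros(4*d)
sentence = rng.normal(size=(l, input_dim))  # l word-embedding vectors
h_s = lstm_last_hidden(sentence, W, U, b, d)
```

In practice a trained deep-learning framework would replace this hand-rolled cell; the point is only that the last hidden state summarizes the whole word sequence.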
Each image in the news is regarded as a "special" sentence. A 19-layer VGGNet pre-trained as the feature extraction model outputs a 4096-dimensional feature v_t from the penultimate fully connected layer fc-7; a transition matrix W_img ∈ R^(4096×d) transforms v_t into the image feature vector p_t = v_t × W_img. The image feature sequence of an online news article is p = {p_1, p_2, ..., p_ni}, where ni is the number of images in the article.
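The image-side projection p_t = v_t × W_img is a single matrix product. In this sketch the VGG-19 forward pass is mocked with a random 4096-dimensional vector, and the target dimension d is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 128                                        # assumed fused-feature dimension
v_t = rng.normal(size=4096)                    # stand-in for the VGG-19 fc-7 output
W_img = rng.normal(scale=0.01, size=(4096, d)) # transition matrix W_img

p_t = v_t @ W_img                              # image feature p_t = v_t x W_img
```

The same projection would be applied to every image of an article to build the sequence p = {p_1, ..., p_ni}.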
1. Multi-modal feature fusion without image positions
Without considering the positional relationship between the text and images of the news, the two kinds of features are simply concatenated, and the result serves as the feature representation of the news. This model is called the multimodal concatenation model (MC); its main structure is shown in Fig. 5.
For the textual content of the news, the sentence feature sequence h is fed into an LSTM, and the output of the last hidden layer serves as the feature representation of the whole news text, denoted feature_text. Similar to the sentences, the images of an article also follow a certain logical order; to preserve this order, the image feature sequence p is fed into another LSTM, whose last hidden-layer output serves as the news image feature representation, denoted feature_image. In this model, the feature of the whole article can then be expressed as:
z = feature_text ⊕ feature_image
where ⊕ denotes vector concatenation.
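Under the MC model the final fusion step is plain vector concatenation. A minimal sketch, with stand-in values for the two LSTM outputs:

```python
import numpy as np

feature_text = np.array([0.2, -0.1, 0.5])   # stand-in for the text-LSTM summary
feature_image = np.array([0.7, 0.3])        # stand-in for the image-LSTM summary

# z = feature_text (+) feature_image, i.e. vector concatenation
z = np.concatenate([feature_text, feature_image])
```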
The MC model feeds the text and the images of the news into different LSTM models. Although it retains the logical relations among sentences and among images, it completely ignores the relations between sentences and images. To solve this problem, the multimodal LSTM-based feature fusion model (MLSTM) is further proposed; its main structure is shown in Fig. 6.
In MLSTM, the same LSTM processes both the images and the sentences of the news, and the output of the last hidden layer again serves as the feature vector of the whole article, so the interaction between text and images is taken into account. Since the LSTM is a recurrent neural network with memory, features closer to the end of the input sequence decay less and have a greater influence on the final news feature. The image feature sequence and the sentence feature sequence can therefore be combined in two ways: the text features are input either before or after the image features. In the following description, images are input first. The input sequence of the LSTM can be written as:
input = {p, h}
that is, input = {p_1, ..., p_ni, h_1, ..., h_ns}. After feeding input into the LSTM, the feature vector of the whole article can be expressed as:
z = f_n
where f_n is the output vector of the last hidden layer of the LSTM and n is the total number of sentences and images in the news.
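The MLSTM feeds the image features and the sentence features into one shared recurrent model as a single sequence, images first in the variant described above. A self-contained sketch (a plain tanh RNN stands in for the LSTM, so the recurrence is a simplification, and all dimensions and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
p = [rng.normal(size=d) for _ in range(2)]   # ni = 2 image features
h = [rng.normal(size=d) for _ in range(3)]   # ns = 3 sentence features

# images-first ordering: input = {p1, ..., p_ni, h1, ..., h_ns}
sequence = p + h
n = len(sequence)                            # n = ni + ns = 5

# stand-in for the shared LSTM: a tanh RNN whose last hidden state
# f_n plays the role of the news feature z
W_in = rng.normal(scale=0.1, size=(d, d))
W_rec = rng.normal(scale=0.1, size=(d, d))
state = np.zeros(d)
for x in sequence:
    state = np.tanh(W_in @ x + W_rec @ state)
z = state                                    # z = f_n
```

Swapping `p + h` to `h + p` gives the text-first variant mentioned in the description.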
2. Multi-modal feature fusion with location indices
Besides its own content features, a news image is also correlated with the semantic information of its context. Here, features carrying location indices are used to enhance the multi-modal fusion model's ability to capture image position information.
Specifically, the sentences and images in the article are numbered jointly, and this number becomes the location index; the location indices reflect the positional relations between the images and the sentences. The location index is appended to the sentence feature h_t and the image feature p_t, giving feature vectors with location indices h_t' and p_t':
h_t' = [h_t, i_ht]
p_t' = [p_t, i_pt]
where i_ht and i_pt are the position numbers of the t-th sentence and the t-th image in the article respectively, and h_t' and p_t' denote the position-enhanced sentence and image features.
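Appending the joint position number to a feature vector is a one-line operation per item. A sketch with illustrative values and an arbitrary numbering scheme:

```python
import numpy as np

h_t = np.array([0.4, -0.2, 0.1])   # a sentence feature
p_t = np.array([0.9, 0.0, -0.5])   # an image feature
i_ht, i_pt = 3.0, 1.0              # joint position numbers in the article

h_prime = np.concatenate([h_t, [i_ht]])   # h_t' = [h_t, i_ht]
p_prime = np.concatenate([p_t, [i_pt]])   # p_t' = [p_t, i_pt]
```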
The MC and MLSTM models are improved with the position-enhanced feature vectors; the new models are denoted MPC and MPLSTM respectively. For the MPC model, the news text and image features obtained are feature_text' and feature_image', and the news feature produced by the MPC model is:
z = feature_text' ⊕ feature_image'
Obviously, the MPC model has certain problems. On the one hand, it feeds the location-indexed features into different LSTMs; each LSTM only recognizes the positional relations among sentences or among images, so the location indices lose their intended effect and the positional relations between images and text are not exploited. On the other hand, the concatenation of feature_text' and feature_image' does not embody the positional relation between images and text either. The MPC model therefore can hardly achieve the goal of modeling image-text positional relations.
In the MPLSTM model, the LSTM input is input' = {p_1', ..., p_ni', h_1', ..., h_ns'}, so that the model structure learns the information contained in the location indices. The online news feature representation is:
z = f_n'
where f_n' is the output of the last hidden layer of the LSTM and n is the total number of sentences and images in the online news.
3. Ordered multi-modal feature fusion
The location-index models represent the position information of a news image with only a single index dimension. During news feature extraction it is difficult to guarantee that the LSTM can recognize this special dimension and understand the meaning the location index carries, and the out-of-order input of sentence and image features weakens the effect of the location indices in h' and p'.
In order to model the position information of news images more accurately, the invention proposes an ordered multi-modal neural network model, the multimodal sequential neural network (MSNN). During news feature extraction, the image and text features are restored, according to the image-position dictionary obtained in preprocessing, to the original text-image positional order of the article, so that the resulting news feature carries both the semantic and the structural information of the news. The MSNN structure is shown in Fig. 7.
In MSNN, the location index vector of the images and sentences can be written:
I = [i_p1, i_p2, ..., i_pni, i_h1, i_h2, ..., i_hns]
where i_pt is the position of the t-th image in the news, i_ht is the position of the t-th sentence in the news, and ni and ns are the numbers of images and sentences in the news respectively. The images and sentences are re-sorted with the location index vector I by first applying the following transformation to I:
W_index = g(I)
W_sort = W_index^T
input_sort = W_sort × features
where g(·) is the one-hot operation and features is the concatenation of the original feature vectors, that is, features = [p_1, ..., p_ni, h_1, ..., h_ns].
Then input_sort is the feature sequence after re-sorting, consistent with the structure of the article. Feeding the sorted vectors into an LSTM generates the feature of the whole news article:
z = f_n
where f_n is the output vector of the last hidden layer of the LSTM and n is the total number of sentences and images in the news.
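The re-sorting transformation can be carried out exactly as written in the formulas above: one-hot the index vector I, transpose, and multiply by the stacked features. A small sketch (0-based positions and toy feature values, chosen for illustration):

```python
import numpy as np

# stacked features in the order [p1, ..., p_ni, h1, ..., h_ns]
features = np.array([
    [1.0, 1.0],   # image 1
    [2.0, 2.0],   # image 2
    [3.0, 3.0],   # sentence 1
    [4.0, 4.0],   # sentence 2
])
# reading-order positions (0-based): sentence1, image1, sentence2, image2
I = np.array([1, 3, 0, 2])          # I = [i_p1, i_p2, i_h1, i_h2]

W_index = np.eye(len(I))[I]         # g(I): one-hot row per index
W_sort = W_index.T                  # W_sort = W_index^T
input_sort = W_sort @ features      # rows re-ordered into reading order
```

Row j of `input_sort` is the item whose article position is j, so the sequence fed to the LSTM matches the original text-image layout of the article.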
Step 3, emotion inference:
In the emotion inference phase, the framework classifies the result of feature extraction into one of the emotional categories of the label set E_e. Each article in the data set is represented as a feature vector z, and the invention uses a fully connected layer with parameter set W_d as the emotion inference layer, with likelihood function:
ê = f(W_d × z)
where ê is the emotion label of the news inferred from z and f(·) is a softmax function. The invention trains the model in a supervised way, taking the cross entropy between the true emotion distribution and the predicted emotion distribution as the target loss function and updating the parameters of the whole sentiment analysis model, so that the final model can more accurately predict, from the text and image content of an online news article, the emotion a reader develops after reading it.
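The inference layer and its training objective can be sketched in a few lines (W_d, z, and the label below are toy values, not trained parameters; the softmax and cross entropy follow the description):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())              # shift for numerical stability
    return e / e.sum()

z = np.array([0.5, -1.0, 2.0, 0.1])      # news feature vector from step 2
W_d = np.zeros((3, 4)); W_d[0, 2] = 1.0  # toy weights: 3 classes {pos, neg, neu}

probs = softmax(W_d @ z)                 # e_hat = f(W_d x z)
pred = int(np.argmax(probs))             # predicted emotional category

y_true = np.array([1.0, 0.0, 0.0])       # one-hot true label: e_pos
loss = -np.sum(y_true * np.log(probs))   # cross-entropy target loss
```

During training, the gradient of this loss would be backpropagated through the fusion model to update all parameters.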

Claims (1)

1. A multi-modal sentiment analysis method for online news, the specific steps being as follows:
1st, data preprocessing
collecting online news data and assigning each article to a positive, negative, or neutral emotional category; processing the original image-bearing online news into a unified format to ensure the following steps proceed smoothly; storing the news text and images separately, and retaining each image's original position in the news and its image identifier information;
2nd, multi-modal feature extraction
extracting the features of the news text and images, and using feature-fusion methods to obtain features that represent the textual and visual information of the news; proposing deep neural network models that progress from shallow to deep, gradually incorporating the news image content and step by step improving the effect of news reader sentiment analysis;
2.1st, multi-modal methods that do not consider image positions in the news text
(1) multimodal concatenation model (MC): for the text and images in the news, a classical LSTM is used to extract the semantic features of the individual sentences and images; the two kinds of features, text and image, are concatenated into one vector, which serves as the multi-modal feature of the whole article;
(2) multimodal LSTM-based model (MLSTM): the sequential positional relationship among the news images is retained, and the same LSTM model processes the text and image feature sequences; the output of the last hidden layer serves as the multi-modal feature of the news;
2.2nd, methods based on location indices
regarding each image as a special sentence; splitting the news into sentences and, following the preprocessed data format, numbering the sentences and images of the text uniformly; appending the location index as the last dimension of each image and sentence feature, and using the features carrying this location information as the feature input of the MC and MLSTM models of step 2.1, finally yielding the feature vector of the whole article; the location-index method helps the deep-learning model take the positional information between the text and images of the news into account in a fairly simple way;
2.3rd, ordered multi-modal feature fusion
re-ordering the news text and image features according to their positional relationships in the original online news, and extracting this ordered feature set with an LSTM, the output of the last hidden layer serving as the feature representation of the whole article; this ordered processing retains the interaction between each image and its context and simulates the behavior of a reader reading the news sequentially from top to bottom; the resulting news feature representation contains not only the semantic features of the text and images in the news but also the effect of their positional relationships, and can express the semantics and structure of the whole article more completely;
3rd, emotion inference
Definition 1: the emotional category label set is E_e:
E_e = {e_pos, e_neg, e_neu}
where e_pos, e_neg, and e_neu denote the positive, negative, and neutral emotional categories respectively;
according to the extracted news features, the news is classified into one of the emotional categories in the label set.
CN201811181032.0A 2018-10-11 2018-10-11 Online news multi-mode emotion analysis method Active CN109376775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811181032.0A CN109376775B (en) 2018-10-11 2018-10-11 Online news multi-mode emotion analysis method


Publications (2)

Publication Number Publication Date
CN109376775A true CN109376775A (en) 2019-02-22
CN109376775B CN109376775B (en) 2021-08-17

Family

ID=65402812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811181032.0A Active CN109376775B (en) 2018-10-11 2018-10-11 Online news multi-mode emotion analysis method

Country Status (1)

Country Link
CN (1) CN109376775B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180083901A1 (en) * 2016-09-20 2018-03-22 Google Llc Automatic response suggestions based on images received in messaging applications
CN106599933A (en) * 2016-12-26 2017-04-26 哈尔滨工业大学 Text emotion classification method based on the joint deep learning model
CN107092596A (en) * 2017-04-24 2017-08-25 重庆邮电大学 Text emotion analysis method based on attention CNNs and CCR
CN107832663A (en) * 2017-09-30 2018-03-23 天津大学 A kind of multi-modal sentiment analysis method based on quantum theory
CN107818084A (en) * 2017-10-11 2018-03-20 北京众荟信息技术股份有限公司 A kind of sentiment analysis method for merging comment figure

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAI PHAM et al.: "Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis", arXiv *
ZHOU Hu et al.: "Sentiment classification research on online consumer reviews based on deep LSTM neural networks", Chinese Journal of Medical Library and Information Science *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985243A (en) * 2019-05-23 2020-11-24 中移(苏州)软件技术有限公司 Emotion model training method, emotion analysis device and storage medium
CN111985243B (en) * 2019-05-23 2023-09-08 中移(苏州)软件技术有限公司 Emotion model training method, emotion analysis device and storage medium
CN112201339A (en) * 2019-07-08 2021-01-08 四川大学华西医院 Auxiliary diagnostic system for psychology
CN110827265A (en) * 2019-11-07 2020-02-21 南开大学 Image anomaly detection method based on deep learning
CN110827265B (en) * 2019-11-07 2023-04-07 南开大学 Image anomaly detection method based on deep learning
CN112364168A (en) * 2020-11-24 2021-02-12 中国电子科技集团公司电子科学研究院 Public opinion classification method based on multi-attribute information fusion
CN112784011A (en) * 2021-01-04 2021-05-11 南威软件股份有限公司 Emotional problem processing method, device and medium based on CNN and LSTM
CN112784011B (en) * 2021-01-04 2023-06-30 南威软件股份有限公司 Emotion problem processing method, device and medium based on CNN and LSTM
CN112801219A (en) * 2021-03-22 2021-05-14 华南师范大学 Multi-mode emotion classification method, device and equipment
CN112801219B (en) * 2021-03-22 2021-06-18 华南师范大学 Multi-mode emotion classification method, device and equipment
CN113377901A (en) * 2021-05-17 2021-09-10 内蒙古工业大学 Mongolian text emotion analysis method based on multi-size CNN and LSTM models
CN113408385A (en) * 2021-06-10 2021-09-17 华南理工大学 Audio and video multi-mode emotion classification method and system

Also Published As

Publication number Publication date
CN109376775B (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN109376775A (en) The multi-modal sentiment analysis method of online news
Abdullah et al. SEDAT: sentiment and emotion detection in Arabic text using CNN-LSTM deep learning
CN113177124B (en) Method and system for constructing knowledge graph in vertical field
CN106919646B (en) Chinese text abstract generating system and method
CN111104498B (en) Semantic understanding method in task type dialogue system
CN104735468B (en) A method and system for synthesizing images into a new video based on semantic analysis
CN110781668B (en) Text information type identification method and device
CN111159414B (en) Text classification method and system, electronic equipment and computer readable storage medium
CN101404036B (en) Keyword extraction method for PowerPoint electronic presentations
CN110032630A (en) Conversational script recommendation apparatus, method, and model-training device
CN110489565B (en) Method and system for designing object root type in domain knowledge graph body
CN113032552B (en) Text abstract-based policy key point extraction method and system
CN110705490B (en) Visual emotion recognition method
CN113076483A (en) Extractive summarization method for public-opinion news based on case-element heterogeneous graphs
CN110472245A (en) A multi-label emotion intensity prediction method based on hierarchical convolutional neural networks
CN114339450A (en) Video comment generation method, system, device and storage medium
CN113051887A (en) Method, system and device for extracting announcement information elements
CN112287240A (en) Case microblog evaluation object extraction method and device based on double-embedded multilayer convolutional neural network
CN111444720A (en) Named entity recognition method for English text
CN114661951A (en) Video processing method and device, computer equipment and storage medium
CN113312924A (en) Risk rule classification method and device based on NLP high-precision analysis label
CN113204624A (en) Multi-feature fusion text emotion analysis model and device
CN112949284B (en) Text semantic similarity prediction method based on Transformer model
Oyama et al. Visual clarity analysis and improvement support for presentation slides
CN116977992A (en) Text information identification method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant