CN109376775B - Online news multi-modal emotion analysis method


Info

Publication number: CN109376775B
Application number: CN201811181032.0A
Authority: CN (China)
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109376775A
Inventors: 张莹, 郭文雅, 蔡祥睿, 赵雪, 袁晓洁
Original and current assignee: Nankai University
Prior art keywords: news, image, emotion, text

Application filed by Nankai University; priority to CN201811181032.0A.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F18/24 Classification techniques
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features


Abstract

The invention provides a multi-modal emotion analysis method for online news. The method jointly exploits the text and image content of online news to construct a multi-modal deep learning model, fully fusing the features of the different data modalities in order to analyze and predict the emotion a reader feels after reading the news. On real news data containing images, the method significantly outperforms emotion analysis models that ignore image information, confirming that the images in a news item contribute to the explanation of the whole news event and influence the reader's emotional response.

Description

Online news multi-modal emotion analysis method
Technical Field
The invention belongs to the field of artificial intelligence, and in particular relates to a method that uses multi-modal data, such as the text content and image content of online news, to analyze and predict the emotion of a reader after reading a news item.
Background
Online news services, a mainstream social medium, have gradually become a new form of information delivery, attracting hundreds of millions of readers every day; after reading a news item, readers tend to develop a positive, negative, or neutral emotional tendency. Effectively analyzing this emotion helps online news providers offer better services to their users, and also helps governments track public sentiment in time and supervise internet content effectively.
With the development of internet technology, more and more online news items contain one or more images, and these images play a very important role in describing the whole news event. On the one hand, images in news are more intuitive than textual descriptions and present news events to readers more vividly; on the other hand, the choice of accompanying images influences the emotional tone of the whole news item, shapes the reader's understanding of the news to a certain extent, and deepens or alters the reader's emotional tendency.
Existing research methods focus only on the text content of news, using textual features to analyze the semantic and emotional information contained in the text and to predict readers' emotional tendencies; however, a large body of research shows that image and audio data also carry emotional features.
Multi-modal emotion analysis extends text-based emotion analysis to text, image, and audio data, analyzing the relationships among different modalities to obtain the emotional tendency they jointly produce. Existing multi-modal emotion analysis methods mainly target video files: they extract features such as characters' lines, posture changes, and tone of voice, and comprehensively analyze the emotional tendencies of the characters using the text, image, and sound data in the video, where the data of the different modalities have a one-to-one temporal correspondence.
Online news is composed differently from video files. The text and images in a news body have a particular positional relationship: an image at the head of a news item is usually related to the most salient content and expresses the news body as a whole, while images interspersed in the body are more related to the semantics of the surrounding text. The positional relationship among the images is consistent with the semantic and logical structure of the news text, and the positional relationship between images and text strongly influences how the whole news event is conveyed. Existing multi-modal analysis methods cannot model this particular data structure of online news and therefore struggle to achieve an ideal emotion analysis effect.
In conclusion, multi-modal emotion analysis of online news is a novel research problem with important research significance and application value.
Disclosure of Invention
The invention aims to solve the problem that existing online news emotion analysis methods do not consider image information, and provides a multi-modal emotion analysis method for online news.
Given that more and more online news contains image information, the invention innovatively proposes a deep-learning-based multi-modal emotion analysis method for online news that improves the emotion analysis effect.
The details of the online news multi-modal emotion analysis method provided by the invention are as follows:
Step 1: Data preprocessing
Collect online news data and assign each news item to a positive, negative, or neutral emotion category. Process the original online news with images into a uniform format to ensure that the following steps run smoothly. Store news text and images separately, preserving information such as the original position of each image in the news and the image summaries.
Step 2: Multi-modal feature extraction
Extract features of the news text and images, and use a feature fusion method to obtain features that represent the news text and image information. The invention proposes a series of deep neural network models, from shallow to deep, that progressively incorporate news image content and progressively improve the reader emotion analysis effect.
2.1 Methods that do not consider the position of images in the news text
(1) The multi-modal concatenation model (MC) uses a classical LSTM to extract semantic features of the sentences and images in the news separately, then concatenates the text and image feature vectors to obtain a single vector as the multi-modal feature of the whole news item.
(2) The LSTM-based multi-modal emotion analysis model (MLSTM) retains the positional relationship among news images: the text and image feature sequences are processed by the same LSTM model, and the last hidden layer output serves as the multi-modal feature of the news.
2.2 Method based on position indexing
Each image is regarded as a special sentence. The news is split into sentences, and the sentences and images are numbered uniformly according to the preprocessed formatted data; the position index is appended as the last element of each image and sentence feature vector. These position-augmented features are fed into the MC and MLSTM models of step 2.1 to obtain the feature vector of the whole news item. This position-index-based approach helps the deep learning model consider the positional information between text and images in the news in a relatively simple manner.
2.3 Ordered multi-modal feature fusion method
The news text and image features are reordered into their positional relationship in the original online news, an LSTM extracts features from the ordered set, and the last hidden layer output serves as the feature representation of the whole news item. This ordered processing preserves the interaction between each image and its context and simulates a reader's behavior of reading the news sequentially from top to bottom. The resulting news representation contains not only the semantic features of the text and images but also the effect of their positional relationship, expressing the semantic and structural characteristics of the whole news item more completely.
Step 3: Emotion inference
Definition 1: the set of emotion category labels is

E_e = \{e_{pos}, e_{neg}, e_{neu}\}

where e_{pos}, e_{neg}, and e_{neu} represent the positive, negative, and neutral emotion categories, respectively.
According to the extracted news features, each news item is classified into one of the emotion categories in the label set.
The advantages and positive effects of the invention are:
The invention innovatively provides a multi-modal emotion analysis method for online news with images. It takes into account both the content and the position of news images, constructs a deep learning model that exploits the positional relationship between text and images, models the semantic relationship between them, extracts fully fused multi-modal features that realize the semantic synergy of the different data modalities, and analyzes readers' emotional tendencies. The method is the first to attend to the influence of news images on readers' emotional tendencies, and it effectively improves the effect of online news emotion analysis.
Drawings
FIG. 1 is a schematic diagram of the online news multi-modal emotion analysis process.
FIG. 2 is a schematic diagram of the multi-modal emotion analysis framework.
FIG. 3 shows the statistics of emotion categories in the news data set with images.
FIG. 4 shows the statistics of the number of images per online news item.
FIG. 5 is a schematic diagram of the multi-modal concatenation model.
FIG. 6 is a schematic diagram of the LSTM-based multi-modal emotion analysis model.
FIG. 7 is a schematic diagram of the ordered multi-modal emotion analysis model.
Detailed Description
The invention provides an online news multi-modal emotion analysis method; its main process is shown in FIG. 1.
The implementation of the invention is divided into three stages, as shown in FIG. 2. The three stages are described in detail below.
Step 1: Data preprocessing
Prepare an online news data set with images and label each news item with a positive, negative, or neutral emotion category; for example, 4485 online news items from Daily Post news, each containing at least one image. The emotion label and image count statistics of this data set are shown in FIG. 3 and FIG. 4.
In the data preprocessing stage, the original online news with images is processed into the uniform format of one text plus several images. The specific preprocessing steps are as follows: for a given online news item, store the text and image data separately; assign a timestamp to each news image as its unique identifier; store each news text in a separate text file, preserving the original order of the sentences and marking each image's identifier between the sentences. In this way, an image position dictionary is maintained that stores the position of each image in the news. An online news item is thus represented as a text with image tags plus several images with unique identifiers.
Take an online news item in the data set containing four images as an example. In the preprocessing stage, four non-repeating timestamps are used as the names of the four images; the sentences of the news are stored in a text file in the order of the news text; and at each image's position in the news, a line beginning with "##" followed by the image's timestamp marks the image, so that in subsequent processing the correct image can be indexed from the text content.
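For concreteness, here is a minimal sketch of how such a preprocessed news file might be read back; the function name, the exact "##" marker handling, and the dictionary layout are illustrative assumptions consistent with the description above, not part of the claimed method.

```python
# Illustrative sketch: parse a preprocessed news file in which each line is
# either a sentence or an image marker line of the form "## <timestamp>".
def parse_news_file(path):
    sentences = []          # news sentences in their original order
    image_positions = {}    # image timestamp -> index of the next sentence
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith("##"):
                timestamp = line[2:].strip()
                # the image sits at this point of the sentence sequence
                image_positions[timestamp] = len(sentences)
            else:
                sentences.append(line)
    return sentences, image_positions
```

The returned image_positions dictionary plays the role of the image position dictionary used later by the ordered fusion model.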
Step 2: Multi-modal feature extraction
News images help explain news events and thereby influence the reader's emotion after reading, so it is necessary to consider news image information in the emotion analysis of online news. The feature extraction stage is the core of the whole framework: it extracts and fuses text and image features to obtain a feature representation of the whole news body. A series of LSTM-based deep neural network models is presented to accomplish this task.
Before feature fusion, news text and image features are extracted with classical neural network models. Following the "word - sentence - document" hierarchy of text, words are taken as input to extract the semantic representation of each sentence, and sentences then serve as the basic units of the news text in the subsequent feature fusion process. Sentence-level semantic feature extraction is performed by a classical LSTM model. A sentence in the news is represented as

s = \{\omega_1, \omega_2, \ldots, \omega_l\}

where l is the number of words in the sentence and \omega_t is the feature vector of the t-th word, obtained by pre-training a word embedding on the data set. The LSTM extracts the sentence features, and the output h_l of its last hidden layer participates in the subsequent processing steps as the feature vector of sentence s, denoted h_s. The sentence feature sequence of an online news item can then be expressed as

h = \{h_{s_1}, h_{s_2}, \ldots, h_{s_{ns}}\}

where ns is the number of sentences in the news text.
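As an illustration of this step, a minimal sketch of the sentence encoder follows (PyTorch assumed; the embedding and hidden dimensions, 300 and 256, are illustrative since the patent does not fix them):

```python
# Minimal sketch: encode a sentence of word embeddings with an LSTM and take
# the last hidden state as the sentence feature h_s.
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, words):            # words: (batch, l, embed_dim)
        _, (h_n, _) = self.lstm(words)   # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)            # h_s: (batch, hidden_dim)
```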
Each image is regarded as a special sentence in the news. A pre-trained 19-layer VGGNet is used as the feature extraction model, and the 4096-dimensional output v_t of its penultimate fully connected layer (fc-7) is taken as the raw image feature. A transformation matrix W_{img} is applied to v_t to obtain the image feature vector

p_t = v_t \times W_{img}

and the sequence of image features in an online news item is

p = \{p_1, p_2, \ldots, p_{ni}\}

where ni is the number of images in the online news.
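A minimal sketch of this image pipeline follows (PyTorch and torchvision assumed; the projection dimension of 256 and the helper name are illustrative assumptions):

```python
# Minimal sketch: take the 4096-d fc-7 activation of a pre-trained VGG-19
# and project it with a learned transformation matrix W_img.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
# keep the classifier only up to fc-7 (drop the final 4096 -> 1000 layer)
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

W_img = nn.Linear(4096, 256, bias=False)   # p_t = v_t x W_img

def image_feature(image_tensor):           # image_tensor: (1, 3, 224, 224)
    with torch.no_grad():
        v_t = vgg(image_tensor)            # (1, 4096) fc-7 feature
    return W_img(v_t)                      # (1, 256) image feature p_t
```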
1. Multi-modal feature fusion without considering image position
Ignoring the positional relationship between the text and images in the news, the two kinds of features are simply concatenated and the result is taken as the feature representation of the news; this model is called the multi-modal concatenation model (MC), whose main structure is shown in FIG. 5.
For the text content of the news, the sentence feature sequence h is taken as input and passed through an LSTM, and the last hidden layer output is taken as the feature representation of the whole news text, denoted feature_{text}. As with the sentences, the images of a news item follow a certain logical order; to preserve this order, the news image feature sequence p is fed into a second LSTM, and its last hidden layer output is taken as the news image feature representation, denoted feature_{image}. In this model, the feature of the whole news item can be represented as

z = feature_{text} \oplus feature_{image}

where \oplus is the vector concatenation operation.
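A minimal sketch of the MC model under these definitions (PyTorch assumed; dimensions are illustrative):

```python
# Minimal sketch of the MC model: two separate LSTMs over the sentence and
# image feature sequences, followed by vector concatenation.
import torch
import torch.nn as nn

class MC(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        self.text_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.image_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, h, p):  # h: (batch, ns, feat_dim), p: (batch, ni, feat_dim)
        _, (f_text, _) = self.text_lstm(h)    # feature_text
        _, (f_image, _) = self.image_lstm(p)  # feature_image
        # z = feature_text concatenated with feature_image
        return torch.cat([f_text.squeeze(0), f_image.squeeze(0)], dim=-1)
```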
Although the MC model preserves the logical relationships among sentences and among images, it feeds the text and the images of the news into different LSTM models and completely ignores the relationship between sentences and images. To address this problem, an LSTM-based multi-modal feature fusion model (MLSTM) is further proposed, whose main structure is shown in FIG. 6.
In MLSTM, the same LSTM processes both the images and the sentences of the news, taking the interaction between text and images into account, and the last hidden layer output again serves as the feature vector of the whole news item. Since the LSTM is a recurrent neural network with memory, features attenuate over the sequence: the closer an element is to the end of the input sequence, the greater its impact on the resulting news feature. There are therefore two ways to combine the image and sentence feature sequences, with the text features placed either before or after the image features. The following description uses the variant in which the images are input first. The input sequence of the LSTM can be expressed as

input = \{p, h\} = \{p_1, \ldots, p_{ni}, h_{s_1}, \ldots, h_{s_{ns}}\}

After feeding this sequence into the LSTM, the feature vector of the whole news item can be represented as

z = f_n

where f_n is the output vector of the last hidden layer of the LSTM and n is the total number of sentences and images in the news.
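For comparison with MC, a minimal sketch of the MLSTM variant (PyTorch assumed; the images-first input order described above):

```python
# Minimal sketch of the MLSTM model: one shared LSTM over the concatenated
# image-then-sentence feature sequence; its last hidden state is z = f_n.
import torch
import torch.nn as nn

class MLSTM(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, h, p):                # images first, then sentences
        seq = torch.cat([p, h], dim=1)      # input = {p, h}
        _, (f_n, _) = self.lstm(seq)
        return f_n.squeeze(0)               # z = f_n
```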
2. Multi-modal feature fusion with position indices
Besides their content, news images also relate to the semantic information of their context, so position-indexed features are adopted to strengthen the ability of the multi-modal fusion models to capture image position information.
Specifically, the sentences and images are numbered jointly within the news body, and these numbers serve as position indices that reflect the positional relationship between images and sentences. The position index is appended to the sentence feature h_t and the image feature p_t, yielding position-augmented feature vectors h_t' and p_t':

h_t' = [h_t; i_{h_t}]
p_t' = [p_t; i_{p_t}]

where i_{h_t} and i_{p_t} are the position numbers of the t-th sentence and the t-th image in the news body, and h' and p' denote the resulting position-augmented sentence and image feature sequences, respectively.
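A minimal sketch of this position-augmentation step (PyTorch assumed; the patent does not fix whether numbering starts at 0 or 1, so 0-based indices are an assumption here):

```python
# Minimal sketch: append each sentence's/image's joint position number in the
# news body as the last element of its feature vector.
import torch

def add_position_index(features, positions):
    # features: (n, feat_dim); positions: joint position numbers, one per row
    idx = torch.tensor(positions, dtype=features.dtype).unsqueeze(1)  # (n, 1)
    return torch.cat([features, idx], dim=1)  # e.g. h_t' = [h_t; i_ht]
```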
The MC and MLSTM models are upgraded with the position-augmented feature vectors; the new models are named MPC and MPLSTM, respectively. In the MPC model, the news text and image features obtained are feature_{text}' and feature_{image}', and the news feature is

z = feature_{text}' \oplus feature_{image}'

Clearly, the MPC model has certain problems. On the one hand, it feeds the position-indexed features into separate LSTMs, so neither LSTM can recognize the positional relationship between images and sentences; the position indices thus lose their intended effect, and the positional relationship between images and text goes unused. On the other hand, the concatenation of feature_{text}' and feature_{image}' also fails to reflect that positional relationship. The MPC model therefore struggles to model the image-text positional relationship.
In the MPLSTM model, the LSTM input is

input' = \{p', h'\}

and the model learns the news structure information contained in the position indices. The online news feature is represented as

z = f_n'

where f_n' is the output of the last hidden layer of the LSTM and n is the total number of sentences and images in the online news.
3. Ordered multi-modal feature fusion method
The position-indexed models use only a single index element to represent the position information of each news image. During news feature extraction it is difficult to guarantee that the LSTM can precisely single out this element and understand the meaning it represents, and the unordered input of the sentence and image features weakens the effect of the position indices in h' and p'.
To model the position information of news images more accurately, the invention proposes an ordered multi-modal neural network (MSNN) model. During feature extraction, the image and text features are restored to the original image-text order of the news body according to the image position dictionary obtained in preprocessing, so that the resulting news features carry both the semantic information and the structural information of the news. The structure of the MSNN model is shown in FIG. 7.
In MSNN, the position index vector of the images and sentences can be expressed as

I = [i_{p_1}, i_{p_2}, \ldots, i_{p_{ni}}, i_{h_1}, i_{h_2}, \ldots, i_{h_{ns}}]

where i_{p_t} is the position of the t-th image in the news, i_{h_t} is the position of the t-th sentence, and ni and ns are the numbers of images and sentences in the news, respectively. The images and sentences are reordered using the position index vector I. First, I is transformed as follows:

W_{index} = g(I)
W_{sort} = W_{index}^T
input_{sort} = W_{sort} \times features

where g(\cdot) is the one-hot operation and features is the concatenation of the original feature vectors, i.e., features = p \oplus h. Thus input_{sort} is the reordered feature sequence, consistent with the structure of the news; this ordered sequence is fed into the LSTM structure to generate the features of the whole news,

z = f_n

where f_n is the output vector of the last hidden layer of the LSTM and n is the total number of sentences and images in the news.
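A minimal sketch of the reordering step defined by the three equations above (PyTorch assumed; 0-based positions are an assumption):

```python
# Minimal sketch: build the permutation matrix W_sort from the one-hot encoded
# position index vector I and apply it to the stacked features (p then h).
import torch
import torch.nn.functional as F

def reorder_features(features, index_vector):
    # features: (n, feat_dim), rows stacked as images followed by sentences
    # index_vector: [i_p1..i_pni, i_h1..i_hns], each a 0-based position
    I = torch.tensor(index_vector)
    W_index = F.one_hot(I, num_classes=len(index_vector)).float()  # g(I)
    W_sort = W_index.t()                                           # W_index^T
    return W_sort @ features            # input_sort: features in news order
```

Row j of the result is the feature whose joint position is j, so the sequence fed to the LSTM matches the top-to-bottom structure of the news body.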
Step 3: Emotion inference
In the emotion inference stage, the framework classifies the result of feature extraction into one of the emotion categories in the label set E_e. Each news item in the data set is represented as a feature vector z, and the invention uses a fully connected layer with parameter set W_d as the emotion inference layer. The likelihood function is

\hat{y} = f(W_d \cdot z)

where \hat{y} is the news emotion label inferred from z and f(\cdot) is the softmax function. The model is trained in a supervised manner, with the cross entropy between the true and predicted emotion distributions as the target loss function, updating the parameters of the whole emotion analysis model so that the final model can more accurately predict, from the text and image content of an online news item, the emotion a reader will feel after reading it.
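A minimal sketch of the inference layer and training objective (PyTorch assumed; the news feature dimension 256 is illustrative; nn.CrossEntropyLoss combines the softmax and cross entropy described above):

```python
# Minimal sketch: a fully connected emotion inference layer over the news
# feature z, trained with cross entropy against the true emotion label.
import torch
import torch.nn as nn

emotion_layer = nn.Linear(256, 3)      # W_d: z -> {positive, negative, neutral}
loss_fn = nn.CrossEntropyLoss()        # softmax + cross entropy in one call

def train_step(z, label, optimizer):   # z: (batch, 256), label: (batch,)
    logits = emotion_layer(z)          # scores before the softmax f(.)
    loss = loss_fn(logits, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```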

Claims (1)

1. An online news multi-modal emotion analysis method, specifically comprising the following steps:
Step 1: Data preprocessing
Collect online news data and assign each news item to a positive, negative, or neutral emotion category; process the original online news with images into a uniform format to ensure that the following steps run smoothly; store news text and images separately, preserving the original positions of the images in the news and the image summary information;
Step 2: Multi-modal feature extraction
Extract features of the news text and images, and obtain features representing the news text and image information with a feature fusion method;
(1) Method based on position indexing
Regard each image as a special sentence, split the news into sentences, number the sentences and images of the text uniformly according to the preprocessed formatted data, append the position index as the last element of each image and sentence feature, feed the position-augmented features into the multi-modal concatenation model and the LSTM-based multi-modal emotion analysis model, and finally obtain the feature vector of the whole news;
(2) Ordered multi-modal feature fusion method
Reorder the news text and image features into their positional relationship in the original online news, extract features from the ordered set with an LSTM, and output the last hidden layer as the feature representation of the whole news;
Step 3: Emotion inference
Definition 1: the set of emotion category labels is

E_e = \{e_{pos}, e_{neg}, e_{neu}\}

where e_{pos}, e_{neg}, and e_{neu} represent the positive, negative, and neutral emotion categories, respectively;
classify the news into one of the emotion categories in the label set according to the extracted news features.
Priority Applications (1)

CN201811181032.0A (filed 2018-10-11): Online news multi-modal emotion analysis method

Publications (2)

CN109376775A, published 2019-02-22
CN109376775B (grant), published 2021-08-17

Family

ID=65402812; one family application: CN201811181032.0A (Nankai University, status Active); country: CN



Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant