CN111414478B - Social network emotion modeling method based on deep recurrent neural network - Google Patents

Social network emotion modeling method based on deep recurrent neural network

Info

Publication number
CN111414478B
CN111414478B (Application CN202010174687A)
Authority
CN
China
Prior art keywords
emotion
model
user
social network
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010174687.6A
Other languages
Chinese (zh)
Other versions
CN111414478A (en)
Inventor
王晓慧
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB
Priority to CN202010174687.6A
Publication of CN111414478A
Application granted
Publication of CN111414478B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/35: Clustering; Classification
    • G06F 16/353: Clustering; Classification into predefined classes
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a social network emotion modeling method based on a deep recurrent neural network, comprising the following steps: processing heterogeneous social network data based on an attention model; constructing a deep-LSTM-based long-term memory model, which involves constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models; and inputting the processed data into the constructed deep-LSTM-based long-term memory model, which outputs the emotional states of users in the social network at different moments. Unlike traditional social network emotion computation, the method removes the dependence on manual assumptions and hand-built models, automatically extracts relevant features, and establishes a relational model of emotion transfer and influence change, avoiding deviations between hand-crafted models and reality and enhancing the generalization capability of the system.

Description

Social network emotion modeling method based on deep recurrent neural network
Technical Field
The invention relates to the technical field of multimodal affective computing, in particular to a social network emotion modeling method based on a deep recurrent neural network.
Background
With the development of the Internet, understanding human emotion through social networks has become a research hotspot across multiple disciplines such as sociology, psychology and computer science; it is also a core problem of affective computing and has important research significance. Media outlets push all kinds of information, including news, on social networks; if a person's emotions can be accurately analyzed and understood, intelligent recommendation through a social platform can be accurately personalized. Social network emotion research therefore has important practical value.
Existing studies have analyzed users' emotional states based on their behavior in social networks, such as microblogs, geographic locations and phone records, and have predicted users' personalities based on social network structure and user behavior. Research based on Facebook data suggests that users' emotions on social networks are closely related to their social activities and interactions. Sociological studies have shown that emotion spreads within crowds because of human empathy, i.e. how you feel depends on whom you are in contact with and how closely those people relate to you. Researchers have also built influence models and influence-propagation models based on users' tags, microblogs, published articles and other Internet data.
Deep learning has made rapid progress in recent years. Compared with earlier shallow learning, its outstanding characteristics are that it can process various heterogeneous data with a relatively uniform structure, its input and output formats are flexible, and features need not be screened manually. Hidden variables in the upper layers can be mapped to different spaces, which means that a neural network trained for one class of tasks can easily be adapted to similar tasks. These characteristics make deep neural networks well suited to processing the many kinds of heterogeneous data in a social network, and thus to modeling the mapping from heterogeneous data to emotion. Besides conventional static neural networks, recurrent neural networks (Recurrent Neural Network, RNN) have received increasing attention in recent years. Compared with a static neural network, an RNN adds partial cyclic feedback, i.e. a state-memory function: when processing the current input it also takes the historical state as input, memorizing historical information in a way that more closely resembles the human brain. However, conventional RNNs perform poorly on long-range dependencies; for example, the probability of a word appearing at a given position in natural language may be closely related to words far away. The advent of LSTM (Long Short-Term Memory) solved this problem: it is an RNN variant that handles long-range dependencies effectively through a well-designed cell-state transfer mechanism.
In recent years there has been some work on processing audio and video emotion with neural networks, such as using deep learning to handle emotion changes in video; however, such work only considers inter-frame relationships on top of traditional categorical emotion recognition, using deep learning merely to supplement certain steps. Shizhe Chen and Qin Jin proposed emotion modeling by directly applying a classical LSTM on top of various features, but the model structure is relatively simple and the various features must be extracted manually. Compared with a social network, plain audio and video data are far more regular, and fewer factors need to be considered in their analysis.
The new deep belief network training method proposed by Geoffrey Hinton and Simon Osindero in 2006 opened the prologue to the rapid development of deep learning. Compared with earlier shallow learners, deep learning has superior feature-learning ability, characterizes data more essentially, and can learn more complex concepts. Many feature-extraction steps that previously required manual coding are completely replaced by homogeneous network structures in deep learning, greatly reducing the difficulty of developing new algorithms for specific tasks. Studies by Alex Krizhevsky et al. show that, given sufficient training, deep networks often extract better feature representations than careful manual design. Research on deep learning continues to heat up in academia and has been boosted by the participation of large enterprises such as Google and Microsoft; famous examples include the Google Brain project and Microsoft's 152-layer residual network. More recently, DeepMind used deep neural networks to build an AI that automatically learns to play video games from raw pixel input, without manual labeling, eventually surpassing human players; and AlphaGo achieved a high-quality Go AI using deep networks, defeating the European champion 5:0, a great breakthrough in a field where computers were traditionally considered unable to defeat humans.
Taking emotional cognition in social networks as its starting point, the present method establishes a social network emotion analysis model based on a recurrent neural network: it takes as input social network data comprising heterogeneous data such as text, images, videos and network relations, and outputs the emotional states of users in the social network at different moments.
Disclosure of Invention
The invention aims to provide a social network emotion modeling method based on a deep recurrent neural network, which predicts users' emotional states at different moments from heterogeneous data such as text, images, videos and network relations in the social network, solves key problems of social network emotion computation, and can provide a model foundation for applications such as intelligent advertising and recommendation. In addition, the method organically combines social networks, affective computing, deep learning, memory neural networks and attention models; unlike most existing research on social network emotion, it does not require manually built models of emotion change and influence, avoids many prior assumptions, and has natural advantages in generality and accuracy.
In order to solve the technical problems, the embodiment of the invention provides the following scheme:
a social network emotion modeling method based on a deep cyclic neural network comprises the following steps:
processing the social network heterogeneous data based on the attention model;
constructing a deep-LSTM-based long-term memory model, comprising: constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
and inputting the processed data into the constructed deep-LSTM-based long-term memory model, which outputs the emotional states of the user at different moments in the social network.
Preferably, the social network heterogeneous data comprises text, images, audio, video and network relations in the social network.
Preferably, the step of processing the heterogeneous social network data based on the attention model includes:
using the attention model to extract, on demand and according to the current state, information that follows the importance distribution from the heterogeneous social network data, which involves:
generating an importance distribution over all heterogeneous data by combining the user emotion state vector with coarse data representations and performing sparse sampling, where the coarse data representations include vectorized representations of tags, titles and thumbnails;
vectorizing the extracted information and generating a compact vector representation as input to subsequent models.
Wherein, for images, an AutoEncoder is used to generate the compact vector representation;
for audio, an LSTM-based AutoEncoder is used to generate the compact vector representation;
for video, single frames are processed with an AutoEncoder and the resulting sequence is then processed with the audio method;
for text, word vectors are used for the representation.
Preferably, the step of constructing a generalized deep neural network residual structure includes:
adding direct paths from the input end to internal nodes on top of the basic deep neural network structure;
short-circuiting across arbitrary nodes.
Preferably, the step of constructing the deep LSTM model includes:
constructing an emotion change time sequence model and an influence correlation time sequence model;
the transfer relationship of each variable is as follows:
wherein,to activate the function, the result takes the value of [0,1],X t+1 ,R t+1 Z as input processed data t+1 ,r t+1 For the last state H t Through->Two activation amounts generated, ++>Is a deep neural network generalized by->The new intermediate state is generated.
Preferably, the step of constructing a deep recurrent neural network fusing the multiple LSTM models includes:
the method comprises the steps of representing by adopting a classical RNN time sequence data stream, modeling based on an emotion change time sequence model and an influence related time sequence model, and simultaneously, the emotion change time sequence model and the influence related time sequence model also depend on the state of the other party as input;
wherein modeling and prediction are performed with the following parameters:

X and I denote observation data and processed data respectively; H, A and R denote state vectors; f denotes the various mapping functions; θ is a model parameter. X^(m)_{i,t} denotes the m-th class of observation data of user i at time t; I_{i,t} denotes the summary vector of the observation data of user i at time t, i.e. the output of f_{AT}; R_{ij,t} denotes the interaction state vector between user i and friend j at time t;
the emotion state vector H_{i,t} of user i at time t is mapped through an output layer into comprehensible information E_{i,t} covering basic emotions such as joy, anger and sorrow; this mapping is denoted by a function f_E and implemented with the deep neural network residual structure. A_{ij,t} denotes the influence state vector of friend j on user i at time t; Ã_{i,t} denotes the aggregation vector of the influence of friends on user i at time t;
f_A infers the next pairwise influence state vector from the past pairwise influence state vectors, the current pairwise emotion state vectors and the current interaction state vector; f_{AT} is the attention model, which cooperates with f_A for information screening; Ã_{i,t} integrates the influence A_{ij,t} of the other users j in Nei(u_i) on user i, where Nei(u_i) denotes the users associated with user i in the social network; f_H takes as input the current emotion state vector of user i, the current behavior state vector and the state vector of influence from other users, and predicts the emotion state vector of user i at the next moment.
The scheme of the invention at least comprises the following beneficial effects:
the invention fully combines the advantages of deep learning processing of various heterogeneous data and fully simulates the cognitive characteristics of human brain on memory, thereby providing a new thought for processing emotion analysis problems. Different from the traditional social network emotion calculation, the method gets rid of dependence on artificial assumption and modeling, automatically extracts relevant features, establishes a relation model of emotion transfer and influence change, avoids deviation between the artificial model and actual conditions, and enhances popularization capability of the system. The method is not dependent on a specific emotion space, and retraining is not needed in the occasion of switching emotion spaces or adding new emotion types, so that the cost is reduced. Meanwhile, the method is not limited by a small amount of static data, and massive dynamic social network data are automatically screened, downloaded and incrementally learned, so that deviation caused by manual steps is avoided. In addition, the emotion space is directly utilized to process multi-class emotion problems, and the multi-class emotion problems do not need to be split into a plurality of two-class emotion problems to be indirectly processed.
Drawings
FIG. 1 is a flow chart of the social network emotion modeling method based on a deep recurrent neural network provided by an embodiment of the invention;
FIGS. 2a and 2b are schematic diagrams of a basic deep neural network structure and a deep neural network residual structure in an embodiment of the present invention, respectively;
FIG. 3 is a schematic diagram of the deep LSTM model in an embodiment of the invention;
FIG. 4 is a schematic diagram of a deep recurrent neural network incorporating multiple LSTM models in an embodiment of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages to be solved more apparent, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a social network emotion modeling method based on a deep recurrent neural network, which, as shown in FIG. 1, comprises the following steps:
processing the social network heterogeneous data based on the attention model;
constructing a deep-LSTM-based long-term memory model, comprising: constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
and inputting the processed data into the constructed deep-LSTM-based long-term memory model, which outputs the emotional states of the user at different moments in the social network.
The invention fully exploits the advantages of deep learning in processing diverse heterogeneous data and closely simulates the memory-related cognitive characteristics of the human brain, providing a new approach to emotion analysis. Unlike traditional social network emotion computation, the method removes the dependence on manual assumptions and hand-built models, automatically extracts relevant features, and establishes a relational model of emotion transfer and influence change, avoiding deviations between hand-crafted models and reality and enhancing the generalization capability of the system.
The method does not depend on a specific emotion space, so no retraining is needed when switching emotion spaces or adding new emotion types, which reduces cost. It is also not limited to a small amount of static data: massive dynamic social network data are automatically screened, downloaded and learned incrementally, avoiding deviations introduced by manual steps. In addition, multi-class emotion problems are handled directly in the emotion space, without being split into multiple two-class emotion problems and handled indirectly.
Further, the heterogeneous social network data comprises text, images, audio, video, network relations and the like in the social network. With the heterogeneous social network data as input, the established model can output the emotional states of the user at different moments.
Further, the step of processing the heterogeneous social network data based on the attention model comprises the following steps:
using the attention model to extract, on demand and according to the current state, information that follows the importance distribution from the heterogeneous social network data, which involves:
generating an importance distribution over all heterogeneous data by combining the user emotion state vector with coarse data representations and performing sparse sampling, where the coarse data representations include vectorized representations of tags, titles and thumbnails;
vectorizing the extracted information and generating a compact vector representation as input to subsequent models.
The heterogeneous social network data contains various kinds of information such as pictures, audio, video and text, which must first be screened, collected and normalized. In the embodiment of the invention, an importance distribution over all data is generated by combining the user emotion vector with coarse data representations, and sparse sampling is performed accordingly; the importance distribution itself is obtained through deep network modeling. In addition, for large media data such as pictures and videos, parts of the content are selectively skipped according to the importance distribution, saving resources. The information is then vectorized and spliced into a compact vector representation.
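As a minimal illustrative sketch of this screening step (not the patent's implementation, which learns the distribution with a deep network), the following assumes a simple dot-product score between the user emotion state vector and each item's coarse representation, normalized by a softmax, followed by importance-biased sampling without replacement; all function names are hypothetical:

```python
import math
import random

def importance_distribution(emotion_state, coarse_reprs):
    # Score each heterogeneous item against the user's emotion state vector
    # (dot product as a stand-in for the deep scoring network), then softmax.
    scores = [sum(e * c for e, c in zip(emotion_state, r)) for r in coarse_reprs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_sample(items, probs, k, rng=random):
    # Draw k distinct items, biased toward high importance; low-importance
    # items (e.g. large videos) are likely to be skipped, saving resources.
    chosen, pool = [], list(zip(items, probs))
    for _ in range(min(k, len(pool))):
        remaining = sum(p for _, p in pool)
        r, acc = rng.random() * remaining, 0.0
        for idx, (item, p) in enumerate(pool):
            acc += p
            if acc >= r:
                chosen.append(item)
                pool.pop(idx)
                break
    return chosen
```

For example, with an emotion state of [1.0, 0.0], an item whose coarse representation aligns with that state receives the largest sampling probability.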
Specifically, for images an AutoEncoder is used to generate the compact vector representation; for audio an LSTM-based AutoEncoder is used; for video, single frames are processed with an AutoEncoder and the resulting sequence is then processed with the audio method; for text, word vectors are used. Outputs of other weak classifiers may also be incorporated to take full advantage of previous research results.
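A sketch of how the per-modality codes could be spliced into one compact input vector. The encoders below are deliberately trivial stand-ins (mean pooling and word-vector averaging) for the AutoEncoder, LSTM AutoEncoder and word-vector models named above; the function names and the 4-dimensional code size are assumptions:

```python
def encode_image(pixels, dim=4):
    # Stand-in for an AutoEncoder bottleneck: mean-pool the pixels into `dim` chunks.
    n = max(1, len(pixels) // dim)
    return [sum(pixels[i:i + n]) / n for i in range(0, n * dim, n)]

def encode_frames(frames, dim=4):
    # Stand-in for an LSTM AutoEncoder: encode each frame, then average over time.
    codes = [encode_image(f, dim) for f in frames]
    return [sum(col) / len(codes) for col in zip(*codes)]

def encode_text(tokens, word_vectors, dim=4):
    # Word-vector averaging; unknown tokens map to the zero vector.
    vecs = [word_vectors.get(t, [0.0] * dim) for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def compact_representation(image, frames, tokens, word_vectors):
    # Splice the per-modality codes into a single compact vector for the model.
    return (encode_image(image) + encode_frames(frames)
            + encode_text(tokens, word_vectors))
```

In the actual method each encoder would be trained, and the spliced vector would feed the subsequent deep LSTM model.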
Further, the step of constructing the generalized deep neural network residual structure comprises the following steps:
adding direct paths from the input end to internal nodes on top of the basic deep neural network structure;
short-circuiting across arbitrary nodes.
The basic deep neural network is the foundation of all structural units. FIG. 2a is a schematic diagram of the basic deep neural network structure, formed by alternately connecting convolutional layers (CONV), activation layers (RELU), fully connected layers and the like. FIG. 2b is a schematic diagram of the deep neural network residual structure in an embodiment of the present invention, which designs a more flexible data path: on the basis of the original deep neural network structure it short-circuits arbitrary parts and adds direct paths from the input end to internal nodes, reducing the error accumulated across levels. The layout of the hidden layers and the placement of the short-circuit edges are designed and studied, and the learning of functions is converted into the learning of residuals, greatly improving the learning efficiency of the deep network. Experiments show that optimized short-circuit connections can achieve stronger expressive power than regular ones, effectively avoiding the vanishing-gradient problem during training.
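The flexible shortcut wiring can be sketched as follows; `layers` are arbitrary per-layer functions and `shortcuts` maps a layer index to the index of an earlier activation (with -1 denoting the raw input) whose values are added in before that layer. This is an illustrative scheme, not the patent's exact layout:

```python
def residual_forward(x, layers, shortcuts):
    # Forward pass through a chain of layers with short-circuit edges:
    # each shortcut lets a layer fit a residual on top of an earlier signal.
    activations = [x]
    for i, layer in enumerate(layers):
        h = activations[-1]
        if i in shortcuts:
            src = shortcuts[i]
            skip = x if src == -1 else activations[src]
            h = [a + b for a, b in zip(h, skip)]  # add the short-circuit edge
        activations.append(layer(h))
    return activations[-1]
```

With `shortcuts={1: -1}` the raw input is added back in before the second layer, so that layer effectively learns a residual with respect to the input.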
Further, the step of constructing the deep LSTM model includes:
and constructing an emotion change time sequence model and an influence correlation time sequence model.
Taking the user's emotion changes and influence-change relations as core modules forms the basis of the whole time sequence network; the deep LSTM module is designed by combining LSTM's advantages of long-range correlation and ease of training with the strong expressive power of deep networks.
FIG. 3 is a schematic diagram of the deep LSTM model according to an embodiment of the present invention, where the transfer relationships of the variables are:

z_{t+1} = σ(f_z(X_{t+1}, R_{t+1}, H_t))
r_{t+1} = σ(f_r(X_{t+1}, R_{t+1}, H_t))
H~_{t+1} = f_{H~}(X_{t+1}, R_{t+1}, z_{t+1} ⊙ H_t)
H_{t+1} = r_{t+1} ⊙ H~_{t+1} + (1 - r_{t+1}) ⊙ H_t

where σ is the activation function, whose output takes values in [0,1]; compared with classical LSTM, the linear part is replaced by a deep residual neural network, making the state more compact. X_{t+1}, R_{t+1} are the processed input data; z_{t+1}, r_{t+1} are the two activation amounts generated from the last state H_t through f_z and f_r; and H~_{t+1} is the new intermediate state generated by the generalized deep neural network f_{H~}.
The inputs X_{t+1}, R_{t+1} (the post-processing observations) and the last state H_t pass through σ to become the two activation amounts z_{t+1} and r_{t+1}, which modulate, respectively, the contribution of the state H_t to the new intermediate state H~_{t+1} (generated by the deep network f_{H~}) and the contributions of the new intermediate state H~_{t+1} and the state H_t to the final new state.
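One state update of this gated unit can be sketched as below, with f_z, f_r and f_h passed in as stand-ins for the (residual) deep networks; operating on plain Python lists and treating concatenation as list addition are simplifications for illustration:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def deep_lstm_step(x, rel, h_prev, f_z, f_r, f_h):
    # x, rel: processed inputs X_{t+1}, R_{t+1}; h_prev: last state H_t.
    inp = x + rel + h_prev                       # concatenated inputs
    z = [sigmoid(v) for v in f_z(inp)]           # gates H_t's part of the intermediate state
    r = [sigmoid(v) for v in f_r(inp)]           # gates the final blend
    gated = x + rel + [zi * hi for zi, hi in zip(z, h_prev)]
    h_tilde = f_h(gated)                         # new intermediate state
    # blend intermediate state and old state into the final new state
    return [ri * ht + (1 - ri) * hp for ri, ht, hp in zip(r, h_tilde, h_prev)]
```

In the patent the gate networks f_z, f_r and the state network f_h would be deep residual networks rather than the toy callables shown here.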
Further, the step of constructing the deep recurrent neural network fusing the multiple LSTM models comprises the following steps:
the method is characterized in that classical RNN time sequence data flow is adopted for representation, modeling is carried out on the basis of an emotion change time sequence model and an influence related time sequence model, and meanwhile the emotion change time sequence model and the influence related time sequence model also depend on the state of the other party as input.
Specifically, as shown in FIG. 4, taking the relationship between user i and friend j as an example, a classical RNN time sequence data stream is used for the representation; the core functions of the emotion change time sequence model and the influence correlation time sequence model are f_H and f_A respectively, implemented by the deep LSTM model. FIG. 4 shows the transfer relationships of the data streams at two adjacent times t and t+1.
Here X and I denote observation data and processed data respectively; H, A and R denote state vectors; f denotes the various mapping functions; θ is a model parameter. X^(m)_{i,t} denotes the m-th class of observation data of user i at time t; I_{i,t} denotes the summary vector of the observation data of user i at time t, i.e. the output of f_{AT}; R_{ij,t} denotes the interaction state vector between user i and friend j at time t, for example when j leaves a message for i.
The emotion state vector H_{i,t} of user i at time t is mapped through an output layer into comprehensible information E_{i,t} covering basic emotions such as joy, anger and sorrow; this mapping is denoted by a function f_E and implemented with the deep neural network residual structure. H_{i,t} carries far richer information than E_{i,t}, and in some cases even contains the information of E_{i,t} within it. A_{ij,t} denotes the influence state vector of friend j on user i at time t; like H_{i,t}, it may also carry rich information, encoding not only the influence intensity at the current moment but possibly also earlier influence. Ã_{i,t} denotes the aggregation vector of the influence of friends on user i at time t.
f_A infers the next pairwise influence state vector from the past pairwise influence state vectors, the current pairwise emotion state vectors and the current interaction state vector. f_{AT} is the attention model, which cooperates with f_A for information screening. Ã_{i,t} integrates the influence A_{ij,t} of the other users j in Nei(u_i) on user i, where Nei(u_i) denotes the users associated with user i in the social network. f_H takes as input the current emotion state vector of user i, the current behavior state vector and the state vector of influence from other users, and predicts the emotion state vector of user i at the next moment.
In conclusion, the invention fully exploits the advantages of deep learning in processing diverse heterogeneous data and closely simulates the memory-related cognitive characteristics of the human brain, providing a new approach to emotion analysis. Unlike traditional social network emotion computation, the method removes the dependence on manual assumptions and hand-built models, automatically extracts relevant features, and establishes a relational model of emotion transfer and influence change, avoiding deviations between hand-crafted models and reality and enhancing the generalization capability of the system. The method does not depend on a specific emotion space, so no retraining is needed when switching emotion spaces or adding new emotion types, which reduces cost. It is also not limited to a small amount of static data: massive dynamic social network data are automatically screened, downloaded and learned incrementally, avoiding deviations introduced by manual steps. In addition, multi-class emotion problems are handled directly in the emotion space, without being split into multiple two-class emotion problems and handled indirectly.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (2)

1. A social network emotion modeling method based on a deep recurrent neural network, characterized by comprising the following steps:
processing the social network heterogeneous data based on the attention model;
constructing a deep-LSTM-based long-term memory model, comprising: constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
the step of constructing the generalized deep neural network residual structure comprises the following steps:
adding, on top of the basic deep neural network structure, a direct path from the input to an internal node;
short-circuiting arbitrary nodes;
the step of constructing the deep LSTM model comprises:
constructing an emotion change time sequence model and an influence correlation time sequence model;
the transfer relationship of each variable is as follows (the formulas themselves appear only as images in the source and are not reproduced here):
wherein the activation function produces results taking values in [0,1]; X_{t+1}, R_{t+1} are the processed input data; z_{t+1}, r_{t+1} are two activation quantities generated from the previous state H_t through the activation function; and the new intermediate state is generated by the generalized deep neural network;
the step of constructing the deep recurrent neural network fusing multiple LSTM models comprises:
representing the data as a classical RNN time-series data stream, and modeling based on the emotion-change time-series model and the influence-correlation time-series model, wherein each of the two models also depends on the state of the other as input;
wherein modeling and prediction are performed with the following quantities (the individual symbols appear only as images in the source):
X, I denote the observation data and the processed data respectively; H, A, R denote state vectors; f denotes the various mapping functions; and θ is a model parameter; one quantity denotes the d_m-class observation data of user i at time t; another denotes the summary vector of user i's observation data at time t, the output of f_AT; another denotes the interaction state vector between friend j of user i and user i at time t;
the emotion state vector of user i at time t is mapped by an output layer into comprehensible information covering joy, anger, sorrow and happiness; this process is represented by a function implemented with the deep neural network residual structure; a further quantity is the influence state vector of friend j of user i on user i at time t, and another is the aggregated vector of friend influences on user i at time t;
the next influence state vector of the two users is inferred from their past influence state vectors, their current emotion state vectors, and their current interaction state vector; the attention model cooperates with this step to perform information screening; the influences of other users on user i are integrated, where Nei(u_i) denotes the users associated with user i in the social network; and, taking as input the current emotion state vector of user i, the current behavior state vector, and the state vectors of influence from other users, the emotion state vector of user i at the next moment is predicted;
and inputting the processed data into the constructed deep-LSTM-based long short-term memory model, and outputting the emotion states of the user at different moments in the social network.
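The gated recurrent update described in claim 1 — two activation gates produced from the previous state, plus a residual path from the input directly to an internal node — can be illustrated with a minimal NumPy sketch. The patent's actual formulas are not reproduced in the source, so every name, dimension, and the GRU-style form below are our assumptions, not the patented equations:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical state/input dimension

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical parameters of one gated cell; claim 1 describes z_{t+1} and
# r_{t+1} as two activations generated from the previous state H_t.
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(d, 2 * d)) for _ in range(3))

def cell_step(H_t, X_t1):
    """One GRU-style step with a residual shortcut from the input to the
    candidate state (one reading of the 'generalized residual structure')."""
    concat = np.concatenate([H_t, X_t1])
    z = sigmoid(Wz @ concat)          # update gate, values in [0, 1]
    r = sigmoid(Wr @ concat)          # reset gate, values in [0, 1]
    cand = np.tanh(Wh @ np.concatenate([r * H_t, X_t1]))
    cand = cand + X_t1                # residual path: input -> internal node
    return (1.0 - z) * H_t + z * cand

H = np.zeros(d)
for _ in range(5):                    # unroll over a short input sequence
    H = cell_step(H, rng.normal(size=d))
print(H.shape)                        # (8,)
```

In a full model of the kind the claim outlines, two such recurrences (emotion change and influence correlation) would run side by side, each consuming the other's state as part of its input.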
2. The social network emotion modeling method of claim 1, wherein the step of processing social network heterogeneous data based on an attention model includes:
extracting, with an attention model conditioned on the current state, the information from the social network heterogeneous data that matches the required importance distribution, comprising:
generating an importance distribution over all heterogeneous data by combining the user emotion state vector with a rough data representation, and performing sparse sampling, wherein the rough data representation comprises vectorized representations of tags, titles and thumbnails;
vectorizing the extracted information, and generating a compact vector representation for input into a subsequent model;
wherein for an image, an AutoEncoder is used to generate a compact vector representation;
for audio, generating a compact vector representation using an LSTM-based AutoEncoder;
for video, processing a single picture by using an AutoEncoder, and then processing by using an audio processing method;
for text, a word vector is used for representation.
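The first step of claim 2 — scoring heterogeneous items against the user's emotion state vector to get an importance distribution, then sampling sparsely — might look like the following sketch. The dot-product scoring, softmax, dimensions, and variable names are all our illustrative assumptions; the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                         # hypothetical embedding dimension
n_items = 50                   # heterogeneous items (tags, titles, thumbnails, ...)

emotion_state = rng.normal(size=d)          # user's current emotion state vector
rough_repr = rng.normal(size=(n_items, d))  # rough vectorized item representations

# Attention: softmax over dot-product scores yields the importance distribution.
scores = rough_repr @ emotion_state
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Sparse sampling: keep only k items, drawn according to importance,
# so only the selected items are fully downloaded and vectorized downstream.
k = 5
picked = rng.choice(n_items, size=k, replace=False, p=weights)
print(sorted(picked.tolist()))
```

Only the sampled items would then be passed through the modality-specific encoders (AutoEncoder for images, LSTM-based AutoEncoder for audio, word vectors for text) to produce the compact vector representations fed to the recurrent model.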
CN202010174687.6A 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network Active CN111414478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010174687.6A CN111414478B (en) 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network

Publications (2)

Publication Number Publication Date
CN111414478A CN111414478A (en) 2020-07-14
CN111414478B true CN111414478B (en) 2023-11-17

Family

ID=71492941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010174687.6A Active CN111414478B (en) 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network

Country Status (1)

Country Link
CN (1) CN111414478B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327631B (en) * 2021-07-15 2023-03-21 广州虎牙科技有限公司 Emotion recognition model training method, emotion recognition method and emotion recognition device
CN113609306B (en) * 2021-08-04 2024-04-23 北京邮电大学 Social network link prediction method and system for anti-residual diagram variation self-encoder

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2016101688A1 (en) * 2014-12-25 2016-06-30 清华大学 Continuous voice recognition method based on deep long-and-short-term memory recurrent neural network
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN107808168A (en) * 2017-10-31 2018-03-16 北京科技大学 A kind of social network user behavior prediction method based on strong or weak relation
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning
CN109508375A (en) * 2018-11-19 2019-03-22 重庆邮电大学 A kind of social affective classification method based on multi-modal fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant