CN114491289A - Social content depression detection method of bidirectional gated convolutional network


Info

Publication number
CN114491289A
CN114491289A
Authority
CN
China
Prior art keywords
image
attention
representation
text
sentence
Prior art date
Legal status
Granted
Application number
CN202111674925.0A
Other languages
Chinese (zh)
Other versions
CN114491289B (en)
Inventor
张小瑞
原春霖
孙伟
孙逊
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202111674925.0A
Publication of CN114491289A
Application granted
Publication of CN114491289B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9536 Search customisation based on social or collaborative filtering
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3347 Query execution using vector based model
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/268 Morphological analysis
    • G06F 40/30 Semantic analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a social content depression detection method based on a bidirectional gated convolutional network. The method obtains the text and images in social content and vectorizes the text to obtain word vector sequences; constructs a part-of-speech position attention feature matrix from the word vector sequence, calculates the input matrix of a convolutional network, and convolves it with multi-scale filters to obtain multi-channel convolutional features; encodes the word vector sequence with a Bi-GRU to obtain word representations and assigns weights among words with a word vector emotion attention mechanism to obtain sentence representations; encodes the images with a residual attention network, learns image-specific attention weights through a visual attention mechanism, and aggregates them with the sentence representations into image-specific text representations; aggregates the image-specific texts by learning their importance weights to obtain a final text representation; and, after feature splicing of the multi-channel convolutional features and the final text representation, determines whether depression exists and its severity through softmax classifiers.

Description

Social content depression detection method of bidirectional gated convolutional network
Technical Field
The invention relates to a social content depression detection method of a bidirectional gated convolutional network, and belongs to the technical field of depression detection.
Background
According to statistics, there are currently about 350 million depression patients worldwide; the prevalence of depression in China has reached 2.1%, and the patient population is becoming younger, with ages 20 to 50 now the high-incidence range. Worse still, the suicide rate among depression patients reaches 15%, so a method for early screening and diagnosis of depression is urgently needed. With the spread of the internet in recent years and the virtual, anonymous character of online interaction, more and more patients prefer to express and release their feelings on social media, so finding an effective, fast and accurate way to detect depression on social media has become a pressing task.
Current social media content is mainly multimodal, combining pictures and text. Existing research uses a visual attention mechanism to let pictures reinforce the text when processing textual information, but problems remain. First, most current work processes text with a bidirectional long short-term memory network (Bi-LSTM), which handles sequences well but tends to suffer from long training times, overfitting, and weak feature information. Second, image feature extraction in the visual attention module is noisy, and most current attention mechanisms depend on external knowledge such as syntactic analysis, so detection cannot be both fast and accurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a social content depression detection method based on a bidirectional gated convolutional network for fast and accurate depression detection on social content.
To achieve this purpose, the invention adopts the following technical scheme:
the invention provides a social content depression detection method of a bidirectional gating convolutional network, which comprises the following steps of:
acquiring a text and an image in social content, and performing vectorization processing on words in the text to obtain a word vector sequence;
constructing a part-of-speech position attention feature matrix according to the word vector sequence, calculating an input matrix of a convolution network, and performing convolution by a multi-scale filter to obtain a multi-channel convolution feature;
encoding the word vector sequence by using Bi-GRU to obtain word representation, and distributing weights among words by using a word vector emotion attention mechanism to obtain sentence representation;
coding the image through a residual attention network, learning the specific attention weight of the image through a visual attention mechanism, and aggregating the specific attention weight of the image and the sentence representation into an image specific text representation;
the image specific text is aggregated by learning the importance weight of the image specific text to obtain a final text representation;
and performing feature splicing on the multi-channel convolution features and the final text representation, and then obtaining whether depression exists and the depression severity degree through a softmax classifier.
Further, the text in the social content is divided into L sentences s_i, each sentence s_i consisting of T words w_{i,t}, where i ∈ (1, …, L) and t ∈ (1, …, T).
Further, the input matrix of the convolutional network is calculated by two equations (rendered only as images in the original), in which z_e is the input matrix of the convolutional network, e is the target word position, δ is the influence degree, ε is the weight coefficient, A_e is the part-of-speech attention feature matrix, tag_e is the part-of-speech vector, P is the sentence position matrix, and n is the sentence length.
Further, the input matrix of the convolutional network is convolved by 3 to 5 groups of filters of different scales, with 128 filters in each group and the stride set to 1.
Further, the word vector emotion attention mechanism assigns weights among words to obtain the sentence representation as follows:

u_{i,t} = U^T tanh(h_{i,t} W_w + b_w)

α_{i,t} = exp(u_{i,t}) / Σ_t exp(u_{i,t})

h_i = Σ_t α_{i,t} h_{i,t}

where u_{i,t} is the relative importance of the word, U^T is the context vector, tanh is the nonlinear activation function, h_{i,t} is the word representation, W_w is the word weight matrix, b_w is the word bias, α_{i,t} is the word attention weight, and h_i is the sentence representation.
Further, the images in the social content are encoded with a residual attention network: a plurality of residual attention blocks (RABs) are stacked on a ResNet-101 backbone as feature selectors to enhance the good feature representation of the trunk branch and suppress its noise.
Further, the process of encoding the image is as follows:

m_j = f(V_j)

V_j = T(a_j) * (1 + M(a_j))

where m_j is the image representation, f is the visual attention function, V_j is the spatial feature vector, a_j is an image, T(a_j) is the trunk branch output, M(a_j) is the mask branch output, j ∈ (1, …, E) indexes the images, and E is the number of images.
Further, the visual attention mechanism is processed as follows:

p_j = tanh(W_p m_j + b_p)

q_i = tanh(W_q h_i + b_q)

v_{j,i} = V^T (p_j * q_i + q_i)

β_{j,i} = exp(v_{j,i}) / Σ_i exp(v_{j,i})

d_j = Σ_i β_{j,i} h_i

where p_j is the image projection, W_p is the image weight, m_j is the image representation, b_p is the image bias, q_i is the sentence projection, W_q is the sentence weight, h_i is the sentence representation, b_q is the sentence bias, v_{j,i} is the attention value, V^T is a learned vector, β_{j,i} is the image-specific attention weight, and d_j is the image-specific text representation.
Further, the final text representation is calculated as follows:

k_j = K^T tanh(d_j W_d + b_d)

γ_j = exp(k_j) / Σ_j exp(k_j)

d = Σ_j γ_j d_j

where k_j is the importance of d_j, K^T is the global context attention vector, W_d is the weight matrix corresponding to d_j, b_d is the text bias, γ_j is the image-specific text importance weight, and d is the final text representation.
The invention also provides a computer system, comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any of the above.
Compared with the prior art, the invention has the following beneficial effects:
the method uses a multi-channel convolution network to extract spatial local features of different fine granularities of the document, uses the background semantic association information of the Bi-GRU learning text, and improves the richness and comprehensiveness of emotional feature representation;
a part-of-speech position attention mechanism is added before multi-channel convolution, so that the limitation of only depending on content level information is solved, and the model can acquire deeper information without depending on external knowledge;
the Bi-GRU is used for coding the word vector sequence, forward and backward bidirectional learning of the text is reserved, meanwhile, the operation time is reduced, and the speed of the model is improved;
and a residual error attention block is introduced, noise is reduced, the detection precision is improved, a visual attention mechanism is introduced, important sentences are identified, the interference of noise is reduced, and the utilization rate and classification precision of social information are improved.
Drawings
Fig. 1 is a block diagram of a social content depression detection method of a bidirectional gated convolutional network according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are provided only to illustrate the technical solution of the invention more clearly and do not limit its scope of protection.
As shown in fig. 1, the framework of the social content depression detection method of a bidirectional gated convolutional network provided by an embodiment of the invention combines a bidirectional gated recurrent network (Bi-GRU) with a multi-channel convolutional network, exploits the combination of a part-of-speech position attention mechanism with the convolutional network, and introduces a residual attention module to further reduce noise and improve detection accuracy. The method comprises the following steps:
(1) The text in the social content can be divided into L sentences s_i (i ∈ (1, …, L)), and sentence s_i consists of T words w_{i,t} (t ∈ (1, …, T)). Each word w_{i,t} is vectorized with Word2vec to obtain the word vector sequence x_{i,t}; this avoids the dimensionality-disaster problem of plain text representations and improves the precision of the semantic representation. A minimal sketch of this step is given below.
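The following sketch illustrates this vectorization step with gensim's Word2Vec; the example corpus, tokenization, and vector size of 128 are illustrative assumptions, since the patent does not fix them.

```python
# Sketch of step (1): Word2vec vectorization of social-media text.
# Corpus, tokenization, and vector_size=128 are illustrative assumptions.
from gensim.models import Word2Vec

# Each post is split into sentences s_i, each sentence into words w_{i,t}.
corpus = [["i", "feel", "empty", "again"],
          ["nothing", "seems", "to", "matter", "anymore"]]

model = Word2Vec(corpus, vector_size=128, window=5, min_count=1, sg=1)

# x[i][t] is the word vector x_{i,t} for word w_{i,t} of sentence s_i.
x = [[model.wv[w] for w in sent] for sent in corpus]
print(x[0][0].shape)  # (128,)
```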
(2) Construct the part-of-speech position attention feature matrix, then calculate the input matrix of the multi-granularity convolutional network. The two defining equations are rendered only as images in the original; in them, z_e is the input matrix of the convolutional network, e is the target word position, δ is the influence degree, ε is the weight coefficient, A_e is the part-of-speech attention feature matrix, tag_e is the part-of-speech vector, P is the sentence position matrix, and n is the sentence length.
Each part of speech is mapped to a part-of-speech vector tag_e, and the part-of-speech vector of the target word, denoted tar, is used to build the part-of-speech attention feature matrix A_e. The coefficient ε takes the value 1.2 for emotional words and 1.0 for other words. The invention adopts a bidirectional scanning algorithm to determine the position values between words and the target more accurately, stores the position values of all sentences in the matrix P, and then calculates δ. A hedged sketch of these inputs follows.
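Since the exact combining formula for z_e survives only as an image, the sketch below covers just the pieces the text does specify: the tag_e lookup, the emotional-word coefficient ε, and a bidirectional-scan position matrix P. The tag set, the interaction used to form A_e, and all dimensions are assumptions, not the patent's exact construction.

```python
# Hedged sketch of the part-of-speech position attention inputs (step 2).
# Tag set, the form of A_e, and dimensions are assumptions; the combining
# equation for z_e is not recoverable from the source.
import numpy as np

POS_DIM = 16
pos_vocab = {"NOUN": 0, "VERB": 1, "ADJ": 2, "ADV": 3}   # assumed tag set
tag_table = np.random.randn(len(pos_vocab), POS_DIM)     # tag_e lookup table

def pos_position_features(pos_tags, emotional, target):
    """pos_tags: POS tag per word; emotional: bool per word; target: index e."""
    tag_vecs = np.stack([tag_table[pos_vocab[p]] for p in pos_tags])
    tar = tag_vecs[target]                 # POS vector of the target word
    A_e = tag_vecs * tar                   # assumed interaction forming A_e
    eps = np.where(emotional, 1.2, 1.0)    # 1.2 for emotional words, else 1.0
    # Bidirectional scan: signed distance of every word from the target.
    P = np.arange(len(pos_tags)) - target
    return A_e, eps, P

A_e, eps, P = pos_position_features(
    ["NOUN", "VERB", "ADJ"], [False, False, True], target=2)
```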
(3) Filters of different scales convolve the feature information of the sentences at different fine granularities; the dimensionality of the feature vectors is unified, and the results are concatenated to obtain the output feature representation of the multi-channel convolution.
The multi-scale convolution adopts five groups of filters of different scales, sized 2 × 128, 3 × 128, 4 × 128, 5 × 128 and 6 × 128 respectively, to convolve sentence feature information at different fine granularities, with 128 filters in each group and the stride set to 1. A sketch is given below.
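A minimal PyTorch sketch of this multi-channel convolution, assuming a 128-dimensional word embedding; max-pooling over time is used here to unify the dimensionality across scales, which is an assumption since the patent does not name the pooling step.

```python
# Sketch of step (3): five parallel 1-D convolutions with kernel sizes 2-6,
# 128 filters each, stride 1; outputs max-pooled over time and concatenated.
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    def __init__(self, emb_dim=128, n_filters=128, kernel_sizes=(2, 3, 4, 5, 6)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k, stride=1, padding=k // 2)
            for k in kernel_sizes)

    def forward(self, z):               # z: (batch, seq_len, emb_dim)
        z = z.transpose(1, 2)           # Conv1d expects (batch, channels, seq)
        # Max-pool each channel over time so all scales share one dimension.
        feats = [conv(z).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)  # (batch, 5 * n_filters)

out = MultiScaleConv()(torch.randn(4, 50, 128))  # -> torch.Size([4, 640])
```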
(4) The word vector sequence is encoded with Bi-GRU. The hidden state vectors at different times, i.e. the word representations h_{i,t}, are concatenated, weights are assigned among the words by the word vector emotion attention mechanism, and the sentence representation h_i is finally obtained as the weighted sum of all word representations h_{i,t} in the current sentence with their word attention weights α_{i,t}.
The word vector emotion attention mechanism works as follows:

u_{i,t} = U^T tanh(h_{i,t} W_w + b_w)

α_{i,t} = exp(u_{i,t}) / Σ_t exp(u_{i,t})

h_i = Σ_t α_{i,t} h_{i,t}

where u_{i,t} is the relative importance of the word, U^T is the context vector, tanh is the nonlinear activation function, h_{i,t} is the word representation, W_w is the word weight matrix, b_w is the word bias, α_{i,t} is the word attention weight, and h_i is the sentence representation.
The word representation h_{i,t} is projected into the attention space by a layer of neurons with the nonlinear activation function tanh, then multiplied by the context vector U^T to obtain u_{i,t}, which is normalized with Softmax to yield the word attention weight α_{i,t}. Finally, the sentence representation h_i is the weighted sum of all word representations h_{i,t} in the current sentence with their word attention weights α_{i,t}, as in the sketch below.
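A minimal PyTorch sketch of the Bi-GRU encoder with this word-level emotion attention; the hidden and attention dimensions are illustrative assumptions.

```python
# Sketch of step (4): Bi-GRU encoding plus word attention,
# u = U^T tanh(h W_w + b_w), alpha = softmax(u), h_i = sum_t alpha * h.
import torch
import torch.nn as nn

class WordAttentionEncoder(nn.Module):
    def __init__(self, emb_dim=128, hidden=64, att_dim=64):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, att_dim)          # W_w, b_w
        self.context = nn.Parameter(torch.randn(att_dim))   # context vector U

    def forward(self, x):                # x: (batch, T, emb_dim), one sentence per row
        h, _ = self.gru(x)               # h_{i,t}: (batch, T, 2*hidden)
        u = torch.tanh(self.proj(h)) @ self.context         # (batch, T)
        alpha = torch.softmax(u, dim=1)                     # word attention weights
        return (alpha.unsqueeze(-1) * h).sum(dim=1)         # sentence representation h_i

h_i = WordAttentionEncoder()(torch.randn(8, 20, 128))       # -> (8, 128)
```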
(5) The visual component consists of E images a_j (j ∈ (1, …, E)). The images in the social content are encoded to obtain the image representations m_j; then, for each image, the image-specific attention weights β_{j,i} over sentences are learned, the sentence representations are aggregated into an image-specific text representation d_j, and the image-specific text importance weights γ_j are learned and used to aggregate these into the final text representation d.
the process of encoding an image is as follows:
mj=f(Vj)
Vj=T(aj)*(1+M(aj))
wherein m isjFor image representation, f is the visual attention function, VjIs a spatial feature vector, ajIs an image, T (a)j) For trunk branch output, M (a)j) For the mask branch output, j ∈ (1, …, E) is the fetched image position, and E is the number of images.
A plurality of residual attention blocks (RABs) are stacked on a ResNet-101 backbone as feature selectors to enhance the good feature representation of the trunk branch and suppress its noise; a simplified sketch follows.
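A hedged sketch of one residual attention block implementing V = T(a) · (1 + M(a)); the trunk and mask branch architectures and the channel count here are simplified assumptions rather than the patent's exact design, which stacks these blocks on ResNet-101.

```python
# Hedged sketch of a residual attention block (RAB): a trunk branch T(a)
# and a sigmoid soft-mask branch M(a), combined as V = T(a) * (1 + M(a)).
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.mask = nn.Sequential(   # soft mask in [0, 1] acting as feature selector
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, a):
        return self.trunk(a) * (1 + self.mask(a))   # V_j = T(a_j) * (1 + M(a_j))

v = ResidualAttentionBlock()(torch.randn(2, 256, 14, 14))
```

The (1 + M) form keeps the identity path alive: even where the mask is near zero, the trunk features pass through, so the mask can only amplify good features rather than erase them.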
The visual attention mechanism is generated as follows. After the sentence representations h_i are obtained, important sentences need to be given more weight, so visual attention is used:

p_j = tanh(W_p m_j + b_p)

q_i = tanh(W_q h_i + b_q)

v_{j,i} = V^T (p_j * q_i + q_i)

β_{j,i} = exp(v_{j,i}) / Σ_i exp(v_{j,i})

d_j = Σ_i β_{j,i} h_i

where p_j is the image projection, W_p is the image weight, m_j is the image representation, b_p is the image bias, q_i is the sentence projection, W_q is the sentence weight, h_i is the sentence representation, b_q is the sentence bias, v_{j,i} is the attention value, V^T is a learned vector, β_{j,i} is the image-specific attention weight, d_j is the image-specific text representation, and S_i is the sentence vector sequence.
The image representation m_j and the sentence representation h_i are each multiplied by their corresponding weights, projected into the attention space, and scaled to the same value range by the nonlinear activation function to obtain their respective projections. The two projections interact through element-wise multiplication and summation: the multiplication ensures that the influence of the visual part is not lost when the attention weight is calculated, while the summation ensures that the sparsity of the visual part does not markedly weaken the effect of the text part. The attention value v_{j,i} is generated and normalized with softmax to obtain the image-specific attention weight β_{j,i}, and the sentence representations are then aggregated into the image-specific text representation d_j, as sketched below.
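A minimal PyTorch sketch of this visual attention step; the image feature dimension (2048, as from a ResNet backbone) and the other dimensions are assumptions.

```python
# Sketch of the visual attention: project images m_j and sentences h_i,
# interact via element-wise product plus sum, softmax-normalize, and
# aggregate sentences into the image-specific text d_j.
import torch
import torch.nn as nn

class VisualAttention(nn.Module):
    def __init__(self, img_dim=2048, sent_dim=128, att_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, att_dim)    # W_p, b_p
        self.sent_proj = nn.Linear(sent_dim, att_dim)  # W_q, b_q
        self.v = nn.Parameter(torch.randn(att_dim))    # learned vector V

    def forward(self, m, h):        # m: (E, img_dim) images, h: (L, sent_dim) sentences
        p = torch.tanh(self.img_proj(m))               # p_j: (E, att_dim)
        q = torch.tanh(self.sent_proj(h))              # q_i: (L, att_dim)
        # v_{j,i} = V^T (p_j * q_i + q_i): the product keeps the visual signal
        # in the weight; the added q_i keeps sparse images from washing out text.
        score = (p.unsqueeze(1) * q + q) @ self.v      # (E, L)
        beta = torch.softmax(score, dim=1)             # image-specific weights
        return beta @ h                                # d_j: (E, sent_dim)

d = VisualAttention()(torch.randn(3, 2048), torch.randn(5, 128))
```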
Next, the image-specific text importance weight γ_j is learned, indicating how much each image-specific text representation d_j contributes to the final text representation d:

k_j = K^T tanh(d_j W_d + b_d)

γ_j = exp(k_j) / Σ_j exp(k_j)

d = Σ_j γ_j d_j

where k_j is the importance of d_j, K^T is the global context attention vector, W_d is the weight matrix corresponding to d_j, b_d is the text bias, γ_j is the image-specific text importance weight, and d is the final text representation.
Each image-specific text representation d_j is projected into the attention space by neurons with the nonlinear activation function tanh, and the d_j are aggregated into the final text representation d with the image-specific text importance weights γ_j, as in the sketch below.
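A short sketch of this final aggregation, with the same assumed dimensions as the visual attention sketch above.

```python
# Sketch of the final aggregation: k_j = K^T tanh(d_j W_d + b_d),
# gamma = softmax(k), d = sum_j gamma_j * d_j.
import torch
import torch.nn as nn

class TextAggregator(nn.Module):
    def __init__(self, dim=128, att_dim=128):
        super().__init__()
        self.proj = nn.Linear(dim, att_dim)          # W_d, b_d
        self.k = nn.Parameter(torch.randn(att_dim))  # global context vector K

    def forward(self, d_j):                          # d_j: (E, dim)
        scores = torch.tanh(self.proj(d_j)) @ self.k   # k_j: (E,)
        gamma = torch.softmax(scores, dim=0)           # importance weights
        return gamma @ d_j                             # final text representation d

d_final = TextAggregator()(torch.randn(3, 128))        # -> (128,)
```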
(6) The output representations of the two models are feature-spliced and fed into two softmax classifiers: one generates a label indicating whether depression is present, and the other predicts the severity of the depression. A sketch of this head follows.
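A minimal sketch of the classification head; the feature dimensions and the number of severity levels are assumptions, since the patent does not specify them.

```python
# Sketch of step (6): concatenate the convolutional features with the final
# text representation and feed two softmax heads (presence and severity).
import torch
import torch.nn as nn

class DepressionHeads(nn.Module):
    def __init__(self, conv_dim=640, text_dim=128, n_severity=4):
        super().__init__()
        self.presence = nn.Linear(conv_dim + text_dim, 2)           # depressed / not
        self.severity = nn.Linear(conv_dim + text_dim, n_severity)  # severity levels

    def forward(self, conv_feat, text_feat):
        z = torch.cat([conv_feat, text_feat], dim=-1)    # feature splicing
        return (torch.softmax(self.presence(z), dim=-1),
                torch.softmax(self.severity(z), dim=-1))

presence_p, severity_p = DepressionHeads()(torch.randn(4, 640), torch.randn(4, 128))
```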
The invention also provides a computer system comprising a processor and a storage medium, the storage medium being arranged to store instructions, the processor being arranged to operate in accordance with the instructions to perform the steps of any of the methods described above.
The method combines Bi-GRU with a multi-granularity convolutional network, learning the background semantic association information of the text while acquiring more spatial local feature information, and improves running speed by reducing the number of parameters. Second, a part-of-speech and position attention mechanism is added to the convolutional network: the model receives the text input in parallel, and learning deeper emotional information from the input text makes up for the shortcoming of relying only on a content-level attention mechanism. Meanwhile, a residual attention module is added to extract image features, reducing noise and improving accuracy. Finally, fast and accurate diagnosis of depression and its severity can be completed from social media content.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, it is possible to make various improvements and modifications without departing from the technical principle of the present invention, and those improvements and modifications should be considered as the protection scope of the present invention.

Claims (10)

1. A social content depression detection method of a bidirectional gated convolutional network, comprising the following steps:
acquiring a text and an image in social content, and performing vectorization processing on the text to obtain a word vector sequence;
constructing a part-of-speech position attention feature matrix according to the word vector sequence, calculating an input matrix of a convolution network, and performing convolution by a multi-scale filter to obtain a multi-channel convolution feature;
encoding the word vector sequence by using Bi-GRU to obtain word representation, and distributing weights among words by using a word vector emotion attention mechanism to obtain sentence representation;
coding the image through a residual attention network, learning the specific attention weight of the image through a visual attention mechanism, and aggregating the specific attention weight of the image and the sentence representation into an image specific text representation;
the image specific text is aggregated by learning the importance weight of the image specific text to obtain a final text representation;
and performing feature splicing on the multi-channel convolution features and the final text representation, and then obtaining whether depression exists and the depression severity degree through a softmax classifier.
2. The method of claim 1, wherein the text in the social content is divided into L sentences s_i, each sentence s_i consisting of T words w_{i,t}, where i ∈ (1, …, L) and t ∈ (1, …, T).
3. The method of claim 1, wherein the input matrix of the convolutional network is calculated by two equations (rendered only as images in the original), in which z_e is the input matrix of the convolutional network, e is the target word position, δ is the influence degree, ε is the weight coefficient, A_e is the part-of-speech attention feature matrix, tag_e is the part-of-speech vector, P is the sentence position matrix, and n is the sentence length.
4. The method of claim 1, wherein the input matrix of the convolutional network is convolved by 3 to 5 groups of filters of different scales, with 128 filters in each group and the stride set to 1.
5. The method of claim 1, wherein the word vector emotion attention mechanism assigns weights among words as follows:

u_{i,t} = U^T tanh(h_{i,t} W_w + b_w)

α_{i,t} = exp(u_{i,t}) / Σ_t exp(u_{i,t})

h_i = Σ_t α_{i,t} h_{i,t}

where u_{i,t} is the relative importance of the word, U^T is the context vector, tanh is the nonlinear activation function, h_{i,t} is the word representation, W_w is the word weight matrix, b_w is the word bias, α_{i,t} is the word attention weight, and h_i is the sentence representation.
6. The method of claim 1, wherein encoding the images in the social content using a residual attention network comprises stacking a plurality of residual attention blocks (RABs) on a ResNet-101 backbone as feature selectors.
7. The method of claim 1, wherein the process of encoding an image using the residual attention network is as follows:

m_j = f(V_j)

V_j = T(a_j) * (1 + M(a_j))

where m_j is the image representation, f is the visual attention function, V_j is the spatial feature vector, a_j is an image, T(a_j) is the trunk branch output, M(a_j) is the mask branch output, j ∈ (1, …, E) indexes the images, and E is the number of images.
8. The method of claim 1, wherein the visual attention mechanism is processed as follows:

p_j = tanh(W_p m_j + b_p)

q_i = tanh(W_q h_i + b_q)

v_{j,i} = V^T (p_j * q_i + q_i)

β_{j,i} = exp(v_{j,i}) / Σ_i exp(v_{j,i})

d_j = Σ_i β_{j,i} h_i

where p_j is the image projection, W_p is the image weight, m_j is the image representation, b_p is the image bias, q_i is the sentence projection, W_q is the sentence weight, h_i is the sentence representation, b_q is the sentence bias, v_{j,i} is the attention value, V^T is a learned vector, β_{j,i} is the image-specific attention weight, and d_j is the image-specific text representation.
9. The method of claim 8, wherein the final text representation is calculated as follows:

k_j = K^T tanh(d_j W_d + b_d)

γ_j = exp(k_j) / Σ_j exp(k_j)

d = Σ_j γ_j d_j

where k_j is the importance of d_j, K^T is the global context attention vector, W_d is the weight matrix corresponding to d_j, b_d is the text bias, γ_j is the image-specific text importance weight, and d is the final text representation.
10. A computer system comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 9.
CN202111674925.0A 2021-12-31 2021-12-31 Social content depression detection method of bidirectional gating convolutional network Active CN114491289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111674925.0A CN114491289B (en) 2021-12-31 2021-12-31 Social content depression detection method of bidirectional gating convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111674925.0A CN114491289B (en) 2021-12-31 2021-12-31 Social content depression detection method of bidirectional gating convolutional network

Publications (2)

Publication Number Publication Date
CN114491289A 2022-05-13
CN114491289B CN114491289B (en) 2024-09-17

Family

ID=81507985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111674925.0A Active CN114491289B (en) 2021-12-31 2021-12-31 Social content depression detection method of bidirectional gating convolutional network

Country Status (1)

Country Link
CN (1) CN114491289B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948165A (en) * 2019-04-24 2019-06-28 吉林大学 Fine granularity feeling polarities prediction technique based on mixing attention network
US20200356724A1 (en) * 2019-05-06 2020-11-12 University Of Electronic Science And Technology Of China Multi-hop attention and depth model, method, storage medium and terminal for classification of target sentiments
CN112269876A (en) * 2020-10-26 2021-01-26 南京邮电大学 Text classification method based on deep learning
CN112860888A (en) * 2021-01-26 2021-05-28 中山大学 Attention mechanism-based bimodal emotion analysis method
CN113641820A (en) * 2021-08-10 2021-11-12 福州大学 Visual angle level text emotion classification method and system based on graph convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Ting; He Xiaohai; Sun Weiheng; Xiong Shuhua; Karn Pradeep: "An Improved HEVC Intra-frame Coding Compression Algorithm Combined with Convolutional Neural Networks", Journal of Terahertz Science and Electronic Information Technology, no. 02, 25 April 2020 (2020-04-25) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115994539A (en) * 2023-02-17 2023-04-21 成都信息工程大学 Entity extraction method and system based on convolution gating and entity boundary prediction
CN115994539B (en) * 2023-02-17 2024-05-10 成都信息工程大学 Entity extraction method and system based on convolution gating and entity boundary prediction
CN117077085A (en) * 2023-10-17 2023-11-17 中国科学技术大学 Multi-mode harmful social media content identification method combining large model with two-way memory
CN117077085B (en) * 2023-10-17 2024-02-09 中国科学技术大学 Multi-mode harmful social media content identification method combining large model with two-way memory

Also Published As

Publication number Publication date
CN114491289B (en) 2024-09-17

Similar Documents

Publication Publication Date Title
Micikevicius et al. Mixed precision training
WO2021114840A1 (en) Scoring method and apparatus based on semantic analysis, terminal device, and storage medium
CN112199956B (en) Entity emotion analysis method based on deep representation learning
CN112084331A (en) Text processing method, text processing device, model training method, model training device, computer equipment and storage medium
CN114298158A (en) Multi-mode pre-training method based on image-text linear combination
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN112818861A (en) Emotion classification method and system based on multi-mode context semantic features
CN110457718B (en) Text generation method and device, computer equipment and storage medium
CN111832584A (en) Image processing apparatus, training apparatus and training method thereof
CN111464881B (en) Full-convolution video description generation method based on self-optimization mechanism
CN111914085A (en) Text fine-grained emotion classification method, system, device and storage medium
CN112256866B (en) Text fine-grained emotion analysis algorithm based on deep learning
CN110321805B (en) Dynamic expression recognition method based on time sequence relation reasoning
CN114491289B (en) Social content depression detection method of bidirectional gating convolutional network
CN113987187A (en) Multi-label embedding-based public opinion text classification method, system, terminal and medium
CN114330499A (en) Method, device, equipment, storage medium and program product for training classification model
Zhang et al. Attention pooling-based bidirectional gated recurrent units model for sentimental classification
CN110276396A (en) Picture based on object conspicuousness and cross-module state fusion feature describes generation method
CN116258990A (en) Cross-modal affinity-based small sample reference video target segmentation method
CN117370736A (en) Fine granularity emotion recognition method, electronic equipment and storage medium
CN116578738B (en) Graph-text retrieval method and device based on graph attention and generating countermeasure network
CN111445545A (en) Text-to-map method, device, storage medium and electronic equipment
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article
CN112765955B (en) Cross-modal instance segmentation method under Chinese finger representation
Feng et al. Research on optimization method of convolutional nerual network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant