CN111768803A - General audio steganalysis method based on convolutional neural network and multi-task learning - Google Patents

General audio steganalysis method based on convolutional neural network and multi-task learning Download PDF

Info

Publication number
CN111768803A
CN111768803A CN202010415020.0A CN202010415020A CN111768803A CN 111768803 A CN111768803 A CN 111768803A CN 202010415020 A CN202010415020 A CN 202010415020A CN 111768803 A CN111768803 A CN 111768803A
Authority
CN
China
Prior art keywords
network
audio
convolution
layer
steganalysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010415020.0A
Other languages
Chinese (zh)
Other versions
CN111768803B (en)
Inventor
Wang Rangding
Lin Yuzhen
Yan Diqun
Dong Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongyue Information Technology Co ltd
Tianyi Safety Technology Co Ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202010415020.0A priority Critical patent/CN111768803B/en
Publication of CN111768803A publication Critical patent/CN111768803A/en
Application granted granted Critical
Publication of CN111768803B publication Critical patent/CN111768803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Complex Calculations (AREA)

Abstract

The invention relates to a general audio steganalysis method based on a convolutional neural network and multi-task learning. The network framework corresponding to the method comprises a feature extraction sub-network, a two-classification sub-network and a multi-classification sub-network. By providing an audio general steganalysis model and method based on a convolutional neural network and multi-task learning, the detection performance on various audio steganography algorithms is effectively improved; the method also improves the detection capability for unknown steganography algorithms, facilitating the application of audio steganalysis technology in complex Internet big-data forensics scenarios.

Description

General audio steganalysis method based on convolutional neural network and multi-task learning
Technical Field
The invention relates to the technical field of audio steganography, in particular to a general audio steganography analysis method based on a convolutional neural network and multi-task learning.
Background
At present, audio steganalysis models based on deep learning achieve high detection performance under laboratory conditions. However, in a real network big-data forensics environment, stego audio may be generated by any of several steganographic algorithms, including algorithms that do not appear in the training data set. In this scenario, if a steganalyst directly applies a laboratory-trained model for detection, Steganographic Algorithm Mismatch (SAM) occurs and the detection accuracy drops sharply.
SAM arises during the generation of the stego carriers: the training set and the test set differ in the embedding method used to generate them. In this setting, the steganalysis researcher knows the statistical properties of the carrier source and need only design and train the classifier on a carrier database with the same statistical properties; however, because the steganographic algorithm is unknown, the feature distributions of the stego carriers in the training stage and the testing stage may differ, so that even a classifier with good detection performance during training may fail during testing.
Disclosure of Invention
In view of the foregoing problems, an object of the present invention is to provide a general audio steganography analysis method based on a convolutional neural network and multitask learning, which can effectively improve the detection effect of an audio steganography algorithm and the detection capability of an unknown steganography algorithm.
In order to achieve the above purpose, the technical scheme of the invention is as follows: a general audio steganalysis method based on a convolutional neural network and multitask learning, characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network, a two-classification sub-network and a multi-classification sub-network, and the method comprises the following steps:
s1, inputting audio data;
s2, outputting a steganalysis feature vector F through a feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the two-classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, the steganalysis feature vector F is passed through the two-classification sub-network to output a binary steganography prediction probability vector ŷ = [ŷ0, ŷ1]; the cross-entropy loss Lm between ŷ and the One-hot-encoded binary steganography label vector y = [y0, y1] is computed as

Lm = -Σ_{i=0}^{1} y_i log(ŷ_i),

wherein y_i ∈ {0, 1} and i ∈ {0, 1} denotes the class index; the parameters of the two-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S5, the steganalysis feature vector F is passed through the multi-classification sub-network to output the prediction probability vector of the steganography algorithm type, m̂ = [m̂0, m̂1, …, m̂_{M-1}]; the cross-entropy loss La between m̂ and the One-hot-encoded steganography class label m = [m0, m1, …, m_{M-1}] is computed as

La = -Σ_{k=0}^{M-1} m_k log(m̂_k),

wherein M denotes the number of different steganographic algorithms contained in the training set data; the parameters of the multi-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S6, updating the network parameters according to the combined loss L = Lm + λ·La, wherein λ is an auxiliary-task weight factor;
s7, calculating a confidence value C (m) of the prediction probability through the multi-classification sub-network;
S8, judging whether the confidence value C(m) is greater than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the type of the steganography algorithm.
Further, the feature extraction sub-network in S2 includes an audio preprocessing layer and 5 concatenated convolution groups after the audio preprocessing layer, that is, a 1 st convolution group, a 2 nd convolution group, a 3 rd convolution group, a 4 th convolution group, and a 5 th convolution group.
Further, the audio preprocessing layer is composed of 4 convolution kernels D1-D4 of size 1 × 5, whose initial weights are respectively:
D1 = [1, -1, 0, 0, 0], D2 = [1, -2, 1, 0, 0], D3 = [1, -3, 3, -1, 0], D4 = [1, -4, 6, -4, 1];
the 1 st convolution group includes a 1 × 1 first convolution layer, a 1 × 5 second convolution layer, and a 1 × 1 third convolution layer;
the 2 nd convolution group, the 3 rd convolution group, the 4 th convolution group and the 5 th convolution group respectively comprise a 1 x 5 convolution layer, a 1 x 1 convolution layer and a mean pooling layer, wherein the mean pooling layer of the 5 th convolution group is a global mean pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
Furthermore, the audio preprocessing layer adopts a differential filtering design.
Further, the first convolution layer in the 1st convolution group is activated by a Truncated Linear Unit (TLU).
Further, the two-classification sub-network includes a fully connected layer having 128 neurons and a binary steganographic label prediction layer.
Further, the multi-classification sub-network comprises two cascaded fully connected layers, having 128 and 64 neurons respectively, and a steganographic class label prediction layer.
Further, the confidence value C(m) in S8 is calculated by the formula

C(m) = -Σ_{k=0}^{M-1} p(m_k) log p(m_k),

and the empirical threshold is set as CT = 0.5·C(m)_max, wherein C(m)_max = log M.
Compared with the prior art, the invention has the advantages that:
By providing an audio general steganalysis model and method based on a convolutional neural network and multi-task learning, the detection performance on various audio steganography algorithms is effectively improved; the method also improves the detection capability for unknown steganography algorithms, facilitating the application of audio steganalysis technology in complex Internet big-data forensics scenarios.
Detailed Description
The following detailed description of embodiments of the invention is merely exemplary in nature and is intended to be illustrative of the invention and not to be construed as limiting the invention.
The invention discloses a general audio steganalysis method based on a convolutional neural network and multi-task learning, which comprises the following steps:
s1, inputting audio data;
s2, outputting a steganalysis feature vector F through a feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the two-classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, the steganalysis feature vector F is passed through the two-classification sub-network to output a binary steganography prediction probability vector ŷ = [ŷ0, ŷ1]; the cross-entropy loss Lm between ŷ and the One-hot-encoded binary steganography label vector y = [y0, y1] is computed as

Lm = -Σ_{i=0}^{1} y_i log(ŷ_i),

wherein y_i ∈ {0, 1} and i ∈ {0, 1} denotes the class index; the parameters of the two-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S5, the steganalysis feature vector F is passed through the multi-classification sub-network to output the prediction probability vector of the steganography algorithm type, m̂ = [m̂0, m̂1, …, m̂_{M-1}]; the cross-entropy loss La between m̂ and the One-hot-encoded steganography class label m = [m0, m1, …, m_{M-1}] is computed as

La = -Σ_{k=0}^{M-1} m_k log(m̂_k),

wherein M denotes the number of different steganographic algorithms contained in the training set data; the parameters of the multi-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S6, updating the network parameters according to the combined loss L = Lm + λ·La, wherein λ is an auxiliary-task weight factor;
s7, calculating a confidence value C (m) of the prediction probability through the multi-classification sub-network;
S8, judging whether the confidence value C(m) is greater than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the type of the steganography algorithm.
Two related steganalysis tasks are constructed in this application: a two-classification task that distinguishes normal audio (Cover) from stego audio (Stego), and a multi-classification task that distinguishes the type of steganography algorithm used in the stego audio. Of the two, the two-classification task distinguishing normal audio from stego audio is the main focus of this work and can be regarded as the main task.
Specifically, the role of the feature extraction sub-network is to adaptively extract steganalysis features from the input audio data. Setting a reasonable preprocessing layer in a CNN (Convolutional Neural Network) steganalysis model can often improve the steganalysis performance of the network, so an audio preprocessing layer based on a differential filtering design is placed at the beginning of the feature extraction sub-network. It is composed of 4 convolution kernels D1-D4 of size 1 × 5, whose initial weights are respectively: D1 = [1, -1, 0, 0, 0], D2 = [1, -2, 1, 0, 0], D3 = [1, -3, 3, -1, 0], D4 = [1, -4, 6, -4, 1].
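As a minimal illustrative sketch (NumPy-based; the patent itself provides no code), the four difference filters can be applied to a 1-D audio signal as follows, with the kernel taps taken from the initial weights just listed:

```python
import numpy as np

# Initial weights of the four 1x5 difference-filter kernels D1-D4
# (first- to fourth-order binomial differences, as listed above).
KERNELS = np.array([
    [1, -1,  0,  0, 0],   # D1: first-order difference
    [1, -2,  1,  0, 0],   # D2: second-order difference
    [1, -3,  3, -1, 0],   # D3: third-order difference
    [1, -4,  6, -4, 1],   # D4: fourth-order difference
], dtype=np.float64)

def preprocess(audio):
    """Apply the 4 difference filters to a 1-D signal.

    Returns an array of shape (4, len(audio)); each row is the
    residual produced by one kernel.  Smooth regions of the input
    are suppressed, emphasizing the weak perturbations that
    steganographic embedding introduces."""
    audio = np.asarray(audio, dtype=np.float64)
    return np.stack([np.convolve(audio, k, mode="same") for k in KERNELS])
```

Because every kernel's taps sum to zero, a constant (perfectly smooth) input maps to zero residual away from the borders, which is exactly the high-pass behaviour wanted here.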
the audio pre-processing layer is followed by 5 concatenated convolution groups, namely, the 1 st convolution group, the 2 nd convolution group, the 3 rd convolution group, the 4 th convolution group and the 5 th convolution group.
The 1st convolution group includes a 1 × 1 first convolution layer, a 1 × 5 second convolution layer, and a 1 × 1 third convolution layer. The first convolution layer is activated by a Truncated Linear Unit (TLU). Compared with the rectified linear unit (ReLU) commonly used in deep-learning speech recognition tasks, the TLU suppresses the activation of overly large positive values while retaining some activation capability in the negative region; compared with the also-common TanH activation unit, the TLU has a larger activation interval and keeps a constant gradient within it, reducing the risk of vanishing gradients during training. In addition, the other convolution layers of the 1st convolution group perform no activation, and the pooling operation is omitted, the aim being to capture more effectively the weak signal introduced by steganography.
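The TLU activation described above can be sketched as a simple clamp; the threshold value 3.0 below is an illustrative assumption, not a value given in the patent:

```python
import numpy as np

def tlu(x, threshold=3.0):
    """Truncated Linear Unit: identity on [-T, T], clamped outside.

    Unlike ReLU it keeps negative responses, and unlike TanH it keeps
    a unit gradient over the whole interval (-T, T).  The threshold
    3.0 is an illustrative choice, not taken from the patent."""
    return np.clip(x, -threshold, threshold)
```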
The 2nd, 3rd, 4th and 5th convolution groups each contain a 1 × 5 convolution layer, a 1 × 1 convolution layer and a mean pooling layer, with the final mean pooling layer of the 5th convolution group replaced by a Global Average Pooling layer in order to fuse global features.
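Global average pooling, used in the 5th convolution group to fuse global features, simply averages each channel over the temporal axis; a minimal sketch:

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse a (channels, time) feature map to a (channels,)
    vector by averaging over the temporal axis, so each output value
    summarizes one channel over the whole input duration."""
    return np.asarray(feature_map, dtype=np.float64).mean(axis=-1)
```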
The feature extraction sub-network further comprises a feature output layer, the feature output layer is composed of a fully connected layer with 256 neurons, and the feature output layer finally outputs 256-dimensional steganalysis feature vectors F.
The detailed parameters of each sub-network are given in a table in the original document (not reproduced here). The notation used in that table is as follows: "64 × (1 × 5), ReLU" denotes a convolution layer with 64 output channels and a 1 × 5 kernel whose output is activated by ReLU; "FC-256" denotes a fully connected layer with 256 neurons.
The two-classification sub-network follows the feature output layer and consists of a fully connected layer containing 128 neurons. The feature vector F passes through this sub-network to output a binary steganography prediction probability vector ŷ = [ŷ0, ŷ1]. The cross-entropy loss Lm between ŷ and the One-hot-encoded binary steganography label vector y = [y0, y1] (y_i ∈ {0, 1}, where i denotes the class index and y_i = 1 indicates that the data belongs to class i) is computed as

Lm = -Σ_{i=0}^{1} y_i log(ŷ_i).

Finally, the network parameters are updated through back-propagation of the error and a gradient descent algorithm.
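The cross-entropy computation above can be sketched as follows (a generic implementation; the same function also covers the multi-class loss La when given M-dimensional vectors):

```python
import numpy as np

def cross_entropy(label, probs, eps=1e-12):
    """Cross-entropy loss -sum_i label_i * log(probs_i) between a
    One-hot label vector and a predicted probability vector.  With
    2-dimensional inputs this is Lm; with M-dimensional inputs it is
    the auxiliary loss La.  eps guards against log(0)."""
    label = np.asarray(label, dtype=np.float64)
    probs = np.asarray(probs, dtype=np.float64)
    return float(-np.sum(label * np.log(probs + eps)))
```

For a One-hot label, the sum reduces to the negative log-probability assigned to the true class, so a confident correct prediction gives a loss near zero.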
The multi-classification sub-network is a two-layer cascade of fully connected layers containing 128 and 64 neurons respectively. The feature vector F passes through this sub-network to output the prediction probability vector of the steganography algorithm type, m̂ = [m̂0, m̂1, …, m̂_{M-1}]. The cross-entropy loss La between m̂ and the One-hot-encoded steganography class label m = [m0, m1, …, m_{M-1}] (representing the M different steganographic algorithm classes contained in the training set data) is computed as

La = -Σ_{k=0}^{M-1} m_k log(m̂_k).

Finally, the network parameters are updated through back-propagation of the error and a gradient descent algorithm.
The optimization objective of the whole network is the combined loss of the main-task loss and the auxiliary-task loss, L = Lm + λ·La, where λ is the auxiliary-task weight factor that determines the importance of the auxiliary task relative to the main task: the larger λ is, the stronger the guidance the auxiliary task gives to the training of the main task, but also the greater the interference it introduces, so setting a reasonable λ is important.
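The combined loss is a simple weighted sum; the default λ = 0.1 below is an illustrative assumption, since the patent does not state a concrete value:

```python
def combined_loss(lm, la, lam=0.1):
    """Combined loss L = Lm + lambda * La.  lam weights the auxiliary
    multi-classification loss against the main two-classification
    loss; 0.1 is an illustrative default, not the patent's value."""
    return lm + lam * la
```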
Finally, the multi-classification sub-network also includes a Softmax activation layer, which outputs a prediction probability p(m_k) for each class. The more concentrated the prediction probability distribution is, the more reliable the network's prediction can be considered. Since information entropy reflects the concentration of a distribution, the confidence value of the prediction probability is computed from the information entropy and output as

C(m) = -Σ_{k=0}^{M-1} p(m_k) log p(m_k).
As known from information theory, the confidence value C(m) attains its maximum log M when the output probabilities are uniformly distributed (i.e., the prediction probability of every algorithm type is 1/M): C(m)_max = log M. The empirical confidence threshold is set accordingly as CT = 0.5·C(m)_max.
When the confidence value C(m) is greater than CT, the prediction probability distribution is considered close to uniform and the network has low prediction confidence for every algorithm type, so the algorithm can be considered not to be included in the training data, i.e., an unknown steganography algorithm. Conversely, when C(m) is less than CT, the prediction probability distribution is concentrated, and the type with the highest prediction probability is selected as the output type for the data.
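The entropy-based confidence value and the unknown-algorithm decision rule described above can be sketched as follows (sketch only; class indices stand in for concrete algorithm names):

```python
import numpy as np

def confidence(probs, eps=1e-12):
    """Entropy-based confidence value C(m) of a prediction
    probability distribution over M steganography algorithm types."""
    probs = np.asarray(probs, dtype=np.float64)
    return float(-np.sum(probs * np.log(probs + eps)))

def classify(probs):
    """Apply the threshold CT = 0.5 * log(M): a near-uniform
    distribution (high entropy, low confidence in any one class) is
    reported as an unknown algorithm; otherwise the index of the most
    probable known class is returned."""
    ct = 0.5 * np.log(len(probs))
    if confidence(probs) > ct:
        return "unknown"
    return int(np.argmax(probs))
```

A uniform 4-class distribution has entropy log 4 ≈ 1.386, above CT ≈ 0.693, so it is flagged as unknown; a sharply peaked distribution falls well below CT and the peak class is returned.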
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A general audio steganalysis method based on a convolutional neural network and multitask learning, characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network, a two-classification sub-network and a multi-classification sub-network, and the method comprises the following steps:
s1, inputting audio data;
s2, outputting a steganalysis feature vector F through a feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the two-classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, the steganalysis feature vector F is passed through the two-classification sub-network to output a binary steganography prediction probability vector ŷ = [ŷ0, ŷ1]; the cross-entropy loss Lm between ŷ and the One-hot-encoded binary steganography label vector y = [y0, y1] is computed as

Lm = -Σ_{i=0}^{1} y_i log(ŷ_i),

wherein y_i ∈ {0, 1} and i ∈ {0, 1} denotes the class index; the parameters of the two-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S5, the steganalysis feature vector F is passed through the multi-classification sub-network to output the prediction probability vector of the steganography algorithm type, m̂ = [m̂0, m̂1, …, m̂_{M-1}]; the cross-entropy loss La between m̂ and the One-hot-encoded steganography class label m = [m0, m1, …, m_{M-1}] is computed as

La = -Σ_{k=0}^{M-1} m_k log(m̂_k),

wherein M denotes the number of different steganographic algorithms contained in the training set data; the parameters of the multi-classification sub-network are updated through back-propagation of the error and a gradient descent algorithm;
S6, updating the network parameters according to the combined loss L = Lm + λ·La, wherein λ is an auxiliary-task weight factor;
s7, calculating a confidence value C (m) of the prediction probability through the multi-classification sub-network;
S8, judging whether the confidence value C(m) is greater than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the type of the steganography algorithm.
2. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 1, characterized in that:
the feature extraction sub-network in S2 includes an audio pre-processing layer and 5 concatenated convolution groups after the audio pre-processing layer, namely, a 1 st convolution group, a 2 nd convolution group, a 3 rd convolution group, a 4 th convolution group and a 5 th convolution group.
3. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 2, characterized in that:
the audio preprocessing layer consists of 4 1 multiplied by 5 convolution kernels D1-D4, and the initial weights are respectively as follows:
D1=[1,-1,0,0,0],D1=[1,-2,1,0,0],D1=[1,-3,3,1,0],D1=[1,-4,6,-4,1];
the 1 st convolution group includes a 1 × 1 first convolution layer, a 1 × 5 second convolution layer, and a 1 × 1 third convolution layer;
the 2 nd convolution group, the 3 rd convolution group, the 4 th convolution group and the 5 th convolution group respectively comprise a 1 x 5 convolution layer, a 1 x 1 convolution layer and a mean pooling layer, wherein the mean pooling layer of the 5 th convolution group is a global mean pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
4. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 3, characterized in that:
the audio preprocessing layer adopts a differential filtering design.
5. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 3, characterized in that:
the first convolution layer in the 1 st convolution group is activated using a truncated linear unit TLU.
6. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 1, characterized in that:
the bi-classification subnetwork includes a fully connected layer with 128 neurons and a binary steganographic label prediction layer.
7. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 1, characterized in that:
the multi-class subnetwork includes two cascaded fully-connected layers and steganographic class label prediction layers, the two cascaded layers having 128 neurons and 64 neurons, respectively.
8. The convolutional neural network and multitask learning based general audio steganalysis method according to claim 1, characterized in that:
the confidence value C (m) in S8 is calculated by the formula
Figure FDA0002494662050000021
Setting the empirical threshold CT-0.5C (m)maxWherein C (m)max=logM。
CN202010415020.0A 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning Active CN111768803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010415020.0A CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010415020.0A CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Publications (2)

Publication Number Publication Date
CN111768803A true CN111768803A (en) 2020-10-13
CN111768803B CN111768803B (en) 2024-01-30

Family

ID=72719425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010415020.0A Active CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Country Status (1)

Country Link
CN (1) CN111768803B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114462382A (en) * 2022-03-17 2022-05-10 长沙理工大学 Multi-class natural language steganalysis method
CN115457985A (en) * 2022-09-15 2022-12-09 北京邮电大学 Visual audio steganography method based on convolutional neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108923922A (en) * 2018-07-26 2018-11-30 北京工商大学 A kind of text steganography method based on generation confrontation network
US10459975B1 (en) * 2016-12-20 2019-10-29 Shutterstock, Inc. Method and system for creating an automatic video summary
CN110428846A (en) * 2019-07-08 2019-11-08 清华大学 Voice-over-net stream steganalysis method and device based on bidirectional circulating neural network
WO2019222401A2 (en) * 2018-05-17 2019-11-21 Magic Leap, Inc. Gradient adversarial training of neural networks
CN110968845A (en) * 2019-11-19 2020-04-07 天津大学 Detection method for LSB steganography based on convolutional neural network generation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10459975B1 (en) * 2016-12-20 2019-10-29 Shutterstock, Inc. Method and system for creating an automatic video summary
WO2019222401A2 (en) * 2018-05-17 2019-11-21 Magic Leap, Inc. Gradient adversarial training of neural networks
CN108923922A (en) * 2018-07-26 2018-11-30 北京工商大学 A kind of text steganography method based on generation confrontation network
CN110428846A (en) * 2019-07-08 2019-11-08 清华大学 Voice-over-net stream steganalysis method and device based on bidirectional circulating neural network
CN110968845A (en) * 2019-11-19 2020-04-07 天津大学 Detection method for LSB steganography based on convolutional neural network generation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAO JIN et al.: "A novel detection scheme for MP3Stego with low payload", 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP) *
ZHANG Jian et al.: "CNN-based steganalysis of low embedding rate MP3Stego", Wireless Communication Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114462382A (en) * 2022-03-17 2022-05-10 长沙理工大学 Multi-class natural language steganalysis method
CN115457985A (en) * 2022-09-15 2022-12-09 北京邮电大学 Visual audio steganography method based on convolutional neural network
CN115457985B (en) * 2022-09-15 2023-04-07 北京邮电大学 Visual audio steganography method based on convolutional neural network

Also Published As

Publication number Publication date
CN111768803B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
Rajasegaran et al. Self-supervised knowledge distillation for few-shot learning
CN107516110B (en) Medical question-answer semantic clustering method based on integrated convolutional coding
CN109543039B (en) Natural language emotion analysis method based on deep network
Xu et al. Investigation on the Chinese text sentiment analysis based on convolutional neural networks in deep learning.
CN109766557B (en) Emotion analysis method and device, storage medium and terminal equipment
Hong et al. Sentiment analysis with deeply learned distributed representations of variable length texts
CN110472695B (en) Abnormal working condition detection and classification method in industrial production process
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN112015863A (en) Multi-feature fusion Chinese text classification method based on graph neural network
CN113220886A (en) Text classification method, text classification model training method and related equipment
CN113326377A (en) Name disambiguation method and system based on enterprise incidence relation
CN111768803A (en) General audio steganalysis method based on convolutional neural network and multi-task learning
CN113987187A (en) Multi-label embedding-based public opinion text classification method, system, terminal and medium
CN111768792A (en) Audio steganalysis method based on convolutional neural network and domain confrontation learning
CN111522953B (en) Marginal attack method and device for naive Bayes classifier and storage medium
CN110991247B (en) Electronic component identification method based on deep learning and NCA fusion
CN114780767A (en) Large-scale image retrieval method and system based on deep convolutional neural network
CN111079930A (en) Method and device for determining quality parameters of data set and electronic equipment
Fonseca et al. Model-agnostic approaches to handling noisy labels when training sound event classifiers
CN114491289A (en) Social content depression detection method of bidirectional gated convolutional network
CN116957304B (en) Unmanned aerial vehicle group collaborative task allocation method and system
Jenny Li et al. Evaluating deep learning biases based on grey-box testing results
CN116467930A (en) Transformer-based structured data general modeling method
CN116644798A (en) Knowledge distillation method, device, equipment and storage medium based on multiple teachers
CN115495579A (en) Method and device for classifying text of 5G communication assistant, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240104

Address after: Chinatelecom tower, No. 19, Chaoyangmen North Street, Dongcheng District, Beijing 100010

Applicant after: Tianyi Safety Technology Co.,Ltd.

Address before: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant before: Shenzhen Hongyue Information Technology Co.,Ltd.

Effective date of registration: 20240104

Address after: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant after: Shenzhen Hongyue Information Technology Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Applicant before: Ningbo University

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant