CN111768803B - General audio steganalysis method based on convolutional neural network and multitask learning - Google Patents

General audio steganalysis method based on convolutional neural network and multitask learning

Info

Publication number
CN111768803B
CN111768803B (application CN202010415020.0A)
Authority
CN
China
Prior art keywords
network
layer
audio
convolution
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010415020.0A
Other languages
Chinese (zh)
Other versions
CN111768803A (en)
Inventor
王让定
林昱臻
严迪群
董理
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongyue Information Technology Co ltd
Tianyi Safety Technology Co Ltd
Original Assignee
Tianyi Safety Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Safety Technology Co Ltd filed Critical Tianyi Safety Technology Co Ltd
Priority to CN202010415020.0A priority Critical patent/CN111768803B/en
Publication of CN111768803A publication Critical patent/CN111768803A/en
Application granted granted Critical
Publication of CN111768803B publication Critical patent/CN111768803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L 25/30: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Complex Calculations (AREA)

Abstract

The invention relates to a general audio steganalysis method based on a convolutional neural network and multi-task learning. The network framework corresponding to the method comprises a feature extraction sub-network, a binary classification sub-network, and a multi-class classification sub-network. By providing a general audio steganalysis model and analysis method based on a convolutional neural network and multi-task learning, the detection of various audio steganography algorithms is effectively improved. Moreover, the method improves the ability to detect unknown steganography algorithms, which facilitates the application of audio steganalysis in complex Internet big-data forensics scenarios.

Description

General audio steganalysis method based on convolutional neural network and multitask learning
Technical Field
The invention relates to the technical field of audio steganography, and in particular to a general audio steganalysis method based on a convolutional neural network and multi-task learning.
Background
Current audio steganalysis models based on deep learning achieve high detection performance under laboratory conditions. However, in a real network big-data forensics environment, stego audio may be generated by a variety of steganography algorithms, including algorithms that were not used in the training dataset. In this scenario, if the steganalyst directly applies a model trained in the laboratory, the steganographic algorithm mismatch (Steganographic Algorithm Mismatch, SAM) problem in audio steganalysis arises and accuracy is greatly compromised.
SAM arises in the generation of the stego carriers and specifically refers to a difference between the embedding methods used to generate the stego carriers in the training set and in the test set. In this setting the steganalyst knows the statistical properties of the carrier source and only needs to design and train a classifier using a carrier database with the same statistical properties; however, since the steganography algorithm is unknown, the feature distributions of the stego carriers in the training stage and in the testing stage may differ to some extent, so that a classifier with very good detection performance in the training stage may fail in the testing stage.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a general audio steganalysis method based on a convolutional neural network and multi-task learning that effectively improves the detection of audio steganography algorithms and the ability to detect unknown steganography algorithms.
In order to achieve the above purpose, the technical scheme of the invention is as follows. The general audio steganalysis method based on a convolutional neural network and multi-task learning is characterized in that the network framework corresponding to the method comprises a feature extraction sub-network, a binary classification sub-network, and a multi-class classification sub-network, and the method comprises:
S1, inputting audio data;
S2, outputting a steganalysis feature vector F through the feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the binary classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, passing the steganalysis feature vector F through the binary classification sub-network to obtain a binary steganalysis prediction probability vector $\hat{y}=[\hat{y}_0,\hat{y}_1]$; calculating the cross-entropy loss $L_m$ between $\hat{y}$ and the One-hot encoded binary steganography label vector $y=[y_0,y_1]$, $L_m=-\sum_{i=0}^{1} y_i\log\hat{y}_i$, where $y_i\in\{0,1\}$ and $i\in[0,1]$ is the class index; and updating the parameters of the binary classification sub-network through back-propagation of the error and a gradient descent algorithm;
S5, passing the steganalysis feature vector F through the multi-class classification sub-network to obtain a prediction probability vector of the steganography algorithm type $\hat{m}=[\hat{m}_0,\hat{m}_1,\ldots,\hat{m}_{M-1}]$; calculating the cross-entropy loss $L_a$ between $\hat{m}$ and the One-hot encoded steganography class label $m=[m_0,m_1,\ldots,m_{M-1}]$, $L_a=-\sum_{k=0}^{M-1} m_k\log\hat{m}_k$, where M is the number of different steganography algorithms contained in the training set data; and updating the parameters of the multi-class classification sub-network through back-propagation of the error and a gradient descent algorithm;
S6, setting the combined loss $L=L_m+\lambda L_a$, where λ is the auxiliary-task weight factor;
S7, calculating a confidence value C(m) of the prediction probability through the multi-class classification sub-network;
S8, judging whether the confidence value C(m) is larger than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the steganography algorithm type.
Further, the feature extraction sub-network in S2 comprises an audio preprocessing layer followed by 5 cascaded convolution groups, namely a 1st convolution group, a 2nd convolution group, a 3rd convolution group, a 4th convolution group, and a 5th convolution group.
Further, the audio preprocessing layer consists of four 1×5 convolution kernels D1 to D4, with initial weights respectively:
D1=[1,-1,0,0,0], D2=[1,-2,1,0,0], D3=[1,-3,3,1,0], D4=[1,-4,6,-4,1];
the 1st convolution group includes a 1×1 first convolution layer, a 1×5 second convolution layer, and a 1×1 third convolution layer;
the 2nd convolution group, the 3rd convolution group, the 4th convolution group, and the 5th convolution group each comprise a 1×5 convolution layer, a 1×1 convolution layer, and a mean pooling layer, wherein the mean pooling layer of the 5th convolution group is a global mean pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
Furthermore, the audio preprocessing layer adopts a differential filtering design.
Further, the first convolution layer in the 1st convolution group is activated using a truncated linear unit (TLU).
Further, the binary classification sub-network includes a fully connected layer having 128 neurons and a binary steganography label prediction layer.
Further, the multi-class classification sub-network comprises two cascaded fully connected layers, containing 128 and 64 neurons respectively, followed by a steganography class label prediction layer.
Further, the confidence value C(m) in S8 is calculated as $C(m)=-\sum_{k=0}^{M-1} p(m_k)\log p(m_k)$, and the empirical threshold is set as $CT=0.5\times C(m)_{\max}$, where $C(m)_{\max}=\log M$.
Compared with the prior art, the invention has the advantages that:
by providing the general audio steganography analysis model and the analysis method based on convolutional neural network and multitask learning, the detection effect of various audio steganography algorithms is effectively improved; moreover, the method improves the detection capability of an unknown steganography algorithm, and is convenient for the application of the audio steganography analysis technology in a complex Internet big data evidence obtaining scene.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
The invention provides a general audio steganalysis method based on a convolutional neural network and multi-task learning, which comprises the following steps:
S1, inputting audio data;
S2, outputting a steganalysis feature vector F through the feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the binary classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, passing the steganalysis feature vector F through the binary classification sub-network to obtain a binary steganalysis prediction probability vector $\hat{y}=[\hat{y}_0,\hat{y}_1]$; calculating the cross-entropy loss $L_m$ between $\hat{y}$ and the One-hot encoded binary steganography label vector $y=[y_0,y_1]$, $L_m=-\sum_{i=0}^{1} y_i\log\hat{y}_i$, where $y_i\in\{0,1\}$ and $i\in[0,1]$ is the class index; and updating the parameters of the binary classification sub-network through back-propagation of the error and a gradient descent algorithm;
S5, passing the steganalysis feature vector F through the multi-class classification sub-network to obtain a prediction probability vector of the steganography algorithm type $\hat{m}=[\hat{m}_0,\hat{m}_1,\ldots,\hat{m}_{M-1}]$; calculating the cross-entropy loss $L_a$ between $\hat{m}$ and the One-hot encoded steganography class label $m=[m_0,m_1,\ldots,m_{M-1}]$, $L_a=-\sum_{k=0}^{M-1} m_k\log\hat{m}_k$, where M is the number of different steganography algorithms contained in the training set data; and updating the parameters of the multi-class classification sub-network through back-propagation of the error and a gradient descent algorithm;
S6, setting the combined loss $L=L_m+\lambda L_a$, where λ is the auxiliary-task weight factor;
S7, calculating a confidence value C(m) of the prediction probability through the multi-class classification sub-network;
S8, judging whether the confidence value C(m) is larger than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the steganography algorithm type (this decision flow is sketched below).
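For orientation, the following is a minimal sketch of the S1-S8 decision flow at inference time, written in PyTorch-style Python. The names feature_net, binary_head and multi_head are hypothetical stand-ins for the three sub-networks, single-example tensors are assumed, and the training-time updates of S4-S6 are omitted here; this is an illustrative sketch under those assumptions, not the patented implementation itself.

```python
import math
import torch

def steganalyze(audio, feature_net, binary_head, multi_head):
    """Sketch of S1-S8 at inference time; returns 'cover', ('stego', k) or ('stego', 'unknown')."""
    f = feature_net(audio)                                  # S2: 256-dim steganalysis feature F
    p_bin = torch.softmax(binary_head(f), dim=-1)           # S3: cover vs. stego probabilities
    if p_bin[..., 1].item() < 0.5:
        return "cover"                                      # normal audio, stop here
    p_alg = torch.softmax(multi_head(f), dim=-1)            # S5: algorithm-type probabilities
    m = p_alg.shape[-1]                                     # number of known algorithms M
    c = -(p_alg * torch.log(p_alg + 1e-12)).sum().item()    # S7: confidence value C(m)
    if c > 0.5 * math.log(m):                               # S8: C(m) > CT -> unknown algorithm
        return ("stego", "unknown")
    return ("stego", int(p_alg.argmax().item()))            # known steganography algorithm index
```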
Two related steganalysis tasks are constructed in this application: a binary classification task that distinguishes normal audio (Cover) from stego audio (Stego), and a multi-class classification task that identifies the type of steganography algorithm used in the stego audio. Of the two, the binary classification task of distinguishing normal audio from stego audio is the primary target of this work and can be regarded as the main task (Main task).
In particular, the feature extraction sub-network adaptively extracts steganalysis features from the input audio data. Arranging a well-designed preprocessing layer in a CNN (Convolutional Neural Network) steganalysis model improves the steganalysis performance of the network. Therefore, an audio preprocessing layer based on a differential filtering design is placed at the beginning of the feature extraction sub-network; it consists of four 1×5 convolution kernels D1 to D4 with the following initial weights: D1=[1,-1,0,0,0], D2=[1,-2,1,0,0], D3=[1,-3,3,1,0], D4=[1,-4,6,-4,1].
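As an illustration, the sketch below implements such a fixed difference-filter preprocessing layer in PyTorch. Treating the audio as a [batch, 1, samples] tensor and using a 1-D convolution in place of the 1×5 kernels is an assumption, as is keeping the filter weights fixed rather than fine-tuned; the class name DiffPreprocess is hypothetical.

```python
import torch
import torch.nn as nn

class DiffPreprocess(nn.Module):
    """Differential-filtering preprocessing layer with the four fixed 1x5 kernels D1-D4."""
    def __init__(self, trainable=False):
        super().__init__()
        kernels = torch.tensor([
            [1., -1., 0., 0., 0.],   # D1: 1st-order difference
            [1., -2., 1., 0., 0.],   # D2: 2nd-order difference
            [1., -3., 3., 1., 0.],   # D3 (weights as given in the text)
            [1., -4., 6., -4., 1.],  # D4: 4th-order difference
        ]).view(4, 1, 5)
        self.conv = nn.Conv1d(1, 4, kernel_size=5, padding=2, bias=False)
        self.conv.weight.data.copy_(kernels)
        self.conv.weight.requires_grad = trainable   # fixed high-pass filters by default

    def forward(self, x):            # x: [batch, 1, samples]
        return self.conv(x)          # -> [batch, 4, samples], one channel per filter
```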
the audio preprocessing layer is followed by 5 concatenated convolutions, namely, the 1 st convolutions, the 2 nd convolutions, the 3 rd convolutions, the 4 th convolutions, and the 5 th convolutions.
The 1st convolution group includes a 1×1 first convolution layer, a 1×5 second convolution layer, and a 1×1 third convolution layer. The first convolution layer is activated with a truncated linear unit (TLU). Compared with the linear rectification unit (ReLU) commonly used in deep-learning speech recognition tasks, the TLU suppresses activation in the over-large positive region while keeping some activation capacity in the negative region; compared with the other commonly used tanh activation unit, the TLU has a larger activation interval within which the gradient stays constant, which reduces the risk of vanishing gradients when training the network. In addition, the other convolution layers of the 1st convolution group apply no activation, and pooling is omitted, in order to capture more effectively the weak traces left by steganography.
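A minimal sketch of a TLU with this clipping behaviour is given below (PyTorch assumed). The threshold value is not specified in the text, so T = 3 is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class TLU(nn.Module):
    """Truncated linear unit: identity inside [-T, T], clipped outside."""
    def __init__(self, threshold: float = 3.0):
        super().__init__()
        self.threshold = threshold

    def forward(self, x):
        # Suppresses overly large positive responses while keeping activation for negative inputs.
        return torch.clamp(x, min=-self.threshold, max=self.threshold)
```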
The 2nd, 3rd, 4th, and 5th convolution groups each comprise a 1×5 convolution layer, a 1×1 convolution layer, and a mean pooling layer, where the mean pooling layer of the 5th convolution group is replaced by a global average pooling layer in order to fuse global features.
The feature extraction sub-network further comprises a feature output layer, composed of a fully connected layer with 256 neurons, which finally outputs the 256-dimensional steganalysis feature vector F.
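Putting the pieces together, the sketch below outlines one possible PyTorch layout of the feature extraction sub-network, reusing the DiffPreprocess and TLU sketches above. The channel counts, pooling parameters, and the ReLU activations in groups 2-5 are assumptions chosen for illustration; only the layer pattern (preprocessing, five convolution groups, global average pooling, FC-256) follows the description.

```python
import torch.nn as nn

def conv_group(in_ch, out_ch, last=False):
    """One of convolution groups 2-5: 1x5 conv, 1x1 conv, then mean pooling
    (global average pooling for the last group)."""
    pool = nn.AdaptiveAvgPool1d(1) if last else nn.AvgPool1d(kernel_size=3, stride=2, padding=1)
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(out_ch, out_ch, kernel_size=1), nn.ReLU(),
        pool,
    )

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre = DiffPreprocess()          # fixed difference filters (sketch above)
        self.group1 = nn.Sequential(         # group 1: 1x1 + TLU, then 1x5 and 1x1, no pooling
            nn.Conv1d(4, 8, kernel_size=1), TLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2),
            nn.Conv1d(8, 16, kernel_size=1),
        )
        self.groups = nn.Sequential(         # groups 2-5, the last with global average pooling
            conv_group(16, 32), conv_group(32, 64),
            conv_group(64, 128), conv_group(128, 256, last=True),
        )
        self.fc = nn.Linear(256, 256)        # feature output layer -> 256-dim vector F

    def forward(self, x):                    # x: [batch, 1, samples]
        h = self.groups(self.group1(self.pre(x)))   # -> [batch, 256, 1]
        return self.fc(h.squeeze(-1))                # 256-dim steganalysis feature F
```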
The detailed parameters of each subnetwork are shown in the following table:
Examples of the notation used in the table: "64×(1×5), ReLU" indicates a convolution layer with 64 output channels and a 1×5 kernel whose output is activated with ReLU; "FC-256" denotes a fully connected layer with 256 neurons.
The binary classification sub-network follows the feature output layer and consists of a fully connected layer containing 128 neurons. The feature vector F is passed through this sub-network to output the binary steganalysis prediction probability vector $\hat{y}=[\hat{y}_0,\hat{y}_1]$. The cross-entropy loss $L_m$ between $\hat{y}$ and the One-hot encoded binary steganography label vector $y=[y_0,y_1]$ ($y_i\in\{0,1\}$, where i is the class index and a value of 1 at index i indicates that the data belongs to class i) is calculated as $L_m=-\sum_{i=0}^{1} y_i\log\hat{y}_i$. Finally, the network parameters are updated through back-propagation of the error and a gradient descent algorithm.
The multi-class classification sub-network is structured as two cascaded fully connected layers containing 128 and 64 neurons respectively. The feature vector F is passed through this sub-network to output the prediction probability vector of the steganography algorithm type $\hat{m}=[\hat{m}_0,\hat{m}_1,\ldots,\hat{m}_{M-1}]$. The cross-entropy loss $L_a$ between $\hat{m}$ and the One-hot encoded steganography class label $m=[m_0,m_1,\ldots,m_{M-1}]$ (where M is the number of different steganography algorithms contained in the training set data) is calculated as $L_a=-\sum_{k=0}^{M-1} m_k\log\hat{m}_k$. Finally, the network parameters are updated through back-propagation of the error and a gradient descent algorithm.
The optimization problem to be solved by the whole network is the combined loss of the main-task loss and the auxiliary-task loss, $L=L_m+\lambda L_a$, where λ is the auxiliary-task weight factor. It determines how strongly the auxiliary task influences the main task: the larger λ is, the more the auxiliary task guides the training of the main task, but correspondingly the more interfering information it introduces, so setting a reasonable λ is also important.
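For concreteness, the following sketch shows one way to express the two task heads and the combined loss $L=L_m+\lambda L_a$ in PyTorch. The hidden-layer sizes follow the description (128 for the binary head, 128 and 64 for the multi-class head), but the ReLU activations between layers, the default value of λ, and the use of class indices instead of explicit One-hot vectors are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F_  # aliased to avoid clashing with the feature vector F

class BinaryHead(nn.Module):
    """Binary classification sub-network: FC-128 followed by a 2-way prediction layer."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, f):
        return self.net(f)

class MultiHead(nn.Module):
    """Multi-class sub-network: FC-128 and FC-64 cascaded, then an M-way prediction layer."""
    def __init__(self, num_algorithms, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(),
                                 nn.Linear(64, num_algorithms))

    def forward(self, f):
        return self.net(f)

def multitask_loss(bin_logits, alg_logits, y, m, lam=0.1):
    """Combined loss L = L_m + lambda * L_a; y and m are class indices, lam is an assumed value."""
    l_m = F_.cross_entropy(bin_logits, y)   # main-task (cover/stego) cross entropy
    l_a = F_.cross_entropy(alg_logits, m)   # auxiliary-task (algorithm type) cross entropy
    return l_m + lam * l_a
```

In a training step, both losses would be computed on the same batch and the combined loss back-propagated once, which is what couples the auxiliary algorithm-type task to the main cover/stego task.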
The multi-class classification sub-network finally also includes a Softmax activation layer, whose output can be regarded as the predicted probability $p(m_k)$ of each steganography algorithm type. The more concentrated the prediction probability distribution is, the more credible the network prediction; since information entropy reflects the degree of concentration of a distribution, the confidence value of the prediction probability is calculated and output according to the information entropy as $C(m)=-\sum_{k=0}^{M-1} p(m_k)\log p(m_k)$.
From information theory, the confidence value C(m) takes its maximum value log M when the output probabilities are uniformly distributed (i.e. the predicted probability of every algorithm type is 1/M), that is, $C(m)_{\max}=\log M$. The empirical confidence threshold CT is therefore set as $CT=0.5\,C(m)_{\max}$.
When the confidence value C(m) is greater than CT, the prediction probability distribution is considered close to uniform and the network has little confidence in any particular algorithm type, so the algorithm can be regarded as not contained in the training data, i.e. an unknown steganography algorithm. Conversely, when C(m) is smaller than CT, the prediction probability distribution is concentrated, and the type with the highest predicted probability is selected as the output type of the data.
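The decision rule of S7-S8 can be summarised by the short sketch below (PyTorch assumed, single example). The natural logarithm is an assumption, since the text only writes log, and the function name is hypothetical.

```python
import math
import torch

def detect_algorithm(alg_logits):
    """Return the predicted algorithm index, or a marker for an unknown steganography algorithm."""
    p = torch.softmax(alg_logits, dim=-1)                  # Softmax output p(m_k)
    num_algorithms = p.shape[-1]                           # M known algorithm types
    confidence = -(p * torch.log(p + 1e-12)).sum().item()  # C(m): entropy of the prediction
    ct = 0.5 * math.log(num_algorithms)                    # empirical threshold CT = 0.5 * log M
    if confidence > ct:                                    # near-uniform distribution
        return "unknown steganography algorithm"
    return int(p.argmax().item())                          # most probable known algorithm
```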
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A general audio steganalysis method based on a convolutional neural network and multi-task learning, characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network, a binary classification sub-network, and a multi-class classification sub-network, and the method comprises:
S1, inputting audio data;
S2, outputting a steganalysis feature vector F through the feature extraction sub-network;
S3, judging whether the audio data is a steganographic carrier through the binary classification sub-network; if so, executing S4-S8 in sequence, and if not, outputting the audio data as normal audio;
S4, passing the steganalysis feature vector F through the binary classification sub-network to obtain a binary steganalysis prediction probability vector $\hat{y}=[\hat{y}_0,\hat{y}_1]$; calculating the cross-entropy loss $L_m$ between $\hat{y}$ and the One-hot encoded binary steganography label vector $y=[y_0,y_1]$, $L_m=-\sum_{i=0}^{1} y_i\log\hat{y}_i$, where $y_i\in\{0,1\}$ and $i\in[0,1]$ is the class index; and updating the parameters of the binary classification sub-network through back-propagation of the error and a gradient descent algorithm;
S5, passing the steganalysis feature vector F through the multi-class classification sub-network to obtain a prediction probability vector of the steganography algorithm type $\hat{m}=[\hat{m}_0,\hat{m}_1,\ldots,\hat{m}_{M-1}]$; calculating the cross-entropy loss $L_a$ between $\hat{m}$ and the One-hot encoded steganography class label $m=[m_0,m_1,\ldots,m_{M-1}]$, $L_a=-\sum_{k=0}^{M-1} m_k\log\hat{m}_k$, where M is the number of different steganography algorithms contained in the training set data; and updating the parameters of the multi-class classification sub-network through back-propagation of the error and a gradient descent algorithm;
S6, setting the combined loss $L=L_m+\lambda L_a$, where λ is the auxiliary-task weight factor;
S7, calculating a confidence value C(m) of the prediction probability through the multi-class classification sub-network;
S8, judging whether the confidence value C(m) is larger than a set empirical threshold CT; if so, outputting the result as an unknown steganography algorithm, and if not, outputting the steganography algorithm type.
2. The general audio steganalysis method based on a convolutional neural network and multi-task learning of claim 1, characterized in that:
the feature extraction sub-network in S2 comprises an audio preprocessing layer followed by 5 cascaded convolution groups, namely a 1st convolution group, a 2nd convolution group, a 3rd convolution group, a 4th convolution group, and a 5th convolution group.
3. The general audio steganalysis method based on a convolutional neural network and multi-task learning of claim 2, characterized in that:
the audio preprocessing layer consists of four 1×5 convolution kernels D1 to D4, with initial weights respectively:
D1=[1,-1,0,0,0], D2=[1,-2,1,0,0], D3=[1,-3,3,1,0], D4=[1,-4,6,-4,1];
the 1st convolution group includes a 1×1 first convolution layer, a 1×5 second convolution layer, and a 1×1 third convolution layer;
the 2nd convolution group, the 3rd convolution group, the 4th convolution group, and the 5th convolution group each comprise a 1×5 convolution layer, a 1×1 convolution layer, and a mean pooling layer, wherein the mean pooling layer of the 5th convolution group is a global mean pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
4. The general audio steganalysis method based on a convolutional neural network and multi-task learning as claimed in claim 3, characterized in that:
the audio preprocessing layer adopts a differential filtering design.
5. The general audio steganalysis method based on a convolutional neural network and multi-task learning as claimed in claim 3, characterized in that:
the first convolution layer in the 1st convolution group is activated with a truncated linear unit (TLU).
6. The general audio steganalysis method based on a convolutional neural network and multi-task learning of claim 1, characterized in that:
the binary classification sub-network includes a fully connected layer having 128 neurons and a binary steganography label prediction layer.
7. The general audio steganalysis method based on a convolutional neural network and multi-task learning of claim 1, characterized in that:
the multi-class classification sub-network comprises two cascaded fully connected layers, containing 128 and 64 neurons respectively, followed by a steganography class label prediction layer.
8. The general audio steganalysis method based on a convolutional neural network and multi-task learning of claim 1, characterized in that:
the confidence value C(m) in S8 is calculated as $C(m)=-\sum_{k=0}^{M-1} p(m_k)\log p(m_k)$, where $p(m_k)$ is the predicted probability of the k-th steganography algorithm type, and the empirical threshold is set as $CT=0.5\times C(m)_{\max}$, where $C(m)_{\max}=\log M$.
CN202010415020.0A 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning Active CN111768803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010415020.0A CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010415020.0A CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Publications (2)

Publication Number Publication Date
CN111768803A (en) 2020-10-13
CN111768803B (en) 2024-01-30

Family

ID=72719425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010415020.0A Active CN111768803B (en) 2020-05-15 2020-05-15 General audio steganalysis method based on convolutional neural network and multitask learning

Country Status (1)

Country Link
CN (1) CN111768803B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114462382A (en) * 2022-03-17 2022-05-10 长沙理工大学 Multi-class natural language steganalysis method
CN115457985B (en) * 2022-09-15 2023-04-07 北京邮电大学 Visual audio steganography method based on convolutional neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10459975B1 (en) * 2016-12-20 2019-10-29 Shutterstock, Inc. Method and system for creating an automatic video summary
WO2019222401A2 (en) * 2018-05-17 2019-11-21 Magic Leap, Inc. Gradient adversarial training of neural networks
CN108923922A (en) * 2018-07-26 2018-11-30 北京工商大学 A kind of text steganography method based on generation confrontation network
CN110428846A (en) * 2019-07-08 2019-11-08 清华大学 Voice-over-net stream steganalysis method and device based on bidirectional circulating neural network
CN110968845A (en) * 2019-11-19 2020-04-07 天津大学 Detection method for LSB steganography based on convolutional neural network generation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel detection scheme for MP3Stego with low payload; Chao Jin et al.; 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP); 2014-07-13; full text *
CNN-based steganalysis of low-embedding-rate MP3Stego; Zhang Jian et al.; Wireless Communication Technology (无线通信技术); No. 03, 2018-09-15; full text *

Also Published As

Publication number Publication date
CN111768803A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
Liu et al. An embedded feature selection method for imbalanced data classification
Kwon et al. Beta shapley: a unified and noise-reduced data valuation framework for machine learning
Cao et al. Deep neural networks for learning graph representations
Xu et al. Investigation on the Chinese text sentiment analysis based on convolutional neural networks in deep learning.
CN110929029A (en) Text classification method and system based on graph convolution neural network
Lin et al. What Does Social Media Say about Your Stress?.
CN109993100B (en) Method for realizing facial expression recognition based on deep feature clustering
CN109063719B (en) Image classification method combining structure similarity and class information
CN112015863A (en) Multi-feature fusion Chinese text classification method based on graph neural network
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN110472695B (en) Abnormal working condition detection and classification method in industrial production process
CN112199536A (en) Cross-modality-based rapid multi-label image classification method and system
CN110321805B (en) Dynamic expression recognition method based on time sequence relation reasoning
CN111768803B (en) General audio steganalysis method based on convolutional neural network and multitask learning
CN113159171B (en) Plant leaf image fine classification method based on counterstudy
CN110851654A (en) Industrial equipment fault detection and classification method based on tensor data dimension reduction
CN113032525A (en) False news detection method and device, electronic equipment and storage medium
CN111522953B (en) Marginal attack method and device for naive Bayes classifier and storage medium
CN110991247B (en) Electronic component identification method based on deep learning and NCA fusion
CN111768792A (en) Audio steganalysis method based on convolutional neural network and domain confrontation learning
Zhang et al. Attention pooling-based bidirectional gated recurrent units model for sentimental classification
CN112988548A (en) Improved Elman neural network prediction method based on noise reduction algorithm
CN112786160A (en) Multi-image input multi-label gastroscope image classification method based on graph neural network
Fonseca et al. Model-agnostic approaches to handling noisy labels when training sound event classifiers
CN114491289A (en) Social content depression detection method of bidirectional gated convolutional network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240104

Address after: Chinatelecom tower, No. 19, Chaoyangmen North Street, Dongcheng District, Beijing 100010

Applicant after: Tianyi Safety Technology Co.,Ltd.

Address before: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant before: Shenzhen Hongyue Information Technology Co.,Ltd.

Effective date of registration: 20240104

Address after: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant after: Shenzhen Hongyue Information Technology Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Applicant before: Ningbo University

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant