CN113538199B - Image steganography detection method based on multi-layer perception convolution and channel weighting - Google Patents

Image steganography detection method based on multi-layer perception convolution and channel weighting

Info

Publication number
CN113538199B
CN113538199B (application CN202110637231.3A)
Authority
CN
China
Prior art keywords
layer
image
convolution
layers
module
Prior art date
Legal status
Active
Application number
CN202110637231.3A
Other languages
Chinese (zh)
Other versions
CN113538199A (en)
Inventor
郭文风
叶学义
张珂绅
王凌宇
孙伟杰
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202110637231.3A
Publication of CN113538199A
Application granted
Publication of CN113538199B


Classifications

    • G06T1/0021 — General purpose image data processing: image watermarking
    • G06F18/2411 — Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/08 — Neural networks: learning methods


Abstract

The invention discloses an image steganography detection method based on multi-layer perceptual convolution and channel weighting. An image steganography detection model is first constructed; the images in an image database are then compressed to obtain carrier images; steganographic information is embedded into the carrier images with a steganography algorithm to obtain secret-carrying images; the carrier and secret-carrying images are divided into a training set and a test set in a certain proportion; the constructed steganography detection model based on multi-layer perceptual convolution is trained on the training set with the back propagation algorithm to obtain a trained steganography detection model; the trained model then performs steganography detection on the test-set images. The invention replaces traditional linear convolution layers with multi-layer perceptual convolution layers, improving the model's ability to abstract high-order features and thereby the detection accuracy; it also assigns different weights to different feature map channels using global information and recalibrates the convolved feature maps, further improving detection accuracy.

Description

Image steganography detection method based on multi-layer perception convolution and channel weighting
Technical Field
The invention belongs to the technical field of information security, and particularly relates to an image steganography detection method based on multi-layer perception convolution and channel weighting.
Background
Digital image steganography detection prevents illicit covert communication and safeguards communication security; it is an important research direction in the field of information security and has been widely studied and rapidly developed. The task is to judge, by analyzing an acquired digital image, whether secret information has been embedded in it; that is, it is treated as a classification problem whose goal is to distinguish carrier images from secret-carrying images.
Conventional image steganography detection methods mainly classify carrier and secret-carrying images by extracting high-order statistical features based on the correlation between adjacent pixels and training a classifier. A typical example is the SRM algorithm of [FRIDRICH J, KODOVSKY J. Rich models for steganalysis of digital images [J]. IEEE Transactions on Information Forensics and Security, 2012, 7(3): 868-882], whose filters are still in use today. However, the detection features extracted by such methods must be designed manually at great cost, and with the continued development of steganography, designing steganography detection features by hand becomes ever more difficult.
With the rise of deep learning, researchers began to introduce it into image steganography detection, in particular by applying convolutional neural networks: by training a network model built from multiple convolution layers, adjacent-pixel correlation information relevant to steganography detection can be mined automatically, completing feature learning and classification. Although deep-learning-based image steganography detection clearly improves detection accuracy, the convolution layers adopted by existing detection models all convolve the input with linear filters and then apply a nonlinear activation function to increase the model's nonlinear expressive power; such linear convolution layers have limited power to express high-order features. Moreover, existing models feed the convolved feature maps to the next layer with equal weight, neither considering which feature maps are primary and which secondary nor assigning different weights to different convolution channels, so the detection accuracy still cannot meet application requirements. To address these problems, an image steganography detection method based on multi-layer perceptual convolution and channel weighting is proposed to improve detection accuracy.
Disclosure of Invention
To address the problems that the linear convolution layers used by existing detection methods lack high-order feature expressive power, that existing models ignore the primary or secondary nature of the convolved feature maps when feeding them to the next layer, and that detection accuracy cannot meet application requirements, the invention provides an image steganography detection method based on multi-layer perceptual convolution and channel weighting: higher-order image features are extracted through the multi-layer perceptual convolution layers, while a channel weighting mechanism selectively emphasizes the principal features and suppresses unnecessary ones, further improving the accuracy of image steganography detection.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
an image steganography detection method based on multi-layer perceptual convolution and channel weighting comprises the following steps:
step 1, constructing an image steganography detection model:
the image steganography detection model comprises a preprocessing module, a feature extraction module, a channel weighting module and a classification module;
the preprocessing module filters an input image by using a high-pass filter to obtain a residual image, and transmits the residual image to the feature extraction module.
The feature extraction module performs feature extraction on the residual images to obtain features required by detection and transmits the features to the classification module.
The channel weighting module is applied to the nonlinear activation function of each layer of convolution, so that different weights are distributed to different channel feature images according to global information, and after the feature images are subjected to weight redistribution, a new feature image is obtained and then is input to the next layer of convolution layer.
The classification module is composed of a full connection layer and a softmax function, maps the image steganography analysis feature into a classification probability vector, and judges whether the image is a secret image or not according to the classification probability vector.
Step 2, preparing a data set:
compressing the images in the image database to obtain carrier images; embedding steganographic information into the carrier images by using a steganography algorithm to obtain secret-carrying images; dividing the carrier images and secret-carrying images into a training set and a testing set according to a certain proportion;
step 3: training of a steganography detection model:
training the built steganography detection model based on the multi-layer perception convolution through a training set according to a back propagation algorithm to obtain a trained steganography detection model;
step 4: and performing steganography detection on the images of the test set by using the trained steganography detection model.
Further, the preprocessing module of the image steganography detection model in step 1 consists of a convolution layer containing 30 convolution kernels, which are initialized with the 30 filters of the SRM and used to extract the image residual; the filters are expanded, 17 to size 3×3 and 13 to size 5×5, and normalized; the extracted residual image is truncated at threshold T1.
Further, the feature extraction module of the image steganography detection model in step 1 is formed by connecting two multi-layer perceptual convolution layers and three traditional convolution layers in sequence. A traditional convolution layer is a single linear convolution layer, while a multi-layer perceptual convolution layer comprises a linear convolution layer and a multi-layer perceptron consisting of two fully connected layers with nonlinear activation functions; the multi-layer perceptron re-abstracts the data computed by the linear convolution kernels, improving the model's abstraction capability. As to the activation functions, a TLU is applied after the linear convolution in the first two multi-layer perceptual convolution layers, and a ReLU in the last three traditional convolution layers; the first layer has no pooling, the second, third and fourth layers use average pooling, and the last layer uses global average pooling.
Further, the channel weighting module of the image steganography detection model in step 1 comprises a global average pooling layer, two fully connected layers forming a bottleneck, and a scaling layer: the global average pooling layer compresses the feature maps, the fully connected layers apply a nonlinear transformation, and the scaling layer completes the weight redistribution. Applied after the nonlinear activation function of each convolution layer, the channel weighting module recalibrates the convolved feature maps, strengthening important features and weakening unimportant ones, so that the features extracted by the network are more directed and its expressive power is enhanced.
The channel weighting module comprises the following three operations:
Firstly, the feature map U obtained by convolution in the feature extraction module has dimensions H×W×C, where C is the number of channels of the convolved feature map; an average pooling operation is performed on U according to formula (5) to complete feature map compression:

z_c = (1 / (H×W)) ∑_{i=1}^{H} ∑_{j=1}^{W} u_c(i, j)    (5)
Secondly, the result of the average pooling is input into two fully connected layers forming a bottleneck: a dimensionality-reduction layer with reduction rate r, a ReLU, and a dimensionality-increase layer that restores the channel dimension of the output U. In this embodiment, when the input feature map channel dimensions are 30, 32, 64 and 128, the reduction rate r is set to 15, 16, 32 and 64 respectively. The calculation is shown in formula (6):
s = σ(g(z, W)) = σ(W_2 δ(W_1 z))    (6)
where δ represents a ReLU activation function and σ represents a sigmoid activation function. The output is a set of modulation weights for each channel.
Finally, the weights are multiplied with the original feature map U, recalibrating it and completing the channel weighting; the calculation is shown in formula (7):

x̃_c = s_c · u_c    (7)

where x̃_c is the recalibrated feature map of channel c; the resulting new feature map is input directly to the subsequent layers of the network.
The classification module comprises three fully connected layers, containing 256, 1024 and 2 units respectively. Adjacent fully connected layers operate according to formula (8):

y_j^L = f( ∑_i w_{ij}^L x_i^{L-1} + b_j^L )    (8)

where x_i^{L-1} denotes the i-th input unit of the L-th fully connected layer, w_{ij}^L the weight connecting the i-th input unit to the j-th unit of the L-th fully connected layer, and b_j^L the bias of the j-th unit of the L-th fully connected layer; each unit is connected to all units of the previous layer, the first layer being connected to the last convolution layer and the last layer to the output layer, with each layer's output serving as the next layer's input; f(x) is the activation function, a ReLU for the fully connected units and a softmax for the units of the last fully connected layer, as shown in formula (9):

softmax(x_i) = e^{x_i} / ∑_{j=1}^{2} e^{x_j}    (9)
where i=1, 2 denotes the classification category.
Further, the image database comprises BOSSBase v1.0 and BOWS2.
The invention has the following beneficial effects:
the invention discloses an image steganography detection algorithm based on multi-layer perception convolution. The invention uses the multi-layer perception convolution layer to replace the traditional linear convolution layer in the first two layers of the model, improves the abstraction capability of the model to the high-order characteristics, and thus improves the detection accuracy; meanwhile, in order to enable the model to selectively emphasize main features, a channel weighting module is added into the model, different weights are distributed to different feature map channels by using global information, and recalibration is carried out on the feature map obtained by convolution, so that detection accuracy is further improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is an overall frame diagram of a steganographic detection model in the present invention.
FIG. 3 is a block diagram of a channel weighting module in a steganographic detection model of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
As shown in fig. 1, an image steganography detection method based on multi-layer perceptual convolution and channel weighting includes the following steps:
step 1, constructing an image steganography detection model based on multi-layer perception convolution and channel weighting, wherein the image steganography detection model comprises a preprocessing module, a feature extraction module, a channel weighting module and a classification module; the specific construction method and the parameter configuration of the model are as follows:
the preprocessing module filters an input image by using a high-pass filter to obtain a residual image, and transmits the residual image to the feature extraction module. The purpose is to suppress the influence of the image content and to reveal hidden information. For the convolutional layer in the preprocessing module, 30 filters in the SRM are used for initializing the convolutional layer, and expansion processing is performed on the convolutional layer: the "EDGE5×5", "SQUARE5×5", and "third order" total 13 filters are extended to 5×5 size, while the "first order", "second order", "EDGE3×3", and "SQUARE3×3" total 17 filters are extended to 3×3 size; normalizing the expanded filter; in this embodiment, only one of the expansion processes of the filter is given, and the expansion process of one second-order filter is shown in the following formula (1):
The filtered residual image is truncated at threshold T1; this truncation operation effectively suppresses useless image content and prevents the network from modelling large-valued features, improving the network's expressive power. T1 = 3 is set here.
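As an illustrative sketch (not part of the claimed method), the following minimal NumPy code filters a grayscale patch with one SRM-style second-order high-pass kernel and truncates the residual at T1 = 3; the specific kernel, the function names and the toy input are assumptions for illustration and do not reproduce the patent's full 30-filter bank.

```python
import numpy as np

def highpass_residual(img, kernel):
    """Cross-correlate a grayscale image with a high-pass kernel ('valid' mode)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def truncate(x, T):
    """Clip residual values to [-T, T]; the patent sets T1 = 3."""
    return np.clip(x, -T, T)

# One second-order SRM-style kernel zero-padded to 3x3 and normalized
# (an assumed representative; the patent initializes 30 such filters).
k2 = np.array([[0.0, 0.0, 0.0],
               [1.0, -2.0, 1.0],
               [0.0, 0.0, 0.0]]) / 2.0

img = np.arange(25, dtype=float).reshape(5, 5)  # a linear ramp test patch
res = truncate(highpass_residual(img, k2), T=3)
```

On a linear ramp the second-order residual vanishes, which is exactly the content-suppressing behaviour the preprocessing module aims for.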
The feature extraction module consists of two multi-layer perceptual convolution layers and three traditional convolution layers connected in sequence. A multi-layer perceptual convolution layer is equivalent to a convolution layer containing a multi-layer perceptron composed of two fully connected layers with ReLU activation functions; when several feature maps are input, the multi-layer perceptron is equivalent to convolution layers with 1×1 kernels. As shown in fig. 2, each multi-layer perceptual convolution layer contains one convolution layer with 5×5 kernels and two convolution layers with 1×1 kernels, the number of kernels being set to 30. The kernels of the three traditional convolution layers are 3×3, numbering 32, 64 and 128 respectively. As to the activation functions, the activation applied after the 5×5 linear convolution in the first two multi-layer perceptual convolution layers is the TLU (truncated linear unit), and the last three traditional convolution layers apply the ReLU. The TLU and ReLU are calculated as follows:
TLU(x) = −T if x < −T;  x if −T ≤ x ≤ T;  T if x > T    (2)

ReLU(x) = max(0, x)

where T > 0 is the truncation threshold. This embodiment sets the truncation threshold T2 = 3 in the first layer and T3 = 2 in the second layer.
The operation mode of the convolution layer is as follows:
the calculation formula of the characteristic diagram of the traditional convolution layer is shown in formula (3):
where f (x) is the activation function, (i, j) represents the index of the pixel in the feature map, x i,j Representing a picture block centered at position (i, j) in the convolution window, k represents the channel index of the feature map.
The multi-layer perception convolution layer is equivalent to adding a multi-layer perceptron in the traditional linear convolution, and the calculation formula of the multi-layer perceptron is shown as formula (4):
further, in the operation included in the convolution layer, performing an average pooling operation on the pooling window corresponding to the convolution layer to obtain an output of the convolution layer; the first layer of multi-layer sensing convolution layer is not pooled, the average pooling layer size of the second layer, the third layer and the fourth layer is 5 multiplied by 5, the step length is 2, and the last layer of traditional convolution layer uses global average pooling.
The channel weighting module, shown in fig. 3, is applied after the nonlinear activation function of each convolution layer and comprises a global average pooling layer, two fully connected layers forming a bottleneck, and a scaling layer. The global average pooling aggregates global information, so that information from the network's global receptive field is available to all its layers; the fully connected layers apply a nonlinear transformation to the input, dynamically learning the nonlinear interactions among channels during training, and map the input to a set of modulation weights, one per channel.
The channel weighting module comprises the following three operations:
Firstly, the convolution operations in the feature extraction module produce feature maps U of dimension H×W×C, where C is the number of channels of the convolved feature map; an average pooling operation is performed on U according to formula (5) to complete feature map compression:

z_c = (1 / (H×W)) ∑_{i=1}^{H} ∑_{j=1}^{W} u_c(i, j)    (5)
Secondly, the result of the average pooling is input into two fully connected layers forming a bottleneck: a dimensionality-reduction layer with reduction rate r, a ReLU, and a dimensionality-increase layer that restores the channel dimension of the output U. In this embodiment, when the input feature map channel dimensions are 30, 32, 64 and 128, the reduction rate r is set to 15, 16, 32 and 64 respectively. The calculation is shown in formula (6):
s = σ(g(z, W)) = σ(W_2 δ(W_1 z))    (6)
where δ represents a ReLU activation function and σ represents a sigmoid activation function. The output is a set of modulation weights for each channel.
Finally, the weights are multiplied with the original feature map U, recalibrating it and completing the channel weighting; the calculation is shown in formula (7):

x̃_c = s_c · u_c    (7)

where x̃_c is the recalibrated feature map of channel c; the resulting new feature map is input directly to the subsequent layers of the network.
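The three channel-weighting operations (compression, bottleneck excitation, rescaling) can be sketched as below; the random weights and the helper name `channel_weighting` are assumptions, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_weighting(U, W1, W2):
    """Formulas (5)-(7): global average pool -> FC (reduce, ReLU) ->
    FC (expand, sigmoid) -> per-channel rescale of U, shaped (H, W, C)."""
    z = U.mean(axis=(0, 1))                  # formula (5): squeeze to C values
    s = sigmoid(np.maximum(z @ W1, 0) @ W2)  # formula (6): modulation weights
    return U * s                             # formula (7): recalibration

rng = np.random.default_rng(1)
C, r = 32, 16                                # reduction rate r = 16 for C = 32
U = rng.standard_normal((8, 8, C))
W1 = rng.standard_normal((C, C // r)) * 0.1  # dimensionality-reduction layer
W2 = rng.standard_normal((C // r, C)) * 0.1  # dimensionality-increase layer
V = channel_weighting(U, W1, W2)
```

Since the sigmoid keeps every weight in (0, 1), the recalibrated map never amplifies a channel, only attenuates the less informative ones.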
The classification module comprises three fully connected layers, containing 256, 1024 and 2 units respectively. Adjacent fully connected layers operate according to formula (8):

y_j^L = f( ∑_i w_{ij}^L x_i^{L-1} + b_j^L )    (8)

where x_i^{L-1} denotes the i-th input unit of the L-th fully connected layer, w_{ij}^L the weight connecting the i-th input unit to the j-th unit of the L-th fully connected layer, and b_j^L the bias of the j-th unit of the L-th fully connected layer; each unit is connected to all units of the previous layer, the first layer being connected to the last convolution layer and the last layer to the output layer, with each layer's output serving as the next layer's input; f(x) is the activation function, a ReLU for the fully connected units and a softmax for the units of the last fully connected layer, as shown in formula (9):

softmax(x_i) = e^{x_i} / ∑_{j=1}^{2} e^{x_j}    (9)
where i=1, 2 denotes the classification category.
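A minimal sketch of the classification module's forward pass, with ReLU on the hidden fully connected layers and softmax on the final 2-unit layer; the random weights and the assumed 128-dimensional input (the globally pooled output of the last convolution layer) are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def classify(feat, layers):
    """Pass a feature vector through the fully connected stack (256 -> 1024 -> 2);
    ReLU on hidden layers, softmax on the last layer, per formulas (8)-(9)."""
    x = feat
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        x = softmax(x) if i == len(layers) - 1 else np.maximum(x, 0)
    return x

rng = np.random.default_rng(2)
dims = [128, 256, 1024, 2]   # 128 pooled channels feeding units 256, 1024, 2
layers = [(rng.standard_normal((dims[i], dims[i + 1])) * 0.05,
           np.zeros(dims[i + 1])) for i in range(3)]
p = classify(rng.standard_normal(128), layers)
```

The output p is the classification probability vector from which carrier versus secret-carrying is decided.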
Step 2, preparing a data set required by an experiment:
the image database selected in this embodiment is BOSSBase v1.0, which contains 10000 grayscale images of size 512 x 512 resampling all pictures to images of size 256 x 256; using a steganography algorithm to embed steganography information into the compressed picture to obtain a secret carrying image with the same number; the carrier/secret image pair is set to 1: the scale of 1 is randomly divided into a training set and a test set, so that both the training set and the test set contain 5000 pairs of carrier/secret image pairs.
Step 3: training of the image steganography detection model: the model constructed in step 1 undergoes supervised training on the training set obtained in step 2, using back propagation to minimize the loss of formula (10), yielding the trained image steganography detection model:

L = −log z_i    (10)

where i = 1, 2 indexes the classification category and z_i is the classification probability of the true class.
Step 4: perform steganography detection on the test-set images with the image steganography detection model trained in step 3, computing the classification probability to judge whether each input image is a carrier image or a secret-carrying image.
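The loss of formula (10) and the decision rule of step 4 can be sketched as below; the helper names are assumptions, and the back propagation of step 3 itself is omitted.

```python
import numpy as np

def nll_loss(probs, label):
    """Formula (10): L = -log z_i, the negative log of the classification
    probability assigned to the true class (0 = carrier, 1 = secret-carrying)."""
    return -np.log(probs[label])

def predict(probs):
    """Step 4: decide carrier vs. secret-carrying from the probability vector."""
    return int(np.argmax(probs))

p = np.array([0.2, 0.8])  # an assumed softmax output for one test image
loss = nll_loss(p, 1)
```

Minimizing this loss over the training set is equivalent to the cross-entropy objective commonly used with a softmax output.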

Claims (3)

1. An image steganography detection method based on multi-layer perceptual convolution and channel weighting is characterized by comprising the following steps:
step 1, constructing an image steganography detection model:
the image steganography detection model comprises a preprocessing module, a feature extraction module, a channel weighting module and a classification module;
the preprocessing module filters an input image by using a high-pass filter to obtain a residual image, and transmits the residual image to the feature extraction module;
the characteristic extraction module performs characteristic extraction on the residual image to obtain characteristics required by detection and transmits the characteristics to the classification module;
the channel weighting module is applied to the nonlinear activation function of each layer of convolution, so that different weights are distributed to different channel feature images according to global information, and after the feature images are subjected to weight redistribution, a new feature image is obtained and then is input to the next layer of convolution layer;
the classifying module consists of a full-connection layer and a softmax function, maps the image steganography analysis characteristics into classifying probability vectors, and judges whether the image is a secret-carrying image or not according to the classifying probability vectors;
step 2, preparing a data set:
compressing the images in the image database to obtain carrier images; embedding steganographic information into the carrier images by using a steganography algorithm to obtain secret-carrying images; dividing the carrier images and secret-carrying images into a training set and a testing set according to a certain proportion;
step 3: training of a steganography detection model:
training the built steganography detection model based on the multi-layer perception convolution through a training set according to a back propagation algorithm to obtain a trained steganography detection model;
step 4: performing steganography detection on the images of the test set by using the trained steganography detection model;
the preprocessing module of the image steganography detection model in step 1 consists of a convolution layer containing 30 convolution kernels, which are initialized with the 30 filters of the SRM and used to extract the image residual; the filters are expanded, 17 to size 3×3 and 13 to size 5×5, and normalized; the extracted residual image is truncated at threshold T1;
the feature extraction module of the image steganography detection model in step 1 is formed by connecting two multi-layer perceptual convolution layers and three traditional convolution layers in sequence, a traditional convolution layer being a single linear convolution layer, while a multi-layer perceptual convolution layer comprises a linear convolution layer and a multi-layer perceptron consisting of two fully connected layers with nonlinear activation functions; the multi-layer perceptron re-abstracts the data computed by the linear convolution kernels, improving the model's abstraction capability; as to the activation functions, a TLU is applied after the linear convolution in the first two multi-layer perceptual convolution layers, and a ReLU in the last three traditional convolution layers; the first layer has no pooling, the second, third and fourth layers use average pooling, and the last layer uses global average pooling.
2. The method according to claim 1, wherein the channel weighting module of the image steganography detection model in step 1 comprises a global average pooling layer, two fully connected layers forming a bottleneck, and a scaling layer: the global average pooling layer compresses the feature maps, the fully connected layers apply a nonlinear transformation, and the scaling layer completes the weight redistribution; applied after the nonlinear activation function of each convolution layer, the channel weighting module recalibrates the convolved feature maps, strengthening important features and weakening unimportant ones, so that the features extracted by the network are more directed and its expressive power is enhanced;
the channel weighting module comprises the following three operations:
firstly, the feature map U obtained by convolution in the feature extraction module has dimensions H×W×C, where C is the number of channels of the convolved feature map; an average pooling operation is performed on U according to formula (5) to complete feature map compression:

z_c = (1 / (H×W)) ∑_{i=1}^{H} ∑_{j=1}^{W} u_c(i, j)    (5)
secondly, the result of the average pooling is input to the two fully connected layers with a bottleneck structure, namely a dimension-reduction layer with reduction rate r, a ReLU activation, and a dimension-increase layer that restores the channel dimension of U; in this embodiment, when the channel dimensions of the input feature map are 30, 32, 64, and 128, the reduction rate r is set to 15, 16, 32, and 64, respectively (keeping the bottleneck width at 2); the calculation process is shown in formula (6):
s = σ(g(z, W)) = σ(W_2 · δ(W_1 · z))  (6)
where δ denotes the ReLU activation function and σ the sigmoid activation function; the output s is a set of modulation weights, one per channel;
finally, the weights are multiplied with the original feature map U, recalibrating the feature map and completing channel weighting, as shown in formula (7):

x̃_c = s_c · u_c  (7)

where x̃ is the new feature map produced as output, which is fed directly into the subsequent layer of the network;
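The three operations above (formulas (5)-(7)) amount to a squeeze-and-excitation style block. A minimal NumPy sketch, with the bottleneck weight matrices W1 and W2 taken as illustrative inputs rather than trained parameters:

```python
import numpy as np

def channel_weighting(U, W1, W2):
    """Recalibrate feature map U of shape (H, W, C).

    W1: (C, C/r) dimension-reduction weights; W2: (C/r, C) dimension-increase
    weights. Returns the reweighted feature map, same shape as U."""
    z = U.mean(axis=(0, 1))                      # squeeze: global average pooling, (5)
    h = np.maximum(0.0, z @ W1)                  # bottleneck FC layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(h @ W2)))          # FC back to C channels + sigmoid, (6)
    return U * s[None, None, :]                  # scale: per-channel reweighting, (7)
```

Because the sigmoid output lies in (0, 1), each channel of U is attenuated in proportion to its learned importance before being passed to the next layer.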
the classification module comprises three fully connected layers containing 256, 1024, and 2 units, respectively; each pair of adjacent fully connected layers is computed according to formula (8):

y_j^L = f( Σ_i w_ij^L · x_i^L + b_j^L )  (8)

where x_i^L denotes the i-th input unit of the L-th fully connected layer, w_ij^L the weight connecting the i-th input unit to the j-th unit of the L-th layer, and b_j^L the bias of the j-th unit; each unit is connected to all units of the previous layer, the first fully connected layer being connected to the last convolution layer and the last fully connected layer to the output layer, with the output of each layer serving as the input of the next; f(x) is the activation function, which is the ReLU function for the fully connected layer units; the activation function of the units in the last fully connected layer is the softmax function, as shown in formula (9):

y_i = exp(x_i) / Σ_{j=1}^{2} exp(x_j)  (9)

where i = 1, 2 denotes the classification category.
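A minimal NumPy sketch of the classifier's per-layer computation (formulas (8) and (9)); the weight and bias values are placeholders, not trained parameters:

```python
import numpy as np

def fc_layer(x, W, b, activation="relu"):
    """Fully connected layer, formula (8): y_j = f(sum_i w_ij * x_i + b_j)."""
    y = x @ W + b
    return np.maximum(0.0, y) if activation == "relu" else y

def softmax(x):
    """Formula (9): normalized exponentials over the output units."""
    e = np.exp(x - x.max())   # shift by the max for numerical stability
    return e / e.sum()
```

Applied to the final two-unit layer, softmax yields the cover/stego class probabilities.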
3. The method of claim 2, wherein the image database comprises BOSSBase v1.0 and BOWS2.
CN202110637231.3A 2021-06-08 2021-06-08 Image steganography detection method based on multi-layer perception convolution and channel weighting Active CN113538199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110637231.3A CN113538199B (en) 2021-06-08 2021-06-08 Image steganography detection method based on multi-layer perception convolution and channel weighting

Publications (2)

Publication Number Publication Date
CN113538199A (en) 2021-10-22
CN113538199B (en) 2024-04-16

Family

ID=78124671


Country Status (1)

Country Link
CN (1) CN113538199B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233077A (en) * 2020-10-10 2021-01-15 北京三快在线科技有限公司 Image analysis method, device, equipment and storage medium


Non-Patent Citations (1)

Title
Liu F et al., "Image Steganalysis via Diverse Filters and Squeeze-and-Excitation Convolutional Neural Network," Mathematics, pp. 1-13. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant