CN111696021A - Image self-adaptive steganalysis system and method based on significance detection - Google Patents

Image self-adaptive steganalysis system and method based on significance detection

Info

Publication number
CN111696021A
CN111696021A (application CN202010524234.1A; granted publication CN111696021B)
Authority
CN
China
Prior art keywords
saliency
image
map
region
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010524234.1A
Other languages
Chinese (zh)
Other versions
CN111696021B (en
Inventor
张敏情
黄思远
柯彦
毕新亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Engineering University of Chinese Peoples Armed Police Force
Original Assignee
Engineering University of Chinese Peoples Armed Police Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Engineering University of Chinese Peoples Armed Police Force filed Critical Engineering University of Chinese Peoples Armed Police Force
Priority to CN202010524234.1A priority Critical patent/CN111696021B/en
Publication of CN111696021A publication Critical patent/CN111696021A/en
Application granted granted Critical
Publication of CN111696021B publication Critical patent/CN111696021B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking
    • G06T2201/00 - General purpose image data processing
    • G06T2201/005 - Image watermarking
    • G06T2201/0065 - Extraction of an embedded watermark; Reliable detection

Abstract

The invention belongs to the technical field of image processing and discloses an image adaptive steganalysis system based on saliency detection, together with an analysis method. Specifically: first, the images misclassified by the discriminator module are input into the saliency detection module to form saliency maps; next, the region screening module extracts the saliency maps that meet the requirement, and each is fused with its corresponding original image to form a saliency fusion map; finally, the saliency maps that do not meet the requirement are replaced by their original images, the original images and the saliency fusion maps are combined into an updated data set, and this data set is input into the discriminator module for training, so that the discriminator performs targeted feature learning on the regions that strongly coincide with the steganographic regions. The method uses saliency detection to guide the steganalysis model to focus on the features of the steganographically embedded regions of the image, thereby improving the training effect and the detection accuracy of the model.

Description

Image self-adaptive steganalysis system and method based on significance detection
Technical Field
The invention belongs to the technical field of image processing, and relates to an image self-adaptive steganalysis system and method based on significance detection.
Background
Image steganography is a covert communication technique that embeds a secret message in an image carrier file for transmission. Unlike traditional encrypted communication, which a third party merely finds hard to decipher, image steganography hides the communication behavior itself: the embedding of the secret message is difficult for a third party to perceive, which gives image steganography strong deceptiveness. In particular, the image adaptive steganography proposed in recent years preferentially embeds the secret information into texture-complex regions of the image; at low embedding rates such images are even harder to detect, which greatly improves the security of steganography and poses a great challenge to image steganalysis.
The main idea of content-adaptive steganography algorithms based on a distortion function combined with STC (syndrome-trellis codes) has two parts: quantitative analysis of the modification cost via a distortion function, and embedding via STC. The distortion function captures the change in local or global features after embedding, for example by computing the distortion that each element would incur if modified; STC then determines, from these distortion costs, which elements are finally modified, so that the overall distortion is minimized. The most common adaptive steganography algorithms are HUGO, WOW, S-UNIWARD, UED, and J-UNIWARD. Elements located in texture-complex regions of the image are more likely to be modified than elements in smooth regions, because the perturbation introduced by the steganography algorithm is less perceptible in these statistically complex regions.
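The cost-minimization idea above can be illustrated with a toy sketch. This is NOT a real STC encoder: it only shows how a simplified, illustrative distortion function (here, the inverse of local pixel variation, an assumption made for this example) assigns high cost to smooth regions, and how an embedder would prefer the cheapest, most textured pixels.

```python
# Toy illustration of cost-based adaptive embedding (not a real STC encoder).
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))

# Simplified, illustrative distortion function: smooth neighbourhoods
# (small horizontal gradient) receive a high modification cost.
grad = np.abs(np.diff(image.astype(float), axis=1, append=image[:, -1:]))
cost = 1.0 / (1.0 + grad)

def cheapest_pixels(cost, n):
    """Indices of the n pixels with the lowest modification cost."""
    flat = np.argsort(cost, axis=None)[:n]
    return np.unravel_index(flat, cost.shape)

rows, cols = cheapest_pixels(cost, 5)
```

A real STC embedder solves this selection jointly with the message-encoding constraint, but the preference for low-cost (texture-complex) pixels is the same.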
However, most existing work on image steganalysis improves detection performance by refining the network structure of the model. When image adaptive steganography is used, the secret message is not embedded in all regions of the image, yet the image contains rich information dimensions, so not all of the information in the image benefits training; in this case the model is exposed to unnecessary, extraneous interference during training, which reduces detection accuracy. It is therefore necessary to provide a steganalysis method that specifically targets the steganographic regions and guides the model to focus on their features.
Disclosure of Invention
To overcome the above defects in the prior art, the invention aims to provide an image adaptive steganalysis system and method based on saliency detection, in which saliency detection is used to guide the steganalysis model to focus on the features of the steganographically embedded regions of the image, thereby improving the training effect and the detection accuracy of the model.
The invention is realized by the following technical scheme:
an image self-adaptive steganalysis method based on significance detection comprises the following steps:
1) segmenting the saliency region of each image on which detection failed, forming a saliency map;
2) screening the saliency maps according to the degree of coincidence between the saliency region and the steganographic region, extracting the saliency maps that meet the requirement, and fusing each such saliency map with its corresponding original image to form a saliency fusion map; a saliency map meets the requirement when its saliency region has a high degree of coincidence with the steganographic region;
3) replacing the saliency maps that do not meet the requirement with their original images, and combining the original images and the saliency fusion maps into an updated data set;
4) training with the updated data set.
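Steps 1)-4) can be sketched as a data-set update routine. This is an illustrative outline only: the saliency detector and the coincidence computation are stubbed out, the function names are assumptions, and the 0.7 screening threshold is the value the description later reports as best.

```python
# Illustrative sketch of the data-set update in steps 1)-4).
import numpy as np

K = 0.7  # screening threshold (assumed from the description's experiments)

def fuse(image, saliency_mask):
    """Saliency fusion map: keep only salient pixels, set the rest to 0."""
    return image * (saliency_mask > 0)

def update_dataset(images, misclassified, saliency_masks, coincidences):
    """Misclassified images with coincidence >= K are replaced by their
    saliency fusion maps; all other images stay as the originals."""
    updated = []
    for idx, img in enumerate(images):
        if idx in misclassified and coincidences.get(idx, 0.0) >= K:
            updated.append(fuse(img, saliency_masks[idx]))
        else:
            updated.append(img)
    return updated

# Toy example: 3 "images"; image 1 is misclassified with high coincidence.
imgs = [np.full((4, 4), fill, dtype=np.uint8) for fill in (10, 20, 30)]
masks = {1: np.eye(4, dtype=np.uint8)}
new_set = update_dataset(imgs, misclassified={1}, saliency_masks=masks,
                         coincidences={1: 0.8})
```

The updated set is then fed back to the discriminator for retraining, exactly as step 4) describes.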
Further, in step 1), a discriminator module is used to detect the misclassified images, and the discriminator module adopts the SRNet model.
Further, in step 2), the image fusion specifically comprises: setting all pixels in the image other than those of the salient region to 0, so that the discriminator module focuses only on the image features of the salient region.
Further, in the step 1), a saliency detection module is used for segmenting a saliency area of the image;
the significance detection module adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module, and the significance graph is formed by the following specific steps:
introducing a prediction module of a BASNet model and a multi-scale residual error optimization module into a network, and obtaining a rough significance map through the prediction module;
and the multi-scale residual optimization module optimizes the rough significance map of the prediction module by learning the residual between the rough significance map and the real label, and finally obtains the refined significance map.
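The coarse-to-refined step can be written as a one-line numeric sketch: the refinement module predicts a residual that is added to the coarse map and clipped back to the valid range. The residual values below are illustrative, not learned.

```python
# Minimal numeric sketch of S_refined = S_coarse + S_residual.
import numpy as np

coarse = np.array([[0.2, 0.7],
                   [0.4, 0.9]])
# Residual the refinement module might learn (illustrative values only):
residual = np.array([[-0.1, 0.2],
                     [ 0.1, 0.2]])
refined = np.clip(coarse + residual, 0.0, 1.0)
```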
Further, in the step 2), the saliency map is screened by using an area screening module, and the saliency map meeting the requirement is extracted.
Further, in step 2), the method for calculating the coincidence degree η between the saliency region and the steganographic region is as follows:
η = N_coin / N_stego (1)

obtaining formula (2) from formula (1):

η = [ Σ_(i,j) sgn( P_Stego(i,j) · P_SOD(i,j) ) ] / [ Σ_(i,j) sgn( P_Stego(i,j) ) ] (2)

where the sums run over all N pixel positions, N represents the total number of pixel points in the image, N_coin represents the number of pixels in the overlap region, N_stego represents the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) represent the pixel values of the steganographic point map and the saliency map at position (i,j), respectively.
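Formulas (1)-(2) amount to treating both maps as binary masks and dividing the size of their overlap by the size of the steganographic region. A small numpy sketch (array names are illustrative, not from the patent):

```python
# Coincidence degree eta = N_coin / N_stego for two H x W maps.
import numpy as np

def coincidence(stego_map, sod_map):
    """Nonzero pixels count as marked; eta = |overlap| / |stego region|."""
    stego = stego_map > 0
    sod = sod_map > 0
    n_stego = int(np.sum(stego))
    if n_stego == 0:
        return 0.0
    n_coin = int(np.sum(stego & sod))
    return n_coin / n_stego

stego_map = np.zeros((4, 4)); stego_map[0:2, 0:2] = 1  # 4 stego pixels
sod_map = np.zeros((4, 4)); sod_map[0:2, 0:1] = 1      # overlaps 2 of them
```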
Further, in the step 2), the coincidence degree corresponding to the significance map meeting the requirement is 0.6-1.
The invention also discloses an image self-adaptive steganalysis system based on significance detection, which comprises a significance detection module, a region screening module and a discriminator module;
the saliency detection module is used for generating a saliency region of an image to be detected, adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module;
the prediction module is used for obtaining a rough saliency map; the multi-scale residual optimization module is used for optimizing the rough significance map of the prediction module by learning the residual between the rough significance map and the real label to finally obtain a refined significance map;
the region screening module is used for screening the saliency map and extracting the saliency map meeting the requirement;
the discriminator module adopts an SRNet model and is used for providing the initially misclassified images and for retraining on the updated data set.
Further, the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder and an output layer.
Further, both the encoder and the decoder have four stages, each stage having only one convolutional layer, and each layer having 64 filters of size 3 × 3;
the bridge layer also has one convolutional layer, with the same parameters as the other convolutional layers.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention discloses an image self-adaptive steganalysis method based on significance detection, which comprises the steps of firstly processing an image with a detection error to form a significance map; then, screening the saliency map according to the coincidence degree of the saliency region and the steganography region, extracting the saliency map meeting the requirement, and carrying out image fusion on the saliency map and the corresponding original image to form a saliency fusion map; and finally, replacing the saliency map which does not meet the requirement with an original image, combining the original image and the saliency fusion map into an updated data set for training, and performing targeted feature learning on the region with higher coincidence degree with the steganographic region. Compared with the conventional convolutional network model, the method can guide the model to learn the characteristics of the image steganography region, and has better pertinence. The invention carries out data statistical analysis on the training set, extracts the images meeting the conditions for processing, and can ensure the effectiveness of the processing. Experiments show that the method is universal in the airspace and JPEG domains and has good overall performance by performing steganalysis on a data set embedded by the adaptive steganography algorithm of the airspace and JPEG domains.
Furthermore, experimental comparison and analysis show that screening out the images whose coincidence between saliency region and steganographic region lies between 0.6 and 1 gives a good model training effect.
The invention also discloses an image adaptive steganalysis system based on saliency detection, comprising a saliency detection module, a region screening module and a discriminator module. The steganalysis system is simulated as follows: the images misclassified by the discriminator module are input into the saliency detection module to form saliency maps; the region screening module then extracts the saliency maps that meet the requirement, and each is fused with its corresponding original image to form a saliency fusion map; finally, the saliency maps that do not meet the requirement are replaced by their original images, the original images and the saliency fusion maps are combined into an updated data set and input into the discriminator module for training, so that the discriminator performs targeted feature learning on the regions that strongly coincide with the steganographic regions. Compared with prior work, the system can test any image from the network, which improves its generalization, and it visually outputs the classification probabilities and the result, giving it practical value together with a degree of generalization capability and practicability.
Drawings
FIG. 1 is an overall block diagram of the method of the present invention;
FIG. 2 is a comparative experimental graph of saliency and steganographic regions performed by the present invention;
FIG. 3 is a data statistics graph of the overlap ratio of saliency regions and steganographic regions according to the present invention;
FIG. 4 is a comparison of experiments for different training strategies according to the present invention;
FIG. 5 is a graph of steganalysis systems and test results simulated in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
the invention discloses an image self-adaptive steganalysis system based on significance detection, which mainly comprises three modules: the device comprises a significance detection module, a region screening module and a discriminator module.
As shown in FIG. 1, the images misclassified by the discriminator module are first input into the saliency detection module to form saliency maps; the region screening module then extracts the saliency maps that meet the requirement and fuses each with its corresponding original image to form a saliency fusion map; finally, the saliency maps that do not meet the requirement are replaced by their original images, the original images and the saliency fusion maps are combined into an updated data set and input into the discriminator module for training, so that the discriminator performs targeted feature learning on the regions that strongly coincide with the steganographic regions.
The saliency detection module generates the saliency region of the image to be detected and specifically adopts the BASNet model. The prediction module and the multi-scale residual optimization module of BASNet are introduced into the network. The prediction module is a densely supervised encoder-decoder network similar to U-Net, which learns to predict the saliency region from the input image and produces a coarse saliency map; the multi-scale residual optimization module refines the coarse saliency map of the prediction module by learning the residual between the coarse saliency map and the ground-truth annotation, finally obtaining a refined saliency map:
S_refined = S_coarse + S_residual
where S_coarse, S_residual and S_refined respectively denote the coarse saliency map produced by the prediction module, the residual learned between the coarse saliency map and the ground-truth annotation, and the refined saliency map.
The main structure of the multi-scale residual optimization module is simpler than that of the prediction module, comprising an input layer, an encoder, a bridge layer, a decoder and an output layer. The encoder and the decoder each have four stages, each stage with only one convolutional layer; each layer has 64 filters of size 3 × 3, followed by Batch Normalization (BN) and ReLU activation. The bridge layer also has one convolutional layer whose parameters are the same as those of the other convolutional layers. A max-pooling layer (maxpool) is used for down-sampling in the encoder, and a bilinear up-sampling layer is used for up-sampling in the decoder. For the final saliency map, in order to obtain high-quality region segmentation and sharp boundaries, the model defines a hybrid loss l^(k) in training, expressed as:

l^(k) = l_bce^(k) + l_ssim^(k) + l_iou^(k)

This loss, combining the BCE, SSIM and IoU losses, helps to reduce spurious errors caused by cross-propagating learned information across the boundary, making the boundary more refined.
As shown in FIG. 2, the saliency region of each image is the white region shown in the 2nd row of FIG. 2, and the steganographic region is the scatter region shown in the 3rd row. Comparing the saliency region with the steganographic region shows that, after saliency detection, the content of the image can be analysed: when the salient target in the image is clear (e.g., 4th from the left), the coincidence between the marked saliency region and the steganographic region is high; when the salient object is blurred (e.g., 2nd from the right), the coincidence is low. It is therefore necessary to screen the saliency maps and extract only the images that meet the condition for processing, so as to ensure that the processing is effective.
The region screening module screens out the images that meet the processing requirement, ensuring that the processing is effective. In this module, statistical analysis is performed on the degree of coincidence between the saliency region and the steganographic region. In image adaptive steganography the secret information is embedded in texture-complex regions, and such regions are also the more salient ones for the human eye, so they are marked as salient regions by saliency detection. Data analysis is carried out on the BOSSbase 1.01 data set: 10000 digital images of size 512 × 512 are steganographically embedded with the J-UNIWARD image adaptive steganography algorithm at an embedding rate of 0.4 bpp. To analyse the overall data distribution, statistics of the coincidence between the saliency region and the steganographic region are computed for the 10000 pictures and presented as a scatter diagram, as shown in FIG. 3.
Data analysis shows that the coincidence between the saliency region and the steganographic region is concentrated between 0.5 and 1, with a smaller part concentrated between 0 and 0.2 (as in the 2nd image from the right in FIG. 2). This indicates that not all images are suitable for saliency processing; the images that meet the requirement must be screened out to guarantee that the processing result is effective. The degree of coincidence η between the saliency region and the steganographic region is calculated as:

η = N_coin / N_stego (1)

η = [ Σ_(i,j) sgn( P_Stego(i,j) · P_SOD(i,j) ) ] / [ Σ_(i,j) sgn( P_Stego(i,j) ) ] (2)

where the sums run over all N pixel positions, N represents the total number of pixel points in the image, N_coin represents the number of pixels in the overlap region, N_stego represents the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) represent the pixel values of the steganographic point map and the saliency map at position (i,j), respectively. The steganographic point map marks the positions of the pixels changed after steganography with the J-UNIWARD adaptive steganography algorithm.
In the region screening module, experimental comparison and analysis show that screening out the images whose coincidence between saliency region and steganographic region lies between 0.6 and 1 gives a good training effect for the model; the comparative experiment is shown in Table 1, and the training effect is best when the screening threshold K is set to 0.7. After a saliency map is screened in, it is fused with the original image using image fusion: all pixels outside the saliency region are set to 0, so that the model focuses only on the image features of the saliency region.
TABLE 1 Detection accuracy (%) at different region screening thresholds
The discriminator module provides the initially misclassified images and retrains on the updated data set. The SRNet model is used as the discriminator in the experiments. The SRNet model consists of four parts: the first two parts (layers 1-7) at the front end extract the noise residual; the third part (layers 8-11) reduces the dimensionality of the feature maps; the last part is a standard fully connected layer followed by a Softmax linear classifier. All convolutional layers use 3 × 3 kernels, and all nonlinear activation functions are ReLU.
Detection errors fall into two types: a steganographic image discriminated as an original image, and an original image discriminated as a steganographic image. The initially misclassified images are those the trained SRNet discriminates incorrectly, and they include both steganographic and original images.
To verify the effectiveness of the method for detecting adaptive steganography algorithms, steganalysis is performed on data sets embedded by adaptive steganography algorithms in the spatial domain and in the JPEG domain, as shown in Table 2 and Table 3. The experimental results show that the method is general across the spatial and JPEG domains and performs well overall.
TABLE 2 Detection accuracy (%) of different spatial-domain steganography algorithms
TABLE 3 Detection accuracy (%) of different JPEG-domain steganography algorithms
Note: the SRNet row in each table gives the result of the discriminator's first pass; the row for the proposed method gives the result of the discriminator's second pass after applying the method of the invention; the quality factor QF is not distinguished in the spatial domain, only in the JPEG domain.
FIG. 4 compares different training strategies. When the replacement strategy replaces all images, the model trains poorly, detection accuracy is low and convergence is difficult, mainly because the model loses many image features during learning, degrading detection performance. The second training strategy therefore replaces only the images misclassified by the discriminator with their saliency maps; its detection accuracy is clearly higher than that of the first strategy, but still lower than the SRNet baseline without replacement. The third training strategy replaces with saliency maps only the images that pass the region screening module; experiments show that this strategy gives the best effect and converges quickly.
The discriminator is analogous to a person: without learning it cannot discriminate, but after a period of learning (running on a computer) it can; this learning process is called training. The ultimate effect of the invention is therefore reflected in the discrimination accuracy of the discriminator, and the method in essence improves the effect of training.
FIG. 5 shows the steganalysis system simulated with the invention and its test results, where cover denotes the original image and stego denotes the steganographic image; the values after cover and stego are the probability values of belonging to each class, and the class with the higher probability indicates the image type. In this system, the trained model is called to detect any image from the network.
First, an image is randomly downloaded from the network; since most images on the network are color images, the three-channel RGB image is converted into a single-channel grayscale image, named cover.jpg. Second, steganography is performed with the J-UNIWARD adaptive steganography algorithm at embedding rates of 0.1-0.4 bpp; the embedded images are named stego-0.1.jpg, stego-0.2.jpg, stego-0.3.jpg and stego-0.4.jpg, and the 4 steganographic images cannot be distinguished from the cover by the naked eye. Finally, the 4 steganographic images are input into the steganalysis detection system for detection. The system judges whether an input image is an original or a steganographic image and displays the per-class probability values and the detection result; the results show that the probability value of the steganographic class grows with the embedding rate, further confirming the accuracy of the detection.
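The RGB-to-grayscale conversion in the first step can be sketched with the common ITU-R BT.601 luminance weights (the formula Pillow uses for its "L" mode); the patent does not specify which weights it uses, so this choice is an assumption of the sketch.

```python
# Hypothetical sketch of the preprocessing step: 3-channel RGB -> 1-channel
# grayscale using BT.601 luma weights (0.299 R + 0.587 G + 0.114 B).
import numpy as np

def rgb_to_gray(rgb):
    """rgb: H x W x 3 uint8 array -> H x W uint8 grayscale array."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

# A 1 x 2 toy image: one pure white pixel and one pure red pixel.
rgb = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.uint8)
gray = rgb_to_gray(rgb)
```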

Claims (10)

1. An image adaptive steganalysis method based on saliency detection, characterized by comprising the following steps:
1) segmenting the saliency region of each image on which detection failed, forming a saliency map;
2) screening the saliency maps according to the degree of coincidence between the saliency region and the steganographic region, extracting the saliency maps that meet the requirement, and fusing each such saliency map with its corresponding original image to form a saliency fusion map; a saliency map meets the requirement when its saliency region has a high degree of coincidence with the steganographic region;
3) replacing the saliency maps that do not meet the requirement with their original images, and combining the original images and the saliency fusion maps into an updated data set;
4) training with the updated data set.
2. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 1), the misclassified images are detected using a discriminator module, which adopts the SRNet model.
3. The image adaptive steganalysis method based on saliency detection according to claim 2, wherein in step 2), the image fusion specifically comprises: setting all pixels in the image other than those of the salient region to 0, so that the discriminator module focuses only on the image features of the salient region.
4. The image adaptive steganalysis method based on saliency detection as claimed in claim 1, characterized in that in step 1), a saliency detection module is used to segment out saliency areas of an image;
the significance detection module adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module, and the significance graph is formed by the following specific steps:
introducing a prediction module of a BASNet model and a multi-scale residual error optimization module into a network, and obtaining a rough significance map through the prediction module;
and the multi-scale residual optimization module optimizes the rough significance map of the prediction module by learning the residual between the rough significance map and the real label, and finally obtains the refined significance map.
5. The image adaptive steganalysis method based on saliency detection as claimed in claim 1, characterized in that in step 2), a region screening module is used to screen saliency maps and extract saliency maps meeting requirements.
6. The adaptive steganalysis method for images based on saliency detection as claimed in claim 1, characterized in that in step 2), the coincidence η between saliency region and steganographic region is calculated as follows:
η = N_coin / N_stego (1)

obtaining formula (2) from formula (1):

η = [ Σ_(i,j) sgn( P_Stego(i,j) · P_SOD(i,j) ) ] / [ Σ_(i,j) sgn( P_Stego(i,j) ) ] (2)

where the sums run over all N pixel positions, N represents the total number of pixel points in the image, N_coin represents the number of pixels in the overlap region, N_stego represents the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) represent the pixel values of the steganographic point map and the saliency map at position (i,j), respectively.
7. The image adaptive steganalysis method based on saliency detection as claimed in claim 1, wherein in step 2), the coincidence degree corresponding to the saliency map meeting the requirement is 0.6-1.
8. An image adaptive steganalysis system for realizing the image adaptive steganalysis method based on saliency detection of any one of claims 1-7, which is characterized by comprising a saliency detection module, a region screening module and a discriminator module;
the saliency detection module is used for generating a saliency region of an image to be detected, adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module;
the prediction module is used for obtaining a rough saliency map; the multi-scale residual optimization module is used for optimizing the rough significance map of the prediction module by learning the residual between the rough significance map and the real label to finally obtain a refined significance map;
the region screening module is used for screening the saliency map and extracting the saliency map meeting the requirement;
the discriminator module adopts an SRNet model and is used for providing the initially misclassified images and for retraining on the updated data set.
9. The adaptive steganalysis system for images based on saliency detection as claimed in claim 8 characterized in that the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder and an output layer.
10. The image adaptive steganalysis system based on saliency detection according to claim 9, characterized in that both the encoder and the decoder have four stages, each stage having only one convolutional layer, and each layer having 64 filters of size 3 × 3;
the bridge layer also has one convolutional layer, with the same parameters as the other convolutional layers.
CN202010524234.1A 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection Active CN111696021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010524234.1A CN111696021B (en) 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection


Publications (2)

Publication Number Publication Date
CN111696021A true CN111696021A (en) 2020-09-22
CN111696021B CN111696021B (en) 2023-03-28

Family

ID=72480120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010524234.1A Active CN111696021B (en) 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection

Country Status (1)

Country Link
CN (1) CN111696021B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165082A1 (en) * 2015-04-15 2016-10-20 中国科学院自动化研究所 Image stego-detection method based on deep learning
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Du Ling et al., "Survey of perceptual hashing technology for image tampering detection", Journal of Frontiers of Computer Science and Technology *
Wang Ran et al., "JPEG image steganalysis combining classification and segmentation", Journal of Image and Graphics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637605A (en) * 2020-11-11 2021-04-09 中国科学院信息工程研究所 Video steganalysis method and device based on analysis of CAVLC code words and number of nonzero DCT coefficients
CN112785478A (en) * 2021-01-15 2021-05-11 南京信息工程大学 Hidden information detection method and system based on embedded probability graph generation
CN112785478B (en) * 2021-01-15 2023-06-23 南京信息工程大学 Hidden information detection method and system based on generation of embedded probability map
CN112991344A (en) * 2021-05-11 2021-06-18 苏州天准科技股份有限公司 Detection method, storage medium and detection system based on deep transfer learning
CN114782697A (en) * 2022-04-29 2022-07-22 四川大学 Adaptive steganography detection method for confrontation sub-field


Similar Documents

Publication Publication Date Title
CN111696021B (en) Image self-adaptive steganalysis system and method based on significance detection
Li et al. Identification of deep network generated images using disparities in color components
CN111311563B (en) Image tampering detection method based on multi-domain feature fusion
CN109949317B (en) Semi-supervised image example segmentation method based on gradual confrontation learning
CN109492416B (en) Big data image protection method and system based on safe area
CN110956094A (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-current network
CN112150450B (en) Image tampering detection method and device based on dual-channel U-Net model
CN110232380A (en) Fire night scenes restored method based on Mask R-CNN neural network
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN113553954A (en) Method and apparatus for training behavior recognition model, device, medium, and program product
CN114842524B (en) Face false distinguishing method based on irregular significant pixel cluster
CN116152173A (en) Image tampering detection positioning method and device
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN114257697B (en) High-capacity universal image information hiding method
CN111798359A (en) Deep learning-based image watermark removing method
CN115880203A (en) Image authenticity detection method and image authenticity detection model training method
CN117391920A (en) High-capacity steganography method and system based on RGB channel differential plane
CN111882525A (en) Image reproduction detection method based on LBP watermark characteristics and fine-grained identification
CN111666977A (en) Shadow detection method of monochrome image
CN113065407B (en) Financial bill seal erasing method based on attention mechanism and generation countermeasure network
CN113570564B (en) Multi-definition fake face video detection method based on multi-path convolution network
CN115482463A (en) Method and system for identifying land cover of mine area of generated confrontation network
CN113706636A (en) Method and device for identifying tampered image
CN112991200B (en) Method and device for adaptively enhancing infrared image
CN114842399B (en) Video detection method, training method and device for video detection model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant