CN111696021B - Image self-adaptive steganalysis system and method based on significance detection - Google Patents
- Publication number
- CN111696021B CN111696021B CN202010524234.1A CN202010524234A CN111696021B CN 111696021 B CN111696021 B CN 111696021B CN 202010524234 A CN202010524234 A CN 202010524234A CN 111696021 B CN111696021 B CN 111696021B
- Authority
- CN
- China
- Prior art keywords
- saliency
- image
- map
- module
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0065—Extraction of an embedded watermark; Reliable detection
Abstract
The invention belongs to the technical field of image processing and discloses an image adaptive steganalysis system based on saliency detection, together with the corresponding analysis method. First, images misclassified by the discriminator module are input into the saliency detection module to form saliency maps. Next, the region screening module extracts the saliency maps that meet the screening requirement, and each of these is fused with its corresponding original image to form a saliency fusion map. Finally, the saliency maps that fail the requirement are replaced by their original images; these originals and the saliency fusion maps are combined into an updated data set, which is input to the discriminator module for training, so that the discriminator performs targeted feature learning on the regions that strongly coincide with the steganographic region. The method uses saliency detection to guide the steganalysis model to attend to the features of the image's steganographic region, thereby improving the model's training effect and detection accuracy.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to an image adaptive steganalysis system and method based on saliency detection.
Background
Image steganography is a covert-communication technique that embeds a secret message in an image carrier file for transmission. Unlike traditional encrypted communication, which a third party merely finds hard to decipher, image steganography hides the communication behavior itself: the embedding of the secret message is difficult for a third party to perceive, which makes it highly deceptive. In particular, the image adaptive steganography proposed in recent years preferentially embeds the secret information into texture-complex regions of the image; at low embedding rates it is even harder to detect, which greatly improves the security of steganography and poses a serious challenge to image steganalysis.
The main idea of content-adaptive steganography algorithms, which combine a distortion function with syndrome-trellis codes (STC), has two parts: quantitative analysis of the change cost via the distortion function, and embedding via STC. The distortion function captures the change in local or global features after embedding, for example by computing the distortion that would result from modifying each element; STC then considers these costs jointly to decide which elements are finally changed, so that the overall distortion is minimized. The most common adaptive steganography algorithms are HUGO, WOW, S-UNIWARD, UED, and J-UNIWARD. Elements located in texture-complex regions of the image are more likely to be modified than elements in smooth regions, because the perturbation caused by the steganography algorithm is statistically less perceptible in those complex regions.
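As a concrete illustration of this cost-then-embed idea, the sketch below uses a hypothetical local-variance cost and a greedy selection in place of STC; it is not any of the named algorithms, only a toy showing why texture-complex pixels attract the modifications:

```python
# Toy sketch of cost-then-embed adaptive steganography. Real schemes
# (HUGO, WOW, S-UNIWARD, ...) pair a carefully designed distortion
# function with syndrome-trellis codes (STC); here a hypothetical
# local-variance cost and a greedy pick stand in for both.

def local_cost(image, i, j):
    """Cost of changing pixel (i, j): low in busy 3x3 neighbourhoods,
    high in smooth ones (illustrative, not a published cost)."""
    h, w = len(image), len(image[0])
    neigh = [image[y][x]
             for y in range(max(0, i - 1), min(h, i + 2))
             for x in range(max(0, j - 1), min(w, j + 2))]
    mean = sum(neigh) / len(neigh)
    variance = sum((v - mean) ** 2 for v in neigh) / len(neigh)
    return 1.0 / (1.0 + variance)  # high texture -> low cost

def embed_bits(image, n_changes):
    """Flip the LSB of the n_changes cheapest pixels (greedy, not STC)."""
    h, w = len(image), len(image[0])
    costs = sorted((local_cost(image, i, j), i, j)
                   for i in range(h) for j in range(w))
    stego = [row[:] for row in image]
    for _, i, j in costs[:n_changes]:
        stego[i][j] ^= 1  # toy payload: flip least-significant bit
    return stego
```

Run on an image whose top half is flat and bottom half is textured, the greedy pass changes only bottom-half pixels, mirroring the behavior described above.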
However, most existing work on image steganalysis improves detection performance by refining the network structure of the model. When image adaptive steganography is used, the secret message is not embedded in all regions of the image, yet the image carries rich information across many dimensions, so not all of that information benefits training: the model suffers unnecessary redundant interference during training, and detection accuracy drops. It is therefore necessary to provide a steganalysis method that targets the steganographic region specifically and guides the model to focus on its features.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention aims to provide an image adaptive steganalysis system and method based on saliency detection, in which saliency detection guides the steganalysis model to attend to the features of the image's steganographic region, thereby improving the model's training effect and detection accuracy.
The invention is realized by the following technical scheme:
an image self-adaptive steganalysis method based on significance detection comprises the following steps:
1) Segmenting the salient region of each image with a detection error to form a saliency map;
2) Screening the saliency map according to the coincidence degree of the saliency region and the steganography region, extracting the saliency map meeting the requirements, and carrying out image fusion on the saliency map meeting the requirements and the corresponding original image to form a saliency fusion map; the saliency map meeting the requirement is an image with high coincidence degree of a saliency region and a steganographic region;
3) Replacing the unqualified significance map with an original image, and combining the original image and the significance fusion map into an updated data set;
4) Training with the updated data set.
Further, in step 1), a discriminator module is used for detecting wrong images, and the discriminator module adopts an SRNet model.
Further, in the step 2), the image fusion specifically comprises: setting all pixels outside the salient region of the image to 0, so that the discriminator module focuses only on the image features of the salient region.
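The fusion described above (zeroing every pixel outside the salient region) can be sketched as follows; the nested-list image representation is an illustrative assumption:

```python
def fuse_with_saliency(image, saliency_mask):
    """Saliency fusion as described: keep the pixels inside the salient
    region and set every other pixel to 0, so the discriminator sees
    only salient-region features. Images are nested lists (assumption)."""
    return [[pix if mask else 0
             for pix, mask in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, saliency_mask)]
```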
Further, in the step 1), a saliency detection module is used for segmenting a saliency area of the image;
the significance detection module adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module, and the significance graph is formed by the following specific steps:
introducing a prediction module of a BASNet model and a multi-scale residual error optimization module into a network, and obtaining a rough significance map through the prediction module;
and the multi-scale residual optimization module optimizes the rough significance map of the prediction module by learning the residual between the rough significance map and the real label, and finally obtains the refined significance map.
Further, in the step 2), the saliency map is screened by using an area screening module, and the saliency map meeting the requirement is extracted.
Further, in step 2), the method for calculating the coincidence degree η between the saliency region and the steganographic region is as follows:
Formula (1) counts the overlapping pixels, and formula (2) is obtained from formula (1):

N_coin = Σ_{i,j} [P_Stego(i,j) ≠ 0 and P_SOD(i,j) ≠ 0]   (1)

η = N_coin / N_stego   (2)

wherein N represents the total number of pixel points in the image, N_coin the number of pixels in the overlapping region, N_stego the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) the pixel values of the steganographic point map and the saliency map at position (i,j), respectively.
Further, in the step 2), the coincidence degree corresponding to a saliency map meeting the requirement is between 0.6 and 1.
The invention also discloses an image self-adaptive steganalysis system based on significance detection, which comprises a significance detection module, a region screening module and a discriminator module;
the saliency detection module is used for generating a saliency region of an image to be detected, adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module;
the prediction module is used for obtaining a rough saliency map; the multi-scale residual optimization module is used for optimizing the rough significance map of the prediction module by learning the residual between the rough significance map and the real label to finally obtain a refined significance map;
the region screening module is used for screening the saliency map and extracting the saliency map meeting the requirement;
the discriminator module employs an SRNet model for providing an image of the initial detection error and retraining of the updated data set.
Further, the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder and an output layer.
Further, both the encoder and the decoder have four stages, each stage with only one convolutional layer, each layer having 64 filters of size 3 × 3;
the bridge layer is also provided with a convolutional layer, and the convolutional layer has the same parameters as other convolutional layers.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention discloses an image self-adaptive steganalysis method based on significance detection, which comprises the steps of processing an image with a detection error to form a significance map; then, screening the saliency map according to the coincidence degree of the saliency region and the steganography region, extracting the saliency map meeting the requirement, and carrying out image fusion on the saliency map and the corresponding original image to form a saliency fusion map; and finally, replacing the saliency map which does not meet the requirement with an original image, combining the part of original image and the saliency fusion map into an updated data set for training, and performing targeted feature learning on the region with higher coincidence degree with the steganographic region. Compared with the conventional convolutional network model, the model can be guided to learn the characteristics of the image steganography region, and the method has better pertinence. The invention carries out data statistical analysis on the training set, extracts the images meeting the conditions for processing, and can ensure the effectiveness of the processing. Experiments show that the method is universal in the airspace and JPEG domains and has good overall performance by performing steganalysis on a data set embedded by the adaptive steganography algorithm of the airspace and JPEG domains.
Furthermore, experimental comparison and analysis show that screening out the images whose coincidence degree between the salient region and the steganographic region lies between 0.6 and 1 yields a good model training effect.
The invention also discloses an image adaptive steganalysis system based on saliency detection, comprising a saliency detection module, a region screening module and a discriminator module. In the simulated steganalysis system, the images misclassified by the discriminator module are input to the saliency detection module to form saliency maps; the region screening module then extracts the saliency maps meeting the requirement, and each is fused with its corresponding original image to form a saliency fusion map; finally, the saliency maps failing the requirement are replaced by their original images, and these originals together with the saliency fusion maps are combined into an updated data set that is input to the discriminator module for training, so that the discriminator performs targeted feature learning on the regions with high coincidence with the steganographic region. Compared with prior work, the system can test any image from the network, which improves its generalization, and it visually outputs the classification probabilities and result, giving it practical value as well as a degree of generalization capability and practicability.
Drawings
FIG. 1 is an overall framework diagram of the method of the present invention;
FIG. 2 is a comparative experimental graph of saliency and steganographic regions performed by the present invention;
FIG. 3 is a data statistics graph of the overlap ratio of saliency regions and steganographic regions according to the present invention;
FIG. 4 is a comparison of experiments for different training strategies according to the present invention;
FIG. 5 is a graph of steganalysis systems and test results simulated in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
the invention discloses an image self-adaptive steganalysis system based on significance detection, which mainly comprises three modules: the device comprises a significance detection module, a region screening module and a discriminator module.
As shown in fig. 1, the images misclassified by the discriminator module are first input into the saliency detection module to form saliency maps; the region screening module then extracts the saliency maps meeting the requirement, and each is fused with its corresponding original image to form a saliency fusion map; finally, the saliency maps failing the requirement are replaced by their original images, and these originals together with the saliency fusion maps are combined into an updated data set that is input to the discriminator module for training, so that the discriminator performs targeted feature learning on the regions with high coincidence with the steganographic region.
The saliency detection module is used to generate the salient region of the image to be detected and specifically adopts the BASNet model. The invention introduces BASNet's prediction module and multi-scale residual optimization module into the network. The prediction module is a densely supervised encoder-decoder network similar to U-Net that learns to predict the salient region from the input image, producing a coarse saliency map; the multi-scale residual optimization module refines this coarse map by learning the residual between the coarse saliency map and the ground-truth label, finally obtaining the refined saliency map:
S_refined = S_coarse + S_residual

wherein S_coarse, S_residual and S_refined respectively denote the coarse saliency map output by the prediction module, the learned residual between the coarse saliency map and the ground-truth label, and the refined saliency map.
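A minimal sketch of this refinement step, assuming the maps are nested lists of values in [0, 1] and clamping the sum back into that range (the clamping is our assumption, not stated in the text):

```python
def refine(coarse, residual):
    """S_refined = S_coarse + S_residual on nested-list maps, with the
    result clamped to the valid saliency range [0, 1] (the clamping is
    our assumption; BASNet handles value ranges inside the network)."""
    return [[min(1.0, max(0.0, c + r))
             for c, r in zip(c_row, r_row)]
            for c_row, r_row in zip(coarse, residual)]
```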
The main architecture of the multi-scale residual optimization module is simpler than that of the prediction module, comprising an input layer, an encoder, a bridge layer, a decoder and an output layer. The encoder and decoder each have four stages, each stage with only one convolutional layer, and each layer has 64 filters of size 3 × 3 followed by batch normalization (BN) and ReLU activation; the bridge layer also has one convolutional layer with the same parameters as the others; a max-pooling layer (maxpool) is used for downsampling in the encoder, and a bilinear upsampling layer (bilinear upsample) for upsampling in the decoder. The output of the multi-scale residual optimization module is the final saliency map. To obtain high-quality regional segmentation and clear boundaries, the model defines a hybrid loss ℓ^(k) during training, expressed as:
the loss combines BCE, SSIM and IoU losses, and is beneficial to reducing false errors generated by learning information on the boundary through cross propagation, so that the boundary is more refined.
As shown in fig. 2, the salient region of an image is the white region in the pictures of row 2, and the steganographic region is the scattered-point region in the pictures of row 3. Comparing the two, the content of a picture can be analyzed after saliency detection: when the salient target in the image is distinct (for example, the 4th column from the left), the coincidence degree between the marked salient region and the steganographic region is high; when the salient target is blurred (for example, the 2nd column from the right), the coincidence degree is low. Therefore the saliency maps must be screened so that only the images meeting the condition are extracted and processed, ensuring the effectiveness of the processing.
The region screening module is used to screen out the images that meet the processing requirement, ensuring the effectiveness of processing. In this module, data statistics and analysis are performed on the coincidence degree between the salient region and the steganographic region. In image adaptive steganography the secret information is embedded in texture-complex regions; such regions are also the more prominent ones to the human eye and are therefore marked as salient regions in saliency detection. Data analysis is performed on the BOSSbase 1.01 data set: its 10000 digital images of size 512 × 512 are steganographed with the J-UNIWARD image adaptive steganography algorithm at an embedding rate of 0.4 bpp. To analyze the overall data distribution, statistics of the coincidence degree between the salient region and the steganographic region are collected over the 10000 pictures and represented as the scatter diagram shown in fig. 3.
Data analysis shows that the coincidence degree between the salient region and the steganographic region is concentrated between 0.5 and 1, with part concentrated between 0 and 0.2 (as in the 2nd column from the right of fig. 2), indicating that not all images are suitable for saliency processing; the images meeting the requirement must be screened out so that the processing result is guaranteed to be effective. The coincidence degree η between the salient region and the steganographic region is calculated as follows:

N_coin = Σ_{i,j} [P_Stego(i,j) ≠ 0 and P_SOD(i,j) ≠ 0]   (1)

η = N_coin / N_stego   (2)

wherein N represents the total number of pixel points in the image, N_coin the number of pixels in the overlapping region, N_stego the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) the pixel values of the steganographic point map and the saliency map at position (i,j), respectively. The steganographic point map marks the positions of the pixels changed by the J-UNIWARD adaptive steganography algorithm.
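One possible implementation of the coincidence degree and the threshold screening that follows, treating both maps as nested lists where a nonzero value means "changed" or "salient" (this nonzero interpretation is our assumption):

```python
def overlap_ratio(stego_map, sod_map):
    """eta = N_coin / N_stego: the fraction of steganographically
    changed pixels that also fall inside the detected salient region.
    Nonzero is read as 'changed'/'salient' (our interpretation of
    formulas (1)-(2))."""
    n_coin = sum(1 for s_row, d_row in zip(stego_map, sod_map)
                 for s, d in zip(s_row, d_row) if s and d)
    n_stego = sum(1 for s_row in stego_map for s in s_row if s)
    return n_coin / n_stego if n_stego else 0.0

def passes_screening(stego_map, sod_map, k=0.7):
    """Region screening: keep the saliency map only when the
    coincidence degree reaches the threshold K."""
    return overlap_ratio(stego_map, sod_map) >= k
```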
In the region screening module, experimental comparison and analysis show that screening out the images whose coincidence degree between the salient region and the steganographic region lies between 0.6 and 1 gives a good training effect for the model; the comparison experiment is shown in table 1, where setting the screening threshold K to 0.7 gives the best training effect. After a saliency map is screened in, it is fused with the original image using image fusion: all pixels outside the salient region are set to 0, so that the model focuses only on the image features of the salient region.
TABLE 1 Detection accuracy (%) under different region screening thresholds
The discriminator module is used to provide the initially misdetected images and to retrain on the updated data set. The experiments use the SRNet model as the discriminator. SRNet consists of four parts: the first two parts (layers 1-7) are responsible for extracting the noise residual; the third part (layers 8-11) reduces the dimensionality of the feature maps; and the last part is a standard fully connected layer with a Softmax linear classifier. All convolutional layers use 3 × 3 kernels, and all nonlinear activation functions are ReLU.
Detection errors fall into two types: a steganographic image discriminated as an original image, and an original image discriminated as a steganographic image. The initially misdetected images are those the trained SRNet discriminates incorrectly, and they include both steganographic images and original images.
To verify the effectiveness of the method in detecting adaptive steganography algorithms, steganalysis is performed on data sets embedded by adaptive steganography algorithms in the spatial domain and the JPEG domain respectively, as shown in tables 2 and 3. The experimental results show that the method is general across the spatial and JPEG domains and performs well overall.
Table 2 Detection accuracy (%) of different spatial-domain steganography algorithms
TABLE 3 Detection accuracy (%) of different steganography algorithms in the JPEG domain
Note: the SRNet row in each table is the result of the first pass of the discriminator; the row of the proposed method is the result of the second pass after applying the method of the invention; the quality factor QF is not distinguished in the spatial domain, only in the JPEG domain.
FIG. 4 is an experimental comparison of different training strategies. With the first strategy, replacing all images, the model trains poorly: detection accuracy is low and convergence is difficult, mainly because the model loses many image features during learning, degrading detection performance. The second strategy therefore replaces only the images misdetected by the discriminator with their saliency maps; its detection accuracy clearly improves over the first strategy but remains below the SRNet baseline without replacement. The third strategy replaces with saliency maps only the images that pass the region screening module; experiments show that this strategy gives the best effect and converges quickly.
The discriminator is analogous to a person: it cannot discriminate without learning, and only after a period of learning (running on a computer) can it discriminate; this learning process is called training. The final effect of the invention's changes is thus reflected in the discriminator's accuracy, and the method is in essence an improvement of the training.
FIG. 5 shows the simulated steganalysis system and its test results. Here cover denotes the original image and stego the steganographic image; the numbers after cover and stego are the probabilities of belonging to each class, and the higher probability indicates the class of the image. In the system, the trained model is called to detect any image from the network.
First, an image is randomly downloaded from the network; since most images on the network are in color, the three-channel RGB image is converted into a single-channel grayscale image named cover.jpg. Second, steganography is performed with the J-UNIWARD adaptive steganography algorithm at embedding rates of 0.1-0.4 bpp; the embedded images are named stego-0.1.jpg, stego-0.2.jpg, stego-0.3.jpg and stego-0.4.jpg respectively, and the four steganographic images cannot be distinguished by the naked eye. Finally, the four steganographic images are input into the steganalysis detection system. The system judges whether an input image is an original image or a steganographic image and displays the probability of each class together with the detection result; the results show that the probability of the steganographic class grows with the embedding rate, further confirming the accuracy of the detection.
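The first step's channel reduction can be sketched as below; the ITU-R BT.601 luma weights are a common choice, but the patent does not state which conversion its system uses:

```python
def rgb_to_gray(rgb_image):
    """Collapse a three-channel RGB image (nested lists of (r, g, b)
    tuples) to single-channel grayscale using ITU-R BT.601 luma
    weights -- a common choice; the exact conversion used by the
    patent's system is an assumption here."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in rgb_image]
```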
Claims (10)
1. An image adaptive steganalysis method based on significance detection is characterized by comprising the following steps:
1) Segmenting the salient region of each image with a detection error to form a saliency map;
2) Screening the saliency map according to the coincidence degree of the saliency region and the steganographic region, extracting the saliency map meeting the requirements, and carrying out image fusion on the saliency map meeting the requirements and the corresponding original image to form a saliency fusion map; the saliency map meeting the requirement is an image with high coincidence degree of a saliency region and a steganographic region;
3) Replacing the unqualified saliency map with an original image, and combining the original image and the saliency fusion map into an updated data set;
4) Training with the updated data set.
2. The adaptive steganalysis method for images based on saliency detection as claimed in claim 1, characterized in that in step 1), the erroneous images are detected using a discriminator module, which uses SRNet model.
3. The adaptive image steganalysis method based on saliency detection as claimed in claim 2, wherein in step 2), the image fusion specifically is: setting the rest pixels except the pixels of the salient region in the image to be 0, and enabling the discriminator module to focus on the image characteristics of the salient region only.
4. The image adaptive steganalysis method based on significance detection according to claim 1, characterized in that in step 1), a significance detection module is used to segment out significance regions of an image;
the significance detection module adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module, and the significance graph is formed by the following specific steps:
introducing a prediction module of a BASNet model and a multi-scale residual error optimization module into a network, and obtaining a rough significance map through the prediction module;
and the multi-scale residual optimization module optimizes the rough significance map of the prediction module by learning the residual between the rough significance map and the real label, and finally obtains the refined significance map.
5. The image adaptive steganalysis method based on saliency detection as claimed in claim 1, characterized in that in step 2), a region screening module is used to screen saliency maps and extract saliency maps meeting requirements.
6. The adaptive image steganalysis method based on saliency detection according to claim 1, characterized in that in step 2), the coincidence degree η between a saliency region and a steganographic region is calculated as follows:

N_coin = Σ_{i,j} [P_Stego(i,j) ≠ 0 and P_SOD(i,j) ≠ 0]   (1)

η = N_coin / N_stego   (2)

wherein N represents the total number of pixel points in the image, N_coin the number of pixels in the overlapping region, N_stego the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) the pixel values of the steganographic point map and the saliency map at position (i,j), respectively.
7. The adaptive steganalysis method for images based on saliency detection as claimed in claim 1, characterized in that in step 2), coincidence degree corresponding to the saliency map meeting the requirement is 0.6-1.
8. An image adaptive steganalysis system for implementing the image adaptive steganalysis method based on saliency detection of any one of claims 1 to 7, comprising a saliency detection module, a region screening module and a discriminator module;
the saliency detection module is used for generating a saliency region of an image to be detected, adopts a BASNet model and comprises a prediction module and a multi-scale residual error optimization module;
the prediction module is used for obtaining a rough saliency map; the multi-scale residual optimization module is used for optimizing the rough significance map of the prediction module by learning the residual between the rough significance map and the real label to finally obtain a refined significance map;
the region screening module is used for screening the significance map and extracting the significance map meeting the requirement;
the discriminator module employs an SRNet model and is used for providing the images that are initially detected in error and for retraining on the updated data set.
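The three-module system of claim 8 can be sketched as a simple pipeline; `detect_saliency` and `classify` below are hypothetical stand-ins for the BASNet-based detector and the SRNet-based discriminator, which the claim does not specify at code level:

```python
import numpy as np

def detect_saliency(image):
    """Stand-in for the BASNet saliency detection module:
    returns a binary saliency map (here: a trivial thresholding)."""
    return (image > image.mean()).astype(int)

def screen_region(saliency_map, stego_map, low=0.6, high=1.0):
    """Region screening module: keep the saliency map only if the
    overlap ratio with the steganographic region lies in [0.6, 1]."""
    n_stego = np.sum(stego_map)
    eta = np.sum(saliency_map * stego_map) / n_stego if n_stego else 0.0
    return low <= eta <= high

def classify(image):
    """Stand-in for the SRNet discriminator (dummy parity rule)."""
    return "stego" if image.sum() % 2 else "cover"

# Hypothetical pipeline run on a toy 4x4 "image".
image = np.arange(16).reshape(4, 4)
stego_map = np.zeros((4, 4), dtype=int)
stego_map[2:, 2:] = 1                   # hypothetical embedding region
saliency = detect_saliency(image)
if screen_region(saliency, stego_map):  # sample passes screening
    label = classify(image)
```

The sketch only shows the data flow between the modules; in the patented system each stand-in would be replaced by the corresponding trained network.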
9. The image adaptive steganalysis system based on saliency detection as claimed in claim 8, characterized in that the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder and an output layer.
10. The image adaptive steganalysis system based on saliency detection as claimed in claim 9, characterized in that both the encoder and the decoder have four stages, each stage having only one convolution layer, and each layer having 64 filters of size 3×3;
the bridge layer is also provided with a convolution layer whose parameters are the same as those of the other convolution layers.
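For orientation (not part of the claim), the weight count of one such 3×3, 64-filter convolution layer can be checked with a short calculation; the assumption that a layer sees 64 input channels holds for the interior layers but not for the input layer, which sees the raw image channels:

```python
# One 3x3 convolution with 64 filters over a 64-channel input
# (interior layers of the refinement module), plus one bias per filter.
k, c_in, c_out = 3, 64, 64
weights = k * k * c_in * c_out       # 3*3*64*64 = 36864
biases = c_out                       # 64
params_per_layer = weights + biases  # 36928

# Interior convolutions: encoder (4) + bridge (1) + decoder (4).
interior_layers = 4 + 1 + 4
total = interior_layers * params_per_layer
print(params_per_layer, total)
```

This illustrates why the refinement module is lightweight compared with the prediction network: roughly a third of a million parameters in its interior convolutions.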
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010524234.1A CN111696021B (en) | 2020-06-10 | 2020-06-10 | Image self-adaptive steganalysis system and method based on significance detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111696021A CN111696021A (en) | 2020-09-22 |
CN111696021B true CN111696021B (en) | 2023-03-28 |
Family
ID=72480120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010524234.1A Active CN111696021B (en) | 2020-06-10 | 2020-06-10 | Image self-adaptive steganalysis system and method based on significance detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111696021B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112637605B (en) * | 2020-11-11 | 2022-01-11 | 中国科学院信息工程研究所 | Video steganalysis method and device based on analysis of CAVLC code words and number of nonzero DCT coefficients |
CN112785478B (en) * | 2021-01-15 | 2023-06-23 | 南京信息工程大学 | Hidden information detection method and system based on generation of embedded probability map |
CN112991344A (en) * | 2021-05-11 | 2021-06-18 | 苏州天准科技股份有限公司 | Detection method, storage medium and detection system based on deep transfer learning |
CN114782697B (en) * | 2022-04-29 | 2023-05-23 | 四川大学 | Self-adaptive steganography detection method for anti-domain |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016165082A1 (en) * | 2015-04-15 | 2016-10-20 | 中国科学院自动化研究所 | Image stego-detection method based on deep learning |
CN106157319A (en) * | 2016-07-28 | 2016-11-23 | 哈尔滨工业大学 | The significance detection method that region based on convolutional neural networks and Pixel-level merge |
Non-Patent Citations (2)
Title |
---|
JPEG image steganalysis combining classification and segmentation; Wang Ran et al.; Journal of Image and Graphics (No. 10); full text *
A survey of perceptual hashing techniques for image tampering detection; Du Ling et al.; Journal of Frontiers of Computer Science and Technology (No. 05); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111696021A (en) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111696021B (en) | Image self-adaptive steganalysis system and method based on significance detection | |
CN109492416B (en) | Big data image protection method and system based on safe area | |
CN112150450B (en) | Image tampering detection method and device based on dual-channel U-Net model | |
CN114677346A (en) | End-to-end semi-supervised image surface defect detection method based on memory information | |
CN113920094B (en) | Image tampering detection technology based on gradient residual U-shaped convolutional neural network | |
CN110020658B (en) | Salient object detection method based on multitask deep learning | |
CN114842524B (en) | Face false distinguishing method based on irregular significant pixel cluster | |
Wang et al. | HidingGAN: High capacity information hiding with generative adversarial network | |
CN114724008B (en) | Method and system for detecting deep fake image by combining multi-scale features | |
CN113553954A (en) | Method and apparatus for training behavior recognition model, device, medium, and program product | |
Zhou et al. | Deep multi-scale features learning for distorted image quality assessment | |
CN116152173A (en) | Image tampering detection positioning method and device | |
CN118135641B (en) | Face counterfeiting detection method based on local counterfeiting area detection | |
CN114257697B (en) | High-capacity universal image information hiding method | |
CN111476727A (en) | Video motion enhancement method for face changing video detection | |
CN118037641A (en) | Multi-scale image tampering detection and positioning method based on double-flow feature extraction | |
CN117315284A (en) | Image tampering detection method based on irrelevant visual information suppression | |
CN117391920A (en) | High-capacity steganography method and system based on RGB channel differential plane | |
CN116311430A (en) | Depth forging detection method and device based on image diversity characteristics | |
CN113065407B (en) | Financial bill seal erasing method based on attention mechanism and generation countermeasure network | |
CN111738254A (en) | Automatic identification method for panel and screen contents of relay protection device | |
CN113989494B (en) | Image recognition method and device under complex climate condition | |
CN114842399B (en) | Video detection method, training method and device for video detection model | |
CN112132735B (en) | Carrier selection method avoiding pretreatment | |
CN118154906B (en) | Image tampering detection method based on feature similarity and multi-scale edge attention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||