CN111696021B - Image self-adaptive steganalysis system and method based on significance detection - Google Patents


Info

Publication number
CN111696021B (application CN202010524234.1A)
Authority
CN
China
Prior art keywords: saliency, image, module, map, area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010524234.1A
Other languages
Chinese (zh)
Other versions
CN111696021A (en)
Inventor
张敏情
黄思远
柯彦
毕新亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Engineering University of Chinese Peoples Armed Police Force
Original Assignee
Engineering University of Chinese Peoples Armed Police Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Engineering University of Chinese Peoples Armed Police Force
Priority to CN202010524234.1A
Publication of CN111696021A
Application granted
Publication of CN111696021B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G06T2201/00 General purpose image data processing
    • G06T2201/005 Image watermarking
    • G06T2201/0065 Extraction of an embedded watermark; Reliable detection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and discloses an image-adaptive steganalysis system based on saliency detection, together with an analysis method. Specifically: first, images misclassified by a discriminator module are fed into a saliency detection module to produce saliency maps; next, a region screening module extracts the saliency maps that meet the requirements and fuses each with its corresponding original image to form a saliency fusion map; finally, the saliency maps that do not meet the requirements are replaced with their original images, and these original images together with the saliency fusion maps are combined into an updated dataset that is fed back into the discriminator module for training, so that the discriminator performs targeted feature learning on regions that overlap strongly with the steganographic region. The method uses saliency detection to guide the steganalysis model to focus on the features of the image's steganographic regions, thereby improving the model's training effect and detection accuracy.

Description

An image-adaptive steganalysis system and method based on saliency detection

Technical Field

The invention belongs to the technical field of image processing and relates to an image-adaptive steganalysis system and method based on saliency detection.

Background Art

Image steganography is a covert communication technique that embeds secret messages in image carrier files for transmission. Unlike traditional encrypted communication, which merely makes messages hard for a third party to decipher, image steganography conceals the communication itself, making the embedding of the secret message hard to detect and therefore highly deceptive. Image steganalysis is dedicated to detecting messages embedded by image steganography; the two form an adversarial pair that drive each other's development. The difficulty of image steganalysis is that a steganographic operation embeds very little information into the image, so the weak differences it introduces are easily masked by differences in image content. In particular, the image-adaptive steganography proposed in recent years preferentially embeds secret information in regions of complex texture; at low embedding rates it is even harder to detect, which greatly improves the security of steganography and poses a serious challenge to image steganalysis.

The main idea of content-adaptive steganography based on combining a distortion function with STC (syndrome-trellis codes) has two parts: quantitative analysis of modification cost via the distortion function, and embedding via STC. The distortion function captures, through quantitative analysis, the change in local or global features after embedding, for example by computing the possible distortion caused by modifying each element; STC then uses these costs to decide which elements are ultimately modified so that the overall distortion is minimized. The most commonly used adaptive steganographic algorithms are HUGO, WOW, S-UNIWARD, UED, and J-UNIWARD. Elements in textured regions of an image have a higher modification probability than elements in smooth regions, because perturbations introduced in statistically complex regions are harder to notice.
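As a toy illustration of the cost-minimization idea only (a hypothetical stand-in, not real STC coding, which uses trellis-based matrix embedding), one can greedily flip the least significant bits of the lowest-cost pixels:

```python
# Toy sketch of cost-based adaptive embedding: flip the LSBs of the pixels
# whose distortion cost is lowest. Function name and data are illustrative.
def embed_lowest_cost(pixels, costs, n_changes):
    """pixels: list of ints; costs: per-pixel distortion costs."""
    order = sorted(range(len(pixels)), key=lambda i: costs[i])
    out = list(pixels)
    for i in order[:n_changes]:
        out[i] ^= 1  # flip the least significant bit
    return out

pixels = [10, 200, 37, 44, 91]
costs = [0.9, 0.1, 0.5, 0.05, 0.7]           # low cost = textured region
print(embed_lowest_cost(pixels, costs, 2))   # [10, 201, 37, 45, 91]
```

Real STC additionally guarantees that the receiver can decode the message from the modified positions; this sketch only shows why changes concentrate where the cost (i.e. detectability) is low.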

However, most existing research on image steganalysis improves the network structure to raise the detection performance of network-based steganalysis. With image-adaptive steganography, the secret message is not embedded in all regions of the image, while the image itself carries rich information, so the information contained in the image is not entirely beneficial to training. This introduces unnecessary, irrelevant interference during training and lowers detection accuracy. It is therefore necessary to propose a steganalysis method that targets the steganographic regions and guides the model to pay more attention to their features.

Summary of the Invention

To overcome the above defects of the prior art, the purpose of the present invention is to provide an image-adaptive steganalysis system and method based on saliency detection, which uses saliency detection to guide the steganalysis model to focus on the features of the image's steganographic regions, thereby improving the model's training effect and detection accuracy.

The present invention is achieved through the following technical solutions:

An image-adaptive steganalysis method based on saliency detection comprises the following steps:

1) Segment the salient regions of the misclassified images to form saliency maps;

2) Screen the saliency maps according to the degree of overlap between the salient region and the steganographic region and extract the saliency maps that meet the requirements; fuse each qualifying saliency map with its corresponding original image to form a saliency fusion map; a qualifying saliency map is one whose salient region overlaps strongly with the steganographic region;

3) Replace the saliency maps that do not meet the requirements with their original images, and combine these original images with the saliency fusion maps into an updated dataset;

4) Train with the updated dataset.
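A minimal sketch of steps 1)-4) above. All helpers here (`detect_saliency`, `overlap`, `fuse`) are toy stand-ins for the modules described later, not the patent's implementation; images are flattened to 1-D lists for brevity.

```python
# Toy stand-ins for the three modules (illustrative only).
def detect_saliency(image):
    # "Detector": mark pixels above the image mean as salient.
    mean = sum(image) / len(image)
    return [1 if p > mean else 0 for p in image]

def overlap(sal_map, stego_map):
    # Fraction of stego pixels that also fall in the salient region.
    coin = sum(1 for s, t in zip(sal_map, stego_map) if s and t)
    stego = sum(stego_map)
    return coin / stego if stego else 0.0

def fuse(image, sal_map):
    # Keep salient pixels, zero the rest (the fusion rule of step 2).
    return [p if s else 0 for p, s in zip(image, sal_map)]

def update_dataset(misclassified, threshold=0.7):
    """misclassified: list of (image, stego_change_map) pairs."""
    updated = []
    for image, stego_map in misclassified:
        sal = detect_saliency(image)              # step 1: saliency map
        if overlap(sal, stego_map) >= threshold:  # step 2: screening
            updated.append(fuse(image, sal))      # saliency fusion map
        else:
            updated.append(image)                 # step 3: keep the original
    return updated                                # step 4: retrain on this set
```

With a high-overlap image the fused map enters the dataset; with a low-overlap image the original is kept, mirroring the replacement rule of step 3.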

Furthermore, in step 1), a discriminator module is used to identify the misclassified images; the discriminator module adopts the SRNet model.

Furthermore, in step 2), the image fusion is specifically: set every pixel outside the salient region to 0, so that the discriminator module focuses only on the image features of the salient region.

Furthermore, in step 1), a saliency detection module is used to segment the salient region of the image.

The saliency detection module adopts the BASNet model, comprising a prediction module and a multi-scale residual optimization module. The saliency map is formed as follows:

The prediction module and multi-scale residual optimization module of the BASNet model are introduced into the network, and a coarse saliency map is obtained through the prediction module;

The multi-scale residual optimization module refines the prediction module's coarse saliency map by learning the residual between the coarse saliency map and the ground-truth annotation, finally yielding the refined saliency map.

Furthermore, in step 2), a region screening module is used to screen the saliency maps and extract those that meet the requirements.

Furthermore, in step 2), the overlap degree η between the salient region and the steganographic region is computed as follows:

[Equation (1) appears here only as an image in the original.]

Formula (2) follows from formula (1):

[Equation (2) appears here only as an image in the original.]

Here N denotes the total number of pixels in the image, N_coin the number of pixels in the coincident region, N_stego the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) the pixel values of the stego point map and the saliency map at position (i,j), respectively.

Furthermore, in step 2), a saliency map that meets the requirements has an overlap degree of 0.6 to 1.

The present invention also discloses an image-adaptive steganalysis system based on saliency detection, comprising a saliency detection module, a region screening module, and a discriminator module.

The saliency detection module generates the salient region of the image under test; it adopts the BASNet model and comprises a prediction module and a multi-scale residual optimization module.

The prediction module produces a coarse saliency map; the multi-scale residual optimization module refines the coarse saliency map by learning the residual between it and the ground-truth annotation, finally yielding the refined saliency map.

The region screening module screens the saliency maps and extracts those that meet the requirements.

The discriminator module adopts the SRNet model and is used both to supply the initially misclassified images and to retrain on the updated dataset.

Furthermore, the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder, and an output layer.

Furthermore, the encoder and the decoder each have four stages; each stage has a single convolutional layer with 64 filters of size 3×3.

The bridge layer also contains a convolutional layer with the same parameters as the other convolutional layers.
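A single-filter illustration of one such 3×3 convolution followed by ReLU, stripped down from the module described above (one filter instead of 64, no batch normalization, 'valid' padding only; all simplifying assumptions):

```python
import numpy as np

# One 3x3 filter plus ReLU, a minimal stand-in for a stage of the module.
def conv3x3_relu(x, k):
    """x: 2-D input array, k: 3x3 kernel; 'valid' window, then ReLU."""
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)  # sliding window
    return np.maximum(out, 0.0)                          # ReLU activation

x = np.arange(16.0).reshape(4, 4)
k = np.zeros((3, 3)); k[1, 1] = 1.0  # identity kernel: picks the center pixel
print(conv3x3_relu(x, k))            # the four interior pixels 5, 6, 9, 10
```

A real stage stacks 64 such filters and adds batch normalization; the sliding-window arithmetic is the same.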

Compared with the prior art, the present invention has the following beneficial technical effects:

The invention discloses an image-adaptive steganalysis method based on saliency detection. First, the misclassified images are processed to form saliency maps. Then the saliency maps are screened according to the overlap between the salient region and the steganographic region; the qualifying saliency maps are extracted and fused with their corresponding original images to form saliency fusion maps. Finally, the non-qualifying saliency maps are replaced with their original images, and these original images together with the saliency fusion maps are combined into an updated dataset for training, so that targeted feature learning is performed on regions that overlap strongly with the steganographic region. Compared with previous convolutional network models, this guides the model to learn the features of the image's steganographic regions and is better targeted. The invention performs statistical analysis on the training set and extracts only the qualifying images for processing, which ensures the effectiveness of the processing. Steganalysis experiments on datasets embedded with adaptive steganographic algorithms in the spatial domain and the JPEG domain show that the invention is applicable to both domains and performs well overall.

Furthermore, comparative experiments show that screening out the images whose overlap between the salient region and the steganographic region lies between 0.6 and 1 gives a good training effect for the model.

The invention also discloses an image-adaptive steganalysis system based on saliency detection, comprising a saliency detection module, a region screening module, and a discriminator module, which together simulate a steganalysis system. Images misclassified by the discriminator module are input into the saliency detection module to form saliency maps; the region screening module then extracts the qualifying saliency maps, which are fused with their corresponding original images to form saliency fusion maps; finally, the non-qualifying saliency maps are replaced with their original images, and these original images together with the saliency fusion maps are combined into an updated dataset that is input into the discriminator module for training, so that the discriminator performs targeted feature learning on regions that overlap strongly with the steganographic region. Tests on images randomly downloaded from the Internet show that, compared with previous work, the system can test arbitrary images from the network, which improves its generalization, and it directly outputs the classification probabilities and results, giving it practical value along with a degree of generalization ability.

Brief Description of the Drawings

FIG. 1 is an overall framework diagram of the method of the present invention;

FIG. 2 shows the experiment comparing salient regions with steganographic regions;

FIG. 3 is a statistical plot of the overlap between salient regions and steganographic regions;

FIG. 4 is an experimental comparison of different training strategies;

FIG. 5 shows the steganalysis system simulated by the present invention and its detection results.

Detailed Description

The present invention is described in further detail below in conjunction with the accompanying drawings.

The invention discloses an image-adaptive steganalysis system based on saliency detection, composed mainly of three modules: a saliency detection module, a region screening module, and a discriminator module.

As shown in FIG. 1, the images misclassified by the discriminator module are first input into the saliency detection module to form saliency maps; the region screening module then extracts the qualifying saliency maps and fuses each with its corresponding original image to form a saliency fusion map; finally, the non-qualifying saliency maps are replaced with their original images, and these original images together with the saliency fusion maps are combined into an updated dataset that is input into the discriminator module for training, so that the discriminator performs targeted feature learning on regions that overlap strongly with the steganographic region.

The saliency detection module generates the salient region of the image under test, specifically using the BASNet model. The invention introduces BASNet's prediction module and multi-scale residual optimization module into the network. The prediction module is a densely supervised encoder-decoder network similar to U-Net that learns to predict salient regions from the input image, yielding a coarse saliency map; the multi-scale residual optimization module refines the coarse saliency map by learning the residual between it and the ground-truth annotation, finally yielding the refined saliency map. The central idea of the algorithm can be expressed as:

S_refined = S_coarse + S_residual

where S_coarse denotes the coarse saliency map produced by the prediction module, S_residual the residual between the coarse saliency map and the ground-truth annotation, and S_refined the refined saliency map.

The architecture of the multi-scale residual optimization module is simpler than that of the prediction module, comprising an input layer, an encoder, a bridge layer, a decoder, and an output layer. The encoder and the decoder each have four stages; each stage has a single convolutional layer with 64 filters of size 3×3, followed by batch normalization (BN) and ReLU activation. The bridge layer also contains a convolutional layer with the same parameters as the other convolutional layers. Max pooling is used for downsampling in the encoder, and bilinear upsampling is used for upsampling in the decoder. The module's output is the final saliency map. To obtain high-quality region segmentation and clear boundaries, the model is trained with a hybrid loss l^(k), expressed as:

l^(k) = l^(k)_bce + l^(k)_ssim + l^(k)_iou

This loss combines the BCE, SSIM, and IoU losses, which helps reduce the spurious errors that arise when learning information at the boundary and makes the boundary more refined.

As shown in FIG. 2, the salient regions of the images are the white areas in the second row of FIG. 2, and the steganographic regions are the scattered-point areas in the third row. Comparing the two after saliency detection shows that when the salient object in an image is well defined (e.g., the fourth image from the left), the marked salient region overlaps strongly with the steganographic region; when the salient object is vague (e.g., the second image from the right), the overlap is low. The saliency maps must therefore be screened and only the qualifying images processed, to ensure that the processing is effective.

The region screening module filters out the images that meet the processing requirements to ensure the effectiveness of the processing. In this module, the overlap between the salient region and the steganographic region is first statistically analyzed. In image-adaptive steganography, secret information is embedded in regions of complex texture, which are also the regions most salient to the human eye, so they are marked as salient regions by saliency detection. Data analysis uses the BOSSbase 1.01 dataset, which contains 10,000 digital images of 512×512 pixels, embedded with the J-UNIWARD image-adaptive steganographic algorithm at an embedding rate of 0.4 bpp. To analyze the overall data distribution, the overlap between the salient region and the steganographic region of the 10,000 images is computed and shown as a scatter plot in FIG. 3.

The data analysis shows that the overlap between the salient region and the steganographic region is concentrated between 0.5 and 1, while a portion is still concentrated between 0 and 0.2 (e.g., the second image from the right in FIG. 2). Not all images are therefore suitable for saliency processing, and the qualifying images must be screened out to ensure that the processing is effective. The overlap degree η between the salient region and the steganographic region is computed as follows:

[Equation (1) appears here only as an image in the original.]

[Equation (2) appears here only as an image in the original.]

Here N denotes the total number of pixels in the image, N_coin the number of pixels in the coincident region, N_stego the number of pixels in the steganographic region, and P_Stego(i,j) and P_SOD(i,j) the pixel values of the stego point map and the saliency map at position (i,j), respectively. The stego point map marks the pixel positions changed by the J-UNIWARD adaptive steganographic embedding.
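The formulas themselves survive only as images in this copy, so the following is a hedged reconstruction from the variable definitions above: it assumes η = N_coin / N_stego, with a pixel counted as coincident when both P_Stego(i,j) and P_SOD(i,j) are nonzero.

```python
import numpy as np

# Assumed reading of equations (1)-(2): eta = N_coin / N_stego, where N_coin
# counts positions at which both the stego point map and the saliency map
# are nonzero.
def overlap_degree(p_stego, p_sod):
    coincident = (p_stego > 0) & (p_sod > 0)   # members of the coincident region
    n_coin = int(coincident.sum())
    n_stego = int((p_stego > 0).sum())
    return n_coin / n_stego if n_stego else 0.0

p_stego = np.array([[0, 1], [1, 1]])     # pixels changed by embedding
p_sod = np.array([[0, 255], [0, 128]])   # saliency map
print(overlap_degree(p_stego, p_sod))    # prints 0.6666666666666666
```

Two of the three changed pixels fall inside the salient region here, so η = 2/3; the screening module would compare this value against the threshold K.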

In the region screening module, comparative experiments show that screening out the images whose overlap between the salient region and the steganographic region lies between 0.6 and 1 trains the model well; the comparison is shown in Table 1, and setting the screening threshold K to 0.7 gives the best training effect. After the saliency maps are screened out, image fusion is used to fuse each saliency map with its original image, setting every pixel outside the salient region to 0 so that the model attends only to the image features of the salient region.

Table 1. Detection accuracy (%) under different region screening thresholds

[Table 1 appears here only as an image in the original.]

The discriminator module supplies the initially misclassified images and retrains on the updated dataset. The method uses the SRNet model as the discriminator in the experiments. SRNet consists of four parts: the first two parts (layers 1-7) extract partial noise residuals and are outlined by the first two shaded areas in the figure; the third part (layers 8-11) reduces the dimensionality of the feature maps; the last part is a standard fully connected layer and a softmax linear classifier, with layer 12 computing the statistical moments of each feature map and feeding the feature vector to the classifier. All convolutional layers use 3×3 kernels, and all nonlinear activation functions are ReLU.

Detection errors fall into two classes: classifying a stego image as an original image, and classifying an original image as a stego image. The misclassified images supplied at the start are the images misjudged by the trained SRNet, and they include both stego images and original images.

To verify the effectiveness of the proposed method for detecting adaptive steganography, steganalysis was performed on datasets embedded with adaptive steganographic algorithms in the spatial domain and in the JPEG domain, as shown in Tables 2 and 3. The experimental results show that the invention is applicable to both the spatial and JPEG domains and performs well overall.

Table 2. Detection accuracy (%) of different steganographic algorithms in the spatial domain

[Table 2 appears here only as an image in the original.]

Table 3. Detection accuracy (%) of different steganographic algorithms in the JPEG domain

[Table 3 appears here only as an image in the original.]

Note: the SRNet row in the tables gives the results of the discriminator's first pass; the Proposed row gives the discriminator's second-pass results after applying the method of the present invention. The quality factor QF is not distinguished in the spatial domain, only in the JPEG domain.

FIG. 4 compares different training strategies. When the replacement strategy replaces all images, the model trains poorly: detection accuracy is very low and convergence is difficult, mainly because the model loses many of the image's own features during learning, degrading detection performance. The second training strategy therefore replaces only the images misclassified by the discriminator with saliency maps; its detection accuracy is clearly better than the first strategy's but still below the no-replacement SRNet baseline. Finally, the third strategy replaces only the images that qualify after the region screening module with saliency maps; experiments show that this strategy performs best and converges faster.

The discriminator is like a person: it cannot discriminate without first learning, and it must learn for some time (by running on a computer) before it can do so. This learning process is called "training". The final effect of the invention is therefore the discriminator's classification accuracy; in essence, the method improves the effect of "training".

FIG. 5 shows the steganalysis system simulated by the present invention and its detection results, where cover denotes the original image and stego the stego image; the values after cover and stego are the probabilities of belonging to each class, and the image is assigned to the class with the larger probability. In this system, the trained model is invoked to test arbitrary images from the network.

First, an image is downloaded at random from the Internet. Because most images on the Internet are color images, the three-channel RGB image is first converted to a single-channel grayscale image, named cover.jpg. Next, the J-UNIWARD adaptive steganography algorithm embeds data at rates of 0.1-0.4 bpp, and the resulting images are named stego-0.1.jpg, stego-0.2.jpg, stego-0.3.jpg, and stego-0.4.jpg; the four stego images cannot be distinguished from the cover by the naked eye. Finally, the four stego images are fed into the steganalysis detection system, whose function is to decide whether each input is a cover or a stego image and to display the per-class probabilities together with the detection result. The results show that the probability assigned to the stego class grows with the embedding rate, confirming the accuracy of the detection; the invention as a whole therefore has good generalization ability and practical value.
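The decision rule described above, picking whichever of the two class probabilities is larger, can be sketched as a two-class softmax over the model's output scores. This is a minimal illustration under the assumption that the discriminator emits one logit per class; the function names are not from the patent.

```python
# Two-class decision rule: softmax over (cover, stego) logits, then pick the
# class with the larger probability, as the detection system in FIG. 5 does.
import math

def softmax2(logit_cover, logit_stego):
    m = max(logit_cover, logit_stego)        # subtract the max for stability
    ec = math.exp(logit_cover - m)
    es = math.exp(logit_stego - m)
    return ec / (ec + es), es / (ec + es)

def classify(logit_cover, logit_stego):
    p_cover, p_stego = softmax2(logit_cover, logit_stego)
    label = "cover" if p_cover > p_stego else "stego"
    return label, p_cover, p_stego

label, pc, ps = classify(0.2, 1.4)
print(label)  # the class with the larger probability wins
```

A higher embedding rate leaves a stronger statistical trace, which is why the stego probability reported by the system grows with the rate.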

Claims (10)

1. An image adaptive steganalysis method based on saliency detection, characterized in that it comprises the following steps:

1) segmenting the salient region of each incorrectly detected image to form a saliency map;

2) screening the saliency maps according to the degree of overlap between the salient region and the steganographic region, extracting the saliency maps that meet the requirement, and fusing each qualifying saliency map with its corresponding original image to form a saliency fusion map; wherein a saliency map meets the requirement when the overlap between its salient region and the steganographic region is high;

3) replacing the saliency maps that do not meet the requirement with their original images, and combining these original images with the saliency fusion maps into an updated dataset;

4) training with the updated dataset.

2. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 1), a discriminator module is used to detect the erroneous images, and the discriminator module adopts the SRNet model.

3. The image adaptive steganalysis method based on saliency detection according to claim 2, characterized in that in step 2), the image fusion specifically comprises: setting all pixels of the image outside the salient region to 0, so that the discriminator module focuses only on the image features of the salient region.

4. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 1), a saliency detection module is used to segment the salient region of the image; the saliency detection module adopts the BASNet model, comprising a prediction module and a multi-scale residual optimization module, and the saliency map is formed as follows: the prediction module and the multi-scale residual optimization module of the BASNet model are introduced into the network, and a coarse saliency map is obtained through the prediction module; the multi-scale residual optimization module refines the coarse saliency map of the prediction module by learning the residual between the coarse saliency map and the ground-truth annotation, finally yielding the refined saliency map.

5. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 2), a region screening module is used to screen the saliency maps and extract those that meet the requirement.

6. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 2), the overlap degree η between the salient region and the steganographic region is calculated as follows:
[Formula (1), rendered as image FDA0002533234160000021 in the original: defines the coincident pixel count Ncoin in terms of PStego(i,j) and PSOD(i,j).]

From formula (1), formula (2) is obtained:

[Formula (2), rendered as image FDA0002533234160000022 in the original: defines the overlap degree η, which by the stated variable definitions is in substance the ratio of coincident pixels to steganographic-region pixels, η = Ncoin / Nstego.]
where N denotes the total number of pixels in the image, Ncoin the number of pixels in the coincident region, Nstego the number of pixels in the steganographic region, and PStego(i,j) and PSOD(i,j) the pixel values of the steganographic-point map and the saliency map at position (i,j), respectively.
7. The image adaptive steganalysis method based on saliency detection according to claim 1, characterized in that in step 2), a saliency map meets the requirement when its overlap degree is 0.6 to 1.

8. An image adaptive steganalysis system implementing the image adaptive steganalysis method based on saliency detection according to any one of claims 1 to 7, characterized in that it comprises a saliency detection module, a region screening module and a discriminator module; the saliency detection module is used to generate the salient region of the image under test and adopts the BASNet model, comprising a prediction module and a multi-scale residual optimization module; the prediction module is used to obtain a coarse saliency map; the multi-scale residual optimization module is used to refine the coarse saliency map of the prediction module by learning the residual between the coarse saliency map and the ground-truth annotation, finally yielding the refined saliency map; the region screening module is used to screen the saliency maps and extract those that meet the requirement; the discriminator module adopts the SRNet model and is used to provide the initially misdetected images and to retrain on the updated dataset.

9. The image adaptive steganalysis system based on saliency detection according to claim 8, characterized in that the multi-scale residual optimization module comprises an input layer, an encoder, a bridge layer, a decoder and an output layer.
10. The image adaptive steganalysis system based on saliency detection according to claim 9, characterized in that the encoder and the decoder each have four stages, each stage has only one convolutional layer, and each layer has 64 filters of size 3×3; the bridge layer is also provided with a convolutional layer whose parameters are the same as those of the other convolutional layers.
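The overlap degree of claim 6 can be sketched numerically. The exact formulas are image-rendered in the original, so the sketch below follows the variable definitions given there, taking η = Ncoin / Nstego as one consistent reading; the function name is illustrative.

```python
# Overlap degree between the salient region and the steganographic region,
# per the variable definitions of claim 6 (eta = N_coin / N_stego is an
# assumed reading of the image-rendered formulas; nonzero = in the region).
import numpy as np

def overlap_degree(stego_map, saliency_map):
    """Return eta = N_coin / N_stego for two same-shaped binary maps."""
    stego = stego_map != 0
    salient = saliency_map != 0
    n_stego = int(stego.sum())              # pixels in the steganographic region
    n_coin = int((stego & salient).sum())   # pixels in the coincident region
    return n_coin / n_stego if n_stego else 0.0

stego = np.array([[1, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]])
sal = np.array([[1, 0, 0],
                [0, 1, 1],
                [0, 0, 0]])
print(overlap_degree(stego, sal))  # 2 of the 3 stego pixels are salient → 0.666...
```

Under claim 7, a saliency map with η in the range 0.6 to 1, such as this example, would pass the region screening module.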
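The image fusion of claim 3, zeroing every pixel outside the salient region so the discriminator sees only salient-region features, can be sketched as a masked copy. This is a minimal sketch under the patent's description; the patent itself gives no code.

```python
# Saliency fusion per claim 3: keep pixels inside the salient region,
# set all other pixels of the image to 0.
import numpy as np

def fuse(image, saliency_mask):
    """Keep pixels where the saliency mask is nonzero; zero the rest."""
    return np.where(saliency_mask != 0, image, 0)

img = np.array([[10, 20],
                [30, 40]])
mask = np.array([[1, 0],
                 [0, 1]])
print(fuse(img, mask))  # only the masked-in pixels survive
```

The resulting saliency fusion map replaces the original image in the updated training set of claim 1, step 3).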
CN202010524234.1A 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection Active CN111696021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010524234.1A CN111696021B (en) 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection


Publications (2)

Publication Number Publication Date
CN111696021A CN111696021A (en) 2020-09-22
CN111696021B true CN111696021B (en) 2023-03-28

Family

ID=72480120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010524234.1A Active CN111696021B (en) 2020-06-10 2020-06-10 Image self-adaptive steganalysis system and method based on significance detection

Country Status (1)

Country Link
CN (1) CN111696021B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637605B (en) * 2020-11-11 2022-01-11 中国科学院信息工程研究所 Video steganalysis method and device based on analyzing CAVLC codewords and the number of non-zero DCT coefficients
CN112785478B (en) * 2021-01-15 2023-06-23 南京信息工程大学 Hidden information detection method and system based on generating embedded probability map
CN112991344A (en) * 2021-05-11 2021-06-18 苏州天准科技股份有限公司 Detection method, storage medium and detection system based on deep transfer learning
CN114782697B (en) * 2022-04-29 2023-05-23 四川大学 An Adaptive Steganographic Detection Method Against Sub-Domains

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165082A1 (en) * 2015-04-15 2016-10-20 中国科学院自动化研究所 Image stego-detection method based on deep learning
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JPEG image steganalysis combining classification and segmentation; Wang Ran et al.; Journal of Image and Graphics (No. 10); full text *
A survey of perceptual hashing for image tampering detection; Du Ling et al.; Journal of Frontiers of Computer Science and Technology (No. 05); full text *


Similar Documents

Publication Publication Date Title
CN111696021B (en) Image self-adaptive steganalysis system and method based on significance detection
CN111311563A (en) Image tampering detection method based on multi-domain feature fusion
CN110349136A (en) A kind of tampered image detection method based on deep learning
CN112150450B (en) Image tampering detection method and device based on dual-channel U-Net model
CN113920094B (en) Image tampering detection technology based on gradient residual U-shaped convolutional neural network
CN115063373A (en) A social network image tampering localization method based on multi-scale feature intelligent perception
CN112861671B (en) An identification method for deepfake face images and videos
CN112884758A (en) Defective insulator sample generation method and system based on style migration method
CN108596818A (en) A kind of image latent writing analysis method based on multi-task learning convolutional neural networks
Wang et al. HidingGAN: High capacity information hiding with generative adversarial network
Mazumdar et al. Universal image manipulation detection using deep siamese convolutional neural network
CN110348320A (en) A kind of face method for anti-counterfeit based on the fusion of more Damage degrees
CN113537110A (en) False video detection method fusing intra-frame and inter-frame differences
CN112580661A (en) Multi-scale edge detection method under deep supervision
CN117496583A (en) Deep fake face detection positioning method capable of learning local difference
Chen et al. Image splicing localization using residual image and residual-based fully convolutional network
CN114820541B (en) Defect Detection Method Based on Reconstruction Network
CN111882525A (en) Image reproduction detection method based on LBP watermark characteristics and fine-grained identification
CN118135641B (en) Face forgery detection method based on local forgery area detection
CN118154906A (en) Image tampering detection method based on feature similarity and multi-scale edge attention
CN116824430A (en) Deep pseudo video evidence obtaining method based on semi-supervised learning
Kumari et al. Image splicing forgery detection: A review
CN117011255A (en) Image tampering detection method based on feature fusion and local abnormality
CN117853397A (en) Image tampering detection and positioning method and system based on multi-level feature learning
CN113255704B (en) A Pixel Difference Convolution Edge Detection Method Based on Local Binary Pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant