CN115526758A - Hadamard transform screen-shot-resistant watermarking method based on deep learning - Google Patents
Hadamard transform screen-shot-resistant watermarking method based on deep learning
- Publication number: CN115526758A (application number CN202211210100.8A)
- Authority: CN (China)
- Prior art keywords: watermark, attack, image, hadamard, module
- Prior art date
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T1/00—General purpose image data processing
        - G06T1/0021—Image watermarking
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
        - G06N3/08—Learning methods
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T3/00—Geometric image transformations in the plane of the image
        - G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
Description
Technical Field
The present invention belongs to the technical field of watermarking, and in particular relates to a Hadamard transform screen-shot-resistant watermarking method based on deep learning. The method can efficiently extract watermark information from photographs taken surreptitiously of a screen, thereby providing copyright protection for digital media.
Background Art
As a new medium for carrying knowledge and disseminating information, electronic screens have become an indispensable part of daily life. They have profoundly changed the way people read, and screen reading now increasingly replaces paper reading.
At the same time, with the widespread use of smartphones, taking a photograph has become simple and convenient. In the field of information security, stealing sensitive internal information by photographing or video-recording screens has gradually become a major channel through which important information is leaked.
Forensics for leaks caused by screen shooting (i.e., photographing a screen) has therefore become a hot topic. To address such leaks, many departments and companies need to deploy screen-shot traceability systems that collect evidence of leaking behavior, create a deterrent, and prevent leaks from occurring. This involves both active defense beforehand and documented evidence afterwards. Traceability information is hidden on controlled terminals in advance and a warning client is deployed; such a system creates a strong deterrent and effectively reduces the willingness of potential leakers. On the other hand, once confidential information has been leaked, the traceability system can identify the source of the leak and the person responsible, making denial impossible and allowing accountability to be enforced, thereby protecting the copyright information of the owning organization and safeguarding information security. The application of deep learning to watermarking is currently growing rapidly because it can handle the watermark embedding and extraction processes effectively. Existing digital watermarking techniques can address problems such as copyright protection of multimedia data, but designing a digital watermarking algorithm that resists screen-shot attacks remains difficult. Traditional digital watermarking algorithms can resist common attack types such as image-processing operations and geometric transformations, but a screen-shot attack is a complex process: when an image displayed on a screen is photographed, both the image and the watermark undergo a series of digital-to-analog and analog-to-digital conversions, which together act as a powerful combination of attacks. To date there has been relatively little research on digital watermarking algorithms that resist screen-shot attacks. How to improve the screen-shot robustness of a digital watermarking algorithm is therefore a technical problem that urgently needs to be solved.
Owing to the rapid development of machine learning tools and deep networks in various computer vision and image processing fields, applications of convolutional neural networks to watermarking have recently emerged.
Summary of the Invention
The purpose of the present invention is to solve the problem that digital watermarking algorithms in the prior art have difficulty resisting screen-shot attacks, and to propose a Hadamard transform screen-shot-resistant watermarking method based on deep learning. The invention combines a convolutional neural network (CNN) with residual blocks to realize an end-to-end process of embedding and extracting a watermark in the Hadamard domain. In addition, a module that simulates screen-shot attacks is inserted between the watermark embedding layer and the watermark extraction layer, so that the network embeds the watermark robustly and the watermark information can be extracted from leaked photographs, thereby realizing copyright protection.
The technical solution specifically adopted by the present invention is as follows:
A Hadamard transform screen-shot-resistant watermarking method based on deep learning, comprising:
S1. Constructing a watermark model framework composed of a watermark embedding module, an attack simulation module and a watermark extraction module;
The watermark embedding module is formed by cascading a first Hadamard transform layer, a first convolution module and an inverse Hadamard transform layer; its inputs are the watermark to be embedded and the original image into which the watermark is to be embedded. The single-channel original image is first divided into a series of first image blocks of identical size. Each first image block is fed into the first Hadamard transform layer and transformed from the spatial domain to the frequency domain by the Hadamard transform, and the two-dimensional transform results of all first image blocks are concatenated along the channel dimension to obtain a first transformed feature map. The watermark to be embedded is then embedded into the first transformed feature map to obtain a second transformed feature map. The second transformed feature map is fed into the first convolution module for convolution, yielding a third transformed feature map with the same number of channels as the first transformed feature map. The third transformed feature map is fed, channel by channel, into the inverse Hadamard transform layer and transformed from the frequency domain back to the spatial domain; the two-dimensional transform results of the channels are reassembled in the original splitting order to obtain an intermediate image of the same size as the original image. The intermediate image is superimposed on the original image and output as a single-channel watermarked image;
The attack simulation module has multiple built-in attack operations, including screen-shot attacks. Its input is the watermarked image; each attack operation can attack the watermarked image and generate a single-channel attacked watermarked image. The moiré attack among the screen-shot attacks is realized by a moiré attack network obtained by training a U-Net; its input is a watermarked image and its output is the watermarked image with noise added by the moiré attack;
The watermark extraction module is formed by cascading a second Hadamard transform layer and a second convolution module; its input is the single-channel attacked watermarked image. The attacked watermarked image is first divided into a series of second image blocks of identical size. Each second image block is fed into the second Hadamard transform layer and transformed from the spatial domain to the frequency domain by the Hadamard transform, and the two-dimensional transform results of all second image blocks are concatenated along the channel dimension to obtain a fourth transformed feature map. The fourth transformed feature map is then fed into the second convolution module for convolution, yielding the watermark extraction result;
S2. Using the training data, the watermark model framework is iteratively trained by minimizing a total loss function. The watermark extraction module selects a different attack operation in different training rounds to attack the watermarked image output by the watermark embedding module; each training round uses one attack operation, and all training rounds together cover all attack operations. The total loss function is a weighted sum of the biased reciprocals of the normalized cross-correlation loss and the structural similarity index loss;
S3. After the watermark model framework has been trained, the watermark embedding module is used to embed watermarks and output watermarked images; an image from which the watermark needs to be extracted is fed directly into the watermark extraction module for watermark extraction.
Preferably, the total loss function is expressed as follows, where L_w denotes the normalized cross-correlation coefficient between the original embedded watermark and the watermark extraction result, L_I denotes the structural similarity index between the original image and the watermarked image, and C_3 and C_4 are two weak variables used to stabilize the denominators.
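A plausible explicit form of this weighted biased-reciprocal combination (a reconstruction consistent with the description above; the exact placement of the weights and stabilizing constants is an assumption) is:

$$L_t = \frac{\alpha}{L_w + C_3} + \frac{\beta}{L_I + C_4}$$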
Preferably, α and β are both decimals greater than 0 and less than 1, and satisfy α + β = 1.
Preferably, the values of C_1, C_2, C_3 and C_4 are 1×10^-4, 9×10^-4, 1×10^-2 and 3×10^-2, respectively.
Preferably, the watermark to be embedded is embedded into the first transformed feature map by concatenating the watermark with the first transformed feature map along the channel dimension.
Preferably, the first convolution module and the second convolution module each contain five convolutional layers.
Preferably, the screen-shot attacks include perspective transformation, light distortion, JPEG distortion and moiré-pattern attack operations.
Preferably, the attack simulation module also includes non-screen-shot attacks, specifically blurring, cropping, Gaussian noise, mosaic (block) noise, scaling, rotation, sharpening, visible watermarking, display distortion, and brightness and contrast operations on the image.
Preferably, during training of the moiré attack network, a series of watermarked image samples x_i and the corresponding watermarked images y_i with noise added by the moiré attack are used as input samples, and the U-Net is trained by minimizing a loss function L_m, where m is the total number of training samples and f(x_i) is the prediction output by the U-Net for the input watermarked image sample x_i.
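A plausible reconstruction of L_m, assuming a mean-squared-error form between the network prediction f(x_i) and the moiré-degraded target y_i, is:

$$L_m = \frac{1}{m}\sum_{i=1}^{m}\left\| f(x_i) - y_i \right\|_2^2$$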
Preferably, in S3, the image from which the watermark needs to be extracted is a confidential photograph leaked by photographing the screen on which a watermarked image generated by the watermark embedding module is displayed.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The present invention proposes a deep end-to-end robust screen-shot-resistant watermarking method that learns a new watermarking algorithm in the Hadamard transform space. The framework consists of two fully convolutional neural networks with residual blocks that handle the embedding and extraction operations in real time, and the entire deep network is trained end to end for blind, secure watermarking. The proposed method treats the simulated screen-shot attack as a differentiable network layer to facilitate end-to-end training, and enhances the security and robustness of the algorithm by spreading the watermark data through the transform domain into a wider area of the image. Comparison with recent research results shows that the proposed framework has advantages in imperceptibility, robustness and speed.
Brief Description of the Drawings
Figure 1 is the overall flow chart of the watermark model framework.
Figure 2 shows the PSNR of images produced by the embedding algorithm in the watermark embedding module.
Figure 3 shows the differences before and after the watermark is embedded into the original input image: the first row shows the original images, the second row the watermarked images, and the third row the difference maps before and after watermark embedding.
Figure 4 shows some of the Hadamard transform coefficients.
Figure 5 illustrates the screen-shot attacks.
Figure 6 is a schematic diagram of the U-Net structure trained on the moiré dataset.
Figure 7 shows the moiré network test results: the first row contains the clean input images and the second row the output images with the moiré pattern added.
Detailed Description
The present invention is further explained and described below with reference to the accompanying drawings and specific embodiments.
The development of multimedia technology is closely linked to the need to locate leakage sources. The development of multimedia technologies such as audio, image and video has brought new challenges to leakage-source localization, and digital watermarking, as an important means of tracing leaks, has therefore received wide attention. For different multimedia types, audio, image and video watermarking schemes have been proposed in the prior art. With the development of digital technology, however, the transmission of multimedia information has changed dramatically, imposing new requirements on leakage-source tracing. For traditional ways of stealing information, such as scanning, sending commercial documents or copying electronic files, a conventional robust image watermarking scheme designed for image-processing attacks can be used to trace the leakage source. With the popularity of smartphones, however, photography has become the simplest and most effective way to pass on information, which poses a new challenge for leak tracing: anyone with access to a document can leak it by taking a photograph without leaving any record. Moreover, the camera-shooting process cannot easily be monitored or blocked from outside, so designing a watermarking scheme that resists screen-shot attacks is essential to solving this problem. A screen-shot-resistant image watermarking scheme can provide a strong guarantee for leak tracing. The present invention embeds relevant watermark information into the original image; when such documents are surreptitiously photographed, the corresponding information can be extracted from the photographs, so that the leaking device or employee can be identified and the scope of the investigation narrowed, facilitating the accountability process.
A specific implementation of the deep-learning-based Hadamard transform screen-shot-resistant watermarking method proposed by the present invention is described in detail below.
S1. Construct a watermark model framework composed of a watermark embedding module, an attack simulation module and a watermark extraction module.
The watermark model framework adopted by the present invention consists of three parts: a watermark embedding module, an attack simulation module and a watermark extraction module. The watermark embedding module uses an embedding component to embed the watermark into the Hadamard coefficients of the image, modifying the image to obtain its Hadamard-coefficient features. The attack simulation module simulates the distortions produced during screen shooting and by traditional attacks, such as perspective transformation, light distortion, JPEG distortion and moiré patterns. In particular, the attack simulation module contains a moiré attack network designed to simulate the moiré phenomenon, the most common screen-shot attack, so as to improve the ability of the watermarked image to withstand distortion in real screen-shot scenarios. The watermark extraction module extracts the watermark from the captured photograph. The overall flow chart of the framework is shown in Figure 1; the data processing in the three modules is described in detail below.
The watermark embedding module embeds the watermark into the original image while minimizing the perceptual difference between the original image and the watermarked image, improving the imperceptibility and security of the watermarked image. As shown in Figure 1, the watermark embedding module is formed by cascading a first Hadamard transform layer (Hadamard Transform), a first convolution module and an inverse Hadamard transform layer (Inverse Hadamard Transform). Its inputs are the watermark (Watermark) W_o to be embedded and the original image I_o into which the watermark is to be embedded. The single-channel original image I_o is first divided into a series of first image blocks I_p of identical size. Each block I_p is fed into the first Hadamard transform layer and transformed from the spatial domain to the frequency domain; the two-dimensional transform results I'_p of the blocks are concatenated along the channel dimension to obtain the first transformed feature map H_o. The watermark W_o is then embedded into H_o to obtain the second transformed feature map H_1, which is fed into the first convolution module for convolution, yielding the third transformed feature map H_2 with the same number of channels as H_o. H_2 is fed, channel by channel, into the inverse Hadamard transform layer and transformed from the frequency domain back to the spatial domain; the two-dimensional transform results H'_2i of the channels are reassembled in the original splitting order into an intermediate image I' of the same size as the original image. The intermediate image I' is superimposed on the original image and output as the single-channel watermarked image I_w.
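To make the block-wise Hadamard processing concrete, the following is a minimal PyTorch sketch of the transform helpers assumed in the discussion below (the function names, the normalized Sylvester-type Hadamard matrix and the tensor layout are illustrative assumptions; the patent does not publish source code):

```python
import torch


def hadamard_matrix(n: int) -> torch.Tensor:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two),
    normalized so that H @ H.T = I; the normalized transform is then its own inverse."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    h = torch.ones(1, 1)
    while h.shape[0] < n:
        h = torch.cat([torch.cat([h, h], dim=1),
                       torch.cat([h, -h], dim=1)], dim=0)
    return h / (n ** 0.5)


def blockify(img: torch.Tensor, n: int) -> torch.Tensor:
    """Split a (B, 1, M, M) image into (M//n)**2 blocks of size n x n, stacked along channels."""
    b, _, m, _ = img.shape
    blocks = img.unfold(2, n, n).unfold(3, n, n)          # (B, 1, M/n, M/n, n, n)
    return blocks.reshape(b, (m // n) ** 2, n, n)


def unblockify(blocks: torch.Tensor, m: int) -> torch.Tensor:
    """Inverse of blockify: reassemble (B, (M//n)**2, n, n) blocks into a (B, 1, M, M) image."""
    b, _, n, _ = blocks.shape
    g = m // n
    x = blocks.reshape(b, g, g, n, n).permute(0, 1, 3, 2, 4)   # (B, g, n, g, n)
    return x.reshape(b, 1, m, m)


def hadamard_2d(blocks: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Apply the 2D Hadamard transform H X H^T to every n x n block (channel)."""
    return h @ blocks @ h.t()
```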
Note that after the channel-wise inverse Hadamard transform of the third transformed feature map H_2, each channel yields a two-dimensional transform result H'_2i. When these results are assembled into the intermediate image, they must be placed according to the order in which the original image I_o was split into the first image blocks I_p, so that the image is correctly reconstructed.
In the present invention, the watermark to be embedded is embedded into the first transformed feature map by concatenating the two along the channel dimension. Because the watermark image W_o is concatenated with the first transformed feature map, the size of the first image blocks I_p obtained by splitting the original image I_o must match that of W_o for the concatenation to be valid. If the original image I_o has size X×Y and the watermark image W_o has size H×G, the X×Y original image must be split into a series of first image blocks I_p of size H×G. Considering the practical characteristics of images and the requirements of watermark embedding, the image is taken to be square, i.e. X = Y = M, and the watermark image W_o is likewise square, i.e. H = G = N. The specific values of M and N can be adjusted in practice, but M should be divisible by N; for example, M = 512 and N = 32.
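Building on the helper functions sketched above, a minimal embedding-module sketch under the M = 512, N = 32 assumption might look as follows (the layer count, kernel sizes and activation choices are placeholders rather than the patent's exact network):

```python
import torch
import torch.nn as nn


class WatermarkEmbedder(nn.Module):
    """Embeds a 32x32 watermark into a 512x512 single-channel image in the Hadamard domain."""

    def __init__(self, m: int = 512, n: int = 32):
        super().__init__()
        self.m, self.n = m, n
        self.register_buffer("h", hadamard_matrix(n))      # fixed (non-learned) transform matrix
        c = (m // n) ** 2                                   # 256 block-channels
        self.conv = nn.Sequential(                          # placeholder for the first conv module
            nn.Conv2d(c + 1, c, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(c, c, kernel_size=1),
        )

    def forward(self, image: torch.Tensor, watermark: torch.Tensor) -> torch.Tensor:
        # image: (B, 1, 512, 512), watermark: (B, 1, 32, 32)
        blocks = blockify(image, self.n)                    # (B, 256, 32, 32)
        h_o = hadamard_2d(blocks, self.h)                   # first transformed feature map
        h_1 = torch.cat([h_o, watermark], dim=1)            # embed the watermark by channel concat
        h_2 = self.conv(h_1)                                # third transformed feature map (256 ch)
        intermediate = unblockify(hadamard_2d(h_2, self.h), self.m)  # inverse transform + reassembly
        return image + intermediate                         # superimpose on the original image
```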
As shown in Figure 1, the attack simulation module is located between the watermark embedding module and the watermark extraction module. Its role during the training of these two modules is to simulate attack operations and thereby introduce noise into the watermarked image I_w. The attack simulation module has multiple built-in attack operations, including screen-shot attacks; its input is the watermarked image I_w, and each attack operation can attack the watermarked image and generate a single-channel attacked watermarked image. Because the moiré attack is the most common attack type, the moiré attack among the screen-shot attacks is implemented by a moiré attack network obtained by training a U-Net, whose input is a watermarked image and whose output is the watermarked image with noise added by the moiré attack.
The specific structure of the U-Net and its training procedure belong to the prior art. In the present invention, during training of the moiré attack network, a series of watermarked image samples x_i and the corresponding watermarked images y_i with noise added by the moiré attack are used as input samples, and the U-Net is trained by minimizing a loss function L_m, where m is the total number of training samples and f(x_i) is the prediction output by the U-Net for the input watermarked image sample x_i.
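A minimal training-loop sketch for the moiré attack network is shown below (the UNet class and data loader are assumed to exist, and the mean-squared-error form of L_m is an assumption consistent with the description above):

```python
import torch
import torch.nn.functional as F


def train_moire_network(unet: torch.nn.Module, loader, epochs: int = 50, lr: float = 1e-4):
    """Fit a U-Net so that f(x_i) (clean watermarked image) approximates y_i (moire-degraded image)."""
    opt = torch.optim.Adam(unet.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:                  # x: watermarked images, y: their moire-attacked versions
            pred = unet(x)
            loss = F.mse_loss(pred, y)       # assumed L2 form of the L_m loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return unet
```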
Note that the specific forms of the screen-shot attacks can be adjusted in practice; in the present invention they include perspective transformation, light distortion, JPEG distortion and moiré-pattern attack operations. In addition, to enhance robustness, the attack simulation module also includes non-screen-shot attacks, specifically blurring, cropping, Gaussian noise, mosaic (block) noise, scaling, rotation, sharpening, visible watermarking, display distortion, and brightness and contrast operations on the image. Apart from the moiré attack, these attack operations can be implemented by directly calling image processing functions or operations.
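For the non-screen-shot attacks implemented by directly calling image processing functions, a few simple differentiable examples in PyTorch could look like the following sketch (the operators and parameter values are illustrative assumptions, not the exact settings of Table 1):

```python
import torch
import torch.nn.functional as F


def gaussian_noise(img: torch.Tensor, sigma: float = 0.02) -> torch.Tensor:
    """Additive Gaussian noise, clamped back to the valid intensity range."""
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)


def brightness_contrast(img: torch.Tensor, brightness: float = 0.1, contrast: float = 1.1) -> torch.Tensor:
    """Simple affine brightness / contrast perturbation."""
    return (contrast * img + brightness).clamp(0.0, 1.0)


def box_blur(img: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Blur via average pooling with stride 1 and same-size padding."""
    return F.avg_pool2d(img, kernel_size=k, stride=1, padding=k // 2)


def center_crop_and_pad(img: torch.Tensor, keep: float = 0.9) -> torch.Tensor:
    """Keep only the central region of the image and zero out the border (cropping attack)."""
    _, _, h, w = img.shape
    ch, cw = int(h * keep), int(w * keep)
    top, left = (h - ch) // 2, (w - cw) // 2
    out = torch.zeros_like(img)
    out[:, :, top:top + ch, left:left + cw] = img[:, :, top:top + ch, left:left + cw]
    return out
```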
As shown in Figure 1, the watermark extraction module is formed by cascading a second Hadamard transform layer and a second convolution module; its input is the single-channel attacked watermarked image. The attacked watermarked image is first divided into a series of second image blocks of identical size. Each second image block is fed into the second Hadamard transform layer and transformed from the spatial domain to the frequency domain, and the two-dimensional transform results of the blocks are concatenated along the channel dimension to obtain the fourth transformed feature map H_3. H_3 is then fed into the second convolution module for convolution, yielding the watermark extraction result w_e.
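A matching sketch of the extraction module, again reusing the Hadamard helpers above (the channel reduction and output activation are illustrative assumptions):

```python
import torch
import torch.nn as nn


class WatermarkExtractor(nn.Module):
    """Recovers a 32x32 watermark from an attacked 512x512 single-channel image."""

    def __init__(self, m: int = 512, n: int = 32):
        super().__init__()
        self.n = n
        self.register_buffer("h", hadamard_matrix(n))
        c = (m // n) ** 2
        self.conv = nn.Sequential(                   # placeholder for the second conv module
            nn.Conv2d(c, c // 4, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c // 4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),         # single-channel 32x32 watermark estimate
        )

    def forward(self, attacked: torch.Tensor) -> torch.Tensor:
        blocks = blockify(attacked, self.n)          # (B, 256, 32, 32)
        h_3 = hadamard_2d(blocks, self.h)            # fourth transformed feature map
        return torch.sigmoid(self.conv(h_3))         # watermark extraction result in [0, 1]
```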
Note that in the present invention the first convolution module and the second convolution module each contain five convolutional layers; the exact number of layers and the convolution kernel parameters of each layer can of course be optimized in practice.
S2. Once the watermark model framework shown in Figure 1 has been constructed, it is iteratively trained on the training data by minimizing a total loss function. To ensure that the trained model can resist different attacks, the watermark extraction module selects a different attack operation in different training rounds to attack the watermarked image output by the watermark embedding module; each training round uses one attack operation, and all training rounds together cover all attack operations.
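The per-round attack selection described above could be organized as in the following sketch (the optimizer, learning rate and loss-function signature are assumptions consistent with the description):

```python
import itertools
import torch


def train_framework(embedder, extractor, attacks, loader, total_loss_fn,
                    epochs: int = 100, lr: float = 1e-4):
    """Jointly train embedder and extractor; each epoch applies one attack, cycling through all of them."""
    params = list(embedder.parameters()) + list(extractor.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    attack_cycle = itertools.cycle(attacks)           # one attack operation per training round
    for _ in range(epochs):
        attack = next(attack_cycle)
        for image, watermark in loader:
            watermarked = embedder(image, watermark)
            attacked = attack(watermarked)            # attack simulation module acting as a noise layer
            extracted = extractor(attacked)
            loss = total_loss_fn(watermark, extracted, image, watermarked)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return embedder, extractor
```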
The specific total loss function used for the iterative training of the watermark model framework can be optimized in practice. In the present invention, the total loss function is a weighted sum of the biased reciprocals of the normalized cross-correlation loss and the structural similarity index loss, where L_w denotes the normalized cross-correlation loss and L_I denotes the structural similarity index loss. Here w_o(h,g) is the pixel value at coordinate (h,g) of the original embedded watermark of size H×G, and w_e(h,g) is the pixel value at coordinate (h,g) of the watermark extraction result; I_o(x,y) is the pixel value at coordinate (x,y) of the original image I_o of size X×Y, and I_w(x,y) is the pixel value at coordinate (x,y) of the watermarked image I_w; the means and variances of all I_o(x,y) and of all I_w(x,y) are also used; C_1, C_2, C_3 and C_4 are four weak-variable hyperparameters, and α and β are two weight hyperparameters.
In the present invention, α and β are both decimals greater than 0 and less than 1 and satisfy α + β = 1; preferably α = β = 0.5. The values of C_1, C_2, C_3 and C_4 can be chosen as 1×10^-4, 9×10^-4, 1×10^-2 and 3×10^-2, respectively.
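Standard forms of these quantities consistent with the variable definitions above are given below as a reconstruction (the SSIM term additionally uses the covariance between I_o and I_w, and the exact arrangement of the constants is an assumption):

$$L_w = \frac{\sum_{h=1}^{H}\sum_{g=1}^{G} w_o(h,g)\, w_e(h,g)}{\sqrt{\sum_{h=1}^{H}\sum_{g=1}^{G} w_o(h,g)^2}\;\sqrt{\sum_{h=1}^{H}\sum_{g=1}^{G} w_e(h,g)^2}}$$

$$L_I = \frac{\bigl(2\mu_{I_o}\mu_{I_w} + C_1\bigr)\bigl(2\sigma_{I_o I_w} + C_2\bigr)}{\bigl(\mu_{I_o}^2 + \mu_{I_w}^2 + C_1\bigr)\bigl(\sigma_{I_o}^2 + \sigma_{I_w}^2 + C_2\bigr)}$$

$$L_t = \frac{\alpha}{L_w + C_3} + \frac{\beta}{L_I + C_4}$$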
S3. After the watermark model framework has been trained, the watermark embedding module is used to embed watermarks and output watermarked images; an image from which the watermark needs to be extracted is fed directly into the watermark extraction module for watermark extraction.
Note, however, that in S3 the attack simulation module of Figure 1 is no longer needed, because in a real application the attack noise is introduced by the actual process of photographing the confidential content on a screen. Therefore, in S3 the image fed into the watermark extraction module is a confidential photograph produced when a watermarked image generated by the watermark embedding module is leaked by photographing the screen. Even when such confidential material is leaked through surreptitious photography, the corresponding information can still be extracted from the photograph, so that the leaking device or employee can be identified.
The method of S1 to S3 above is applied to a specific example below to demonstrate the technical effects it can achieve.
Example
The specific procedure of this example follows S1 to S3 above and is not repeated in full; the implementation details and technical effects are presented below.
First, a watermark model framework is constructed. The framework used in this example consists of three parts: a watermark embedding module, an attack simulation module and a watermark extraction module. The data processing in the three modules is as described above and is not repeated here.
(1) Watermark embedding module
The watermark embedding module embeds the watermark into the transform coefficients of the image. This example uses a CNN architecture that takes as input an original image in the form of a single-channel 512×512-pixel grayscale image and outputs a single-channel watermarked image. The input watermark is represented as a two-dimensional image of size 32×32, so the original image is split into 16×16 image blocks of size 32×32. The Hadamard-transformed image blocks and the watermark image are concatenated along the channel direction into a (16×16)+1-channel tensor, which is fed into the first convolution module for convolution; the result is then passed, channel by channel, through the inverse Hadamard transform and reassembled into the watermarked image. In this example, the Hadamard transform layer is implemented with convolution kernels of size 1×1, and the first convolution module contains five convolutional layers with kernel sizes 1×1, 2×2, 2×2, 2×2 and 2×2. Figure 2 uses PSNR to show the imperceptibility of the embedding algorithm; the imperceptibility is satisfactory, with an average PSNR of 36.62. Figure 3 shows the differences before and after the watermark is embedded into the original input image; from a subjective point of view, the embedded watermark is well hidden. Since the algorithm embeds the watermark in the Hadamard transform domain, Figure 4 shows some of the Hadamard transform coefficients; it can be seen that the absolute values of the Hadamard transform coefficients are equal.
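The PSNR used to report imperceptibility can be computed as in the following sketch (the 8-bit peak value of 255 is an assumption):

```python
import torch


def psnr(original: torch.Tensor, watermarked: torch.Tensor, peak: float = 255.0) -> torch.Tensor:
    """Peak signal-to-noise ratio (dB) between an original image and its watermarked version."""
    mse = torch.mean((original - watermarked) ** 2)
    return 10.0 * torch.log10(peak ** 2 / mse)
```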
(2) Attack simulation module
Through a theoretical analysis of screen-shot attacks, the attacks arising during screen shooting can be divided into perspective transformation, optical distortion, brightness change, JPEG compression and moiré patterns; the results of the different attacks are visualized in Figure 5. The moiré attack is the most common attack type, so the present invention builds a U-Net to train and test the moiré attack; its structure is shown in Figure 6. The loss function of the moiré training network is as described above. Figure 7 shows the test results obtained with the trained U-Net; it is easy to see that the clean input images automatically acquire a moiré pattern after passing through the U-Net, realizing a simulation of the moiré attack produced during screen shooting. The trained network is inserted between the watermark embedding module and the watermark extraction module as a noise layer to increase the robustness of the screen-shot algorithm and the diversity of attacks. In addition, mixed attacks combining traditional attacks and screen-shot attacks occur in many screen-shooting scenarios; this example simulates all of the above attacks in the attack simulation layer, as shown in Table 1. The screen-shot attacks include perspective transformation (Perspective), light distortion (Light), JPEG distortion (JPEG) and moiré pattern (Moiré) operations; the non-screen-shot attacks include blurring (Blurring), cropping (Cropping), Gaussian noise, mosaic noise (Block noise), scaling (Scaling), rotation (Rotation), sharpening (Sharpening), visible watermarking (Visible watermark), display distortion (Display distortion), and brightness and contrast operations.
Table 1. Different attack parameters of the attack simulation layer
(3) Watermark extraction module
The watermark extraction module extracts the watermark from a watermarked image that has been subjected to a screen-shot attack. Because the watermark is embedded in the Hadamard domain, the watermarked image is first transformed into the Hadamard domain before extraction; the watermark is then extracted by the Hadamard transform layer and the series of convolutional layers in the second convolution module. In this example, the Hadamard transform layer is implemented with convolution kernels of size 1×1, and the second convolution module contains five convolutional layers with kernel sizes 1×1, 2×2, 2×2, 2×2 and 1×1.
The algorithm must maximize the quality of the watermarked image while minimizing the watermark extraction error rate, so as to balance imperceptibility and robustness. Accordingly, this example uses the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to measure imperceptibility, and the normalized cross-correlation (NC) and bit error rate (BER) to describe robustness. To balance robustness and imperceptibility, a loss function based on the coupling of NC and SSIM is adopted, described as follows:
Formula (1) describes the mean squared error, where I_o(x,y) is the original image of size X×Y and I_w(x,y) is the watermarked image after embedding. Formula (2) is the expression for PSNR. Formula (3) is the expression for NC, where w_o(h,g) is the original embedded watermark of size H×G and w_e(h,g) is the extracted watermark. Formula (4) describes SSIM, using the means and variances of I_o(x,y) and I_w(x,y), with C_1 and C_2 as two weak variables used to stabilize the denominator. Formula (5) is the total loss function, composed of the NC and SSIM terms, with α = β = 0.5; the values of C_1 and C_2 are 1×10^-4 and 9×10^-4, and those of C_3 and C_4 are 1×10^-2 and 3×10^-2, respectively.
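Standard forms of the MSE and PSNR referred to as formulas (1) and (2) are reproduced below as a reconstruction (formulas (3) to (5) correspond to the NC, SSIM and total-loss expressions given earlier):

$$MSE = \frac{1}{XY}\sum_{x=1}^{X}\sum_{y=1}^{Y}\bigl(I_o(x,y) - I_w(x,y)\bigr)^2$$

$$PSNR = 10\log_{10}\frac{255^2}{MSE}$$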
The watermark model framework is trained iteratively on the training dataset by minimizing the total loss function L_t, and the watermark extraction module selects a different attack operation in different training rounds to attack the watermarked image output by the watermark embedding module; each round uses one attack operation and all rounds together cover all attack operations. Training is complete when the maximum number of iterations is reached.
After training, the watermark model framework only needs to be tested. During testing, the watermark embedding module is used to embed watermarks and output watermarked images; the watermark model framework is then used to introduce noise into the watermarked images generated by the embedding module through different forms of attack, and the images are fed directly into the watermark extraction module for watermark extraction, in order to test robustness against attacks. In this example, several attack types were tested, and the BER metric is used to show the extraction ability of the network, a smaller BER indicating better extraction. The test results are shown in Table 2:
Table 2. Demonstration of watermark extraction capability
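The BER values reported in Table 2 can be computed as in this sketch (binarizing both watermarks at a 0.5 threshold is an assumption):

```python
import torch


def bit_error_rate(original_wm: torch.Tensor, extracted_wm: torch.Tensor) -> torch.Tensor:
    """Fraction of watermark bits that differ after binarizing both maps at 0.5."""
    o = (original_wm > 0.5).float()
    e = (extracted_wm > 0.5).float()
    return torch.mean((o != e).float())
```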
It can be seen that the algorithm framework proposed by the present invention has advantages in imperceptibility, robustness and speed against a variety of screen-shot and non-screen-shot attacks.
The embodiment described above is only a preferred solution of the present invention and is not intended to limit the present invention. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, all technical solutions obtained by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211210100.8A CN115526758A (en) | 2022-09-30 | 2022-09-30 | Hadamard transform screen-shot-resistant watermarking method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211210100.8A CN115526758A (en) | 2022-09-30 | 2022-09-30 | Hadamard transform screen-shot-resistant watermarking method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115526758A true CN115526758A (en) | 2022-12-27 |
Family
ID=84701105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211210100.8A Pending CN115526758A (en) | 2022-09-30 | 2022-09-30 | Hadamard transform screen-shot-resistant watermarking method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115526758A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116308986A (en) * | 2023-05-24 | 2023-06-23 | Qilu University of Technology (Shandong Academy of Sciences) | Concealed Watermarking Attack Algorithm Based on Wavelet Transform and Attention Mechanism |
CN116308986B (en) * | 2023-05-24 | 2023-08-04 | Qilu University of Technology (Shandong Academy of Sciences) | Concealed Watermarking Attack Algorithm Based on Wavelet Transform and Attention Mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fang et al. | PIMoG: An effective screen-shooting noise-layer simulation for deep-learning-based watermarking network | |
Gallagher et al. | Image authentication by detecting traces of demosaicing | |
Roy et al. | A hybrid domain color image watermarking based on DWT–SVD | |
WO2022127374A1 (en) | Color image steganography method based on convolutional neural network | |
CN108171689B (en) | Appraisal method, device and storage medium for retaking display screen image | |
Ge et al. | A screen‐shooting resilient document image watermarking scheme using deep neural network | |
Cao et al. | Screen-shooting resistant image watermarking based on lightweight neural network in frequency domain | |
He et al. | Robust blind video watermarking against geometric deformations and online video sharing platform processing | |
Lu et al. | Wavelet-based CNN for robust and high-capacity image watermarking | |
Fang et al. | Denol: a few-shot-sample-based decoupling noise layer for cross-channel watermarking robustness | |
Woo | Digital image watermarking methods for copyright protection and authentication | |
Juarez-Sandoval et al. | Digital image ownership authentication via camouflaged unseen-visible watermarking | |
CN109886856A (en) | A Robust Digital Watermarking Method for Screen Shooting | |
CN115526758A (en) | Hadamard transform screen-shot-resistant watermarking method based on deep learning | |
Dzhanashia et al. | Neural networks-based data hiding in digital images: overview | |
CN114648436A (en) | Screen shot resistant text image watermark embedding and extracting method based on deep learning | |
CN113628090A (en) | Anti-interference message steganography and extraction method and system, computer equipment and terminal | |
Zhang et al. | A convolutional neural network-based blind robust image watermarking approach exploiting the frequency domain | |
Sandoval Orozco et al. | Image source acquisition identification of mobile devices based on the use of features | |
Wang et al. | Splicing image and its localization: a survey | |
Mohamed et al. | A survey on image data hiding techniques | |
Yakushev et al. | Docmarking: Real-time screen-cam robust document image watermarking | |
Wang et al. | Print-cam robust image watermarking based on hybrid domain | |
Hashem et al. | Passive aproaches for detecting image tampering: a review | |
CN109242749B (en) | Blind digital image watermarking method resistant to printing and rephotography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||