CN114255151B - High-resolution image robust digital watermarking method based on key point detection and deep learning - Google Patents


Info

Publication number
CN114255151B
CN114255151B
Authority
CN
China
Prior art keywords
image
watermark
network
embedded
hidden
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011022189.6A
Other languages
Chinese (zh)
Other versions
CN114255151A (en)
Inventor
竺乐庆
莫凌强
Current Assignee
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202011022189.6A priority Critical patent/CN114255151B/en
Publication of CN114255151A publication Critical patent/CN114255151A/en
Application granted granted Critical
Publication of CN114255151B publication Critical patent/CN114255151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/0021: Image watermarking
    • G06T 1/005: Robust watermarking, e.g. average attack or collusion attack resistant
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding


Abstract

The invention discloses a robust blind watermarking method for high-resolution images based on key point detection and deep learning. First, several fixed-size embedding regions are determined in the scale-normalized carrier image and then mapped back to the original image. The watermark is hidden within the inscribed square of each embedding region's inscribed circle, so that it remains inside the determined embedding region even after the image undergoes geometric transformations. Watermark embedding and extraction are performed with an improved ResNet, in which all pooling layers are removed, dilated convolutions are introduced into the skip connections of the residual modules, and a multi-scale fusion structure is adopted. Training uses multi-scale cross training and curriculum learning: attacks are applied to the watermarked image during training, increasing gradually in both number and strength. The method shows strong robustness to common signal processing operations and geometric transformation attacks.

Description

High-resolution image robust digital watermarking method based on key point detection and deep learning
Technical Field
The invention belongs to the technical field of image security authentication, and particularly relates to a high-resolution image digital watermark embedding and extracting method based on key point detection and deep learning.
Background
With the development of network technology and media recording devices, more and more multimedia information is distributed and shared through networks and storage devices. Unauthorized individuals or organizations can easily copy, modify, or forward such information, creating copyright infringement problems; a convenient, fast, and viable technique for protecting the copyright of digital media is therefore urgently needed to maintain a healthy production environment for digital media creators. Digital watermarking is a copyright protection technique that hides copyright information in digital media; the information can be extracted when needed to detect and authenticate whether the use of the media is legal. A visible watermark affects the content of the image itself, and its conspicuousness makes it vulnerable to malicious tampering, whereas a transparent hidden watermark can resist copyright infringement while preserving the content integrity of the media. Early spatial-domain watermarking techniques such as least significant bit (LSB) modification can simply and effectively hide data in an image, but are sensitive to image processing such as JPEG compression, filtering, and noise addition. Transform-domain watermarking techniques hide watermarks in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), or Discrete Wavelet Transform (DWT) domains and are more robust to JPEG compression and other signal processing operations; however, most transform-domain techniques are based on blocking and pseudo-random scrambling and remain sensitive to geometric attacks and position changes.
Since 2012, deep learning based on Convolutional Neural Networks (CNN) has developed rapidly, outperforming traditional algorithms in computer vision tasks such as image recognition, object detection, and segmentation, and researchers have also applied CNNs to digital watermarking. Kandi et al. proposed a learning-based auto-encoder CNN for non-blind extraction of image watermarks. Mun et al. proposed a robust blind watermarking method called WMNet, whose Structural Similarity Index Measure (SSIM) and Normalized Correlation (NC) are superior to the Quaternion Discrete Fourier Transform (QDFT) and DCT methods. Li et al. embedded the watermark in the DCT domain and extracted it with a CNN, but the robustness of the method was not reported. Ahmadi et al. proposed a blind watermarking model called ReDMark, in which a 1024-bit watermark can be embedded in a 512×512 gray-scale image; ReDMark is robust to JPEG compression, noise addition, filtering, and scaling followed by restoration, but not to geometric transformations such as rotation, scaling, and translation. With the development of image acquisition technology, image sizes and resolutions keep growing, and a practical watermarking method should be applicable to high-resolution images. However, due to limited computational resources, most current deep learning frameworks cannot be used directly for steganography in high-resolution images. The invention fully exploits the computing power of the deep learning framework and integrates it seamlessly with traditional image processing techniques, realizing a robust and efficient digital watermarking method for high-resolution images.
Disclosure of Invention
The invention designs a multi-region hidden image watermark embedding method based on key point detection, in which a deep learning model embeds and extracts transparent watermarks in selected image regions. The designed method supports blind extraction and resists signal processing operations such as JPEG compression, salt-and-pepper noise, Gaussian filtering, mean filtering, median filtering, and contrast adjustment, as well as geometric transformation attacks such as rotation, scaling, translation, and cropping, showing very good robustness. The method specifically comprises the following steps:
(1) Obtain a sufficient number of sample images and normalize them. Taking an input size of 512 × 512 as an example: sample images larger than this size are randomly cropped, while smaller ones are proportionally enlarged and then cropped. For watermark images, first convert to grayscale, then binarize with a global threshold computed by Otsu's method, then apply a closing operation (dilation followed by erosion) to the binary image, and finally remove connected regions whose area is too small, obtaining a set of binary watermark images.
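The watermark preprocessing above (Otsu thresholding followed by a morphological closing) can be sketched without any image library. The function names and the 3×3 structuring element are illustrative choices, and the small-connected-region removal step is omitted for brevity:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold for a uint8 grayscale array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                  # class-0 probability
    mu = np.cumsum(probs * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    # threshold that maximizes the between-class variance
    return int(np.argmax(np.nan_to_num(sigma_b)))

def dilate(b):
    """3x3 binary dilation of a 0/1 uint8 array, via padded shifts."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

def erode(b):
    """3x3 binary erosion, expressed as the dual of dilation."""
    return 1 - dilate(1 - b)

def prepare_watermark(gray):
    """Binarize with Otsu, then close (dilate first, then erode)."""
    binary = (gray > otsu_threshold(gray)).astype(np.uint8)
    return erode(dilate(binary))
```

The closing fills small holes and gaps in the binary watermark before the (omitted) small-component filtering.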
(2) Jointly train the image watermark hiding network and the extraction network.
The constructed deep learning watermark model consists of two parts: a hiding network that performs watermark embedding and an extraction network that extracts the watermark from the watermarked image. Both use a modified ResNet as the backbone; the residual connections of ResNet speed up convergence during training. The modified ResNet removes all pooling layers, introduces dilated (atrous) convolution into the skip connections of the residual modules, and draws feature maps of different scales from different depths of the network for multi-scale fusion, so as to retain global features and local detail features simultaneously. The adopted ResNet comprises 12 residual modules; the input is first processed by a convolution layer and then enters the deep network formed by the 12 residual modules in series. Each residual module consists of 2 convolution layers, each followed by batch normalization and a ReLU activation, and no residual module contains a pooling layer. The 9th and 10th residual modules use dilated convolution with a coefficient of 3 in their skip connections, the 8th and 11th use a coefficient of 2, and the skip connections of the other residual modules use ordinary convolution layers. After the 3rd and 6th residual modules the feature map is downsampled by a factor of 2 to obtain features at different scales; the outputs of the 5th, 9th, and last residual modules are led out and upsampled by deconvolution with magnification factors of 2, 4, and 4 respectively. Once the three branch feature maps match the input image size, they are channel-concatenated to realize multi-scale fusion, and the concatenated feature map passes through a convolution layer to obtain the watermarked image (hiding network) or the watermark (extraction network).
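As a rough illustration of the modified residual module, the single-channel NumPy sketch below puts a dilated convolution on the skip connection and two plain convolutions (with ReLU) on the main path. Batch normalization, multi-channel filters, and the multi-scale fusion branches are omitted, and all function names are ours, not the patent's:

```python
import numpy as np

def conv2d(x, k, dilation=1):
    """'Same'-padded single-channel 2-D convolution with optional dilation."""
    kh, kw = k.shape
    ph = (dilation * (kh - 1)) // 2    # padding for the dilated kernel extent
    pw = (dilation * (kw - 1)) // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * dilation:i * dilation + x.shape[0],
                                j * dilation:j * dilation + x.shape[1]]
    return out

def residual_block(x, w_main1, w_main2, w_skip, dilation=1):
    """Main path: conv -> ReLU -> conv -> ReLU (BN omitted in this sketch);
    skip path: a single dilated convolution, added to the main path."""
    h = np.maximum(conv2d(x, w_main1), 0.0)
    h = np.maximum(conv2d(h, w_main2), 0.0)
    return h + conv2d(x, w_skip, dilation=dilation)
```

A dilation coefficient of 2 or 3 enlarges the receptive field of the skip path without any pooling, which is the point of removing the pooling layers.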
The watermark is hidden in the luminance channel of the carrier image. Before the carrier image is input into the hiding network, it is converted from RGB to YCrCb space; the Y channel is then channel-concatenated with the binary watermark to form the 2-channel input of the hiding network. The output is a single watermarked Y channel, which is recombined with the CrCb channels and converted back to RGB to obtain the color carrier image containing the watermark. The input and output of the extraction network are both single channels: the watermarked Y channel and the extracted binary watermark image.
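The channel handling can be illustrated with a plain NumPy RGB/YCrCb conversion (the common BT.601-style constants are an assumption; the patent does not specify them) and the 2-channel stacking of the Y channel with the binary watermark:

```python
import numpy as np

def rgb_to_ycrcb(img):
    """RGB -> YCrCb with the usual BT.601-style constants (assumed here)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def ycrcb_to_rgb(img):
    """Approximate inverse of rgb_to_ycrcb."""
    y, cr, cb = img[..., 0], img[..., 1], img[..., 2]
    r = y + 1.403 * (cr - 128.0)
    b = y + 1.773 * (cb - 128.0)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def hiding_input(carrier_rgb, watermark):
    """2-channel hiding-network input: carrier Y channel + binary watermark."""
    y = rgb_to_ycrcb(carrier_rgb)[..., 0]
    return np.stack([y, watermark], axis=0)   # shape (2, H, W)
```

After the network outputs a watermarked Y channel, it would replace `y` and the result would be converted back with `ycrcb_to_rgb`.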
Network parameters are optimized with the Adam algorithm. The loss function of the extraction network is the binary cross-entropy loss; the loss of the hiding network comprises two parts: the regression loss of the hiding network itself and the loss propagated from the extraction network. The regression loss combines the chi-square distance, which reflects the difference between the pixel-value distributions of the two images, and the mean square error, which reflects the global statistical difference of their pixel values; training with both makes the generated watermarked image highly transparent.
To make the network adaptable to inputs and outputs of different scales, a multi-scale cross training strategy is adopted: each batch of training data randomly selects a scale, such as 128×128, 256×256, 320×320, or 512×512, and the batch is scaled to that size before being fed to the network. Because the network is fully convolutional, no structural modification is needed when the input/output scale changes.
To improve robustness to signal processing operations such as JPEG compression, noise addition, filtering, and contrast adjustment, and to geometric transformation attacks such as scaling, translation, and rotation, different types of attacks are applied to the generated watermarked images during training in a curriculum learning manner: the probability of applying an attack increases from low to high, and the attack strength also increases gradually. With this training strategy, the hiding and extraction networks learn to resist the signal processing operations and geometric transformations applied to the watermarked image, giving the watermark model good robustness.
(3) Select the regions in which the watermark is embedded in the high-resolution image.
Convert the high-resolution image into a grayscale image and scale it to a fixed area, e.g. 1024 × 1024, keeping the aspect ratio while scaling, and record the scale factor as s. Acquire the most salient key points with a key point detection algorithm such as SURF or SIFT, sort them by response from large to small, and check in turn the fixed-size square region R_i (side length a = 128) centred on key point i: if the region falls entirely within the image and does not overlap any previously selected embedding region, add R_i to the embedding region list RList; otherwise discard key point i and region R_i. Repeat until the number of regions in RList reaches m (typically m ≥ 4). Finally, multiply the vertex coordinates of the m determined non-overlapping square regions by the scale factor s to map them back to the original image, so the side length of each embedding region is a × s. Since the normalized image area and the embedding region side length on the normalized image are fixed, the side length of the embedding region mapped onto the original image is proportional to the square root of the original image area, and the embedding region area is proportional to the original image area.
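Given key points already detected and scored (e.g. by SURF or SIFT), the selection loop itself might look like the following sketch; the `(x, y, response)` tuple format and the function names are assumptions:

```python
def overlaps(r1, r2, a):
    """Two axis-aligned a-by-a squares overlap iff both offsets are < a."""
    return abs(r1[0] - r2[0]) < a and abs(r1[1] - r2[1]) < a

def select_regions(keypoints, img_w, img_h, a=128, m=4):
    """keypoints: iterable of (x, y, response) on the scale-normalized image.
    Returns top-left corners of up to m non-overlapping a-by-a squares,
    each centred on one of the strongest admissible key points."""
    regions = []
    for x, y, _ in sorted(keypoints, key=lambda kp: -kp[2]):
        x0, y0 = int(round(x)) - a // 2, int(round(y)) - a // 2
        if x0 < 0 or y0 < 0 or x0 + a > img_w or y0 + a > img_h:
            continue                      # must fall entirely inside the image
        if any(overlaps((x0, y0), r, a) for r in regions):
            continue                      # must not overlap a chosen region
        regions.append((x0, y0))
        if len(regions) == m:
            break
    return regions

def map_to_original(regions, s, a=128):
    """Map region corners back to the original image by scale factor s."""
    return [(int(x0 * s), int(y0 * s), int(a * s)) for x0, y0 in regions]
```

The same routine is reused at extraction time, which is why a stable, response-ranked key point detector matters.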
(4) Normalize the binary watermark image.
On the one hand, since the size of the embedding region is positively correlated with the image size, the size of the watermark to be embedded changes with the image size, and the watermark image is scaled accordingly to match the embedding region. On the other hand, when the high-resolution watermarked image suffers geometric transformation attacks such as rotation, translation, and cropping, the embedding region determined at extraction time can deviate somewhat in position and size from the region used at embedding time. The invention scales the binary watermark image to the same size as the inscribed square of the embedding region's inscribed circle, i.e. an area equal to 1/2 of the embedding region and a side length equal to the embedding region's side length divided by √2. The four sides of the watermark image are then zero-padded so that the expanded watermark image has the same size as the embedding region, with the original watermark at the centre of the normalized watermark image. In this way the original watermark lies within the inscribed circle of the embedding region, and therefore stays inside the embedding region no matter how the image is rotated.
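A minimal sketch of this normalization, assuming nearest-neighbour resizing (which keeps the watermark binary) to the inscribed square of side region_side/√2 followed by centred zero-padding:

```python
import numpy as np

def normalize_watermark(wm, region_side):
    """Scale a square binary watermark to the inscribed square of the
    region's inscribed circle (side = region_side / sqrt(2)), then
    zero-pad it to the full region size, keeping it centred."""
    inner = int(region_side / np.sqrt(2))
    idx = (np.arange(inner) * wm.shape[0] / inner).astype(int)
    scaled = wm[idx][:, idx]          # nearest-neighbour resize stays binary
    out = np.zeros((region_side, region_side), dtype=wm.dtype)
    off = (region_side - inner) // 2
    out[off:off + inner, off:off + inner] = scaled
    return out
```

Every pixel of the scaled watermark then lies within distance region_side/2 of the region centre, so a rotation about that centre cannot move it outside the region.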
(5) Perform watermark hiding in each embedding region with the hiding network trained in step (2).
Cut out each embedding region and convert it into the YCrCb color space; channel-concatenate its Y channel with the watermark image normalized in step (4) and input the pair into the trained hiding network, which hides the watermark in the Y channel and outputs a Y channel containing the transparent watermark. Combine the watermarked Y channel with the original CrCb channels and convert back to RGB to obtain the watermarked embedding region. Hide the watermark in the same way in all embedding regions, then replace the corresponding regions of the original image with the watermarked regions to obtain a high-resolution image with hidden watermarks.
(6) Extract the watermark from the high-resolution watermarked image.
At extraction time, the embedding regions are determined as in step (3), and the Y channel of each embedding region is input into the extraction network to extract the binary watermark hidden there. Because the image may suffer signal processing operations or geometric transformations during transmission, the positions or intensities of the detected key points may fluctuate, so the response-based ordering of key points may not match the ordering used at embedding time. Multi-region embedding guarantees the robustness of the watermark algorithm: copyright authentication succeeds as long as the hidden watermark can be extracted from any one of the embedding regions.
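The multi-region check can be sketched as a simple bit-error-rate test over the per-region extraction results; the 0.2 threshold is an assumed value, not specified by the patent:

```python
import numpy as np

def verify(extracted_regions, wm, ber_threshold=0.2):
    """Authentication succeeds if the watermark extracted from ANY embedding
    region matches the reference closely enough (assumed BER threshold)."""
    for pred in extracted_regions:
        bits = (np.asarray(pred) > 0.5).astype(int)   # binarize network output
        if np.mean(bits != wm) < ber_threshold:
            return True                               # one good region suffices
    return False
```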
Drawings
Fig. 1 is a flowchart of a high resolution image digital watermarking method according to an embodiment of the present invention;
FIG. 2 is an overall structure of a deep learning watermark model according to an embodiment of the invention;
FIG. 3 is a schematic diagram showing a change in position of an embedded watermark before and after image rotation according to an embodiment of the present invention;
fig. 4 is a specific network structure of a hidden network and an extraction network according to an embodiment of the present invention.
Detailed Description
In order to describe the present invention more specifically, the following detailed description of the technical solution of the present invention is given with reference to the accompanying drawings and the specific embodiments, and a flow chart of an embodiment of the method is shown in fig. 1. The invention relates to a high-resolution image robust digital watermarking method based on key point detection and deep learning, which comprises the following steps:
(1) Step 100, obtain a sufficient number of sample images; the images can be downloaded from the Internet or taken by the user;
(2) Step 101, normalize the sample images. Taking an input size of 512 × 512 as an example, images larger than this size are randomly cropped, and smaller ones are proportionally enlarged and then cropped;
(3) Step 102, randomly divide the sample images into two halves, one used as carrier images and the other as watermark images;
(4) Step 103, convert the images used as watermarks to grayscale, binarize them with a threshold obtained by the Otsu algorithm, apply a closing operation (dilation followed by erosion), and remove overly small connected regions for denoising, finally obtaining the set of binary images used as watermarks during training;
(5) Step 104, jointly train the hiding network and the extraction network. The deep learning watermark model constructed by an embodiment of the method is shown in fig. 2: the carrier image (200) is first converted into the YCrCb color space, its Y channel (202) is channel-concatenated with the watermark (203) and input into the hiding network (205), which outputs a watermarked Y channel (207); the watermarked Y channel (207) is combined with the original CrCb channels (201) to obtain a color watermarked carrier image (206). During training, attacks are applied to the watermarked Y channel (207) in a curriculum learning manner; the attacked Y channel (208) serves as the input of the extraction network (209), whose output is the extracted watermark (210). The goal of training is to make the watermarked carrier image (206) visually very close to the carrier image (200) while making the extracted watermark (210) as close as possible to the original watermark (203). To make the network adaptable to inputs and outputs of different sizes, a multi-scale cross training strategy is adopted: each batch of training data is randomly scaled to one of the scales 128×128, 256×256, 320×320, 512×512, etc. Meanwhile, so that the network can still extract the watermark after signal processing operations such as JPEG compression, noise addition, and filtering, or geometric transformation attacks such as rotation, scaling, and translation, curriculum learning is used during training: the various attacks are added gradually, and their strength is increased step by step.
The specific operation is as follows: for each batch of training data, draw a random real number p ∈ [0,1] and compare it with a threshold t ∈ [0,1]; if p > t, apply a random attack to the watermarked images generated during that batch, otherwise do not attack. The threshold t decreases after each training epoch, and the strength of each attack is increased appropriately. When the network loss no longer decreases noticeably, the network has converged and training ends.
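The batch-level decision described above might be sketched as follows; the initial threshold t0, the per-epoch decay, and the strength ramp are assumed hyper-parameters, not values from the patent:

```python
import random

def attack_schedule(epoch, t0=0.9, decay=0.1, rng=None):
    """Decide whether and how to attack one batch of watermarked images.
    The no-attack threshold t shrinks each epoch, so attacks go from few
    to many, while the attack strength grows from weak to strong."""
    rng = rng or random.Random()
    t = max(0.0, t0 - decay * epoch)
    if rng.random() <= t:                 # p <= t: leave this batch clean
        return None
    strength = min(1.0, 0.2 + 0.1 * epoch)
    kind = rng.choice(["jpeg", "salt_pepper", "gaussian_filter", "mean_filter",
                       "median_filter", "contrast", "rotate", "scale",
                       "translate", "crop"])
    return kind, strength
```

Early epochs thus attack roughly 10% of batches at low strength; after t reaches 0 every batch is attacked at full strength.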
(6) Step 105, select several watermark embedding regions in the high-resolution image based on key point detection and scale normalization, specifically as follows. First normalize the high-resolution image to an area of a specified size, keeping the aspect ratio unchanged while scaling, and record the scale factor as s. Convert the normalized image into a grayscale image, extract key points with a key point detection algorithm such as SURF or SIFT, and sort them from strong to weak by key point intensity. Check in turn the square region centred on each key point with fixed side length a: if the square falls entirely within the image and does not overlap any previously selected embedding region, take it as a candidate region for embedding the watermark; otherwise discard it and check the next strongest key point. Repeat until m non-overlapping embedding regions are found. Finally, map the m embedding regions back to the original high-resolution image, i.e. multiply the vertex coordinates of each region by the scale factor s, so that the side length of each embedding region becomes a × s;
(7) Step 106, normalize the hidden binary watermark image so that the watermark is hidden at the proper position in the embedding region. First convert the watermark into a square binary image, zero-padding it if the original watermark is not square; then scale the square watermark to the same size as the inscribed square (302) of the inscribed circle (301) of the embedding region (300) determined in step 105, and zero-pad the edge region to the same size as the embedding region (300). On the one hand this matches the watermark size to the embedding region size: step 105 shows that the embedding region size is proportional to the high-resolution image size, so the embedding region, and hence the watermark, must change with the image size. On the other hand, under geometric transformations such as rotation and translation, the embedding region (303) obtained from the same key point at extraction time differs slightly from the region (300) determined at embedding time; normalizing the binary watermark in this way ensures the original watermark (305) still falls within the embedding region.
(8) Step 107, hide the watermark generated in step 106 in the embedding regions selected in step 105 using the hiding network trained in step 104. The Y channel (202) of each selected embedding region is channel-concatenated with the watermark (203) and input into the hiding network (205); the watermarked Y channel (207) output by the hiding network (205) is merged with the CrCb channels (201) to form a watermarked image region (206). After the watermark (203) has been hidden with the hiding network (205) in all m embedding regions, the corresponding regions in the original image are replaced to obtain the high-resolution watermarked image.
(9) Step 108, extract the watermark from the high-resolution watermarked image. First select the m embedding regions with step 105, then extract the watermark region by region: input the Y channel (208) of each determined embedding region into the extraction network (209) trained in step 104 and check whether its output (210) is highly consistent with the pre-hidden watermark (203). If so, extraction succeeds; if not, continue with the next embedding region. If no region yields the pre-hidden watermark, verification fails; as soon as one of them yields the watermark, verification succeeds.
Both the hiding network and the extraction network of an embodiment of the present invention adopt the structure shown in fig. 4. Both are built with the residual network ResNet as backbone and comprise 12 residual modules (401). The input is first processed by a convolution layer (400) and then enters the deep network formed by the 12 residual modules in series; each residual module consists of 2 convolution layers, each followed by batch normalization and a ReLU activation (402), and no residual module contains a pooling layer. The skip connections of the residual modules are processed with dilated (atrous) convolution (403): the 9th (404) and 10th (405) residual modules use dilated convolution with a coefficient of 3, the 8th (406) and 11th (407) use a coefficient of 2, and the skip connections of the other residual modules are processed with an ordinary convolution layer, i.e. a dilation coefficient of 1.
The hiding network and the extraction network adopt a multi-scale fusion strategy: the 3rd and 6th residual modules downsample (408) the feature map with a coefficient of 2 to obtain features at different scales; the outputs of the 5th, 9th, and last residual modules are led out and upsampled by deconvolution with magnification factors of 2 (409), 4 (410), and 4 (411); once the three branch feature maps match the input image size they are channel-concatenated to realize multi-scale fusion, and the concatenated feature map passes through a convolution layer (413) to obtain the watermarked image (hiding network) or the watermark image (extraction network). The only difference between the two networks is the number of input channels: the input of the hiding network has 2 channels, the concatenation of the carrier Y channel and the watermark, and its output is a single watermarked Y channel; the input and output of the extraction network are both single channels, the watermarked Y channel and the extracted watermark.
The loss of the hidden network consists of two parts, the regression loss, which consists of the difference between the watermark image and the carrier image generated by the hidden network, and the loss from the extraction network. The regression loss of the hidden network is designed based on the mean square error and the chi-square distance, and the extraction network adopts binary cross entropy loss. If C, S represent the carrier image and the watermark carrier image respectively, W and W' represent the original watermark and the extracted watermark respectively, the mean square error is shown as formula (1), the chi-square distance is calculated as formula (2), and the binary cross entropy loss is shown as formula (3):
MSE(C,S) = ||C−S||² (1)
CSD(C,S) = Σ_{i=1..n} (c_i − s_i)² / (c_i + s_i + ε) (2)
BCE(W,W') = −Σ_j [w_j·log(w'_j) + (1−w_j)·log(1−w'_j)] (3)
In formula (2), n is the number of histogram bins. Image pixel values usually fall in [0,255]; if they lie in [0,1] they can be multiplied by 255 to map them into [0,255], so the values are divided into n=256 bins, each counting the number of pixels whose value falls in it. c_i and s_i are the values of the i-th bin of images C and S, and ε is a small positive number that avoids a division by zero when the denominator would otherwise be 0. The MSE and CSD are computed over all three channels of the carrier image and the watermark-containing image, not just the Y channel. Since the pixel values of the binary watermark are always 0 or 1 while the output of the extraction network is a real number between 0 and 1, the invention binarizes the output with a threshold of 0.5, the result being the extracted watermark.
The hidden network and the extraction network are trained jointly, and the total loss of the hidden network is shown as a formula (4):
Lem=MSE(C,S)+γCSD(C,S)+ξBCE(W,W’) (4)
where γ and ξ are positive hyper-parameters that control the weight of each loss in the formula. The loss function Lem of formula (4) is back-propagated through the hidden network, the BCE(W,W') of formula (3) is back-propagated through the extraction network, and both are optimized with the Adam algorithm.
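The three loss terms and their combination in formula (4) can be sketched in NumPy as follows. This is an illustration only: the patent does not fix the values of γ and ξ (the defaults below are placeholders), nor the exact normalisation of the MSE, so treat the function names and constants as assumptions:

```python
import numpy as np

def mse(C, S):
    """Mean square error between carrier C and watermark-containing image S -- formula (1)."""
    return np.mean((C - S) ** 2)

def csd(C, S, eps=1e-8):
    """Chi-square distance between the 256-bin grey-level histograms -- formula (2)."""
    hc = np.histogram(C, bins=256, range=(0, 256))[0].astype(float)
    hs = np.histogram(S, bins=256, range=(0, 256))[0].astype(float)
    return np.sum((hc - hs) ** 2 / (hc + hs + eps))

def bce(W, Wp, eps=1e-8):
    """Binary cross-entropy between watermark bits W and network output Wp -- formula (3)."""
    Wp = np.clip(Wp, eps, 1 - eps)
    return -np.mean(W * np.log(Wp) + (1 - W) * np.log(1 - Wp))

def hidden_loss(C, S, W, Wp, gamma=1.0, xi=1.0):
    """Total hidden-network loss Lem of formula (4); gamma and xi are illustrative defaults."""
    return mse(C, S) + gamma * csd(C, S) + xi * bce(W, Wp)
```

In training, `hidden_loss` would be back-propagated through the hidden network and the BCE term alone through the extraction network, as described above.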
The invention can hide a binary watermark image in a color or gray-scale high-resolution image and extract it again; the following 4 different specific embodiments are provided:
Example 1
(1) Training data is prepared.
A sufficient number of training images are prepared (the training set can be collected by oneself or assembled by downloading public image data sets from the Internet, such as ImageNet, Pascal VOC2012 and LFW). The samples are split into two parts, one used as carrier images and the other as watermark images, and all images are normalized to 512×512 as follows: images whose short side is larger than 512 are randomly cropped directly, and images whose short side is smaller than 512 are proportionally enlarged until the short side equals 512 and then randomly cropped. Each image used as a watermark is first converted to grayscale, then binarized and denoised for later use.
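The normalization rule above can be sketched as a small planning helper (my own illustrative function, not from the patent): given an image's height and width, it returns the proportionally enlarged size, if enlargement is needed, and a random 512×512 crop origin:

```python
import numpy as np

def normalize_plan(h, w, target=512, rng=None):
    """Return ((new_h, new_w), (top, left)): enlarge proportionally when the
    short side is below `target`, then pick a random target x target crop."""
    if rng is None:
        rng = np.random.default_rng(0)
    short = min(h, w)
    if short < target:
        scale = target / short            # enlarge so the short side equals target
        h, w = round(h * scale), round(w * scale)
    top = int(rng.integers(0, h - target + 1))
    left = int(rng.integers(0, w - target + 1))
    return (h, w), (top, left)
```

An actual pipeline would then resize the image to `(new_h, new_w)` and slice out `[top:top+512, left:left+512]`.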
(2) Hidden network and extraction network for joint training
The carriers in the prepared training data set are randomly paired with watermarks. If the carrier is a color image, it is converted to YCrCb space and its Y channel is concatenated with the watermark along the channel dimension; if it is a grayscale image, it is concatenated with the watermark directly. The resulting 2-channel data is fed into the hidden network, and the output of the hidden network simultaneously serves as the input of the extraction network; at this stage the outputs of both networks are single-channel. The loss functions of formula (3) and formula (4) are used as the losses of the extraction network and the hidden network respectively to train the whole network jointly. During training, each batch of data is rescaled, choosing one scale from 128×128, 256×256, 320×320, 512×512 and so on to implement multi-scale cross training. At the same time, signal-processing and geometric-transformation attacks are applied to the output of the hidden network with a certain probability; a curriculum learning strategy is adopted, in which the probability of applying an attack grows from small to large over the course of training and the attack strength grows from weak to strong.
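The curriculum attack schedule can be sketched as follows. The patent only states that probability and strength grow over training; the linear ramp, the maxima `p_max`/`s_max`, and the Gaussian-noise stand-in attack below are all my own assumptions:

```python
import numpy as np

def attack_schedule(step, total_steps, p_max=0.9, s_max=1.0):
    """Linear curriculum: attack probability and strength ramp from 0 to
    their maxima over training (illustrative schedule)."""
    frac = min(step / total_steps, 1.0)
    return p_max * frac, s_max * frac

def maybe_attack(image, step, total_steps, rng):
    """Apply a random attack to the hidden network's output with the current
    curriculum probability; additive noise stands in for the real attack set."""
    p, strength = attack_schedule(step, total_steps)
    if rng.random() < p:
        image = image + rng.normal(0, 10 * strength, image.shape)
    return np.clip(image, 0, 255)
```

Early batches are thus mostly attack-free, letting the networks first learn to embed and extract cleanly before robustness is demanded.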
(3) Selecting multiple region hidden watermarks in color high resolution images
First, the color high-resolution image is reduced so that its area equals 1024×1024, maintaining the aspect ratio and recording the scaling factor s. It is then converted to a grayscale map, key points are extracted with the SURF operator, and the key points are sorted by response from large to small. Square areas of side length 128 centered on the key points are checked in turn: if part of a square falls outside the image or overlaps an already selected square, the key point is discarded and the next one is checked; if the square falls completely within the image and does not overlap any selected area, it is added to the candidate list. This process is repeated until 4 non-overlapping squares lying entirely within the image are found. Finally, these 4 regions are mapped back to the original image, i.e., the vertex coordinates of all 4 determined embedded regions are multiplied by the scaling factor s. The binary watermark is normalized according to the size of the embedded region: the watermark image is first zero-padded on its short side to make it square, the square watermark is then scaled to the size of the square inscribed in the inscribed circle of the embedded region, and finally zero-padding is added around it to expand it to the same size as the embedded region, yielding the normalized binary watermark.
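The greedy region-selection loop above can be sketched in plain Python (an illustrative helper, not the patent's code; key points are assumed already sorted by descending response):

```python
def select_regions(keypoints, img_h, img_w, side=128, wanted=4):
    """Greedily pick up to `wanted` non-overlapping side x side squares,
    each centred on a key point (x, y), all lying fully inside the image.
    Returns boxes as (x0, y0, x1, y1)."""
    half = side // 2
    chosen = []
    for x, y in keypoints:
        x0, y0 = x - half, y - half
        x1, y1 = x0 + side, y0 + side
        if x0 < 0 or y0 < 0 or x1 > img_w or y1 > img_h:
            continue                      # square falls partly outside the image
        if any(x0 < cx1 and x1 > cx0 and y0 < cy1 and y1 > cy0
               for cx0, cy0, cx1, cy1 in chosen):
            continue                      # overlaps an already selected square
        chosen.append((x0, y0, x1, y1))
        if len(chosen) == wanted:
            break
    return chosen
```

Mapping back to the original image then just multiplies each box's coordinates by the recorded scaling factor s.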
The 4 areas are cut out of the original image, their Y channels are extracted and each is concatenated with the normalized binary watermark image and fed into the hidden network trained in the previous step, giving a watermark-containing Y channel; this is combined with the original CrCb channels to obtain the watermark-containing embedded area. Because edge pixels offer fewer cues and regress less reliably than central pixels in a deep learning network, three rows and three columns around each watermark-containing embedded area are discarded and replaced by the corresponding pixels of the original area, which improves transparency.
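The edge-trimmed replacement can be sketched as follows (illustrative helper of my own; a real implementation would operate per channel on the YCrCb result):

```python
import numpy as np

def paste_trimmed(original, watermarked, x0, y0, margin=3):
    """Paste the watermarked region back into the original image, keeping
    the original pixels in a `margin`-pixel frame around it -- the patent
    discards three rows/columns on each side because edge pixels regress
    less reliably than central ones."""
    out = original.copy()
    h, w = watermarked.shape[:2]
    out[y0 + margin:y0 + h - margin, x0 + margin:x0 + w - margin] = \
        watermarked[margin:h - margin, margin:w - margin]
    return out
```

Only the interior of each embedded region carries the regressed watermark signal; its 3-pixel frame stays identical to the carrier.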
(4) Extracting watermarks from color high resolution images
The same method is used to determine 4 embedded areas in the watermark-containing image; the 4 areas are cut out, their Y channels are fed into the extraction network, and the watermark is extracted. Extraction succeeds as long as the previously hidden watermark is recovered from at least one of the areas. Since an image with a hidden watermark may suffer signal-processing or geometric-transformation attacks, SURF key point detection also fluctuates, i.e., the order and positions of the key points determined after an attack differ from those before it; the multi-region embedding strategy prevents watermark extraction from failing after an attack and thus improves the robustness of the watermark.
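The "success if any region recovers the watermark" rule can be sketched as below. The patent binarizes the extraction network's output at 0.5; the bit-error-rate threshold `max_ber` is my own assumption for deciding whether a recovered watermark counts as a match:

```python
import numpy as np

def extract_success(extracted_list, original_w, max_ber=0.2):
    """Declare extraction successful if any region yields a watermark whose
    bit-error rate against the original falls below max_ber. Real-valued
    network outputs in [0, 1] are first binarised at 0.5."""
    for w_pred in extracted_list:
        bits = (np.asarray(w_pred) >= 0.5).astype(int)
        ber = np.mean(bits != original_w)
        if ber < max_ber:
            return True
    return False
```

Because only one of the four regions needs to survive an attack, the scheme tolerates key-point fluctuation in the other three.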
Example 2
(1) Training data is prepared.
A sufficient number of training images are prepared (the training set can be collected by oneself or assembled by downloading public image data sets from the Internet, such as ImageNet, Pascal VOC2012 and LFW). The samples are split into two parts, one used as carrier images and the other as watermark images, and all images are normalized to 512×512 as follows: images whose short side is larger than 512 are randomly cropped directly, and images whose short side is smaller than 512 are proportionally enlarged until the short side equals 512 and then randomly cropped. Each image used as a watermark is first converted to grayscale, then binarized and denoised for later use.
(2) Hidden network and extraction network for joint training
The carriers in the prepared training data set are randomly paired with watermarks. If the carrier is a color image, it is converted to YCrCb space and its Y channel is concatenated with the watermark along the channel dimension; if it is a grayscale image, it is concatenated with the watermark directly. The resulting 2-channel data is fed into the hidden network, and the output of the hidden network simultaneously serves as the input of the extraction network; at this stage the outputs of both networks are single-channel. The loss functions of formula (3) and formula (4) are used as the losses of the extraction network and the hidden network respectively to train the whole network jointly. During training, each batch of data is rescaled, choosing one scale from 128×128, 256×256, 320×320, 512×512 and so on to implement multi-scale cross training. At the same time, signal-processing and geometric-transformation attacks are applied to the output of the hidden network with a certain probability; a curriculum learning strategy is adopted, in which the probability of applying an attack grows from small to large over the course of training and the attack strength grows from weak to strong.
(3) Selecting multiple region hidden watermarks in color high resolution images
First, the color high-resolution image is reduced so that its area equals 1024×1024, maintaining the aspect ratio and recording the scaling factor s. It is then converted to a grayscale map, key points are extracted with the SIFT operator, and the key points are sorted by response from large to small. Square areas of side length 128 centered on the key points are checked in turn: if part of a square falls outside the image or overlaps an already selected square, the key point is discarded and the next one is checked; if the square falls completely within the image and does not overlap any selected area, it is added to the candidate list. This process is repeated until 4 non-overlapping squares lying entirely within the image are found. These 4 regions are then mapped back to the original image, i.e., the vertex coordinates of all 4 determined embedded regions are multiplied by the scaling factor s. The 4 areas are cut out of the original image, their Y channels are extracted and each is concatenated with the normalized binary watermark image and fed into the hidden network trained in the previous step, giving a watermark-containing Y channel; this is combined with the original CrCb channels to obtain the watermark-containing embedded area. Because edge pixels offer fewer cues and regress less reliably than central pixels in a deep learning network, three rows and three columns around each watermark-containing embedded area are discarded and replaced by the corresponding pixels of the original area, which improves transparency.
(4) Extracting watermarks from color high resolution images
The same method is used to determine 4 embedded areas in the watermark-containing image; the 4 areas are cut out, their Y channels are fed into the extraction network, and the watermark is extracted. Extraction succeeds as long as the previously hidden watermark is recovered from at least one of the areas. Since an image with a hidden watermark may suffer signal-processing or geometric-transformation attacks, SIFT key point detection also fluctuates, i.e., the order and positions of the key points determined after an attack differ from those before it; the multi-region embedding strategy prevents watermark extraction from failing after an attack and thus improves the robustness of the watermark.
Example 3
(1) Training data is prepared.
A sufficient number of training images are prepared (the training set can be collected by oneself or assembled by downloading public image data sets from the Internet, such as ImageNet, Pascal VOC2012 and LFW). The samples are split into two parts, one used as carrier images and the other as watermark images, and all images are normalized to 512×512 as follows: images whose short side is larger than 512 are randomly cropped directly, and images whose short side is smaller than 512 are proportionally enlarged until the short side equals 512 and then randomly cropped. Each image used as a watermark is first converted to grayscale, then binarized and denoised for later use.
(2) Hidden network and extraction network for joint training
The carriers in the prepared training data set are randomly paired with watermarks. If the carrier is a color image, it is converted to YCrCb space and its Y channel is concatenated with the watermark along the channel dimension; if it is a grayscale image, it is concatenated with the watermark directly. The resulting 2-channel data is fed into the hidden network, and the output of the hidden network simultaneously serves as the input of the extraction network; at this stage the outputs of both networks are single-channel. The loss functions of formula (3) and formula (4) are used as the losses of the extraction network and the hidden network respectively to train the whole network jointly. During training, each batch of data is rescaled, choosing one scale from 128×128, 256×256, 320×320, 512×512 and so on to implement multi-scale cross training. At the same time, signal-processing and geometric-transformation attacks are applied to the output of the hidden network with a certain probability; a curriculum learning strategy is adopted, in which the probability of applying an attack grows from small to large over the course of training and the attack strength grows from weak to strong.
(3) Selecting multiple region hidden watermarks in high resolution gray scale images
First, the high-resolution grayscale image is reduced so that its area equals 1024×1024, maintaining the aspect ratio and recording the scaling factor s. Key points are extracted with the SURF operator and sorted by response from large to small. Square areas of side length 128 centered on the key points are checked in turn: if part of a square falls outside the image or overlaps an already selected square, the key point is discarded and the next one is checked; if the square falls completely within the image and does not overlap any selected area, it is added to the candidate list. This process is repeated until 4 non-overlapping squares lying entirely within the image are found. These 4 regions are then mapped back to the original image, i.e., the vertex coordinates of all 4 determined embedded regions are multiplied by the scaling factor s. The 4 areas are cut out of the original image, concatenated channel-wise with the normalized binary watermark image and fed into the trained hidden network to obtain the watermark-containing embedded areas. Because edge pixels offer fewer cues and regress less reliably than central pixels in a deep learning network, three rows and three columns around each watermark-containing embedded area are discarded and replaced by the corresponding pixels of the original area.
(4) Extracting watermarks from high resolution gray scale images
The same method is used to determine 4 embedded areas in the watermark-containing image; the 4 areas are cut out and fed into the extraction network to extract the watermark, and extraction succeeds as long as the previously hidden watermark is recovered from at least one of the areas. Since an image with a hidden watermark may suffer signal-processing or geometric-transformation attacks, SURF key point detection also fluctuates, i.e., the order and positions of the key points determined after an attack differ from those before it; the multi-region embedding strategy prevents watermark extraction from failing after an attack and thus improves the robustness of the watermark.
Example 4
(1) Training data is prepared.
A sufficient number of training images are prepared (the training set can be collected by oneself or assembled by downloading public image data sets from the Internet, such as ImageNet, Pascal VOC2012 and LFW). The samples are split into two parts, one used as carrier images and the other as watermark images, and all images are normalized to 512×512 as follows: images whose short side is larger than 512 are randomly cropped directly, and images whose short side is smaller than 512 are proportionally enlarged until the short side equals 512 and then randomly cropped. Each image used as a watermark is first converted to grayscale, then binarized and denoised for later use.
(2) Hidden network and extraction network for joint training
The carriers in the prepared training data set are randomly paired with watermarks. If the carrier is a color image, it is converted to YCrCb space and its Y channel is concatenated with the watermark along the channel dimension; if it is a grayscale image, it is concatenated with the watermark directly. The resulting 2-channel data is fed into the hidden network, and the output of the hidden network simultaneously serves as the input of the extraction network; at this stage the outputs of both networks are single-channel. The loss functions of formula (3) and formula (4) are used as the losses of the extraction network and the hidden network respectively to train the whole network jointly. During training, each batch of data is rescaled, choosing one scale from 128×128, 256×256, 320×320, 512×512 and so on to implement multi-scale cross training. At the same time, signal-processing and geometric-transformation attacks are applied to the output of the hidden network with a certain probability; a curriculum learning strategy is adopted, in which the probability of applying an attack grows from small to large over the course of training and the attack strength grows from weak to strong.
(3) Selecting multiple region hidden watermarks in high resolution gray scale images
First, the high-resolution grayscale image is reduced so that its area equals 1024×1024, maintaining the aspect ratio and recording the scaling factor s. Key points are extracted with the SIFT operator and sorted by response from large to small. Square areas of side length 128 centered on the key points are checked in turn: if part of a square falls outside the image or overlaps an already selected square, the key point is discarded and the next one is checked; if the square falls completely within the image and does not overlap any selected area, it is added to the candidate list. This process is repeated until 4 non-overlapping squares lying entirely within the image are found. These 4 regions are then mapped back to the original image, i.e., the vertex coordinates of all 4 determined embedded regions are multiplied by the scaling factor s. The 4 areas are cut out of the original image, concatenated channel-wise with the normalized binary watermark image and fed into the trained hidden network to obtain the watermark-containing embedded areas. Because edge pixels offer fewer cues and regress less reliably than central pixels in a deep learning network, three rows and three columns around each watermark-containing embedded area are discarded and replaced by the corresponding pixels of the original area.
(4) Extracting watermarks from high resolution gray scale images
The same method is used to determine 4 embedded areas in the watermark-containing image; the 4 areas are cut out and fed into the extraction network to extract the watermark, and extraction succeeds as long as the previously hidden watermark is recovered from at least one of the areas. Since an image with a hidden watermark may suffer signal-processing or geometric-transformation attacks, SIFT key point detection also fluctuates, i.e., the order and positions of the key points determined after an attack differ from those before it; the multi-region embedding strategy prevents watermark extraction from failing after an attack and thus improves the robustness of the watermark.
The high-resolution image robust digital watermarking method based on key point detection and deep learning can hide a binary watermark image in a color or gray-scale high-resolution image with good transparency; the extracted watermark matches the original to a high degree, and the method is highly robust against signal-processing operations such as JPEG compression, noise addition, filtering and contrast changes, as well as geometric-transformation attacks such as rotation, scaling and translation.
The preceding description of the embodiments is provided to enable a person of ordinary skill in the art to make and use the present invention. Various modifications to these embodiments will be readily apparent to such persons, and the generic principles described herein may be applied to other embodiments without inventive effort. The present invention is therefore not limited to the embodiments above; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the scope of protection of the present invention.

Claims (6)

1. A high-resolution image digital watermarking method based on key point detection and deep learning comprises the following steps:
1) Obtaining a sufficient number of sample images, dividing the sample images into two groups, wherein one group is used as a carrier image, the other group is used as a watermark image to be hidden, the watermark image is converted into a binary image by using graying, binarizing, closing and island removing operations, and the two groups of images are processed into squares by using scaling and cutting;
2) Training a multi-scale fusion cavity convolution residual error network by using the image sample, wherein the multi-scale fusion cavity convolution residual error network comprises a hiding network for hiding a binary watermark image into a carrier image and an extracting network for extracting a watermark from a watermark-containing image, and the network is trained to be converged by using multi-scale cross training and course learning strategies;
3) Converting a high-resolution image into a gray level image, scaling the gray level image to a certain area in equal proportion, acquiring the most remarkable key points by using a key point detection algorithm, determining a plurality of watermark embedding areas which are not overlapped and have a certain size based on the key points, and mapping the embedding areas back to the original image;
4) Scaling the binary watermark image to 1/2 of the area of the embedded region, i.e., its side length is 1/√2 times the side length of the embedded region; the four sides of the watermark image are then zero-padded so that the expanded watermark image has the same size as the embedded region, with the original watermark located at the center of the normalized watermark image;
5) Cutting out each embedded region, converting the embedded region from an RGB space to a YCrCb space, connecting a Y channel in the embedded region with a watermark image normalized in the previous step in a channel way, inputting a trained deep learning hidden network, hiding the watermark in the Y channel, and outputting the Y channel containing the transparent watermark by the hidden network;
6) Combining the watermark-containing Y channel with the original CrCb channel and converting the channel back to an RGB space, obtaining watermark-containing embedded areas, hiding the watermarks in the same way in all the embedded areas, obtaining a plurality of watermark-containing embedded areas, and finally replacing the original image corresponding areas with all the watermark-containing embedded areas to obtain a watermark-hiding high-resolution image;
7) When extracting the watermark, the embedded regions are located as in step 3), and the Y channel of each embedded region is input to the extraction network, from which the binary watermark image hidden in it can be extracted; because the image may undergo transformations or attacks during transmission, the detected feature points may fluctuate, and multi-region embedding guarantees the robustness of the watermark algorithm, i.e., copyright authentication is achieved as long as the hidden watermark is extracted from one of the plural embedding regions;
Step 2) the hidden network and the extraction network are both constructed with a residual network ResNet as the backbone and comprise 12 residual modules; the input is first processed by a convolution layer and then enters a deep network formed by connecting the 12 residual modules in series; each residual module consists of 2 convolution layers, each followed by batch normalization and a rectified linear unit activation, with no pooling layer; the skip connections of the residual modules are processed with dilated convolution, the 9th and 10th residual modules using dilated convolution with a coefficient of 3, the 8th and 11th residual modules using a coefficient of 2, and the skip connections of the other residual modules using ordinary convolution layers;
Step 2) the hidden network and the extraction network adopt a multi-scale fusion strategy: the 3rd and 6th residual modules downsample the feature maps by a factor of 2 to obtain features at different scales; the outputs of the 5th, 9th and last residual modules are tapped and upsampled by deconvolution with magnification factors of 2, 4 and 4 respectively; once their sizes match, the three branch feature maps are concatenated channel-wise with the input image to realize multi-scale fusion, and the concatenated feature maps pass through a convolution layer to yield the watermark-containing image or the extracted watermark.
2. The high resolution image digital watermarking method according to claim 1, wherein: in step 3), the key points detected in the proportionally scaled grayscale image are sorted from strong to weak by response; square areas of fixed side length centered on the key points are checked one by one in that order, and a key point is deleted and the next one examined if part of its area lies outside the image or overlaps an already determined embedded area; this process is repeated until several non-overlapping square embedded areas lying within the image range are determined.
3. The high resolution image digital watermarking method according to claim 1, wherein: the size of the watermark embedding area is proportional to the size of the image, so the actual embedding area changes as the image size changes; the watermark hiding network and the extraction network both adopt fully convolutional structures and can therefore accept inputs and outputs of varying size; the input of the watermark hiding network is two channels and its output a single channel, while the input and output of the watermark extraction network are both single channels.
4. The high resolution image digital watermarking method according to claim 1, wherein: the hidden network and the extraction network are subjected to multi-scale cross training, namely the sizes of input and output images of each batch of training data are changed within a certain range, so that the network can adapt to the input and output of different sizes.
5. The high resolution image digital watermarking method according to claim 1, wherein: when training the network, a curriculum learning method is adopted, gradually adding signal-processing attacks and geometric attacks to the watermark-containing image; for each batch of training data a random real number is first generated and compared with a threshold that controls whether an attack is applied: when the random number is larger than the threshold a random attack is applied to the watermark-containing image, otherwise no attack is applied; during training, the threshold gradually decreases while the attack strength gradually increases.
6. The high resolution image digital watermarking method according to claim 1, wherein: during training, the extraction network adopts a binary cross-entropy loss and the network parameters are optimized with the Adam algorithm; the loss of the extraction network is back-propagated not only through the extraction network but also through the hidden network, whereas the regression loss of the hidden network, i.e., the difference between the watermark-containing image and the carrier image, is back-propagated only through the hidden network.
CN202011022189.6A 2020-09-25 2020-09-25 High-resolution image robust digital watermarking method based on key point detection and deep learning Active CN114255151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011022189.6A CN114255151B (en) 2020-09-25 2020-09-25 High-resolution image robust digital watermarking method based on key point detection and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011022189.6A CN114255151B (en) 2020-09-25 2020-09-25 High-resolution image robust digital watermarking method based on key point detection and deep learning

Publications (2)

Publication Number Publication Date
CN114255151A CN114255151A (en) 2022-03-29
CN114255151B true CN114255151B (en) 2024-05-14

Family

ID=80790311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011022189.6A Active CN114255151B (en) 2020-09-25 2020-09-25 High-resolution image robust digital watermarking method based on key point detection and deep learning

Country Status (1)

Country Link
CN (1) CN114255151B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170378A (en) * 2022-06-13 2022-10-11 北京林业大学 Video digital watermark embedding and extracting method and system based on deep learning
CN115187443B (en) * 2022-07-05 2023-04-07 海南大学 Watermark embedding and detecting method and device based on spatial domain residual error feature fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1688156A (en) * 2005-03-28 2005-10-26 南方医科大学 Medical image fragile watermark method based on wavelet transform
CN104680473A (en) * 2014-12-20 2015-06-03 辽宁师范大学 Machine learning-based color image watermark embedding and detecting method
CN111640052A (en) * 2020-05-22 2020-09-08 南京信息工程大学 Robust high-capacity digital watermarking method based on mark code

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8300884B2 (en) * 2009-05-21 2012-10-30 Digimarc Corporation Combined watermarking and fingerprinting
CN104091302B (en) * 2014-07-10 2017-06-06 北京工业大学 A kind of robust watermarking insertion and extracting method based on multiscale space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A color image digital watermarking algorithm based on multi-scale features; Wang Xiangyang, Meng Lan, Yang Hongying; Science in China (Series F: Information Sciences); 2009-09-15 (No. 09); full text *

Also Published As

Publication number Publication date
CN114255151A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
US9443277B2 (en) Method for embedding and extracting multi-scale space based watermark
Kumar et al. A fast DCT based method for copy move forgery detection
Sabeti et al. An adaptive LSB matching steganography based on octonary complexity measure
CN114255151B (en) High-resolution image robust digital watermarking method based on key point detection and deep learning
Alwan et al. Data embedding based on better use of bits in image pixels
CN110866455B (en) Pavement water body detection method
CN112785480B (en) Image splicing tampering detection method based on frequency domain transformation and residual error feedback module
Hou et al. Detection of hue modification using photo response nonuniformity
Zhao et al. Tampered region detection of inpainting JPEG images
CN112700363A (en) Self-adaptive visual watermark embedding method and device based on region selection
Gupta et al. A survey of watermarking technique using deep neural network architecture
CN108648130B (en) Totally-blind digital watermarking method with copyright protection and tampering positioning functions
Karathanassi et al. A thinning-based method for recognizing and extracting peri-urban road networks from SPOT panchromatic images
CN111284157A (en) Commodity package anti-counterfeiting printing and verifying method based on fractional order steganography technology
Rijati Nested block based double self-embedding fragile image watermarking with super-resolution recovery
Lin et al. Passive forgery detection for JPEG compressed image based on block size estimation and consistency analysis
Chetan et al. An intelligent blind semi-fragile watermarking scheme for effective authentication and tamper detection of digital images using curvelet transforms
Benseddik et al. Efficient interpolation-based reversible watermarking for protecting fingerprint images
Hosam et al. A hybrid ROI-embedding based watermarking technique using DWT and DCT transforms
CN108805786B (en) Steganalysis method and device based on least significant bit matching
Khan An efficient neural network based algorithm of steganography for image
Xing et al. Remote Sensing Image Zero-Watermark Algorithm Based on Bemd
WO2003056515A1 (en) Digital watermarking
Lin et al. A Lightweight Embedding Probability Estimation Algorithm Based on LBP for Adaptive Steganalysis
CN117635411B (en) Digital watermark processing method based on mixed domain decomposition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant