CN113706359B - Thangka digital watermarking method based on visual perception model - Google Patents

Thangka digital watermarking method based on visual perception model

Info

Publication number
CN113706359B
CN113706359B (application CN202110890427.3A)
Authority
CN
China
Prior art keywords
watermark
image
value
sub-block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110890427.3A
Other languages
Chinese (zh)
Other versions
CN113706359A (en)
Inventor
唐伶俐
黄天赐
解庆
刘永坚
胡桉澍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN202110890427.3A
Publication of CN113706359A
Application granted
Publication of CN113706359B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20052 Discrete cosine transform [DCT]
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a Thangka digital watermarking method based on a visual perception model, which comprises the following steps: 1) preprocessing the watermark image; 2) selecting an ROI region of the original Thangka image; 3) constructing a visual perception JND model; 4) embedding the watermark image into the original Thangka image; 5) increasing the watermark embedding strength to improve resistance to compression; 6) extracting the watermark. The invention determines the watermark embedding strength adaptively for different regions of different images instead of using one constant strength: the visual perception JND model dynamically calculates the modifiable threshold of each pixel of each image, and this threshold is taken as the watermark embedding strength. To improve robustness against compression, the invention further provides a cyclic function of the two variables image SSIM and watermark NC, which can adaptively determine different watermark embedding strengths for different Thangkas and effectively improves both the invisibility and the robustness of the digital watermark.

Description

Thangka digital watermarking method based on visual perception model
Technical Field
The invention relates to the technical field of image processing, and in particular to a Thangka digital watermarking method based on a visual perception model.
Background Art
With the development of Internet big data, people have more and more ways to acquire digital resources, and piracy and infringement have become increasingly serious. The earliest way to protect copyright was to add visible watermarks, such as author names or merchant logos, to digital pictures. However, such marks impair the ornamental value of artistic pictures such as Thangka images, and as watermark-removal techniques have matured, visible watermarks have become very easy to remove and can no longer protect copyright effectively. Against this background, invisible digital watermarking methods are being studied more and more.
Digital watermarking embeds a piece of hidden information in an image by exploiting image redundancy. According to where the watermark is embedded, current digital watermarking methods can be divided into two types: spatial-domain digital watermarking and frequency-domain digital watermarking. Spatial-domain digital watermarking embeds the watermark by directly modifying pixel values; its robustness is poor, the watermark is easily destroyed, and the embeddable watermark capacity is low. Frequency-domain digital watermarking methods first convert the image into sub-bands of different frequencies through transforms such as DCT and DWT, and then embed the watermark by modifying the sub-band coefficients. Watermarks embedded this way resist signal-processing and compression attacks better than spatial-domain watermarks, so more and more digital watermarking methods are studied in the frequency domain. The invention studies watermark embedding in the DCT domain.
Current frequency-domain digital watermarking methods are studied on pictures of a uniform 512x512 size and require a proportional relationship between the carrier picture and the watermark picture. Thangka images, however, are irregular and non-uniform in size and cannot be watermarked in the same way as in previous methods. In addition, most current watermark embedding methods use an empirical value from earlier studies, or a constant obtained by continual trial and error, as the watermark embedding strength, so a setting that works well on some images can show poor invisibility once the image changes.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a Thangka digital watermarking method based on a visual perception model for protecting the copyright of Thangka images.
To achieve this purpose, the invention provides a Thangka digital watermarking method based on a visual perception model, characterized by comprising the following steps:
1) Preprocessing the watermark image;
2) Selecting an ROI region of the original Thangka image;
3) Constructing a visual perception JND model;
4) Embedding the watermark image into the original Thangka image;
5) Increasing the watermark embedding strength to improve resistance of the watermark to compression;
6) Extracting the watermark.
Further, the preprocessing of the watermark image in step 1) includes: performing an Arnold transformation on the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
Further, the specific steps of ROI region selection in step 2) include:
201) performing color space conversion on the original Thangka image, converting it from RGB space to YUV space;
202) extracting the Y channel and padding zeros on its right and lower sides so that the length and width (M', N') of the padded image are exactly divisible by the length and width (M, N) of the watermark image;
203) dividing the padded Thangka image into (M'/M) x (N'/N) non-overlapping sub-blocks, each of the same size as the watermark image, to facilitate subsequent embedding of the watermark;
204) calculating the content information entropy of each sub-block, E_visual = -Σ_{i=1}^{k} p_i·log2(p_i), where k denotes the number of pixels contained in each sub-block and p_i denotes the information probability distribution value at the i-th pixel of the sub-block;
205) performing Canny edge detection on the image padded in step 202) to obtain a line profile, and then repeating the division of step 203);
206) calculating the edge information entropy E_edge of each segmented sub-block with the same formula;
207) adding the content information entropy and edge information entropy of each sub-block: E_sum = E_visual + E_edge;
208) selecting, according to the total information entropy E_sum of each sub-block, the block with the maximum value as the ROI region in which the watermark is to be embedded.
Still further, the visual perception JND model in step 3) consists of three parts: a frequency sensitivity function model CSF, a luminance adaptation factor LA, and a contrast masking factor CM. The visual perception JND model calculates the maximum modification threshold of the image that is imperceptible to the human eye; this threshold is used as the watermark embedding strength of each pixel, so the human eye cannot perceive the difference between the watermarked image and the original image.
Furthermore, in step 4) the watermark embedding rule exploits the binary 0/1 values of the watermark: the extracted ROI region is DCT-transformed; if the current watermark bit is 1, β times the JND threshold plus a Q adjustment factor is added to the DCT-transformed coefficients; if the current watermark bit is 0, β times the JND threshold is subtracted from the DCT-transformed coefficients and the Q adjustment factor is added.
Furthermore, the method in step 5) for increasing the watermark embedding strength to improve resistance to compression is as follows: the SSIM value of the watermarked Thangka and the NC value of the extracted watermark image are calculated; the adjustment factor Q at the intersection point where the NC value becomes no smaller than the SSIM value is the optimal solution.
Further, the watermark extraction rule in step 6) is as follows: the corresponding region of the watermarked Thangka is extracted according to the ROI region coordinates; DCT is performed on this region and on the corresponding region of the original image respectively, and the two are subtracted; where the difference is greater than 0 the watermark bit at that position is 1, and where it is less than 0 the watermark bit is 0.
Further, the frequency sensitivity function is calculated as:
T_{i,j} = (T_min / (C(i)·C(j))) · 10^{K·(log10 f_{i,j}^n - log10 f_min)^2}
wherein f_min denotes the minimum spatial frequency over all sub-blocks, T_min is the minimum threshold corresponding to that minimum spatial frequency, K is a steepness parameter constant, and C(i), C(j) denote the DCT transform normalization factors of the i-th row and j-th column respectively; f_{i,j}^n denotes the spatial frequency of the n-th sub-block, calculated as:
f_{i,j}^n = (1/(2M)) · sqrt((i/Wx)^2 + (j/Wy)^2)
wherein i and j denote the row and column of the (i, j)-th DCT coefficient within a sub-block, M denotes the sub-block size, Wx denotes the horizontal width of a pixel in units of visual angle, and Wy denotes the vertical height of a pixel in units of visual angle.
Further, the method for constructing the luminance adaptation factor is as follows: the luminance adaptation factor addresses a limitation of the frequency sensitivity function model itself, replacing the average luminance with the local luminance of the image so as to calculate the threshold more accurately. The luminance adaptation factor is calculated as:
F_LA(k) = (f_{0,0,k} / f̄_{0,0})^{α_T}
wherein the base threshold of the k-th sub-block is the value T_{i,j} calculated above, f_{0,0,k} refers to the DCT DC coefficient DCT(0,0) of the k-th sub-block, f̄_{0,0} denotes the mean DC coefficient over all sub-blocks, and α_T is the masking intensity parameter.
Further, the method for constructing the contrast masking factor is as follows: regions of different contrast in an image have different redundancy and therefore exhibit different masking effects. In the contrast masking calculation, i and j denote the row and column of the DCT coefficient within a sub-block, i_m and j_m denote the corresponding row and column of the constructed masking model matrix, f_{i_m,j_m,k} denotes the DCT coefficients of the corresponding sub-block, and W_{i,j,k} is a parameter in the range (0, 1) used to control the masking effect. In the calculation of W_{i,j,k}, M denotes the size of each sub-block and γ denotes an intensity adjustment parameter; an NVF (noise visibility function) sliding window is moved over the original Thangka image, dividing the Thangka into texture, smooth, and edge regions, and the W_{i,j,k} parameter is determined dynamically according to the different NVF values of the different regions, so that different embedding strengths are set in different regions.
The invention is a Thangka digital watermarking method based on a visual perception model, combining information entropy, edge detection, and a just noticeable difference (JND) model in the discrete cosine transform (DCT) domain. The beneficial effects of the invention are as follows: the watermark embedding strength is determined adaptively for different regions of different images instead of using one constant strength; the modifiable threshold of each pixel of each image is calculated dynamically by the visual perception JND model, and this threshold is used as the watermark embedding strength. In addition, to improve robustness against compression, the invention provides a cyclic function of the two variables image SSIM and watermark NC. The invention can adaptively determine different watermark embedding strengths for different Thangkas, effectively improving both the invisibility and the robustness of the digital watermark.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the invention;
FIG. 2 is a diagram of an example of watermark image preprocessing in accordance with the present invention;
FIG. 3 is a diagram showing an example of edge detection in the selection of the ROI area according to the present invention;
FIG. 4 is a graph of information entropy calculation results in an embodiment of the present invention;
FIG. 5 is a diagram showing the effect of the invention after embedding watermarks in several different Thangka examples;
FIG. 6 is a flow chart of a method for improving watermark compression robustness according to the present invention;
Fig. 7 is a graph of the extraction effect of the watermarks of several different Thangka examples of the invention under different attacks.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1, the Thangka digital watermarking method based on a visual perception model provided by the invention comprises the following steps:
1) Preprocessing the watermark image;
2) Selecting an ROI region of the original Thangka image;
3) Constructing a visual perception JND model;
4) Embedding the watermark image into the original Thangka image;
5) Increasing the watermark embedding strength to improve resistance of the watermark to compression;
6) Extracting the watermark.
The implementation process of the embodiment comprises the following steps:
1) Preprocessing the watermark image:
As shown in fig. 2, the watermark image in this example is a binary image of size 32x32. An Arnold transformation is performed on the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
The Arnold transformation is:
x' = (x + a·y) mod N
y' = (b·x + (a·b + 1)·y) mod N
that is, (x', y') are the coordinates of each pixel after transformation, where N is the side length of the square watermark image. In this example, a and b take the values 1 and 2 respectively, and the number of encryption rounds is 15.
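A minimal Python sketch of this scrambling step follows, assuming the watermark is a square NumPy array; the function names are illustrative, and only the map itself and the parameter values (a = 1, b = 2, 15 rounds) come from this embodiment:

import numpy as np

def arnold_scramble(img, a=1, b=2, rounds=15):
    # Generalized Arnold map: x' = (x + a*y) mod N, y' = (b*x + (a*b+1)*y) mod N.
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + a * y) % n, (b * x + (a * b + 1) * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, a=1, b=2, rounds=15):
    # Inverse mapping, applied the same number of rounds to decrypt.
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[x, y] = out[(x + a * y) % n, (b * x + (a * b + 1) * y) % n]
        out = nxt
    return out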
2) Thangka ROI region selection:
As shown in fig. 3 and fig. 4, the ROI region is extracted from one Thangka example using edge detection and information entropy calculation. The original Thangka image in this example is an animal-beast Thangka of size 899x681.
201) The color information of a Thangka is important information of the original image and should be modified as little as possible when embedding the watermark. Therefore, color space conversion is performed on the Thangka first; in this example the animal-beast Thangka is converted from RGB space to YUV space;
202) The Y channel is extracted and padded with zeros on the right and lower sides so that the padded length and width are exactly divisible by the length and width (32, 32) of the watermark image. In this example, ⌈899/32⌉x32 - 899 zero columns are added on the right of the animal-beast Thangka so that its width becomes 928, and ⌈681/32⌉x32 - 681 zero rows are added at the bottom so that its height becomes 704;
203) The padded Thangka is divided into (928/32)x(704/32) = 638 non-overlapping sub-blocks, each of the same size as the watermark image (32x32), to facilitate subsequent watermark embedding;
204) According to the formula E = -Σ_{i=1}^{k} p_i·log2(p_i), the content information entropy E_1, E_2, E_3, ..., E_n of each sub-block is calculated, where k denotes the number of pixels contained in each sub-block, p_i denotes the information probability distribution value at the i-th pixel of the sub-block, and n denotes the total number of sub-blocks;
205) Canny edge detection is performed on the image padded in step 202) to obtain a line profile, and the division of step 203) is repeated on the edge image;
206) With the same formula, the edge information entropy e_1, e_2, e_3, ..., e_n of each segmented sub-block is calculated;
207) The two information entropies of each sub-block are added to obtain E_sum: E_1+e_1, E_2+e_2, ..., E_i+e_i, ..., E_n+e_n;
208) Finally, the sub-blocks are sorted by total information entropy E_sum, and the block with the maximum value is selected as the ROI region in which the watermark will be embedded.
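A compact Python sketch of steps 201) to 208) follows, assuming the Y channel is an 8-bit NumPy array; the Canny thresholds and helper names are illustrative choices rather than values from the patent:

import numpy as np
import cv2  # used only for Canny edge detection

def block_entropy(block):
    # Shannon entropy of the block's gray-level distribution (log base 2).
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_roi(y_channel, m=32):
    # Zero-pad right/bottom so both dimensions divide by the watermark size m
    # (899x681 -> 928x704 in this example), split into m x m sub-blocks, and
    # return the top-left corner of the block maximizing E_sum = E_visual + E_edge.
    h, w = y_channel.shape
    H = int(np.ceil(h / m)) * m
    W = int(np.ceil(w / m)) * m
    padded = np.zeros((H, W), dtype=np.uint8)
    padded[:h, :w] = y_channel
    edges = cv2.Canny(padded, 100, 200)  # thresholds are illustrative
    best, best_rc = -1.0, (0, 0)
    for r in range(0, H, m):
        for c in range(0, W, m):
            e_sum = block_entropy(padded[r:r+m, c:c+m]) \
                  + block_entropy(edges[r:r+m, c:c+m])
            if e_sum > best:
                best, best_rc = e_sum, (r, c)
    return best_rc, padded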
3) Constructing a visual perception JND model:
The visual perception JND model can calculate the maximum modification threshold of the image that is imperceptible to the human eye; this threshold is used as the watermark embedding strength of each pixel, so the human eye cannot perceive the difference between the watermarked image and the original image, which greatly improves the invisibility of the digital watermark compared with a single embedding strength. The JND model consists of three parts: a frequency sensitivity function model (CSF), a luminance adaptation factor (LA), and a contrast masking factor (CM).
301) Constructing the frequency sensitivity function model:
The frequency sensitivity function is calculated as:
T_{i,j} = (T_min / (C(i)·C(j))) · 10^{K·(log10 f_{i,j}^n - log10 f_min)^2}
wherein T_min, f_min, and K take the values 0.25, 3.1, and 1.34 respectively; C(i), C(j) denote the DCT transform normalization factors of the i-th row and j-th column respectively; and f_{i,j}^n refers to the spatial frequency of the n-th sub-block, which is calculated as:
f_{i,j}^n = (1/(2M)) · sqrt((i/Wx)^2 + (j/Wy)^2)
wherein i and j denote the row and column of the (i, j)-th DCT coefficient within a sub-block, M denotes the sub-block size, Wx denotes the horizontal width of a pixel in units of visual angle, and Wy denotes the vertical height of a pixel in units of visual angle. In this example, Wx takes the value 7.1527/256 and Wy takes the value 7.1527/256.
302) Constructing the luminance adaptation factor:
The luminance adaptation factor addresses a limitation of the frequency sensitivity function model itself, replacing the average luminance with the local luminance of the image so as to calculate the threshold more accurately. The luminance adaptation factor is calculated as:
F_LA(k) = (f_{0,0,k} / f̄_{0,0})^{α_T}
wherein the base threshold of the k-th sub-block is the value T_{i,j} calculated in step 301), f_{0,0,k} refers to the DCT DC coefficient DCT(0,0) of the k-th sub-block, f̄_{0,0} denotes the mean DC coefficient over all sub-blocks, and α_T is a masking intensity parameter, which takes the value 0.649 in the invention.
303) Constructing the contrast masking factor:
Regions of different contrast in the image have different redundancy, i.e., different masking effects. In the contrast masking calculation, i and j denote the row and column of the DCT coefficient within a sub-block, i_m and j_m denote the corresponding row and column of the constructed masking model matrix, the model parameter takes the value 1 in this example, f_{i_m,j_m,k} denotes the DCT coefficients of the corresponding sub-block, and W_{i,j,k} is a parameter in the range (0, 1), mainly used to control the masking effect. In its calculation, M denotes the size of each sub-block, 32 in this example, and γ is an intensity adjustment parameter for adjusting the masking strength, which takes the value 1 in this example. The NVF (noise visibility function) sets a sliding window that slides over the Thangka, dividing the Thangka into texture, smooth, and edge regions; the W_{i,j,k} parameter is determined dynamically according to the different NVF values of the different regions, so that different embedding strengths are set in different regions.
304) Maximum imperceptible distortion threshold JND calculation:
The final maximum imperceptible distortion threshold JND is calculated as:
T_jnd(n, x, y) = T_CSF(n, x, y) · F_LA(n) · F_CM(n, x, y)
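The following Python sketch assembles such a per-coefficient JND map. Because the formula images are not reproduced in this text version, the CSF base threshold is written in the standard Ahumada-Peterson form and the contrast masking in the common Watson style with a fixed exponent w; these concrete forms, the numerical guards, and the helper names are assumptions rather than the patent's exact formulas:

import numpy as np
from scipy.fftpack import dct

M = 32                          # sub-block size used in this embodiment
T_MIN, F_MIN, K = 0.25, 3.1, 1.34
ALPHA_T = 0.649                 # masking intensity parameter from step 302)
WX = WY = 7.1527 / 256          # pixel size in degrees of visual angle

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def csf_base_threshold():
    # Base threshold per DCT coefficient (Ahumada-Peterson form, an assumption).
    i = np.arange(M)[:, None]
    j = np.arange(M)[None, :]
    f = np.sqrt((i / WX) ** 2 + (j / WY) ** 2) / (2 * M)  # spatial frequency f_ij
    f = np.maximum(f, F_MIN)        # guard: keeps the log term finite at DC
    c = np.where(np.arange(M) == 0, np.sqrt(1.0 / M), np.sqrt(2.0 / M))
    norm = c[:, None] * c[None, :]  # DCT normalization factors C(i)*C(j)
    return (T_MIN / norm) * 10.0 ** (K * (np.log10(f) - np.log10(F_MIN)) ** 2)

def jnd_map(y_padded, w=0.7):
    # Per-coefficient JND thresholds: T_jnd = T_CSF * F_LA * F_CM.
    H, Wimg = y_padded.shape
    base = csf_base_threshold()
    blocks = [dct2(y_padded[r:r+M, c:c+M].astype(np.float64))
              for r in range(0, H, M) for c in range(0, Wimg, M)]
    mean_dc = np.mean([abs(b[0, 0]) for b in blocks]) or 1.0
    out = np.zeros(y_padded.shape, dtype=np.float64)
    k = 0
    for r in range(0, H, M):
        for c in range(0, Wimg, M):
            b = blocks[k]; k += 1
            f_la = max((abs(b[0, 0]) / mean_dc) ** ALPHA_T, 1e-3)  # luminance adaptation
            t_la = base * f_la
            f_cm = np.maximum(1.0, np.abs(b / t_la) ** w)          # contrast masking
            out[r:r+M, c:c+M] = t_la * f_cm
    return out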
4) Watermark embedding:
As shown in fig. 5, watermarked images of several different Thangkas are compared with the original images. The invention uses a binary image of size 32x32 as the watermark image. When setting the specific embedding rule, the common spread-spectrum forms I_w = I + α·W and I_w = I·(1 + α·W) are no longer adopted; instead, the embedding rule exploits the binary 0/1 values of the watermark. The ROI region extracted from the animal-beast Thangka is first DCT-transformed, and the binary watermark image is read. If the current watermark bit is 1, β times the maximum imperceptible distortion JND threshold calculated in step 3) plus a Q adjustment factor is added to the DCT-transformed coefficients; if the current watermark bit is 0, β times the JND threshold is subtracted and the Q adjustment factor is added. After the watermark is embedded, inverse DCT is applied to the region, which is finally recombined with the other sub-blocks to reconstruct the watermarked Thangka. The embedding rule is as follows:
For the known ROI region and corresponding JND thresholds:
  if JND(i, j) == 0:
    Marked(i, j) = Original_img(i, j)
  else if watermark bit == 1:
    Marked(i, j) = Original_img(i, j) + β·JND(i, j) + Q
  else if watermark bit == 0:
    Marked(i, j) = Original_img(i, j) - β·JND(i, j) + Q
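A Python sketch of this embedding rule, assuming a 32x32 ROI block and its JND slice obtained as above; dct2 is the helper defined in the JND sketch, and the remaining names are illustrative:

import numpy as np
from scipy.fftpack import idct

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_watermark(roi, wm_bits, jnd, beta=0.5, q=0.0):
    # Add beta*JND + Q to a DCT coefficient for watermark bit 1,
    # subtract beta*JND and add Q for bit 0, and skip where JND == 0.
    coeffs = dct2(roi.astype(np.float64))
    sign = np.where(wm_bits == 1, 1.0, -1.0)
    marked = np.where(jnd == 0, coeffs, coeffs + sign * beta * jnd + q)
    return idct2(marked)   # reconstructed (watermarked) pixel block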
5) Improving watermark robustness against compression:
The flow of the method adopted by the invention against compression attacks is shown in fig. 6. The JND threshold calculated by the visual perception model is the maximum redundancy threshold of the image. To improve watermark robustness, the embedding rule does not use this boundary threshold directly; a coefficient β is set instead: Marked(i, j) = Original_img(i, j) + β·JND(i, j) and Marked(i, j) = Original_img(i, j) - β·JND(i, j). The value range of β is (0, 1), and it takes the value 0.5 in this example, so that the watermarked image still retains a certain redundancy with which to resist compression attacks. In addition, to improve compression resistance further, the invention also provides a compression adjustment factor Q: Marked(i, j) = Original_img(i, j) + β·JND(i, j) + Q and Marked(i, j) = Original_img(i, j) - β·JND(i, j) + Q, keeping the total modification within the JND threshold. This ensures the invisibility of the watermark within that range; a certain amount θ is then repeatedly added to Q on the basis that the watermark remains invisible, increasing the watermark embedding strength and thereby improving the compression resistance of the watermark.
The SSIM value of the watermarked Thangka and the NC value of the extracted watermark image are calculated; the Q value at the intersection point where the NC value becomes no smaller than the SSIM value is the optimal solution, i.e., the optimum of compression resistance and invisibility.
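A Python sketch of this cyclic search, assuming the embed and extract helpers sketched in this description, scikit-image's SSIM, and OpenCV's JPEG codec to simulate the compression attack; the step θ, the JPEG quality, and the search bound are illustrative:

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nc(w1, w2):
    # Normalized correlation between two binary watermarks.
    a = w1.astype(np.float64).ravel()
    b = w2.astype(np.float64).ravel()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def jpeg_attack(img, quality=70):
    ok, buf = cv2.imencode('.jpg', np.clip(img, 0, 255).astype(np.uint8),
                           [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE).astype(np.float64)

def search_q(original, roi, wm_bits, jnd, beta=0.5, theta=0.5, q_max=20.0):
    # Raise Q by theta per round until the extracted-watermark NC is no
    # smaller than the watermarked-image SSIM (the intersection point).
    r, c = roi
    m = wm_bits.shape[0]
    q = 0.0
    while q <= q_max:
        marked = original.astype(np.float64).copy()
        marked[r:r+m, c:c+m] = embed_watermark(original[r:r+m, c:c+m],
                                               wm_bits, jnd, beta, q)
        attacked = jpeg_attack(marked)
        wm_out = extract_watermark(attacked[r:r+m, c:c+m], original[r:r+m, c:c+m])
        if nc(wm_bits, wm_out) >= ssim(original.astype(np.float64), marked,
                                       data_range=255):
            return q
        q += theta
    return q_max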
6) Watermark extraction:
First, the corresponding region of the watermarked Thangka is extracted according to the ROI region coordinates; DCT is then performed on this region and on the corresponding region of the original image respectively, and the two are subtracted. If the difference is greater than 0, the watermark bit at the corresponding position is 1; if the difference is less than 0, the watermark bit is 0. The final watermark extraction rule is:
W'(i, j) = 1 if D_marked(i, j) - D_original(i, j) > 0, otherwise W'(i, j) = 0
where D_marked and D_original denote the DCT coefficients of the watermarked region and the original region. Finally, inverse Arnold transformation is applied to the extracted watermark to decrypt it and obtain the extracted watermark image.
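A matching Python sketch of the extraction rule, reusing the dct2 and arnold_unscramble helpers from the earlier sketches; the names are illustrative:

import numpy as np

def extract_watermark(marked_roi, original_roi):
    # Bit is 1 where the marked DCT coefficient exceeds the original, else 0.
    diff = dct2(marked_roi.astype(np.float64)) - dct2(original_roi.astype(np.float64))
    return (diff > 0).astype(np.uint8)

# The scrambled bits are then decrypted with the inverse Arnold transform:
# watermark = arnold_unscramble(extract_watermark(marked_roi, original_roi))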
Fig. 7 shows that the digital watermark of the invention can still be extracted fairly completely under different attacks. It can be seen that the invention provides a Thangka digital watermarking method that can effectively protect the copyright of Thangka images.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of this patent and not to limit it. Although this patent has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of this patent may be modified or equivalently replaced without departing from its spirit and scope, and all such modifications are included within the scope of the claims of this patent.

Claims (6)

1. A Thangka digital watermarking method based on a visual perception model, characterized in that the method comprises the following steps:
1) Preprocessing the watermark image;
2) Selecting an ROI region of the original Thangka image;
3) Constructing a visual perception JND model; the visual perception JND model consists of three parts: a frequency sensitivity function model CSF, a luminance adaptation factor LA, and a contrast masking factor CM; the visual perception JND model calculates the maximum modification threshold of the image that is imperceptible to the human eye, the threshold is used as the watermark embedding strength of each pixel, and the human eye cannot perceive the difference between the watermarked image and the original image;
The frequency sensitivity function is calculated as:
T_{i,j} = (T_min / (C(i)·C(j))) · 10^{K·(log10 f_{i,j}^n - log10 f_min)^2}
wherein f_min denotes the minimum spatial frequency over all sub-blocks, T_min is the minimum threshold corresponding to that minimum spatial frequency, K is a steepness parameter constant, C(i), C(j) denote the DCT transform normalization factors of the i-th row and j-th column respectively, and f_{i,j}^n denotes the spatial frequency of the n-th sub-block, calculated as:
f_{i,j}^n = (1/(2M)) · sqrt((i/Wx)^2 + (j/Wy)^2)
wherein i and j denote the row and column of the (i, j)-th DCT coefficient within a sub-block, M denotes the sub-block size, Wx denotes the horizontal width of a pixel in units of visual angle, and Wy denotes the vertical height of a pixel in units of visual angle;
The luminance adaptation factor addresses a limitation of the frequency sensitivity function model itself, replacing the average luminance with the local luminance of the image so as to calculate the threshold more accurately; it is calculated as:
F_LA(k) = (f_{0,0,k} / f̄_{0,0})^{α_T}
wherein the base threshold of the k-th sub-block is the value T_{i,j} calculated above, f_{0,0,k} denotes the DCT DC coefficient DCT(0,0) of the k-th sub-block, f̄_{0,0} denotes the mean DC coefficient over all sub-blocks, and α_T is the masking intensity parameter;
The contrast masking factor is constructed as follows: regions of different contrast in the image have different redundancy and therefore exhibit different masking effects; in the contrast masking calculation, i and j denote the row and column of the DCT coefficient within a sub-block, i_m and j_m denote the corresponding row and column of the constructed masking model matrix, f_{i_m,j_m,k} denotes the DCT coefficients of the corresponding sub-block, and W_{i,j,k} is a parameter in the range (0, 1) used to control the masking effect; in the calculation of W_{i,j,k}, M denotes the size of each sub-block and γ denotes an intensity adjustment parameter; an NVF sliding window is moved over the original Thangka image, dividing the Thangka into texture, smooth, and edge regions, and the W_{i,j,k} parameter is determined dynamically according to the different NVF values of the different regions, so that different embedding strengths are set in different regions;
4) Embedding the watermark image into the original Thangka image;
5) Increasing the watermark embedding strength to improve resistance of the watermark to compression;
6) Extracting the watermark.
2. The Thangka digital watermarking method based on a visual perception model according to claim 1, characterized in that: the preprocessing of the watermark image in step 1) comprises: performing an Arnold transformation on the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
3. The Thangka digital watermarking method based on a visual perception model according to claim 1, characterized in that the specific steps of ROI region selection in step 2) comprise:
201) performing color space conversion on the original Thangka image, converting it from RGB space to YUV space;
202) extracting the Y channel and padding zeros on its right and lower sides so that the length and width (M', N') of the padded image are exactly divisible by the length and width (M, N) of the watermark image;
203) dividing the padded Thangka image into (M'/M) x (N'/N) non-overlapping sub-blocks, each of the same size as the watermark image, to facilitate subsequent embedding of the watermark;
204) calculating the content information entropy of each sub-block, E_visual = -Σ_{i=1}^{k} p_i·log2(p_i), wherein k denotes the number of pixels contained in each sub-block and p_i denotes the information probability distribution value at the i-th pixel of the sub-block;
205) performing Canny edge detection on the image padded in step 202) to obtain a line profile, and then repeating the division of step 203);
206) calculating the edge information entropy E_edge of each segmented sub-block with the same formula;
207) adding the content information entropy and edge information entropy of each sub-block: E_sum = E_visual + E_edge;
208) selecting, according to the total information entropy E_sum of each sub-block, the block with the maximum value as the ROI region in which the watermark is to be embedded.
4. The Thangka digital watermarking method based on a visual perception model according to claim 1, characterized in that: in step 4), the watermark embedding rule exploits the binary 0/1 values of the watermark: the extracted ROI region is DCT-transformed; if the current watermark bit is 1, β times the JND threshold plus a Q adjustment factor is added to the DCT-transformed coefficients; if the current watermark bit is 0, β times the JND threshold is subtracted from the DCT-transformed coefficients and the Q adjustment factor is added.
5. The Thangka digital watermarking method based on a visual perception model according to claim 4, characterized in that: the method in step 5) for increasing the watermark embedding strength to improve resistance to compression is as follows: the SSIM value of the watermarked Thangka and the NC value of the extracted watermark image are calculated; the adjustment factor Q at the intersection point where the NC value becomes no smaller than the SSIM value is the optimal solution.
6. The Thangka digital watermarking method based on a visual perception model according to claim 1, characterized in that: the watermark extraction rule in step 6) is as follows: the corresponding region of the watermarked Thangka is extracted according to the ROI region coordinates; DCT is performed on this region and on the corresponding region of the original image respectively, and the two are subtracted; if the difference is greater than 0, the watermark bit at the corresponding position is 1, and if the difference is less than 0, the watermark bit is 0.
CN202110890427.3A 2021-08-04 2021-08-04 Thangka digital watermarking method based on visual perception model Active CN113706359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110890427.3A CN113706359B (en) 2021-08-04 2021-08-04 Thangka digital watermarking method based on visual perception model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110890427.3A CN113706359B (en) 2021-08-04 2021-08-04 Thangka digital watermarking method based on visual perception model

Publications (2)

Publication Number Publication Date
CN113706359A CN113706359A (en) 2021-11-26
CN113706359B true CN113706359B (en) 2024-05-07

Family

ID=78651475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110890427.3A Active CN113706359B (en) 2021-08-04 2021-08-04 Thangka digital watermarking method based on visual perception model

Country Status (1)

Country Link
CN (1) CN113706359B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268845B (en) * 2021-12-21 2024-02-02 中国电影科学技术研究所 Real-time watermarking method of 8K ultra-high definition video based on heterogeneous operation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366336A (en) * 2013-07-12 2013-10-23 陕西理工学院 Image watermarking method based on human eye contrast ratio sensitivity visual characteristics
CN106327501A (en) * 2016-08-31 2017-01-11 西北民族大学 Quality evaluation method for thangka image with reference after repair
CN108280797A (en) * 2018-01-26 2018-07-13 江西理工大学 A kind of Arithmetic on Digital Watermarking of Image system based on Texture complication and JND model
CN110232650A (en) * 2019-06-06 2019-09-13 山东师范大学 A kind of Color digital watermarking embedding grammar, detection method and system
CN111968024A (en) * 2020-07-24 2020-11-20 南昌大学 Self-adaptive image watermarking method
CN112866820A (en) * 2020-12-31 2021-05-28 宁波大学科学技术学院 Robust HDR video watermark embedding and extracting method and system based on JND model and T-QR and storage medium


Also Published As

Publication number Publication date
CN113706359A (en) 2021-11-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant