CN113706359A - Thangka digital watermarking method based on visual perception model - Google Patents

Thangka digital watermarking method based on visual perception model

Info

Publication number
CN113706359A
Authority
CN
China
Prior art keywords
watermark
image
thangka
value
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110890427.3A
Other languages
Chinese (zh)
Other versions
CN113706359B (en)
Inventor
唐伶俐
黄天赐
解庆
刘永坚
胡桉澍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202110890427.3A
Publication of CN113706359A
Application granted
Publication of CN113706359B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20052 Discrete cosine transform [DCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a Thangka digital watermarking method based on a visual perception model, comprising the following steps: 1) preprocessing the watermark image; 2) selecting a region of interest (ROI) in the original Thangka image; 3) constructing a visual perception JND model; 4) embedding the watermark image into the Thangka image; 5) increasing the watermark embedding strength to improve the compression resistance of the watermark; 6) extracting the watermark. Instead of using one constant embedding strength, the method adaptively determines the embedding strength for different regions of different images: the visual perception JND model computes a modifiable threshold for every pixel of each image, and this threshold is used as the watermark embedding strength. To improve the watermark's robustness against compression, the invention further proposes a loop function over the two variables image SSIM and watermark NC. The method can thus adaptively determine a different embedding strength for each Thangka and effectively improves both the invisibility and the robustness of the digital watermark.

Description

Thangka digital watermarking method based on visual perception model
Technical Field
The invention relates to the technical field of image processing, and in particular to a Thangka digital watermarking method based on a visual perception model.
Background Art
With the growth of the Internet and big data, people have ever more ways to obtain digital resources, and piracy and infringement have become increasingly serious. The earliest approach to copyright protection was to add visible watermarks, such as author names or merchant logos, to digital pictures. However, this degrades the ornamental value of artistic images such as Thangka, and as watermark-removal techniques mature, visible watermarks are easily removed, so they cannot protect copyright effectively. Against this background, invisible digital watermarking methods are increasingly studied.
Digital watermarking embeds a piece of hidden information in an image by exploiting image redundancy. According to where the watermark is embedded, current methods fall into two classes: spatial-domain and frequency-domain watermarking. Spatial-domain watermarking embeds the watermark by directly modifying pixel values; it has poor robustness, the watermark is easily destroyed, and the embeddable watermark capacity is low. Frequency-domain watermarking first converts the image into sub-bands of different frequencies by a transform such as the DCT (discrete cosine transform) or DWT (discrete wavelet transform), and then embeds the watermark by modifying the sub-band coefficients. Such watermarks resist signal-processing and compression attacks better than spatial-domain ones, so more and more digital watermarking research is based on the frequency domain. The present invention studies watermark embedding in the DCT domain.
At present, frequency-domain digital watermarking methods are mostly studied on images of uniform size, such as 512×512, and they require a proportional relationship between the carrier image and the watermark image. Thangka images, however, come in irregular, non-uniform sizes, so watermarks cannot be embedded in them as in these traditional methods. In addition, most existing methods use an empirical value from prior work, or a constant obtained by trial and error, as the watermark embedding strength; a strength that works well for some images then yields poor invisibility once the image changes.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a Thangka digital watermarking method based on a visual perception model for the copyright protection of Thangka images.
To achieve the above object, the Thangka digital watermarking method based on a visual perception model designed by the present invention comprises the following steps:
1) preprocessing the watermark image;
2) selecting a region of interest (ROI) in the original Thangka image;
3) constructing a visual perception JND model;
4) embedding the watermark image into the original Thangka image;
5) increasing the watermark embedding strength to improve the compression resistance of the watermark;
6) extracting the watermark.
Further, the watermark image is preprocessed in step 1) as follows: an Arnold transform is applied to the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
Further, the specific steps of ROI region selection in step 2) comprise:
201) performing color space conversion on the original Thangka image, converting it from RGB space to YUV space;
202) extracting the Y channel and padding its right and lower sides with zeros, so that the length and width (M', N') of the padded image are exact multiples of the length and width (M, N) of the watermark image;
203) dividing the padded Thangka into $\frac{M'}{M} \times \frac{N'}{N}$ non-overlapping sub-blocks, each of size equal to the watermark image, to facilitate the subsequent watermark embedding;
204) calculating the content information entropy of each sub-block:

$$E_{visual} = -\sum_{i=1}^{K} p_i \log_2 p_i$$

where K denotes the number of pixels contained in each sub-block and $p_i$ denotes the information probability distribution value at the ith pixel of the sub-block;
205) performing Canny edge detection on the image padded in step 202) to obtain a line profile, and then repeating step 203) to partition the edge image;
206) calculating the edge information entropy of each partitioned sub-block:

$$E_{edge} = -\sum_{i=1}^{K} p_i \log_2 p_i$$
207) adding the content information entropy and the edge information entropy of each sub-block to obtain: $E_{sum} = E_{visual} + E_{edge}$;
208) sorting the sub-blocks according to their total information entropy $E_{sum}$ and selecting the block with the maximum value as the ROI region in which to embed the watermark.
Further, the visual perception JND model in step 3) consists of three parts: a frequency sensitivity function model (CSF), a luminance adaptation factor (LA) and a contrast masking factor (CM). The visual perception JND model computes the maximum modification threshold of the image that is imperceptible to the human eye, and this threshold is used as the watermark embedding strength of each pixel, so that the human eye cannot perceive the difference between the watermarked image and the original image.
Further, in step 4) the watermark embedding rule is determined from the 0/1 values of the watermark bits: the extracted ROI region is DCT-transformed; if the current watermark bit is 1, β times the JND threshold plus the adjustment factor Q is added to the corresponding DCT coefficient; if the current watermark bit is 0, β times the JND threshold is subtracted from the coefficient and the adjustment factor Q is added.
Further, the method in step 5) for increasing the watermark embedding strength to improve the compression resistance of the watermark is: computing the SSIM value of the watermarked Thangka and the NC value of the watermark image; the adjustment factor Q at the crossing point where the NC value first becomes no less than the SSIM is the optimal solution.
Further, the watermark extraction rule in step 6) is: extracting, according to the ROI region coordinates, the region of the Thangka in which the watermark is embedded; applying the DCT separately to this region and to the corresponding region of the original image and subtracting the two; where the difference is greater than 0 the watermark value at that position is 1, and where it is less than 0 the watermark value is 0.
Further, the frequency sensitivity function is calculated as:

$$T_{CSF}(i,j) = \frac{T_{min}}{C(i)\,C(j)} \cdot 10^{K\left(\log_{10} f_{i,j}^{n} - \log_{10} f_{min}\right)^{2}}$$

where $f_{min}$ is the minimum spatial frequency over all sub-blocks, $T_{min}$ is the corresponding minimum threshold at the minimum spatial frequency, K is a slope constant, C(i) and C(j) denote the DCT normalization factors of the ith row and the jth column respectively, and $f_{i,j}^{n}$ is the spatial frequency of the nth sub-block, calculated as:

$$f_{i,j}^{n} = \frac{1}{2M}\sqrt{\left(\frac{i}{W_x}\right)^{2} + \left(\frac{j}{W_y}\right)^{2}}$$

where i and j denote the row and column within the sub-block, $W_x$ denotes the horizontal width of a pixel in units of human visual angle, and $W_y$ denotes the vertical height of a pixel in units of human visual angle.
Furthermore, the luminance adaptation factor is constructed as follows: it addresses a limitation of the frequency sensitivity function model by replacing the average luminance with the local luminance of the image, so that the threshold is computed more accurately. The luminance adaptation factor is calculated as:

$$F_{LA}(k) = \left(\frac{f_{0,0,k}}{\bar{f}_{0,0}}\right)^{\alpha_T}$$

where $T_{i,j,k}$ denotes the base threshold of the kth sub-block, i.e., the CSF value computed above, which this factor scales, $f_{0,0,k}$ is the DC DCT coefficient DCT(0,0) of the kth sub-block, $\bar{f}_{0,0}$ is the mean DC coefficient, and $\alpha_T$ is the masking strength parameter.
Furthermore, the contrast masking factor is constructed as follows: regions of the image with different contrast have different redundancy, i.e., different masking effects. The contrast masking factor $F_{CM}$ is computed from the DCT coefficients of the corresponding sub-block [formula image not reproduced], where i and j denote the rows and columns of the sub-block, $i_m$, $j_m$ denote the rows and columns of the constructed masking model matrix, and $W_{i,j,k}$ is a parameter in the range (0,1) used to control the masking effect, computed as follows [formula image not reproduced]: M denotes the size of each sub-block and γ denotes an intensity adjustment parameter; an NVF function slides a window over the original Thangka image, dividing the Thangka into a texture region, a smooth region and an edge region, and $W_{i,j,k}$ is determined dynamically from the different NVF values of the different regions, so as to set different embedding strengths in different regions.
The invention provides a Thangka digital watermarking method based on a visual perception model, combining information entropy, edge detection, and a Just Noticeable Difference (JND) model in the discrete cosine transform (DCT) domain. Its beneficial effects are: the watermark embedding strength is determined adaptively for different regions of different images; rather than one constant strength, a modifiable threshold is computed dynamically for every pixel of each image by the visual perception JND model and used as the embedding strength. In addition, to improve the watermark's resistance to compression, the invention proposes a loop function over the two variables image SSIM and watermark NC. The invention can adaptively determine a different embedding strength for each Thangka and effectively improves the invisibility and robustness of the digital watermark.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a diagram of an example of watermark image preprocessing according to the present invention;
FIG. 3 is a diagram illustrating an example of edge detection in ROI area selection according to the present invention;
FIG. 4 is a diagram illustrating the entropy calculation result according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of embedding watermarks in a plurality of different Thangka embodiments;
FIG. 6 is a schematic flow chart of a method for improving watermark compression robustness according to the present invention;
FIG. 7 is a diagram of the extraction effect of multiple groups of Thangka example watermarks under different attacks.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
As shown in FIG. 1, the Thangka digital watermarking method based on a visual perception model provided by the present invention comprises the following steps:
1) preprocessing the watermark image;
2) selecting a region of interest (ROI) in the original Thangka image;
3) constructing a visual perception JND model;
4) embedding the watermark image into the original Thangka image;
5) increasing the watermark embedding strength to improve the compression resistance of the watermark;
6) extracting the watermark.
The specific implementation process of the embodiment includes:
1) Preprocessing the watermark image:
As shown in FIG. 2, the watermark image in this example is a binary image of size 32×32. An Arnold transform is applied to the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
The Arnold transform is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & a \\ b & ab+1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \bmod N$$

that is, the coordinates of each pixel after the transform are:

$$x' = (x + a y) \bmod N, \qquad y' = (b x + (ab+1) y) \bmod N$$
In this example, a and b in the above formula take the values 1 and 2 respectively, and the number of encryption iterations is 15.
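For illustration, the following is a minimal Python sketch of this scrambling step; the use of NumPy and the function name are assumptions of this sketch, not part of the patent:

import numpy as np

def arnold_scramble(img, a=1, b=2, iterations=15):
    """Scramble a square image with the generalized Arnold map.

    Each iteration moves the pixel at (x, y) to
    ((x + a*y) mod n, (b*x + (a*b + 1)*y) mod n).
    """
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + a * y) % n, (b * x + (a * b + 1) * y) % n] = out[x, y]
        out = nxt
    return out

# Example: scramble a 32x32 binary watermark with a=1, b=2, 15 iterations.
# watermark = np.random.randint(0, 2, (32, 32), dtype=np.uint8)
# scrambled = arnold_scramble(watermark)

Descrambling applies the inverse of the same unimodular matrix for the same number of iterations, which is how the watermark is decrypted after extraction.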
2) Thangka ROI area selection:
As shown in FIG. 3 and FIG. 4, in one implementation of the invention the ROI region is extracted using edge detection and entropy calculation. The original Thangka image in this example is a mythical-beast Thangka of size 899×681.
201) Color is important information in the original Thangka image and should be modified as little as possible when embedding the watermark. The Thangka is therefore converted to another color space; in this example the mythical-beast Thangka is converted from RGB space to YUV space;
202) The Y channel is extracted and its right and lower sides are padded with zeros, so that the padded length and width are exact multiples of the watermark's length and width (32, 32). In this example, the right side is padded with ⌈899/32⌉ × 32 − 899 = 29 columns of zeros, giving a final width of 928, and the lower side with ⌈681/32⌉ × 32 − 681 = 23 rows of zeros, giving a final height of 704;
203) The padded Thangka is divided into (928/32) × (704/32) = 29 × 22 non-overlapping sub-blocks, each of size equal to the watermark image (32×32), to facilitate the subsequent watermark embedding;
204) According to the formula

$$E = -\sum_{i=1}^{K} p_i \log_2 p_i$$

the content information entropy $E_1, E_2, E_3, \ldots, E_n$ of each sub-block is calculated, where K denotes the number of pixels contained in each sub-block, $p_i$ denotes the information probability distribution value at the ith pixel of the sub-block, and n denotes the total number of sub-blocks;
205) Canny edge detection is performed on the image padded in step 202) to obtain a line profile, and step 203) is repeated to partition the edge image;
206) According to the formula

$$e = -\sum_{i=1}^{K} p_i \log_2 p_i$$

the edge information entropy $e_1, e_2, e_3, \ldots, e_n$ of each partitioned sub-block is calculated;
207) The two information entropies of each sub-block are added to obtain $E_{sum}$: $E_1+e_1, E_2+e_2, \ldots, E_i+e_i, \ldots, E_n+e_n$;
208) Finally, the sub-blocks are sorted according to their total information entropy $E_{sum}$, and the block with the maximum value is selected as the ROI region in which to embed the watermark; a code sketch of steps 201)-208) follows.
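For illustration, the following is a minimal Python sketch of steps 201)-208); the use of OpenCV and NumPy, the Canny thresholds, the histogram-based entropy estimate, and the function names are assumptions of this sketch, not specified by the patent:

import cv2
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit block, estimated from its histogram."""
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_roi(bgr_img, wm_size=32):
    """Return the top-left (row, col) of the highest-entropy 32x32 block."""
    y = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YUV)[:, :, 0]    # step 201): Y channel
    h, w = y.shape
    H = int(np.ceil(h / wm_size)) * wm_size                  # step 202): pad bottom
    W = int(np.ceil(w / wm_size)) * wm_size                  # step 202): pad right
    padded = np.zeros((H, W), dtype=y.dtype)
    padded[:h, :w] = y
    edges = cv2.Canny(padded, 100, 200)                      # step 205): line profile
    best, best_pos = -1.0, (0, 0)
    for r in range(0, H, wm_size):                           # step 203): sub-blocks
        for c in range(0, W, wm_size):
            e_sum = (block_entropy(padded[r:r + wm_size, c:c + wm_size])    # E_visual
                     + block_entropy(edges[r:r + wm_size, c:c + wm_size]))  # e_edge
            if e_sum > best:                                 # steps 207)-208): E_sum
                best, best_pos = e_sum, (r, c)
    return best_pos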
3) Constructing a visual perception JND model:
The visual perception JND model computes the maximum modification threshold of an image that remains imperceptible to the human eye. Using this threshold as the per-pixel watermark embedding strength, the eye cannot perceive the difference between the watermarked image and the original, which greatly improves the invisibility of the digital watermark compared with a single fixed embedding strength. The JND model consists of three parts: a frequency sensitivity function model (CSF), a luminance adaptation factor (LA) and a contrast masking factor (CM).
301) Constructing a frequency sensitivity function model:
the frequency sensitivity function calculation method is as follows:
Figure BDA0003195783980000081
wherein the invention Tmin、fminAnd K takes the values as follows: 0.25, 3.1, 1.34C (i), C (j) represent the DCT transform normalization factors of the ith row and the jth column, respectively, fi,j nThe spatial frequency of the nth sub-block is calculated by the following method:
Figure BDA0003195783980000082
i. j represents the row and the column corresponding to the i-th row and the j-th column, Wx represents the horizontal width of the pixel with the human visual angle as a unit, and Wy represents the vertical height of the pixel with the human visual angle as a unit, wherein the value of Wx is 7.1527/256, and the value of Wy is 7.1527/256.
302) Constructing a brightness adaptive factor:
the luminance adaptation factor is a self-limiting for the frequency sensitivity function model, allowing for more accurate calculation of the threshold by replacing the average luminance with the image local luminance. The brightness adaptive factor calculation method is as follows:
Figure BDA0003195783980000083
wherein T isi,j,kRepresenting the base threshold of the Kth sub-block, i.e. the value T calculated in step 301)i,j,f0,0,kDCT DC coefficient DCT (0,0), alpha of the Kth sub-blockTIs a shielding strength parameter, and the value of the invention is 0.649.
303) Constructing the contrast masking factor:
Regions of the image with different contrast have different redundancy, i.e., different masking effects. The contrast masking factor $F_{CM}$ is computed from the DCT coefficients of the corresponding sub-block [formula image not reproduced], where i and j denote the rows and columns of the sub-block, $i_m$, $j_m$ denote the rows and columns of the constructed masking model matrix, the model parameter takes the value 1 in this example, and $W_{i,j,k}$ is a parameter in the range (0,1), used mainly to control the masking effect, computed as follows [formula image not reproduced]: M denotes the size of each sub-block, 32 in this example, and γ is an intensity adjustment parameter used mainly to adjust the masking strength, taken as 1 in this example. The NVF function slides a window over the Thangka image, dividing the Thangka into a texture region, a smooth region and an edge region; $W_{i,j,k}$ is determined dynamically from the different NVF values of the different regions, so as to set different embedding strengths in different regions.
304) Calculating the maximum imperceptible distortion threshold JND:
The final maximum imperceptible distortion threshold is calculated as:

$$T_{jnd}(n,x,y) = T_{CSF}(n,x,y) \cdot F_{LA}(n) \cdot F_{CM}(n,x,y)$$

A code sketch of the complete model follows.
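Because the formula images for the three factors are not reproduced in this text, the following Python sketch assembles the model using the classical Watson-style DCT JND formulation together with the parameter values quoted above (T_min = 0.25, f_min = 3.1, K = 1.34, alpha_T = 0.649, M = 32); the masking exponent w = 0.7, the clamping of low frequencies, and the function names are assumptions of this sketch, which approximates rather than reproduces the patented model:

import numpy as np

M = 32                           # sub-block size (32x32 in this example)
T_MIN, F_MIN, K = 0.25, 3.1, 1.34
ALPHA_T = 0.649                  # luminance masking strength
WX = WY = 7.1527 / 256           # pixel width/height in degrees of visual angle

def csf_threshold():
    """Base per-coefficient threshold from the frequency sensitivity model."""
    def c(k):                    # DCT normalization factor
        return np.sqrt(1.0 / M) if k == 0 else np.sqrt(2.0 / M)
    t = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            f = 0.5 * np.hypot(i / (M * WX), j / (M * WY))   # spatial frequency
            f = max(f, F_MIN)    # clamp so the log term is defined at DC
            t[i, j] = (T_MIN / (c(i) * c(j))) * \
                      10 ** (K * (np.log10(f) - np.log10(F_MIN)) ** 2)
    return t

def jnd_block(dct_block, mean_dc, w=0.7):
    """JND = CSF base threshold x luminance adaptation x contrast masking."""
    base = csf_threshold()
    f_la = (max(abs(dct_block[0, 0]), 1e-6) / mean_dc) ** ALPHA_T   # F_LA
    t = base * f_la
    f_cm = np.maximum(1.0, (np.abs(dct_block) / t) ** w)            # F_CM
    return t * f_cm

Here mean_dc would be the mean DC coefficient over all sub-blocks of the image.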
4) watermark embedding:
As shown in FIG. 5, watermarked images for several different Thangkas are compared with their originals. The invention uses a binary image of size 32×32 as the watermark. When setting the specific embedding rule, the commonly used spread-spectrum rules $I_w = I + \alpha W$ and $I_w = I(1 + \alpha W)$ are not adopted; instead, the rule is determined from the 0/1 values of the watermark bits. First, the DCT is applied to the ROI region extracted from the mythical-beast Thangka, and the binary watermark image is read. If the current watermark bit is 1, β times the maximum imperceptible distortion (JND) threshold computed in step 3), plus the adjustment factor Q, is added to the corresponding DCT coefficient. If the current watermark bit is 0, β times the JND threshold is subtracted and the adjustment factor Q is added. After embedding, the inverse DCT is applied to the region, which is finally recombined with the other sub-blocks to reconstruct the watermarked Thangka. The embedding rule is:
For the known ROI region and the corresponding JND threshold:
    if JND(i, j) == 0:
        Marked(i, j) = Original_img(i, j)
    else:
        if watermark bit == 1:
            Marked(i, j) = Original_img(i, j) + β·JND(i, j) + Q
        if watermark bit == 0:
            Marked(i, j) = Original_img(i, j) − β·JND(i, j) + Q
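Translated into code, the rule above can be sketched as follows; SciPy's dctn/idctn are assumed for the 2-D transform, and the function signature is illustrative:

import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(roi, wm_bits, jnd, beta=0.5, q=0.0):
    """Embed one scrambled watermark bit per DCT coefficient of the ROI.

    roi     : 32x32 luminance block (float)
    wm_bits : 32x32 array of 0/1 scrambled watermark bits
    jnd     : 32x32 per-coefficient JND thresholds
    """
    coeffs = dctn(roi, norm='ortho')
    sign = np.where(wm_bits == 1, 1.0, -1.0)   # +beta*JND for bit 1, -beta*JND for bit 0
    marked = np.where(jnd == 0, coeffs, coeffs + sign * beta * jnd + q)
    return idctn(marked, norm='ortho')         # back to the pixel domain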
5) Improving the watermark compression robustness:
FIG. 6 shows the flow of the method the invention adopts against compression attacks. The JND threshold computed by the visual perception model is the maximum redundancy threshold of the image. To improve watermark robustness, the embedding rule does not use this boundary threshold directly; a coefficient β is set: Marked(i, j) = Original_img(i, j) ± β·JND(i, j). The range of β is (0,1), and in this example β = 0.5, so the watermarked image retains some redundancy with which to resist compression attacks. To further improve compression resistance, the invention also introduces the compression adjustment factor Q: Marked(i, j) = Original_img(i, j) ± β·JND(i, j) + Q, keeping the total modification within the JND threshold. Invisibility of the watermark is thus guaranteed within this range, and on that basis a fixed amount θ is repeatedly added to Q to increase the watermark embedding strength and hence the compression resistance of the watermark.
The SSIM value of the watermarked Thangka and the NC value of the extracted watermark image are computed; the Q value at the crossing point where NC first becomes no less than SSIM is the optimal solution, i.e., the best trade-off between compression resistance and invisibility (a sketch of this search follows).
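The search for Q can be sketched as follows; scikit-image's structural_similarity is assumed for SSIM, the callables embed_fn and extract_fn (the latter including the compression attack) are hypothetical, and the step theta and the cap on Q are illustrative:

import numpy as np
from skimage.metrics import structural_similarity

def nc(w1, w2):
    """Normalized correlation between two binary watermarks."""
    a = w1.astype(np.float64).ravel()
    b = w2.astype(np.float64).ravel()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def tune_q(original, embed_fn, extract_fn, wm, theta=0.05, q_max=10.0):
    """Raise Q by theta until the extracted watermark's NC first reaches the
    image SSIM; that crossing point balances robustness and invisibility."""
    q = 0.0
    while q <= q_max:
        marked = embed_fn(original, q)             # embed with current Q
        ssim = structural_similarity(
            original, marked, data_range=float(marked.max() - marked.min()))
        if nc(wm, extract_fn(marked)) >= ssim:     # NC no less than SSIM
            return q
        q += theta
    return q_max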
6) Watermark extraction:
First, the region of the Thangka in which the watermark was embedded is extracted according to the ROI region coordinates. The DCT is then applied separately to this region and to the corresponding region of the original image, and the two are subtracted; where the difference is greater than 0 the watermark value at that position is 1, and where it is less than 0 the watermark value is 0. The final extraction rule is:

$$w'(i,j) = \begin{cases} 1, & D_m(i,j) - D_o(i,j) > 0 \\ 0, & D_m(i,j) - D_o(i,j) < 0 \end{cases}$$

where $D_m(i,j)$ and $D_o(i,j)$ denote the DCT coefficients of the watermarked and original ROI regions respectively.
Finally, the inverse Arnold transform is applied to the extracted watermark to decrypt it, yielding the extracted watermark image; a code sketch follows.
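A minimal Python sketch of the extraction step, mirroring the rule above; it assumes the same SciPy DCT as in the embedding sketch, and the loop applies the inverse Arnold map to undo the scrambling of step 1) (function names are illustrative):

import numpy as np
from scipy.fft import dctn

def extract_watermark(marked_roi, original_roi, a=1, b=2, iterations=15):
    """Recover watermark bits by comparing DCT coefficients, then descramble."""
    diff = dctn(marked_roi, norm='ortho') - dctn(original_roi, norm='ortho')
    bits = (diff > 0).astype(np.uint8)         # difference > 0 -> bit 1, else 0
    n = bits.shape[0]
    out = bits.copy()
    for _ in range(iterations):                # one inverse Arnold step per iteration
        prev = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                prev[x, y] = out[(x + a * y) % n, (b * x + (a * b + 1) * y) % n]
        out = prev
    return out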
FIG. 7 illustrates that the digital watermark of the invention can still be extracted largely intact under different attacks. The invention thus provides a Thangka digital watermarking method that can effectively protect the copyright of Thangka images.
Finally, it should be noted that the above detailed description only illustrates the technical solution of this patent and does not limit it. Although the patent has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution may be modified or replaced by equivalents without departing from its spirit and scope, and such modifications are covered by the claims of this patent.

Claims (10)

1. A Thangka digital watermarking method based on a visual perception model, characterized in that the method comprises the following steps:
1) preprocessing the watermark image;
2) selecting a region of interest (ROI) in the original Thangka image;
3) constructing a visual perception JND model;
4) embedding the watermark image into the original Thangka image;
5) increasing the watermark embedding strength to improve the compression resistance of the watermark;
6) extracting the watermark.
2. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein the watermark image is preprocessed in step 1) as follows: an Arnold transform is applied to the binary watermark image, scrambling the position of every pixel of the watermark image matrix and thereby encrypting the watermark image.
3. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein the specific steps of ROI region selection in step 2) comprise:
201) performing color space conversion on the original Thangka image, converting it from RGB space to YUV space;
202) extracting the Y channel and padding its right and lower sides with zeros, so that the length and width (M', N') of the padded image are exact multiples of the length and width (M, N) of the watermark image;
203) dividing the padded Thangka into $\frac{M'}{M} \times \frac{N'}{N}$ non-overlapping sub-blocks, each of size equal to the watermark image, to facilitate the subsequent watermark embedding;
204) calculating the content information entropy of each sub-block:

$$E_{visual} = -\sum_{i=1}^{K} p_i \log_2 p_i$$

where K denotes the number of pixels contained in each sub-block and $p_i$ denotes the information probability distribution value at the ith pixel of the sub-block;
205) performing Canny edge detection on the image padded in step 202) to obtain a line profile, and then repeating step 203) to partition the edge image;
206) calculating the edge information entropy of each partitioned sub-block:

$$E_{edge} = -\sum_{i=1}^{K} p_i \log_2 p_i$$
207) adding the content information entropy and the edge information entropy of each sub-block to obtain: $E_{sum} = E_{visual} + E_{edge}$;
208) sorting the sub-blocks according to their total information entropy $E_{sum}$ and selecting the block with the maximum value as the ROI region in which to embed the watermark.
4. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein the visual perception JND model in step 3) consists of three parts: a frequency sensitivity function model (CSF), a luminance adaptation factor (LA) and a contrast masking factor (CM); the visual perception JND model computes the maximum modification threshold of the image that is imperceptible to the human eye, and this threshold is used as the watermark embedding strength of each pixel, so that the human eye cannot perceive the difference between the watermarked image and the original image.
5. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein in step 4) the watermark embedding rule is determined from the 0/1 values of the watermark bits: the extracted ROI region is DCT-transformed; if the current watermark bit is 1, β times the JND threshold plus the adjustment factor Q is added to the corresponding DCT coefficient; if the current watermark bit is 0, β times the JND threshold is subtracted from the coefficient and the adjustment factor Q is added.
6. The Thangka digital watermarking method based on the visual perception model as claimed in claim 5, wherein the method in step 5) for increasing the watermark embedding strength to improve the compression resistance of the watermark is: computing the SSIM value of the watermarked Thangka and the NC value of the watermark image; the adjustment factor Q at the crossing point where the NC value first becomes no less than the SSIM is the optimal solution.
7. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein the watermark extraction rule in step 6) is: extracting, according to the ROI region coordinates, the region of the Thangka in which the watermark is embedded; applying the DCT separately to this region and to the corresponding region of the original image and subtracting the two; where the difference is greater than 0 the watermark value at that position is 1, and where it is less than 0 the watermark value is 0.
8. The Thangka digital watermarking method based on the visual perception model as claimed in claim 4, wherein the frequency sensitivity function is calculated as:

$$T_{CSF}(i,j) = \frac{T_{min}}{C(i)\,C(j)} \cdot 10^{K\left(\log_{10} f_{i,j}^{n} - \log_{10} f_{min}\right)^{2}}$$

where $f_{min}$ is the minimum spatial frequency over all sub-blocks, $T_{min}$ is the corresponding minimum threshold at the minimum spatial frequency, K is a slope constant, C(i) and C(j) denote the DCT normalization factors of the ith row and the jth column respectively, and $f_{i,j}^{n}$ is the spatial frequency of the nth sub-block, calculated as:

$$f_{i,j}^{n} = \frac{1}{2M}\sqrt{\left(\frac{i}{W_x}\right)^{2} + \left(\frac{j}{W_y}\right)^{2}}$$

where i and j denote the row and column within the sub-block, $W_x$ denotes the horizontal width of a pixel in units of human visual angle, and $W_y$ denotes the vertical height of a pixel in units of human visual angle.
9. The Thangka digital watermarking method based on the visual perception model as claimed in claim 4, wherein the luminance adaptation factor is constructed as follows: to address a limitation of the frequency sensitivity function model, the average luminance is replaced by the local luminance of the image so that the threshold is computed more accurately; the luminance adaptation factor is calculated as:

$$F_{LA}(k) = \left(\frac{f_{0,0,k}}{\bar{f}_{0,0}}\right)^{\alpha_T}$$

where $T_{i,j,k}$ denotes the base threshold of the kth sub-block, i.e., the CSF value computed above, which this factor scales, $f_{0,0,k}$ is the DC DCT coefficient DCT(0,0) of the kth sub-block, $\bar{f}_{0,0}$ is the mean DC coefficient, and $\alpha_T$ is the masking strength parameter.
10. The Thangka digital watermarking method based on the visual perception model as claimed in claim 1, wherein the contrast masking factor is constructed as follows: regions of the image with different contrast have different redundancy, i.e., different masking effects; the contrast masking factor $F_{CM}$ is computed from the DCT coefficients of the corresponding sub-block [formula image not reproduced], where i and j denote the rows and columns of the sub-block, $i_m$, $j_m$ denote the rows and columns of the constructed masking model matrix, and $W_{i,j,k}$ is a parameter in the range (0,1) used to control the masking effect, computed as follows [formula image not reproduced]: M denotes the size of each sub-block and γ denotes an intensity adjustment parameter; an NVF function slides a window over the original Thangka image, dividing the Thangka into a texture region, a smooth region and an edge region, and $W_{i,j,k}$ is determined dynamically from the different NVF values of the different regions, so as to set different embedding strengths in different regions.
CN202110890427.3A 2021-08-04 2021-08-04 Thangka digital watermarking method based on visual perception model Active CN113706359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110890427.3A CN113706359B (en) Thangka digital watermarking method based on visual perception model

Publications (2)

Publication Number Publication Date
CN113706359A (en) 2021-11-26
CN113706359B CN113706359B (en) 2024-05-07

Family

ID=78651475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110890427.3A Active CN113706359B (en) Thangka digital watermarking method based on visual perception model

Country Status (1)

Country Link
CN (1) CN113706359B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366336A (en) * 2013-07-12 2013-10-23 陕西理工学院 Image watermarking method based on human eye contrast ratio sensitivity visual characteristics
CN106327501A (en) * 2016-08-31 2017-01-11 西北民族大学 Quality evaluation method for thangka image with reference after repair
CN108280797A (en) * 2018-01-26 2018-07-13 江西理工大学 A kind of Arithmetic on Digital Watermarking of Image system based on Texture complication and JND model
CN110232650A (en) * 2019-06-06 2019-09-13 山东师范大学 A kind of Color digital watermarking embedding grammar, detection method and system
CN111968024A (en) * 2020-07-24 2020-11-20 南昌大学 Self-adaptive image watermarking method
CN112866820A (en) * 2020-12-31 2021-05-28 宁波大学科学技术学院 Robust HDR video watermark embedding and extracting method and system based on JND model and T-QR and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268845A (en) * 2021-12-21 2022-04-01 中国电影科学技术研究所 Real-time watermark adding method for 8K ultra-high-definition video based on heterogeneous operation
CN114268845B (en) * 2021-12-21 2024-02-02 中国电影科学技术研究所 Real-time watermarking method of 8K ultra-high definition video based on heterogeneous operation

Also Published As

Publication number Publication date
CN113706359B (en) 2024-05-07

Similar Documents

Publication Publication Date Title
Moosazadeh et al. A new DCT-based robust image watermarking method using teaching-learning-based optimization
Pereira et al. Optimal transform domain watermark embedding via linear programming
Hu et al. Wavelet domain adaptive visible watermarking
Divecha et al. Implementation and performance analysis of DCT-DWT-SVD based watermarking algorithms for color images
CN102238388B (en) Self-adaptive robust video watermarking method based on AVS (Audio Video Standard)
CN1656501A (en) Human visual model for data hiding
CN105657431B (en) A kind of watermarking algorithm based on video frame DCT domain
Malonia et al. Digital image watermarking using discrete wavelet transform and arithmetic progression technique
Yesilyurt et al. A new DCT based watermarking method using luminance component
CN112700363A (en) Self-adaptive visual watermark embedding method and device based on region selection
CN111028850A (en) Audio watermark embedding method and audio watermark extracting method
CN113706359B (en) Thangka digital watermarking method based on visual perception model
CN111968024A (en) Self-adaptive image watermarking method
KR20040098770A (en) Image Watermarking Method Using Human Visual System
Chen et al. Robust spatial LSB watermarking of color images against JPEG compression
Wajid et al. Robust and imperceptible image watermarking using full counter propagation neural networks
Maheswari et al. Image Steganography using Hybrid Edge Detector and Ridgelet Transform.
Javadi et al. Adjustable contrast enhancement using fast piecewise linear histogram equalization
CN114205644A (en) Spatial domain robust video watermark embedding and extracting method based on intra-frame difference
Jadav Comparison of LSB and Subband DCT Technique for Image Watermarking
Hamid et al. A simple image-adaptive watermarking algorithm with blind extraction
Bedi et al. Robust watermarking of image in the transform domain using edge detection
Lee et al. Adaptive digital image watermarking using variable size of blocks in frequency domain
Jiao et al. Framelet image watermarking considering dynamic visual masking
CN103559677A (en) Self-adaptive image watermark embedding method based on wavelet transformation and visual characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant