CN103559677A - Self-adaptive image watermark embedding method based on wavelet transformation and visual characteristics - Google Patents

Self-adaptive image watermark embedding method based on wavelet transformation and visual characteristics Download PDF

Info

Publication number
CN103559677A
Authority
CN
China
Prior art keywords
watermark
wavelet coefficient
pixel
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310519922.9A
Other languages
Chinese (zh)
Other versions
CN103559677B (en)
Inventor
焦润海
马思达
李博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University
Priority to CN201310519922.9A
Publication of CN103559677A
Application granted
Publication of CN103559677B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a self-adaptive image watermark embedding method based on wavelet transformation and visual characteristics, in the technical field of digital image information security. The method includes the steps of: converting an image to grayscale; calculating a comprehensive visual masking value for each pixel in the image; performing a wavelet transform on the image to obtain the low-frequency component wavelet coefficient, high-frequency horizontal component wavelet coefficient, high-frequency vertical component wavelet coefficient and high-frequency diagonal component wavelet coefficient of each pixel; calculating the visual masking values of the low-frequency component wavelet coefficients and of the high-frequency diagonal component wavelet coefficients; dividing the watermark sequence evenly into two watermark sub-sequences; determining the positions at which the watermark is embedded into the low-frequency component wavelet coefficients and embedding one watermark sub-sequence there; determining the positions at which the watermark is embedded into the high-frequency diagonal component wavelet coefficients and embedding the other watermark sub-sequence there; and finally performing an inverse wavelet transform to obtain the watermarked image. The method improves both the invisibility and the robustness of the watermark.

Description

Adaptive image watermark embedding method based on wavelet transform and visual characteristics
Technical field
The invention belongs to the technical field of digital image information security, and in particular relates to an adaptive image watermark embedding method based on wavelet transform and visual characteristics.
Background technology
In recent years, the emergence of digital media has made it easier for people to process, share and transmit information. However, with the rapid development and wide application of digital media technology, copyright protection of digital media has attracted more and more attention, and digital watermarking technology has arisen in response. According to the carrier into which the watermark is embedded, digital watermarks are divided into image watermarks, audio watermarks and video watermarks; among these, image watermarking receives the most attention from researchers because of its broad application prospects.
Digital image watermarking embeds watermark information (usually a logo image) into an original image in a transparent manner, generating a watermarked image from which the watermark can be extracted at any time to verify copyright. The watermark should have a certain robustness, so that it can still be extracted from the watermarked image after the image has undergone attacks of a certain degree (such as compression, filtering or cropping). Robustness therefore calls for embedding more watermark information in the image to improve resistance to attacks. The watermark should also be transparent, which calls for embedding as little watermark information as possible so as to reduce the impact on the quality of the original image. Balancing transparency and robustness is therefore an important research topic in digital image watermarking.
At present, digital image watermarking can be divided into spatial-domain and transform-domain watermark embedding. Transform-domain embedding, which uses time-frequency analysis, offers better transparency and robustness and has become one of the research hotspots in digital image watermarking. The difference between the watermarked image and the original image (image distortion) is usually measured by the peak signal-to-noise ratio (PSNR). A higher PSNR indicates a smaller difference between the watermarked and original images, meaning that the embedding process changes the pixel values of the original image less and the watermark is less visible. However, experiments show that the quality of a watermarked image cannot be measured by PSNR alone; it also depends on subjective visual perception. Therefore, making full use of the characteristics of the frequency domain, exploiting the properties of the human visual system, and embedding the watermark in regions to which the human eye is insensitive all help to improve the invisibility of the watermark.
Summary of the invention
The object of the invention is to provide an adaptive image watermark embedding method based on wavelet transform and visual characteristics, so as to overcome the deficiencies of the prior art.
To achieve this object, the technical solution proposed by the present invention is an adaptive image watermark embedding method based on wavelet transform and visual characteristics, characterized in that the method comprises:
Step 1: convert the image to grayscale;
Step 2: calculate the comprehensive visual masking value of each pixel in the image;
Step 3: perform a wavelet transform on the image to obtain the low-frequency component wavelet coefficient, high-frequency horizontal component wavelet coefficient, high-frequency vertical component wavelet coefficient and high-frequency diagonal component wavelet coefficient of each pixel in the image;
Step 4: use the comprehensive visual masking values to calculate the masking effect value of each low-frequency component wavelet coefficient and of each high-frequency diagonal component wavelet coefficient;
Step 5: divide the watermark sequence evenly into 2 watermark sub-sequences;
Step 6: determine the positions at which the watermark is embedded into the low-frequency component wavelet coefficients and embed one watermark sub-sequence; determine the positions at which the watermark is embedded into the high-frequency diagonal component wavelet coefficients and embed the other watermark sub-sequence;
Step 7: perform an inverse wavelet transform on the watermarked low-frequency component wavelet coefficients, the high-frequency horizontal component wavelet coefficients, the high-frequency vertical component wavelet coefficients and the watermarked high-frequency diagonal component wavelet coefficients to obtain the watermarked image.
The comprehensive visual masking value of each pixel in the image is calculated using the formula M_F(i, j) = MAX(M_L(i, j), MIN(M_E(i, j), M_T(i, j)));
where M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j of the image;
M_L(i, j) is the luminance masking value of the pixel in row i and column j;
M_E(i, j) is the texture masking value of the pixel in row i and column j;
M_T(i, j) is the edge masking value of the pixel in row i and column j.
The masking effect value of a low-frequency component wavelet coefficient, or of a high-frequency diagonal component wavelet coefficient, is calculated from the comprehensive visual masking values using the formula

M(i, j) = [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) × M_F(i, j) ] / [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) ];

where M(i, j) is the masking effect value of the low-frequency (or high-frequency diagonal) component wavelet coefficient associated with the pixel in row i and column j;
M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j;
F(i, j) is the weight of the comprehensive visual masking value of the pixel in row i and column j, with F(i, j) = 1 if p(i, j) ≥ x̄ and F(i, j) = (x̄ / p(i, j))³ if p(i, j) < x̄, where p(i, j) is the grayscale value of the pixel in row i and column j and x̄ is the mean grayscale value of the 5x5 pixel block centered on that pixel;
n is the number of pixel rows in the image;
m is the number of pixel columns in the image.
Said step 6 specifically comprises:
Step 101: sort the low-frequency component wavelet coefficients of the pixels by the magnitude of their masking effect values to obtain a first sequence;
sort the high-frequency diagonal component wavelet coefficients of the pixels by the magnitude of their masking effect values to obtain a second sequence;
Step 102: select the first N low-frequency component wavelet coefficients of the first sequence to embed one watermark sub-sequence, and select the first N high-frequency diagonal component wavelet coefficients of the second sequence to embed the other watermark sub-sequence;
where N is the number of watermark bits in a watermark sub-sequence.
Embedding one watermark sub-sequence into the first N low-frequency component wavelet coefficients of the first sequence, or the other watermark sub-sequence into the first N high-frequency diagonal component wavelet coefficients of the second sequence, uses the formula: C_k'(i, j) = C_k(i, j) + watermark(k) × M(i, j) × embedpower;
where C_k(i, j) is the k-th low-frequency (or high-frequency diagonal) component wavelet coefficient in the first (or second) sequence;
i and j are respectively the abscissa and ordinate of the pixel corresponding to the k-th coefficient;
C_k'(i, j) is the value of the k-th coefficient after watermark embedding;
watermark(k) is the k-th watermark value in the watermark sub-sequence;
M(i, j) is the masking effect value of the coefficient C_k(i, j);
embedpower is a preset proportionality coefficient.
The present invention makes effective use of the wavelet transform and the characteristics of human vision: watermark information is embedded in regions to which the human eye is insensitive, improving the invisibility of the watermark, while as much watermark information as possible is embedded according to visual sensitivity to improve its robustness.
Brief description of the drawings
Fig. 1 is a flow chart of the adaptive image watermark embedding method based on wavelet transform and visual characteristics;
Fig. 2 shows the comprehensive visual masking result for the standard Lena image;
Fig. 3 is a schematic diagram of the image watermark embedding process provided by the invention;
Fig. 4 shows the watermarks extracted after attacks on images watermarked with the method provided by the invention; (a) is the result for the standard Boat image, (b) for the standard Lena image, and (c) for the standard Peppers image.
Embodiment
The preferred embodiments are described in detail below with reference to the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.
Fig. 1 is a flow chart of the adaptive image watermark embedding method based on wavelet transform and visual characteristics. As shown in Fig. 1, the method provided by the invention comprises:
Step 1: convert the image to grayscale.
Grayscale processing converts a color image into a grayscale image, reducing the values of each pixel on the three color components (R, G, B) to a single value. There are many grayscale conversion methods: for example, the maximum-value method takes the maximum of a pixel's three color components as its gray value; the mean-value method takes the mean of the three color components; and the weighted-average method multiplies the three color components by weights, sums them and averages (see the sketch below). Grayscale conversion is prior art, and any of these methods may be chosen.
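As an illustrative sketch only, the three grayscale conversions described above can be written in Python with NumPy as follows; the luminance weights 0.299/0.587/0.114 used in the weighted-average variant are a common convention and an assumption, since the specification does not fix particular weights.

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an HxWx3 RGB image to a single-channel grayscale image."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "max":       # maximum-value method
        return np.max(rgb, axis=-1)
    if method == "mean":      # mean-value method
        return (r + g + b) / 3.0
    # weighted-average method (weights are an assumption, not from the patent)
    return 0.299 * r + 0.587 * g + 0.114 * b
```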
Step 2: calculate the comprehensive visual masking value of each pixel in the image.
The comprehensive visual masking value of each pixel is calculated using the formula:
M_F(i, j) = MAX(M_L(i, j), MIN(M_E(i, j), M_T(i, j)))   (1)
In formula (1), M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j of the image, M_L(i, j) is its luminance masking value, M_E(i, j) is its texture masking value, and M_T(i, j) is its edge masking value.
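Formula (1) combines the three per-pixel masking maps element-wise; a minimal NumPy sketch (the array names are illustrative):

```python
import numpy as np

def comprehensive_mask(M_L: np.ndarray, M_E: np.ndarray, M_T: np.ndarray) -> np.ndarray:
    """Formula (1): M_F = MAX(M_L, MIN(M_E, M_T)), evaluated per pixel."""
    return np.maximum(M_L, np.minimum(M_E, M_T))
```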
Since the calculation of the luminance, texture and edge masking values of image pixels is common practice in this field, the present invention only describes it briefly.
First, the luminance masking value of a pixel is computed as follows:
Step 101: take the pixel whose luminance masking value is to be calculated as the center point and select the 5x5 pixel block around it. When the center point lies on the image border, the 5x5 block is completed by padding; pixels that do not exist in the original image are given a grayscale value of 0.
Step 102: calculate the weighted average background luminance of the block as:

L(i, j) = [ Σ_{1≤i≤5, 1≤j≤5} ω(i, j) × p(i, j) ] / [ Σ_{1≤i≤5, 1≤j≤5} ω(i, j) ]   (2)

In formula (2), p(i, j) is the grayscale value of the pixel in row i and column j of the block, and ω(i, j) is its weight. In this embodiment the weight matrix is

W = [ω(i, j)] =
  1 1 1 1 1
  1 2 2 2 1
  1 2 0 2 1
  1 2 2 2 1
  1 1 1 1 1
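A sketch of formula (2) applied at every pixel, assuming the zero padding at the image border described in step 101:

```python
import numpy as np

W = np.array([[1, 1, 1, 1, 1],
              [1, 2, 2, 2, 1],
              [1, 2, 0, 2, 1],
              [1, 2, 2, 2, 1],
              [1, 1, 1, 1, 1]], dtype=np.float64)

def background_luminance(gray: np.ndarray) -> np.ndarray:
    """Formula (2): weighted mean background luminance of the 5x5 block
    around each pixel, with zero padding at the image border."""
    padded = np.pad(gray.astype(np.float64), 2, mode="constant", constant_values=0)
    h, w = gray.shape
    bg = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + 5, j:j + 5]
            bg[i, j] = np.sum(W * block) / np.sum(W)
    return bg
```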
Step 103: calculate the maximum of the weighted averages of the block under the four standard weight matrices defined by Chou.
The weighted average of the block under each of Chou's standard weight matrices is computed as:

value = 1/16 × Σ_{1≤i≤5, 1≤j≤5} G(i, j) × p(i, j)   (3)

In formula (3), G(i, j) is the element in row i and column j of a standard weight matrix defined by Chou. The four standard weight matrices used in this embodiment are:

G_A =
   0  0  0  0  0
   1  3  8  3  1
   0  0  0  0  0
  -1 -3 -8 -3 -1
   0  0  0  0  0

G_B =
   0  0  1  0  0
   0  8  3  0  0
   1  3  0 -3 -1
   0  0 -3 -8  0
   0  0 -1  0  0

G_C =
   0  0  1  0  0
   0  0  3  8  0
  -1 -3  0  3  1
   0 -8 -3  0  0
   0  0 -1  0  0

G_D =
   0  1  0 -1  0
   0  3  0 -3  0
   0  8  0 -8  0
   0  3  0 -3  0
   0  1  0 -1  0

After the weighted average of the block under each of Chou's standard weight matrices has been computed, the maximum of the four values is selected.
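A sketch of step 103: formula (3) is evaluated with each of Chou's four operators and the maximum response is kept as mg. Taking the absolute value of each directional response is an assumption; the specification only says the maximum is selected.

```python
import numpy as np

G_OPERATORS = [np.array(g, dtype=np.float64) for g in (
    [[ 0,  0,  0,  0,  0], [ 1,  3,  8,  3,  1], [ 0,  0,  0,  0,  0],
     [-1, -3, -8, -3, -1], [ 0,  0,  0,  0,  0]],                      # G_A
    [[ 0,  0,  1,  0,  0], [ 0,  8,  3,  0,  0], [ 1,  3,  0, -3, -1],
     [ 0,  0, -3, -8,  0], [ 0,  0, -1,  0,  0]],                      # G_B
    [[ 0,  0,  1,  0,  0], [ 0,  0,  3,  8,  0], [-1, -3,  0,  3,  1],
     [ 0, -8, -3,  0,  0], [ 0,  0, -1,  0,  0]],                      # G_C
    [[ 0,  1,  0, -1,  0], [ 0,  3,  0, -3,  0], [ 0,  8,  0, -8,  0],
     [ 0,  3,  0, -3,  0], [ 0,  1,  0, -1,  0]],                      # G_D
)]

def max_gradient(gray: np.ndarray) -> np.ndarray:
    """Formula (3): value = 1/16 * sum(G * block); mg is the maximum over
    the four directional operators (absolute response assumed)."""
    padded = np.pad(gray.astype(np.float64), 2, mode="constant", constant_values=0)
    h, w = gray.shape
    mg = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + 5, j:j + 5]
            mg[i, j] = max(abs(np.sum(G * block)) / 16.0 for G in G_OPERATORS)
    return mg
```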
Step 104: calculate the luminance masking value M_L(i, j) of the pixel as:

M_L(i, j) = max{ mg × (0.0001 × bg + 0.115) - 0.01 × bg, 17 × (1 - bg/127) + 3 },   if bg ≤ 127
M_L(i, j) = max{ mg × (0.0001 × bg + 0.115) - 0.01 × bg, 3/126 × (bg - 127) + 3 },  if bg > 127   (4)

In formula (4), bg is the weighted average background luminance L(i, j) of the 5x5 block centered on the pixel in row i and column j, and mg is the maximum of the weighted averages of that block under the four standard weight matrices defined by Chou.
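Combining the two previous sketches, formula (4) maps bg and mg element-wise to the luminance masking value:

```python
import numpy as np

def luminance_mask(bg: np.ndarray, mg: np.ndarray) -> np.ndarray:
    """Formula (4): luminance masking value M_L from the background
    luminance bg and the maximum weighted gradient mg."""
    first = mg * (0.0001 * bg + 0.115) - 0.01 * bg
    low = 17.0 * (1.0 - bg / 127.0) + 3.0         # branch for bg <= 127
    high = 3.0 / 126.0 * (bg - 127.0) + 3.0       # branch for bg > 127
    return np.maximum(first, np.where(bg <= 127, low, high))
```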
Next, the texture masking value of a pixel is computed as follows:
Step 201: take the pixel whose texture masking value is to be calculated as the center point and choose the 3x3 pixel block around it. When the center point lies on the image border, the 3x3 block is completed by padding, and pixels that do not exist in the original image are given a grayscale value of 0.
Step 202: calculate the texture masking value according to the formula given in the original specification (reproduced there as an image), where x_i is the grayscale value of the center pixel and x̄ is the mean grayscale value of the pixels in the block (the sum of their grayscale values divided by the number of pixels).
Finally, the edge masking value of a pixel is computed as follows:
Step 301: apply a Laplacian filter to the original image to obtain an edge feature image.
Step 302: apply the Canny operator to the edge feature image for feature extraction.
Step 303: apply a dilation operation to the feature-extracted edge image.
Step 304: multiply the edge feature image and the dilated image pixel by pixel to obtain the edge masking value M_T(i, j) of each pixel.
Steps 301-304 can be implemented with Matlab software.
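The specification notes that steps 301-304 can be implemented in Matlab; an equivalent sketch in Python with OpenCV is given below. The Canny thresholds and the 3x3 dilation kernel are assumptions, since no concrete parameter values are stated.

```python
import cv2
import numpy as np

def edge_mask(gray: np.ndarray) -> np.ndarray:
    """Steps 301-304: Laplacian edge features, Canny extraction, dilation,
    then a pixel-wise product to obtain the edge masking value M_T."""
    gray_u8 = gray.astype(np.uint8)
    edge_feat = cv2.Laplacian(gray_u8, cv2.CV_64F)           # step 301
    canny = cv2.Canny(gray_u8, 100, 200)                     # step 302 (thresholds assumed)
    dilated = cv2.dilate(canny, np.ones((3, 3), np.uint8))   # step 303 (kernel assumed)
    return edge_feat * (dilated > 0)                         # step 304: dilated map used as a 0/1 mask
```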
The comprehensive visual masking value represents the ability of each pixel in the image to mask distortion, and also represents the just noticeable difference (JND) of each pixel. Using the comprehensive masking values of the image, one can not only compare the masking abilities of pixels to find watermark embedding points, but also constrain the embedding strength of the watermark. Fig. 2 shows the comprehensive visual masking result for the standard Lena image, where brighter points correspond to larger masking values. It can be seen that smooth regions such as the face and the cap have low masking values, regions of complex texture such as the hair and the cap fringe have high masking values, and the masking values of other regions such as the shoulder, or of some bright regions of the face, are slightly higher than their surroundings, which illustrates the rationality and correctness of the comprehensive visual masking value.
Step 3: perform a wavelet transform on the image to obtain the low-frequency component wavelet coefficient, high-frequency horizontal component wavelet coefficient, high-frequency vertical component wavelet coefficient and high-frequency diagonal component wavelet coefficient of each pixel in the image.
Wavelet coefficients are localized: each wavelet coefficient depends only on a local set of pixels in the original image, so the change caused by embedding a watermark into a wavelet coefficient affects only those local pixels. It follows that the masking effect value of a wavelet coefficient corresponds to the masking characteristics of the corresponding local region of the original image.
The 9/7 wavelet from the JPEG2000 image compression standard is adopted; its low-pass filter has 9 taps and its high-pass filter has 7 taps. Therefore, after one level of wavelet transform, a coefficient of the low-frequency component (LL) is determined by a 9x9 pixel block of the original image, a coefficient of the high-frequency horizontal component (LH) by a 7x9 block, a coefficient of the high-frequency vertical component (HL) by a 9x7 block, and a coefficient of the diagonal detail component by a 7x7 block.
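A sketch of the single-level decomposition in Python with PyWavelets; using the 'bior4.4' filter bank as a stand-in for the JPEG2000 9/7 wavelet is an implementation assumption, not something the specification prescribes.

```python
import numpy as np
import pywt

def decompose(gray: np.ndarray):
    """One-level 2-D wavelet transform: approximation (LL) plus horizontal,
    vertical and diagonal detail subbands."""
    LL, (H, V, D) = pywt.dwt2(gray.astype(np.float64), "bior4.4")
    return LL, H, V, D

def reconstruct(LL, H, V, D) -> np.ndarray:
    """Inverse transform used in step 7 to rebuild the watermarked image."""
    return pywt.idwt2((LL, (H, V, D)), "bior4.4")
```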
Step 4: use the comprehensive visual masking values to calculate the masking effect value of each low-frequency component wavelet coefficient and of each high-frequency diagonal component wavelet coefficient.
Because the masking effect value of a wavelet coefficient is determined by the masking values of several pixels in the spatial domain, the masking effect value of a wavelet coefficient of a given component can be obtained as a weighted average. Taking the LL component as an example, according to the weakest-link principle, smaller spatial-domain masking values inside the 9x9 pixel block of the original image have a larger influence on the masking effect of the LL wavelet coefficient. Therefore, when computing the weighted average, pixels with smaller spatial-domain masking values are given larger weights, and the weights grow exponentially: for pixels whose masking value is below the local mean masking value, the weight is amplified, the amplification coefficient being the cube of the quotient of the local mean masking value and the pixel's masking value. This guarantees that when a small masking value appears, the masking value of the corresponding wavelet coefficient decreases rapidly, preventing it from being overestimated.
Taking the masking effect value of a low-frequency component wavelet coefficient as an example, it is calculated from the comprehensive visual masking values as:

M(i, j) = [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) × M_F(i, j) ] / [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) ]   (5)

In formula (5), M(i, j) is the masking effect value of the low-frequency component wavelet coefficient associated with the pixel in row i and column j, and M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j. F(i, j) is the weight of the comprehensive visual masking value of that pixel, with F(i, j) = 1 if p(i, j) ≥ x̄ and F(i, j) = (x̄ / p(i, j))³ if p(i, j) < x̄, where p(i, j) is the grayscale value of the pixel in row i and column j and x̄ is the mean grayscale value of the 5x5 pixel block centered on that pixel. In formula (5), n is the number of pixel rows and m the number of pixel columns in the image.
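A sketch of formula (5) for a single wavelet coefficient. It takes the block of grayscale values p associated with that coefficient (9x9 for the LL component, as discussed above), the matching block of comprehensive masking values M_F, and the matching block of 5x5 local means x̄; how these blocks are extracted from the full image is an implementation detail assumed here.

```python
import numpy as np

def local_mean_5x5(gray: np.ndarray) -> np.ndarray:
    """Mean grayscale value of the 5x5 block around each pixel (zero padded)."""
    padded = np.pad(gray.astype(np.float64), 2, mode="constant", constant_values=0)
    h, w = gray.shape
    xbar = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            xbar[i, j] = padded[i:i + 5, j:j + 5].mean()
    return xbar

def coefficient_mask(p_block: np.ndarray, MF_block: np.ndarray, xbar_block: np.ndarray) -> float:
    """Formula (5): weighted mean of the comprehensive masking values over the
    pixel block belonging to one wavelet coefficient, with weights F(i, j)."""
    p = p_block.astype(np.float64)
    # F = 1 where p >= xbar, (xbar / p)^3 where p < xbar (small epsilon guards p = 0)
    F = np.where(p >= xbar_block, 1.0, (xbar_block / np.maximum(p, 1e-9)) ** 3)
    return float(np.sum(F * MF_block) / np.sum(F))
```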
The masking effect values of the high-frequency diagonal component wavelet coefficients are calculated from the comprehensive visual masking values in the same way.
Step 5: divide the watermark sequence evenly into 2 watermark sub-sequences.
Because of the nature of wavelet decomposition, the low-frequency component has higher robustness and the high-frequency components have lower robustness, while the robustness differences among the wavelet coefficients within one component are small; embedding within one component is therefore mainly a matter of transparency. To balance robustness and transparency, the present invention divides the watermark into 2 equal parts and embeds them respectively into the low-frequency component and the high-frequency diagonal component obtained by the 9/7 wavelet decomposition.
Step 6: determine the positions at which the watermark is embedded into the low-frequency component wavelet coefficients and embed one watermark sub-sequence; determine the positions at which the watermark is embedded into the high-frequency diagonal component wavelet coefficients and embed the other watermark sub-sequence.
Taking the low-frequency component as an example, the embedding positions are determined and a watermark sub-sequence embedded as follows:
Step 101: sort the low-frequency component wavelet coefficients of the pixels by the magnitude of their masking effect values to obtain a first sequence C_k(i, j), where C_k(i, j) is the k-th low-frequency component wavelet coefficient in the first sequence and (i, j) is the pixel corresponding to that coefficient.
Step 102: select the first N low-frequency component wavelet coefficients of the first sequence to embed a watermark sub-sequence, where N is the number of watermark bits in the sub-sequence.
Each watermark bit (1 or 0) of the sub-sequence is embedded into one wavelet coefficient. For single-point embedding, the common practice is to binarize the watermark and then map it to a -1/1 sequence: for each 1, the wavelet coefficient at the embedding point is increased; for each -1, it is decreased.
For each wavelet component the watermark capacity is fixed, so the quality of the image after embedding depends only on the embedding positions and the embedding strength. Given the masking effect values of the wavelet coefficients determined by the present invention, the simplest approach is to embed the watermark into the coefficients with the larger masking values within a component. The present invention therefore selects the first N low-frequency component wavelet coefficients to embed a watermark sub-sequence, using the formula:

C_k'(i, j) = C_k(i, j) + watermark(k) × M(i, j) × embedpower   (6)

In formula (6), C_k'(i, j) is the value of the k-th low-frequency component wavelet coefficient of the first sequence after watermark embedding, watermark(k) is the k-th watermark value of the watermark sub-sequence, M(i, j) is the masking effect value of the low-frequency component wavelet coefficient C_k(i, j), and embedpower is a preset proportionality coefficient.
Since the comprehensive visual masking value contains the JND information of the original image, it represents the maximum amount of watermark that each pixel of the original image can carry without causing visible distortion. Because the masking effect value of a wavelet coefficient is similar in nature to the comprehensive visual masking value of the original image, the present invention uses the masking effect value of the wavelet coefficient as the parameter controlling the watermark embedding strength, namely M(i, j) in formula (6). In addition, because the human eye has different visual sensitivities to different components, the present invention also sets a different proportionality coefficient for each wavelet component to control the embedding strength, namely embedpower in the formula. For example, for a high-frequency component with high transparency, the embedding strength can be increased to improve robustness.
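A sketch of steps 101-102 together with formula (6): the coefficients of one component are ranked by their masking effect values, and each of the first N receives one watermark bit mapped to -1/1 as described above. The default embedpower value is illustrative only.

```python
import numpy as np

def embed_subsequence(coeffs: np.ndarray, mask: np.ndarray,
                      bits, embedpower: float = 0.1) -> np.ndarray:
    """Embed a watermark sub-sequence into the N coefficients of one wavelet
    component that have the largest masking effect values (formula (6))."""
    wm = np.where(np.asarray(bits) > 0, 1.0, -1.0)    # binary watermark -> -1/1 sequence
    out = coeffs.astype(np.float64).copy()
    # step 101: rank coefficient positions by masking effect value, largest first
    order = np.argsort(mask, axis=None)[::-1][:len(wm)]
    rows, cols = np.unravel_index(order, mask.shape)
    # step 102: embed one bit per selected coefficient, formula (6)
    for k, (i, j) in enumerate(zip(rows, cols)):
        out[i, j] = coeffs[i, j] + wm[k] * mask[i, j] * embedpower
    return out
```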
The process of embedding the other watermark sub-sequence into the high-frequency diagonal component wavelet coefficients is the same as steps 101 and 102.
Step 7: perform an inverse wavelet transform on the watermarked low-frequency component wavelet coefficients, the high-frequency horizontal component wavelet coefficients, the high-frequency vertical component wavelet coefficients and the watermarked high-frequency diagonal component wavelet coefficients to obtain the watermarked image.
The above is the implementation of the adaptive image watermark embedding method based on wavelet transform and visual characteristics designed by the present invention, illustrated schematically in Fig. 3. The watermark extraction flow is essentially the inverse of the embedding process and is not described in detail here. Fig. 4 shows the watermarks extracted, after attacks of various strengths, from watermarked images generated by applying the embedding method of the present invention to three standard 512x512 grayscale images (Boat, Lena, Peppers). The PSNR of the watermarked images is not less than 35 dB, and the visual quality is essentially undistorted.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (5)

1. An adaptive image watermark embedding method based on wavelet transform and visual characteristics, characterized in that the method comprises:
Step 1: convert the image to grayscale;
Step 2: calculate the comprehensive visual masking value of each pixel in the image;
Step 3: perform a wavelet transform on the image to obtain the low-frequency component wavelet coefficient, high-frequency horizontal component wavelet coefficient, high-frequency vertical component wavelet coefficient and high-frequency diagonal component wavelet coefficient of each pixel in the image;
Step 4: use the comprehensive visual masking values to calculate the masking effect value of each low-frequency component wavelet coefficient and of each high-frequency diagonal component wavelet coefficient;
Step 5: divide the watermark sequence evenly into 2 watermark sub-sequences;
Step 6: determine the positions at which the watermark is embedded into the low-frequency component wavelet coefficients and embed one watermark sub-sequence; determine the positions at which the watermark is embedded into the high-frequency diagonal component wavelet coefficients and embed the other watermark sub-sequence;
Step 7: perform an inverse wavelet transform on the watermarked low-frequency component wavelet coefficients, the high-frequency horizontal component wavelet coefficients, the high-frequency vertical component wavelet coefficients and the watermarked high-frequency diagonal component wavelet coefficients to obtain the watermarked image.
2. The method according to claim 1, characterized in that the comprehensive visual masking value of each pixel in the image is calculated using the formula M_F(i, j) = MAX(M_L(i, j), MIN(M_E(i, j), M_T(i, j)));
where M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j of the image;
M_L(i, j) is the luminance masking value of the pixel in row i and column j;
M_E(i, j) is the texture masking value of the pixel in row i and column j;
M_T(i, j) is the edge masking value of the pixel in row i and column j.
3. The method according to claim 1, characterized in that the masking effect value of a low-frequency component wavelet coefficient, or of a high-frequency diagonal component wavelet coefficient, is calculated from the comprehensive visual masking values using the formula

M(i, j) = [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) × M_F(i, j) ] / [ Σ_{1≤i≤n, 1≤j≤m} F(i, j) ];

where M(i, j) is the masking effect value of the low-frequency (or high-frequency diagonal) component wavelet coefficient associated with the pixel in row i and column j;
M_F(i, j) is the comprehensive visual masking value of the pixel in row i and column j;
F(i, j) is the weight of the comprehensive visual masking value of the pixel in row i and column j, with F(i, j) = 1 if p(i, j) ≥ x̄ and F(i, j) = (x̄ / p(i, j))³ if p(i, j) < x̄, where p(i, j) is the grayscale value of the pixel in row i and column j and x̄ is the mean grayscale value of the 5x5 pixel block centered on that pixel;
n is the number of pixel rows in the image;
m is the number of pixel columns in the image.
4. The method according to claim 3, characterized in that said step 6 specifically comprises:
Step 101: sorting the low-frequency component wavelet coefficients of the pixels by the magnitude of their masking effect values to obtain a first sequence;
sorting the high-frequency diagonal component wavelet coefficients of the pixels by the magnitude of their masking effect values to obtain a second sequence;
Step 102: selecting the first N low-frequency component wavelet coefficients of the first sequence to embed one watermark sub-sequence, and selecting the first N high-frequency diagonal component wavelet coefficients of the second sequence to embed the other watermark sub-sequence;
where N is the number of watermark bits in a watermark sub-sequence.
5. The method according to claim 4, characterized in that embedding one watermark sub-sequence into the first N low-frequency component wavelet coefficients of the first sequence, or the other watermark sub-sequence into the first N high-frequency diagonal component wavelet coefficients of the second sequence, uses the formula:
C_k'(i, j) = C_k(i, j) + watermark(k) × M(i, j) × embedpower;
where C_k(i, j) is the k-th low-frequency (or high-frequency diagonal) component wavelet coefficient in the first (or second) sequence;
i and j are respectively the abscissa and ordinate of the pixel corresponding to the k-th coefficient;
C_k'(i, j) is the value of the k-th coefficient after watermark embedding;
watermark(k) is the k-th watermark value in the watermark sub-sequence;
M(i, j) is the masking effect value of the coefficient C_k(i, j);
embedpower is a preset proportionality coefficient.
CN201310519922.9A 2013-10-29 2013-10-29 Adaptive image watermark embedding method based on wavelet transform and visual characteristics Expired - Fee Related CN103559677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310519922.9A CN103559677B (en) 2013-10-29 2013-10-29 Adaptive image watermark embedding method based on wavelet transform and visual characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310519922.9A CN103559677B (en) 2013-10-29 2013-10-29 Adaptive image watermark embedding method based on wavelet transform and visual characteristics

Publications (2)

Publication Number Publication Date
CN103559677A true CN103559677A (en) 2014-02-05
CN103559677B CN103559677B (en) 2016-04-20

Family

ID=50013917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310519922.9A Expired - Fee Related CN103559677B (en) Adaptive image watermark embedding method based on wavelet transform and visual characteristics

Country Status (1)

Country Link
CN (1) CN103559677B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344763A (en) * 2021-08-09 2021-09-03 江苏羽驰区块链科技研究院有限公司 Robust watermarking algorithm based on JND and oriented to screen shooting
CN114268845A (en) * 2021-12-21 2022-04-01 中国电影科学技术研究所 Real-time watermark adding method for 8K ultra-high-definition video based on heterogeneous operation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1377184A (en) * 2002-04-30 2002-10-30 中山大学 Method for embedding digital watermark
CN1389825A (en) * 2002-07-12 2003-01-08 哈尔滨工业大学 Method of embedding digital watermark into and separating and recovering digital watermark from media information
JP2004221950A (en) * 2003-01-15 2004-08-05 Ricoh Co Ltd Image processor, program, and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1377184A (en) * 2002-04-30 2002-10-30 中山大学 Method for embedding digital watermark
CN1389825A (en) * 2002-07-12 2003-01-08 哈尔滨工业大学 Method of embedding digital watermark into and separating and recovering digital watermark from media information
JP2004221950A (en) * 2003-01-15 2004-08-05 Ricoh Co Ltd Image processor, program, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
肖亮灯: "A wavelet-domain digital watermark using human visual masking", 《通信学报》 (Journal on Communications) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344763A (en) * 2021-08-09 2021-09-03 江苏羽驰区块链科技研究院有限公司 Robust watermarking algorithm based on JND and oriented to screen shooting
CN113344763B (en) * 2021-08-09 2021-12-21 江苏羽驰区块链科技研究院有限公司 Robust watermarking algorithm based on JND and oriented to screen shooting
CN114268845A (en) * 2021-12-21 2022-04-01 中国电影科学技术研究所 Real-time watermark adding method for 8K ultra-high-definition video based on heterogeneous operation
CN114268845B (en) * 2021-12-21 2024-02-02 中国电影科学技术研究所 Real-time watermarking method of 8K ultra-high definition video based on heterogeneous operation

Also Published As

Publication number Publication date
CN103559677B (en) 2016-04-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160420

Termination date: 20181029