CN111062853A - Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system - Google Patents


Info

Publication number
CN111062853A
CN111062853A
Authority
CN
China
Prior art keywords
watermark
texture
image
embedding
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911328707.4A
Other languages
Chinese (zh)
Inventor
黄樱
牛保宁
关虎
张树武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Institute of Automation of Chinese Academy of Science
Original Assignee
Taiyuan University of Technology
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology and Institute of Automation of Chinese Academy of Science
Priority to CN201911328707.4A
Publication of CN111062853A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/0021: Image watermarking
    • G06T 1/0028: Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
    • G06T 1/005: Robust watermarking, e.g. average attack or collusion attack resistant
    • G06T 2201/00: General purpose image data processing
    • G06T 2201/005: Image watermarking
    • G06T 2201/0065: Extraction of an embedded watermark; Reliable detection

Abstract

The invention relates to a texture-based adaptive image watermark embedding method and system, and a corresponding extraction method and system. The adaptive image watermark embedding method comprises the following steps: determining watermark embedding areas in a carrier image to be embedded with a watermark, the embedding areas being a plurality of non-overlapping rough-texture regions; establishing a texture-based adaptive parameter model by a multivariate regression analysis method, the model being used to adjust the embedding parameters of the watermark embedding method; and embedding a watermark in each of the watermark embedding areas using the watermark embedding method. The invention truly measures the richness of image texture and accurately locates the rough-texture regions in the image. Through multivariate regression analysis, a functional relation is established between the watermark embedding parameters and the global and local texture values of the rough-texture regions, so that the embedding parameters are adjusted adaptively according to the texture values of each region, guaranteeing the invisibility of the watermark to the maximum extent while enhancing its robustness.

Description

Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a texture-based adaptive image watermark embedding method and system and an extraction method and system.
Background
With the rapid development of computer, Internet, and digital media technologies, digital media content of all types, such as text, images, audio, and video, is widely used and spread across networks and mobile devices. Digital devices make it easy to produce, process, modify, and store such content, which has also given rise to serious online copyright problems. In recent years, growing online piracy has severely harmed the interests of rights holders, and combating it has become a concern shared across many industries. There is therefore an urgent need to effectively identify copyright infringement of digital media content and to seek legal recourse.
Digital image watermarking technology is mainly used for solving the problems of copyright protection and infringement tracking of digital images, and a digital mark for identifying copyright is imperceptibly embedded into an image, wherein the mark is called watermark; when copyright disputes occur, ownership of the image can be determined by extracting the watermark.
The key to applying watermarking effectively in copyright protection is that the embedded watermark must be both invisible and robust. Invisibility means that embedding the watermark does not affect the quality or use of the image; ensuring it is a prerequisite for any application of watermarking. Robustness requires that a watermark embedded in an image can still be extracted correctly even after the image is distorted by various attacks; only a correctly extracted watermark can serve as copyright authentication, so improving robustness as far as possible is a central research goal of watermarking. The embedding strength is the key factor determining both properties: increasing it improves robustness but reduces invisibility, and vice versa. Robustness and invisibility therefore conflict: gains in robustness usually come at the expense of invisibility, and the two constrain each other.
An image usually contains both smooth-texture areas (gradual gray-level change per unit space) and rough-texture areas (drastic gray-level change per unit space). Inspection of watermarked images (as shown in fig. 1) shows that a watermark embedded in smooth-texture regions is more perceptible to the human eye than one embedded in rough-texture regions; the two kinds of region have different tolerances for the watermark. The selection and accurate positioning of the watermark embedding area therefore directly affect the robustness and invisibility of the watermark. Most existing watermarking techniques use the complete image as the embedding area and thus cannot avoid its smooth-texture regions; to preserve the visual quality of the image they can only reduce the embedding strength, which also reduces robustness. A few techniques embed the watermark only in parts of the image, but they generally locate the embedding areas by feature points: points with translation, rotation, or scale invariance are extracted as reference points, and the watermark is embedded in the non-overlapping regions around them. However, the positioning and synchronization of such feature regions depend on the stability of the feature points, and it is difficult to guarantee that the feature regions before and after an attack are fully consistent, which affects the accuracy of watermark extraction. Moreover, feature points are not designed to capture rough-texture regions, so the feature regions can hardly avoid including smooth-texture portions.
Once the embedding area is determined, the watermark is embedded and extracted there by a watermark embedding and extraction method. The embedding method usually exposes an embedding parameter that controls the embedding strength, and setting it must balance robustness against invisibility. Adaptive watermark embedding methods model this parameter and adjust it according to certain features of the carrier image (mean, variance, entropy, etc.), so that the parameter varies with the image, improving both the invisibility of the embedded watermark in each image and its robustness. However, because those features are not directly related to watermark invisibility, and the image characteristics that influence invisibility are numerous and hard to cover, existing adaptive methods only reduce the differences in invisibility between images; they cannot make all watermarked images achieve good and consistent invisibility while also improving robustness.
Disclosure of Invention
In order to solve the problems in the prior art, namely to exploit the fact that a watermark hides easily in the rough-texture areas of an image, to ensure the invisibility of the watermark embedded in each image, and to improve the robustness of the watermark to the maximum extent, the invention provides a texture-based adaptive image watermark embedding method and system.
In order to solve the technical problems, the invention provides the following scheme:
a texture-based adaptive image watermark embedding method, the adaptive image watermark embedding method comprising:
determining a watermark embedding area in a carrier image to be embedded with a watermark, wherein the watermark embedding area is a plurality of non-overlapped rough texture areas;
establishing a texture-based adaptive parameter model corresponding to a preselected watermark embedding method by using a multivariate regression analysis method;
adjusting the embedding parameters in the watermark embedding method through the texture-based adaptive parameter model;
and embedding a watermark in each watermark embedding area by using the watermark embedding method to generate a carrier image containing the watermark.
Optionally, the determining a watermark embedding area in a carrier image to be embedded with a watermark specifically includes:
acquiring a rough texture area of the carrier image to be embedded with the watermark;
and deleting the overlapped rough texture area to obtain a watermark embedding area.
Optionally, the obtaining of the rough texture region of the carrier image to be embedded with the watermark specifically includes:
traversing the carrier image to be embedded with the watermark according to a set step length by utilizing a sliding window; wherein the size and the moving step length of the sliding window are proportional to the size of the carrier image to be embedded with the watermark;
for each step of the traversal,
calculating a global texture value and a local texture value of a current window;
and judging whether the current window area is a rough texture area or not according to the local texture value: if the current window area is a rough texture area, the position information, the global texture value and the local texture value of the rough texture area are saved, and if not, the current window area is discarded.
Optionally, the calculating the global texture value and the local texture value of the current window includes:
the measure of the texture roughness of the image is defined as a texture value of the image, and the texture value is used for reflecting the richness degree of the texture of the image.
Calculating the texture value of the current image according to the following formula:
T_v = (1 / (M × N)) × Σ_{i ≠ 0 or j ≠ 0} |AC(i, j)|
wherein, T _ v represents the texture value of the current image, i ≠ 0 or j ≠ 0, M and N are respectively the width and height of the current image, and AC (i, j) represents the AC coefficient positioned in the ith row and the jth column; the AC coefficients refer to all coefficients except for the coefficient at the coordinate (0,0) position in a coefficient matrix obtained by discrete cosine transforming the current image;
the global texture value refers to the texture value of a complete image or a complete area corresponding to the current window;
the local texture value is obtained by firstly carrying out average division on an image or a region corresponding to the current window to obtain the texture value of each block in the region, and the minimum texture value in all the blocks is the local texture value of the current window.
Optionally, the deleting the overlapped rough texture region to obtain a watermark embedding region specifically includes:
sequencing all the rough texture areas according to the sequence of local texture values from large to small, and traversing the rough texture areas one by one;
for each of the textured rough areas,
calculating the coordinate difference between the central point of the rough texture area and the central points of other rough texture areas;
if the horizontal coordinate difference and the vertical coordinate difference are respectively smaller than the length and the width of the rough texture area, judging that the two rough texture areas are overlapped, and deleting the rough texture area with a smaller local texture value;
and traversing all the rough texture areas to obtain a final watermark embedding area.
Optionally, the determining a watermark embedding area in the carrier image to be embedded with the watermark further includes:
and when none of the regions traversed by the sliding window in the carrier image can be judged a rough texture region, selecting as watermark embedding areas the first N non-overlapping window regions with the largest local texture values.
Optionally, the method for establishing a texture-based adaptive parameter model corresponding to the preselected watermark embedding method by using a multivariate regression analysis method specifically includes:
the method comprises the steps of determining a watermark embedding method in advance, wherein the watermark embedding method is used for embedding watermarks into watermark embedding areas in each carrier image to be embedded with the watermarks;
selecting a plurality of images with different texture values, wherein the size of each image is the same as that of the watermark embedding area;
embedding watermarks into the selected images with different texture values by using the watermark embedding method, and adjusting the embedding parameters in the watermark embedding method so that the structural similarity of each image before and after the watermark is embedded is within the range of a similarity threshold;
and fitting the global texture value and the local texture value of the image with different texture values by utilizing multivariate linear regression to obtain a texture-based adaptive parameter model corresponding to the watermark embedding method.
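As a hedged illustration of this fitting step, the sketch below fits an embedding parameter as a linear function of the global and local texture values using ordinary least squares. The function names, the linear model form, and the synthetic calibration values are assumptions for illustration only; in the method, the calibration targets would be the embedding parameters tuned per training image until the structural-similarity constraint is met.

```python
import numpy as np

# Sketch of the multivariate regression step (names and linear form assumed):
# fit the embedding parameter alpha as a linear function of the global (T_g)
# and local (T_l) texture values of the training images.
def fit_parameter_model(T_g, T_l, alpha):
    # Design matrix with an intercept column; ordinary least-squares fit.
    X = np.column_stack([np.ones_like(T_g), T_g, T_l])
    coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)
    return coef  # [intercept, weight of T_g, weight of T_l]

def predict_alpha(coef, t_g, t_l):
    # Adaptive embedding parameter for one region, given its texture values.
    return coef[0] + coef[1] * t_g + coef[2] * t_l
```

At embedding time, each region's global and local texture values would be substituted into the fitted model to obtain that region's embedding parameter.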
Optionally, the adjusting, through the texture-based adaptive parameter model, of the embedding parameters in the watermark embedding method specifically includes:
traversing the watermark embedding areas;
for each of the watermark embedding areas,
substituting the local texture value and the global texture value of the current watermark embedding area into the texture-based adaptive parameter model to obtain the embedding parameters used by the watermark embedding method when embedding the watermark in that area.
In order to solve the technical problems, the invention also provides the following scheme:
a texture-based adaptive image watermark embedding system, the adaptive image watermark embedding system comprising:
the device comprises a first determining unit, a second determining unit and a control unit, wherein the first determining unit is used for determining a watermark embedding area in a carrier image to be embedded with a watermark, and the watermark embedding area is a plurality of non-overlapped rough texture areas;
the modeling unit is used for establishing a texture-based adaptive parameter model corresponding to a preselected embedding method by utilizing a multiple regression analysis method;
the adjusting unit is used for adjusting the embedding parameters in the watermark embedding method through the texture-based adaptive parameter model;
and the embedding unit is used for embedding the watermark in each watermark embedding area by using the watermark embedding method to generate an embedded carrier image containing the watermark.
In order to solve the above problems in the prior art, that is, to improve the accuracy of watermark extraction, the present invention provides a texture-based adaptive image watermark extraction method and system.
In order to solve the technical problems, the invention provides the following scheme:
a texture-based adaptive image watermark extraction method, comprising:
determining a watermark embedding area in a carrier image of which the watermark is to be extracted, wherein the watermark embedding area is a plurality of non-overlapped rough texture areas;
extracting the watermark from each embedded area by a watermark extraction method;
and analyzing the watermark sequences extracted from the plurality of embedded areas to obtain a final watermark sequence.
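The patent does not specify how the extracted sequences are analyzed; one plausible, hedged realization is a bitwise majority vote over the sequences extracted from the different regions, sketched below (the function name and the voting rule are assumptions):

```python
# Combine the watermark bit sequences extracted from several embedding
# regions into one final sequence by bitwise majority vote; this tolerates
# corruption of individual regions. Ties (possible with an even number of
# regions) fall back to 0 here.
def majority_vote(sequences):
    n = len(sequences)
    return [1 if 2 * sum(bits) > n else 0 for bits in zip(*sequences)]
```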
In order to solve the technical problems, the invention also provides the following scheme:
a texture-based adaptive image watermark extraction system, the adaptive image watermark extraction system comprising:
the second determining unit is used for determining watermark embedding areas in the carrier image of the watermark to be extracted, wherein the watermark embedding areas are a plurality of non-overlapped rough texture areas;
an extracting unit configured to extract a watermark from each of the embedded regions by using a watermark extraction method;
and the analysis unit is used for analyzing the watermark sequences extracted from the plurality of embedded areas to obtain a final watermark sequence.
According to the embodiment of the invention, the invention discloses the following technical effects:
in the invention, the designed measuring method of the image texture can truly reflect the richness degree of the image texture. The invention accurately positions the rough texture area of the image by using the local texture values of the sliding window and the area in the window, thereby ensuring the correctness of watermark extraction; embedding the watermark in the rough texture area to ensure the visual quality of the embedded watermark image; obtaining a functional relation between watermark embedding parameters and global texture values and local texture values of the rough texture regions through multivariate regression analysis, and adaptively adjusting the embedding parameters of the watermark according to the texture values of the regions, thereby ensuring the invisibility of the watermark and enhancing the robustness of the watermark to the maximum extent; the accuracy of watermark extraction is further improved by embedding the same watermark in a plurality of non-overlapping rough texture regions.
Drawings
Fig. 1 shows an image after a watermark is embedded in the complete image using a current state-of-the-art watermarking algorithm, according to an exemplary embodiment (structural similarity SSIM before and after embedding the watermark is 96.28%);
FIG. 2 is a diagram illustrating partitioning of a region in an image when computing local texture values for a region in the image, according to an illustrative embodiment;
FIG. 3 is a flow chart of the texture-based adaptive image watermark embedding method of the present invention;
FIG. 4 illustrates a texture smooth image and a texture rough image in accordance with an exemplary embodiment;
FIG. 5 illustrates a matrix of pixels and DCT coefficients for a simulated texture smoothed image, according to an example embodiment;
FIG. 6 illustrates a matrix of pixels and DCT coefficients for a simulated textured rough image, according to an example embodiment;
FIG. 7 is a diagram illustrating a comparison of local texture values and global texture values for 100 regions that are randomly selected in accordance with an exemplary embodiment;
fig. 8 shows an image after a watermark is embedded in the rough-texture regions of the image using a current state-of-the-art watermarking algorithm, according to an exemplary embodiment (structural similarity SSIM before and after embedding the watermark is 99.61%);
FIG. 9 is a diagram illustrating an error between embedding parameters found using a fitted model and actually set embedding parameters, according to an exemplary embodiment;
FIG. 10 is a diagram illustrating Structural Similarity (SSIM) comparison of images before and after embedding a watermark in accordance with an exemplary embodiment;
FIG. 11 is a block diagram of an adaptive texture-based image watermark embedding system according to the present invention;
FIG. 12 is a flowchart illustrating a texture-based adaptive image watermark extraction method according to the present invention;
fig. 13 is a schematic block diagram of an adaptive image watermark extraction system based on texture according to the present invention.
Description of the symbols:
a first determining unit-11, a modeling unit-12, an adjusting unit-13, an embedding unit-14, a second determining unit-15, an extracting unit-16, and an analyzing unit-17.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
The invention aims to provide a texture-based adaptive image watermark embedding method, which is used for accurately positioning a rough texture area of an image, embedding a watermark only in the rough texture area, obtaining a functional relation between watermark embedding parameters and global texture values and local texture values of the rough texture area through multiple regression analysis, adaptively adjusting the watermark embedding parameters according to the texture values of the area, and ensuring the invisibility of the watermark and enhancing the robustness of the watermark to the maximum extent.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 3, the texture-based adaptive image watermark embedding method of the present invention includes:
step 100: determining a watermark embedding area in a carrier image to be embedded with a watermark, wherein the watermark embedding area is a plurality of non-overlapped rough texture areas;
step 200: establishing a texture-based adaptive parameter model corresponding to a preselected watermark embedding method by using a multivariate regression analysis method;
step 300: adjusting the embedding parameters in the watermark embedding method through the texture-based adaptive parameter model;
step 400: and embedding a watermark in each watermark embedding area by using the watermark embedding method to generate a carrier image containing the watermark.
In step 100, the determining a watermark embedding area in a carrier image to be embedded with a watermark specifically includes:
110: acquiring a rough texture area of the carrier image to be embedded with the watermark;
step 120: and deleting the overlapped rough texture area to obtain a watermark embedding area.
In step 110, the obtaining of the rough texture region of the carrier image to be embedded with the watermark specifically includes:
step 111: and traversing the carrier image to be embedded with the watermark according to the set step length by utilizing a sliding window. Wherein the size and the moving step length of the sliding window are proportional to the size of the carrier image to be embedded with the watermark.
Setting the ratio of the size of the sliding window to the size of the carrier image to be embedded with the watermark as W, and setting the ratio of the transverse moving step length and the longitudinal moving step length to the width and the height of the carrier image to be embedded with the watermark as S.
It should be noted that if the size of the sliding window is set too large, the area in the window is likely to include some smooth texture portions, thereby reducing the visual quality of the image after the watermark is embedded; if the size of the sliding window is set to be too small, the watermark embedding capacity is reduced, the watermark expression capacity is influenced, and the watermark embedding efficiency is reduced. The size of the moving step is related to the calculation overhead and the accuracy of the positioning of the rough texture region, and the smaller the moving step is set, the more time is taken for the traversing process of the sliding window, but the more accurate the capturing of the rough texture region in the whole image is, and vice versa.
Step 112: for each step, the global texture value and the local texture value of the current window are calculated.
In the present embodiment, the measure of the texture roughness of the image (region) is defined as the texture value of the image (region), and the texture value is used to reflect the richness of the texture of the image (region).
The texture value of the current image (region) is calculated according to the following formula:
T_v = (1 / (M × N)) × Σ_{i ≠ 0 or j ≠ 0} |AC(i, j)|
wherein, T _ v represents the texture value of the current image (region), i ≠ 0 or j ≠ 0, M and N are respectively the width and height of the current image (region), and AC (i, j) represents the AC coefficient located in the ith row and jth column; the AC coefficients refer to all coefficients except the coefficient at the position of coordinates (0,0) in a coefficient matrix obtained by discrete cosine transforming a current image (region);
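A minimal sketch of this texture-value computation, assuming scipy's orthonormal 2-D DCT and the area normalization by M × N read from the formula above (the function name is illustrative):

```python
import numpy as np
from scipy.fft import dctn

def texture_value(region):
    """T_v: sum of absolute AC coefficients of the 2-D DCT of the region,
    normalized by its area M x N. The coefficient at (0, 0) is the DC term
    and is excluded from the sum."""
    M, N = region.shape
    coeffs = dctn(region.astype(float), norm="ortho")
    return (np.abs(coeffs).sum() - abs(coeffs[0, 0])) / (M * N)
```

A high-variance (rough) region yields a markedly larger T_v than a low-variance (smooth) one, consistent with the behavior reported for fig. 4.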
as shown in fig. 4 (where part (a) in fig. 4 is a texture smooth image and T _ v is 4.403; and part (b) in fig. 4 is a texture rough image and T _ v is 29.323), it can be seen that the texture value can well reflect the texture condition of the image (region) and is consistent with the human intuitive visual perception.
The global texture value refers to a texture value of a complete image or a complete region corresponding to the current window.
The local texture value is obtained by performing average division on a complete region corresponding to the current window (as shown in fig. 2) to obtain a texture value of each block in the region, and a minimum texture value in all blocks is a local texture value of the current window.
The analysis process of the texture value calculation method is as follows:
this has prompted us to think of the Discrete Cosine Transform (DCT) since the image texture is related to the frequency of the image. DCT is characterized by changing the frequency components of the image that are scattered into an ordered distribution. The upper left corner of the DCT coefficient matrix of the image represents low frequency coefficients and the lower right corner represents high frequency coefficients. The coefficient whose coordinates are (0,0) position is independent of the cosine function (cos0 ═ 1), and is the mean value of the image sample signal, and is called the DC coefficient (DC coefficient) of DCT, and the other DCT coefficients are all obtained by the cosine function, and are called the AC coefficient (AC coefficient). The DC coefficient reflects the gray level of the image, and is independent of the distribution of the gray value; the AC coefficient is related to the image frequency, and can be used to reflect the degree of gray level change of the image, and thus is suitable for describing the texture of the image.
Two sets of random integers in the range 0-255, each with mean 100 and with variances of 10 and 50 respectively, are generated to form two 8 × 8 matrices that simulate a smooth-texture image (part (a) of fig. 5) and a rough-texture image (part (a) of fig. 6); the corresponding DCT coefficient matrices (rounded to integers) are shown in part (b) of fig. 5 and part (b) of fig. 6. Although individual AC coefficients of the two matrices vary in magnitude, the absolute values of the AC coefficients of the rough-texture image are on the whole larger than those of the smooth-texture image. This is because the gray values of the smooth image change slowly, so the projections of the image signal onto the orthogonal cosine basis largely cancel between positive and negative contributions, yielding small AC coefficients; the gray values of the rough image vary greatly, such cancellation occurs far less often, and the resulting AC coefficients are larger.
It follows that the sum of the absolute values of the AC coefficients of the texture-rough image will necessarily be larger than the sum of the absolute values of the AC coefficients of the texture-smooth image, and the richer the texture, i.e. the more drastic the change in the grey value of the image, the larger the sum of the absolute values of the AC coefficients. Therefore, the texture of the image can be well described by the sum of the absolute values of the AC coefficients of the image DCT.
Step 113: and judging whether the current window area is a rough texture area or not according to the local texture value: if the current window area is a rough texture area, the position information, the global texture value and the local texture value of the rough texture area are saved, and if not, the current window area is discarded.
The local texture values characterize the texture more finely, so for a region the local texture values are usually smaller than the global texture values, as shown in fig. 7, which gives the global texture values and the local texture values for 100 regions chosen randomly.
Wherein, the determining whether the current window area is a rough texture area specifically includes: if the local texture value of the current window area in the image is larger than a preset threshold value, the current window area is judged as a rough texture area.
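Steps 111 to 113 can be sketched end to end as below. The window ratio, the step ratio, the 2 × 2 block split used for the local texture value, and the threshold are all illustrative assumptions (the patent leaves W, S, the block layout, and the threshold unspecified); the texture-value helper follows the formula given earlier.

```python
import numpy as np
from scipy.fft import dctn

def texture_value(region):
    # Sum of absolute AC coefficients of the 2-D DCT, normalized by area.
    M, N = region.shape
    c = dctn(region.astype(float), norm="ortho")
    return (np.abs(c).sum() - abs(c[0, 0])) / (M * N)

def local_texture_value(region, splits=2):
    # Minimum texture value over an even grid of sub-blocks (split assumed).
    M, N = region.shape
    bh, bw = M // splits, N // splits
    return min(texture_value(region[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw])
               for r in range(splits) for c in range(splits))

def find_rough_windows(image, win_ratio=0.25, step_ratio=0.125, threshold=5.0):
    # Slide a window proportional to the image size; keep windows whose local
    # texture value exceeds the threshold (judged rough), saving the position
    # and both texture values as the method prescribes.
    H, W = image.shape
    wh, ww = int(H * win_ratio), int(W * win_ratio)
    sy, sx = max(1, int(H * step_ratio)), max(1, int(W * step_ratio))
    rough = []
    for y in range(0, H - wh + 1, sy):
        for x in range(0, W - ww + 1, sx):
            win = image[y:y + wh, x:x + ww]
            lv = local_texture_value(win)
            if lv > threshold:
                rough.append({"x": x, "y": y,
                              "global": texture_value(win), "local": lv})
    return rough
```

Because the local value is the minimum over sub-blocks, a window that straddles a smooth area is rejected even if part of it is rough, which is exactly the fine-grained behavior the local texture value is meant to provide.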
Further, in step 120, the deleting the overlapped rough texture area to obtain a watermark embedding area specifically includes:
step 121: sequencing all the rough texture areas according to the sequence of local texture values from large to small, and traversing the rough texture areas one by one;
step 122: calculating the coordinate difference between the central point of each rough texture area and the central points of other rough texture areas;
step 123: if the horizontal coordinate difference and the vertical coordinate difference are respectively smaller than the length and the width of the rough texture area, judging that the two rough texture areas are overlapped, and deleting the rough texture area with a smaller local texture value;
step 124: and traversing all the rough texture areas to obtain a final watermark embedding area.
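The greedy overlap removal of steps 121 through 124 can be sketched as follows. Equal-sized regions represented as (center-x, center-y, local-texture-value) tuples are assumptions of this sketch:

```python
def remove_overlaps(regions, width, height):
    """Greedy overlap removal (steps 121-124): whenever two equal-sized regions
    overlap, keep the one with the larger local texture value.

    regions: list of (cx, cy, local_texture) tuples, cx/cy = center coordinates.
    """
    kept = []
    # Step 121: traverse regions in descending order of local texture value.
    for cx, cy, t_l in sorted(regions, key=lambda r: r[2], reverse=True):
        # Steps 122-123: two regions overlap iff both center-coordinate
        # differences are smaller than the region's width/height respectively.
        if all(abs(cx - kx) >= width or abs(cy - ky) >= height
               for kx, ky, _ in kept):
            kept.append((cx, cy, t_l))
    return kept
```

Because regions are visited in texture order, any region dropped on a collision always has the smaller local texture value, matching step 123.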
In this embodiment, not all of the textured rough areas are used for embedding the watermark. If the two rough texture areas are overlapped, the watermark information embedded in the rough texture areas can interfere with each other, so that the correctness of watermark extraction is influenced. Therefore, the non-overlapping textured rough areas in the image for embedding the watermark can be defined as the watermark embedding areas.
Further, in step 100, the determining a watermark embedding area to be embedded in the carrier image where the watermark is to be embedded further includes:
and when none of the regions traversed by the sliding window in the carrier image can be judged to be a rough texture area, selecting the first N non-overlapping window regions with the largest local texture values as watermark embedding areas.
Through the above steps, the rough-textured watermark embedding areas in the carrier image to be embedded with the watermark can be accurately located, and even if the image is subjected to geometric attacks (scaling, cropping, flipping, or rotation by an integer multiple of 90 degrees), the located watermark embedding areas remain consistent with those of the original carrier image. Moreover, the embedding areas are prevented from containing small texture-smooth parts.
Any watermark embedding and extracting method can be adopted to embed and extract the watermark in the embedded area positioned by the invention, and the invention belongs to the protection scope as long as the embedded area is positioned by adopting the method of the invention.
In step 200, the establishing a texture-based adaptive parameter model corresponding to the preselected watermark embedding method by using a multiple regression analysis method specifically includes:
step 210: and predetermining a watermark embedding method, wherein the watermark embedding method is used for embedding the watermark into the watermark embedding area in each carrier image to be embedded with the watermark.
The embedding method is a watermark embedding method for embedding the watermark in the watermark embedding area in the carrier image to be embedded with the watermark. Preferably, in the present embodiment, a discrete cosine transform-based differential quantization watermark embedding method is selected to embed a watermark for each watermark embedding area.
Step 220: and selecting a plurality of images with different texture values, wherein the size of each image is the same as that of the watermark embedding area.
The more images selected, and the closer their texture values are to a uniform distribution, the more accurate the regression model; the size of each image is the same as the size of the watermark embedding area. Preferably, this embodiment selects 100 images whose texture values are approximately uniformly distributed and whose size equals that of the embedding region.
Step 230: and embedding watermarks into the selected images with different texture values by using the watermark embedding method, and adjusting the embedding parameters in the watermark embedding method so that the structural similarity of each image before and after the watermark is embedded is within the similarity threshold range.
Because watermark invisibility is measured by evaluation indexes that reflect how the image changes before and after embedding, ensuring invisibility for every watermarked image requires a consistently good invisibility index between each watermarked image and its carrier image. There are strong correlations between the pixels of natural images, and these correlations carry important information about the structure of objects in the visual scene. The Human Visual System (HVS) mainly acquires structural information from the visible region, so image distortion can be approximately perceived by detecting whether the structural information changes. Structural Similarity (SSIM) is a method for measuring the subjective perceptual quality of digital images and videos. Derived from the way the HVS perceives images, SSIM produces evaluation results close to human subjective perception, which makes it suitable as an evaluation index for the invisibility of watermarking techniques. The closer the SSIM between the images before and after embedding is to 1, the better the invisibility of the watermark.
Step 240: and fitting the global texture value and the local texture value of the image with different texture values by utilizing multivariate linear regression to obtain a texture-based adaptive parameter model corresponding to the watermark embedding method.
As shown in fig. 8, when the same watermark embedding method is used to embed the same amount of watermark as in fig. 1 into the texture-rough region of fig. 8, the watermark embedded in the texture-rough region shows better invisibility (a higher SSIM value). The richer the texture of an image (or region), the more pixel variation the human eye can tolerate; that is, the larger the texture value, the larger the embedding parameter that can be set under the same invisibility condition. In this embodiment, a multiple linear regression method is used to obtain the texture-based adaptive parameter model; the analysis proceeds as follows:
To obtain the functional relationship between the embedding parameter and the texture values, the local texture value (T_l), the global texture value (T_g), the square of the local texture value (T_l²) and the square of the global texture value (T_g²) are taken as independent variables, with the embedding parameter (P_e) as the dependent variable. The corresponding data of the 100 images were fitted by multivariate linear regression; Table 1 shows the relational expressions obtained for the various combinations of independent variables and how well each fits the data points.
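The fitting step can be sketched with NumPy's least squares. The synthetic data below only illustrates the mechanics of recovering the linear model's coefficients; it is not the patent's measurement data:

```python
import numpy as np

def fit_embedding_model(t_l, t_g, p_e):
    """Least-squares fit of a model of form (5): P_e = a*T_l + b*T_g + c.

    t_l, t_g: arrays of local/global texture values of the test images;
    p_e: the embedding parameters tuned so each image meets the SSIM target.
    """
    X = np.column_stack([t_l, t_g, np.ones_like(t_l)])
    coef, *_ = np.linalg.lstsq(X, p_e, rcond=None)
    return coef  # (a, b, c)

# Synthetic illustration: generate points that exactly follow the relation
# reported in the text and recover its coefficients.
rng = np.random.default_rng(1)
t_l = rng.uniform(5, 30, 100)
t_g = rng.uniform(10, 40, 100)
p_e = 3.686 * t_l + 2.732 * t_g + 17.87
a, b, c = fit_embedding_model(t_l, t_g, p_e)
```

Extending `X` with `t_l**2` and `t_g**2` columns gives the quadratic variants such as model (6) in the same way.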
TABLE 1 multiple Linear regression results
[Table 1 appears only as an image in the original publication.]
***p<0.01, **p<0.05, *p<0.1
(1) Rows 3 to 7 of Table 1 list the coefficient of each independent variable and the constant term of each model. Taking model (6) as an example, the relation it represents is: P_e = 0.053×T_l² + 0.0251×T_g² + 4.654×T_l + 1.948×T_g + 19.33. R² represents how closely the statistical data match the fitted regression model; in general, the higher R², the better the model fits the data. For model (1), R² indicates that 89% of the variation in the data points can be explained by the local texture value alone. The p-value represents the significance level of the estimated coefficient of each independent variable in the model, reflecting whether the coefficient is statistically meaningful.
As shown in Table 1, the coefficients of models (1), (2), (3) and (5) are significant, and model (5) fits the most data points. The errors between the embedding parameters estimated by these models for each image and the embedding parameters actually set are calculated; as shown in fig. 9, parts (a), (b), (c) and (d) represent the error values of models (1), (2), (3) and (5), respectively. The fluctuation range of model (5)'s errors is the smallest, reflecting that model (5) has the highest goodness of fit.
As the results in Table 1 show, models (5) and (6) both fit all data points well, but in model (6) the quadratic terms of the local and global texture values are not significant, suggesting overfitting, whereas all three estimated coefficients in model (5) are significant. The relationship between the embedding parameter and the texture values is therefore represented by model (5).
Therefore, the adaptive parameter model based on texture in this embodiment is:
P_e=3.686*T_l+2.732*T_g+17.87;
where T_g and T_l represent the global texture value and the local texture value of the embedding region, respectively, and P_e represents the embedding parameter of the embedding region. According to this texture-based adaptive parameter model, the texture values of each embedding region can be used to calculate the corresponding embedding parameter when the watermark embedding method embeds a watermark in that region.
By analysis, the adaptive parameter model is generated under a given post-embedding SSIM condition, so the watermarked images obtain essentially consistent SSIM. Through the model, the texture values of an embedding region can adaptively adjust the corresponding embedding parameter, optimizing the embedding parameter for every embedding region of every image and ensuring the invisibility and robustness of the watermark to the greatest extent.
It should be noted that this adaptive parameter model applies only to this embodiment; however, if other watermark embedding and extraction methods are applied to the embedding regions, obtaining an adaptive parameter model relating the embedding parameter to the texture values of the embedding region by the same multivariate linear fitting method still falls within the protection scope of the present invention.
Optionally, in step 300, the adjusting of the embedding parameters in the embedding method through the texture-based adaptive parameter model specifically includes:
step 310: and traversing the watermark embedding area.
Step 320: and substituting the local texture value and the global texture value of the current watermark embedding region into the texture-based adaptive reference model aiming at each watermark embedding region to obtain the corresponding embedding parameter of the watermark embedding method when the watermark is embedded in the embedding region.
The embodiment adopts a difference quantization watermark embedding method based on discrete cosine transform. The implementation process is as follows:
step S331: the size of the watermark embedding area is adjusted.
It is judged whether the size of the embedding area meets the preset size requirement; if not, the embedding area is enlarged to the minimum extent necessary to meet the requirement, ensuring that it can hold sufficient watermark capacity. The embedding region must have a height and width that are both larger than K × K and evenly divisible by K, where K ∈ N+ and K² ≥ m, with m denoting the length of the original watermark.
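A sketch of the size adjustment, under the assumptions that K is the smallest positive integer with K² ≥ m and that each dimension must exceed K × K while dividing evenly by K:

```python
import math

def adjust_region_size(width: int, height: int, m: int):
    """Minimally enlarge an embedding region for K x K block DCT.

    Assumed reading of the constraint: K is the smallest positive integer
    with K*K >= m (m = watermark length in bits), and both dimensions must
    exceed K*K and be multiples of K.
    """
    K = math.isqrt(m - 1) + 1 if m > 1 else 1  # smallest K with K*K >= m

    def round_up(x: int) -> int:
        x = max(x, K * K + 1)          # must exceed K*K
        return ((x + K - 1) // K) * K  # next multiple of K

    return K, round_up(width), round_up(height)
```

For a 64-bit watermark this gives K = 8, and a 60 × 70 region would be enlarged to 72 × 72.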
Step S332: a discrete cosine transform is performed on the watermark embedding area.
In a preferred embodiment, a two-stage discrete cosine transform is applied to the carrier image. For brevity, the discrete cosine transform is abbreviated as DCT in the detailed description of the steps below.
First, block DCT is performed on the embedding area, with the number of blocks being K × K. The coefficient in the ith row and ith column of each block's DCT coefficient matrix is selected to form a K × K first-stage DCT coefficient matrix; global DCT processing is then applied to the first-stage matrix, finally yielding a K × K two-stage DCT coefficient matrix. The (i, i) coefficient is chosen because it represents a medium/low-frequency component of the block.
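Assuming the region's height and width divide evenly by K, the two-stage transform of step S332 can be sketched as follows (the `'ortho'` normalization is an assumption):

```python
import numpy as np
from scipy.fft import dctn

def two_stage_dct(region: np.ndarray, K: int, i: int) -> np.ndarray:
    """Two-stage DCT sketch: block DCT on a K x K grid of blocks, collect the
    (i, i) coefficient of every block into a K x K first-stage matrix, then
    apply a global DCT to that matrix.
    """
    h, w = region.shape
    bh, bw = h // K, w // K  # block size; h and w must divide evenly by K
    first = np.empty((K, K))
    for r in range(K):
        for c in range(K):
            block = region[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            first[r, c] = dctn(block, norm="ortho")[i, i]  # (i, i): mid/low freq
    return dctn(first, norm="ortho")  # two-stage K x K coefficient matrix
```

The inverse (step S336) runs the same pipeline backwards: inverse global DCT, write each coefficient back to position (i, i) of its block, then inverse block DCT.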
Step S333: and selecting the feature vector of the watermark embedding area.
In this embodiment, a coefficient having a stable characteristic in the embedding region is selected as a value to be quantized, and a feature vector is constructed.
The construction process of the feature vector is as follows: coefficients whose row numbers and column numbers are both even are selected from the two-stage DCT coefficient matrix of each embedding area to form the feature vector of the corresponding area. First, the coefficients on the main diagonal of the two-stage DCT coefficient matrix are extracted; then, taking the main diagonal as the axis of symmetry, the coefficients above-right of the main diagonal and below-left of it are extracted alternately in turn. The extracted coefficients are concatenated to form the feature vector, denoted V^t = [v_0^t, v_1^t, ...], t = 0, 1, ..., n-1, where n represents the number of embedded regions.
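One possible reading of this selection rule, sketched in code; the exact ordering appears only as an image in the original, so the diagonal walk and the even-even index filter below are assumptions:

```python
import numpy as np

def feature_vector(coeffs: np.ndarray) -> np.ndarray:
    """Walk the main diagonal first, then the diagonals above-right and
    below-left of it in turn, keeping only coefficients whose row and
    column indices are both even, and concatenate the result.
    """
    K = coeffs.shape[0]
    order = [(r, r) for r in range(K)]               # main diagonal
    for d in range(1, K):                            # alternate the two sides
        order += [(r, r + d) for r in range(K - d)]  # diagonal d above-right
        order += [(r + d, r) for r in range(K - d)]  # diagonal d below-left
    return np.array([coeffs[r, c] for r, c in order
                     if r % 2 == 0 and c % 2 == 0])
```

For a 4 × 4 matrix this keeps positions (0,0), (2,2), (0,2) and (2,0), in that order.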
Step S334: and (3) embedding the watermark in the feature vector of the embedded area by taking the embedding parameter obtained in the step (320) as the embedding parameter of the watermark embedding method.
Assume the original watermark sequence is W = [w_0, w_1, ..., w_{m-1}], of length m, with w_i ∈ {0, 1}, i = 0, 1, ..., m-1. The feature vector after embedding the watermark is denoted V'^t = [v'_0^t, v'_1^t, ...].
The modification rule of the feature vector, whose exact expressions appear only as images in the original publication, encodes each watermark bit in the difference of a pair of feature-vector coefficients: when w_i = 1 and the pair's difference has not yet reached the embedding parameter, the two coefficients are adjusted so that their difference is at least the embedding parameter; when w_i = 0 and the difference is not yet below the negative of the embedding parameter, the coefficients are adjusted so that their difference is at most that negative value; otherwise, the coefficients are left unchanged.
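The rule can be illustrated with a generic sign-of-difference differential quantization. This is an assumed form for illustration only, since the patent's exact expressions appear only as images:

```python
import numpy as np

def embed_bits(vec: np.ndarray, bits, p_e: float) -> np.ndarray:
    """Generic differential quantization: bit w_i is encoded in the sign of the
    difference of the coefficient pair (v[2i], v[2i+1]) with margin p_e.
    """
    v = vec.astype(np.float64).copy()
    for i, w in enumerate(bits):
        a, b = v[2 * i], v[2 * i + 1]
        d, mid = a - b, (a + b) / 2.0
        if w == 1 and d < p_e:        # force the difference up to +p_e
            v[2 * i], v[2 * i + 1] = mid + p_e / 2.0, mid - p_e / 2.0
        elif w == 0 and d > -p_e:     # force the difference down to -p_e
            v[2 * i], v[2 * i + 1] = mid - p_e / 2.0, mid + p_e / 2.0
        # otherwise: the pair already encodes the bit with enough margin
    return v
```

Spreading the adjustment symmetrically around the pair's midpoint keeps the total change small, which helps the SSIM constraint discussed earlier.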
Step 335: and updating the transformation coefficient matrix of the embedded area by using the modified feature vector.
Step 336: and performing two-stage inverse discrete cosine transform on the updated transform coefficient matrix.
Global inverse discrete cosine transform processing is performed on the two-stage DCT coefficient matrix after the watermark is embedded, yielding a first-stage DCT coefficient matrix; the coefficient in the ith row and ith column of each block DCT coefficient matrix is replaced with the corresponding coefficient of the first-stage matrix; block inverse discrete cosine transform processing is then performed on each updated block DCT coefficient matrix, producing the (enlarged) embedding area in which the watermark is embedded.
Step 337: the (enlarged) embedded region after embedding the watermark is reduced to the same size as the original embedded region.
To show that this embodiment ensures watermark invisibility for every image, 100 images were randomly selected and watermarked using this embodiment. Fig. 10 shows the SSIM of the 100 images before and after watermark embedding; the SSIM of all images is essentially at or above 99%, demonstrating that this embodiment ensures the invisibility of the watermark embedded in every image.
In addition, the invention also provides a texture-based adaptive image watermark embedding system. As shown in fig. 11, the texture-based adaptive image watermark embedding system of the present invention includes a first determining unit 11, a modeling unit 12, an adjusting unit 13, and an embedding unit 14.
The first determining unit 11 is configured to determine watermark embedding areas in a carrier image to be embedded with a watermark, where the watermark embedding areas are a plurality of non-overlapping rough texture areas; the modeling unit 12 is configured to establish a texture-based adaptive parameter model corresponding to a preselected embedding method by using a multivariate regression analysis method; the adjusting unit 13 is configured to adjust the embedding parameters in the embedding method through the texture-based adaptive parameter model; the embedding unit 14 is configured to embed a watermark in each watermark embedding area by using the watermark embedding method and generate a watermarked carrier image.
Compared with the prior art, the texture-based adaptive image watermark embedding system has the same beneficial effects as the texture-based adaptive image watermark embedding method, and is not repeated herein.
Furthermore, in order to accurately extract the watermark from the embedded carrier image containing the watermark, the invention also provides a texture-based adaptive image watermark extraction method. As shown in fig. 12, the texture-based adaptive image watermark extraction method of the present invention includes:
step 500: determining a watermark embedding area in a carrier image of which the watermark is to be extracted, wherein the watermark embedding area is a plurality of non-overlapped rough texture areas;
step 600: extracting the watermark from each embedded area by a watermark extraction method;
step 700: and analyzing the watermark sequences extracted from the plurality of embedded areas to obtain a final watermark sequence.
Step 500 obtains the embedding areas of the carrier image from which the watermark is to be extracted in the same manner as step 100 of the watermark embedding process. Since the image may have been subjected to attacks by the time the watermark is extracted, the texture condition does not necessarily coincide exactly with that at embedding time; the number of rough texture areas located at this point is therefore denoted n'.
Step 600 extracts the watermark from each embedded region using a watermark extraction method corresponding to the watermark embedding method selected by the texture-based adaptive image watermark embedding method of the present invention.
Specifically, the method for extracting the difference quantization watermark based on the discrete cosine transform specifically comprises the following steps:
For each watermark embedding area, the feature vector is extracted by the method described in steps S331-S333 of the watermark embedding process and denoted V^t = [v_0^t, v_1^t, ...]. The watermark sequence extracted from the t-th embedded region is denoted W^t = [w_0^t, w_1^t, ..., w_{m-1}^t], of length m, t = 0, 1, ..., n'-1. The extraction rule, given only as an image in the original publication, determines each bit w_i^t, i = 0, 1, ..., m-1, from the relation between the corresponding pair of feature-vector coefficients. The watermark sequence extracted from each embedded region is saved.
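Assuming a sign-of-difference differential quantization (the patent's exact rule appears only as an image), extraction reduces to comparing coefficient pairs:

```python
import numpy as np

def extract_bits(vec: np.ndarray, m: int):
    """Extraction counterpart of sign-based differential quantization:
    bit i is 1 when v[2i] > v[2i+1], else 0.
    """
    return [1 if vec[2 * i] > vec[2 * i + 1] else 0 for i in range(m)]
```

No embedding parameter is needed at extraction time; only the sign of each pair's difference is read, which is what makes the margin introduced at embedding time robust to moderate distortion.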
Preferably, in step 700, two cases are distinguished depending on whether the embedded original watermark sequence is provided:
in the first case: the original watermark sequence is provided.
The final watermark sequence determination method comprises the following steps: firstly, calculating the bit error rate between the original watermark and the extracted watermark, and if the bit error rate of the watermark extracted from a certain area is greater than a given detection threshold value, considering the watermark as a false detection watermark and excluding the false detection watermark. And after all false detection watermarks are eliminated, comparing the rest watermark sequences according to bits, and taking the mode of each bit to form a final watermark sequence.
The Bit Error Rate (BER) represents the proportion of inconsistent bits in the total bits of the watermark information by comparing the watermark information extracted from the image with the original watermark information in bits. The lower the bit error rate, the fewer bits that indicate the extraction error, the better the robustness of the watermark.
BER = Num(errorBits) / Num(totalBits) × 100%
Wherein num (errorbits) represents the number of inconsistent bits when the extracted watermark sequence is compared with the original watermark sequence; num (totalbits) represents the total number of bits of the original watermark sequence or the extracted watermark sequence.
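The BER filter and per-bit mode vote of the first case can be sketched as follows; the 0.25 detection threshold follows the text, while the tie handling in the vote is an assumption:

```python
import numpy as np

def bit_error_rate(extracted, original) -> float:
    """BER: fraction of mismatching bits, Num(errorBits) / Num(totalBits)."""
    e, o = np.asarray(extracted), np.asarray(original)
    return float(np.mean(e != o))

def fuse_with_original(sequences, original, threshold=0.25):
    """Discard sequences whose BER against the original exceeds the detection
    threshold (false detections), then take the per-bit mode of the rest
    (ties resolve to 0 in this sketch).
    """
    kept = [s for s in sequences if bit_error_rate(s, original) <= threshold]
    stacked = np.asarray(kept)
    return (stacked.sum(axis=0) * 2 > len(kept)).astype(int).tolist()
```

A sequence extracted from a region that never carried a watermark behaves like random bits (BER near 0.5) and is excluded before voting.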
In this embodiment, the relationship between the detection threshold T_w and the image false detection rate P_fp-I is shown in Table 2. When T_w = 0.25, P_fp-I = 1.93e-07, which means that when the bit error rate of the watermark extracted from an embedded area is less than 25%, the probability of a false detection is extremely small. Therefore, the detection threshold may be set to 25%.
TABLE 2 detection threshold and false detection Rate
[Table 2 appears only as an image in the original publication.]
In the second case: there is no original watermark sequence.
The final watermark sequence determination method comprises the following steps: and (3) performing correlation calculation on all the watermark sequences pairwise, and if the correlation between a certain watermark sequence and more than half of other watermark sequences is higher than a given correlation threshold, taking the watermark sequence as a pending watermark sequence. Finally, all the sequences to be watermarked are compared according to bits, and the mode of each bit is taken to form the final watermark sequence.
Since the same watermark sequence is embedded in each embedded region, the watermark sequence extracted from the watermark-embedded region still has strong correlation even if the extraction error occurs in the individual watermark bits. However, the false detection watermark is a watermark sequence extracted from a region without embedded watermark, and the false detection watermark does not meet the rule of watermark embedding and extraction and is generated randomly, so that the correlation between the false detection watermark and the real watermark is weak. According to the correlation principle described above, a false detection watermark can be excluded even when the original watermark information is not provided.
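A sketch of the blind (second) case; Pearson correlation and the 0.5 threshold are assumptions, since the text does not fix the correlation measure or its threshold:

```python
import numpy as np

def fuse_without_original(sequences, corr_threshold=0.5):
    """Blind fusion: a sequence is kept when its correlation with more than
    half of the other sequences exceeds corr_threshold; the per-bit mode of
    the kept sequences (ties resolve to 0) forms the final watermark.
    """
    seqs = [np.asarray(s, dtype=float) for s in sequences]
    kept = []
    for i, a in enumerate(seqs):
        votes = sum(
            1 for j, b in enumerate(seqs)
            if i != j and np.corrcoef(a, b)[0, 1] > corr_threshold
        )
        if votes > (len(seqs) - 1) / 2:  # correlated with more than half
            kept.append(a)
    stacked = np.vstack(kept)
    return (stacked.sum(axis=0) * 2 > len(stacked)).astype(int).tolist()
```

A randomly generated false-detection sequence is uncorrelated (or anti-correlated) with the genuine copies and collects no votes, so it is excluded even without the original watermark.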
The two modes can correct the error of the individual watermark bit of the extracted watermark, thereby achieving the purposes of further improving the watermark extraction accuracy and enhancing the watermark robustness.
Various image processing attacks and geometric attacks were applied to the 100 watermarked images obtained by the texture-based adaptive image watermark embedding method. The watermarks were then extracted from the attacked images by the texture-based adaptive image watermark extraction method and compared with the original watermarks. The average bit error rates of the 100 extracted watermarks, together with the specific attack types and parameter settings, are shown in Tables 3 and 4.
TABLE 3 BERs for extraction of watermarks following image processing attacks
[Table 3 appears only as an image in the original publication.]
TABLE 4 BERs for extraction of watermarks following geometric attacks
[Table 4 appears only as an image in the original publication.]
As can be seen from the results shown in tables 3 and 4 and fig. 10, in the face of common image processing attacks and geometric attacks, the embodiment of the present invention can basically extract 100% of the watermarks, and can also ensure that each image has a high SSIM of 99% or more before and after watermark embedding, thereby proving that the present invention not only can effectively resist various image processing attacks and geometric attacks, has strong robustness, but also can ensure invisibility of the watermarks in each image.
In addition, the invention also provides a texture-based adaptive image watermark extraction system. As shown in fig. 13, the texture-based adaptive image watermark extraction system of the present invention includes a second determination unit 15, an extraction unit 16, and an analysis unit 17.
The second determining unit 15 is configured to determine a watermark embedding area in the carrier image from which the watermark is to be extracted, where the watermark embedding area is a plurality of non-overlapping rough texture areas; the extracting unit 16 is configured to extract a watermark from each embedded area by using a watermark extraction method; the analyzing unit 17 is configured to analyze the watermark sequences extracted from the multiple embedded regions to obtain a final watermark sequence.
Compared with the prior art, the texture-based adaptive image watermark extraction system and the texture-based adaptive image watermark extraction method have the same beneficial effects, and are not repeated herein.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (11)

1. An adaptive image watermark embedding method based on texture, the adaptive image watermark embedding method comprising:
determining a watermark embedding area in a carrier image to be embedded with a watermark, wherein the watermark embedding area is a plurality of non-overlapped rough texture areas;
establishing a texture-based adaptive parameter model corresponding to a preselected watermark embedding method by using a multivariate regression analysis method;
adjusting the embedding parameters in the watermark embedding method through a texture-based adaptive parameter-adapting model;
and embedding a watermark in each watermark embedding area by using the watermark embedding method to generate a carrier image containing the watermark.
2. The texture-based adaptive image watermark embedding method according to claim 1, wherein the determining a watermark embedding area in the carrier image to be embedded with the watermark specifically comprises:
acquiring a rough texture area of the carrier image to be embedded with the watermark;
and deleting the overlapped rough texture area to obtain a watermark embedding area.
3. The texture-based adaptive image watermark embedding method according to claim 2, wherein the obtaining of the textured rough region of the carrier image to be embedded with the watermark specifically comprises:
traversing the carrier image to be embedded with the watermark according to a set step length by utilizing a sliding window; wherein the size and the moving step length of the sliding window are proportional to the size of the carrier image to be embedded with the watermark;
for each of the steps of the method,
calculating a global texture value and a local texture value of a current window;
and judging whether the current window area is a rough texture area or not according to the local texture value: if the current window area is a rough texture area, the position information, the global texture value and the local texture value of the rough texture area are saved, and if not, the current window area is discarded.
4. The adaptive texture-based image watermark embedding method according to claim 3, wherein the calculating the global texture value and the local texture value of the current window specifically includes:
defining the measurement of the texture roughness of the image as a texture value of the image, wherein the texture value is used for reflecting the richness of the texture of the image;
calculating a texture value of the corresponding current image according to the following formula:
T_v = Σ_i Σ_j |AC(i, j)|, the sum running over all (i, j) ≠ (0, 0)
wherein, T _ v represents the texture value of the current image, i ≠ 0 or j ≠ 0, M and N are respectively the width and height of the current image, and AC (i, j) represents the AC coefficient positioned in the ith row and the jth column; the AC coefficients refer to all coefficients except for the coefficient at the coordinate (0,0) position in a coefficient matrix obtained by discrete cosine transforming the current image;
the global texture value refers to a texture value of a complete image or a complete area corresponding to a current window;
the local texture value is obtained by evenly dividing the complete area corresponding to the current window into blocks and calculating the texture value of each block in the area; the minimum texture value among all the blocks is the local texture value of the current window.
5. The texture-based adaptive image watermark embedding method according to claim 2, wherein the deleting of the overlapped coarse texture region to obtain the watermark embedding region specifically comprises:
sequencing all the rough texture areas according to the sequence of local texture values from large to small, and traversing the rough texture areas one by one;
for each of the textured rough areas,
calculating the coordinate difference between the central point of the rough texture area and the central points of other rough texture areas;
if the horizontal coordinate difference and the vertical coordinate difference are respectively smaller than the length and the width of the rough texture area, judging that the two rough texture areas are overlapped, and deleting the rough texture area with a smaller local texture value;
and traversing all the rough texture areas to obtain a final watermark embedding area.
6. The texture-based adaptive image watermark embedding method according to any one of claims 2 to 5, wherein the determining a watermark embedding area in the carrier image in which the watermark is to be embedded further comprises:
and when none of the regions traversed by the sliding window in the carrier image can be judged as the rough texture region, selecting the first N window regions with larger local texture values and no overlap as watermark embedding regions.
7. The adaptive texture-based image watermark embedding method according to claim 1, wherein the establishing of the adaptive texture-based parameter model corresponding to the preselected watermark embedding method by using the multivariate regression analysis method specifically comprises:
the method comprises the steps of determining a watermark embedding method in advance, wherein the watermark embedding method is used for embedding watermarks into watermark embedding areas in each carrier image to be embedded with the watermarks;
selecting a plurality of images with different texture values, wherein the size of each image is the same as that of the watermark embedding area;
embedding watermarks into the selected images with different texture values by using the watermark embedding method, and adjusting the embedding parameters in the watermark embedding method so that the structural similarity of each image before and after the watermark is embedded is within the range of a similarity threshold;
and fitting the global texture value and the local texture value of the image with different texture values by utilizing multivariate linear regression to obtain a texture-based adaptive parameter model corresponding to the watermark embedding method.
8. The adaptive texture-based image watermark embedding method according to claim 1, wherein the adjusting of the embedding parameters in the embedding method through the adaptive texture-based parameter adaptation model specifically comprises:
traversing the watermark embedding area;
for each of the watermark-embedding regions,
and substituting the local texture value and the global texture value of the current watermark embedding region into the texture-based adaptive parameter model to obtain the corresponding embedding parameters of the watermark embedding method when the watermark is embedded in the embedding region.
9. A texture-based adaptive image watermark embedding system, wherein the adaptive image watermark embedding system embeds a watermark into a carrier image to be embedded with a watermark by using the embedding method of any one of claims 1 to 8, the adaptive image watermark embedding system comprising:
a first determining unit, configured to determine watermark embedding areas in the carrier image to be embedded with a watermark, the watermark embedding areas being a plurality of non-overlapping rough texture areas;
a modeling unit, configured to establish the texture-based adaptive parameter model corresponding to the preselected watermark embedding method by multivariate regression analysis;
an adjusting unit, configured to adjust the embedding parameters of the watermark embedding method through the texture-based adaptive parameter model;
and an embedding unit, configured to embed the watermark in each watermark embedding area by the watermark embedding method to generate a watermarked carrier image.
10. A texture-based adaptive image watermark extraction method for extracting a watermark embedded by the embedding method of any one of claims 1 to 8, the extraction method comprising:
determining the watermark embedding areas in a carrier image from which the watermark is to be extracted, the watermark embedding areas being a plurality of non-overlapping rough texture areas;
extracting a watermark from each watermark embedding area by a watermark extraction method;
and analyzing the watermark sequences extracted from the plurality of embedding areas to obtain a final watermark sequence.
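One plausible reading of "analyzing the watermark sequences extracted from the plurality of embedding areas to obtain a final watermark sequence" is a per-bit majority vote across the redundant copies; the claim does not specify the analysis, so the following is a sketch under that assumption:

```python
def merge_watermarks(sequences):
    """Per-bit majority vote over the bit sequences extracted from the
    plurality of embedding areas (ties resolve to 0 in this sketch)."""
    n = len(sequences)
    return [1 if 2 * sum(bits) > n else 0 for bits in zip(*sequences)]
```

Majority voting exploits the redundancy of embedding the same watermark in several regions: a bit corrupted in one region is outvoted by the intact copies.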
11. A texture-based adaptive image watermark extraction system, wherein the adaptive image watermark extraction system extracts a watermark from a carrier image from which the watermark is to be extracted by using the extraction method of claim 10, the adaptive image watermark extraction system comprising:
a second determining unit, configured to determine the watermark embedding areas in the carrier image from which the watermark is to be extracted, the watermark embedding areas being a plurality of non-overlapping rough texture areas;
an extracting unit, configured to extract a watermark from each watermark embedding area by a watermark extraction method;
and an analysis unit, configured to analyze the watermark sequences extracted from the plurality of embedding areas to obtain a final watermark sequence.
CN201911328707.4A 2019-12-20 2019-12-20 Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system Pending CN111062853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911328707.4A CN111062853A (en) 2019-12-20 2019-12-20 Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system

Publications (1)

Publication Number Publication Date
CN111062853A true CN111062853A (en) 2020-04-24

Family

ID=70300835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911328707.4A Pending CN111062853A (en) 2019-12-20 2019-12-20 Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system

Country Status (1)

Country Link
CN (1) CN111062853A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115564634A (en) * 2022-12-05 2023-01-03 杭州海康威视数字技术股份有限公司 Video anti-watermark embedding method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010109486A (en) * 2008-10-28 2010-05-13 Seiko Instruments Inc Image processing device, and image processing program
CN102903076A (en) * 2012-10-24 2013-01-30 兰州理工大学 Method for embedding and extracting reversible watermark of digital image
CN106097237A (en) * 2016-05-25 2016-11-09 中国科学院自动化研究所 The embedding grammar of image watermark and extracting method and associated method
CN106485640A (en) * 2016-08-25 2017-03-08 广东工业大学 A kind of reversible water mark computational methods based on multi-level IPVO
CN108280797A (en) * 2018-01-26 2018-07-13 江西理工大学 A kind of Arithmetic on Digital Watermarking of Image system based on Texture complication and JND model
CN109300078A (en) * 2018-08-31 2019-02-01 太原理工大学 A kind of image spread-spectrum watermark embedding grammar with adaptive feed-forward network intensity
CN109493271A (en) * 2018-11-16 2019-03-19 中国科学院自动化研究所 Image difference quantisation watermarking embedding grammar, extracting method, equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
IOAN-CATALIN DRAGOI et al.: "Textures and reversible watermarking", 2014 22nd European Signal Processing Conference (EUSIPCO) *
YING HUANG et al.: "Enhancing Image Watermarking With Adaptive Embedding Parameter and PSNR Guarantee", IEEE Transactions on Multimedia *
ZHANG Wenhua: "Watermark Embedding Algorithm Based on Texture Regions in the DWT Domain", Science Technology and Engineering *
HUANG Ying et al.: "Adaptive Watermarking Algorithm Based on Image Texture", Journal of Beijing University of Aeronautics and Astronautics *
GONG Yimin et al.: "Texture-Feature-Adaptive Holographic Watermarking Algorithm", Packaging Engineering *

Similar Documents

Publication Publication Date Title
Iakovidou et al. Content-aware detection of JPEG grid inconsistencies for intuitive image forensics
Yang et al. A 3D steganalytic algorithm and steganalysis-resistant watermarking
CN108109147B (en) No-reference quality evaluation method for blurred image
Yaghmaee et al. Estimating watermarking capacity in gray scale images based on image complexity
Li et al. Detection of tampered region for JPEG images by using mode-based first digit features
US20140307916A1 (en) Method and device for localized blind watermark generation and detection
Chen et al. JSNet: a simulation network of JPEG lossy compression and restoration for robust image watermarking against JPEG attack
NZ565552A (en) Embedding bits into segmented regions of an image based on statistical features
Yao et al. An improved first quantization matrix estimation for nonaligned double compressed JPEG images
Wu et al. Visual structural degradation based reduced-reference image quality assessment
Liao et al. First step towards parameters estimation of image operator chain
Ouyang et al. A semi-fragile watermarking tamper localization method based on QDFT and multi-view fusion
CN111062853A (en) Self-adaptive image watermark embedding method and system and self-adaptive image watermark extracting method and system
CN109493271B (en) Image difference quantization watermark embedding method, image difference quantization watermark extracting equipment and storage medium
CN116757909B (en) BIM data robust watermarking method, device and medium
Kay et al. Robust content based image watermarking
Nie et al. Robust video hashing based on representative-dispersive frames
Zhang et al. Towards perceptual image watermarking with robust texture measurement
Cao et al. Invisible watermarking for audio generation diffusion models
CN111192302A (en) Feature matching method based on motion smoothness and RANSAC algorithm
CN114359009B (en) Watermark embedding method, watermark embedding network construction method, system and storage medium for robust image based on visual perception
CN114390154A (en) Robust steganography method and system for selecting embedded channel based on channel matching network
Tang et al. Reversible data hiding for JPEG images based on block difference model and Laplacian distribution estimation
CN111724373A (en) Visual security measurement method based on perceptually encrypted light field image
CN111260533B (en) Image watermarking method and system for fusing texture rule features in image blocks and between blocks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200424