CN110796583A - Stylized visible watermark adding method - Google Patents

Stylized visible watermark adding method

Info

Publication number
CN110796583A
CN110796583A (Application CN201911020395.0A)
Authority
CN
China
Prior art keywords: image, watermark, size, stylized, style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911020395.0A
Other languages
Chinese (zh)
Inventor
王超
李静
崔员宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911020395.0A priority Critical patent/CN110796583A/en
Publication of CN110796583A publication Critical patent/CN110796583A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/0021: Image watermarking

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention overcomes the defects that existing visible watermarks degrade the viewing experience and are easily removed automatically by algorithms, and provides a stylized visible watermark adding method. The method changes the watermark style by performing content extraction, style representation and image style conversion on the watermark; it obtains the size and position of the salient target of the image to be watermarked with a salient object detection method, determines the size and adding position of the selected watermark, and adds the stylized watermark around the salient target. Adding the watermark to a non-salient region reduces its influence on the visual impression of the image; converting the style of each watermark makes every watermark different and increases the difficulty of automatic watermark removal. The method can therefore reduce the visual impact of the watermark on the image while providing a certain degree of resistance to automatic watermark removal.

Description

Stylized visible watermark adding method
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for adding a visible watermark to an image.
Background
Visible watermarks [1] are widely used to mark the source and protect the copyright of images; a common approach is to overlay a semi-transparent image containing a name or a logo on the source image. However, a visible watermark appearing in the image affects its visual quality, especially when it falls on a salient object. For example, in a portrait photograph whose cheek is covered with a visible watermark to mark the image source, the watermark is very objectionable to a user who wishes to enjoy the image. On the other hand, if the image only carries an invisible digital watermark, some individuals or small companies will refuse to purchase the image and simply use it without authorization, reducing the number of paying users, so a visible watermark is still quite necessary. The design goal of the visible watermark is to make the cost of removing the watermark from the image higher than the cost of purchasing it; as long as removal is far more expensive than the profit gained, the copyright of the image is protected. At the same time, a watermark that is too large spoils the appearance of the image, while one that is too small is easy to remove.
However, Google [2] proposed a method that exploits a large number of images carrying the same watermark: the consistency of the watermark across images allows its structure to be estimated automatically, after which the images are restored with high precision and the watermarks are removed in batches, so the marginal cost of removing the watermark from an image is almost zero. As shown in FIG. 1, the pictures are taken from the CanStock website and carry the text watermark "Canstock" uniformly added by the website; the Google [2] algorithm can automatically estimate the watermark structure from this consistency, restore the images, remove the watermark and obtain the original unwatermarked images. Because this attack exploits the consistency of watermarks across many images, the robustness of visible watermarks added from a single unified template is greatly reduced. The same paper verifies that transforming the watermark added to each image raises the difficulty of removal. Therefore, the invention designs a new watermark adding method for image copyright protection: it combines a stylized-image algorithm to vary the watermarks so that the watermark in each image is different, effectively preventing automatic watermark removal.
Disclosure of Invention
In view of the watermark removal algorithm newly proposed by Google [2], which can effectively remove uniformly added text watermarks as shown in FIG. 1, the invention aims to provide a stylized visible watermarking method that varies the watermarks so that the watermark in each image is different, effectively preventing automatic watermark removal.
The invention discloses a stylized visible watermark adding method, which changes the watermark style by performing content extraction, style representation and image style conversion on the watermark; the size and position of the salient target of the image to be watermarked are obtained with a salient object detection method, the size and adding position of the selected watermark are determined, and the stylized watermark is added around the salient target. Adding the watermark to a non-salient region reduces its influence on the visual impression of the image; converting the style of each watermark makes every watermark different and increases the difficulty of automatic removal by an algorithm, so the method can reduce the visual impact of the watermark on the image while providing a certain degree of resistance to automatic watermark removal.
The method comprises a stylized image module, a saliency detection module and a watermark adding module; the module architecture of the method is shown in FIG. 2. It comprises the following steps:
Step 1: constructing a stylized image module, and extracting a content function in the image by local feature extraction and loss mean square error aiming at the watermark image; and outputting the style representation of the acquired image through a gram matrix and a convolution layer aiming at the image to be added with the watermark. Updating the loss function by setting the weight of the image style and the weight of the image content, retaining the image content of the watermark, representing the content in the watermark by the style of the image to be watermarked, and outputting the stylized watermark image.
Step 1-1: image content extraction, wherein the image content is obtained by stacking several layers of a neural network; each layer uses the output of the previous layer to extract progressively more complex features until these features can represent the image content, so each layer can be regarded as a bank of local feature extractors.
Step 1-2: the image style representation is characterized in that a style matrix of an image is obtained through a feature space, the feature space is constructed on filter responses of any layer of a network and is composed of correlations among different filter responses, the style of the image can be reserved through a gram matrix, and the stable and multi-scale style representation of an input image is obtained through the characteristic correlation containing multiple layers.
Step 1-3: and image style conversion, namely, on the basis of reserving the content of the target image, referring the image style to the target image. The input image style is converted into the designated artistic style by initializing a Gaussian distribution image and minimizing the distance between the image style defined by a plurality of layers in the deep neural network and the image content representation.
Step 2: constructing a saliency detection module of the image, and obtaining a background template fusing semantic information by calculating an initial background template and image semantic information; and then, the background template is used as a background dictionary in a sparse and low-rank matrix recovery model, structural constraints between adjacent super-pixel blocks are combined, a saliency map is obtained by solving, the saliency map is output, and a background region and a saliency target of the image to be added with the watermark can be obtained through the saliency map.
And step 3: the method comprises the steps of constructing a stylized watermark adding module, inputting an original image to be watermarked, a saliency map of the original image after being processed by a saliency detection module and a stylized watermark image after being processed by a stylized image module, acquiring an image background area and a foreground target by utilizing the saliency image, and selecting a watermark adding area based on a spatial position constraint model. And scaling and adding the watermark by combining the size of the image foreground object and the size of the watermark. And finally, adding the stylized watermark to the selected position to be added with the watermark, and outputting the image added with the watermark.
Step 3-1: firstly, salient object detection is performed on the image as preprocessing, yielding the salient target and the background region; then an appropriate watermark adding position is selected according to the size of the salient target and the size of the image watermark. The shape of the watermark also affects the optimal adding position. As shown in FIG. 9, watermarks come in different shapes, some square and some rectangular, and different regions should be selected for them: for example, the four watermarks in the first row are square and should preferentially be added to a square background region, while the watermark in the second row is rectangular and should preferentially be added to a rectangular background region. For the watermark to be neither easily removed nor visually intrusive, it should be placed as close as possible to the salient object of the image, but not on top of it. The watermark size also constrains the adding position: the larger the watermark, the larger the background region required and the harder it is to find a suitable region around the salient object. By acquiring the watermark size in advance, a more appropriate adding position can be selected. The method obtains the adding position of the watermark with a spatial position constraint model, so that the watermark is as close as possible to the salient object without covering it.
Step 3-2: a method for selecting the watermark size based on the image salient target is designed: the salient target and the background region are obtained first, and then the watermark is scaled to an appropriate size according to the size of the salient target and the size of the image watermark. Most watermarks are irrelevant to the picture content, and the larger their area, the more they affect the viewing experience; if the watermark is too small, however, it is easy to remove and no longer fulfils its role of marking the source and preventing others from stealing the picture. In other words, for images with different foreground objects and different background regions, the watermarking system needs to add watermarks of different sizes. The watermark size must therefore be determined by a suitable selection strategy that balances image appearance against theft protection. The invention designs such a strategy based on the image salient target: the salient target and the background region are obtained first, and the watermark size is then coordinated with the size of the salient target to obtain an appropriate watermark size.
Step 3-3: and adding the stylized watermark into the image by combining the watermark adding method of the stylized watermark and the background template and combining the size selection strategy and the watermark position selection strategy of the watermark. Firstly, detecting a foreground target of an image and a background area of the image by using a saliency target detection algorithm, and analyzing the saliency target of the image to obtain the size of a self-adaptive watermark, so that the watermark area is reduced, and the influence on the impression due to overlarge watermark is prevented; the adding position most suitable for the watermark is obtained by analyzing the background template of the image, so that the watermark is added at a position which is not easily noticed by people in vision. The watermark is stylized through an image stylization algorithm, so that the style of the watermark is consistent with the style of an image to be added, the image is prevented from being stolen, and the expression of the image can be assisted.
Drawings
FIG. 1 is a schematic diagram of a Google batch automatic watermarking algorithm;
FIG. 2 is a diagram of a stylized watermarking method system implementation;
FIG. 3 is a stylized graphics module input-output functional diagram;
FIG. 4 is a different artistic style of picture;
FIG. 5 is a schematic diagram of a stylized image;
FIG. 6 is a schematic diagram of the significance detection module input-output function;
FIG. 7 is a stylized watermarking module input-output functional diagram;
FIG. 8 is a saliency detection map of a different picture;
fig. 9 is a schematic diagram of a different watermark;
fig. 10 is a schematic diagram of watermarking positions based on saliency target detection;
FIG. 11 is a block diagram of the stylized watermarking method based on salient object detection;
FIG. 12 is a schematic diagram of the stylized watermarking method based on salient object detection.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
The stylized visible watermark adding method of the invention comprises three steps, namely stylizing the image, saliency detection and stylized watermark adding, detailed below and shown in FIG. 2:
Step 1: the stylized image module. FIG. 3 shows the input-output function of the stylized image module: the inputs are the watermark image and the image to be watermarked. For the watermark image, the content representation is extracted through local feature extraction and a mean-square-error loss; for the image to be watermarked, the style representation is obtained through the gram matrix of convolution-layer features. The loss function is updated by weighting the image style and the image content so that the content of the watermark is retained while it is rendered in the style of the image to be watermarked, and the stylized watermark image is output. The stylized image module includes the following steps:
step 1-1 image content extraction
Gatys [3] proposed an image style transfer algorithm based on neural networks, which separates and recombines the content and the style of an image. Image content is extracted by stacking several layers of a neural network; each layer uses the output of the previous layer to extract progressively more complex features until these features can represent the image content, so each layer can be regarded as a bank of local feature extractors. Assume that convolutional layer l contains N_l filters; it then produces N_l feature maps, each of size M_l (length times width), so the responses of layer l can be stored in one matrix

F^l in R^{N_l x M_l},    (1)

where F^l_{ij} denotes the activation value of the i-th filter of layer l at position j. Given a content picture p and a generated picture x (initialized from a Gaussian distribution), the corresponding feature representations P^l and F^l are obtained from the same convolutional layer, and the content loss is taken as the mean square error

L_content(p, x, l) = (1/2) * sum_{i,j} (F^l_{ij} - P^l_{ij})^2,    (2)

where F^l and P^l are two matrices of size N_l x M_l, i.e. the number of filters of layer l times the length-times-width of each feature map.
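As an illustrative sketch only (the patent discloses no code), the content loss of equation (2) can be written as follows in Python; the random tensors standing in for the layer-l feature maps are assumptions, since in practice they would come from a pretrained convolutional network such as VGG.

```python
import torch

def content_loss(F_l: torch.Tensor, P_l: torch.Tensor) -> torch.Tensor:
    """Content loss of equation (2): 0.5 * sum((F - P)^2).

    F_l, P_l: layer-l feature maps of shape (N_l, H, W) for the generated
    image and the content (watermark) image respectively, with M_l = H * W.
    """
    return 0.5 * torch.sum((F_l - P_l) ** 2)

# toy usage with random tensors standing in for layer-l activations
if __name__ == "__main__":
    N_l, H, W = 64, 32, 32
    P = torch.randn(N_l, H, W)                         # content-image features
    Fg = torch.randn(N_l, H, W, requires_grad=True)    # generated-image features
    loss = content_loss(Fg, P)
    loss.backward()                                     # gradient w.r.t. the generated features
    print(float(loss))
```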
Step 1-2 image style representation
Image style can be regarded as a class of features contained in an image, essentially its texture and color characteristics at different scales. In the art world, styles are roughly divided into Chinese painting, cubism, impressionism, realism, expressionism and so on; FIG. 4 shows pictures of different artistic styles.

For image style transfer, a common earlier practice was to analyse the style of a certain class of images and build a mathematical model of the texture and color variations of that style, then modify the image to be transferred so that it better fits the established model. The results were poor: one model suited only one style, could not perform arbitrary style transfer on an image, and had a narrow range of application. With the large-scale application of deep neural networks, the style of any image can be extracted and changed into countless different styles. The style matrix of an image is obtained from a feature space built on top of the filter responses of any layer of the network; it consists of the correlations between different filter responses, represented here by the gram matrix

G^l_{ij} = sum_k F^l_{ik} * F^l_{jk},    (3)

where G^l_{ij} is the inner product of the i-th and j-th feature maps of layer l. The gram matrix measures the correlation between every pair of features: if two features appear together their weight in the gram matrix is higher, and if they are mutually exclusive their weight is lower. The style of the image can therefore be preserved through the gram matrix, and a stable, multi-scale style representation of the input image is obtained by using the feature correlations of multiple layers.

Suppose a is the style image and x is the generated image, and let A^l and G^l denote their gram matrices at layer l; the loss at this layer is

E_l = (1 / (4 * N_l^2 * M_l^2)) * sum_{i,j} (G^l_{ij} - A^l_{ij})^2,    (4)

where N_l is the number of filters and M_l is the size (length times width) of each feature map. Since the style information is extracted from the outputs of several convolutional layers, the total loss function of the image style is

L_style(a, x) = sum_l w_l * E_l,    (5)

where w_l is the loss weight of each layer.
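The gram matrix of equation (3) and the style losses of equations (4) and (5) can be sketched as follows; the feature maps are again assumed to come from some convolutional network, so this is an illustration rather than the reference implementation.

```python
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of equation (3): G_ij = sum_k F_ik * F_jk.

    feat: layer-l feature map of shape (N_l, H, W); it is flattened to
    (N_l, M_l) with M_l = H * W before taking inner products.
    """
    n_l = feat.shape[0]
    f = feat.reshape(n_l, -1)           # (N_l, M_l)
    return f @ f.t()                     # (N_l, N_l)

def layer_style_loss(F_l: torch.Tensor, A_feat: torch.Tensor) -> torch.Tensor:
    """Per-layer style loss E_l of equation (4)."""
    n_l = F_l.shape[0]
    m_l = F_l.shape[1] * F_l.shape[2]
    G = gram_matrix(F_l)                 # generated image
    A = gram_matrix(A_feat)              # style image
    return torch.sum((G - A) ** 2) / (4.0 * n_l ** 2 * m_l ** 2)

def style_loss(gen_feats, style_feats, weights):
    """Total style loss of equation (5): sum_l w_l * E_l."""
    return sum(w * layer_style_loss(f, a)
               for w, f, a in zip(weights, gen_feats, style_feats))
```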
Step 1-3 image style conversion
Style conversion means rendering the style of a reference image onto the target image while keeping the content of the target image. Starting from an image initialized from a Gaussian distribution, the distance between its multi-layer style representation and the style image, and the distance between its content representation and the content image, are minimized, converting the input image into the specified artistic style. Let the input content image be p, the style image be a, and the generated image be x; the loss function of image style conversion is

L_total(p, a, x) = alpha * L_content(p, x) + beta * L_style(a, x),    (6)

where alpha is the weight of the image content and beta is the weight of the image style reconstruction. The image is generated with the L-BFGS method. A stylized image is shown in FIG. 5: the first column is an image selected from the HKU-IS dataset whose style is to be modified, the second column is the classic starry-sky painting that provides the desired style, and the third column is the stylized image.
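A minimal sketch of the whole style conversion of equation (6) is given below. The tiny untrained feature extractor, the loss weights and the iteration count are assumptions made only so that the example is self-contained and runnable; the actual method would use a deep pretrained network.

```python
import torch
import torch.nn as nn

class TinyFeatures(nn.Module):
    """Stand-in feature extractor (untrained convolutions) used only to keep
    the sketch self-contained; the method itself assumes a deep network."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
        ])
    def forward(self, x):
        feats = []
        for b in self.blocks:
            x = b(x)
            feats.append(x)
        return feats                     # one feature map per layer

def gram(f):                             # equation (3), f has shape (C, H, W)
    n = f.shape[0]
    v = f.reshape(n, -1)
    return v @ v.t()

def stylize(content_img, style_img, alpha=1.0, beta=1e3, steps=50):
    """Minimise alpha*L_content + beta*L_style (equation (6)) with L-BFGS,
    starting from a Gaussian-initialised image."""
    net = TinyFeatures().eval()
    for p in net.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        P = net(content_img)                           # content targets (all layers, for simplicity)
        A = [gram(f[0]) for f in net(style_img)]       # style targets
    x = torch.randn_like(content_img, requires_grad=True)
    opt = torch.optim.LBFGS([x], max_iter=steps)

    def closure():
        opt.zero_grad()
        feats = net(x)
        l_content = sum(0.5 * torch.sum((f - p) ** 2) for f, p in zip(feats, P))
        l_style = sum(torch.sum((gram(f[0]) - a) ** 2) /
                      (4.0 * f.shape[1] ** 2 * (f.shape[2] * f.shape[3]) ** 2)
                      for f, a in zip(feats, A))
        loss = alpha * l_content + beta * l_style
        loss.backward()
        return loss

    opt.step(closure)
    return x.detach()

# toy usage on random 3x64x64 "images" (batch dimension of 1)
out = stylize(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```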
In the stylized watermark adding method, the saliency detection module realizes salient object detection on the image: it takes the original image to be watermarked as input and outputs the processed saliency map, from which the background region and the salient target of the image can be obtained, providing support for selecting the watermark size and adding position. The module applies a salient object detection algorithm for images, including but not limited to: finding a more accurate background template; using that background template as the dictionary matrix in a saliency detection algorithm based on sparse and low-rank matrix recovery; solving the model with the alternating direction method of multipliers to obtain the reconstruction error matrix of the sparse part and the representation coefficient matrix of the low-rank part; and fusing the two matrices into the salient object detection result map.
Step 2: the input-output function of the saliency detection module is shown in FIG. 6. The picture to be watermarked is input, and a background template fused with semantic information is obtained by computing an initial background template and the semantic information of the image; this background template is then used as the background dictionary in a sparse and low-rank matrix recovery model, which is solved together with structural constraints between adjacent super-pixel blocks to obtain the saliency map. The saliency map is output, and the background region and the salient target of the image to be watermarked can be obtained from it.
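The sparse and low-rank recovery model itself is not reproduced here; the sketch below only illustrates how a saliency map, however it was obtained, can be split into the salient-target mask S and the background-template mask L used by the later steps. The two thresholds are illustrative assumptions.

```python
import numpy as np

def split_saliency(sal_map: np.ndarray, fg_thr: float = 0.6, bg_thr: float = 0.2):
    """Split a saliency map (values in [0, 1]) into a salient-target mask S and
    a background mask L; pixels between the two thresholds form the uncertain
    middle area mentioned in the text and belong to neither mask."""
    S = sal_map >= fg_thr           # salient target
    L = sal_map <= bg_thr           # background template
    return S, L

def bounding_box(mask: np.ndarray):
    """Length (a) and width (b) of the tight bounding box of a boolean mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0, 0
    return int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)

# toy usage with a random "saliency map"
sal = np.random.rand(240, 320)
S, L = split_saliency(sal)
a_S, b_S = bounding_box(S)          # size of the salient target, used in step 3
```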
Step 3: the input-output function of the stylized watermark adding module is shown in FIG. 7. The inputs are the original image to be watermarked, its saliency map produced by the saliency detection module, and the stylized watermark image produced by the stylized image module. The saliency map is used to obtain the background region and the foreground target of the image, and the watermark adding region is selected based on the spatial position constraint model. The watermark is scaled and added by combining the size of the image foreground target with the size of the watermark; finally, the stylized watermark is added at the selected position and the watermarked image is output. In the stylized watermark adding method, this module is mainly responsible for adding the stylized watermark at a suitable position of the image to be watermarked. Most watermarks affect the visual appearance of the image; adding the watermark onto a salient object is the case that harms the appearance most, but adding it at the very edge of the image lets an image thief remove it easily and cheaply by simply cropping. For the watermark to be neither easily removed nor visually intrusive, it should be close to the salient target of the image but not placed on top of it. The method obtains the position of the image salient target through a salient object detection method and adds the stylized watermark to a background region around the salient target. The stylized watermark adding module mainly comprises the following steps:
step 3-1 design method for selecting watermark adding position based on image background area
The position where the watermark is added depends on the size of the salient object and the size of the background regions in the image. In most images the watermark affects the visual impression, and adding it directly onto a salient object is the case that affects the impression most; but if the watermark is added at the edge of the image, it is easy to remove by cropping. For the watermark to be neither easily removed nor visually intrusive, it should be placed as close as possible to the salient object, but not on top of it. Different images have background regions of different sizes, so the adding position must be determined by a suitable selection strategy that balances appearance against theft protection. A watermark adding position selection strategy based on salient object detection is therefore designed, with two main steps: first, salient object detection is performed on the image as preprocessing to obtain the salient target and the background region; second, an appropriate adding position is selected according to the size of the salient target and the size of the image watermark.
(1) Obtaining a background region and a foreground region of an image
Firstly, the salient target map S and the background template L of the image are obtained with a salient object detection method. Let I denote the original image. Because the salient object detection result is not perfect, there is an uncertain middle area that can be assigned neither to the salient region nor to the background, so the salient target and the background template are disjoint and together do not cover the whole image:

S ∪ L ⊂ I,  S ∩ L = ∅,    (7)

where a_I and b_I denote the length and width of the image I, and a_L, a_S, b_L, b_S denote the length of the background template, the length of the salient target, the width of the background template and the width of the salient target, respectively. The watermark size must be smaller than the image background size: if the watermark image is Wm with length a_Wm and width b_Wm, then

a_Wm ≤ a_L,  b_Wm ≤ b_L.    (8)

Next, the background region of the image is obtained from the salient object detection result. Let L denote the set of background pixels; the background region points L_d are obtained as

L_d = L ∩ (I - S).    (9)

A contour segmentation algorithm is applied to the background region points to sharpen the boundary of the background region, and the length and width of every independent background block in the image are obtained. Suppose the image background contains n independent blocks with areas S_1, ..., S_i, ..., S_n, computed as

S_i = a_i * b_i,    (10)

where a_i and b_i are the length and width of background block i, and the length and width of the initial watermark are a_Wm and b_Wm. The watermark size is compared with the size of each background block. As shown in FIG. 8, only two background regions in the first image are suitable as watermark adding regions; the second image has three candidate regions at the far left, the middle and the upper right; in the third image the area between the two cups has to be considered; and in the fourth image the candidates are the upper-left, upper-right and lower corners.
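A sketch of how equations (9) and (10) can be computed is given below; connected-component labelling is used as a simple stand-in for the contour segmentation step, and the minimum-area filter is an assumption.

```python
import numpy as np
from scipy import ndimage

def background_regions(S: np.ndarray, L: np.ndarray, min_area: int = 400):
    """Independent background blocks and their sizes (equations (9)-(10)).

    S, L: boolean masks of the salient target and the background template.
    L_d = L ∩ (I - S) keeps background pixels that are definitely not salient;
    connected-component labelling then yields the independent blocks, and each
    block is summarised by the length a_i and width b_i of its bounding box.
    """
    L_d = np.logical_and(L, np.logical_not(S))       # equation (9)
    labels, num = ndimage.label(L_d)
    regions = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        a_i = sl[1].stop - sl[1].start               # length (horizontal extent)
        b_i = sl[0].stop - sl[0].start               # width  (vertical extent)
        if a_i * b_i >= min_area:                    # S_i = a_i * b_i, equation (10)
            regions.append((sl, a_i, b_i))
    return regions
```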
(2) Acquiring the watermark adding position based on the spatial position constraint model
The shape of the watermark also affects the optimal adding position. As shown in FIG. 9, watermarks come in different shapes, some square and some rectangular, and different regions should be selected for watermarks of different shapes: for example, the four watermarks in the first row are square and should preferentially be added to a square background region, while the rectangular watermark in the second row should preferentially be added to a rectangular background region. For the watermark to be neither easily removed nor visually intrusive, it should be placed as close as possible to the salient object of the image, but not on top of it. The watermark size also determines the adding position: the larger the watermark, the larger the required background region and the harder it is to find a suitable region around the salient object. By acquiring the watermark size in advance, a more appropriate adding position can be selected.
The adding position of the watermark is obtained from the spatial position constraint model. First the initial watermark picture Wm is input and the length and width of the initial watermark, a_Wm and b_Wm, are determined. Let μ be the scale factor of the watermark size (generally μ < 1), so that the scaled watermark has length μ·a_Wm and width μ·b_Wm, and let β_i be the watermark addition coefficient of candidate region i, computed as

β_i = (a_i / (μ·a_Wm)) * (b_i / (μ·b_Wm)),    (11)

where a_i and b_i are the length and width of each candidate background region: β_i is the ratio of the background length to the watermark length multiplied by the ratio of the background width to the watermark width, and the larger the value of β_i, the easier it is to add the watermark to that block. The selection of the background region then becomes the constrained problem

max_i β_i  subject to  μ·a_Wm ≤ a_i,  μ·b_Wm ≤ b_i.    (12)

Solving this constrained problem yields the optimal watermark adding region. In the solving process, μ is first set to 1 and the optimal adding position is computed; if the optimal position does not satisfy μ·a_Wm ≤ a_L or μ·b_Wm ≤ b_L, μ is reduced step by step until the constraints are met. FIG. 10 gives an example of watermark adding positions based on salient object detection: the first column is the original image, the second column is the saliency ground-truth map, the third column is the result of the saliency detection algorithm based on sparse and low-rank matrix recovery, the fourth column is the result optimized by contour detection, and the fifth column is the resulting optimal watermark position map.
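The region selection of equations (11) and (12) can be sketched as follows, reusing the regions produced by the previous sketch; the shrink factor applied to μ is an assumed value, since the text only says that μ is reduced step by step.

```python
def select_watermark_position(regions, a_wm: int, b_wm: int, shrink: float = 0.9):
    """Pick the background block that maximises the addition coefficient beta_i
    of equation (11); if even the best block cannot hold the watermark, shrink
    the scale factor mu until the constraints of equation (12) are satisfied.
    `regions` is the output of background_regions above."""
    mu = 1.0
    while mu > 0.05:
        best, best_beta = None, 0.0
        for sl, a_i, b_i in regions:
            if a_i >= mu * a_wm and b_i >= mu * b_wm:               # constraint (12)
                beta_i = (a_i / (mu * a_wm)) * (b_i / (mu * b_wm))  # equation (11)
                if beta_i > best_beta:
                    best, best_beta = sl, beta_i
        if best is not None:
            return best, mu
        mu *= shrink                                                # reduce mu and retry
    return None, mu
```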
Step 3-2 method for selecting watermark size based on image saliency target
Most watermarks are irrelevant to the picture content, and the larger their area, the more they affect the viewing experience; if the watermark is too small, however, it is easy to remove and no longer fulfils its role of marking the source and preventing others from stealing the picture. In other words, for images with different foreground objects and different background regions, the watermarking system needs to add watermarks of different sizes, so the watermark size must be determined by a suitable selection strategy that balances image appearance against theft protection. A selection strategy for the watermark size based on the image salient target is designed: the salient target and the background region are obtained first, and the watermark size is then coordinated with the size of the salient target to obtain an appropriate watermark size.
Let S denote the set of salient-target pixels; the salient region points S_d are obtained as

S_d = S ∩ (I - L).    (13)

A contour segmentation algorithm is applied to these region points to sharpen the region boundary. To reduce the influence of the watermark on the appearance of the image, the scaling coefficient of the image watermark is further determined by comparing the size of the image salient target with the size of the image watermark; in general, the watermark should be no larger than the salient target. The watermark scaling coefficient μ is solved as

μ = min(a_S / a_Wm, b_S / b_Wm),    (14)

which yields the optimal scaling coefficient for the picture, where a_S and b_S are the length and width of the salient target and a_Wm and b_Wm are the length and width of the watermark.
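A one-line sketch of a scaling coefficient consistent with equation (14); the cap at 1.0 is an assumption to keep the watermark from being enlarged beyond its original size.

```python
def watermark_scale(a_S: int, b_S: int, a_wm: int, b_wm: int, max_ratio: float = 1.0):
    """Scale coefficient mu of equation (14): keep the scaled watermark no
    larger than the salient target in either dimension."""
    return min(max_ratio, a_S / a_wm, b_S / b_wm)
```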
Step 3-3 watermark adding method combining stylized watermark and background template
A block diagram of the stylized watermarking method based on salient object detection is shown in FIG. 11. Firstly, a salient object detection algorithm based on sparse and low-rank matrix recovery detects the foreground target and the background region of the image, and the salient target is analysed to obtain an adaptive watermark size, which reduces the watermark area and prevents an oversized watermark from spoiling the viewing experience. The most suitable adding position is obtained by analysing the background template of the image, so that the watermark is added at a position that is not easily noticed visually. The watermark is stylized with an image stylization algorithm so that its style is consistent with the style of the image it is added to, which prevents the image from being stolen and can even assist the expression of the image.
Combining the watermark size selection strategy and the watermark position selection strategy described above, the stylized watermark is added to the image; FIG. 12 shows the effect of each step of the method. As shown in the figure, the watermark style is changed by style extraction, content reconstruction, style reconstruction and style conversion against the image to which it will be added; the size and adding position of the watermark are optimized by the saliency-based detection method, visually reducing the stylistic clash between the watermark and the image. Combining these three steps increases the difficulty of automatic watermark removal, provides a certain degree of resistance to it, visually reduces the clash between the watermark and the original image, and offers a new idea for watermarking on image websites to prevent image theft.
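Finally, a sketch of how the pieces could be combined to paste the stylized watermark into the chosen background block; the alpha-blending rule, the opacity and the nearest-neighbour resize are assumptions, as the text does not fix the compositing step.

```python
import numpy as np

def add_watermark(image: np.ndarray, wm: np.ndarray, region_slice, mu: float,
                  opacity: float = 0.5) -> np.ndarray:
    """Paste the (stylized) watermark into the chosen background block by
    simple alpha blending. `region_slice` is the block returned by
    select_watermark_position; image and wm must have the same channel count."""
    out = image.astype(np.float32).copy()
    h = max(1, int(wm.shape[0] * mu))
    w = max(1, int(wm.shape[1] * mu))
    ys = (np.arange(h) * wm.shape[0] / h).astype(int)   # nearest-neighbour resize
    xs = (np.arange(w) * wm.shape[1] / w).astype(int)
    wm_small = wm[ys][:, xs]
    y0, x0 = region_slice[0].start, region_slice[1].start
    roi = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = (1 - opacity) * roi + opacity * wm_small
    return out.astype(image.dtype)
```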
References
[1] CHEN C-C, YEH H-C. Using dynamic pixel value mapping method to construct visible and reversible image watermarking scheme [J]. Multimedia Tools and Applications, 2018, 77(15): 19327-19346.
[2] DEKEL T, RUBINSTEIN M, LIU C, et al. On the Effectiveness of Visible Watermarks [C] // IEEE Conference on Computer Vision and Pattern Recognition. 2017: 6864-6872.
[3] GATYS L, ECKER A S, BETHGE M. Texture synthesis using convolutional neural networks [C] // Advances in Neural Information Processing Systems. 2015: 262-270.

Claims (3)

1. A stylized visible watermark adding method, which changes the watermark style by performing content extraction, style representation and image style conversion on the watermark; obtains the size and position of the salient target of the image to be watermarked with a salient object detection method, determines the size and adding position of the selected watermark, and adds the stylized watermark around the salient target; characterized by comprising the following steps:
step 1: a stylized image module is constructed, and content functions in the watermark image are extracted through local feature extraction and loss mean square error; and outputting the style representation of the acquired image through a gram matrix and a convolution layer aiming at the image to be added with the watermark. Updating the loss function by setting the weight of the image style and the weight of the image content, retaining the image content of the watermark, representing the content in the watermark by the style of the image to be watermarked, and outputting the stylized watermark image.
Step 1-1: image content extraction, wherein the image content is obtained by stacking several layers of a neural network, each layer using the output of the previous layer to extract progressively more complex features until these features can represent the image content, so that each layer can be regarded as a bank of local feature extractors.
Step 1-2: the image style representation is characterized in that a style matrix of an image is obtained through a feature space, the feature space is constructed on filter responses of any layer of a network and is composed of correlations among different filter responses, the style of the image can be reserved through a gram matrix, and the stable and multi-scale style representation of an input image is obtained through the characteristic correlation containing multiple layers.
Step 1-3: and image style conversion, namely, on the basis of reserving the content of the target image, referring the image style to the target image. The input image style is converted into the designated artistic style by initializing a Gaussian distribution image and minimizing the distance between the image style defined by a plurality of layers in the deep neural network and the image content representation.
Step 2: constructing a saliency detection module of the image, and obtaining a background template fused with semantic information by calculating an initial background template and image semantic information; and then, the background template is used as a background dictionary in a sparse and low-rank matrix recovery model, a significant image is obtained by solving in combination with structural constraints between the adjacent super-pixel blocks, the significant image is output, and a background region and a significant target of the image to be added with the watermark can be obtained through the significant image.
And step 3: the method comprises the steps of constructing a stylized watermark adding module, inputting an original image to be watermarked, a saliency map of the original image after being processed by a saliency detection module and a stylized watermark image after being processed by a stylized image module, acquiring a background area and a foreground target of the image by utilizing the saliency image, and selecting a watermark adding area based on a spatial position constraint model. And scaling and adding the watermark by combining the size of the image foreground object and the size of the watermark. And finally, adding the stylized watermark to the selected position to be added with the watermark, and outputting the image added with the watermark.
Step 3-1: firstly, preprocessing the salient object detection of the image to obtain a salient object and a background area; and selecting a proper watermark adding position according to the size of the saliency target and the size of the image watermark.
Step 3-2: a method for selecting the size of a watermark based on an image saliency target is designed, the saliency target and a background area are firstly obtained, and then the size of the watermark is matched to be proper according to the size of the saliency target and the size of the image watermark.
Step 3-3: and adding the stylized watermark into the image by combining the watermark adding method of the stylized watermark and the background template and combining the size selection strategy and the watermark position selection strategy of the watermark.
2. The stylized visible watermarking method according to claim 1, characterized by comprising the following steps of adding the stylized watermark at a suitable position of the image to be watermarked: A. designing a method for selecting the watermark adding position based on the image background region, in which the background region and the foreground region of the image are first obtained with an image saliency detection algorithm and the adding position of the watermark is then obtained with a spatial position constraint model; B. designing a method for selecting the watermark size based on the image salient target, in which the salient target and the background region are first obtained and the watermark size is then coordinated with the size of the salient target to obtain an appropriate watermark size; C. designing a watermark adding method combining the stylized watermark and the background template, in which the stylized watermark is added to the image to be watermarked by combining the watermark size selection strategy and the watermark position selection strategy.
3. The stylized visible watermarking method according to claim 2, characterized by comprising the following steps of acquiring the optimal watermark adding region and the optimal watermark size of the image with an image saliency detection algorithm: A. acquiring the background region and the foreground region of the image with the image saliency detection algorithm; B. constructing the watermark position adding method based on the spatial position constraint model from the requirements that the watermark size is smaller than the background space and the watermark shape matches the shape of the background space; C. further determining the scaling coefficient of the image watermark by comparing the size of the image salient target with the size of the image watermark, so as to obtain the optimal watermarked image.
CN201911020395.0A 2019-10-25 2019-10-25 Stylized visible watermark adding method Pending CN110796583A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911020395.0A CN110796583A (en) 2019-10-25 2019-10-25 Stylized visible watermark adding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911020395.0A CN110796583A (en) 2019-10-25 2019-10-25 Stylized visible watermark adding method

Publications (1)

Publication Number Publication Date
CN110796583A true CN110796583A (en) 2020-02-14

Family

ID=69441367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911020395.0A Pending CN110796583A (en) 2019-10-25 2019-10-25 Stylized visible watermark adding method

Country Status (1)

Country Link
CN (1) CN110796583A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903071A (en) * 2011-07-27 2013-01-30 阿里巴巴集团控股有限公司 Watermark adding method and system as well as watermark identifying method and system
CN108961350A (en) * 2018-07-17 2018-12-07 北京工业大学 One kind being based on the matched painting style moving method of significance
CN109636764A (en) * 2018-11-01 2019-04-16 上海大学 A kind of image style transfer method based on deep learning and conspicuousness detection
CN109754358A (en) * 2019-01-02 2019-05-14 东南大学 A kind of image watermark method and system based on conspicuousness detection and contourlet transform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENG DING et al.: "Visual tracking using Locality-constrained Linear Coding and saliency map for visible light and infrared image sequences", Signal Processing: Image Communication *
JIANG Feng et al.: "A Survey of Content-Based Image Segmentation Methods", Journal of Software *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626124A (en) * 2020-04-24 2020-09-04 平安国际智慧城市科技股份有限公司 OCR image sample generation method, OCR image sample generation device, OCR image sample printing body verification equipment and OCR image sample printing body verification medium
CN111626124B (en) * 2020-04-24 2024-06-11 平安国际智慧城市科技股份有限公司 OCR image sample generation and printing experience verification method, device, equipment and medium
CN111680340A (en) * 2020-05-28 2020-09-18 上海上咨工程造价咨询有限公司 Building material price information pushing method, system and device and storage medium thereof
CN111680340B (en) * 2020-05-28 2022-08-23 上海上咨工程造价咨询有限公司 Building material price information pushing method, system and device and storage medium thereof
CN112037109A (en) * 2020-07-15 2020-12-04 北京神鹰城讯科技股份有限公司 Improved image watermarking method and system based on saliency target detection
CN112330522A (en) * 2020-11-09 2021-02-05 深圳市威富视界有限公司 Watermark removal model training method and device, computer equipment and storage medium
CN112330522B (en) * 2020-11-09 2024-06-04 深圳市威富视界有限公司 Watermark removal model training method, device, computer equipment and storage medium
CN112700363A (en) * 2021-01-08 2021-04-23 北京大学 Self-adaptive visual watermark embedding method and device based on region selection
CN113284035A (en) * 2021-06-01 2021-08-20 江苏鑫合易家信息技术有限责任公司 System and method for generating dynamic picture with two-dimensional code watermark


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
DD01 Delivery of document by public notice (Addressee: Li Jing; Document name: Deemed withdrawal notice)
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20200214)