CN113379585B - Ceramic watermark model training method and embedding method for frameless positioning


Info

Publication number: CN113379585B (application CN202110700418.3A; prior publication CN113379585A)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: watermark, ceramic, image, loss function, region
Legal status: Active
Inventors: 王俊祥, 陈欣, 曾文超, 倪江群
Current and original assignee: Jingdezhen Ceramic Institute
Events: application filed by Jingdezhen Ceramic Institute; priority to CN202110700418.3A; publication of application CN113379585A; application granted; publication of CN113379585B

Classifications

    • G06T 1/0021 — Image watermarking (G Physics; G06 Computing; G06T Image data processing or generation, in general; G06T 1/00 General purpose image data processing)
    • C04B 41/50 — Coating or impregnating with inorganic materials (C Chemistry; C04B Lime, magnesia, slag, cements; C04B 41/00 After-treatment of mortars, concrete, artificial stone or ceramics; C04B 41/45 Coating or impregnating)
    • C04B 41/85 — Coating or impregnation with inorganic materials, of only ceramics (C04B 41/80 After-treatment of only ceramics; C04B 41/81 Coating or impregnation)
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting (G06T 3/00 Geometric image transformations in the plane of the image)
    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images


Abstract

The invention discloses a frameless-positioning ceramic watermark model training method and embedding method. They solve the problem that, after a watermark image is printed and photographed, it must be located and detected by adding a frame or similar means; at the same time, a frameless extraction algorithm is combined with the decoder network, so that the decoder can quickly locate the image and correctly extract the secret information embedded in the watermark image.

Description

Ceramic watermark model training method and embedding method for frameless positioning
Technical Field
The invention relates to the technical field of ceramics, in particular to a frameless positioning ceramic watermark model training method and an embedding method.
Background
As an important component of multimedia information, digital images are widely used today. With the popularization of smartphones and the development of the mobile internet, the number of digital images has grown explosively. A digital image is a digital asset with economic value, and as multimedia technology advances it becomes easier to steal or destroy images. Digital image watermarking, a technology for embedding information into an image, is an effective means of protecting an image's copyright or integrity, and various robust watermarking techniques are applied for copyright authentication and anti-counterfeiting tracing.
When copyright authentication is performed on an intelligent mobile terminal such as a mobile phone, the watermark image is usually located and corrected before being sent to a decoder for watermark detection, to ensure that the carriers at the embedding end and the extraction end are synchronized (that is, no geometric distortion occurs). Generally speaking, the more accurately the watermark image is located, the better carrier synchronization is maintained; and the smaller the resulting carrier distortion, the higher the decoding accuracy of the decoder. Image localization is therefore an essential link in watermark detection and an important step in ensuring that the decoder can accurately extract the watermark information.
At present, image localization relies mainly on edge detection, SIFT, SURF and similar techniques. With edge detection, an outer frame usually has to be added around the watermark image to assist localization during watermark extraction, as shown in figs. 1 and 2, and the background color of the watermark image is restricted to black and white, sacrificing its visual quality. SIFT and SURF require a matching-image library of the watermark images to be stored in advance; localization is performed by computing the number of feature points between the watermark image and its matching image and their matching rate. This increases detection time and prevents fast, effective, real-time detection. Moreover, in some scenarios it is impractical to pre-store a large number of image templates. Finding a fast and efficient image localization technique that does not affect the aesthetics of the image is therefore increasingly urgent.
Disclosure of Invention
In view of this, the embodiment of the present invention provides a frameless positioning ceramic watermark model training method and an embedding method.
According to a first aspect, an embodiment of the present invention provides a frameless positioning ceramic watermark model training method, including:
acquiring a training image, and intercepting a watermark adding area in the training image;
acquiring watermark information;
inputting the watermark information and the watermark adding area into a current encoder to generate a residual image, and obtaining an area watermark image according to the residual image and the watermark adding area;
calculating loss values of all loss functions in a loss function set between the regional watermark image and the watermark adding region, and adjusting the current encoder according to the loss values to obtain an updated encoder until all the loss functions in the loss function set reach a preset first convergence condition;
splicing the region watermark image with a residual region to obtain an integral watermark image, wherein the residual region is a region except the watermark adding region in the training image;
putting the whole watermark image into a preset noise layer for noise processing;
sending the whole watermark image subjected to noise processing into a current decoder for decoding to obtain secret information, and obtaining a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder until the cross entropy loss function reaches a preset second convergence condition.
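For illustration, the data flow of the steps above can be sketched in a few lines of Python. The encoder, noise layer and decoder below are stand-in stubs for the patent's convolutional networks, and all names and values are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(region, bits):
    # stub: returns a residual image of the same shape as the watermarking region
    return 0.01 * rng.standard_normal(region.shape)

def noise_layer(image):
    # stub for one of the listed attacks: additive Gaussian noise
    return image + 0.05 * rng.standard_normal(image.shape)

def decoder(image, n_bits):
    # stub: soft bit predictions in [0, 1]
    return np.clip(np.full(n_bits, image.mean()), 0.0, 1.0)

def training_step(train_img, bits, top, left, m):
    region = train_img[top:top+m, left:left+m]   # intercept the watermarking region
    residual = encoder(region, bits)             # encoder output (residual image)
    region_wm = region + residual                # region watermark image
    whole = train_img.copy()
    whole[top:top+m, left:left+m] = region_wm    # splice -> whole watermark image
    noisy = noise_layer(whole)                   # noise processing
    pred = decoder(noisy, len(bits))             # extracted secret information
    eps = 1e-9                                   # cross-entropy loss vs. embedded bits
    ce = -np.mean(bits * np.log(pred + eps) + (1 - bits) * np.log(1 - pred + eps))
    return whole, ce

img = rng.random((128, 128))
bits = rng.integers(0, 2, 100).astype(float)
whole, ce = training_step(img, bits, top=16, left=16, m=90)
```

In the real method the encoder and decoder weights would then be updated from these loss values until the two convergence conditions are met.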
With reference to the first aspect, in a first implementation manner of the first aspect, when an original size of the watermarking region does not conform to a standard size of an image required by the current encoder, before inputting the watermarking information and the watermarking region into the current encoder to generate a residual image, the method further includes:
transforming the original size of the watermarking region into the standard size;
before the region watermark image is spliced with the residual region, the method further comprises the following steps: transforming the region watermark image into the original size.
With reference to the first aspect, in a second implementation manner of the first aspect, the area of the watermarking region occupies at least M% of the area of the training image, where M is greater than or equal to 50.
With reference to the first aspect, in a third implementation manner of the first aspect, before adjusting the current encoder according to the loss value to obtain an updated encoder, the method further includes: obtaining a weight value of each loss function in the loss function set; the adjusting the current encoder according to the loss value to obtain an updated encoder includes: adjusting the current encoder by using the loss value and the corresponding weight value of each loss function in the loss function set to obtain an updated encoder;
and/or, before updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder, the method further includes: acquiring a weight value of the cross entropy loss function; the updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder comprises: and updating the current decoder by using the weight value of the cross entropy loss function and the loss value of the cross entropy loss function to obtain an updated decoder.
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the set of loss functions includes: an LPIPS loss function and an L2 loss function;
before a preset number of training steps, only the weight value of the cross entropy loss function is assigned;
after a preset number of steps, the weight value of the cross entropy loss function is greater than that of the L2 loss function, and the weight value of the L2 loss function is greater than that of the LPIPS loss function.
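The weighting schedule just described can be sketched as a small Python function. The concrete weight values and the function name are assumptions for illustration; the patent only states the ordering and the warm-up behavior:

```python
def loss_weights(step, warmup_steps=1000):
    """Hypothetical loss-weight schedule: before the preset step count only
    the cross-entropy term is active; afterwards w_ce > w_l2 > w_lpips."""
    if step < warmup_steps:
        return {"ce": 1.0, "l2": 0.0, "lpips": 0.0}
    # example values satisfying the stated ordering (not from the patent)
    return {"ce": 2.0, "l2": 1.0, "lpips": 0.5}

early = loss_weights(500)    # only cross-entropy weighted
late = loss_weights(2000)    # all three terms weighted, ce > l2 > lpips
```

Such a schedule lets the decoder first learn to extract bits at all before image-quality terms start constraining the encoder.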
According to a second aspect, an embodiment of the present invention provides a ceramic watermark model training apparatus without frame positioning, including:
the first acquisition module is used for acquiring a training image;
the second acquisition module is used for acquiring watermark information;
the intercepting module is used for intercepting a watermark adding area in the training image;
the watermark generating module is used for inputting the watermark information and the watermark adding area into a current encoder to generate a residual image and obtaining an area watermark image according to the residual image and the watermark adding area;
the first adjusting module is used for calculating loss values of all loss functions in a loss function set between the regional watermark image and the watermark adding region, adjusting the current encoder according to the loss values to obtain an updated encoder until all the loss functions in the loss function set reach a preset first convergence condition;
the splicing module is used for splicing the regional watermark image with a residual region to obtain an overall watermark image, wherein the residual region is a region of the training image except the watermark adding region;
the noise processing module is used for putting the whole watermark image into a preset noise layer for noise processing;
the second adjusting module is used for sending the whole watermark image subjected to noise processing into a current decoder for decoding to obtain secret information, and obtaining a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder until the cross entropy loss function reaches a preset second convergence condition.
With reference to the second aspect, in a first embodiment of the second aspect, the noise layer includes one or more of: geometric distortion, motion blur, color shift, Gaussian noise, and JPEG compression.
With reference to the first embodiment of the second aspect, in a second embodiment of the second aspect,
the distortion coefficient of the geometric distortion is less than 1;
and/or, the motion blur adopts a linear blur kernel, the pixel width of the linear kernel is not more than 10, and the linear angle is selected randomly within a range not exceeding π/2;
and/or, the offset value of the color shift is uniformly distributed in the range −0.2 to 0.3;
and/or, the compression quality factor of the JPEG compression is greater than 50.
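The parameter ranges above can be sampled as follows; the function name and the choice of uniform draws are assumptions (the patent only specifies the ranges):

```python
import math
import random

def sample_noise_params(seed=None):
    """Draw one random set of noise-layer parameters inside the stated ranges."""
    rnd = random.Random(seed)
    return {
        "warp_coeff":   rnd.uniform(0.0, 1.0),          # geometric distortion coefficient < 1
        "blur_width":   rnd.randint(1, 10),             # linear kernel width <= 10 px
        "blur_angle":   rnd.uniform(0.0, math.pi / 2),  # random angle, range <= pi/2
        "color_shift":  rnd.uniform(-0.2, 0.3),         # uniform offset in [-0.2, 0.3]
        "jpeg_quality": rnd.randint(51, 100),           # JPEG quality factor > 50
    }

p = sample_noise_params(seed=7)
```

A fresh draw per training batch keeps the simulated attacks varied, matching the "randomly valued in a certain range" behavior described later in the description.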
According to a third aspect, an embodiment of the present invention provides an encoder trained by the frameless-positioning ceramic watermark model training method described in the first aspect and its implementation manners.
According to a fourth aspect, an embodiment of the present invention provides a decoder trained by the frameless-positioning ceramic watermark model training method described in the first aspect and its implementation manners.
According to a fifth aspect, an embodiment of the present invention further provides a ceramic watermark embedding and encryption method, including:
respectively acquiring an original image and watermark information;
inputting the original image and the watermark information into the encoder of the third aspect for encoding to obtain an electronic watermark image;
and after the electronic watermark image is transferred to the ceramic prefabricated product, firing the ceramic prefabricated product to obtain the ceramic with the watermark image.
With reference to the fifth aspect, in a first embodiment of the fifth aspect, the transferring the electronic watermark image onto the ceramic preform includes:
inputting the electronic watermark image into a preset ceramic inkjet printing machine, and spraying ink onto the ceramic preform with the machine so as to transfer the electronic watermark image onto the ceramic preform;
or, generating a paper decal (transfer paper) from the electronic watermark image, and applying the decal to the ceramic preform to transfer the electronic watermark image onto it.
With reference to the fifth aspect, in a second embodiment of the fifth aspect: when the ceramic preform is a domestic ceramic preform, firing it at 800–1380 °C yields domestic ceramic with the watermark image; when the ceramic preform is a sanitary ceramic preform, firing it at 800–1380 °C yields sanitary ceramic; and when the ceramic preform is an architectural ceramic preform, firing it at 800–1380 °C yields architectural ceramic.
According to a sixth aspect, an embodiment of the present invention provides a method for decrypting a ceramic watermark, including:
positioning the watermark image on the ceramic;
and inputting the positioned watermark image into the decoder of the fourth aspect for decoding to obtain the watermark information in the watermark image.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
fig. 1 is a schematic diagram of a specific example of a watermark image;
fig. 2 is a schematic diagram of the watermark image of fig. 1 after being framed;
FIG. 3 is a schematic representation of the watermark image of FIG. 1 after geometric distortion;
FIG. 4 is a schematic diagram of the watermark image of FIG. 1 containing redundant portions;
FIG. 5 is a schematic structural diagram of a digital watermark model without frame positioning;
fig. 6 is a schematic flowchart of a ceramic watermark model training method without frame positioning in embodiment 1 of the present invention;
FIG. 7 is a schematic flow chart of the borderless extraction network;
FIG. 8 is a diagram illustrating a moving range of a center coordinate point;
FIG. 9 is a schematic diagram of an encoder network;
FIG. 10 is a perspective transformation diagram;
FIG. 11 is a schematic diagram of a decoder network;
fig. 12 is a schematic structural diagram of a frameless positioning ceramic watermark model training device in embodiment 2 of the present invention;
FIG. 13 is a flow chart of a method for making a ceramic watermark pattern based on an ink jet process;
FIG. 14 is a flow chart of a method of making a ceramic watermark pattern based on screen printing;
FIG. 15 is a schematic flow diagram of ceramic copyright encryption and decryption based on an inkjet process;
fig. 16 is a schematic flow chart of ceramic copyright encryption and decryption based on screen printing.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Research shows that when an intelligent mobile terminal (such as a mobile phone) performs copyright authentication, the following three phenomena occur:
phenomenon 1: when image capturing and copyright authentication are performed using a mobile terminal, a captured image may be geometrically distorted due to a non-parallel capturing angle with respect to a captured object, as shown in fig. 3.
Phenomenon 2: when a mobile terminal is used for image shooting and copyright authentication without frame positioning, there is no reference, so, unlike traditional algorithms with marker-based positioning (such as SIFT), the captured image cannot exactly match the original image content. To ensure that no image content is lost, content beyond the image's outer boundary must also be captured, as shown in fig. 4.
Phenomenon 3: when a mobile terminal is used for image shooting and copyright authentication, the two situations above introduce small errors when the captured image is decoded.
Based on this, embodiment 1 of the present invention provides a frameless positioning ceramic watermark model training method. Fig. 5 is a schematic structural diagram of a frameless positioned digital watermark model, fig. 6 is a schematic flow chart of a method for training a frameless positioned ceramic watermark model in embodiment 1 of the present invention, as shown in fig. 6, the method for training a frameless positioned ceramic watermark model in embodiment 1 of the present invention includes the following steps:
s101: and acquiring a training image, and intercepting a watermark adding area in the training image.
First, the Large Logo Dataset (LLD) is prepared as the training set, and training images are drawn from it.
In a specific embodiment, the area of the watermarking region occupies M% or more of the area of the training image, where M is greater than or equal to 50.
As a specific implementation manner, when the original size of the watermarking region does not conform to the standard size of the image required by the current encoder, before inputting the watermarking information and the watermarking region into the current encoder to generate a residual image, the method further includes: and transforming the original size of the watermarking area into the standard size.
Example 1: as shown in fig. 7, a square region picture (i.e., the watermarking region) with side length M is randomly cropped from a training image; the area of the square region must be at least 50% of the area of the training image. Watermark information will subsequently be embedded in the cropped picture (i.e., the watermarking region). To ensure that the square region picture matches the encoder network, it is scaled to the standard size of the image the encoder requires.
Specifically, the following steps can be adopted to intercept the watermark adding area in the training image:
1) Determine the side length M of the intercepted square region. The algorithm is as follows:
Set a parameter d with d < 0.5; it is the proportionality coefficient relating the side length of the intercepted square to the side length of the carrier image, and it ensures the intercepted region is not less than 50%. Let M be the side length of the intercepted square and N the side length of the carrier image. Then:
M = N × 2d
2) Determine the center point of the intercepted square region. The algorithm is as follows:
Set a parameter d′ with value range 0–0.1, the coordinate offset coefficient of the center point of the intercepted square region. Set a parameter modification1 with value range (−d′, d′), the offset coefficient along the image's horizontal axis, and a parameter modification2 with value range (−d′, d′), the offset coefficient along the image's vertical axis. Let the coordinates of the center point of the carrier image be (C, C). Then:
d′=(1-2d)÷2
modification1∈(-d′,d′)
modification2∈(-d′,d′)
The center coordinates of the intercepted square are (C + M × modification1, C + M × modification2). As shown in fig. 8, the center point of the intercepted square region moves within the small gray square, which guarantees the randomness of the intercepted square region.
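The formulas above translate directly into code. This is a minimal sketch under the patent's formulas (the function name and the uniform draw for the offset coefficients are assumptions):

```python
import random

def crop_square(n, d, seed=None):
    """Compute side length and centre of the intercepted square:
    M = N*2d, d' = (1-2d)/2, centre offset by M*modification."""
    rnd = random.Random(seed)
    m = int(n * 2 * d)              # side length of the intercepted square
    dp = (1 - 2 * d) / 2            # offset coefficient range d'
    mod1 = rnd.uniform(-dp, dp)     # horizontal offset coefficient
    mod2 = rnd.uniform(-dp, dp)     # vertical offset coefficient
    c = n / 2                       # centre of the carrier image
    return m, (c + m * mod1, c + m * mod2)

m, (cx, cy) = crop_square(n=400, d=0.45, seed=1)
```

With d = 0.45 and N = 400 this gives M = 360, and the offset range keeps the whole square inside the carrier image.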
S102: and acquiring watermark information.
In embodiment 1 of the present invention, the watermark information may be a binary watermark sequence, which may be reshaped to the same size as any training image in the LLD data set described in step S101.
S103: and inputting the watermark information and the watermark adding area into a current encoder to generate a residual image, and obtaining an area watermark image according to the residual image and the watermark adding area.
Specifically, the encoder takes the watermarking-region image and the watermark information as input and outputs a residual image; pixel-wise addition of the residual image and the watermarking-region image yields the region watermark image containing the secret information. As shown in fig. 9, the encoder model comprises a down-sampling convolution part and an up-sampling convolution part: the down-sampling part performs convolutional feature extraction on the watermarking-region image to form a high-dimensional feature volume, and the up-sampling part adds the features of each down-sampling layer to its up-sampled input, gradually restoring image detail until a residual image of the same size as the watermarking-region image is formed.
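The pixel-wise addition step can be shown in isolation. This is an illustrative sketch; clipping to the valid pixel range is an assumption not stated in the patent:

```python
import numpy as np

def apply_residual(region, residual):
    """Pixel-wise addition of the encoder's residual to the watermarking
    region, clipped to the [0, 1] pixel range (clipping is an assumption)."""
    return np.clip(region + residual, 0.0, 1.0)

rng = np.random.default_rng(0)
region = rng.random((64, 64, 3))                     # watermarking-region image
residual = 0.02 * rng.standard_normal((64, 64, 3))   # small, near-invisible residual
region_wm = apply_residual(region, residual)         # region watermark image
```

Because the residual is small, the watermarked region stays visually close to the original region, which is exactly what the first convergence condition demands.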
S104: and calculating the loss value of each loss function in the loss function set between the regional watermark image and the watermark adding region, and adjusting the current encoder according to the loss value to obtain an updated encoder.
Further, the updated encoder is taken as the current encoder and the procedure returns to the step of acquiring a training image; encoder training is completed by iterating steps S101, S102, S103 and S104 until each loss function in the loss function set reaches a preset first convergence condition. Specifically, the first convergence condition may be that the region watermark image, obtained by adding the residual image generated by the encoder to the watermarking-region image, is virtually indistinguishable from the watermarking-region image to the naked eye.
S105: and splicing the region watermark image with a residual region to obtain an integral watermark image, wherein the residual region is a region except the watermark adding region in the training image.
As a specific implementation manner, when in step S101 the original size of the watermarking region does not conform to the standard size of the image required by the current encoder, the original size of the watermarking region is first converted to the standard size before the watermark information and the watermarking region are input into the current encoder to generate a residual image; correspondingly, before the region watermark image is spliced with the remaining region, it is converted back to the original size.
Continuing example 1, the scaled square region image is fed into the encoder network to generate a region watermark image, which is then restored to M × M size and spliced with the remaining part of the training image to produce an overall watermark image of the same size as the training image. Finally, the whole watermark image, after passing through the noise layer network, is fed into the decoder to extract the secret information. Because information is embedded in the square region image (not less than 50% of the carrier image's area) while the full training image is decoded, this well simulates the attack scenario in which the embedded image contains redundant boundaries.
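The restore-and-splice step can be sketched as follows. Nearest-neighbour resizing and the function names are simplifying assumptions for illustration:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a square image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def splice(train_img, region_wm, top, left, m):
    """Restore the region watermark image to M x M and splice it back
    into the training image to form the whole watermark image."""
    out = train_img.copy()
    out[top:top+m, left:left+m] = resize_nn(region_wm, m)
    return out

train_img = np.zeros((100, 100))
region_wm = np.ones((64, 64))   # encoder output at an assumed 64x64 standard size
whole = splice(train_img, region_wm, top=10, left=10, m=80)
```

The remaining region of the training image is left untouched, so only the intercepted square carries the embedded information.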
S106: and putting the whole watermark image into a preset noise layer for noise processing.
In embodiment 1 of the present invention, in order to make the watermark image withstand the distortion in the printing or shooting process, a noise layer capable of simulating a real physical scene is designed between an encoder and a decoder, so as to simulate various noises that may exist in the watermark image in the ceramic manufacturing process. When embedding copyright watermark information, the encoder needs to ensure the visual consistency of the output watermark pattern and the original input pattern as much as possible so as to ensure the final ceramic presentation effect.
Based on this mechanism, the generative adversarial digital watermark model can, on the one hand, produce a robust watermark image that resists ceramic-making attacks and, on the other hand, ensure the visual invisibility of the embedded watermark. To ensure the concrete implementation of this technology, the following focuses on the design of a noise layer resistant to ceramic processes.
In the process of transferring the ceramic watermark pattern onto ceramic, every procedure produces a noise attack that strongly affects whether the decoding network can correctly extract the watermark information, so the noise attack caused by each procedure must be simulated. Specifically: in step 1, the ceramic watermark pattern undergoes a JPEG compression operation when the corresponding AI document is produced. In step 2, when the ceramic watermark pattern passes through the printing process it is exposed to chemical agents, which affect its brightness, contrast, color and hue to some extent. In step 3, color matching is divided into manual and machine color matching: when the ceramic watermark pattern contains more than four colors, manual color matching is required, which can cause color deviation in the ceramic pattern, whereas the color shift produced by machine color matching is negligible thanks to its precision. Based on this analysis, the invention builds a noise layer network capable of simulating all process attacks, comprising geometric distortion, motion blur, color shift, Gaussian noise and JPEG compression. Motion blur and geometric distortion mainly simulate the noise attack introduced when the ceramic watermark pattern is photographed for copyright authentication. These five attack noises take random values within certain ranges and fully simulate the noise attacks arising while the electronic plate of the ceramic watermark image is transferred into the paper plate. Combining this noise layer, designed for the screen printing process, with the generative adversarial digital watermark model algorithm ensures the feasibility of a ceramic watermark authentication framework based on screen printing.
The following mainly describes the noise layer design for the ceramic copyright certification process based on inkjet printing. In essence, the inkjet technique pre-stores the ceramic watermark image in an automatic inkjet computer, which performs color matching according to the image and then paints it onto the ceramic carrier. The inkjet printer may introduce certain color errors during color matching, which affect the color and hue of the ceramic watermark pattern. Furthermore, since the color pigments are drawn directly onto the ceramic carrier, the influence of the carrier material itself on the pigments, including brightness, contrast, color and hue, cannot be neglected. And since the copyright-verification stage follows, geometric distortion and motion blur must also be considered. The noise layer attacks for the inkjet process are therefore mainly: geometric distortion, motion blur, color shift, and Gaussian noise. These four attack noises take random values within certain ranges and fully simulate the noise attacks on a ceramic watermark image drawn on a ceramic carrier. Combining this noise layer, designed for the inkjet printing process, with the generative adversarial digital watermarking algorithm ensures the feasibility of a ceramic watermark authentication framework based on inkjet printing.
Because the ceramic trademark printing (firing) process takes place at high temperatures of 700-1100 ℃, the pigments used for ceramic coloring are greatly affected by temperature, humidity and the kiln atmosphere, so the image distortion is also large and the contrast, saturation and hue of the watermark image are distorted over a wider range. In addition, during ceramic screen printing the ink is affected by temperature and a certain degree of color shift also occurs. To ensure that the watermark information can still be extracted without loss after such image distortion, a noise layer is constructed between the encoder and the decoder. The noise layer is constructed mainly to simulate the attacks the ceramic firing process may suffer, i.e., the distortions possibly caused by the ceramic printing and shooting processes are measured and analyzed empirically. The noise layer network mainly comprises: geometric distortion, motion blur, color transformation, noise attack and JPEG compression, wherein geometric distortion and motion blur simulate the attacks suffered during shooting; the intensity of each noise attack is a random value, and its value range is set according to environmental changes.
Because the ceramic trademark suffers a stronger color transformation attack during firing, the network sets a larger value range for the color transformation attack. The ceramic watermark image can only be attached to the ceramic by high-temperature firing, and practical firing experience shows that the pattern (watermark image) attached to the ceramic takes on a certain color cast after high-temperature firing. The color transformation suffered by the ceramic watermark image at high temperature is therefore strong, so the range used to simulate the color attack is enlarged accordingly, ensuring that the decoder can still correctly extract the secret information from a ceramic watermark image subjected to a color attack of this intensity.
Specifically, in order to simulate phenomenon 1 above (when image capture and copyright authentication are performed with a mobile terminal, the captured image is geometrically distorted because the shooting angle is not parallel to the subject), a perspective transformation attack (Perspective Transformation) is set in the noise layer, as shown in fig. 10. Setting a perspective transformation attack in the noise layer fully simulates the geometric distortion of the captured image caused by non-parallel shooting angles, and this attack is added to the network training. The decoding end of the training network is thereby forced to successfully extract the secret information from geometrically distorted images.
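A perspective transformation can be simulated by randomly jittering the four image corners and warping through the resulting homography. The numpy sketch below is an illustration of the general technique (direct linear solve for the 3×3 matrix, nearest-neighbour inverse warping), not the invention's specific implementation; the ±4-pixel corner jitter is an assumed range.

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h33 = 1

def warp(img, H):
    """Inverse-map every output pixel through H (nearest-neighbour sketch)."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, sw = Hinv @ pts
    sx, sy = np.rint(sx / sw).astype(int), np.rint(sy / sw).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

rng = np.random.default_rng(1)
img = rng.uniform(0, 1, (64, 64))
corners = [(0, 0), (63, 0), (63, 63), (0, 63)]
jittered = [(x + rng.uniform(-4, 4), y + rng.uniform(-4, 4)) for x, y in corners]
warped = warp(img, homography(corners, jittered))
print(warped.shape)  # (64, 64)
```

Applying such a random warp inside the noise layer during training forces the decoder to tolerate the non-parallel shooting angles encountered in real authentication.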
S107: sending the whole watermark image subjected to noise processing into a current decoder for decoding to obtain secret information, and obtaining a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value.
Further, the updated decoder is used as the current decoder, the step of "acquiring a training image" is returned, and the training of the decoder is completed through the iteration of the steps S101, S102, S103, S105, S106, and S107 until the cross entropy loss function reaches a preset second convergence condition, where the specific second convergence condition may be that the whole watermark image after passing through the noise layer can be correctly extracted by the decoder.
The decoder model is shown in fig. 11; it contains a downsampling convolution module and a fully connected layer module. The downsampling module performs convolution calculations on the watermark image and extracts watermark features to form a watermark information feature map; the fully connected layer further compresses the feature map and converts it into a binary bit sequence, thereby realizing secret information extraction.
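The decoder's downsample-then-project structure can be sketched in a few lines of numpy. This is a toy stand-in, not the trained model of fig. 11: strided convolutions are replaced by average pooling, the fully connected weights are random, and the 100-bit output length is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def downsample_block(x, stride=2):
    """Stand-in for a strided convolution: 2x2 average pooling."""
    h, w = x.shape[0] // stride * stride, x.shape[1] // stride * stride
    x = x[:h, :w]
    return x.reshape(h // stride, stride, w // stride, stride).mean(axis=(1, 3))

def decode(img, fc_weights):
    """Downsample, flatten, project through a fully connected layer, threshold to bits."""
    feat = img
    for _ in range(3):                      # three downsampling stages: 64 -> 8
        feat = downsample_block(feat)
    logits = feat.ravel() @ fc_weights      # fully connected layer
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> probabilities in (0, 1)
    return (probs > 0.5).astype(int)        # binary bit sequence

img = rng.uniform(0, 1, (64, 64))
fc = rng.normal(0, 1, (8 * 8, 100))         # untrained random weights (illustrative)
bits = decode(img, fc)
print(bits.shape)  # (100,)
```

In the real model the pooling stages are learned convolutions and the weights come from training, but the shape of the computation, image in, fixed-length bit sequence out, is the same.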
It should be noted that in embodiment 1 of the present invention, when the encoder is updated, the quality of the generated watermark image is forced to improve, which increases the decoding difficulty; the resulting drop in the decoder's accuracy in turn drives an improvement in its decoding capability. As the decoding capability improves, the quality of the watermark image generated by the encoder is pressed down again, so the encoder and the decoder both improve as the two sides contend.
As a further implementation, before adjusting the current encoder according to the loss value to obtain an updated encoder, the method further includes: and acquiring the weight value of each loss function in the loss function set. Further, the adjusting the current encoder according to the loss value to obtain an updated encoder includes: adjusting the current encoder by using the loss value and the corresponding weight value of each loss function in the loss function set to obtain an updated encoder;
as a further implementation, before updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder, the method further includes: and acquiring the weight value of the cross entropy loss function. Further, the updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder includes: and updating the current decoder by using the weight value of the cross entropy loss function and the loss value of the cross entropy loss function to obtain an updated decoder.
As a specific embodiment, the set of loss functions includes an LPIPS loss function ($Loss_{lpips}$) and an L2 loss function ($Loss_{L2}$). The $Loss_{L2}$ loss function is expressed as follows:

$$Loss_{L2}=\frac{1}{H\times W}\left[W_Y\sum\left(Y_O'-Y_w'\right)^2+W_U\sum\left(U_O'-U_w'\right)^2+W_V\sum\left(V_O'-V_w'\right)^2\right]$$

wherein $Y_O', U_O', V_O'$ are the Y, U, V channel components obtained by converting the original image to the YUV color space, $Y_w', U_w', V_w'$ are the Y, U, V channel components obtained by converting the watermark image to the YUV color space, $W_Y, W_U, W_V$ denote the weights on the three YUV channels, $H$ denotes the height of the image and $W$ denotes the width of the image.
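The weighted YUV L2 loss described above can be sketched in numpy. The BT.601 conversion matrix and the channel weights (1, 100, 100) are illustrative assumptions, not values fixed by the invention; weighting U and V more heavily pushes embedding distortion into the luminance channel, where it is less visible.

```python
import numpy as np

def rgb_to_yuv(img):
    """BT.601-style RGB -> YUV conversion (one common convention; an assumption here)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

def l2_yuv_loss(original, watermarked, w=(1.0, 100.0, 100.0)):
    """Per-channel squared error in YUV, channel-weighted and normalised by H*W."""
    h, wid = original.shape[:2]
    diff2 = (rgb_to_yuv(original) - rgb_to_yuv(watermarked)) ** 2
    return sum(wc * diff2[..., c].sum() for c, wc in enumerate(w)) / (h * wid)

rng = np.random.default_rng(3)
a = rng.uniform(0, 1, (32, 32, 3))
print(l2_yuv_loss(a, a))  # identical images -> 0.0
```

Calling the loss with the original image and a slightly perturbed copy yields a small positive value, which is what the encoder update minimises.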
The $Loss_{lpips}$ loss function is expressed as follows:

$$\mathrm{img\_diff}=\frac{f_{lpips}(\mathrm{image\_input})-f_{lpips}(\mathrm{encoded\_image})}{H\times W}$$

wherein $f_{lpips}$ denotes a scoring function of image quality based on human vision (a higher scoring coefficient indicates better image quality), $H$ and $W$ respectively denote the height and width of the image, and image_input and encoded_image respectively denote the original carrier image and the watermark image.
The $Loss_{secret}$ loss function is a cross entropy loss function. $X$ and $Y$ respectively denote the watermark sequence input to the encoding network and the watermark sequence output by the decoding network, $X$ being a binary sequence of 0s and 1s and $Y$ being probabilities between 0 and 1:

$$Loss_{secret}=-\frac{1}{N}\sum_{i=1}^{N}\left[X_i\log Y_i+\left(1-X_i\right)\log\left(1-Y_i\right)\right]$$

where $N$ is the length of the watermark sequence.
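A minimal numpy sketch of this binary cross entropy over the bit sequence; the epsilon clipping is an assumed numerical guard against log(0), not part of the formula itself.

```python
import numpy as np

def secret_loss(x_bits, y_probs, eps=1e-12):
    """Mean binary cross entropy between target bits X and predicted probabilities Y."""
    x = np.asarray(x_bits, float)
    y = np.clip(np.asarray(y_probs, float), eps, 1 - eps)  # guard against log(0)
    return float(-np.mean(x * np.log(y) + (1 - x) * np.log(1 - y)))

x = np.array([1, 0, 1, 1])
good = secret_loss(x, np.array([0.9, 0.1, 0.8, 0.95]))  # confident, correct decoder
bad = secret_loss(x, np.array([0.5, 0.5, 0.5, 0.5]))    # uninformative decoder
print(good < bad)  # True
```

Minimising this loss drives the decoder's output probabilities toward the embedded bit values.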
In order to better adjust the network model and control the balance between the visual quality of the watermark image and the robustness of the watermark extraction network, a corresponding weight value is added to each loss function. With $W_{L2}$, $W_{lpips}$ and $W_{secret}$ representing the respective weight values, the total loss function is expressed as follows:

$$Loss_{total}=W_{L2}\cdot Loss_{L2}+W_{lpips}\cdot Loss_{lpips}+W_{secret}\cdot Loss_{secret}$$
In order to ensure the convergence of the encoder and the decoder, suitable network training techniques are required in addition to well-designed loss functions. These are explained as follows: before a preset number of steps, only the weight value of the cross entropy loss function is assigned a non-zero value; after the preset number of steps, the weight value of the cross entropy loss function is greater than that of the L2 loss function, and the weight value of the L2 loss function is greater than that of the LPIPS loss function.
For example, the preset number of steps is 2500 to 5000; that is, only the cross entropy loss function is trained before step 2500 to 5000.
In addition, when the square watermark adding region is intercepted, its area is gradually reduced to prevent the network from collapsing. The decoding accuracy of the final model can reach 93%. Therefore, when the network is trained, the decoding rate of the decoder network is trained first, ensuring that the decoder can correctly extract the watermark information, and only then is the visual quality (imperceptibility) of the watermark image improved.
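The two-phase weight schedule can be sketched as a small step-dependent function. The concrete weight values (1.5, 1.0, 0.5) are illustrative assumptions chosen only to satisfy the stated ordering W_secret > W_L2 > W_lpips; the warmup threshold of 2500 steps matches the example range given above.

```python
def loss_weights(step, warmup=2500, w_secret=1.5, w_l2=1.0, w_lpips=0.5):
    """Before `warmup` steps only the cross-entropy weight is non-zero;
    afterwards the ordering w_secret > w_l2 > w_lpips is maintained."""
    if step < warmup:
        return {"secret": w_secret, "l2": 0.0, "lpips": 0.0}
    return {"secret": w_secret, "l2": w_l2, "lpips": w_lpips}

def total_loss(losses, step):
    """Weighted sum of the individual loss terms at the given training step."""
    w = loss_weights(step)
    return sum(w[k] * losses[k] for k in losses)

losses = {"secret": 0.2, "l2": 0.05, "lpips": 0.1}
print(total_loss(losses, step=100))    # warmup: only the cross-entropy term counts
print(total_loss(losses, step=6000))   # after warmup: all three terms contribute
```

This reproduces the training trick described above: decoding accuracy is established first, and image quality terms are switched on only once extraction works.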
In this example, the picture training set LLD used in network training contains 130,000 images in total, the total number of training steps is about twenty to thirty thousand, and eight to sixteen pictures are trained in each batch.
The invention integrates the positioning and the information extraction of the watermark image, provides a deep-learning-based network design for frameless watermark image positioning detection, and solves the problem that the watermark image must otherwise be located by framing or similar means after being printed and shot. In addition, the frameless positioning information extraction network is sufficiently robust: when watermark information is extracted from a watermark image in a real environment, the demands on the photographer are reduced, and the secret information can be extracted as long as most of the watermark image area (more than 50%) is captured.
As shown in fig. 5, the digital watermark model for borderless positioning includes an encoder, a decoder, a noise layer network, and a borderless positioning algorithm (mechanism). The encoder embeds the secret information into a carrier picture to generate an image containing watermark information (hereinafter referred to as the watermark image). The noise layer simulates the noise attacks suffered by the watermark image during printing and shooting, including perspective transformation, brightness noise, saturation noise, chroma noise, Gaussian noise and JPEG compression noise attacks. The decoder extracts the secret information from the watermark image after it passes through the noise layer. The frameless extraction algorithm fully simulates the frameless positioning of the watermark image in a real environment. By virtue of the accurate positioning of this algorithm, the decoder can on the one hand locate the watermark image without a frame and on the other hand accurately extract the secret information embedded in it.
Example 2
Corresponding to embodiment 1 of the present invention, embodiment 2 of the present invention provides a frameless positioning ceramic watermark model training apparatus, fig. 12 is a schematic structural diagram of the frameless positioning ceramic watermark model training apparatus according to embodiment 2 of the present invention, and as shown in fig. 12, the frameless positioning ceramic watermark model training apparatus according to embodiment 2 of the present invention includes a first obtaining module 20, a second obtaining module 21, an intercepting module 22, a watermark generating module 23, a first adjusting module 24, a splicing module 25, a noise processing module 26, and a second adjusting module 27.
A first obtaining module 20, configured to obtain a training image.
A second obtaining module 21, configured to obtain watermark information.
An intercepting module 22, configured to intercept a watermark adding area in the training image;
and the watermark generating module 23 is configured to input the watermark information and the watermark adding area into a current encoder to generate a residual image, and obtain an area watermark image according to the residual image and the watermark adding area.
A first adjusting module 24, configured to calculate a loss value of each loss function in a loss function set between the regional watermark image and the watermark adding region, and adjust the current encoder according to the loss value to obtain an updated encoder until each loss function in the loss function set reaches a preset first convergence condition.
And a splicing module 25, configured to splice the region watermark image with a remaining region to obtain an overall watermark image, where the remaining region is a region of the training image except the watermark adding region.
A noise processing module 26, configured to place the overall watermark image in a preset noise layer for noise processing;
a second adjusting module 27, configured to send the whole watermark image subjected to noise processing to a current decoder for decoding to obtain secret information, and obtain a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder until the cross entropy loss function reaches a preset second convergence condition.
The specific details of the digital watermark model training apparatus can be understood by referring to the corresponding related descriptions and effects in the embodiments shown in fig. 1 to fig. 11, which are not described herein again.
Example 3
Embodiment 3 of the present invention provides an encoder, which is obtained by training using the frameless positioning ceramic watermark model training method described in embodiment 1 of the present invention.
Example 4
Embodiment 4 of the present invention provides a decoder, which is obtained by training with the frameless positioning ceramic watermark model training method described in embodiment 1 of the present invention.
Example 5
Embodiment 5 of the invention provides a secret-embedding method for ceramic. The secret-embedding method for ceramic in embodiment 5 of the invention comprises the following steps:
s501: and respectively acquiring an original image and watermark information.
S502: and inputting the original image and the watermark information into an encoder of embodiment 3 of the present invention to encode, so as to obtain an electronic watermark image.
S503: and after the electronic watermark image is transferred to the ceramic prefabricated product, firing the ceramic prefabricated product to obtain the ceramic with the watermark image.
As specific embodiments, the electronic watermark image can be transferred to the ceramic preform in two ways: inputting the electronic watermark image into a preset ceramic ink-jet printing machine, and ink-jetting the ceramic preform with the ceramic ink-jet printing machine so as to transfer the electronic watermark image onto the ceramic preform; or generating paper-plate stained paper (decal paper) according to the electronic watermark image, and laying the stained paper on the ceramic preform so as to transfer the electronic watermark image onto the ceramic preform.
In a specific embodiment, when the ceramic preform is a daily-use ceramic preform, firing the ceramic preform to obtain a ceramic with a watermark pattern comprises: firing the daily-use ceramic preform at 800-1380 ℃ to obtain daily-use ceramic; when the ceramic preform is a sanitary ceramic preform, firing the ceramic preform to obtain a ceramic with a watermark pattern comprises: firing the sanitary ceramic preform at 800-1380 ℃ to obtain sanitary ceramic; when the ceramic preform is an architectural ceramic preform, firing the ceramic preform to obtain a ceramic with a watermark pattern comprises: firing the architectural ceramic preform at 800-1380 ℃ to obtain architectural ceramic.
For example, fig. 13 is a flow chart of a method for manufacturing a ceramic watermark pattern based on an inkjet process, and as shown in fig. 13, a ceramic electronic trademark or pattern is first given, copyright watermark information is embedded into the electronic trademark (or pattern) by using a robust watermarking technology based on a digital image, so as to form a trademark containing the copyright information, then the trademark containing the copyright information is sent to a ceramic inkjet injector to color a ceramic carrier, and then the colored ceramic carrier is sent to a kiln to be fired at a high temperature, so as to finally form the ceramic carrier containing the copyright information. Fig. 14 is a flow chart of a method for making a ceramic watermark pattern based on screen printing, and as shown in fig. 14, an electronic version ceramic trademark or pattern is firstly given, copyright information is embedded according to a robust watermarking technology, and an electronic version trademark pattern containing the copyright information is formed. Then, generating paper plate stained paper (a special paper for decorating ceramic) by relying on the electronic plate watermark picture, wherein the forming of the paper plate stained paper comprises the following procedures: making plate with stained paper, printing plate, mixing colors and making sample. Then the paper pattern paper containing copyright information is spread on the ceramic and is put into a kiln for firing. Finally, the patterns of the copyrighted stained paper fired by the kiln can be completely transferred to the ceramic, so that the copyright protection of the ceramic is realized.
Further, for phenomenon 3): even when the two attack simulation schemes above are adopted and the decoding network is forced by training to extract the secret information accurately, a few bit errors still occur in practice, i.e. the extracted secret information is not completely consistent with the originally embedded information. The invention therefore applies BCH error correction coding to the secret information at the embedding end, ensuring that the secret information can be correctly extracted even if a few bit errors occur at the decoding end, and improving the accuracy of the decoder. A BCH code is a linear block code over a finite field with the capability of correcting multiple random errors; here a binary BCH code is used, so the computation is mainly binary.
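The error-correction principle can be illustrated with the Hamming(7,4) code, which is the simplest binary BCH code (correcting one bit error per 7-bit block). This is a toy stand-in for the stronger BCH code the invention would actually use, shown only to make the encode-corrupt-correct cycle concrete.

```python
import numpy as np

# Hamming(7,4) systematic generator and parity-check matrices (data bits first).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.asarray(data4) @ G) % 2

def decode(code7):
    """Correct up to one bit error, then strip the parity bits."""
    code7 = np.asarray(code7).copy()
    syndrome = (H @ code7) % 2
    if syndrome.any():                       # locate the bad bit and flip it
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        code7[err] ^= 1
    return code7[:4]                         # systematic code: data bits first

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[2] ^= 1                                   # simulate one channel (decoding) error
print(decode(cw).tolist())                   # -> [1, 0, 1, 1]
```

A real BCH code works the same way at a larger scale, using Galois-field arithmetic to correct several random bit errors per block, which is why a few decoder mistakes no longer corrupt the recovered copyright information.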
The frameless image positioning network design based on deep learning provided by the embodiment of the invention can realize frameless positioning detection of the watermark image, so that the application range of ceramic copyright authentication can be enlarged, such as large-scale building ceramic use, and the efficient and rapid extraction of watermark information is ensured.
Example 6
Embodiment 6 of the invention provides a decryption method for a ceramic watermark. The decryption method for the ceramic watermark pattern in embodiment 6 of the invention comprises the following steps:
s601: the watermark pattern on the ceramic is positioned.
S602: and inputting the positioned watermark pattern into a decoder of the embodiment 4 of the invention for decoding to obtain the watermark information in the watermark pattern.
As a specific implementation manner, the decryption method of the ceramic watermark pattern may adopt the following technical scheme: first, the watermark pattern on the ceramic product is located and captured by a high-precision scanner or a camera; the captured picture is then size-corrected and sent to a mobile phone or computer, where the copyright information is extracted by means of a robust watermark extraction algorithm. Finally, the copyright information content is compared to judge whether the ceramic is infringed, thereby achieving copyright authentication.
For example, the copyright information content can be arbitrarily designed to form a watermark according to the intention of an author, such as the name of the author, company information, brand name, ceramic number and the like, so as to prove that the ceramic copyright belongs to. And then embedding the watermark into a ceramic trademark or pattern prepared in advance by using a robust watermark algorithm to obtain an electronic version watermark picture containing the watermark. Fig. 15 is a schematic diagram of a process of encrypting and decrypting a ceramic copyright based on an ink-jet process, wherein if the ink-jet process is adopted, an electronic version watermark picture is directly sent to a ceramic ink-jet machine to print and color a ceramic carrier, and then the ceramic carrier is sent to a kiln to be fired at 1170 ℃ to obtain a ceramic product containing copyright information. Fig. 16 is a schematic flow chart of encryption and decryption of ceramic copyright based on screen printing, in case of screen printing process, the electronic version watermark picture is further processed through the steps of pattern making of stained paper, plate burning, color mixing, sample preparation and the like to form a paper version watermark picture, then the ceramic technology of overglaze, overglaze and underglaze is selected according to different application scenes of the ceramic product, the manufactured paper version watermark picture and the ceramic carrier are put into a kiln together for burning after the corresponding ceramic technology is selected, and finally the ceramic product containing copyright information is obtained.
After a customer purchases a ceramic product, the copyright information can be verified through the following steps:
First, the trademark or pattern on the ceramic product is located and captured by a high-precision scanner or a camera, and the captured picture is size-corrected; the corrected picture is then loaded into a mobile phone or computer on which a robust watermark extraction algorithm has been deployed to extract the copyright information; finally, the copyright information content is compared to judge whether the ceramic product is infringed, thereby achieving copyright authentication.
Example 7
Embodiments of the present invention further provide an electronic device, which may include a processor and a memory, where the processor and the memory may be connected by a bus or in another manner.
The processor may be a Central Processing Unit (CPU). The Processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or a combination thereof.
The memory is a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules (for example, the obtaining module 20, the watermark generating module 21, the first adjusting module 22, the amplifying module 23, the noise processing module 24, and the second adjusting module 25 shown in fig. 8) corresponding to the borderless positioning ceramic watermark model training method in the embodiment of the present invention, and the processor executes various functional applications and data processing of the processor by running the non-transitory software program, instructions, and modules stored in the memory, so as to implement the training method of the digital watermark model in the above-described method embodiment.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and when executed by the processor, perform a method of training a digital watermark model as in the embodiments of fig. 1-7.
The details of the electronic device may be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to fig. 8, and are not described herein again.
Those skilled in the art will appreciate that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and the processes of the embodiments of the methods described above can be included when the computer program is executed. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (13)

1. A ceramic watermark model training method without frame positioning is characterized by comprising the following steps:
acquiring a training image, and intercepting a watermark adding area in the training image;
acquiring watermark information;
inputting the watermark information and the watermark adding area into a current encoder to generate a residual image, and obtaining an area watermark image according to the residual image and the watermark adding area;
calculating loss values of all loss functions in a loss function set between the regional watermark image and the watermark adding region, and adjusting the current encoder according to the loss values to obtain an updated encoder until all the loss functions in the loss function set reach a preset first convergence condition;
splicing the region watermark image with a residual region to obtain an integral watermark image, wherein the residual region is a region except the watermark adding region in the training image;
putting the whole watermark image into a preset noise layer for noise processing;
sending the whole watermark image subjected to noise processing into a current decoder for decoding to obtain secret information, and obtaining a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder until the cross entropy loss function reaches a preset second convergence condition.
2. The method of claim 1, wherein when the original size of the watermarking region does not conform to the standard size of the image required by the current encoder, before inputting the watermark information and the watermarking region into the current encoder to generate a residual image, further comprising:
transforming the original size of the watermarking region into the standard size;
before the region watermark image is spliced with the residual region, the method further comprises the following steps: transforming the region watermark image into the original size.
3. The method of claim 1, wherein the watermarking region has an area that is M% of an area of the training image, wherein M is greater than or equal to 50.
4. The method of claim 1, wherein:
before adjusting the current encoder according to the loss value to obtain an updated encoder, the method further includes: obtaining a weight value of each loss function in the loss function set;
the adjusting the current encoder according to the loss value to obtain an updated encoder includes: adjusting the current encoder by using the loss value and the corresponding weight value of each loss function in the loss function set to obtain an updated encoder;
and/or, before updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder, the method further includes: acquiring a weight value of the cross entropy loss function;
the updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder comprises: and updating the current decoder by using the weight value of the cross entropy loss function and the loss value of the cross entropy loss function to obtain an updated decoder.
5. The method of claim 4, wherein: the set of loss functions includes: LPIPS penalty function and L2 penalty function;
before the step number is preset, only assigning values to the weight values of the cross entropy loss function;
after a preset number of steps, the weight value of the cross entropy loss function is greater than that of the L2 loss function, and the weight value of the L2 loss function is greater than that of the LPIPS loss function.
6. A ceramic watermark model training device without frame positioning is characterized by comprising:
the first acquisition module is used for acquiring a training image;
the second acquisition module is used for acquiring watermark information;
the intercepting module is used for intercepting a watermark adding area in the training image;
the watermark generating module is used for inputting the watermark information and the watermark adding area into a current encoder to generate a residual image and obtaining an area watermark image according to the residual image and the watermark adding area;
the first adjusting module is used for calculating loss values of all loss functions in a loss function set between the regional watermark image and the watermark adding region, adjusting the current encoder according to the loss values to obtain an updated encoder until all the loss functions in the loss function set reach a preset first convergence condition;
the splicing module is used for splicing the regional watermark image with a residual region to obtain an overall watermark image, wherein the residual region is a region of the training image except the watermark adding region;
the noise processing module is used for putting the overall watermark image into a preset noise layer for noise processing;
the second adjusting module is used for sending the whole watermark image subjected to noise processing into a current decoder for decoding to obtain secret information, and obtaining a cross entropy loss function loss value according to the secret information and the watermark information; and updating the current decoder according to the cross entropy loss function loss value to obtain an updated decoder until the cross entropy loss function reaches a preset second convergence condition.
7. The device of claim 6, wherein the noise layer comprises one or more of: geometric distortion, motion blur, color shift, Gaussian noise, and JPEG compression.
8. The device of claim 7, wherein:
the distortion coefficient of the geometric distortion is less than 1;
and/or the motion blur uses a linear blur kernel whose pixel width is at most 10 and whose angle is selected randomly from a range of at most π/2;
and/or the offset value of the color shift is uniformly distributed in the range -0.2 to 0.3;
and/or the compression quality factor of the JPEG compression is greater than 50.
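The parameter bounds in claims 7 and 8 can be expressed as a sampling routine for the noise layer. Only the ranges come from the claims; the uniform sampling choice and the function name are assumptions.

```python
import math
import random

# Hedged sketch: draw one set of noise-layer parameters within the
# bounds stated in the claims. Each distribution choice is an
# illustrative assumption; the claims only constrain the ranges.

def sample_noise_params(rng=None):
    rng = rng or random.Random()
    return {
        # Geometric distortion coefficient strictly below 1.
        "distortion_coeff": rng.random(),
        # Linear motion-blur kernel: width at most 10 pixels,
        # angle drawn from [0, pi/2].
        "blur_width": rng.randint(1, 10),
        "blur_angle": rng.uniform(0.0, math.pi / 2),
        # Color shift offset uniformly distributed in [-0.2, 0.3].
        "color_offset": rng.uniform(-0.2, 0.3),
        # JPEG compression quality factor greater than 50.
        "jpeg_quality": rng.randint(51, 100),
    }
```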
9. A ceramic watermark embedding method, characterized by comprising:
acquiring an original image and watermark information, respectively;
inputting the original image and the watermark information into a ceramic watermark model trained by the frameless-positioning ceramic watermark model training method of any one of claims 1-5, and encoding them to obtain an electronic watermark image;
and after transferring the electronic watermark image onto a ceramic preform, firing the ceramic preform to obtain a ceramic bearing the watermark image.
10. The method of claim 9, wherein transferring the electronic watermark image onto the ceramic preform comprises:
inputting the electronic watermark image into a preset ceramic inkjet printer, and inkjet-printing the ceramic preform with the ceramic inkjet printer so as to transfer the electronic watermark image onto the ceramic preform;
or, generating a paper decal from the electronic watermark image, and applying the decal to the ceramic preform to transfer the electronic watermark image onto the ceramic preform.
11. The method of claim 9, wherein:
when the ceramic preform is a domestic ceramic preform, firing the ceramic preform to obtain a ceramic bearing the watermark image comprises: firing the domestic ceramic preform at 800-1380 °C to obtain a domestic ceramic;
when the ceramic preform is a sanitary ceramic preform, firing the ceramic preform to obtain a ceramic bearing the watermark image comprises: firing the sanitary ceramic preform at 800-1380 °C to obtain a sanitary ceramic;
when the ceramic preform is an architectural ceramic preform, firing the ceramic preform to obtain a ceramic bearing the watermark image comprises: firing the architectural ceramic preform at 800-1380 °C to obtain an architectural ceramic.
12. A ceramic watermark decryption method, characterized by comprising:
locating the watermark image on the ceramic;
inputting the located watermark image into a ceramic watermark model trained by the frameless-positioning ceramic watermark model training method of any one of claims 1-5, and decoding it to obtain the watermark information in the watermark image.
13. The method of claim 12, further comprising:
performing error correction on the decoded watermark information using an error correction code.
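Claim 13 does not specify which error-correcting code is used; BCH codes are a common choice in print/shoot-robust watermarking. As a minimal stand-in, a 3× repetition code with majority-vote decoding illustrates the idea:

```python
# Hedged sketch of error correction on the decoded watermark bits.
# A repetition code is used purely for illustration; the patent does
# not name a specific code.

def ecc_encode(bits):
    """Repeat each bit three times before embedding."""
    return [b for b in bits for _ in range(3)]

def ecc_decode(coded):
    """Majority-vote each group of three received bits."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out
```

A single bit flip inside any group of three is corrected by the majority vote, which is the property the decoder relies on after the noisy ceramic capture.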
CN202110700418.3A 2021-06-23 2021-06-23 Ceramic watermark model training method and embedding method for frameless positioning Active CN113379585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110700418.3A CN113379585B (en) 2021-06-23 2021-06-23 Ceramic watermark model training method and embedding method for frameless positioning

Publications (2)

Publication Number Publication Date
CN113379585A CN113379585A (en) 2021-09-10
CN113379585B true CN113379585B (en) 2022-05-27

Family

ID=77578677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110700418.3A Active CN113379585B (en) 2021-06-23 2021-06-23 Ceramic watermark model training method and embedding method for frameless positioning

Country Status (1)

Country Link
CN (1) CN113379585B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330583A (en) * 2022-09-19 2022-11-11 景德镇陶瓷大学 Watermark model training method and device based on CMYK image
CN117974414B (en) * 2024-03-28 2024-06-07 中国人民解放军国防科技大学 Digital watermark signature verification method, device and equipment based on converged news material

Citations (4)

Publication number Priority date Publication date Assignee Title
CN111210380A (en) * 2020-04-20 2020-05-29 成都华栖云科技有限公司 Deep learning based fragment type image digital watermark embedding and decrypting method and system
CN111598761A (en) * 2020-04-17 2020-08-28 中山大学 Anti-printing shot image digital watermarking method based on image noise reduction
CN111882746A (en) * 2020-07-30 2020-11-03 周晓明 Porcelain product body copyright protection method embedded with invisible identification image
CN112330522A (en) * 2020-11-09 2021-02-05 深圳市威富视界有限公司 Watermark removal model training method and device, computer equipment and storage medium

Non-Patent Citations (1)

Title
Yang Liu et al., "A Novel Two-stage Separable Deep Learning Framework for Practical Blind Watermarking," MM '19: Proceedings of the 27th ACM International Conference on Multimedia, Oct. 2019, pp. 1511-1512. *

Also Published As

Publication number Publication date
CN113379585A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
US11238556B2 (en) Embedding signals in a raster image processor
US7995790B2 (en) Digital watermark detection using predetermined color projections
CN113052745B (en) Digital watermark model training method, ceramic watermark image manufacturing method and ceramic
CN113379585B (en) Ceramic watermark model training method and embedding method for frameless positioning
Fang et al. A camera shooting resilient watermarking scheme for underpainting documents
Yu et al. Print-and-scan model and the watermarking countermeasure
JP2000299778A (en) Method and device for adding watermark, method and device for reading and recording medium
CN113222804B (en) Ceramic process-oriented up-sampling ceramic watermark model training method and embedding method
US20080205697A1 (en) Image-processing device and image-processing method
Gou et al. Data hiding in curves with application to fingerprinting maps
CN110796586A (en) Blind watermarking method and system based on digital dot matrix and readable storage medium
CN107391976A (en) A kind of document protection method and apparatus based on ambient noise and vector watermark
Ma et al. Adaptive spread-transform dither modulation using a new perceptual model for color image watermarking
CN113538201B (en) Ceramic watermark model training method and device based on bottom changing mechanism and embedding method
Thongkor et al. Robust image watermarking for camera-captured image using image registration technique
Mizumoto et al. Robustness investigation of DCT digital watermark for printing and scanning
CN105869104B (en) The digital watermark method and system stable to JPEG compression based on image content
CN113837915B (en) Ceramic watermark model training method and embedding method for binaryzation of boundary region
CN112184533B (en) Watermark synchronization method based on SIFT feature point matching
CN110189241B (en) Block mean value-based anti-printing noise image watermarking method
Lee et al. Photograph watermarking
JP3884891B2 (en) Image processing apparatus and method, and storage medium
CN110648271A (en) Method for embedding digital watermark in halftone image by using special dots
Gu et al. Robust Watermarking of Screen-Photography Based on JND.
KR102645176B1 (en) Dot group encoding method and apparatus, and decoding method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant