CN112217958A - Method for preprocessing digital watermark carrier image irrelevant to device color space - Google Patents

Method for preprocessing digital watermark carrier image irrelevant to device color space

Info

Publication number
CN112217958A
CN112217958A
Authority
CN
China
Prior art keywords
image
watermark
color space
color
carrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010967712.6A
Other languages
Chinese (zh)
Other versions
CN112217958B (en)
Inventor
郭凌华
穆萌
马策践
刘国栋
李楠
丁亭文
海敬溥
王宾杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi University of Science and Technology
Original Assignee
Shaanxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi University of Science and Technology filed Critical Shaanxi University of Science and Technology
Priority to CN202010967712.6A priority Critical patent/CN112217958B/en
Publication of CN112217958A publication Critical patent/CN112217958A/en
Application granted granted Critical
Publication of CN112217958B publication Critical patent/CN112217958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32309Methods relating to embedding, encoding, decoding, detection or retrieval operations in colour image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0061Embedding of the watermark in each block of the image, e.g. segmented watermarking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0065Extraction of an embedded watermark; Reliable detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a method for preprocessing a digital watermark carrier image that is independent of the device color space, which comprises the following steps: dividing an original carrier image into superpixel blocks; marking the superpixel blocks by Hash mapping and extracting each superpixel block; calculating the feature information of the superpixel blocks through the gray-level co-occurrence matrix; constructing a self-organizing competitive neural network, classifying the image feature information of the superpixel blocks through the neural network, and mapping the classification results onto the superpixel blocks through a mapping table to obtain a classified carrier image; determining the embedding color channel of each class of image block through the color moment and separating the color channels to obtain the preprocessed image for watermark embedding; and embedding and extracting the watermark in the preprocessed image through a DCT-SVD algorithm. The method solves the problem that watermarks in the prior art have poor resistance to printing and scanning.

Description

Method for preprocessing digital watermark carrier image irrelevant to device color space
Technical Field
The invention belongs to the technical field of image processing and anti-counterfeiting methods, and relates to a method for preprocessing a digital watermark carrier image that is independent of the device color space.
Background
Today, high-precision printers and scanners make it easier to digitize, tamper with and copy paper products. Applying digital watermarking technology in fields such as copyright protection and print anti-counterfeiting is one of the effective, low-cost countermeasures. Digital watermarking is a technique that embeds meaningful or meaningless identification information into host information by a certain method and later extracts it.
Many digital watermarking methods proposed for digital images either process the whole image directly or embed the watermark after dividing the image by geometric partitioning. Geometric partitioning, however, does not take image characteristics such as color and texture into account; the partitioning result is coarse, and color distortion appears in the image after the watermark is embedded. Although most watermarking algorithms give the watermark good resistance to geometric attacks, the extraction quality is poor and the performance unstable against non-geometric attacks such as print-scan and print-photograph. Research on image preprocessing for digital watermarking and on improving the print-scan and print-photograph resistance of the watermark has therefore become a hot topic.
Disclosure of Invention
The invention aims to provide a method for preprocessing a digital watermark carrier image independent of the device color space, which solves the problem that watermarks in the prior art have poor resistance to printing and scanning.
The technical scheme adopted by the invention is that the method for preprocessing the digital watermark carrier image irrelevant to the color space of the equipment is implemented according to the following steps:
step 1, dividing an original carrier image into super pixel blocks by a SLIC super pixel division method;
step 2, marking the super-pixel blocks by adopting Hash mapping, extracting each super-pixel block, and storing the super-pixel blocks as an independent image;
step 3, calculating the characteristic information of the superpixel block through the gray level co-occurrence matrix;
step 4, constructing a self-organizing competitive neural network, setting a classification number C, classifying the image characteristic information of the superpixel blocks through the neural network, and mapping the classification result to the superpixel blocks through a mapping table to obtain a classified carrier image;
step 5, determining the embedding color channel of each class of image block through the color moment, and separating the color channels to obtain the preprocessed image for watermark embedding;
and step 6, embedding and extracting the watermark in the preprocessed image through a DCT-SVD algorithm.
The present invention is also characterized in that,
the original carrier image in step 1 has a resolution of 300-600 dpi and a size in the range of 128 px × 128 px to 1024 px × 1024 px.
The step 1 specifically comprises the following steps:
step 1.1, converting the color space of the original carrier image: reading the original carrier image I through MATLAB; if the color space of the original image is already CIELab, keeping it unchanged to obtain the image I_Lab; otherwise, converting the color mode of the image I into the device-independent CIELab color space to obtain the image I_Lab;
step 1.2, calling the superpixel segmentation function superpixels() in MATLAB, setting the number n of pre-segmented superpixel blocks and the number of iterations NI of the clustering stage of the algorithm, and segmenting the color-converted image I_Lab with this function to obtain a label matrix L and the number N of actually segmented superpixel blocks.
The step 2 specifically comprises the following steps:
step 2.1, storing the label matrix L and calling the function label2idx() in MATLAB to convert the label matrix L into a cell array of linear indices Idx, i.e. the N superpixel regions marked in the label matrix L are converted into a 1 × N cell array, each Idx{n} mapping the pixel positions of the n-th superpixel block, n ∈ [1, N];
step 2.2, traversing the cells Idx{1} to Idx{N}, taking out the pixel position information stored in each cell Idx{n}, keeping the pixel values of the current superpixel block and setting the pixel values at the positions of all other superpixel blocks to 0 so as to extract each superpixel block, and saving each extracted image block as a picture, thereby obtaining the superpixel block images P1 to PN segmented from the superpixels.
The step 3 specifically comprises the following steps:
step 3.1, reading the superpixel block images P1 to PN in MATLAB, converting each color superpixel block image into a gray image, calling the function graycomatrix() that computes the two-dimensional gray-level co-occurrence matrix of an image, setting the number of gray levels of the image and the distance between the pixel of interest and its neighboring pixel, namely the offset, and finally computing the gray-level co-occurrence matrix GLCM of the image;
step 3.2, calling an information entropy function to compute the image information entropy Q1, and then calling the function graycoprops() to compute the image feature information derived from the gray-level co-occurrence matrix, including the contrast Q2, homogeneity Q3, correlation Q4 and energy Q5;
and step 3.3, storing the computed image feature information into a file X0.xlsx under the specified path through file operations.
The step 4 specifically comprises the following steps:
step 4.1, firstly calling the competlayer() function in MATLAB, whose default learning rule is the Kohonen learning rule, then setting the number of classes as well as the learning rate and the conscience rate of the neural network, and constructing an initialized self-organizing competitive neural network initNet;
step 4.2, training the network initNet with the information entropy Q1, contrast Q2, homogeneity Q3, correlation Q4 and energy Q5 from the image feature information to obtain the trained neural network net, taking the feature information Q1 to Q5 as the input and the desired number of classes C as the size of the output layer of the network net, classifying the feature information of the images, and obtaining the classification result array classes of the superpixel blocks;
step 4.3, calling the function label2idx() in MATLAB to convert the classification result array classes into a cell array of linear indices Idx_c, each Idx_c{i} mapping the sequence numbers of the superpixel blocks belonging to the i-th class, i ∈ [1, c], c being the number of image classes, and at the same time mapping the file names under which all superpixels of the corresponding class are saved; traversing and accumulating the superpixel blocks corresponding to each Idx_c{i} finally yields images I1, I2, I3, ..., Ic classified according to image features such as color distance and texture.
The step 5 specifically comprises the following steps:
step 5.1, converting the images I1, I2, I3, ..., Ic from the CIELab to the RGB color space;
step 5.2, calculating the color moments of the images I1, I2, I3, ..., Ic in the RGB color space, including the color moment of each of the R, G and B channels of each class image, and then selecting the channel among R, G and B with the largest color moment as the embedding channel for the watermark of that class image.
The step 6 specifically comprises the following steps:
step 6.1, after the classified images have been separated into the color channels specified in step 5, performing the discrete cosine transform (DCT) and singular value decomposition (SVD) on the color channel corresponding to each class image to obtain the carrier matrix S1_x of the watermark, where x = 1, 2, 3, ..., c and c is the number of image classes;
step 6.2, selecting an image of the same size as the carrier as the watermark image, and performing DCT and SVD on the watermark image to obtain the watermark singular value matrix Sm;
step 6.3, for each class matrix S1_x, with a preset embedding strength kx, embedding Sm into the matrix S1_x by the method of formula (1) to obtain the singular value matrix S2_x of the embedded image:
S2_x=S1_x+kxSm,x=1,2,3…c (1)
step 6.4, performing the inverse SVD and inverse DCT on S2_x to obtain the watermarked image Iw_x of each class image;
step 6.5, subjecting the watermarked image to print-scan and print-photograph attacks;
step 6.6, extracting the watermark from the embedded image by the inverse of the watermark embedding process to obtain the watermark image Wx extracted from each class image, x = 1, 2, 3, ..., c, and obtaining the final extracted watermark image W by summation according to equation (2):
W=W1+W2+W3+…+Wc (2).
The pictures P1 to PN are saved with the naming rule "1.bmp", "2.bmp", "3.bmp", ..., "N-1.bmp", "N.bmp".
The number of gray levels is set to 8.
The invention has the beneficial effects that:
(1) According to the invention, through image color mode conversion the image is converted from the RGB mode to the CIELab color space, which is independent of the display gamut of the device, ensuring the device independence of the image processing. Through SLIC superpixel segmentation the image is divided into irregular superpixel blocks according to color distance and spatial distance, giving a finer segmentation result. The information entropy, contrast, homogeneity (inverse difference moment), correlation and energy of each superpixel block are calculated from the gray-level co-occurrence matrix to describe feature information such as the color and texture of the image. A self-organizing competitive neural network is constructed to classify the feature information of the image, yielding image blocks of the carrier image classified by features such as color and texture. The color moments of the classified image blocks are calculated, the component with the largest color moment among the color components is separated, and the watermark is embedded into and extracted from the separated color channel through the DCT-SVD digital watermarking algorithm.
(2) The image is converted from the RGB color mode to the CIELab color mode. Because Lab colors are closer to human color vision and describe how a color appears rather than the amount of a particular colorant needed to produce it, the image processing method becomes independent of the display device, which increases the applicability of the method.
(3) The method saves the result of the image segmentation and the result of the neural network classification, and establishes a label matrix between the image feature values and the pixel values of the image blocks to obtain a mapping table; this simplifies the processing, reduces the time complexity of the algorithm and improves its efficiency.
(4) The invention separates single color channels from the color multichannel host image by a monochrome channel separation method and then restores them into the multicolor image, so that the watermark can be embedded and extracted not only in grayscale images but also in any color channel of a color image.
(5) The invention uses the SVD method to process the image and obtain the singular value matrix of the carrier image, and then embeds the watermark information into this matrix. Because the singular value matrix is highly robust, the digital watermark resists print-scan and print-photograph attacks, while the requirements that printing, scanning and photographing place on the equipment are reduced. Furthermore, the visual characteristics of the watermark are restored from the singular value matrix by the inverse SVD, so the watermark extracted after print-scan or print-photograph has a better visual quality and the print-scan resistance is enhanced.
Drawings
FIG. 1 is a flow chart of a method of device color space independent digital watermark carrier image pre-processing of the present invention;
FIG. 2 is an original carrier image in an embodiment of the method of device color space independent digital watermark carrier image pre-processing of the present invention;
FIG. 3 is a diagram of a watermark image in an embodiment of the method for device color space independent digital watermark carrier image pre-processing of the present invention;
FIG. 4 is a super-pixel segmentation signature diagram in an embodiment of the method of the invention for device color space independent digital watermark carrier image pre-processing;
FIG. 5 is the first 10 blocks of a superpixel block diagram in a method embodiment of the invention for device color space independent digital watermark carrier image pre-processing;
FIG. 6 is a classified first type of image in an embodiment of the method for pre-processing a digital watermark carrier image independent of device color space;
FIG. 7 is a classified second type of image in an embodiment of the method of pre-processing a digital watermark carrier image independent of device color space according to the present invention;
FIG. 8 is a classified third type of image in an embodiment of the method for pre-processing a digital watermark carrier image independent of device color space;
FIG. 9 is a first type of image after embedding a watermark in an embodiment of a method of pre-processing a digital watermark carrier image that is independent of device color space;
FIG. 10 is a diagram of a second type of image after embedding a watermark in an embodiment of a method for pre-processing a digital watermark carrier image independent of device color space;
FIG. 11 is a third type of image after embedding a watermark in an embodiment of a method for pre-processing a digital watermark carrier image that is independent of device color space;
FIG. 12 is a block diagram of a complete image after embedding a watermark in an embodiment of a method for pre-processing a digital watermark carrier image independent of device color space according to the present invention;
fig. 13 is an extracted watermark image without attack in the method embodiment of the invention for preprocessing the digital watermark carrier image independent of the device color space;
FIG. 14 is a method embodiment of the invention for pre-processing a digital watermark carrier image independent of device color space, wherein the embedded watermark image is printed and scanned;
FIG. 15 is a method embodiment of the present invention for pre-processing a digital watermark carrier image independent of the device color space, wherein the embedded watermark image is printed and photographed;
FIG. 16 is a watermark image extracted after printing and scanning of an embedded watermark image in an embodiment of the method for preprocessing a digital watermark carrier image independent of the device color space;
fig. 17 is a watermark image extracted after printing and photographing an embedded watermark image in the method embodiment of the invention for preprocessing a digital watermark carrier image independent of the color space of the device.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a method for preprocessing a digital watermark carrier image irrelevant to a device color space, which has a flow as shown in figure 1 and is implemented according to the following steps:
step 1, dividing an original carrier image into super pixel blocks by a SLIC super pixel division method; the method specifically comprises the following steps:
step 1.1, converting the color space of the original carrier image: reading the original carrier image I through MATLAB; if the color space of the original image is already CIELab, keeping it unchanged to obtain the image I_Lab; otherwise, converting the color mode of the image I into the device-independent CIELab color space to obtain the image I_Lab, wherein the original carrier image has a resolution of 300-600 dpi and a size in the range of 128 px × 128 px to 1024 px × 1024 px;
step 1.2, calling the superpixel segmentation function superpixels() in MATLAB, setting the number n of pre-segmented superpixel blocks and the number of iterations NI of the clustering stage of the algorithm, and segmenting the color-converted image I_Lab with this function to obtain a label matrix L and the number N of actually segmented superpixel blocks.
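A minimal MATLAB sketch of this step is given below for illustration; the file name and the values chosen for n and NI are assumptions for the example, not part of the claimed method:

I = imread('carrier.bmp');                 % original carrier image, assumed to be RGB
I_Lab = rgb2lab(I);                        % convert to the device-independent CIELab space
n  = 100;                                  % number of pre-set superpixel blocks (assumed)
NI = 10;                                   % iterations of the clustering stage (assumed)
[L, N] = superpixels(I_Lab, n, ...
    'NumIterations', NI, 'IsInputLab', true);   % label matrix L and actual block count N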
Step 2, marking the super-pixel blocks by adopting Hash mapping, extracting each super-pixel block, and storing the super-pixel blocks as an independent image; the method specifically comprises the following steps:
step 2.1, storing the label matrix L and calling the function label2idx() in MATLAB to convert the label matrix L into a cell array of linear indices Idx, i.e. the N superpixel regions marked in the label matrix L are converted into a 1 × N cell array, each Idx{n} mapping the pixel positions of the n-th superpixel block, n ∈ [1, N];
step 2.2, traversing the cells Idx{1} to Idx{N}, taking out the pixel position information stored in each cell Idx{n}, keeping the pixel values of the current superpixel block and setting the pixel values at the positions of all other superpixel blocks to 0 so as to extract each superpixel block, and saving each extracted image block as a picture, thereby obtaining the superpixel block images P1 to PN segmented from the superpixels; P1 to PN are saved with the naming rule "1.bmp", "2.bmp", "3.bmp", ..., "N-1.bmp", "N.bmp";
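The block extraction described in step 2 can be sketched in MATLAB as follows; variable and file names are illustrative, and for simplicity the blocks are taken from the RGB image I (the patent extracts them from the Lab-converted image):

Idx = label2idx(L);                        % 1-by-N cell array of linear pixel indices
for k = 1:N
    Pk = zeros(size(I), 'like', I);        % all other superpixel blocks set to 0
    for ch = 1:size(I, 3)                  % copy the k-th block in every color channel
        plane = I(:, :, ch);
        blank = zeros(size(plane), 'like', plane);
        blank(Idx{k}) = plane(Idx{k});
        Pk(:, :, ch) = blank;
    end
    imwrite(Pk, sprintf('%d.bmp', k));     % saved as "1.bmp", "2.bmp", ..., "N.bmp"
end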
step 3, calculating the characteristic information of the superpixel block through the gray level co-occurrence matrix; the method specifically comprises the following steps:
step 3.1, reading the superpixel block images P1 to PN in MATLAB, converting each color superpixel block image into a gray image, calling the function graycomatrix() that computes the two-dimensional gray-level co-occurrence matrix of an image, setting the number of gray levels of the image (usually 8) and the distance between the pixel of interest and its neighboring pixel, namely the offset, and finally computing the gray-level co-occurrence matrix GLCM of the image;
step 3.2, calling an information entropy function to compute the image information entropy Q1, and then calling the function graycoprops() to compute the image feature information derived from the gray-level co-occurrence matrix, including the contrast Q2, homogeneity Q3, correlation Q4 and energy Q5;
step 3.3, storing the computed image feature information into a file X0.xlsx under the specified path through file operations;
step 4, constructing a self-organizing competitive neural network, setting a classification number C, classifying the image characteristic information of the superpixel blocks through the neural network, and mapping the classification result to the superpixel blocks through a mapping table to obtain a classified carrier image; the method specifically comprises the following steps:
step 4.1, firstly calling the competlayer() function in MATLAB, whose default learning rule is the Kohonen learning rule, then setting the number of classes as well as the learning rate and the conscience rate of the neural network, and constructing an initialized self-organizing competitive neural network initNet;
step 4.2, training the network initNet with the information entropy Q1, contrast Q2, homogeneity Q3, correlation Q4 and energy Q5 from the image feature information to obtain the trained neural network net, taking the feature information Q1 to Q5 as the input and the desired number of classes C as the size of the output layer of the network net, classifying the feature information of the images, and obtaining the classification result array classes of the superpixel blocks;
step 4.3, calling the function label2idx() in MATLAB to convert the classification result array classes into a cell array of linear indices Idx_c, each Idx_c{i} mapping the sequence numbers of the superpixel blocks belonging to the i-th class, i ∈ [1, c], c being the number of image classes, and at the same time mapping the file names under which all superpixels of the corresponding class are saved; traversing and accumulating the superpixel blocks corresponding to each Idx_c{i} finally yields images I1, I2, I3, ..., Ic classified according to image features such as color distance and texture.
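Step 4 can be sketched with the competitive-layer functions of the MATLAB Deep Learning Toolbox; the learning rates and the class count C = 3 are taken from the embodiment below, and everything else (including the use of find() instead of label2idx() for the class-to-block mapping) is an illustrative simplification:

C       = 3;                               % desired number of classes
X       = features';                       % 5-by-N input matrix, Q1..Q5 per superpixel block
initNet = competlayer(C, 0.001, 0.0001);   % Kohonen learning rate and conscience rate
net     = train(initNet, X);               % trained self-organizing competitive network
classes = vec2ind(net(X));                 % class index of every superpixel block

I_class = cell(1, C);                      % accumulate the blocks of every class
for i = 1:C
    acc = zeros(size(I));
    for k = find(classes == i)             % superpixel blocks assigned to class i
        acc = acc + double(imread(sprintf('%d.bmp', k)));
    end
    I_class{i} = uint8(acc);               % classified carrier image I_i
end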
Step 5, determining the color channel embedded in each type of image block through the color moment, and separating each color channel to obtain a preprocessed embedded watermark image; the method specifically comprises the following steps:
step 5.1, converting the images I1, I2, I3, ..., Ic from the CIELab to the RGB color space;
step 5.2, calculating the color moments of the images I1, I2, I3, ..., Ic in the RGB color space, including the color moment of each of the R, G and B channels of each class image, and then selecting the channel among R, G and B with the largest color moment as the embedding channel for the watermark of that class image;
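The channel selection can be sketched as follows; here the second color moment (the per-channel standard deviation) is used, matching Table 1 of the embodiment, and the class images are assumed to be RGB already (the patent converts them from CIELab back to RGB at this point):

embedChannel = zeros(1, C);
for i = 1:C
    rgb = im2double(I_class{i});
    m2  = zeros(1, 3);
    for ch = 1:3
        plane  = rgb(:, :, ch);
        m2(ch) = std(plane(:));            % second color moment of the R, G or B channel
    end
    [~, embedChannel(i)] = max(m2);        % channel with the largest color moment
end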
step 6, embedding and extracting the watermark of the preprocessed image through a DCT-SVD algorithm; the method specifically comprises the following steps:
step 6.1, after the classified images have been separated into the color channels specified in step 5, performing the discrete cosine transform (DCT) and singular value decomposition (SVD) on the color channel corresponding to each class image to obtain the carrier matrix S1_x of the watermark, where x = 1, 2, 3, ..., c and c is the number of image classes;
step 6.2, selecting an image of the same size as the carrier as the watermark image, and performing DCT and SVD on the watermark image to obtain the watermark singular value matrix Sm;
step 6.3, for each class matrix S1_x, with a preset embedding strength kx, embedding Sm into the matrix S1_x by the method of formula (1) to obtain the singular value matrix S2_x of the embedded image:
S2_x=S1_x+kxSm,x=1,2,3…c (1)
step 6.4, performing the inverse SVD and inverse DCT on S2_x to obtain the watermarked image Iw_x of each class image;
step 6.5, subjecting the watermarked image to print-scan and print-photograph attacks;
step 6.6, extracting the watermark from the embedded image by the inverse of the watermark embedding process to obtain the watermark image Wx extracted from each class image, x = 1, 2, 3, ..., c, and obtaining the final extracted watermark image W by summation according to equation (2):
W=W1+W2+W3+…+Wc (2).
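A compact MATLAB sketch of the DCT-SVD embedding and extraction of step 6 is given below; the embedding strengths and the watermark file name are illustrative, the extraction assumes that the original singular values S1{x} are available (a non-blind extraction), and equations (1) and (2) above are applied per class:

Wm = im2double(imread('watermark.bmp'));   % watermark image, same size as the carrier
if size(Wm, 3) == 3, Wm = rgb2gray(Wm); end
[Uw, Sm, Vw] = svd(dct2(Wm));              % watermark singular value matrix Sm

k  = [0.01 0.03 0.02];                     % embedding strengths kx (example values)
Iw = cell(1, C);  S1 = cell(1, C);
for x = 1:C
    host = im2double(I_class{x});
    ch   = embedChannel(x);
    [U1, S1{x}, V1] = svd(dct2(host(:, :, ch)));   % carrier matrix S1_x
    S2   = S1{x} + k(x) * Sm;                      % equation (1)
    host(:, :, ch) = idct2(U1 * S2 * V1');         % inverse SVD and inverse DCT
    Iw{x} = host;                                  % watermarked class image Iw_x
end

W = 0;                                     % extraction: inverse of the embedding process
for x = 1:C
    [~, S2r, ~] = svd(dct2(Iw{x}(:, :, embedChannel(x))));
    Sw = (S2r - S1{x}) / k(x);             % recovered singular values of the watermark
    W  = W + idct2(Uw * Sw * Vw');         % equation (2): W = W1 + W2 + ... + Wc
end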
The method for preprocessing a digital watermark carrier image independent of the device color space involves image color mode conversion, SLIC superpixel image segmentation and self-organizing competitive neural network classification, and also employs the image transforms Discrete Cosine Transform (DCT) and Singular Value Decomposition (SVD).
The image color mode conversion involves converting the image from the RGB color space to the CIELab color space. CIELab is a device-independent color space and a color model based on the physiological characteristics of the human eye: its L component represents the lightness of a pixel, with a value range of [0, 100], covering the range from black to white; the a component represents the range from dark green through gray to bright pink-red, with a value range of [-128, 128]; the b component represents the range from bright blue through gray to bright yellow, with a value range of [-128, 128]. The RGB color space is the color representation commonly used by electronic devices, where R is the red component with value range [0, 255], G is the green component with value range [0, 255] and B is the blue component with value range [0, 255]. RGB cannot be converted directly into the Lab color space; the RGB color space must first be converted into the XYZ color space and then into Lab, and the same applies when Lab is converted back to RGB. The conversion principle is as follows.
The conversion from the RGB color space to the XYZ color space is

X = 0.4124 R + 0.3576 G + 0.1805 B
Y = 0.2126 R + 0.7152 G + 0.0722 B
Z = 0.0193 R + 0.1192 G + 0.9505 B

and the inverse conversion from XYZ back to RGB is

R = 3.2406 X - 1.5372 Y - 0.4986 Z
G = -0.9689 X + 1.8758 Y + 0.0415 Z
B = 0.0557 X - 0.2040 Y + 1.0570 Z

XYZ color space to Lab color space:

L* = 116 f(Y/Yn) - 16
a* = 500 [ f(X/Xn) - f(Y/Yn) ]
b* = 200 [ f(Y/Yn) - f(Z/Zn) ]

wherein

f(t) = t^(1/3)                    if t > (6/29)^3
f(t) = (1/3)(29/6)^2 t + 4/29     otherwise

Lab color space to XYZ color space:

Y = Yn · g((L* + 16)/116)
X = Xn · g((L* + 16)/116 + a*/500)
Z = Zn · g((L* + 16)/116 - b*/200)

wherein g is the inverse of f:

g(t) = t^3                        if t > 6/29
g(t) = 3 (6/29)^2 (t - 4/29)      otherwise

In the formulas, L*, a*, b* denote the values of the three channels of the Lab color space; X, Y, Z are the tristimulus values, X being the red primary stimulus, Y the green primary stimulus and Z the blue primary stimulus; Xn, Yn, Zn are typically 95.047, 100 and 108.883 by default.
SLIC (Simple Linear Iterative Clustering) is an algorithm proposed in 2010 that is conceptually simple and easy to implement. The superpixel blocks it generates are compact and regular, the neighborhood characteristics are clearly expressed, and the running speed is good. The segmentation result of SLIC superpixels accords with the visual characteristics of the human eye, making it a commonly used superpixel segmentation method; its principle is given by the following formulas:
d_c = sqrt( (l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2 )

d_s = sqrt( (x_j - x_i)^2 + (y_j - y_i)^2 )

D = sqrt( (d_c / m)^2 + (d_s / S)^2 )

where d_c is the color distance between pixels i and j in the CIELab space, d_s is their spatial distance, S is the sampling interval of the cluster centers and m is a compactness parameter.
the self-organizing competitive neural network adopts an unsupervised learning mode, the learning rule is a Kohonen learning rule, in the core layer, only the neuron wins each time, and only the weight of the neuron is corrected when the weight is adjusted. Assuming that the input layer is m neurons, the core layer is n neurons, the input neural vector is p ═ p1, p2, p3 …, pm ], the weight is m × n matrix, and the network output is as shown in formula (7)
Y=PW (7)
where Y = [Y1, Y2, Y3, ..., Yn]. If the winning neuron among the n output neurons is Yk, the corresponding weights are adjusted according to the following formula:
Δωik=η(Pi-ωik)Yk (8)
the Discrete Cosine Transform (DCT) is an image approximated by the sum of a set of Cosine functions of different frequencies and amplitudes, which is defined as follows, and the two-dimensional Discrete Cosine Transform (DCT) is defined as follows:
F(u,v) = c(u) c(v) Σ_{x=0..M-1} Σ_{y=0..N-1} f(x,y) cos[ (2x+1)uπ / (2M) ] cos[ (2y+1)vπ / (2N) ]

with c(u) = sqrt(1/M) for u = 0 and c(u) = sqrt(2/M) otherwise (c(v) is defined analogously with N).
the inverse two-dimensional discrete cosine transform is defined as:
f(x,y) = Σ_{u=0..M-1} Σ_{v=0..N-1} c(u) c(v) F(u,v) cos[ (2x+1)uπ / (2M) ] cos[ (2y+1)vπ / (2N) ]
in fact, the discrete cosine transform is a real part of the fourier transform, and most of visual information of an image is concentrated on a few transform coefficients for one image due to the discrete cosine variable. Therefore, the discrete cosine variable is a common transform coding method for image data compression, which can concentrate highly correlated data energy, making it very suitable for image compression.
Singular Value Decomposition (SVD) in the transform domain is a method for diagonalizing a matrix, and Singular values of an image have strong stability and do not change significantly when the image is slightly disturbed, so that the transparency, the concealment and the security of the watermark can be ensured by embedding the watermark in the Singular values.
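The stability claim can be checked numerically with a few lines of MATLAB (a quick illustration under an assumed test image, not part of the patent):

A  = im2double(imread('cameraman.tif'));   % any grayscale test image
s1 = svd(A);                               % singular values of the original image
s2 = svd(A + 0.01 * randn(size(A)));       % singular values after a small disturbance
relChange = norm(s1 - s2) / norm(s1)       % remains a small fraction, illustrating stability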
Examples
The Lena image is taken as the host image, shown in Fig. 2, with a size of 256 × 256; the pattern of the letter "T" is taken as the watermark image, shown in Fig. 3, also of size 256 × 256, i.e. the maximum watermark capacity. The code is edited with the MATLAB tool and the host image is preprocessed as follows:
step 1, dividing the carrier image into superpixel blocks by the SLIC superpixel segmentation method;
Specifically, the carrier picture is first converted in MATLAB from the common RGB color mode to the CIELab color space, which guarantees the device independence of the image processing; then the number of pre-segmented superpixel blocks is set to 100 and the number of iterations of the clustering stage of the algorithm to 10, and the image is segmented by the SLIC superpixel segmentation algorithm to obtain a label matrix L and the number N of actually segmented superpixel blocks. The segmentation result is shown in Fig. 4. To improve efficiency, the color conversion is built into the SLIC superpixel segmentation algorithm.
Step 2, marking the super-pixel blocks by adopting a Hash mapping idea, extracting each super-pixel block, and storing the super-pixel blocks as an independent image;
specifically, the tag matrix L is first stored, and the position in the matrix having the same tag value is written into a cell array, so as to obtain a matrix index unit array having a mapping relationship. The content corresponding to the index array is the pixel value corresponding to the class label, the index unit array is represented by Idx, the number of the array is the same as the number of the class of the label in the matrix L, and the pixel value corresponding to each class of image is mapped in each Idx { n }, n epsilon [1,100 ].
Then, traversing the index arrays Idx {1} -Idx { n }, where n is 100, implementing the extraction of each pixel block by keeping the pixel value corresponding to the pixel block to be extracted unchanged and setting the pixel values corresponding to the remaining pixel blocks to 0, and storing the extracted image block as another picture to obtain the 1 st to 100 th super pixel block images divided by super pixels, where the previous 10 super pixel blocks are taken as an example, as shown in fig. 5.
Step 3, calculating the characteristic information of the superpixel block through the gray level co-occurrence matrix, wherein the characteristic information comprises information entropy (Q)1) Contrast ratio (Q)2) Homogeneity (Q)3) Correlation (Q)4) Energy (Q)5);
Specifically, firstly, a color carrier image is converted into a gray image, then a function interface for calculating a two-dimensional gray co-occurrence matrix of the image is called, the gray level of the image is set, usually 8 gray levels are selected, the distance between a concerned pixel and an adjacent point pixel, namely the offset, is set, and the gray co-occurrence matrix of the image is calculated.
And then calculating image characteristics derived from the gray level co-occurrence matrix, including contrast, homogeneity, correlation and energy, and then calling a calculation information entropy function to calculate the image information entropy. And finally, saving the calculated feature core machine into a folder in an xlsx format of the specified path through file operation.
Step 4: constructing a self-organizing competitive neural network and classifying the superpixel blocks.
Specifically, the Kohonen learning rule is used as the learning rule of the network, the learning rate of the neural network is set to 0.001, the conscience rate to 0.0001 and the number of classes to 3, thereby constructing the self-organizing competitive neural network net.
Then the network is trained with the information entropy (Q1), contrast (Q2), homogeneity (Q3), correlation (Q4) and energy (Q5) from the image features, the desired number of classes being the size of the output layer, and the feature information of the images is classified.
Finally, an index cell array between the classification result and each superpixel block is established, by the same method as in step 2. The superpixel blocks of each class of image are traversed and accumulated to obtain images I1, I2, I3 classified according to image features such as color distance and texture, as shown in Figs. 6, 7 and 8.
Step 5: determining the embedding color channel of each class of image block through the color moment.
Specifically, because most existing digital watermarking algorithms process images in the RGB color space, and in order to give the image preprocessing method better applicability, this step first converts the image color space into the RGB color space, then calculates the color moments of I1, I2, I3 obtained in step 4, and finally selects the color channel with the largest color moment as the embedding channel for the watermark of that image. The color second moments of the images in this example are shown in Table 1:
TABLE 1 color second moment of each color channel of each image block
(Table 1 is reproduced as an image in the original publication.)
As shown in Table 1, the second moment of the red component is the largest for each image, so embedding the watermark in the red channel has a smaller influence on the original image; the red channel of each image block is therefore selected for watermark embedding.
Step 6: embedding and extracting the watermark in the preprocessed image through the DCT-SVD algorithm.
Specifically, after the classified images have been separated into the color channels specified in step 5, the discrete cosine transform (DCT) and singular value decomposition (SVD) are first applied to the color channel corresponding to each class image to obtain the watermark carrier matrices S1_x, where x = 1, 2, 3 indexes the classified image blocks;
then an image of the same size as the carrier is selected as the watermark image, and DCT and SVD are applied to the watermark to obtain the watermark singular value matrix Sm; with embedding strengths k1 = 0.01, k2 = 0.03 and k3 = 0.02, the watermark singular value matrix is embedded into the matrices S1_1, S1_2, S1_3 according to the embedding expression S2_x = S1_x + kx Sm, x = 1, 2, 3, giving the singular value matrices S2_1, S2_2, S2_3 of the embedded images.
Then the inverse SVD and inverse DCT are applied to S2_x to obtain the watermarked image Iw_x of each class image, x = 1, 2, 3, as shown in Figs. 9, 10 and 11. The complete watermarked image Iw = Iw_1 + Iw_2 + Iw_3 is shown in Fig. 12.
Then the watermarked image is subjected to print-scan and print-photograph attacks; the image after print-scan is shown in Fig. 14 and the image after print-photograph in Fig. 15.
Finally, the watermark is extracted from the embedded image by the inverse of the embedding process, giving the watermark images Wx, x = 1, 2, 3, extracted from each class image; accumulating Wx gives the final extracted watermark image W = W1 + W2 + W3, as shown in Fig. 13.
The extracted watermark image W is used to verify the effect of the proposed image preprocessing method and the resistance of the digital watermarking algorithm to print-scan attacks; Figs. 16 and 17 are the watermark images extracted after print-scan and after print-photograph of the watermarked image, respectively. The PSNR of 25.3051 (greater than 20) between Fig. 3 and Fig. 13, and NC values between 0.88 and 1.0 between each extracted watermark image and the original in Figs. 14, 16 and 17, show that the described method for preprocessing a digital watermark carrier image improves the balance between robustness and transparency of the watermark; combined with the digital watermarking algorithm, the watermark has a certain resistance to print-scan and photographing.
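The PSNR and NC measures used above can be computed with a short MATLAB check; the file names below are placeholders for the original watermark (Fig. 3) and an extracted watermark:

Worig = im2double(imread('watermark.bmp'));    % original watermark image
Wext  = im2double(imread('extracted.bmp'));    % extracted watermark image
p  = psnr(Wext, Worig);                        % peak signal-to-noise ratio
nc = sum(Worig(:) .* Wext(:)) / ...
     sqrt(sum(Worig(:).^2) * sum(Wext(:).^2)); % normalized correlation (NC)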

Claims (10)

1. The method for preprocessing the digital watermark carrier image irrelevant to the color space of the equipment is characterized by comprising the following steps:
step 1, dividing an original carrier image into super pixel blocks by a SLIC super pixel division method;
step 2, marking the super-pixel blocks by adopting Hash mapping, extracting each super-pixel block, and storing the super-pixel blocks as an independent image;
step 3, calculating the characteristic information of the superpixel block through the gray level co-occurrence matrix;
step 4, constructing a self-organizing competitive neural network, setting a classification number C, classifying the image characteristic information of the superpixel blocks through the neural network, and mapping the classification result to the superpixel blocks through a mapping table to obtain a classified carrier image;
step 5, determining the color channel embedded in each type of image block through the color moment, and separating each color channel to obtain a preprocessed embedded watermark image;
and 6, embedding and extracting the watermark of the preprocessed image through a DCT-SVD algorithm.
2. The method of device color space independent digital watermark carrier image pre-processing according to claim 1, wherein the original carrier image resolution in step 1 is 300-600 dpi and the size range is 128px by 128px to 1024px by 1024 px.
3. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 1, wherein the step 1 specifically comprises:
step 1.1, converting the color space of the original carrier image: reading the original carrier image I through MATLAB; if the color space of the original image is already CIELab, keeping it unchanged to obtain the image I_Lab; otherwise, converting the color mode of the image I into the device-independent CIELab color space to obtain the image I_Lab;
step 1.2, calling the superpixel segmentation function superpixels() in MATLAB, setting the number n of pre-segmented superpixel blocks and the number of iterations NI of the clustering stage of the algorithm, and segmenting the color-converted image I_Lab with this function to obtain a label matrix L and the number N of actually segmented superpixel blocks.
4. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 3, wherein the step 2 is specifically:
step 2.1, storing the label matrix L and calling the function label2idx() in MATLAB to convert the label matrix L into a cell array of linear indices Idx, i.e. the N superpixel regions marked in the label matrix L are converted into a 1 × N cell array, each Idx{n} mapping the pixel positions of the n-th superpixel block, n ∈ [1, N];
step 2.2, traversing the cells Idx{1} to Idx{N}, taking out the pixel position information stored in each cell Idx{n}, keeping the pixel values of the current superpixel block and setting the pixel values at the positions of all other superpixel blocks to 0 so as to extract each superpixel block, and saving each extracted image block as a picture, thereby obtaining the superpixel block images P1 to PN segmented from the superpixels.
5. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 1, wherein the step 3 is specifically:
step 3.1, reading the superpixel block images P1 to PN in MATLAB, converting each color superpixel block image into a gray image, calling the function graycomatrix() that computes the two-dimensional gray-level co-occurrence matrix of an image, setting the number of gray levels of the image and the distance between the pixel of interest and its neighboring pixel, namely the offset, and finally computing the gray-level co-occurrence matrix GLCM of the image;
step 3.2, calling an information entropy function to compute the image information entropy Q1, and then calling the function graycoprops() to compute the image feature information derived from the gray-level co-occurrence matrix, including the contrast Q2, homogeneity Q3, correlation Q4 and energy Q5;
and step 3.3, storing the computed image feature information into a file X0.xlsx under the specified path through file operations.
6. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 1, wherein the step 4 is specifically:
step 4.1, firstly calling the competlayer() function in MATLAB, whose default learning rule is the Kohonen learning rule, then setting the number of classes as well as the learning rate and the conscience rate of the neural network, and constructing an initialized self-organizing competitive neural network initNet;
step 4.2, training the network initNet with the information entropy Q1, contrast Q2, homogeneity Q3, correlation Q4 and energy Q5 from the image feature information to obtain the trained neural network net, taking the feature information Q1 to Q5 as the input and the desired number of classes C as the size of the output layer of the network net, classifying the feature information of the images, and obtaining the classification result array classes of the superpixel blocks;
step 4.3, calling the function label2idx() in MATLAB to convert the classification result array classes into a cell array of linear indices Idx_c, each Idx_c{i} mapping the sequence numbers of the superpixel blocks belonging to the i-th class, i ∈ [1, c], c being the number of image classes, and at the same time mapping the file names under which all superpixels of the corresponding class are saved; traversing and accumulating the superpixel blocks corresponding to each Idx_c{i} finally yields images I1, I2, I3, ..., Ic classified according to image features such as color distance and texture.
7. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 6, wherein the step 5 is specifically:
step 5.1, converting the images I1, I2, I3, ..., Ic from the CIELab to the RGB color space;
step 5.2, calculating the color moments of the images I1, I2, I3, ..., Ic in the RGB color space, including the color moment of each of the R, G and B channels of each class image, and then selecting the channel among R, G and B with the largest color moment as the embedding channel for the watermark of that class image.
8. The method for preprocessing the digital watermark carrier image independent of the device color space according to claim 7, wherein the step 6 is specifically:
step 6.1, after the classified images have been separated into the color channels specified in step 5, performing the discrete cosine transform (DCT) and singular value decomposition (SVD) on the color channel corresponding to each class image to obtain the carrier matrix S1_x of the watermark, where x = 1, 2, 3, ..., c and c is the number of image classes;
step 6.2, selecting an image of the same size as the carrier as the watermark image, and performing DCT and SVD on the watermark image to obtain the watermark singular value matrix Sm;
step 6.3, for each class matrix S1_x, with a preset embedding strength kx, embedding Sm into the matrix S1_x by the method of formula (1) to obtain the singular value matrix S2_x of the embedded image:
S2_x=S1_x+kxSm,x=1,2,3…c (1)
step 6.4, performing the inverse SVD and inverse DCT on S2_x to obtain the watermarked image Iw_x of each class image;
step 6.5, subjecting the watermarked image to print-scan and print-photograph attacks;
step 6.6, extracting the watermark from the embedded image by the inverse of the watermark embedding process to obtain the watermark image Wx extracted from each class image, x = 1, 2, 3, ..., c, and obtaining the final extracted watermark image W by summation according to equation (2):
W=W1+W2+W3+…+Wc (2).
9. The method for device color space independent digital watermark carrier image pre-processing according to claim 4, wherein the pictures P1 to PN are saved with the naming rule "1.bmp", "2.bmp", "3.bmp", ..., "N-1.bmp", "N.bmp".
10. The method for device color space independent digital watermark carrier image pre-processing according to claim 5, wherein the number of gray levels is set to 8.
CN202010967712.6A 2020-09-15 2020-09-15 Method for preprocessing digital watermark carrier image irrelevant to device color space Active CN112217958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010967712.6A CN112217958B (en) 2020-09-15 2020-09-15 Method for preprocessing digital watermark carrier image irrelevant to device color space

Publications (2)

Publication Number Publication Date
CN112217958A true CN112217958A (en) 2021-01-12
CN112217958B CN112217958B (en) 2022-04-22

Family

ID=74050449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010967712.6A Active CN112217958B (en) 2020-09-15 2020-09-15 Method for preprocessing digital watermark carrier image irrelevant to device color space

Country Status (1)

Country Link
CN (1) CN112217958B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950406A (en) * 2010-08-10 2011-01-19 浙江大学 Transform domain-based image water mark adding method
CN102184519A (en) * 2011-05-26 2011-09-14 江苏技术师范学院 Method for embedding and extracting watermark images
CN107533760A (en) * 2015-04-29 2018-01-02 华为技术有限公司 A kind of image partition method and device
CN107633522A (en) * 2017-08-30 2018-01-26 山东财经大学 Brain image dividing method and system based on local similarity movable contour model
CN109242749A (en) * 2018-08-09 2019-01-18 北京交通大学 The blind digital image watermarking method resisting printing and retaking
CN109741341A (en) * 2018-12-20 2019-05-10 华东师范大学 A kind of image partition method based on super-pixel and long memory network in short-term
CN110992236A (en) * 2019-11-28 2020-04-10 陕西科技大学 Method, device and equipment for determining digital watermark embedding environment and readable storage medium
CN111583274A (en) * 2020-04-30 2020-08-25 贝壳技术有限公司 Image segmentation method and device, computer-readable storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ren Shaoying et al., "Video watermarking resistant to frame operations based on a Hamming error-correction mechanism", Journal of Applied Sciences (应用科学学报) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691885A (en) * 2021-09-09 2021-11-23 深圳万兴软件有限公司 Video watermark removing method and device, computer equipment and storage medium
CN113691885B (en) * 2021-09-09 2024-01-30 深圳万兴软件有限公司 Video watermark removal method and device, computer equipment and storage medium
CN117061768A (en) * 2023-10-12 2023-11-14 腾讯科技(深圳)有限公司 Video watermark processing method, video watermark processing device, electronic equipment and storage medium
CN117061768B (en) * 2023-10-12 2024-01-30 腾讯科技(深圳)有限公司 Video watermark processing method, video watermark processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112217958B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
US10127623B2 (en) Geometric enumerated watermark embedding for colors and inks
de Queiroz et al. Color to gray and back: color embedding into textured gray images
EP0785669A2 (en) Image processing method and apparatus
CN112217958B (en) Method for preprocessing digital watermark carrier image irrelevant to device color space
KR101140699B1 (en) Forgery Detection System for Security Printing Resource Based on Digital Forensic Technology and Method Therefor
CN109102451A (en) A kind of anti-fake halftoning intelligent digital watermarking method of paper media's output
CN105512999B (en) A kind of color image holographic watermark method of double transformation
CN101122995A (en) Binary image digit water mark embedding, extraction method and device
CN101950407A (en) Method for realizing color image digital watermark for certificate anti-counterfeiting
CN104637026B (en) One kind is based on continuous multipage text image watermark insertion and extracting method
CN109815653A (en) A kind of extraction of pdf Text Watermarking and comparison method based on deep learning
Mizumoto et al. Robustness investigation of DCT digital watermark for printing and scanning
CN105427231B (en) A kind of SVD double-layer digital water mark methods avoiding false alarm
Borges et al. Robust and transparent color modulation for text data hiding
WO2016016040A1 (en) Digital image watermarking system and method
CN102722857A (en) Digital image watermark method based on visual attention mechanism
EP1510967A2 (en) A system and method for digital watermarking in a calibrated printing path
Xu et al. Single color image super-resolution using sparse representation and color constraint
CN106327416B (en) A kind of site water mark method based on printed matter
CN111445378A (en) Neural network-based image blind watermark embedding and detecting method and system
CN113538201B (en) Ceramic watermark model training method and device based on bottom changing mechanism and embedding method
Eerola et al. Full reference printed image quality: Measurement framework and statistical evaluation
Cu et al. A robust data hiding scheme using generated content for securing genuine documents
Chen et al. Distortion Model-Based Spectral Augmentation for Generalized Recaptured Document Detection
CN113065407A (en) Financial bill seal erasing method based on attention mechanism and generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant