CN113179407A - Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation - Google Patents


Info

Publication number
CN113179407A
CN113179407A (application CN202110463286.7A)
Authority
CN
China
Prior art keywords
frame
watermark
video
embedded
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110463286.7A
Other languages
Chinese (zh)
Other versions
CN113179407B (en)
Inventor
王成优 (Wang Chengyou)
周杨铭 (Zhou Yangming)
周晓 (Zhou Xiao)
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202110463286.7A
Publication of CN113179407A
Application granted
Publication of CN113179407B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/467Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Abstract

The invention discloses a video watermark embedding and extracting method based on inter-frame DCT coefficient correlation. The embedding method comprises the following steps: reading an original host video and extracting its luminance component; dividing each frame of the luminance component into non-overlapping image blocks, calculating the motion-block ratio of every frame, and selecting embedded frames and reference frames; performing the DCT on the image blocks of each embedded frame and its reference frame, calculating and modulating the coefficient differences within co-located DCT blocks of the embedded frame and the reference frame, and embedding the watermark image into all embedded frames to obtain watermarked frames; and splicing all watermarked frames with the remaining unwatermarked video frames to obtain the watermarked video. The extraction method comprises the following steps: reading the watermarked video and extracting its luminance component; locating the watermarked frames and their reference frames according to the stored positions of the watermarked frames; performing the DCT on the image blocks of each watermarked frame and its reference frame, recovering a watermark image from the coefficient differences within co-located DCT blocks of the watermarked frame and the reference frame, and obtaining the final watermark image with a voting strategy. Based on the correlation between video frames, the method exploits the property that the coefficient differences of adjacent low-motion frames are relatively stable, so it has better imperceptibility and robustness and can effectively protect the copyright of the video.

Description

Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation
Technical Field
The invention belongs to the field of digital video watermarks, and particularly relates to a video watermark embedding and extracting method and system based on interframe Discrete Cosine Transform (DCT) coefficient correlation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the development of the internet and multimedia technology, videos have become increasingly tied to people's lives; at the same time, the widening range of video acquisition channels has made video piracy increasingly serious. In order to protect video copyright, video watermarking technology embeds copyright information (namely, a watermark) into a video in an invisible manner and extracts the watermark from the video to prove copyright ownership when a copyright dispute occurs. However, current video watermarks are still deficient in imperceptibility and robustness, and their performance needs to be further improved.
The DCT has good energy compaction and a fast algorithm of low computational complexity, so it is widely used in video watermarking. Most current DCT-based video watermarking treats the individual frames of a video as still images and modifies their DCT coefficients based on intra-frame characteristics to embed the watermark.
Existing image watermarking work exploits the correlation of DCT coefficients between blocks within a frame, for example the paper "Robust and blind image watermarking in DCT domain using inter-block coefficient correlation" by J. Ko, C.T. Huang, G. Horng, and S.J. Wang (Information Sciences, 2019) and the paper "Robust and blind watermarking technique in DCT domain using inter-block coefficient differencing" by S.A. Parah, J.A. Sheikh, and N.A. Loan (Digital Signal Processing, 2016). However, for video, the correlation of coefficients between video frames is higher than that between blocks, and the existing methods do not exploit this correlation effectively. Therefore, how to embed the watermark by using the correlation of the video is one of the problems to be solved for improving the imperceptibility and robustness of the video watermark.
Disclosure of Invention
In order to overcome the defects of the prior art and improve the imperceptibility and robustness of the video watermark, the invention provides a video watermark embedding and extracting method based on inter-frame DCT coefficient correlation that exploits the correlation present in video: the watermark is embedded or extracted by modifying or judging the DCT coefficients of the former of two adjacent frames.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
in a first aspect of the present disclosure, a video watermark embedding and extracting method based on inter-frame DCT coefficient correlation is provided, including:
the video watermark embedding method comprises the following steps:
reading an original host video, and extracting a brightness component of the original host video from the original host video;
dividing each frame of the brightness component into image blocks which are not overlapped with each other, calculating the ratio of the motion blocks of all the frames, and extracting an embedded frame and a reference frame;
performing DCT transformation on image blocks of the current embedded frame and the reference frame thereof, calculating coefficient difference and modulating coefficient difference in DCT blocks at the same position of the embedded frame and the reference frame, and embedding watermark images in all embedded frames to obtain a frame containing the watermark;
splicing all the frames containing the watermarks with other video frames without the watermarks to obtain videos containing the watermarks;
the video watermark extraction method comprises the following steps:
reading a video containing the watermark, and extracting a watermark brightness component from the video;
extracting a frame containing the watermark and a reference frame according to the position of the frame containing the watermark;
performing the DCT (discrete cosine transform) on the image blocks of the watermarked frame and its reference frame, obtaining a watermark image according to the coefficient differences in the co-located DCT blocks of the watermarked frame and the reference frame, and obtaining the final watermark image with a voting strategy.
In a further technical scheme, dividing each frame of the luminance component into non-overlapping image blocks, calculating the ratio of the motion blocks of all frames, and extracting the embedded frame and the reference frame specifically include:
horizontally scanning each image block of the current frame and the next frame, and calculating the Euclidean distance between the image block of the current frame and the image block of the next frame at the same position;
calculating the ratio of the motion blocks of all frames according to Euclidean distance information;
if the motion-block ratio of the current frame is simultaneously smaller than the motion-block ratio of the previous frame, the motion-block ratio of the next frame, and a preset motion-frame threshold, the current frame is a low-motion frame, i.e. an embedded frame, and the next frame is its reference frame.
In a further technical scheme, the difference of the coefficients at the same position in the co-located DCT blocks of the embedded frame and the reference frame is:
d = C(u, v) − C′(u, v), 1 ≤ u, v ≤ H
where C is the embedded DCT block, i.e. the DCT block whose coefficient is actually modulated according to whether the watermark bit to be embedded is "0" or "1"; C′ is the reference DCT block and is not modified; C(u, v) and C′(u, v) are the coefficients in the u-th row and v-th column of the DCT blocks C and C′ of the two adjacent frames; and H is the size of the block.
In a further technical scheme, modulating the coefficient difference specifically includes:
selecting and modifying a low-frequency position of the embedded block so as to modulate the coefficient difference into a specified range. The specific process is as follows:
when the embedded watermark bit w = 1:
if 3T/2 ≤ d < 2T + E, repeatedly assign C(u, v) = C(u, v) + p_m until d ≥ 2T + E;
if T − E < d < 3T/2, repeatedly assign C(u, v) = C(u, v) − p_m until d ≤ T − E;
if −T/2 ≤ d < E, repeatedly assign C(u, v) = C(u, v) + p_m until d ≥ E;
if −T − E < d < −T/2, repeatedly assign C(u, v) = C(u, v) − p_m until d ≤ −T − E;
if d < −2T + E, repeatedly assign C(u, v) = C(u, v) + p_m until d ≥ −2T + E;
when the embedded watermark bit w = 0:
if d > 2T − E, repeatedly assign C(u, v) = C(u, v) − p_m until d ≤ 2T − E;
if T/2 ≤ d < T + E, repeatedly assign C(u, v) = C(u, v) + p_m until d ≥ T + E;
if −E < d < T/2, repeatedly assign C(u, v) = C(u, v) − p_m until d ≤ −E;
if −3T/2 ≤ d < −T + E, repeatedly assign C(u, v) = C(u, v) + p_m until d ≥ −T + E;
if −2T − E < d < −3T/2, repeatedly assign C(u, v) = C(u, v) − p_m until d ≤ −2T − E.
In the embedding process, p_m is the modification step used in the loop. It is computed (by a formula given in the original disclosure as an equation image, not reproduced here) from a scale variable V, the sum M_L of the absolute values of the scanned low-frequency AC coefficients of the current coefficient block, the DC coefficient C_DC of the current block, and the motion-block flag F_MB; T is the decision threshold and E is the embedding factor.
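The modulation rules above can be sketched in Python. This is an illustrative sketch under stated assumptions, not the patent's implementation: the values of T and E are example parameters, and a fixed step p stands in for p_m, which the patent derives from block statistics.

```python
def modulate(c, c_ref, w, T=20.0, E=2.0, p=1.0):
    """Push d = c - c_ref into a region encoding watermark bit w.

    c is the embedded-block coefficient C(u, v); c_ref is the
    reference-block coefficient C'(u, v), which is never modified.
    T, E and p are illustrative stand-ins for the patent's T, E, p_m.
    """
    d = c - c_ref
    if w == 1:
        # target regions for w = 1: d > 2T, 0 < d < T, -2T < d < -T
        if 3 * T / 2 <= d < 2 * T + E:
            while c - c_ref < 2 * T + E:
                c += p
        elif T - E < d < 3 * T / 2:
            while c - c_ref > T - E:
                c -= p
        elif -T / 2 <= d < E:
            while c - c_ref < E:
                c += p
        elif -T - E < d < -T / 2:
            while c - c_ref > -T - E:
                c -= p
        elif d < -2 * T + E:
            while c - c_ref < -2 * T + E:
                c += p
    else:
        # target regions for w = 0: T < d < 2T, -T < d < 0, d < -2T
        if d > 2 * T - E:
            while c - c_ref > 2 * T - E:
                c -= p
        elif T / 2 <= d < T + E:
            while c - c_ref < T + E:
                c += p
        elif -E < d < T / 2:
            while c - c_ref > -E:
                c -= p
        elif -3 * T / 2 <= d < -T + E:
            while c - c_ref < -T + E:
                c += p
        elif -2 * T - E < d < -3 * T / 2:
            while c - c_ref > -2 * T - E:
                c -= p
    return c
```

Coefficients whose difference already lies safely inside a region encoding w (at least E away from the region boundaries) fall through every branch and are left unchanged, which is what keeps the modulation strength low.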
In a further technical scheme, calculating the Euclidean distance between the current-frame image block and the co-located next-frame image block specifically includes:

D(i, j, k) = sqrt( Σ_{m=1..H} Σ_{n=1..H} [ (b_{i,j,k}(m, n) − mean(B_{i,j,k})) − (b_{i,j,k+1}(m, n) − mean(B_{i,j,k+1})) ]^2 )

where B_{i,j,k} denotes the H × H image block in the i-th row and j-th column of the k-th frame, b_{i,j,k}(m, n) is the element in the m-th row and n-th column of B_{i,j,k}, 1 ≤ i ≤ M/8, 1 ≤ j ≤ N/8, 2 ≤ k ≤ K − 1, and mean(B_{i,j,k}) and mean(B_{i,j,k+1}) are the means of B_{i,j,k} and B_{i,j,k+1}, respectively.
In a further technical solution, calculating the motion-block ratio of all frames according to the Euclidean distance information specifically includes:
the motion-block ratio is the ratio of the number of motion blocks in each frame to the total number of blocks in the frame:

R_k = ( Σ_{i} Σ_{j} S(i, j, k) ) / ( (M/H) × (N/H) )

where S(·) is a threshold function with motion-block threshold T1:

S(i, j, k) = 1, if D(i, j, k) > T1; S(i, j, k) = 0, otherwise,

and S(i, j, k) = 1 denotes that the current block is regarded as a motion block; otherwise it is regarded as a non-motion block.
In a further technical scheme, the binary watermark image to be embedded is preprocessed with Arnold scrambling; the same watermark image is repeatedly embedded into a plurality of embedded frames, and the copy embedded into each embedded frame randomly uses a different number of scrambling iterations;
the position information of the embedded frames in the video is stored as key 1, and the scrambling-iteration information of the different embedded frames is stored as key 2.
A second aspect of the present disclosure provides a video watermark embedding and extraction system based on inter-frame DCT coefficient correlation, including:
a video watermark embedding unit configured to:
reading an original host video, and extracting a brightness component of the original host video from the original host video;
dividing each frame of the brightness component into image blocks which are not overlapped with each other, calculating the ratio of the motion blocks of all the frames, and extracting an embedded frame and a reference frame;
performing DCT transformation on image blocks of the current embedded frame and the reference frame thereof, calculating coefficient difference and modulating coefficient difference in DCT blocks at the same position of the embedded frame and the reference frame, and embedding watermark images in all embedded frames to obtain a frame containing the watermark;
splicing all the frames containing the watermarks with other video frames without the watermarks to obtain videos containing the watermarks;
a video watermark extraction unit configured to:
reading a video containing the watermark, and extracting a watermark brightness component from the video;
extracting a frame containing the watermark and a reference frame according to the position of the frame containing the watermark;
performing the DCT (discrete cosine transform) on the image blocks of the watermarked frame and its reference frame, obtaining a watermark image according to the coefficient differences in the co-located DCT blocks of the watermarked frame and the reference frame, and obtaining the final watermark image with a voting strategy.
A third aspect of the present disclosure provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the video watermark embedding and extracting method based on inter-frame DCT coefficient correlation according to the first aspect of the present disclosure when executing the program.
A fourth aspect of the present disclosure is a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video watermark embedding and extraction method based on inter-frame DCT coefficient correlation according to the first aspect of the present disclosure.
The above one or more technical solutions have the following beneficial effects:
1. By embedding or extracting the watermark through modifying or judging the DCT coefficients of the former of two adjacent frames, the technical scheme effectively utilizes the correlation of the video. Based on the correlation between video frames and the relatively stable coefficient differences of adjacent low-motion frames, it has better imperceptibility and robustness than existing watermarking methods based on inter-block correlation, and can effectively protect the copyright of the video.
2. In the method for calculating the coefficient difference d based on inter-frame coefficient correlation, the correlation of the two coefficients makes the typical value of d smaller, so the modulation strength of the DCT coefficients can be reduced and the imperceptibility improved; at the same time, the correlation makes the magnitude of d more stable, which gives better robustness.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a schematic flowchart of a video watermark embedding and extracting method based on inter-frame DCT coefficient correlation according to an embodiment of the present disclosure;
FIG. 2 is a method for calculating a coefficient difference d based on inter-frame coefficient correlation in an embodiment of the present disclosure;
fig. 3 is a modulation method of the coefficient difference d in the embodiment of the present disclosure;
FIG. 4 is a test video and a test watermark image used experimentally in an example embodiment of the present disclosure;
fig. 5 shows the first frame of the video containing the watermark after the attack and the extracted watermark as the subjective effect in the embodiment of the disclosure.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
The embodiment discloses a video watermark embedding and extracting method based on interframe DCT coefficient correlation, and the aim of the embodiment is as follows: the watermark is embedded or extracted by modifying or judging the DCT coefficient of the previous frame in two adjacent frames, so that the correlation of the video is effectively utilized.
Fig. 1 shows a flowchart of a video watermark embedding and extracting method based on inter-frame DCT coefficient correlation in an embodiment of the present invention. As shown in fig. 1, the upper half S1 shows a main flowchart of a video watermark embedding method, and as shown in fig. 1, the video watermark embedding step includes:
step S11: reading an original host video, extracting a brightness component of the original host video from the original host video, and using the brightness component as a carrier for embedding a watermark;
reading a YUV original host video V with resolution M × N and length K frames, where YUV consists of three components: "Y" represents luminance, i.e. the gray value, while "U" and "V" represent chrominance, describing the color and saturation of the image and specifying the color of a pixel; extracting the luminance component V_Y of the original host video V as the carrier for watermark embedding, because the Y component of the video contains more information than the U and V components and offers more redundancy for embedding the watermark.
Step S12: dividing each frame of the luminance component into image blocks which do not overlap each other based on the extracted watermark-embedded luminance component, calculating the moving block ratio of all frames, and extracting the embedded frame and the reference frame.
Step S13: and for the watermark image to be embedded, performing preprocessing on the watermark image by using Arnold scrambling.
Since 1 watermark bit is embedded in each 8 × 8 block in this embodiment, when the size of the video to be watermarked is M × N, the size of the watermark image is (M/8) × (N/8). Because Arnold scrambling only applies to square matrices, the left and right square parts of the watermark image are scrambled in turn with the same number of iterations, the two parts overlapping in the middle of the image. To improve the robustness of the watermark against various attacks, the same watermark image is repeatedly embedded into a plurality of embedded frames; the copy embedded into each embedded frame randomly uses a different number of scrambling iterations (the random range lies within one scrambling period), and the scrambling-iteration information of the different embedded frames is stored as key 2 for recovery after watermark extraction.
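Arnold scrambling of a square matrix can be sketched as below. This is a generic cat-map implementation, assumed from the standard Arnold transform; the patent's split into left and right square halves is omitted for brevity, and the function names are illustrative.

```python
def arnold_once(img):
    """One Arnold (cat-map) iteration on a square matrix img (list of rows).

    Pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n).
    """
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold(img, times):
    """Apply the Arnold map `times` times (descrambling uses the map's period)."""
    for _ in range(times):
        img = arnold_once(img)
    return img
```

The map is periodic, so inverse scrambling (used in step S24 of extraction) can be performed by continuing forward iterations until one full period is completed; for a 4 × 4 matrix the period happens to be 3.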
Step S14: performing DCT transformation on image blocks of the current embedded frame and the reference frame thereof, sequentially calculating coefficient difference and modulating the coefficient difference in DCT blocks at the same position of the embedded frame and the reference frame, modulating the coefficient difference to a specified size range, and embedding watermarks in all embedded frames;
the DCT transform is adopted because it is an image-approximating optimal transform, and is low in computational complexity and suitable for a watermarking technique.
Step S15: after the same preprocessed watermark image has been embedded into all embedded frames, splicing all watermarked frames with the other, unwatermarked video frames to obtain the watermarked video.
All embedded frames carry the same watermark image, but before each embedding the watermark image is scrambled and encrypted with a different key. After embedding is finished, the watermarked luminance component V_Y^w is recombined with the U and V components, and all watermarked frames are spliced with the other, unwatermarked frames of the video to obtain the watermarked video.
As shown in fig. 1, the lower half S2 shows a main flowchart of a video watermark extraction method, and as shown in fig. 1, the specific steps of extracting the video watermark include:
step S21: reading YUV water-containing printing video with resolution of M × N and length of K frames, and extracting water-containing printing brightness component V from YUV water-containing printing videoY' for watermark extraction.
Step S22: and extracting the watermark-containing frame and the reference frame according to the position of the watermark-containing frame in the key 1.
Step S23: performing the DCT on the watermarked frame and its reference frame, sequentially calculating the coefficient difference at the same position in the co-located DCT blocks of the watermarked frame and the reference frame, and extracting the watermark bit according to the range in which the coefficient difference lies;
for each sub-block, the watermark is extracted according to the size of d, i.e. where d is located, separated by the dashed line in fig. 3.
The specific process is as follows:
if d > 2T, or 0 < d < T, or −2T < d < −T, the extracted watermark bit is w′ = 1;
if T < d < 2T, or −T < d < 0, or d < −2T, the extracted watermark bit is w′ = 0.
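The extraction rule can be sketched directly from these regions; the default value of T here is an illustrative assumption, since in practice it must match the threshold used at embedding time.

```python
def extract_bit(d, T=20.0):
    """Decide the watermark bit from the coefficient difference d.

    Regions encoding 1: d > 2T, 0 < d < T, -2T < d < -T.
    All remaining regions encode 0.
    """
    if d > 2 * T or 0 < d < T or -2 * T < d < -T:
        return 1
    return 0
```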
Repeating the step S23 on the current watermark-containing frame, and extracting all watermark bits of one frame;
step S24: then, the watermark extracted from the current frame is subjected to inverse Arnold scrambling according to the scrambling times in the secret key 2, and a complete watermark image is obtained.
Step S25: repeating steps S23 to S24 to obtain the watermark images extracted from all watermarked frames, then obtaining the final watermark image with a voting strategy. Let the watermark image extracted from the l-th of the L watermarked frames be W_l; the final watermark image W_F is:

W_F = R[ (1/L) · Σ_{l=1..L} W_l ]

where R[·] is a rounding function and L is the number of extracted watermark images.
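The voting strategy can be sketched as below; watermark images are represented as lists of 0/1 rows, and the rounding function R[·] is implemented as round-half-up, an assumption that coincides with bitwise majority voting when L is odd.

```python
def vote(watermarks):
    """Majority vote over L extracted binary watermark images.

    Each watermark is a list of rows of 0/1 bits; the output bit is the
    rounded mean of the corresponding bits across all L images.
    """
    L = len(watermarks)
    h, w = len(watermarks[0]), len(watermarks[0][0])
    return [[int(sum(wm[i][j] for wm in watermarks) / L + 0.5)
             for j in range(w)] for i in range(h)]
```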
The above process steps are explained in detail below:
for step S12, the dividing each frame of the luminance component into non-overlapping image blocks, calculating the ratio of the motion blocks of all frames, and extracting the embedded frame and the reference frame specifically includes the following steps:
step S121: a luminance component VYIs divided into H × H image blocks which do not overlap with each other, each image block of the current frame (the k-th frame) and the next frame (the k + 1-th frame) is horizontally scanned, and a current frame image block B is calculatedi,j,kIs identical to the next frame and is located in the image block Bi,j,k+1Euclidean distance of (a):
Figure BDA0003035524700000102
wherein, Bi,j,kH multiplied by H image blocks which represent the ith row, the jth column and the kth frame of the video,
Figure BDA0003035524700000103
is an image block Bi,j,kI is more than or equal to 1 and less than or equal to M/H, j is more than or equal to 1 and less than or equal to N/H, K is more than or equal to 2 and less than or equal to K-1, wherein M is the video frame width, N is the video frame height, K is the total video frame number, in particular, to reduce the influence of the video brightness change, Bi,j,kAnd Bi,j,k+1Respectively subtracting the mean value thereof
Figure BDA0003035524700000104
And
Figure BDA0003035524700000105
the reason for the range 1 ≦ K ≦ K-1 is that the first frame (last frame) of the video has no previous frame (next frame).
Preferably, in this embodiment, H × H is 8 × 8.
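A minimal pure-Python sketch of this mean-removed block distance (the function name is illustrative):

```python
import math

def block_distance(b1, b2):
    """Mean-removed Euclidean distance between two co-located H x H blocks.

    Subtracting each block's mean before differencing cancels uniform
    brightness changes between adjacent frames, as described above.
    """
    h = len(b1)
    m1 = sum(sum(row) for row in b1) / (h * h)
    m2 = sum(sum(row) for row in b2) / (h * h)
    return math.sqrt(sum(((b1[m][n] - m1) - (b2[m][n] - m2)) ** 2
                         for m in range(h) for n in range(h)))
```

Note that two blocks differing only by a constant brightness offset have distance zero, so they are never counted as motion.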
Step S122: further calculating the motion-block ratio of every frame from the computed Euclidean distance information, i.e. the ratio of the number of motion blocks in the frame to the total number of blocks in the frame:

R_k = ( Σ_{i} Σ_{j} S(i, j, k) ) / ( (M/H) × (N/H) )

where S(·) is a threshold function with motion-block threshold T1:

S(i, j, k) = 1, if D(i, j, k) > T1; S(i, j, k) = 0, otherwise,

and S(i, j, k) = 1 denotes that the current block is regarded as a motion block; otherwise it is regarded as a non-motion block.
Preferably, in this example, let T1=80。
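A minimal sketch of the motion-block ratio computation, assuming the grid of block distances D(i, j, k) for one frame has already been computed; the function name and the default T1 = 80 (the value chosen in this example) are illustrative.

```python
def motion_block_ratio(distances, T1=80.0):
    """Fraction of blocks in a frame whose distance to the next frame exceeds T1.

    `distances` is the (M/H) x (N/H) grid of block distances D(i, j, k)
    for frame k; a block with distance above T1 counts as a motion block.
    """
    blocks = [d for row in distances for d in row]
    return sum(1 for d in blocks if d > T1) / len(blocks)
```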
Step S123: if the motion-block ratio of the k-th frame (the current frame) is simultaneously smaller than the motion-block ratio of the (k−1)-th frame (the previous frame), the motion-block ratio of the (k+1)-th frame (the next frame), and a preset motion-frame threshold T2, then the current frame is a low-motion frame, i.e. an embedded frame, and the next frame is its reference frame.
In this example, for a video of length K frames, T_2 is adjusted several times so that K/10 frames are extracted as embedded frames, and the position information of the embedded frames in the video is stored as key 1 for watermark extraction.
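The selection rule of step S123 is a local-minimum test on the ratio sequence; a direct sketch (the 0-based indexing and the function name are assumptions):

```python
def select_embedded_frames(R, T2):
    """Pick frame k as an embedded frame when its motion-block ratio is
    smaller than both neighbours' ratios and below the motion-frame
    threshold T2; frame k+1 then serves as the reference frame.
    R[k] is the motion-block ratio of frame k (0-based)."""
    embedded = []
    for k in range(1, len(R) - 1):  # first/last frames have no neighbour
        if R[k] < R[k - 1] and R[k] < R[k + 1] and R[k] < T2:
            embedded.append(k)
    return embedded
```

In practice T2 would be tuned, as described above, until roughly K/10 frame indices are returned.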
The purpose of the low-motion-frame extraction in steps S121-S123 of this embodiment is to obtain embedded frames and reference frames with high correlation, so that the inter-frame correlation of the video can be better exploited when the watermark is subsequently embedded.
In step S14, DCT is performed on the image blocks of the current embedded frame and its reference frame, the differences of the co-located coefficients in the co-located DCT blocks of the embedded frame and the reference frame are calculated in turn and modulated, and the watermark is embedded in all embedded frames; this specifically includes the following steps:
Step S141: to embed 1 watermark bit per H × H image block, the difference of the co-located coefficients in the co-located DCT blocks C and C' of the embedded frame and the reference frame is calculated in turn, and a low-frequency position is selected for embedding the watermark:
d = C(u,v) - C'(u,v), 1 ≤ u, v ≤ H
wherein C(u,v) and C'(u,v) are the coefficients in the u-th row and v-th column of the DCT blocks C and C' of two adjacent frames, between which there is a certain correlation. C is the embedded block, i.e. the DCT block whose coefficient is actually modulated according to whether the watermark bit to be embedded is "0" or "1"; C' is the reference block and is not modified. The choice of embedding position (u, v) affects the imperceptibility and robustness of the watermark. Since low-frequency coefficients are more stable, a low-frequency position is chosen in this example, preferably (2,2).
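A sketch of step S141 in numpy: an orthonormal 2-D DCT-II of co-located blocks and the difference d at the low-frequency position (2,2), counted 1-based as in the text (the DCT construction and the function names are illustrative):

```python
import numpy as np

def dct_matrix(H=8):
    """Orthonormal DCT-II transform matrix of size H x H."""
    n = np.arange(H)
    M = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * H))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / H)

def dct2(block):
    """2-D DCT of a square block via the separable matrix form M B M^T."""
    M = dct_matrix(block.shape[0])
    return M @ block.astype(np.float64) @ M.T

def coefficient_difference(blk_embed, blk_ref, u=2, v=2):
    """d = C(u, v) - C'(u, v) at the chosen low-frequency position.
    (u, v) counts rows/columns from 1 as in the text, hence the -1."""
    C = dct2(blk_embed)
    Cp = dct2(blk_ref)
    return C[u - 1, v - 1] - Cp[u - 1, v - 1]
```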
For ease of understanding, fig. 2 shows how the coefficient difference d is calculated from the inter-frame coefficient correlation in this embodiment. Because the image contents of adjacent frames are similar and strongly correlated, the difference d of the two co-located coefficients is usually small; embedding the watermark by modulating d therefore reduces the modulation strength applied to the DCT coefficients and improves imperceptibility. At the same time, the correlation keeps the magnitude of d stable, which gives good robustness.
Step S142: the coefficient difference d is modulated into a specified range by modifying the low-frequency position C(u,v) of the embedded block, thereby embedding the watermark bit. Fig. 3 shows the modulation method of the coefficient difference d: if d lies in an unshaded region delimited by the solid lines, it is modulated into a shaded region; if d is already in a shaded region, no modification is required. The inter-frame correlation makes most coefficient differences close to 0, but a portion of blocks with larger coefficient differences still exists, so providing several embedding regions prevents the DCT coefficients from being modified excessively and reduces the impact on visual quality.
In this example, the decision threshold T is 80 and the embedding factor E is 12.
The specific process is as follows:
When the embedded watermark bit w is 1:
if 3T/2 ≤ d < 2T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ 2T + E; if T - E < d < 3T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ T - E; if -T/2 ≤ d < E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ E; if -T - E < d < -T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -T - E; if d < -2T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ -2T + E.
When the embedded watermark bit w is 0:
if d > 2T - E, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ 2T - E; if T/2 ≤ d < T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ T + E; if -E < d < T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -E; if -3T/2 ≤ d < -T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ -T + E; if -2T - E < d < -3T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -2T - E.
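The modulation rules can be condensed into one routine. Since d = C(u,v) - C'(u,v) and only C(u,v) is changed, each ±p_m step on C(u,v) shifts d by the same amount, so the sketch below tracks d directly (p_m is passed in as a fixed step; T = 80 and E = 12 as in this example; the embedding bands are taken to be symmetric about zero):

```python
T, E = 80.0, 12.0  # decision threshold and embedding factor of this example

def modulate(d, w, pm):
    """Move the coefficient difference d into an embedding band for
    watermark bit w by repeated +/- pm steps; each step models adding
    or subtracting pm at C(u, v)."""
    if w == 1:
        if 3 * T / 2 <= d < 2 * T + E:
            while d < 2 * T + E: d += pm
        elif T - E < d < 3 * T / 2:
            while d > T - E: d -= pm
        elif -T / 2 <= d < E:
            while d < E: d += pm
        elif -T - E < d < -T / 2:
            while d > -T - E: d -= pm
        elif d < -2 * T + E:
            while d < -2 * T + E: d += pm
    else:  # w == 0
        if d > 2 * T - E:
            while d > 2 * T - E: d -= pm
        elif T / 2 <= d < T + E:
            while d < T + E: d += pm
        elif -E < d < T / 2:
            while d > -E: d -= pm
        elif -3 * T / 2 <= d < -T + E:
            while d < -T + E: d += pm
        elif -2 * T - E < d < -3 * T / 2:
            while d > -2 * T - E: d -= pm
    return d

# A difference already inside the band for its bit is left untouched,
# matching the "no modification required" case of fig. 3.
```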
In the embedding process, p_m is the modification step used in the loop:
Figure BDA0003035524700000131
where V is a scale variable, set to 0.05 in this example; M_L is the sum of the absolute values of the low-frequency alternating-current (AC) coefficients at the first 9 zigzag-scan positions of the current H × H coefficient block; and C_DC is the direct-current (DC) coefficient of the current block. F_MB is the motion-block flag: if the current block is a motion block according to the criterion of step S122, F_MB = 1; otherwise F_MB = 2.
Step S143: repeating the step S141 and the step S142, embedding watermark bits in all blocks of the current embedded frame, performing inverse DCT, completing the embedding of one frame, and obtaining a frame containing the watermark;
step S144: and repeating the step S143, and embedding the watermark in all the embedded frames.
In this embodiment, according to the video watermark embedding and extracting method based on inter-frame DCT coefficient correlation shown in fig. 1, the first 300 frames of the 832 × 480 standard test video "BasketballDrill" shown in fig. 4(a) are selected as the host video, and the 104 × 60 binary image "SDU" shown in fig. 4(b) is used as the watermark image to verify the performance of the watermarking method.
In this embodiment, the watermark (the binary image "SDU" of fig. 4(b)) is embedded into the host video of fig. 4(a) by the methods of Ko et al. and Parah et al. and by the inter-frame DCT coefficient correlation method, yielding Peak Signal-to-Noise Ratios (PSNR) of 42.4915 dB (Ko et al.), 42.4590 dB (Parah et al.) and 44.0610 dB (inter-frame DCT coefficient correlation method), respectively; the results show that the watermark of the present invention has good imperceptibility.
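The PSNR figures quoted above follow the usual definition for 8-bit luminance; as a sketch:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak Signal-to-Noise Ratio between host and watermarked luminance,
    the imperceptibility measure used in this comparison."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error over all pixels
    return 10 * np.log10(peak ** 2 / mse)
```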
Table 1 shows the watermark images and Bit Error Rates (BER) of the watermarks extracted from the watermarked videos obtained with the 3 embedding methods after different attacks; for videos subjected to geometric attacks such as scaling and rotation, the watermarked video is scaled and rotated back to its original size and angle before extraction.
Table 1: the watermark BER of the different watermarking methods is compared.
(The contents of Table 1 are reproduced as images in the original publication.)
As can be seen from table 1, the method of this embodiment based on inter-frame DCT coefficient correlation is more robust than the existing methods. To show the effect of the watermarking method of the present invention more intuitively, fig. 5 shows the first frame of the watermarked video after each attack together with the extracted watermark; the watermark remains clearly visible after the different attacks, so the watermarking objective is well achieved.
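The bit error rate reported in Table 1 is the fraction of extracted watermark bits that differ from the embedded ones; a sketch:

```python
import numpy as np

def ber(original_bits, extracted_bits):
    """Bit Error Rate: fraction of watermark bits that flipped."""
    o = np.asarray(original_bits).ravel()
    e = np.asarray(extracted_bits).ravel()
    return np.count_nonzero(o != e) / o.size
```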
This embodiment provides a video watermarking method that is based on the correlation between video frames and exploits the relative stability of the coefficient differences of adjacent low-motion frames; compared with existing block-correlation-based watermarking methods it offers better imperceptibility and robustness, and it can effectively protect the copyright of a video.
Example two:
This embodiment of the specification provides a video watermark embedding and extracting system based on inter-frame DCT coefficient correlation, which is realized by the following technical scheme. The system comprises:
a video watermark embedding unit configured to:
reading an original host video, and extracting a brightness component of the original host video from the original host video;
dividing each frame of the luminance component into image blocks, calculating the motion-block ratio of every frame, comparing the motion-block ratio of the current frame with the motion-block ratio of the previous frame, the motion-block ratio of the next frame and a preset motion-frame threshold, and extracting the embedded frames and reference frames;
performing DCT on the image blocks of the current embedded frame and its reference frame, calculating in turn the differences of the co-located coefficients in the co-located DCT blocks of the embedded frame and the reference frame, modulating the DCT coefficients of the embedded frame to adjust the coefficient differences, embedding watermark bits in all image blocks of the current embedded frame, and performing inverse DCT to obtain a watermarked frame;
preprocessing a watermark image to be embedded by using Arnold scrambling;
embedding the same preprocessed watermark image into the watermark-containing frame, and splicing all the watermark-containing frames with other video frames without watermarks to obtain a watermark-containing video;
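The Arnold scrambling used to preprocess the watermark can be sketched as the standard cat map on a square image (an illustrative sketch; the patent's 104 × 60 watermark would first have to be handled as, e.g., a padded square, which is an assumption here, and the function names are not from the patent):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map on a square N x N image:
    (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
    N = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

def inverse_arnold(img, iterations=1):
    """Inverse map (x, y) -> (2x - y, -x + y) mod N, undoing arnold()."""
    N = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(2 * x - y) % N, (-x + y) % N] = out[x, y]
        out = nxt
    return out
```

Because each embedded frame uses a different (secret) iteration count, extraction must apply `inverse_arnold` with the count stored in key 2 before the copies are combined.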
a video watermark extraction unit configured to:
reading a video containing the watermark, and extracting the watermarked luminance component from it;
extracting a frame containing the watermark and a reference frame according to the position of the frame containing the watermark;
DCT transformation is carried out on the watermark-containing frame and the reference frame thereof, the coefficient difference in the DCT blocks at the same position of the watermark-containing frame and the reference frame is calculated in sequence, and the watermark bit of one frame is extracted according to the magnitude of the coefficient difference;
and carrying out inverse Arnold scrambling on the watermark extracted from the current frame according to the scrambling times to obtain a complete watermark image, and obtaining a final watermark image by using a voting strategy.
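The voting strategy that merges the watermark copies extracted from the individual embedded frames can be sketched as a per-pixel majority vote (an assumption about the otherwise unspecified voting rule; the name is illustrative):

```python
import numpy as np

def majority_vote(watermarks):
    """Combine the binary watermark copies extracted from the individual
    embedded frames: each pixel takes the value found in the majority
    of copies (ties resolve to 0)."""
    stack = np.stack([np.asarray(w).astype(int) for w in watermarks])
    return (stack.sum(axis=0) * 2 > len(watermarks)).astype(int)
```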
Example three:
the third embodiment of the present disclosure provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the video watermark embedding and extracting method based on inter-frame DCT coefficient correlation according to the first embodiment of the present disclosure when executing the program.
Example four:
the fourth embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video watermark embedding and extracting method based on inter-frame DCT coefficient correlation according to the first embodiment of the present disclosure.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (10)

1. The video watermark embedding and extracting method based on the interframe DCT coefficient correlation is characterized by comprising the following steps:
the video watermark embedding method comprises the following steps:
reading an original host video, and extracting a brightness component of the original host video from the original host video;
dividing each frame of the brightness component into image blocks which are not overlapped with each other, calculating the ratio of the motion blocks of all the frames, and extracting an embedded frame and a reference frame;
performing DCT transformation on image blocks of the current embedded frame and the reference frame thereof, calculating coefficient difference and modulating coefficient difference in DCT blocks at the same position of the embedded frame and the reference frame, and embedding watermark images in all embedded frames to obtain a frame containing the watermark;
splicing all the frames containing the watermarks with other video frames without the watermarks to obtain videos containing the watermarks;
the video watermark extraction method comprises the following steps:
reading a video containing the watermark, and extracting a watermark brightness component from the video;
extracting a frame containing the watermark and a reference frame according to the position of the frame containing the watermark;
and performing DCT (discrete cosine transformation) on the image blocks containing the watermark frame and the reference frame thereof, obtaining a watermark image according to the coefficient difference in the DCT blocks at the same position of the watermark frame and the reference frame, and obtaining a final watermark image by using a voting strategy.
2. The method as claimed in claim 1, wherein the video watermark embedding method based on inter-frame DCT coefficient correlation is characterized by dividing each frame of the luminance component into non-overlapping image blocks, calculating the ratio of motion blocks of all frames, and extracting the embedded frame and the reference frame, and specifically comprises:
horizontally scanning each image block of the current frame and the next frame, and calculating the Euclidean distance between the image block of the current frame and the image block of the next frame at the same position
Calculating the ratio of the motion blocks of all frames according to Euclidean distance information;
if the ratio of the motion blocks of the current frame is smaller than the ratio of the motion blocks of the previous frame, the ratio of the motion blocks of the next frame and a preset motion frame threshold value at the same time, the current frame is a low motion frame, namely an embedded frame, and meanwhile, the next frame is a reference frame.
3. The method of claim 1, wherein the difference of the co-located coefficients in the co-located DCT blocks of the embedded frame and the reference frame is:
d = C(u,v) - C'(u,v), 1 ≤ u, v ≤ H
wherein C is the embedded DCT block, i.e. the DCT block whose coefficient is actually modulated according to whether the watermark bit to be embedded is "0" or "1"; C' is the reference DCT block and is not modified; C(u,v) and C'(u,v) are the coefficients in the u-th row and v-th column of the DCT blocks C and C' of two adjacent frames; and H is the block size.
4. The method of claim 1, wherein modulating the coefficient difference comprises selecting and modifying the low-frequency position of the embedded block so as to modulate the coefficient difference d into a specified range, the specific process being as follows:
when the embedded watermark bit w is 1:
if 3T/2 ≤ d < 2T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ 2T + E; if T - E < d < 3T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ T - E; if -T/2 ≤ d < E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ E; if -T - E < d < -T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -T - E; if d < -2T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ -2T + E;
when the embedded watermark bit w is 0:
if d > 2T - E, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ 2T - E; if T/2 ≤ d < T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ T + E; if -E < d < T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -E; if -3T/2 ≤ d < -T + E, cyclically assign C(u,v) = C(u,v) + p_m until d ≥ -T + E; if -2T - E < d < -3T/2, cyclically assign C(u,v) = C(u,v) - p_m until d ≤ -2T - E;
in the embedding process, p_m is the modification step used in the loop:
Figure FDA0003035524690000031
wherein V is a scale variable, M_L is the sum of the absolute values of the low-frequency AC coefficients scanned in the current coefficient block, C_DC is the DC coefficient of the current block, F_MB is the motion-block flag, T is the decision threshold, and E is the embedding factor.
5. The video watermark embedding method based on inter-frame DCT coefficient correlation as claimed in claim 2, wherein the Euclidean distance between a current-frame image block and the co-located next-frame image block is calculated as:
d(B_{i,j,k}, B_{i,j,k+1}) = sqrt( Σ_{m=1}^{H} Σ_{n=1}^{H} [ (b_{i,j,k}(m,n) - μ_{i,j,k}) - (b_{i,j,k+1}(m,n) - μ_{i,j,k+1}) ]² )
wherein B_{i,j,k} is the H × H image block in the i-th row and j-th column of the k-th frame, b_{i,j,k}(m,n) is the element in the m-th row and n-th column of B_{i,j,k}, 1 ≤ i ≤ M/8, 1 ≤ j ≤ N/8, 2 ≤ k ≤ K - 1, μ_{i,j,k} is the mean of B_{i,j,k}, and μ_{i,j,k+1} is the mean of B_{i,j,k+1}.
6. The video watermark embedding method based on inter-frame DCT coefficient correlation as claimed in claim 2, wherein the motion-block ratio of every frame is calculated from the Euclidean distance information; the motion-block ratio is the ratio of the number of motion blocks in each frame to the total number of blocks in the frame:
R_k = ( Σ_{i=1}^{M/H} Σ_{j=1}^{N/H} S(i,j,k) ) / ( (M/H) × (N/H) )
wherein S(·) is the threshold function with motion-block threshold T_1:
S(i,j,k) = 1 if d(B_{i,j,k}, B_{i,j,k+1}) > T_1, and S(i,j,k) = 0 otherwise,
where S(i,j,k) = 1 denotes that the current block is regarded as a motion block; otherwise it is regarded as a non-motion block.
7. The video watermark embedding method based on inter-frame DCT coefficient correlation as claimed in claim 1, wherein, for the watermark image to be embedded, the binary watermark image is preprocessed using Arnold scrambling; the same watermark image is repeatedly embedded into a plurality of embedded frames, and a different number of scrambling iterations is randomly adopted for the watermark image embedded in each embedded frame;
the position information of the embedded frames in the video is stored as key 1, and the scrambling-iteration information of the different embedded frames is stored as key 2.
8. The video watermark embedding and extracting system based on interframe DCT coefficient correlation is characterized by comprising the following components:
a video watermark embedding unit configured to:
reading an original host video, and extracting a brightness component of the original host video from the original host video;
dividing each frame of the brightness component into image blocks which are not overlapped with each other, calculating the ratio of the motion blocks of all the frames, and extracting an embedded frame and a reference frame;
performing DCT transformation on image blocks of the current embedded frame and the reference frame thereof, calculating coefficient difference and modulating coefficient difference in DCT blocks at the same position of the embedded frame and the reference frame, and embedding watermark images in all embedded frames to obtain a frame containing the watermark;
splicing all the frames containing the watermarks with other video frames without the watermarks to obtain videos containing the watermarks;
a video watermark extraction unit configured to:
reading a video containing the watermark, and extracting a watermark brightness component from the video;
extracting a frame containing the watermark and a reference frame according to the position of the frame containing the watermark;
and performing DCT (discrete cosine transformation) on the image blocks containing the watermark frame and the reference frame thereof, obtaining a watermark image according to the coefficient difference in the DCT blocks at the same position of the watermark frame and the reference frame, and obtaining a final watermark image by using a voting strategy.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the method for video watermark embedding and extraction based on inter-frame DCT coefficient correlation according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for embedding and extracting a video watermark based on inter-frame DCT coefficient correlation according to any of claims 1 to 7.
CN202110463286.7A 2021-04-23 2021-04-23 Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation Active CN113179407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463286.7A CN113179407B (en) 2021-04-23 2021-04-23 Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation


Publications (2)

Publication Number Publication Date
CN113179407A true CN113179407A (en) 2021-07-27
CN113179407B CN113179407B (en) 2022-08-12

Family

ID=76926680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110463286.7A Active CN113179407B (en) 2021-04-23 2021-04-23 Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation

Country Status (1)

Country Link
CN (1) CN113179407B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001111973A (en) * 1999-08-04 2001-04-20 Ddi Corp Moving image electronic watermark device
CN101277438A (en) * 2008-04-23 2008-10-01 山东大学 Video watermark method based on movement zone location
JP2009124300A (en) * 2007-11-13 2009-06-04 Nippon Telegr & Teleph Corp <Ntt> Apparatus, method and program for jpeg encoding with watermark embedding, record medium recorded with the encoding program, and apparatus, method and program for detecting tampering of the jpeg image data with watermark embedding, record medium recorded with the detecting program
CN101504759A (en) * 2009-03-17 2009-08-12 陕西科技大学 Digital image watermark extraction method based on DCT algorithm
CN101651837A (en) * 2009-09-10 2010-02-17 北京航空航天大学 Reversible video frequency watermark method based on interframe forecast error histogram modification
CN102547297A (en) * 2012-02-28 2012-07-04 中国传媒大学 MPEG2 (Moving Picture Experts Group 2) video watermarking realization method based on DC (Discrete Cosine) coefficient
CN103237209A (en) * 2013-04-01 2013-08-07 南京邮电大学 H264 video watermarking method based on regional DCT (discrete cosine transform) coefficients
CN104539965A (en) * 2015-01-12 2015-04-22 中国电子科技集团公司第三十八研究所 H.264/AVC compressed-domain video watermark embedding and extracting method
CN109493271A (en) * 2018-11-16 2019-03-19 中国科学院自动化研究所 Image difference quantisation watermarking embedding grammar, extracting method, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Guozhi et al.: "Research on DCT-Based Digital Video Watermarking Algorithms", Video Engineering (《电视技术》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113782041A (en) * 2021-09-14 2021-12-10 随锐科技集团股份有限公司 Method for embedding and positioning watermark based on audio frequency-to-frequency domain
CN113782041B (en) * 2021-09-14 2023-08-15 随锐科技集团股份有限公司 Method for embedding and positioning watermark based on audio variable frequency domain
CN115278314A (en) * 2022-07-08 2022-11-01 南京大学 Multi-valued digital video watermark embedding and blind extraction method
CN115278314B (en) * 2022-07-08 2023-10-13 南京大学 Multi-value digital video watermark embedding and blind extraction method

Also Published As

Publication number Publication date
CN113179407B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
US6285775B1 (en) Watermarking scheme for image authentication
US7995790B2 (en) Digital watermark detection using predetermined color projections
CN113179407B (en) Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation
CN110232650B (en) Color image watermark embedding method, detection method and system
Rakhmawati et al. Blind Robust and Self-Embedding Fragile Image Watermarking for Image Authentication and Copyright Protection with Recovery Capability.
CN114359012B (en) Robust combined domain color image zero watermark embedding and extracting method
Dittmann Content-fragile watermarking for image authentication
Riad et al. Pre-processing the cover image before embedding improves the watermark detection rate
Xiao et al. Toward a better understanding of DCT coefficients in watermarking
Bei et al. A color image watermarking scheme Against geometric rotation attacks based on HVS and DCT-DWT
WO2002019269A2 (en) A method for encoding and decoding image dependent watermarks
Sakib et al. A robust DWT-based compressed domain video watermarking technique
Chen et al. A robust watermarking method based on wavelet and zernike transform
EP2544143B1 (en) Method for watermark detection using reference blocks comparison
Thongkor et al. Digital image watermarking based on regularized filter
CN113766242A (en) Watermark embedding method, watermark extracting method and device
Agung et al. Video scene characteristic detection to improve digital watermarking transparency
Yang et al. An adaptive video watermarking technique based on DCT domain
Jamali et al. Robustness and Imperceptibility Enhancement in Watermarked Images by Color Transformation
Liu et al. Image desynchronization for secure collusion-resilient fingerprint in compression domain
CN116579908B (en) Method and device for implanting encrypted hidden information into image
Phadikar et al. QIM data hiding for tamper detection and correction in digital images using wavelet transform
Liu et al. An improved spatial spread-spectrum video watermarking
Dainaka et al. Dual-plane watermarking for color pictures immune to rotation, scale, translation, and random bending
JP4944966B2 (en) How to mark a digital image with a digital watermark

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant