WO2019083191A1 - Watermarking method and method for watermark detection - Google Patents
Watermarking method and method for watermark detection
- Publication number
- WO2019083191A1 · PCT/KR2018/011805
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- region
- watermark
- area
- code
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2389—Multiplex stream processing, e.g. multiplex stream encrypting
- H04N21/23892—Multiplex stream processing, e.g. multiplex stream encrypting involving embedding information at multiplex stream level, e.g. embedding a watermark at packet level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4627—Rights management associated to the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
Definitions
- Embodiments relate to a watermarking method and a detection method.
- the present invention relates to a method and apparatus for inserting and extracting a double watermark, and more particularly, to preventing the unauthorized distribution, tampering, and copyright infringement that may occur with general content images.
- Watermarking technology protects copyright more efficiently by inserting, into general content video protected by intellectual property rights, a signal that humans cannot perceive, so that illegal copying and alteration can be traced.
- Embodiments provide a watermarking method and a detection method.
- a watermarking method and a detection method are provided in which it is difficult for a person to perceive the deformation of the image data caused by the watermark.
- the present invention also provides a watermarking method and a detection method with improved robustness against changes such as compression or rotation.
- a watermarking method includes: receiving first image data; detecting an insertion region from the first image data; and inserting a watermark corresponding to user information into the insertion region and combining the watermark with the first image data, wherein detecting the insertion region comprises: detecting a first region having a frequency in a first range; generating a virtual code to be applied to the first region; synthesizing the virtual code with the first image data to generate second image data; and comparing the second image data with the first image data to detect a second region, and extracting the second region as the insertion region.
- the second image data may include a plurality of second image data generated by compressing the first image data at different compression ratios.
- the second region may be a region that is commonly extracted from the plurality of second image data.
- in the step of synthesizing the watermark with the first image data, the watermark corresponding to the user information may include a data key and a recovery code.
- the recovery code may include an error correction code (ECC).
- a watermark detection method includes: receiving fourth image data; matching the fourth image data with pre-stored first image data; loading an insertion region of the matched first image data; extracting a watermark by comparing the insertion region of the first image data with the corresponding comparison region of the fourth image data; retrieving user information corresponding to the watermark; and outputting the user information.
- a watermarking method and a detection method in which a video providing time is shortened can be implemented.
- FIG. 2 is a flowchart of a method of detecting an insertion region in FIG. 1,
- FIG. 3 is a diagram exemplarily showing first image data
- FIG. 4 is a view showing the first region detected in FIG. 3,
- FIG. 5 is a diagram for explaining a method of detecting the first area by way of example
- FIG. 6 is a diagram exemplarily showing virtual codes reflected in the first area of FIG. 4,
- FIGS. 7A to 7C are diagrams showing second image data generated with different compression ratios
- FIG. 9 is a flowchart illustrating a method of synthesizing the watermark with the first image data in FIG. 1,
- Figure 10 is a diagram of a method for creating a third region
- FIG. 11 is a diagram illustrating a method of inserting a watermark corresponding to user information into the insertion area in FIG. 10,
- Figure 12 is a diagram of a method for generating a fourth region
- FIG. 13 is a diagram showing third image data
- FIG. 16 is a view for explaining a method of extracting a watermark,
- FIG. 17 is a conceptual diagram of a watermarking system according to an embodiment.
- terms including ordinals, such as first and second, may be used to describe various elements, but the elements are not limited by these terms. The terms are used only to distinguish one component from another.
- the second component may be referred to as a first component, and similarly, the first component may also be referred to as a second component.
- the term "and/or" includes any combination of a plurality of related listed items, or any one of the plurality of related listed items.
- FIG. 1 is a flowchart of a watermarking method according to an embodiment
- FIG. 2 is a flowchart of a method of detecting an insertion region in FIG. 1
- FIG. 3 is a diagram exemplarily showing first image data
- FIG. 5 is a view for explaining a method of detecting a first area
- FIG. 6 is a diagram illustrating a virtual code reflected in the first area of FIG. 4 as an example
- FIGS. 7A to 7C are diagrams showing second image data generated with different compression ratios
- FIG. 8 is a diagram illustrating a second region extracted from second image data.
- a watermarking method includes inputting first image data (S110), detecting an insertion region from the first image data (S120), and inserting a watermark corresponding to user information into the insertion region and combining it with the first image data (S130).
- the server receives the first image data from a plurality of providers using the digital document.
- the digital document may include digital information such as images and videos, and may include, but is not limited to, a webtoon as an example.
- the provider may be the copyright holder of the video or the like who provided the first image data, a concept including an author, a licensee, and so on.
- the server performs the watermarking method and the detection method according to the embodiment, and can receive the first image data, and the fourth image data described later, from an administrator or the like.
- the step of detecting the insertion region from the first video data may detect the insertion region into which the watermark is inserted from the first video data.
- detecting the insertion region from the first image data includes detecting a first region having a frequency in a first range from the first image data (S121), generating a virtual code to be applied to the first region (S122), synthesizing the virtual code with the first image data to generate second image data (S123), and comparing the second image data with the first image data to detect a second region, and extracting the second region as the insertion region (S124).
- the first region may be a region, filtered from the first image data received from the provider, having a frequency in the first range.
- the first image data I1 is converted into the frequency domain and passed through a high-pass filter, so that only the high-frequency domain is extracted. That is, a first region having a frequency in the first range may be detected from the first image data I1.
- the first range is set to a predetermined range within the above-mentioned high-frequency band, according to the image, and the high-pass filter can be set to have its pass band in that range.
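The filtering step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it approximates the high-pass filter in the spatial domain by keeping pixels that deviate strongly from their local mean, and the sample image and threshold are assumed values.

```python
# Sketch: detect a "first region" of high-frequency pixels from grayscale
# image data, approximating the described high-pass filtering in the
# spatial domain (subtract a local mean, keep pixels whose residual
# exceeds a threshold). The 3x3 image and the threshold are illustrative
# assumptions, not values from the document.

def high_pass_region(image, threshold):
    """Return the set of (row, col) pixels whose deviation from the
    local 3x3 mean exceeds `threshold` (a crude high-pass filter)."""
    rows, cols = len(image), len(image[0])
    region = set()
    for r in range(rows):
        for c in range(cols):
            # local mean over the 3x3 neighbourhood (clipped at edges)
            neigh = [image[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            local_mean = sum(neigh) / len(neigh)
            if abs(image[r][c] - local_mean) > threshold:
                region.add((r, c))
    return region

# A bright centre pixel on a dark background is a high-frequency spot.
I1 = [[0, 0, 0],
      [0, 9, 0],
      [0, 0, 0]]
print(high_pass_region(I1, threshold=3))  # → {(1, 1)}
```

A real implementation would instead transform to the frequency domain and apply the band-pass described in the text; the spatial-domain residual is only the cheapest stand-in.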
- the first region may be composed of each pixel having the first range of frequencies in the first image data I1.
- when the first image data I1 is an image as described above, it includes a plurality of pixels and can thus be partitioned pixel by pixel.
- each pixel may have coordinates.
- each pixel may be a unit area of the code as an area provided as a bit.
- the first image data I1 is divided into nine pixels, arranged in three rows and three columns of equal size.
- the nine pixels are represented by (0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), and (2,2).
- such a configuration is not limited to the above, but the description below uses it as an example.
- in the first image data I1, pixels in the high-frequency region may be separated, by derivatives or the like, at portions where the change in brightness intensity (brightness) is large.
- the first image data I1 may be an image converted to gray scale.
- the first image data I1 is an image and can be processed as discrete data. For example, it may be processed by a finite difference method, but other discrete data processing methods may be applied.
- for example, a filter having a value of -1 at (0,1), (1,0), (2,1), and (1,2) may be applied to the first image data I1.
- the (1,1) pixel, which has a frequency larger than the four adjacent pixels (0,1), (1,0), (2,1), and (1,2), may then be detected as a high-frequency region, that is, as part of the first region.
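The finite-difference detection described above can be sketched with a Laplacian-style kernel. The kernel weights (the text only gives the -1 positions) and the sample image are illustrative assumptions:

```python
# Sketch of the finite-difference detection: a 3x3 Laplacian-style
# kernel with -1 at the four edge-adjacent positions and 4 at the centre
# (1,1). Pixels with a large filter response lie in the high-frequency
# "first region". The centre weight and sample image are assumptions.

LAPLACIAN = [[ 0, -1,  0],
             [-1,  4, -1],
             [ 0, -1,  0]]

def laplacian_response(image, r, c):
    """Convolve the Laplacian kernel at interior pixel (r, c)."""
    return sum(LAPLACIAN[i][j] * image[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

I1 = [[10, 10, 10],
      [10, 50, 10],
      [10, 10, 10]]
# A strong response marks (1,1) as a high-frequency (first-region) pixel.
print(laplacian_response(I1, 1, 1))  # 4*50 - 4*10 = 160
```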
- the first region may be composed of a plurality of pixels as described later.
- the first area S1 detected through the first image data I1 may be a part where the brightness intensity (brightness) difference is large in the first image data I1.
- the first area S1 may be positioned around a line having a large intensity of brightness in the first image data I1.
- the step S122 of generating the virtual code applied to the first area may generate the virtual code corresponding to the first area S1.
- the virtual code may be located on the first area S1 detected in the first image data.
- the first area S1 may be composed of a plurality of pixels as described above, and one pixel may correspond to one bit as described above.
- the first area S1 may have gray scale information.
- the first area S1 may have a value from 0 to 7.
- each pixel may have a different value depending on the difference in brightness.
- a portion having a high brightness corresponds to '7'
- a portion having a relatively low brightness may correspond to '0'.
- the shape may differ depending on the brightness value. For example, if the brightness value is small, the mark may be a circle smaller than a pixel.
- the present invention is not limited to this configuration.
- the first area S1 may be Gaussian-blurred. Since the first region is thereby filtered to frequencies below a predetermined range, the filtered first region has frequencies below that range. That is, even when a virtual code is inserted into the periphery of a high-frequency pixel (one with a frequency in the first range), the frequency is ultimately reduced, making it difficult for a person to perceive the change. As a result, watermarks can later be inserted into at least part of the area where the virtual code was inserted, as described below, without a person sensing any incongruity.
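The blurring step can be sketched as follows; a 3x3 binomial kernel stands in for the Gaussian blur, whose radius the text does not specify:

```python
# Sketch: Gaussian-blurring the region that will carry the code, so the
# inserted marks end up below the perceptible high-frequency range. A
# 3x3 binomial kernel is used as a small Gaussian approximation; the
# kernel and sample values are illustrative assumptions.

KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights sum to 16

def gaussian_blur_pixel(image, r, c):
    """Blur interior pixel (r, c) with the 3x3 binomial kernel."""
    acc = sum(KERNEL[i][j] * image[r - 1 + i][c - 1 + j]
              for i in range(3) for j in range(3))
    return acc / 16

# A sharp spike is flattened: the centre loses most of its excess energy.
spike = [[0, 0, 0],
         [0, 16, 0],
         [0, 0, 0]]
print(gaussian_blur_pixel(spike, 1, 1))  # 16*4/16 = 4.0
```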
- the virtual code may be data having any bit, and illustratively may be composed of bits applied to all the pixels of the first area S1.
- the virtual code may be data in which the data having a predetermined bit is repeated.
- the virtual code may be formed in the form of repeated 64-bit data.
- the virtual code is data for pre-selecting pixels in the first area S1 that remain highly robust under various changes such as filtering and resizing.
- the step of synthesizing the virtual code with the first image data to generate the second image data (S123) may combine the first image data (the original image) with the first region IS (see FIG. 6) to which the virtual code is applied.
- the second image data can be generated by synthesizing the first area to which the first image data and the virtual code are applied by alpha merge.
- the alpha merge means compositing the first area onto the first image data while modifying the alpha value (brightness intensity).
- the alpha value of the first image data can be processed at an intermediate level. For example, if the alpha value has a range of 1 to 100, the ratio of the alpha value of the first image data to that of the first area may be set from 1:0.3 to 1:0.8, preferably 1:0.5. When the ratio is less than 1:0.3, detection after synthesis becomes difficult; when the ratio is more than 1:0.8, a person may perceive the change.
- the alpha value can therefore be varied during synthesis, but it should be set to a range in which detection remains easy while a person cannot perceive the change.
- the synthesis between regions is made by extracting the common region, and the synthesis between the region and the image data can be performed by the above-described alpha merge or the like.
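The alpha merge above can be sketched as a weighted blend. The 1:0.5 ratio follows the text; reading the ratio as normalized blend weights is an interpretation, since the exact formula is not given:

```python
# Sketch of the alpha merge: blend the code-bearing overlay into the
# original image data at an alpha ratio of 1:0.5 (per the text), read
# here as normalized blend weights -- an interpretation, not the
# patent's exact formula. Pixel values are illustrative.

def alpha_merge(base, overlay, ratio=0.5):
    """Blend `overlay` onto `base`: weights 1 and `ratio`, normalized so
    the result stays in the original brightness range."""
    return [[(b + ratio * o) / (1 + ratio)
             for b, o in zip(brow, orow)]
            for brow, orow in zip(base, overlay)]

merged = alpha_merge([[100, 100]], [[255, 0]], ratio=0.5)
print(merged)  # each pixel is pulled one third of the way to the overlay
```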
- the second image data may be compared with the first image data to detect the second area, and the second area may be extracted as the insertion region (S124). This insertion region can then be stored.
- the second region can be detected from the second image data I2 based on the first image data and a threshold value for various changes (brightness, color, contrast, and the like).
- brightness will be used as a reference.
- the second area may be composed of at least one pixel as in the first area.
- the second region can be located in the first region.
- the second region may consist of the same number of pixels and locations as the first region.
- a pixel having brightness equal to or greater than the reference value of brightness in each pixel of the first area S1 is processed as '1', and a pixel having brightness less than the reference value can be processed as '0'. Accordingly, the second area S2 can be detected.
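The thresholding just described can be sketched directly; the coordinates and brightness values are illustrative:

```python
# Sketch: binarise the synthesised region against a brightness reference
# ('1' at or above the reference, '0' below) and keep the '1' pixels as
# the second area S2. Pixel coordinates, brightness values, and the
# reference are illustrative assumptions.

def detect_second_area(pixels, reference):
    """Map each (coord, brightness) pair to a bit; keep the '1' coords."""
    return {coord for coord, brightness in pixels.items()
            if brightness >= reference}

S1 = {(0, 1): 200, (1, 0): 90, (1, 2): 210, (2, 1): 40}
print(detect_second_area(S1, reference=128))  # the two bright pixels survive
```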
- the second area S2 can be detected as a part of the first area S1 shown in Fig. 6 as described above.
- the second region S2 detected as described above may be an insertion region to which a watermark to be described later is applied.
- the second region is made up of pixels whose robustness is maintained even when a third party or the like alters the image.
- the watermark can therefore be detected easily even if the image data is deformed. This effect is obtained by comparing the first image data with the synthesized second image data into which the virtual code was inserted.
- a plurality of second image data may be generated with different compression ratios.
- the second image data may include the second-first image data I2a, the second-second image data I2b, and the second-third image data I2c.
- the change in brightness or the like may be different for each frequency, so that the second region extracted by comparing with the first image data may also be different.
- the second-first area S2A is extracted from the second-first image data I2a
- the second-second area S2B is extracted from the second-second image data I2b
- the second-third area S2C can be extracted from the second-third image data I2c.
- the number of pixels in the second-first area S2A is seven,
- the number of pixels in the second-second area S2B is six,
- and the number of pixels in the second-third area S2C is four.
- the present invention is not limited to this, and the same number and position may be used.
- the second area can be determined as the area commonly extracted from the second-first area S2A, the second-second area S2B, and the second-third area S2C.
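Determining the common region can be sketched as a set intersection over the per-compression-ratio areas; the coordinate sets are illustrative:

```python
# Sketch: the insertion region is the intersection of the second areas
# extracted at different compression ratios, so only pixels that survive
# at every ratio remain. The coordinate sets are illustrative assumptions.

def common_region(*areas):
    """Intersect the per-compression-ratio second areas."""
    result = set(areas[0])
    for area in areas[1:]:
        result &= area
    return result

S2A = {(0, 1), (1, 0), (1, 2), (2, 1)}  # from I2a
S2B = {(0, 1), (1, 2), (2, 1)}          # from I2b
S2C = {(0, 1), (1, 2)}                  # from I2c
print(common_region(S2A, S2B, S2C))  # pixels common to all three areas
```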
- the second region, which is the insertion region, therefore provides improved robustness even at different compression ratios, allowing the watermark to be easily extracted even when the image is deformed by a third party.
- a watermark corresponding to the user information may be inserted into the insertion area and combined with the first image data (S130). This makes it possible to combine the first image data with a watermark inserted in the insertion area robust to the change and to provide the first image data to the user.
- FIG. 10 is a diagram illustrating a method of generating a third area
- FIG. 11 is a flowchart illustrating a method of generating a watermark corresponding to user information in FIG. 10
- FIG. 12 is a diagram illustrating a method of generating a fourth region,
- FIG. 13 is a diagram showing third image data, and
- FIG. 14 is a flowchart of a method of inserting a watermark.
- the step of synthesizing the watermark with the first image data includes generating a third region by synthesizing the first region with the insertion region into which the watermark is inserted (S131), filtering the third region to generate a fourth region (S132), and generating third image data by combining the fourth region with the first image data (S133).
- a first area S1 and an insertion area S2' into which a watermark is embedded may be generated.
- the first area S1 may be a region having a frequency in a first range detected from the first image data as described with reference to FIG.
- the insertion area S2 'into which the watermark is inserted may be a region to which a watermark is applied in the second area detected as the insertion area.
- the watermark may include a data key and a recovery code corresponding to the user information.
- the insertion area S2' into which the watermark is inserted may, like the first area, be represented as white circles within pixels.
- the common area can be extracted by combining the first area S1 and the inserted area S2 'into which the watermark is inserted.
- the synthesis of the first area S1 and the insertion area S2' into which the watermark is inserted can be performed by a bitwise AND operation.
- the first region, the second region, and the third region may be circular as described above, and the size of the circle may be variously controlled according to the size of the image.
- the fourth region S4 may be generated by filtering the third region.
- the filtering may be a Gaussian blur. The third region is thereby filtered to frequencies lower than the first frequency, so the fourth region can have a frequency lower than the first frequency. That is, even when a watermark is inserted into the periphery of a high-frequency pixel (one with a frequency in the first range described above), the frequency is ultimately reduced, making it difficult for a person to perceive the change caused by the watermark insertion.
- a different index may be provided for each user requesting an image or the like from a server.
- the index may illustratively be a number of 32 bits or more. Therefore, since a watermark can be generated regardless of the number of users, there is no limit on the number of users.
- different indexes may correspond to each user information.
- since the user information may include various information such as the user ID, the IP address, and the access time, a user who illegally circulated content can be detected through the user information.
- the index can be represented by a binary number, and the binary number can be reflected as the watermark data value. Accordingly, by reading the watermark data value in reverse, the user information can be extracted to find out who illegally circulated the content.
- for example, the index '1394878345' can be represented by 01010101110110, 00010111000111, 11110111010111, and 00011101110110, and applied to the data value P1 of the watermark.
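The index-to-bits mapping can be sketched as follows. The 32-bit width follows the text; treating the data value P1 as a single fixed-width binary string (rather than the four groups shown above) is a simplifying assumption:

```python
# Sketch: encode a per-user index as the watermark data value P1 and
# decode it back at detection time. The 32-bit width follows the text;
# using one fixed-width string instead of grouped segments is an
# illustrative simplification.

def index_to_bits(index, width=32):
    """Binary-encode the user index as a fixed-width bit string."""
    return format(index, '0{}b'.format(width))

def bits_to_index(bits):
    """Inverse mapping used at detection time to look the user back up."""
    return int(bits, 2)

p1 = index_to_bits(1394878345)
print(p1, '->', bits_to_index(p1))  # round-trips back to 1394878345
```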
- the watermark may include a recovery code P2.
- the recovery code P2 may include an error correction code (ECC).
- the recovery code P2 may further include at least one of a cyclic redundancy check (CRC) and a Reed-Solomon code (R-S code).
- whether the cyclic redundancy check (CRC) and the Reed-Solomon code (R-S code) are applied in the recovery code P2 can be controlled according to the number of insertion regions.
- the watermark can recover the original data even if the data value is distorted or transformed through the recovery code.
- the recovery code P2 can compensate for this.
- the watermark may be composed of a data value, a cyclic redundancy check, and a Reed-Solomon code. The integrity check is performed through the cyclic redundancy check, and errors from adjacent distortion can be easily corrected through the Reed-Solomon code.
- the watermark can be built on a convolutional code structure.
- the watermark can be composed of a structure in which the data value, the cyclic redundancy check, and the Reed-Solomon code are repeated two or more times, so that error detection and correction capability are improved.
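The repeated payload structure can be sketched as below. For brevity, the stdlib CRC-32 stands in for the cyclic redundancy check, and the Reed-Solomon stage, whose parameters the text does not give, is omitted; `repeats=2` follows the "two or more times" wording:

```python
# Sketch of the repeated watermark payload: data value + recovery code,
# repeated twice or more. Stdlib CRC-32 stands in for the cyclic
# redundancy check; the Reed-Solomon stage named in the text is omitted
# because its parameters are not given. All concrete values are
# illustrative assumptions.
import zlib

def build_watermark(p1_bits, repeats=2):
    """Append a CRC over the data bits, then repeat the unit."""
    crc = format(zlib.crc32(p1_bits.encode()) & 0xFFFFFFFF, '032b')
    unit = p1_bits + crc
    return unit * repeats

def check_unit(unit, data_len=32):
    """Verify one repeat: recompute the CRC over its data bits."""
    data, crc = unit[:data_len], unit[data_len:]
    return format(zlib.crc32(data.encode()) & 0xFFFFFFFF, '032b') == crc

wm = build_watermark(format(1394878345, '032b'))
print(len(wm), check_unit(wm[:64]))  # 128 True
```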
- the number of watermark bits is compared with the number of pixels in the insertion region (S134).
- when the number of pixels in the insertion region is equal to or greater than the number of watermark bits, the watermark is inserted into the first image data (S135). This comparison may also be performed before the third region described above is generated (S131), thereby enhancing robustness when the watermark is inserted.
- otherwise, the watermark may not be inserted.
- that is, the watermark can be inserted only into first image data whose insertion region holds at least the number of watermark bits. With this configuration, errors caused by distortion and deformation can be prevented.
- the watermark corresponding to the user information may be combined with the third region as a common region of the insertion region, and the fourth region may be generated through filtering.
- the third image data may be generated by combining the fourth region and the first image data (S133).
- the third image data may be generated by compositing the first image data, which is the original image, with the fourth region by an alpha merge.
- the alpha merge means compositing the fourth region onto the first image data while modifying the alpha value (brightness intensity), with the alpha value of the first image data processed at an intermediate level.
- the ratio of the alpha value of the first image data to that of the fourth region may be set from 1:0.3 to 1:0.8, preferably 1:0.5.
- when the ratio is less than 1:0.3, detection after synthesis becomes difficult; when the ratio is more than 1:0.8, a person may perceive the change.
- the alpha value can therefore be varied during synthesis, but it should be set to a range in which detection remains easy while a person cannot perceive the change.
- the third image data thus synthesized can be transmitted to the user corresponding to the watermark.
- FIG. 15 is a flowchart of a method of extracting watermarking according to an embodiment
- FIG. 16 is a diagram illustrating a method of extracting a watermark.
- the watermark extraction method includes inputting fourth image data (S210), matching the fourth image data with previously stored first image data and loading the insertion region of the matched first image data (S220), comparing the insertion region of the first image data with the corresponding comparison region of the fourth image data to extract a watermark (S230), loading user information corresponding to the watermark (S240), and outputting the user information (S250).
- the fourth image data can be input (S210).
- the fourth image data may be image data illegally distributed by a third party.
- the fourth image data may be third image data reflecting a watermark, and may be third image data modified by various changes.
- the fourth image data may be image data in which the third image data is transformed into various sizes and brightness.
- the fourth image data may be matched with the previously stored first image data, and the insertion region of the first image data may be loaded (S220). That is, the first image data corresponding to the fourth image data can be retrieved. This can be done by an administrator, or by various matching methods between the fourth image data and the first image data.
- the watermark can be extracted by comparing the insertion region of the first image data with the comparison region of the fourth image data corresponding to the insertion region of the first image data (S230). Since the comparison area has the same position as the insertion area of the first image data, comparison between image data can be easily performed.
- the extraction region can be detected based on a threshold value for various changes (brightness, color, contrast, etc.) between the comparison region of the fourth image data and the insertion region of the first image data.
- the extraction region may contain the pre-inserted watermark or a distorted, deformed watermark.
- the change will be described based on brightness.
- the extraction region may be composed of at least one or more pixels, such as an insertion region and a comparison region.
- a pixel having brightness equal to or greater than a reference value for brightness in each pixel of the comparison region is processed as '1'
- a pixel having a smaller brightness can be processed as '0' so that the extraction region can be detected.
- the extraction region may contain the bits of the pre-inserted watermark or of a modified watermark, and may include a data value P1 and a recovery code P2.
- the watermark may have the cyclic redundancy check and Reed-Solomon code bits repeated two or more times. For example, if the data value, the cyclic redundancy check, and the Reed-Solomon code total 32 bits, the watermark may be 64 bits or more.
- the extraction region can recover the error through the recovery code as described above. Thereby, reliability in data value extraction can be improved.
- since the watermark has a structure in which the data value P1 and the recovery code P2 are repeated (for example, n times), a plurality of data values P1 and recovery codes P2 can be combined,
- and the data value can thereby be recovered.
- the error can be recovered by a combination of the first recovery code, the second recovery code, and the third recovery code.
- the integrity of the recovered bits can be guaranteed in the sense that, when recovery is impossible, an output indicating unrecoverability is provided.
- various combinations are possible, so that the recovery rate can be greatly improved.
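One way the combination of repeated copies can recover the data value, as described above, is a per-bit majority vote; an odd repeat count is assumed here so every vote is decisive, and the bit strings are illustrative:

```python
# Sketch: recover the data value from repeated watermark copies by a
# bitwise majority vote, one possible realisation of the "combination"
# of repeated data values and recovery codes described in the text. An
# odd copy count is assumed so each vote is decisive; the bit strings
# are illustrative.

def majority_vote(copies):
    """Per-position vote across equally long bit-string copies."""
    return ''.join('1' if sum(c[i] == '1' for c in copies) > len(copies) // 2
                   else '0'
                   for i in range(len(copies[0])))

original = '10110010'
received = ['10110010',   # intact copy
            '10100010',   # one bit flipped
            '00110010']   # another bit flipped
print(majority_vote(received) == original)  # True
```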
- since the recovered data value P1 indicates the index as described above, the corresponding user information can be retrieved (S240).
- the loaded user information can be output (S250).
- the user information may be provided to the manager or the like in various manners.
- the user information may be provided to the administrator via the display device, but is not limited thereto.
- owing to the improved robustness described above, the administrator can more accurately detect a third party who illegally circulated the image data, even if that party distorted or deformed it.
- FIG. 17 is a conceptual diagram of a watermarking system according to an embodiment.
- the watermarking system includes a server 1000, a user 2000, and a provider 3000.
- the server 1000 may provide a digital document to the user 2000.
- the digital document may include image data.
- the image data may include an image as described above, and the image may be a webtoon or the like.
- the server 1000 may receive first image data from a plurality of providers 3000 using a digital document, and may insert a watermark into the received first image data. At this time, the server 1000 can receive the first image data directly or through various means and paths from a plurality of providers 3000.
- the server 1000 may combine the first image data with the insertion region in which the watermark is inserted, and transmit the combined third image data to the user 2000.
- the server 1000 may provide the user 2000 with the third image data directly or by various means and paths.
- the server 1000 receives the third image data from a third party, detects the watermark embedded in it, and identifies the user 2000 who provided the third image data. That is, the server 1000 can detect the watermark of video data distributed without the permission of the provider 3000 and identify the user 2000 who circulated it without permission.
- the third party includes the provider 3000 and may be a person who has received the distribution permission or the like for the video data from the provider 3000.
- the server 1000 may include a receiving unit 1100, a transmitting unit 1200, a preprocessing unit 1300, an inserting unit 1400, and a detecting unit 1500.
- the receiving unit 1100 may receive the first image data from the provider 3000 and receive the second image data transmission request signal from the user 2000.
- the receiving unit 1100 may include a first receiving unit 1110 and a second receiving unit 1120.
- the first receiving unit 1110 can receive the first image data received from the provider 3000.
- the provider 3000 may be a copyright holder of image data and an author.
- the second receiving unit 1120 may receive the fourth image data from the user 2000.
- the fourth image data may be image data obtained by combining, in the server 1000, the first image data with the insertion region into which the watermark provided to the user 2000 was inserted.
- that is, the fourth image data may be the third image data.
- however, the present invention is not limited to this, and the fourth image data may include any image data processed by the user or a third party in addition to the third image data.
- the transmitting unit 1200 may transmit to the user 2000 the third image data obtained by combining the first image data with the inserted watermark, and may transmit to the administrator or the like the user information detected through the watermark in the fourth image data.
- the administrator may be an administrator of the server 1000, but the present invention is not limited thereto.
- the transmitter 1200 may include a first transmitter 1210 and a second transmitter 1220.
- the first transmitting unit 1210 can transmit the third image data obtained by combining the first image data and the embedded region into which the watermark is inserted, to the user 2000.
- the second transmitting unit 1220 can transmit the detected user information to the manager using the watermark in the fourth video data.
- the preprocessing unit 1300 can detect the insertion area from the first image data.
- the preprocessing unit 1300 may include a first region detection unit 1310, a first filter unit 1320, a virtual code generation unit 1330, a second image data generation unit 1340, and a second region extraction unit 1350.
- the first area detecting unit 1310 may detect a first area having a frequency in a first range using the first image data. As described above, the first area detecting unit 1310 can detect the first area whose frequency, obtained by filtering the first image data, corresponds to the first range.
- the first area detecting unit 1310 may convert the first image data into the frequency domain as described above. That is, the first area detector 1310 passes the first image data through a high-pass filter and extracts only the high-frequency band. In this way, the first area detecting unit 1310 can detect the first area having a frequency in the first range from the first image data.
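The frequency-domain step above can be sketched as follows — a minimal high-pass detection of the first region. The cutoff ratio and the mean-based threshold are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def detect_first_region(image, cutoff_ratio=0.25):
    """Return a boolean mask of pixels dominated by high-frequency content.

    The image is moved to the frequency domain, the low band is zeroed
    (a high-pass filter), and pixels whose residual magnitude exceeds a
    threshold are kept as the 'first region'.
    """
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff_ratio), int(w * cutoff_ratio)
    f[cy - ry:cy + ry, cx - rx:cx + rx] = 0           # zero out the low band
    high = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))  # residual high-frequency energy
    return high > high.mean()                         # pixels in the first range
```

A flat area produces little high-frequency residual, while textured or noisy areas survive the filter, which is the behavior the first area detecting unit relies on.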
- the description given above applies equally to these contents.
- the first region may be composed of pixels having a first frequency in the first image data I1.
- the first filter unit 1320 may perform filtering on the detected first region.
- the first filter unit 1320 may perform Gaussian blur processing on the first area. Since the first region is thereby filtered down to frequencies below a predetermined range, the filtered first region may have frequencies below that range. That is, even when a pixel carries a high frequency (a frequency in the first range described above) and a virtual code is inserted into its peripheral region, the frequency is ultimately reduced, making it difficult for a person to perceive the change.
- the description given above of the filtering of the first area applies equally here.
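The Gaussian blur filtering described here can be sketched with a separable kernel applied along rows and then columns; the sigma value below is an assumed parameter, since the disclosure fixes no value:

```python
import numpy as np

def gaussian_blur(region, sigma=1.0):
    """Separable Gaussian blur: build a 1-D kernel and convolve each
    row, then each column. Normalizing the kernel preserves overall
    brightness while attenuating high frequencies."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                      # preserve total brightness
    rows = np.apply_along_axis(np.convolve, 1, region.astype(float), kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")
```

Blurring spreads a sharp peak over its neighborhood, which is exactly the frequency reduction the filter unit uses to keep the inserted code imperceptible.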
- the virtual code generation unit 1330 can generate virtual codes applied to the first area.
- the virtual code may be located on the first area detected in the first image data as described above.
- the virtual code may be data having arbitrary bits, and may be composed of bits applied to all the pixels.
- the virtual code may take the form of repeated data having certain bits.
- the virtual code is data for pre-selecting pixels in the first area S1 that remain highly robust under changes such as filtering and resizing; in the watermarking method according to the embodiment, robustness against such various changes (filtering, resizing, etc.) can thereby be improved. The description given above of the virtual code and its generation applies equally.
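One way to realize a virtual code of "repeated data having certain bits" is to tile a fixed random bit block over every pixel of the region; the block length and seed below are illustrative assumptions:

```python
import numpy as np

def make_virtual_code(shape, block_bits=32, seed=0):
    """Generate a virtual code: a fixed random bit block, tiled
    (repeated) across all pixels of the given region shape."""
    rng = np.random.default_rng(seed)
    block = rng.integers(0, 2, size=block_bits, dtype=np.uint8)
    # np.resize repeats the block cyclically until the region is filled
    return np.resize(block, shape[0] * shape[1]).reshape(shape)
```

Because the pattern is deterministic for a given seed, the same code can later be regenerated when comparing the synthesized image against the original.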
- the second image data generation unit 1340 may generate the second image data by combining the first image data and the first region to which the virtual code is applied.
- the second image data generator 1340 can generate the second image data through an alpha merge. That is, the second image data generation unit 1340 may synthesize the second image data by combining the first image data with the first region to which the virtual code is applied, while modulating the brightness intensity.
- the description given above of generating the second image data, including the alpha value and the synthesis method, applies equally here.
- the second region extracting unit 1350 may extract the second region by comparing the second image data with the first image data.
- the second region may be at least a partial region of the first region.
- the extracted second region may be an insertion region to which a watermark is applied, as described later. As described above, the second region is made up of pixels whose robustness is improved even when the third region or the like is changed. As a result, even if a third party deforms the image data, the party that leaked it can be easily detected. This effect is obtained by comparing the synthesized second image data, into which the virtual code is inserted, with the first image data.
- the description given above of detecting or extracting the second area applies equally here.
- the inserting unit 1400 may generate a third region into which a watermark is inserted, and may generate third image data by combining the filtered fourth region with the first image data.
- the insertion unit 1400 may include a third region generation unit 1410, a second filter unit 1420, and a third image data generation unit 1430.
- the third region generator 1410 may generate a third region by combining the insertion region extracted through the plurality of second image data with the first region in the pre-processing unit.
- the third area generator 1410 may synthesize the third area through the bit operation between the first area and the embedded area. At this time, a watermark corresponding to the user information can be inserted into the insertion area.
- the watermark may include a recovery code P2.
- the recovery code P2 may include an error correction code (ECC).
- the recovery code P2 may further include at least one of a cyclic redundancy check (CRC) and a Reed-Solomon code (R-S code).
- whether the cyclic redundancy check (CRC) and the Reed-Solomon code (R-S code) are applied in the recovery code P2 can be controlled according to the number of insertion areas.
- through the recovery code, the watermark can recover the original data even if the data value is distorted or transformed.
- the recovery code P2 can compensate for this.
- the watermark may be composed of a data value, a cyclic redundancy check, and a Reed-Solomon code. Accordingly, an integrity check is performed through the cyclic redundancy check, and errors can easily be corrected through the Reed-Solomon code when the data is distorted.
- the watermark can be made up of a convolutional code structure.
- the watermark can be composed of a structure in which the data value, the cyclic redundancy check, and the Reed-Solomon code are repeated two or more times, so that error detection and correction capability can be improved.
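A repeated (data value + check bits) payload of this kind can be sketched as follows. `zlib.crc32` stands in for the cyclic redundancy check, and the Reed-Solomon parity the disclosure also mentions is omitted for brevity; the repeat count and unit layout are illustrative:

```python
import zlib

def build_watermark(data_value: bytes, repeats: int = 2):
    """Build a watermark bit sequence as (data value + CRC-32 check
    bits) repeated two or more times, expanded MSB-first into bits
    suitable for per-pixel embedding."""
    unit = data_value + zlib.crc32(data_value).to_bytes(4, "big")
    payload = unit * repeats
    return [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
```

Repetition is what later allows the detector to combine several (possibly damaged) copies and still validate the data value against its check bits.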
- the watermark can be inserted only into first image data in which the number of insertion regions is greater than the number of bits of the watermark. This check may be performed before the watermark is inserted into the insertion area and before the third area is generated from the first area. With this configuration, robustness can be improved from the time of watermark embedding.
- otherwise, the watermark may not be inserted.
- when a plurality of first video data are input, the watermark can be inserted only into first video data in which the number of insertion regions is greater than the number of bits of the watermark. With this configuration, errors caused by distortion and deformation can be prevented.
- the description of the third area given above applies equally to the third area generator 1410.
- the second filter unit 1420 may filter the third area to generate the fourth area S4.
- the filtering may be a Gaussian blur. Accordingly, the third region is filtered at a frequency lower than the first frequency, so that the fourth region can have a frequency lower than the first frequency. That is, even when a pixel carries a high frequency (a frequency in the first range described above) and a watermark is inserted into its peripheral region, the frequency is ultimately reduced, making it difficult for a person to perceive the change caused by the watermark insertion.
- the third image data generation unit 1430 may generate the third image data by combining the fourth region and the first image data.
- the third image data may be generated by compositing the first image data, which is the original image, with the fourth region through an alpha merge.
- the alpha merge means synthesizing the first image data with the fourth region while modulating the alpha value (brightness intensity), and the alpha value of the first image data is processed at an intermediate level.
- the ratio of the alpha value of the first image data to that of the fourth region may be set between 1:0.3 and 1:0.8, preferably 1:0.5.
- if the ratio is less than 1:0.3, detection after synthesis becomes difficult; if it exceeds 1:0.8, a person may perceive the change.
- the alpha value can be varied during synthesis. However, it should be set to a range in which detection remains easy while a person cannot perceive the change.
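One plausible reading of the alpha merge at a 1:0.5 ratio is a normalized weighted blend of the original and the watermarked region. This is an illustrative sketch, not the disclosure's exact compositing formula:

```python
import numpy as np

def alpha_merge(original, region, ratio=0.5):
    """Composite the original image (weight 1) with the watermarked
    region (weight `ratio`, the 1:0.3 to 1:0.8 range, 0.5 preferred),
    normalized so that identical inputs pass through unchanged."""
    out = (original.astype(float) + ratio * region.astype(float)) / (1.0 + ratio)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Lower ratios pull the result toward the original (harder to detect), higher ratios toward the region (more visible), matching the trade-off described above.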
- the third image data thus synthesized can be transmitted to the user corresponding to the watermark.
- the description of third image data generation given above applies equally.
- the detection unit 1500 can detect the watermark from image data circulated without permission and detect the user information, as described above.
- the image data circulated without permission may be input from an administrator or the like, but is not limited thereto.
- the detection unit 1500 may include an image matching unit 1510, a watermark detection unit 1520, a watermark matching unit 1530, and a user information detection unit 1540.
- the image matching unit 1510 may match the fourth image data received through the second receiving unit 1120 with the first image data.
- the image matching unit 1510 may match the fourth image data (for example, the leaked third image data, or image data in which the third image data has been modified) with the previously stored first image data.
- the image matching unit 1510 may match the fourth image data with the first image data through SURF (Speeded-Up Robust Features).
- the image matching unit 1510 may retrieve the first image data corresponding to the user from the fourth image data. This can be done by an administrator or by various matching methods between the fourth image data and the first image data.
- the image matching unit 1510 may represent a plurality of points of the image data, for example a first point and a second point, as vectors. Since there are a plurality of first points and a plurality of second points, there may be a plurality of vectors. In this case, the image matching unit 1510 may match the first image data with the second image data to be compared when all of the plurality of vectors match. In addition, the image matching unit 1510 may match the first image data with the second image data even when the plurality of vectors have the same direction in first and second image data of the same size.
- the present invention is not limited thereto.
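The direction-based matching described above can be sketched with cosine similarity over paired keypoint vectors. The 0.99 threshold is an assumption, and a production system would rely on a SURF-style descriptor matcher rather than this simplified check:

```python
import numpy as np

def vectors_match(vecs_a, vecs_b, cos_thresh=0.99):
    """Declare two images matched when their keypoint vectors all point
    in the same direction (cosine similarity above a threshold)."""
    if len(vecs_a) != len(vecs_b):
        return False
    for a, b in zip(np.asarray(vecs_a, float), np.asarray(vecs_b, float)):
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if cos < cos_thresh:          # any mismatched direction fails the match
            return False
    return True
```

Comparing directions rather than raw positions is what lets same-size, same-orientation images match even when their absolute pixel values differ.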
- the watermark detection unit 1520 can extract the watermark by comparing the insertion area of the matched first image data with the comparison area of the fourth image data corresponding to the insertion area of the first image data. Since the comparison area has the same position as the insertion area of the first image data, comparison between image data can be easily performed.
- the watermark detection unit 1520 may extract the comparison region of the fourth image data corresponding to the insertion region of the first image data, and compare the insertion region and the comparison region to extract the watermark.
- the watermark detection unit 1520 can detect an extraction region based on a threshold value for various changes (brightness, color, contrast, etc.) between the comparison region of the fourth image data and the insertion region of the first image data.
- the extraction region may be a pre-inserted watermark, or a distorted or deformed watermark.
- hereinafter, the change is described with reference to brightness.
- the extraction region, like the insertion region and the comparison region, may be composed of at least one pixel.
- in each pixel of the comparison region, a pixel whose brightness is equal to or greater than a reference value is processed as '1', and a pixel whose brightness is below the reference value is processed as '0', so that the extraction region can be detected.
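The thresholding rule above can be sketched as follows; the zero reference value is an assumed default, not a value from the disclosure:

```python
import numpy as np

def extract_bits(comparison_region, insertion_region, reference=0):
    """Recover watermark bits: brightness of the comparison region above
    the reference (relative to the insertion region) becomes '1', the
    rest '0'."""
    diff = comparison_region.astype(int) - insertion_region.astype(int)
    return (diff > reference).astype(np.uint8)
```

The cast to signed int before subtracting avoids uint8 wrap-around when the comparison region is darker than the insertion region.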
- the watermark matching unit 1530 may receive watermark information on the first image data matched with the fourth image data from the third region generating unit.
- the present invention is not limited to this, and the watermark information may be data-processed corresponding to the fourth image data.
- the watermark matching unit 1530 can retrieve the user information corresponding to the watermark. That is, the extraction region may hold the bits of a pre-inserted or modified watermark, and may include a data value P1 and a recovery code P2.
- the watermark may have the cyclic redundancy check and Reed-Solomon code bits repeated two or more times. For example, if the data value, the cyclic redundancy check, and the Reed-Solomon code total 32 bits, the watermark may be 64 bits or more.
- errors in the extraction region can be recovered through the recovery code, as described above. Thereby, reliability of data value extraction can be improved.
- since the watermark has a structure in which the data value P1 and the recovery code P2 are repeated (for example, n times) as described above, a plurality of data values P1 and recovery codes P2 can be combined to recover the data value.
- the error can be recovered by a combination of the first recovery code, the second recovery code, and the third recovery code.
- the integrity of the recovered bits can be guaranteed in the sense that, when recovery is impossible, an output indicating unrecoverability is provided.
- various combinations are possible, so that the recovery rate can be greatly improved.
- since the recovered data value P1 indicates the index as described above, the corresponding user information can be retrieved.
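One simple way to combine the n repeated (data value, recovery code) units is a bitwise majority vote that returns an explicit unrecoverable result on a tie, preserving integrity. This is an illustrative combination strategy, not necessarily the one the disclosure intends:

```python
def recover_data_value(copies):
    """Combine repeated watermark copies (lists of bits) by a bitwise
    majority vote. Returns the recovered bit list, or None when any bit
    position has no majority, signalling that recovery is impossible."""
    recovered = []
    for position in zip(*copies):
        ones = sum(position)
        if ones * 2 == len(position):     # tie: no majority, unrecoverable
            return None
        recovered.append(1 if ones * 2 > len(position) else 0)
    return recovered
```

With three or more copies, a single corrupted copy is outvoted at every bit position, which is why repetition improves the recovery rate.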
- the user information detection unit 1540 may output the user information corresponding to the watermark information matched by the watermark matching unit 1530.
- the watermark may correspond to each third image data, and the watermark may be stored for each user.
- the user information detection unit 1540 can output the corresponding user information.
- the user information may be provided to the administrator in various ways.
- the user information may be provided to the administrator via the display device, but is not limited thereto.
- since robustness is improved even if a third party distorts or deforms the image data as described above, the administrator can more accurately detect the third party who illegally circulated it.
- " portion " refers to a hardware component such as software or an FPGA (field-programmable gate array) or ASIC, and 'part' performs certain roles.
- 'part' is not meant to be limited to software or hardware.
- &Quot; to " may be configured to reside on an addressable storage medium and may be configured to play one or more processors.
- 'parts' may refer to components such as software components, object-oriented software components, class components and task components, and processes, functions, , Subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- the functions provided in the components and components may be further combined with a smaller number of components and components or further components and components.
- the components and components may be implemented to play back one or more CPUs in a device or a secure multimedia card.
Claims (7)
- 1. A watermarking method comprising: receiving first image data; detecting an insertion region from the first image data; and inserting a watermark corresponding to user information into the insertion region and combining it with the first image data, wherein detecting the insertion region comprises: detecting a first region having a frequency in a first range from the first image data; generating a virtual code to be applied to the first region; synthesizing the virtual code with the first image data to generate second image data; and comparing the second image data with the first image data to detect a second region and extracting the second region as the insertion region.
- 2. The method according to claim 1, wherein the second image data are plural and are regenerated by compressing the first image data at different compression ratios.
- 3. The method according to claim 2, wherein, in the step of storing as the insertion region, the second region is a region commonly extracted from the plurality of second image data.
- 4. The method according to claim 1, wherein combining with the first image data comprises: creating a third region in which the watermark-embedded insertion region and the first region are combined; filtering the third region to generate a fourth region; and combining the fourth region with the first image data to generate third image data.
- 5. The method according to claim 1, wherein the watermark corresponding to the user information includes a data key and a recovery code, and combining with the first image data comprises: comparing the number of bits of the watermark with the number of insertion regions; and inserting the watermark only for first image data in which the number of insertion regions is greater than the number of bits of the watermark.
- 6. The method according to claim 5, wherein the recovery code includes an error correction code (ECC) and further includes at least one of a cyclic redundancy check (CRC) and a Reed-Solomon code (R-S code).
- 7. A watermark detection method comprising: receiving fourth image data; matching the fourth image data with pre-stored first image data; loading an insertion region of the matched first image data; extracting a watermark by comparing the insertion region of the first image data with a comparison region of the fourth image data corresponding to the insertion region; loading user information corresponding to the watermark; and outputting the user information.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0140990 | 2017-10-27 | ||
KR20170140990 | 2017-10-27 | ||
KR20180037383 | 2018-03-30 | ||
KR10-2018-0037383 | 2018-03-30 | ||
KR1020180102694A KR101959479B1 (en) | 2017-10-27 | 2018-08-30 | Watermarking method and detecting method for the same |
KR10-2018-0102694 | 2018-08-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019083191A1 true WO2019083191A1 (en) | 2019-05-02 |
Family
ID=65949037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/011805 WO2019083191A1 (en) | 2017-10-27 | 2018-10-08 | Watermarking method and method for watermark detection |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101959479B1 (en) |
WO (1) | WO2019083191A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100896618B1 (en) * | 2002-06-27 | 2009-05-08 | 주식회사 케이티 | Apparatus and method for inserting and detecting digital image watermarking |
KR20090104349A (en) * | 2008-03-31 | 2009-10-06 | 주식회사 케이티 | Wartermark insertion/detection apparatus and method thereof |
KR101418394B1 (en) * | 2010-10-26 | 2014-07-09 | 한국전자통신연구원 | Video Watermarking Embedding And Detection Apparatus And Method Using Temporal Modulation And Error-Correcting Code |
KR20140122609A (en) * | 2013-04-10 | 2014-10-20 | 삼성테크윈 주식회사 | Apparatus and method for processing watermark, and apparatus for photographing image |
KR101522555B1 (en) * | 2009-02-20 | 2015-05-26 | 삼성전자주식회사 | Method and apparatus for video display with inserting watermark |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4079620B2 (en) * | 2001-10-30 | 2008-04-23 | ソニー株式会社 | Digital watermark embedding processing apparatus, digital watermark embedding processing method, and computer program |
KR101785194B1 (en) * | 2016-02-29 | 2017-10-12 | 한국과학기술원 | Template Based Watermarking Method for Depth-Image-Based Rendering Based 3D Images and Apparatus Therefor |
-
2018
- 2018-08-30 KR KR1020180102694A patent/KR101959479B1/en active IP Right Grant
- 2018-10-08 WO PCT/KR2018/011805 patent/WO2019083191A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR101959479B1 (en) | 2019-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6546113B1 (en) | Method and apparatus for video watermarking | |
US6510233B1 (en) | Electronic watermark insertion device | |
US6556688B1 (en) | Watermarking with random zero-mean patches for printer tracking | |
US8175322B2 (en) | Method of digital watermark and the corresponding device, and digital camera which can append watermark | |
US6523114B1 (en) | Method and apparatus for embedding authentication information within digital data | |
US20060050926A1 (en) | Data processing method and apparatus | |
US20050185820A1 (en) | Data processing apparatus and method, and storage medium therefor | |
US20100254569A1 (en) | Method and apparatus for inserting a removable visible watermark in an image and method and apparatus for removing such watermarks | |
US7197161B2 (en) | Embedding information in images using two-layer conjugate screening | |
WO2010011035A2 (en) | Apparatus and method for generating structurally multi-patterned watermark, watermark insertion apparatus and method using the same, and watermark detection apparatus and method using the same | |
US20070223778A1 (en) | Method And Apparatus For Video/Image Communication With Watermarking | |
CN107346528B (en) | Image tampering detection method based on double-image reversible fragile watermark | |
WO2019083191A1 (en) | Watermarking method and method for watermark detection | |
WO2013042843A1 (en) | Method for authenticating images on the basis of block units using a reversible watermarking based on a progressive differential histogram | |
JPH1175055A (en) | Method for embedding information and method for extracting information and device for embedding information and device for extracting information and storage medium | |
EP1405519B1 (en) | Video/image communication with watermarking | |
WO2021133133A1 (en) | Electronic drawing security management method using colors | |
Hong et al. | An efficient reversible authentication scheme for demosaiced images with improved detectability | |
WO2021084812A1 (en) | Electronic device | |
US8209543B2 (en) | Watermarking of a processing module | |
US8374491B2 (en) | Methods for reading watermarks in unknown data types, and DVD drives with such functionality | |
US7197159B2 (en) | Amplitude shifted information embedding and detection method based upon the phase equalization | |
JP2000316083A (en) | Information processor, information processing system, information processing method and storage medium | |
JP2002082612A (en) | Device for embedding and detecting digital watermark | |
KR100299728B1 (en) | Electronic Watermark Insertion Device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18870994 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18870994 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.01.2021) |
|
Ref document number: 18870994 Country of ref document: EP Kind code of ref document: A1 |