CN113497908B - Data processing method and device, electronic equipment and storage equipment - Google Patents
- Publication number
- CN113497908B (application CN202010197206.3A)
- Authority
- CN
- China
- Prior art keywords
- watermark
- watermark information
- information
- carrier object
- brightness channel
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/913—Television signal processing therefor for scrambling ; for copy protection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/913—Television signal processing therefor for scrambling ; for copy protection
- H04N2005/91307—Television signal processing therefor for scrambling ; for copy protection by adding a copy protection signal to the video signal
- H04N2005/91335—Television signal processing therefor for scrambling ; for copy protection by adding a copy protection signal to the video signal the copy protection signal being a watermark
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Editing Of Facsimile Originals (AREA)
- Image Processing (AREA)
Abstract
The application discloses a data processing method, which comprises the following steps: obtaining a carrier object and target watermark information; determining embedded region specification information of the target watermark information according to the image characteristics of the carrier object; and embedding the target watermark information into a luminance channel and two chrominance channels of the carrier object according to the embedded region specification information. The method solves the problem that the prior art cannot adapt the watermark robustness to carrier objects with different image characteristics.
Description
Technical Field
The present application relates to the field of computer technology, and in particular, to two data processing methods, as well as corresponding apparatuses, electronic devices, and storage devices.
Background
With the rapid development of multimedia technology and the popularization of the mobile internet, video technology is applied in many fields of daily life. Transmitting and acquiring video information has become increasingly convenient, but because the internet is inherently open and shareable, lawbreakers frequently infringe the copyright of video information. Copyright protection of video information has therefore become a social concern, and video watermarking technology has developed in response. The basic principle of video watermarking is to embed information capable of proving copyright ownership into the video, thereby protecting the copyright.
In the prior art, when watermark information is embedded into video frames, the same embedded region specification information is used for video files with different image characteristics, so the watermark robustness cannot be adapted to video files with different image characteristics.
Disclosure of Invention
The application provides a data processing method to solve the problem that existing watermark embedding methods cannot adapt the watermark robustness to video files with different image characteristics.
The application provides a data processing method, which comprises the following steps:
obtaining a carrier object and target watermark information;
determining embedded region specification information of target watermark information according to the image characteristics of the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
Optionally, the method further comprises:
determining the embedded addition and subtraction coefficients of the target watermark information according to the minimum perceived difference of the preset embedded region;
the embedding the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information includes:
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the specification information of the embedded region and the embedded addition and subtraction coefficients.
Optionally, the determining the embedded region specification information of the target watermark information according to the image feature of the carrier object includes:
and determining the embedded region specification information of the target watermark information according to at least one of the following factors: the resolution of the carrier object and the image texture characteristics of a preset embedded region.
Optionally, the method further comprises:
and scrambling the target watermark information, and embedding the scrambled target watermark information into a brightness channel and two chromaticity channels of the carrier object.
The application also provides a data processing method, which comprises the following steps:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner;
watermark information is extracted from the extraction area.
Optionally, the carrier object is a preset odd number of carrier video frames in succession.
Optionally, the determining the extraction area containing watermark information in the carrier object by using a mode of dividing the carrier object into areas in multiple dimensions includes:
dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier video frame which is positioned at the first half and is selected from the continuous preset odd number of carrier video frames; the second brightness channel refers to a brightness channel of one carrier video frame positioned at the second half of the carrier video frames selected from the continuous preset odd number of carrier video frames;
Calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel;
determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference;
processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information;
and taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information.
Optionally, the determining whether the corresponding two regions are candidate extraction regions including watermark information according to the energy difference includes:
judging whether the energy difference is within a preset energy difference threshold range, if so, determining the two corresponding areas as candidate extraction areas;
if not, determining that the two corresponding regions are not candidate extraction regions.
Optionally, the extracting watermark information from the extraction area includes:
obtaining an evaluation result for each candidate extraction region according to the energy difference;
accumulating the evaluation results of the plurality of candidate extraction areas divided by the same division mode;
Determining extracted binary information according to a preset extraction mechanism and an accumulated result;
and obtaining watermark information according to the binary information.
Optionally, the obtaining watermark information according to the binary information includes:
according to the binary information, binary watermark sequences of a brightness channel and two chromaticity channels are obtained;
obtaining a plurality of watermark starting positions according to the binary watermark sequence;
watermark information is obtained from a binary watermark sequence of a plurality of watermark starting positions.
Optionally, the obtaining watermark information according to the binary watermark sequences of the plurality of watermark starting positions includes:
changing bits with 0 values in the plurality of binary watermark sequence information into bits with-1 values to obtain a plurality of second watermark sequences;
watermark information is obtained according to the binary watermark sequence and a plurality of second watermark sequences.
Optionally, the obtaining watermark information according to the binary watermark sequence and the plurality of second watermark sequences includes:
obtaining a weight corresponding to each second watermark sequence;
obtaining a weighted accumulated value of the plurality of second watermark sequences according to the weights;
obtaining a third watermark sequence according to the accumulated value;
And obtaining watermark information according to the third watermark sequence.
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
obtaining the matching degree of watermark information heads of a plurality of second watermark sequences;
and obtaining the weight corresponding to each second watermark sequence according to the matching degree.
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
obtaining an average value of bits of the binary watermark sequence;
calculating Euclidean distances between the plurality of second watermark sequences and the average value;
and obtaining the weight corresponding to each second watermark sequence according to the Euclidean distance.
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
the weight of each second watermark sequence is set to 1.
The application also provides a data processing device, comprising:
a carrier object and information obtaining unit for obtaining carrier object and target watermark information;
an embedded region specification information determining unit configured to determine embedded region specification information of target watermark information according to image features of the carrier object;
and the target watermark information embedding unit is used for embedding the target watermark information into the brightness channel and the two chromaticity channels of the carrier object according to the specification information of the embedding region.
The present application also provides an electronic device including:
a processor;
a memory for storing a program of a data processing method, wherein after the device is powered on and runs the program of the data processing method through the processor, the following steps are performed:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to the image characteristics of the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
The present application also provides a storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to the image characteristics of the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
The application also provides a data processing device, comprising:
a carrier object obtaining unit for obtaining a carrier object containing watermark information;
An extraction region determining unit, configured to determine an extraction region containing watermark information in the carrier object by using a manner of dividing the carrier object into regions in multiple dimensions;
and the watermark information extraction unit is used for extracting watermark information from the extraction area.
The present application also provides an electronic device including:
a processor;
a memory for storing a program of a data processing method, the apparatus, after powering on and running the program of the data processing method by the processor, performing the steps of:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner;
watermark information is extracted from the extraction area.
The present application also provides a storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner;
watermark information is extracted from the extraction area.
Compared with the prior art, the application has the following advantages:
the application provides a data processing method, in which the embedded region specification information of the target watermark information is determined according to the image characteristics of the carrier object, and the target watermark information is embedded into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information. Because the embedded region specification information takes the image characteristics into account, different embedded region specifications are selected for carrier objects with different image characteristics, so the method better adapts the watermark robustness to carrier objects with different image characteristics.
Drawings
Fig. 1A is a schematic diagram of a scene embodiment provided in the first embodiment of the present application.
Fig. 1 is a flowchart of a data processing method according to a first embodiment of the present application.
Fig. 2 is a flowchart of a data processing method according to a second embodiment of the present application.
Fig. 3 is a schematic diagram of a data processing apparatus according to a third embodiment of the present application.
Fig. 4 is a schematic diagram of an electronic device according to a fourth embodiment of the present application.
Fig. 5 is a schematic diagram of a data processing apparatus according to a sixth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
In order to more clearly show the present application, a simple description is first given of an application scenario of the data processing method provided in the first embodiment of the present application.
The data processing method provided by the first embodiment of the present application can be applied to a scenario where a client interacts with a server, as shown in fig. 1A. When target watermark information needs to be embedded into a carrier video file, the client typically establishes a connection with the server first. After the connection is established, the client sends the carrier video file and the target watermark information to the server. After receiving them, the server determines the target carrier video frames in the carrier video file into which the target watermark information is to be embedded; then determines the embedded region specification information of the target watermark information according to the image characteristics of the target carrier video frames; and finally embeds the target watermark information into the luminance channel and the two chrominance channels of the target carrier video frames according to the embedded region specification information, generating a watermarked video file. The server provides the watermarked video file to the client, and the client receives the video file embedded with the target watermark information.
A first embodiment of the present application provides a data processing method, which is described below with reference to fig. 1.
As shown in fig. 1, in step S101, a carrier object and target watermark information are obtained.
The carrier object refers to a carrier image into which the target watermark information is to be embedded. The carrier image may be a moving image or a still image; for example, it may be a moving image in GIF (Graphics Interchange Format) format or a still image in JPEG (Joint Photographic Experts Group) format. For example, when the copyright owner of an image needs to distribute the content to a plurality of partners, a different watermark is embedded for each partner, so that when piracy occurs, the image (the carrier object) can be traced back to the partner from which it leaked. In addition, the carrier image may also be a video frame in a carrier video. The carrier video may be a physical video file, for example a video file stored on a remote server for local downloading and playing; it may also be in the form of streaming media, for example a video stream provided by an online video-on-demand platform or an online live broadcast platform that can be played directly; furthermore, the carrier video may be video in AR or VR form, or stereoscopic video. As technology advances, the carrier video may also be video in other formats and forms, which is not specifically limited here.
The target watermark information refers to additional information added to the carrier object. The target watermark information may be a bit sequence of a predetermined number of bits. For example, copyright information may be added to the carrier object as a watermark to deter piracy.
As shown in fig. 1, in step S102, embedded region specification information of target watermark information is determined according to image characteristics of the carrier object.
The embedded region specification information may refer to the size of the embedding block occupied by one binary bit of the watermark information embedded in the carrier object, and may be expressed as a resolution. For example, for a carrier object with a resolution of 1920×1080, the embedded region specification information may be expressed as 3×3.
The determining the embedded region specification information of the target watermark information according to the image characteristics of the carrier object comprises: determining the embedded region specification information of the target watermark information according to at least one of the following factors: the resolution of the carrier object and the image texture characteristics of a preset embedded region.
When the embedded region specification size of the target watermark information is determined according to the resolution of the carrier object, the larger the resolution of the carrier object, the larger the determined embedded region specification; the smaller the resolution, the smaller the specification. For example, for a carrier object resolution of 1280×720, the embedding region specification may be chosen as 2×2; for a resolution of 1920×1080, it may be 3×3.
In addition to determining the embedded region specification information of the target watermark information according to the resolution of the carrier object, the embedded region specification information may be determined according to a variance or standard deviation of a preset embedded region. An example of determining the embedded region specification information of the target watermark information based on the variance or standard deviation of the preset embedded region is described below.
1. The carrier video is first converted into the three channels Y, U, and V, and T consecutive frames are taken; for example, T may be 5. The Y channel of each frame is divided into blocks of size W×H according to the video resolution; for example, 1920×1080 video may be divided into 24×24 blocks, 960×540 video into 12×12 blocks, and so on.
2. Each W×H block is divided into 16 blocks of size (W/4)×(H/4).
3. According to the texture characteristics of each (W/4)×(H/4) block (the (W/4)×(H/4) block is the preset embedded region), the embedded region specification is selected adaptively. For example, the block may be high-pass filtered with the convolution kernel f = [0, -1, 0; -1, 4, -1; 0, -1, 0]/4, the variance or standard deviation of the filtering result is then calculated, and the embedded region specification is selected according to the range in which the variance or standard deviation falls. Generally, a larger variance or standard deviation indicates richer image texture and a larger selected embedded region specification, whereas smoother texture leads to a smaller specification. For example, if the variance is less than 100, the embedding region specification is chosen as 2×2; if the variance is between 100 and 200, it is 3×3; if the variance is greater than 200, it is 4×4; and so on.
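The adaptive selection in steps 1–3 can be illustrated with a short sketch. The following minimal example is written in Python using numpy and scipy (the patent prescribes no language or library); the function name, threshold values, and returned sizes simply mirror the example figures above and are illustrative assumptions, not the definitive implementation.

```python
import numpy as np
from scipy.ndimage import convolve

# High-pass kernel f = [0, -1, 0; -1, 4, -1; 0, -1, 0] / 4 from step 3 above.
HIGH_PASS = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=np.float32) / 4.0

def select_embedding_specification(block: np.ndarray) -> tuple:
    """Choose the embedding-region specification for one preset (W/4)x(H/4)
    block of the Y channel from the variance of its high-pass response."""
    response = convolve(block.astype(np.float32), HIGH_PASS, mode="reflect")
    var = float(response.var())
    if var < 100:       # smooth texture -> smaller embedding region
        return (2, 2)
    if var < 200:       # moderate texture
        return (3, 3)
    return (4, 4)       # rich texture -> larger embedding region
```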
As shown in fig. 1, in step S104, the target watermark information is embedded into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information.
The luminance channel may refer to a Y channel of a YUV color space, and the two chrominance channels may refer to a U channel and a V channel of the YUV color space, respectively.
The first embodiment of the present application further includes:
determining an embedded addition and subtraction coefficient of an embedded region according to the minimum perceived difference of a preset embedded region;
the embedding the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information includes:
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the specification information of the embedded region and the embedded addition and subtraction coefficients.
The minimum perceived difference, JND (Just Noticeable Difference), describes the maximum modification amplitude allowed for each pixel of the preset region without the change being easily perceived.
The embedded addition and subtraction coefficient may be the minimum perceived difference multiplied by a predefined coefficient β; for example, with β = 2, the embedded addition and subtraction coefficient is 2×JND.
In a specific embedding, taking a carrier object of T = 5 consecutive frames as an example: to embed the information bit 1, the coefficient is subtracted from (or added to) each pixel value of the embedding region in the first T/2 frames and added to (or subtracted from) it in the later T/2 frames; to embed the information bit 0, the coefficient is added to (or subtracted from) each pixel value of the embedding region in the first T/2 frames and subtracted from (or added to) it in the later T/2 frames.
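For illustration, the add/subtract pattern across the T frames can be sketched as follows (Python/numpy assumed; the function name, the (y0, y1, x0, x1) region format, and the decision to leave the middle frame untouched are assumptions not stated in the text):

```python
import numpy as np

def embed_bit(frames_y, region, bit, sigma):
    """Temporal add/subtract embedding sketch.
    frames_y: list of T (odd, e.g. 5) Y-channel frames as uint8 numpy arrays;
    region: (y0, y1, x0, x1) bounds of one embedding region;
    sigma: embedding addition/subtraction coefficient (beta * JND)."""
    T = len(frames_y)
    y0, y1, x0, x1 = region
    for i in range(T):
        if i == T // 2:
            continue                      # middle frame: assumed untouched
        first_half = i < T // 2
        # bit 1: subtract in the first T/2 frames, add in the later T/2 frames;
        # bit 0: the opposite, following the scheme in the paragraph above.
        sign = -1 if (bit == 1) == first_half else 1
        block = frames_y[i][y0:y1, x0:x1].astype(np.int16) + sign * sigma
        frames_y[i][y0:y1, x0:x1] = np.clip(block, 0, 255).astype(np.uint8)
```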
As an implementation method, the first embodiment of the present application further includes:
and scrambling the target watermark information, and embedding the scrambled target watermark information into a brightness channel and two chromaticity channels of the carrier object.
As an implementation method, the first embodiment of the present application further includes:
and adding a check value of the target watermark information into the target watermark information.
To improve security, the actually embedded information may further include a check value of the target watermark information; for example, the check value may be computed with a hash function. Error correction and redundancy information are then added, where the error correction code may be BCH, RS, TURBO, or a similar code, and the redundancy may repeat the information R times. The result is scrambled with a Logistic chaotic scrambling algorithm, and a watermark information header is finally prepended to form the actually embedded information. The parameters of the Logistic chaotic scrambling algorithm are generated from key information; the watermark can only be extracted successfully when the parameters are generated from the corresponding key, so a person without the key can neither extract the watermark information nor forge the watermark.
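As an illustration only, a key-driven Logistic-map scrambling step might look like the sketch below (Python assumed; the seed derivation via SHA-256, the parameter mu = 3.99, and the burn-in length are illustrative assumptions, since the patent only states that the parameters are generated from key information):

```python
import hashlib

def logistic_permutation(n, key, mu=3.99, burn_in=100):
    """Build a scrambling permutation of length n from a Logistic chaotic map
    x_{k+1} = mu * x_k * (1 - x_k), seeded from the key."""
    x = (int(hashlib.sha256(key.encode()).hexdigest(), 16) % 10**8) / 10**8
    x = min(max(x, 1e-6), 1 - 1e-6)       # keep the seed strictly inside (0, 1)
    for _ in range(burn_in):
        x = mu * x * (1 - x)
    values = []
    for _ in range(n):
        x = mu * x * (1 - x)
        values.append(x)
    # Sorting the chaotic sequence yields a key-dependent permutation.
    return sorted(range(n), key=lambda i: values[i])

def scramble(bits, key):
    perm = logistic_permutation(len(bits), key)
    return [bits[p] for p in perm]
```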
According to the first embodiment of the present application, different embedded region specifications are selected for carrier objects with different image characteristics, so the watermark robustness is better adapted to carrier objects with different image characteristics.
A second embodiment of the present application provides a data processing method, which is described below with reference to fig. 2.
As shown in fig. 2, in step S201, a carrier object containing watermark information is obtained.
The carrier object is a carrier image containing watermark information. The carrier image may be a moving image or a still image; for example, it may be a moving image in GIF (Graphics Interchange Format) format or a still image in JPEG (Joint Photographic Experts Group) format. In addition, the carrier image may also be a video frame in a carrier video. The carrier video may be a physical video file, for example a video file stored on a remote server for local downloading and playing; it may also be in the form of streaming media, for example a video stream provided by an online video-on-demand platform or an online live broadcast platform that can be played directly; furthermore, the carrier video may be video in AR or VR form, or stereoscopic video. As technology advances, the carrier video may also be video in other formats and forms, which is not specifically limited here.
The carrier object may be a preset odd number of consecutive carrier video frames.
For example, 5 consecutive carrier video frames are obtained from the video file.
As shown in fig. 2, in step S202, an extraction area including watermark information in the carrier object is determined in such a manner that the carrier object is divided into areas in multiple dimensions.
The method for determining the extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner comprises the following steps:
dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier video frame which is positioned at the first half and is selected from the continuous preset odd number of carrier video frames; the second brightness channel refers to a brightness channel of one carrier video frame positioned at the second half of the carrier video frames selected from the continuous preset odd number of carrier video frames;
calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel;
determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference;
Processing the two chrominance channels (the U channel and the V channel respectively) in a processing mode similar to that of the luminance channel (namely processing the first U channel and the second U channel according to the mode of processing the first luminance channel and the second luminance channel and processing the first V channel and the second V channel according to the mode of processing the first luminance channel and the second luminance channel), and determining candidate extraction areas of the two chrominance channels containing watermark information;
and taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information.
Because the carrier object from which the watermark is actually extracted may have undergone various attacks, such as cropping, screen recording, screen capturing, scaling, rotation, brightness change, contrast change, and so on, one partition dimension may work better for a certain attack and resolution while another partition dimension works better for a different attack and resolution. Whether the object has been attacked at all, and the relation between the original resolution and the extraction resolution, are both unknown, so it cannot be known in advance which partition works best at extraction time. The second embodiment of the present application therefore adopts a multi-dimensional region-partitioning mode, combined with separate extraction from the three channels Y, U, and V, which adapts better to various combined attacks and various resolutions and enhances the robustness of watermark extraction.
The first luminance channel and the second luminance channel may be partitioned in multiple dimensions; for example, the first luminance channel may be divided into 4×4 regions, 5×6 regions, 4×7 regions, and so on. The divided regions may be rectangular or other shapes, for example circular, and the second luminance channel is divided in the same manner as the first luminance channel.
The two regions corresponding to the first luminance channel and the second luminance channel are the two regions at corresponding positions within a partition of the same dimension. For example, if the first luminance channel is divided into 4×4 regions, the second luminance channel is also divided into 4×4 regions, and the first region of the first luminance channel and the first region of the second luminance channel are two regions at corresponding positions.
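A minimal sketch of one partition dimension, assuming the channels are 2-D numpy arrays (the helper name and the rectangular grid-slicing scheme are illustrative assumptions):

```python
def partition(channel, rows, cols):
    """Divide one channel (2-D numpy array) into a rows x cols grid of regions,
    one of several partition dimensions (e.g. 4x4, 5x6, 4x7). Corresponding
    regions of the first and second luminance channels then share the same index."""
    h, w = channel.shape
    return [channel[r * h // rows:(r + 1) * h // rows,
                    c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

# Example: pair the corresponding regions of the two luminance channels
# (y_first and y_second are assumed to hold the two Y channels):
# pairs = list(zip(partition(y_first, 4, 4), partition(y_second, 4, 4)))
```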
The determining whether the two corresponding regions are candidate extraction regions according to the energy difference includes:
judging whether the energy difference is within a preset energy difference threshold range, if so, determining the two corresponding areas as candidate extraction areas;
if not, determining that the two corresponding regions are not candidate extraction regions.
The process of determining whether the corresponding two regions are candidate extraction regions is described below using one scenario.
First, one carrier video frame in the first half of the consecutive preset odd number of carrier video frames is selected as t1, and one carrier video frame in the second half is selected as t2. The Y channels of two corresponding regions in the t1 frame and the t2 frame are operated on by a function f(tx, ty), where f(tx, ty) computes the energy difference between the two corresponding regions; for example, f(tx, ty) may be the Euclidean distance between the two regions, and the resulting energy difference is denoted d. From the embedding step it is known that, if the region is one in which information was embedded, a certain energy difference should exist between the two regions, related to the size and texture characteristics of the region. Since the actual embedded region specification size changes when the video is scaled, the embedded region specification size can be estimated from the video resolution. According to the resolution of the extracted video, the size of the partition region used at embedding time is estimated as W1×H1, that is, the size of each of the 16 blocks divided at embedding time is estimated as (W1/4)×(H1/4). The embedded region specification is then selected for the two corresponding regions according to the method of the first embodiment of the present application, giving an estimated embedded region specification denoted W2×H2; the JND threshold of the region is then calculated and multiplied by the preset coefficient β to obtain the embedded addition and subtraction coefficient, denoted σ. The average embedding intensity per pixel in this region is estimated as e = σ×(W2×H2)/((W1/4)×(H1/4)). The pixel values of the two regions corresponding to t1 and t2 are modified so that their difference is e×2, that is, the region is modified to the estimated average embedding intensity per pixel; the energy difference of the two modified regions is then computed with f(tx, ty) and denoted k, and the threshold range may be set to k/2 to k×2. When d falls within the threshold range k/2 to k×2, the region is a candidate extraction region and may participate in voting; otherwise, the region does not participate in voting.
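The candidate test of this scenario can be sketched as follows, taking f as the Euclidean distance as in the example above (Python/numpy assumed; the function name and argument layout are illustrative assumptions):

```python
import numpy as np

def is_candidate_region(r1, r2, est_embed_size, est_block_size, jnd, beta=2.0):
    """Candidate-extraction-region test sketch.
    r1, r2: corresponding regions of the first/second luminance channel;
    est_embed_size: estimated embedding-region specification (W2, H2);
    est_block_size: estimated preset block size (W1/4, H1/4);
    jnd, beta: give the embedding coefficient sigma = beta * JND."""
    r1 = r1.astype(np.float32)
    r2 = r2.astype(np.float32)
    d = np.linalg.norm(r1 - r2)                 # energy difference d = f(t1, t2)
    sigma = beta * jnd
    # Estimated average embedding intensity per pixel in this region.
    e = sigma * (est_embed_size[0] * est_embed_size[1]) / (est_block_size[0] * est_block_size[1])
    # Reference pair whose per-pixel difference equals e * 2, measured with the same f.
    k = np.linalg.norm(np.full(r1.shape, 2.0 * e, dtype=np.float32))
    return k / 2 <= d <= k * 2                  # preset energy-difference threshold range
```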
As shown in fig. 2, watermark information is extracted from the extraction area in step S203.
The extracting watermark information from the extraction area includes:
obtaining an evaluation result for each candidate extraction region according to the energy difference;
accumulating the evaluation results of the plurality of candidate extraction areas divided by the same division mode;
determining extracted binary information according to a preset extraction mechanism and an accumulated result;
and obtaining watermark information according to the binary information.
For example, for each pair of corresponding partitions of the t1 frame and the t2 frame, if the result of the f(tx, ty) function is positive the vote is 1, if it is negative the vote is -1, and if the region is invalid the vote is 0. The voting results of the W×H regions are then accumulated; if the accumulated result is negative, the extracted binary information is 0, otherwise it is 1.
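A sketch of this voting rule (Python/numpy assumed; the signed mean difference is used here as a stand-in for the sign of the f(tx, ty) result, which the text does not define precisely, and the function names are illustrative):

```python
import numpy as np

def extract_bit(region_pairs, is_candidate):
    """Accumulate +1 / -1 / 0 votes over all corresponding region pairs of one
    partition dimension and decide the extracted binary information."""
    total = 0
    for r1, r2 in region_pairs:
        if not is_candidate(r1, r2):
            continue                                        # invalid region: vote 0
        diff = float(np.mean(r1.astype(np.float32) - r2.astype(np.float32)))
        total += 1 if diff > 0 else -1
    return 0 if total < 0 else 1                            # negative accumulation -> bit 0
```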
The obtaining watermark information according to the binary information comprises the following steps:
according to the binary information, binary watermark sequences of a brightness channel and two chromaticity channels are obtained;
obtaining a plurality of watermark starting positions according to the binary watermark sequence;
watermark information is obtained from a binary watermark sequence of a plurality of watermark starting positions.
In an implementation, obtaining a plurality of watermark start positions according to the binary watermark sequence includes: matching the binary watermark sequences of the Y channel, the U channel, and the V channel respectively against the sequence of the watermark information header. Since the watermark information header is preset, known information, the matching degree can be calculated using cross-correlation. The maximum matching value among the three channels is taken as E1; if E1 is greater than a threshold Q1, the position is judged to be a watermark start position. Otherwise, the sum of the two largest values among the three channels is taken as E2; if E2 is greater than a threshold Q2, the position is judged to be a watermark start position. Otherwise, the sum of the values of all three channels is taken as E3; if E3 is greater than a threshold Q3, the position is judged to be a watermark start position. Judging the watermark information header, that is, the watermark start position, can thus be reinforced by combining any two channels or all three channels, which improves the accuracy of the judgment and further improves the robustness of the watermark.
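The three-level decision can be sketched as follows (Python assumed; the function name and the way per-channel matching degrees are passed in are illustrative assumptions):

```python
def is_watermark_start(match_y, match_u, match_v, q1, q2, q3):
    """Decide whether one candidate position is a watermark start position,
    given the per-channel header matching degrees (cross-correlation) and the
    thresholds Q1, Q2, Q3 described above."""
    scores = sorted([match_y, match_u, match_v], reverse=True)
    if scores[0] > q1:                              # E1: single strongest channel
        return True
    if scores[0] + scores[1] > q2:                  # E2: best two channels combined
        return True
    return scores[0] + scores[1] + scores[2] > q3   # E3: all three channels combined
```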
The obtaining watermark information according to the binary watermark sequences of the watermark starting positions comprises the following steps:
changing bits with 0 values in the plurality of binary watermark sequence information into bits with-1 values to obtain a plurality of second watermark sequences;
watermark information is obtained according to the binary watermark sequence and a plurality of second watermark sequences.
The obtaining watermark information according to the binary watermark sequence and the plurality of second watermark sequences comprises:
obtaining a weight corresponding to each second watermark sequence;
obtaining a weighted accumulated value of the plurality of second watermark sequences according to the weights;
obtaining a third watermark sequence according to the accumulated value;
and obtaining watermark information according to the third watermark sequence.
The obtaining the weight corresponding to each second watermark sequence includes:
obtaining the matching degree of watermark information heads of a plurality of second watermark sequences;
and obtaining the weight corresponding to each second watermark sequence according to the matching degree.
The obtaining the weight corresponding to each second watermark sequence includes:
obtaining an average value of bits of the binary watermark sequence;
calculating Euclidean distances between the plurality of second watermark sequences and the average value;
and obtaining the weight corresponding to each second watermark sequence according to the Euclidean distance.
The obtaining the weight corresponding to each second watermark sequence includes:
the weight of each second watermark sequence is set to 1.
At embedding time, a redundant multi-round embedding mode is adopted, for example K rounds of embedding. Therefore, the watermark start positions of K' watermark segments can be obtained from the binary watermark sequence; because attacks make the extraction of each individual segment less than ideal, the watermark is extracted by combining the K' segments of information. The Y channel is taken as an example below; the U and V channels are operated on in the same way as the Y channel, that is, the three channels can each be extracted by combination. Let k1 be the binary watermark sequence corresponding to the first watermark start position, k2 the binary watermark sequence corresponding to the second watermark start position, and so on, where k1, k2 represent the extracted 0/1 information sequences. The 0s in each piece of information are changed to -1, that is, each is converted into a -1/1 information sequence (a second watermark sequence), denoted k1', k2', and so on. The following three combined extraction modes may be adopted, as illustrated in the sketch after this list.
a. Since the watermark information header is preset, known information, the weight can be calculated according to the matching degree of the watermark information header of each of the K' segments; for example, the weight is set equal to the matching degree, with the weight of the first segment denoted a1, the weight of the second segment a2, and so on. The weighted accumulated value sumA = a1×k1' + a2×k2' + … is then calculated, a new 0/1 sequence (the third watermark sequence) is obtained according to the sign of each accumulated value, and the watermark information is obtained from the third watermark sequence.
b. The average value of the information, that is, the average of each bit over k1, k2, …, is calculated; the Euclidean distance between each of the K' segments and the average is then calculated, and the weights b1, b2, b3, … are derived from it, for example by setting the weight bi to one tenth of the reciprocal of the Euclidean distance of segment ki. The weighted accumulated value sumB = b1×k1' + b2×k2' + … is then calculated, a new 0/1 sequence (the third watermark sequence) is obtained according to the sign of each accumulated value, and the watermark information is obtained from the third watermark sequence.
c. It is also possible to directly calculate the sum of the segments, that is, to set every weight to 1, calculate the accumulated value sumC = k1' + k2' + …, obtain a new 0/1 sequence (the third watermark sequence) according to the sign of each accumulated value, and obtain the watermark information from the third watermark sequence.
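The three modes a, b, and c differ only in how the weights are chosen; the accumulation itself can be sketched as below (Python assumed; the data layouts of segments and weights are illustrative, and mapping an accumulated value of exactly zero to bit 0 is an assumption):

```python
def combine_segments(segments, weights=None):
    """Combine K' extracted watermark segments into one third watermark sequence.
    segments: list of extracted 0/1 sequences, one per watermark start position;
    weights: per-segment weights (header matching degree for mode a, distance-based
    for mode b, or all 1 for mode c)."""
    if weights is None:
        weights = [1.0] * len(segments)      # mode c: every weight equal to 1
    length = len(segments[0])
    combined = []
    for i in range(length):
        acc = sum(w * (1 if seg[i] == 1 else -1)   # 0 bits mapped to -1 (second watermark sequence)
                  for w, seg in zip(weights, segments))
        combined.append(1 if acc > 0 else 0)        # sign of the weighted accumulation
    return combined
```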
This concludes the description of the second embodiment of the present application. When extracting the watermark, the data processing method provided by the second embodiment adopts multi-dimensional partitioning with per-channel voting extraction, which improves the adaptability of extraction from carrier objects under various attack conditions. Meanwhile, an energy-difference threshold range is determined according to the size and texture characteristics of the preset region, and regions that do not conform are excluded according to this threshold, which improves the robustness of watermark extraction. In addition, computing the weights of multiple rounds of watermark information in different ways further improves the robustness of watermark extraction.
A third embodiment of the present application provides a data processing apparatus corresponding to the data processing method provided in the first embodiment of the present application.
As shown in fig. 3, the apparatus includes:
a carrier object and information obtaining unit 301 for obtaining carrier object and target watermark information;
an embedded region specification information determining unit 302, configured to determine embedded region specification information of target watermark information according to image features of the carrier object;
and a target watermark information embedding unit 303, configured to embed the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information.
Optionally, the apparatus further includes: the embedded addition and subtraction coefficient determining unit is used for determining the embedded addition and subtraction coefficient of the target watermark information according to the minimum perceived difference of the preset embedded area;
the embedding the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information includes:
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the specification information of the embedded region and the embedded addition and subtraction coefficients.
Optionally, the embedded region specification information determining unit is specifically configured to:
and determining the embedded region specification information of the target watermark information according to at least one of the following factors: the resolution of the carrier object and the image texture characteristics of a preset embedded region.
Optionally, the apparatus further includes: the processing unit is arranged to be scrambled,
the target watermark information is used for scrambling the target watermark information, and the scrambled target watermark information is embedded into the brightness channel and the two chromaticity channels of the carrier object.
It should be noted that, for the detailed description of the apparatus provided in the third embodiment of the present application, reference may be made to the description related to the first embodiment of the present application, which is not repeated here.
A fourth embodiment of the present application provides an electronic device corresponding to the data processing method provided in the first embodiment of the present application.
As shown in fig. 4, the electronic device:
a processor 401;
a memory 402 for storing a program of a data processing method, wherein after the device is powered on and runs the program of the data processing method through the processor, the following steps are performed:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to the image characteristics of the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
Optionally, the electronic device further performs the following steps:
determining an embedded addition and subtraction coefficient of an embedded region according to the minimum perceived difference of a preset embedded region;
the embedding the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information includes:
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the specification information of the embedded region and the embedded addition and subtraction coefficients.
Optionally, the determining the embedded region specification information of the target watermark information according to the image feature of the carrier object includes:
and determining the embedded region specification information of the target watermark information according to at least one of the following factors: the resolution of the carrier object and the image texture characteristics of a preset embedded region.
Optionally, the electronic device further performs the following steps:
and scrambling the target watermark information, and embedding the scrambled target watermark information into a brightness channel and two chromaticity channels of the carrier object.
It should be noted that, for the detailed description of the electronic device provided in the fourth embodiment of the present application, reference may be made to the description related to the first embodiment of the present application, which is not repeated here.
Corresponding to the data processing method provided by the first embodiment of the present application, a fifth embodiment of the present application provides a storage device storing a program of the data processing method, the program being executed by a processor to perform the steps of:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to the image characteristics of the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
It should be noted that, for the detailed description of the storage device provided in the fifth embodiment of the present application, reference may be made to the description related to the first embodiment of the present application, which is not repeated here.
A sixth embodiment of the present application provides an apparatus corresponding to the data processing method provided by the second embodiment of the present application.
As shown in fig. 5, the apparatus includes:
a carrier object obtaining unit 501 for obtaining a carrier object containing watermark information;
an extraction area determining unit 502, configured to determine an extraction area containing watermark information in the carrier object in a manner of performing multidimensional division on the carrier object;
a watermark information extraction unit 503 for extracting watermark information from the extraction area.
Optionally, the carrier object is a preset odd number of carrier video frames in succession.
Optionally, the extraction area determining unit is specifically configured to:
dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier video frame which is positioned at the first half and is selected from the continuous preset odd number of carrier video frames; the second brightness channel refers to a brightness channel of one carrier video frame positioned at the second half of the carrier video frames selected from the continuous preset odd number of carrier video frames;
Calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel;
determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference;
processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information;
and taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information.
Optionally, the extraction area determining unit is specifically configured to:
judging whether the energy difference is within a preset energy difference threshold range, if so, determining the two corresponding areas as candidate extraction areas;
if not, determining that the two corresponding regions are not candidate extraction regions.
Optionally, the watermark information extraction unit is specifically configured to:
obtaining an evaluation result for each candidate extraction region according to the energy difference;
accumulating the evaluation results of the plurality of candidate extraction areas divided by the same division mode;
determining extracted binary information according to a preset extraction mechanism and an accumulated result;
And obtaining watermark information according to the binary information.
Optionally, the watermark information extraction unit is specifically configured to:
according to the binary information, binary watermark sequences of a brightness channel and two chromaticity channels are obtained;
obtaining a plurality of watermark starting positions according to the binary watermark sequence;
watermark information is obtained from a binary watermark sequence of a plurality of watermark starting positions.
Optionally, the watermark information extraction unit is specifically configured to:
changing bits with 0 values in the plurality of binary watermark sequence information into bits with-1 values to obtain a plurality of second watermark sequences;
watermark information is obtained according to the binary watermark sequence and a plurality of second watermark sequences.
Optionally, the watermark information extraction unit is specifically configured to:
obtaining a weight corresponding to each second watermark sequence;
obtaining a weighted accumulated value of the plurality of second watermark sequences according to the weights;
obtaining a third watermark sequence according to the accumulated value;
and obtaining watermark information according to the third watermark sequence.
Optionally, the watermark information extraction unit is specifically configured to:
obtaining the matching degree of watermark information heads of a plurality of second watermark sequences;
and obtaining the weight corresponding to each second watermark sequence according to the matching degree.
Optionally, the watermark information extraction unit is specifically configured to:
obtaining an average value of bits of the binary watermark sequence;
calculating Euclidean distances between the plurality of second watermark sequences and the average value;
and obtaining the weight corresponding to each second watermark sequence according to the Euclidean distance.
Optionally, the watermark information extraction unit is specifically configured to:
the weight of each second watermark sequence is set to 1.
It should be noted that, for the detailed description of the apparatus provided in the sixth embodiment of the present application, reference may be made to the related description of the second embodiment of the present application, which is not repeated here.
Corresponding to a data processing method provided by the second embodiment of the present application, a seventh embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing a program of a data processing method, the apparatus, after powering on and running the program of the data processing method by the processor, performing the steps of:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner;
watermark information is extracted from the extraction area.
Optionally, the carrier object is a preset odd number of carrier video frames in succession.
Optionally, the determining the extraction area containing watermark information in the carrier object by using a mode of dividing the carrier object into areas in multiple dimensions includes:
dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier video frame which is positioned at the first half and is selected from the continuous preset odd number of carrier video frames; the second brightness channel refers to a brightness channel of one carrier video frame positioned at the second half of the carrier video frames selected from the continuous preset odd number of carrier video frames;
calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel;
determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference;
processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information;
and taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information.
Optionally, the determining whether the corresponding two regions are candidate extraction regions including watermark information according to the energy difference includes:
judging whether the energy difference is within a preset energy difference threshold range, if so, determining the two corresponding areas as candidate extraction areas;
if not, determining that the two corresponding regions are not candidate extraction regions.
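Purely as an illustration of the region-division and energy-difference test described above, the following Python sketch splits the luminance channels of a first-half frame and a second-half frame into fixed-size blocks and keeps a block pair as a candidate extraction region when the magnitude of their energy difference lies within a threshold range. The block size, the use of the mean block intensity as the energy measure, and the threshold values are assumptions for illustration, not details taken from the embodiment.

```python
import numpy as np

def candidate_regions(y_first, y_second, block=32, t_low=0.5, t_high=50.0):
    """Find candidate watermark extraction blocks from two luminance channels.

    y_first, y_second: 2-D arrays holding the luminance channel of a frame from
    the first half and a frame from the second half of the frame group.
    Returns a list of (row, col, signed_energy_difference) tuples.
    """
    h, w = y_first.shape
    candidates = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = y_first[r:r + block, c:c + block].astype(np.float64)
            b = y_second[r:r + block, c:c + block].astype(np.float64)
            # Assumed "energy" measure: mean intensity of the block.
            diff = a.mean() - b.mean()
            # Keep the pair only if the difference falls in the preset range.
            if t_low <= abs(diff) <= t_high:
                candidates.append((r, c, diff))
    return candidates
```

Under the same assumptions, the two chrominance channels would be passed through the same routine to obtain their own candidate extraction regions.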
Optionally, the extracting watermark information from the extraction area includes:
obtaining an evaluation result for each candidate extraction region according to the energy difference;
accumulating the evaluation results of the plurality of candidate extraction areas divided by the same division mode;
determining extracted binary information according to a preset extraction mechanism and an accumulated result;
and obtaining watermark information according to the binary information.
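A minimal sketch of the accumulation step above, continuing the assumptions of the previous sketch: the signed energy differences of all candidate regions produced by the same division mode are summed, and the sign of the accumulated result is taken as the extracted bit. Treating a simple sign test as the "preset extraction mechanism" is an assumption for illustration only.

```python
def extract_bit(signed_diffs):
    """signed_diffs: signed energy differences of candidate regions sharing one division mode."""
    accumulated = sum(signed_diffs)
    return 1 if accumulated >= 0 else 0

def extract_bits(diffs_per_division):
    """diffs_per_division: one list of signed differences per division mode; returns the binary information."""
    return [extract_bit(d) for d in diffs_per_division]
```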
Optionally, the obtaining watermark information according to the binary information includes:
according to the binary information, binary watermark sequences of a brightness channel and two chromaticity channels are obtained;
obtaining a plurality of watermark starting positions according to the binary watermark sequence;
watermark information is obtained from a binary watermark sequence of a plurality of watermark starting positions.
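The embodiment does not spell out how the watermark starting positions are located, so the sketch below assumes that the embedded watermark is repeated and begins with a known synchronization header; every index at which the header matches the recovered binary sequence is treated as a starting position. Both the header pattern and the search strategy are hypothetical.

```python
SYNC_HEADER = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical watermark information head

def find_start_positions(bits, header=SYNC_HEADER):
    """bits: recovered 0/1 list; returns every index at which the assumed header occurs."""
    n = len(header)
    return [i for i in range(len(bits) - n + 1) if bits[i:i + n] == header]
```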
Optionally, the obtaining watermark information according to the binary watermark sequences of the plurality of watermark starting positions includes:
changing bits with a value of 0 in the plurality of binary watermark sequences into bits with a value of -1 to obtain a plurality of second watermark sequences;
watermark information is obtained according to the binary watermark sequence and a plurality of second watermark sequences.
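A one-function sketch of the bit remapping above: bits with a value of 0 become -1 so that, in the later weighted accumulation, disagreeing sequences cancel rather than merely dilute each other.

```python
def to_bipolar(bits):
    """Map a 0/1 watermark sequence to a -1/+1 ("second") watermark sequence."""
    return [1 if b == 1 else -1 for b in bits]
```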
Optionally, the obtaining watermark information according to the binary watermark sequence and the plurality of second watermark sequences includes:
obtaining a weight corresponding to each second watermark sequence;
obtaining a weighted accumulated value of the plurality of second watermark sequences according to the weights;
obtaining a third watermark sequence according to the accumulated value;
and obtaining watermark information according to the third watermark sequence.
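A sketch of the weighted fusion described above: each -1/+1 second watermark sequence votes position by position with its weight, the votes are accumulated, and the sign of each accumulated value yields the corresponding bit of the third watermark sequence. Mapping a non-negative accumulated value back to bit 1 is an assumption for illustration.

```python
def fuse_sequences(second_sequences, weights):
    """Accumulate weighted -1/+1 sequences and threshold them into a 0/1 third sequence."""
    length = len(second_sequences[0])
    accumulated = [0.0] * length
    for seq, w in zip(second_sequences, weights):
        for i, v in enumerate(seq):
            accumulated[i] += w * v
    return [1 if a >= 0 else 0 for a in accumulated]
```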
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
obtaining the matching degree of watermark information heads of a plurality of second watermark sequences;
and obtaining the weight corresponding to each second watermark sequence according to the matching degree.
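A sketch of the head-based weighting above: the fraction of head bits that a second watermark sequence reproduces at its start is used directly as that sequence's weight. The header is the hypothetical synchronization pattern assumed in the earlier sketch, and using the raw match ratio as the weight is likewise an assumption.

```python
SYNC_HEADER = [1, 0, 1, 1, 0, 0, 1, 0]  # same hypothetical watermark information head as above

def header_weight(second_sequence, header=SYNC_HEADER):
    """Matching degree of a -1/+1 sequence's leading bits against the assumed 0/1 head."""
    matches = sum(1 for v, h in zip(second_sequence, header) if (v > 0) == (h == 1))
    return matches / len(header)
```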
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
obtaining an average value of bits of the binary watermark sequence;
calculating Euclidean distances between the plurality of second watermark sequences and the average value;
and obtaining the weight corresponding to each second watermark sequence according to the Euclidean distance.
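A sketch of the distance-based weighting above: the per-bit average over the recovered binary watermark sequences serves as a reference vector, and each sequence receives a weight that shrinks as its Euclidean distance from that reference grows. For simplicity the distances are computed on the 0/1 sequences rather than on the -1/+1 second sequences, and the reciprocal mapping from distance to weight is an assumption for illustration.

```python
import numpy as np

def distance_weights(binary_sequences):
    """binary_sequences: equal-length 0/1 sequences recovered from the channels; returns one weight per sequence."""
    seqs = np.asarray(binary_sequences, dtype=np.float64)
    mean_bits = seqs.mean(axis=0)                     # average value of the bits
    dists = np.linalg.norm(seqs - mean_bits, axis=1)  # Euclidean distance to the average
    return 1.0 / (1.0 + dists)                        # assumed distance-to-weight mapping
```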
Optionally, the obtaining the weight corresponding to each second watermark sequence includes:
the weight of each second watermark sequence is set to 1.
It should be noted that, for the detailed description of the electronic device provided in the seventh embodiment of the present application, reference may be made to the related description of the second embodiment of the present application, which is not repeated here.
Corresponding to the data processing method provided by the second embodiment of the present application, an eighth embodiment of the present application provides a storage device storing a program of the data processing method; the program, when executed by a processor, performs the following steps:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner;
watermark information is extracted from the extraction area.
It should be noted that, for the detailed description of the storage device provided in the eighth embodiment of the present application, reference may be made to the related description of the second embodiment of the present application, which is not repeated here.
While the application has been described above in terms of preferred embodiments, these embodiments are not intended to limit the application; it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the application, which is defined by the appended claims.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Claims (20)
1. A method of data processing, comprising:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to image characteristics of the carrier object, wherein the image characteristics of the carrier object comprise at least one factor of the following: the resolution of the carrier object and the image texture characteristics of a preset embedding area, wherein the specification information of the embedding area is the size of an embedding block occupied when the target watermark information is embedded into the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
2. The method as recited in claim 1, further comprising:
determining an embedded addition and subtraction coefficient of an embedded region according to the minimum perceived difference of a preset embedded region;
the embedding the target watermark information into the luminance channel and the two chrominance channels of the carrier object according to the embedded region specification information includes:
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the specification information of the embedded region and the embedded addition and subtraction coefficients.
3. The method as recited in claim 1, further comprising:
and scrambling the target watermark information, and embedding the scrambled target watermark information into a brightness channel and two chromaticity channels of the carrier object.
4. A method of data processing, comprising:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner comprises the following steps: dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier object which is positioned at the first half and is selected from the carrier objects; the second brightness channel refers to a brightness channel of one carrier object which is positioned at the second half and is selected from the carrier objects; calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel; determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference; processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information; taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information;
watermark information is extracted from the extraction area.
5. The method of claim 4, wherein the carrier object is a predetermined odd number of consecutive carrier video frames.
6. The method of claim 5, wherein the determining the extraction area of the carrier object containing watermark information by multi-dimensionally dividing the carrier object comprises:
dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier video frame which is positioned at the first half and is selected from the continuous preset odd number of carrier video frames; the second brightness channel refers to a brightness channel of one carrier video frame positioned at the second half of the carrier video frames selected from the continuous preset odd number of carrier video frames;
calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel;
determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference;
processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information;
and taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information.
7. The method of claim 6, wherein the determining whether the corresponding two regions are candidate extraction regions containing watermark information based on the energy difference comprises:
judging whether the energy difference is within a preset energy difference threshold range, if so, determining the two corresponding areas as candidate extraction areas;
if not, determining that the two corresponding regions are not candidate extraction regions.
8. The method of claim 6, wherein extracting watermark information from the extraction area comprises:
obtaining an evaluation result for each candidate extraction region according to the energy difference;
accumulating the evaluation results of the plurality of candidate extraction areas divided by the same division mode;
determining extracted binary information according to a preset extraction mechanism and an accumulated result;
and obtaining watermark information according to the binary information.
9. The method of claim 8, wherein obtaining watermark information from the binary information comprises:
according to the binary information, binary watermark sequences of a brightness channel and two chromaticity channels are obtained;
obtaining a plurality of watermark starting positions according to the binary watermark sequence;
watermark information is obtained from a binary watermark sequence of a plurality of watermark starting positions.
10. The method of claim 9, wherein obtaining watermark information from a binary watermark sequence of a plurality of watermark start positions comprises:
changing bits with a value of 0 in the plurality of binary watermark sequences into bits with a value of -1 to obtain a plurality of second watermark sequences;
watermark information is obtained according to the binary watermark sequence and a plurality of second watermark sequences.
11. The method of claim 10, wherein the obtaining watermark information from the binary watermark sequence and the plurality of second watermark sequences comprises:
obtaining a weight corresponding to each second watermark sequence;
obtaining a weighted accumulated value of the plurality of second watermark sequences according to the weights;
obtaining a third watermark sequence according to the accumulated value;
and obtaining watermark information according to the third watermark sequence.
12. The method of claim 11, wherein obtaining the weight corresponding to each second watermark sequence comprises:
obtaining the matching degree of watermark information heads of a plurality of second watermark sequences;
and obtaining the weight corresponding to each second watermark sequence according to the matching degree.
13. The method of claim 11, wherein obtaining the weight corresponding to each second watermark sequence comprises:
obtaining an average value of bits of the binary watermark sequence;
calculating Euclidean distances between the plurality of second watermark sequences and the average value;
and obtaining the weight corresponding to each second watermark sequence according to the Euclidean distance.
14. The method of claim 11, wherein obtaining the weight corresponding to each second watermark sequence comprises:
the weight of each second watermark sequence is set to 1.
15. A data processing apparatus, comprising:
a carrier object and information obtaining unit for obtaining carrier object and target watermark information;
an embedded region specification information determining unit, configured to determine embedded region specification information of target watermark information according to an image feature of the carrier object, where the image feature of the carrier object includes at least one factor of: the resolution of the carrier object and the image texture characteristics of a preset embedding area, wherein the specification information of the embedding area is the size of an embedding block occupied when the target watermark information is embedded into the carrier object;
and the target watermark information embedding unit is used for embedding the target watermark information into the brightness channel and the two chromaticity channels of the carrier object according to the specification information of the embedding region.
16. An electronic device, comprising:
a processor;
a memory for storing a program of the data processing method; wherein the electronic device, after being powered on and running the program of the data processing method through the processor, performs the following steps:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to image characteristics of the carrier object, wherein the image characteristics of the carrier object comprise at least one factor of the following: the resolution of the carrier object and the image texture characteristics of a preset embedding area, wherein the specification information of the embedding area is the size of an embedding block occupied when the target watermark information is embedded into the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
17. A storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
obtaining carrier object and target watermark information;
determining embedded region specification information of target watermark information according to image characteristics of the carrier object, wherein the image characteristics of the carrier object comprise at least one factor of the following: the resolution of the carrier object and the image texture characteristics of a preset embedding area, wherein the specification information of the embedding area is the size of an embedding block occupied when the target watermark information is embedded into the carrier object;
and embedding the target watermark information into a brightness channel and two chromaticity channels of the carrier object according to the embedded region specification information.
18. A data processing apparatus, comprising:
a carrier object obtaining unit for obtaining a carrier object containing watermark information;
an extraction area determining unit for determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner, including: dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier object which is positioned at the first half and is selected from the carrier objects; the second brightness channel refers to a brightness channel of one carrier object which is positioned at the second half and is selected from the carrier objects; calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel; determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference; processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information; taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information;
and the watermark information extraction unit is used for extracting watermark information from the extraction area.
19. An electronic device, comprising:
a processor;
a memory for storing a program of the data processing method; wherein the electronic device, after being powered on and running the program of the data processing method through the processor, performs the following steps:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner comprises the following steps: dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier object which is positioned at the first half and is selected from the carrier objects; the second brightness channel refers to a brightness channel of one carrier object which is positioned at the second half and is selected from the carrier objects; calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel; determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference; processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information; taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information;
watermark information is extracted from the extraction area.
20. A storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
obtaining a carrier object containing watermark information;
determining an extraction area containing watermark information in the carrier object by dividing the carrier object into areas in a multi-dimensional manner comprises the following steps: dividing the first brightness channel and the second brightness channel into at least two areas; the first brightness channel refers to a brightness channel of one carrier object which is positioned at the first half and is selected from the carrier objects; the second brightness channel refers to a brightness channel of one carrier object which is positioned at the second half and is selected from the carrier objects; calculating the energy difference of two areas corresponding to the first brightness channel and the second brightness channel; determining whether the two corresponding areas are candidate extraction areas of a brightness channel containing watermark information according to the energy difference; processing the two chrominance channels in a processing mode similar to that of the luminance channel to determine candidate extraction areas of the two chrominance channels containing watermark information; taking the candidate extraction areas of the brightness channel and the candidate extraction areas of the two chromaticity channels as extraction areas containing watermark information;
watermark information is extracted from the extraction area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010197206.3A CN113497908B (en) | 2020-03-19 | 2020-03-19 | Data processing method and device, electronic equipment and storage equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113497908A (en) | 2021-10-12 |
CN113497908B true CN113497908B (en) | 2023-08-25 |
Family
ID=77993516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010197206.3A Active CN113497908B (en) | 2020-03-19 | 2020-03-19 | Data processing method and device, electronic equipment and storage equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113497908B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104053074A (en) * | 2014-06-18 | 2014-09-17 | 河海大学 | Video watermarking method based on depth image and Otsu segmentation |
CN106658021A (en) * | 2016-11-16 | 2017-05-10 | 佛山科学技术学院 | Method for embedding and detecting two types of watermarks of MPEG video |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4164463B2 (en) * | 2003-06-03 | 2008-10-15 | キヤノン株式会社 | Information processing apparatus and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN113497908A (en) | 2021-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Oostveen et al. | Visual hashing of digital video: applications and techniques | |
Sallee | Model-based steganography | |
US20090220070A1 (en) | Video Watermarking | |
US20090252370A1 (en) | Video watermark detection | |
US20090136083A1 (en) | Coefficient Selection for Video Watermarking | |
Byun et al. | Fast and robust watermarking method based on DCT specific location | |
US9639910B2 (en) | System for embedding data | |
US20090226030A1 (en) | Coefficient modification for video watermarking | |
Yao et al. | Content-adaptive reversible visible watermarking in encrypted images | |
US8107669B2 (en) | Video watermarking apparatus in compression domain and method using the same | |
Su et al. | A practical design of digital watermarking for video streaming services | |
CN113497908B (en) | Data processing method and device, electronic equipment and storage equipment | |
CN113395475B (en) | Data processing method and device, electronic equipment and storage equipment | |
Narkedamilly et al. | Discrete multiwavelet–based video watermarking scheme using SURF | |
JP4107063B2 (en) | Encryption information transmission / reception system, transmission / reception method, encryption information embedding program, and encryption information recording apparatus | |
CN114528531A (en) | Data processing method, device and equipment | |
CN113497981B (en) | Data processing method, device and equipment | |
Chen et al. | A robust watermarking scheme for stereoscopic video frames | |
CN114547561A (en) | Data processing method, device and equipment | |
Dittmann et al. | Customer identification for MPEG video based on digital fingerprinting | |
Xie et al. | Detection and localization of image tamper with scalable granularity | |
Jaber et al. | Disparity map based watermarking for 3D-images | |
Van Huyssteen | Comparative evaluation of video watermarking techniques in the uncompressed domain | |
KR20240110212A (en) | Protecting Audio contents by using the audio watermark solution and it's method to create and insert the audio watermark | |
Aboalsamh et al. | An improved steganalysis approach for breaking the F5 algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||