CN101601068A - System for embedding data - Google Patents

System for embedding data

Info

Publication number
CN101601068A
CN101601068A (application CN200880003825.1A)
Authority
CN
China
Prior art keywords
video
frame
color element
coordinate
expressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200880003825.1A
Other languages
Chinese (zh)
Other versions
CN101601068B (en)
Inventor
Z. Geyzel
L. Dorrendorf
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Synamedia Ltd
Original Assignee
NDS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL183841A external-priority patent/IL183841A0/en
Application filed by NDS Ltd filed Critical NDS Ltd
Priority claimed from PCT/IB2008/050104 external-priority patent/WO2008096281A1/en
Publication of CN101601068A publication Critical patent/CN101601068A/en
Application granted granted Critical
Publication of CN101601068B publication Critical patent/CN101601068B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

A method and system for embedding data in a video frame are described. The method comprises: receiving label information; representing the label information as a 2-coordinate vector, denoted ω, the coordinates being denoted α and β respectively, so that ω = (α, β); providing a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, where p = (x, y) and x and y comprise the coordinates of pixel p, the plurality of pixels being represented as triples of color elements, the color elements being denoted R, G, B respectively; and marking the video frame by transforming each pixel of the plurality of pixels according to: R′(p) = R(p) + ⟨p, ω_R⟩; G′(p) = G(p) + ⟨p, ω_G⟩; and B′(p) = B(p) + ⟨p, ω_B⟩, where ⟨p, ω_R⟩ denotes the dot product of p and ω_R, ⟨p, ω_G⟩ denotes the dot product of p and ω_G, and ⟨p, ω_B⟩ denotes the dot product of p and ω_B.

Description

System for embedding data
Technical field
The present invention relates to data embedding systems, and in particular to data embedding systems that take a unique identifier as input.
Background of the invention
With recent advances in Internet content distribution, including peer-to-peer networks and real-time video streaming systems, embedding data in video has become important for tracing the point of distribution of unauthorized content. The point of distribution is often an authorized viewer, for example a cinema where a pirate copy is made with a camcorder, or a set-top-box television decoder whose output is captured and re-encoded into a video file. Once the source has been traced, measures can be taken to prevent further unauthorized distribution.
Embedding signals in video is a field widely explored, both in academic research and in commercial inventions. Hidden watermarks in the compressed (MPEG) domain, visible watermarks appearing on top of the video as bitmaps, and steganographic watermarks are all known in the art.
Digital Watermarking of Visual Data: State of the Art and New Trends, by M. Barni, F. Bartolini and A. Piva (Proceedings of EUSIPCO 2000, the Tenth European Signal Processing Conference, Tampere, Finland, 4-8 September 2000), briefly reviews the state of the art in digital watermarking of visual data. A communication perspective is adopted to identify the main issues in digital watermarking and to present the solutions commonly adopted by the research community. The authors first consider the various schemes for watermark embedding and hiding. The communication channel is then taken into account, and the main research trends in attack modelling are outlined. Particular attention is paid to watermark recovery, owing to its influence on the final reliability of the whole watermarking system.
Multichannel Watermarking of Color Images, by M. Barni, F. Bartolini and A. Piva (IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 3, March 2002), notes that in the image-watermarking field research has mainly concentrated on grayscale watermarking, extension to the color case usually being accomplished either by marking the image luminance or by processing each color channel separately. In that paper, a DCT-domain watermarking technique expressly designed to exploit the characteristics of color images is proposed. The watermark is hidden in the data by modifying a subset of the full-frame DCT coefficients of each color channel. Detection is based on a global correlation measure, computed by taking into account the information conveyed by the three color channels as well as their correlation. To decide whether the image contains the watermark, the correlation value is compared with a threshold. With respect to existing grayscale algorithms, a new threshold-selection scheme is proposed which allows the probability of missed detection to be minimized while ensuring a given probability of false detection. Experimental results and theoretical analysis demonstrate the validity of the new scheme with respect to algorithms operating on image luminance only.
Digital Watermarking for 3D Polygons using Multiresolution Wavelet Decomposition, by Satoshi Kanai, Hiroaki Date and Takeshi Kishinami (available at citeseer.ist.psu.edu/504450.html), notes that much interest has recently been paid to methods for protecting the copyright of digital data and preventing its illegal duplication. In the CAD/CAM and CG fields, however, there is as yet no effective way to protect the copyright of 3D geometric models. As a first step toward solving this problem, the paper introduces a new digital watermarking method for 3D polygonal models. Watermarking is one copyright-protection method, in which an invisible watermark is secretly embedded into the original data. The proposed watermarking method is based on the wavelet transform (WT) and multiresolution representation (MRR) of the polygonal model. The watermark can be embedded in the larger wavelet-coefficient vectors at various resolution levels of the MRR. This makes the embedded watermark imperceptible and invariant to affine transformations, and makes control of the geometric error caused by the watermarking reliable. The paper first discusses the requirements and features of the proposed watermarking method. Second, the mathematical formulations of the WT and MRR of the polygonal model are shown. Third, algorithms for embedding and extracting the watermark are proposed. Finally, the effectiveness of the proposed watermarking method is shown through several simulation results.
United States Patent 7,068,809 to Stach describes a method in which segmentation techniques are used for embedding and detecting digital watermarks in multimedia signals such as images, video and audio. A digital watermark embedder divides a media signal into regions of arbitrary shape based on signal characteristics, such as a similarity measure, texture measure, shape measure, or a measure of luminance or other color-value extrema. The attributes of these regions are then used to adapt an auxiliary signal so that it is more effectively hidden in the media signal. In one exemplary implementation, the segmentation process exploits a model of human perceptibility to group samples of the media signal into contiguous regions based on their similarity. Attributes of the regions, such as their frequency characteristics, are then adapted to the frequency characteristics of a desired watermark signal. One embedding method adjusts a feature of a region to embed elements of the auxiliary signal, such as an error-correction-coded message signal. The detection method re-computes the segmentation, calculates the same features, and maps the feature values to symbols to reconstruct an estimate of the auxiliary signal. The auxiliary signal is then demodulated or decoded to recover the message using error-correction decoding/demodulation operations.
United States Patent 6,950,532 to Schumann et al. describes a visual copyright protection system comprising input content, a disruption processor, and output content. The disruption processor inserts disruptive content into the input content to create the output content, impeding the ability of optical recording devices to produce a useful copy of the output content.
The abstract of Japanese patent JP 11075055 describes a method in which secret information is embedded into a luminance signal, and position information for the secret information is embedded into the corresponding chrominance signal. The method uses an M-sequence, a kind of pseudo-random sequence (PN sequence), for embedding the secret information. The picture signal is divided into blocks of N pixel values, and a pseudo-random sequence of length N is added. This operation is applied to every block of the input picture signal, so that a picture signal in which the secret information has been embedded is formed. The pseudo-random sequence is superimposed on the corresponding chrominance signal at positions overlapping the positions at which it is embedded in the luminance signal. Each scanning line of the chrominance signal is divided into blocks of N picture elements, and a pseudo-random sequence of length N is superimposed. Correlation is calculated in order to decode.
United States Patent Application 2002/0027612 of Brill et al. describes a method for adding a watermark to a video signal representing an image, the method comprising the steps of applying a first watermark function to a first set of pixels of a first frame, and applying the complement of the first watermark function to a second set of pixels of the first frame.
United States Patent 5,832,119 to Rhoads describes a method whereby multi-bit signals steganographically embedded in empirical data, such as image or audio data, are detected, and certain aspects of the operation of related systems are controlled accordingly. One application of the invention is a video playback or recording device that is controlled in accordance with the embedded multi-bit signals so as to limit playback or recording operations. Another application is a photo-duplication kiosk that recognizes certain steganographic markings in an image being copied and interrupts the copying operation.
The following references are also believed to reflect the state of the prior art:
US 6,760,463 to Rhoads;
US 6,721,440 to Reed et al.;
US 5,636,292 to Rhoads;
US 5,768,426 to Rhoads;
US 5,745,604 to Rhoads;
US 6,404,898 to Rhoads;
US 7,058,697 to Rhoads;
US 5,832,119 to Rhoads;
US 5,710,834 to Rhoads;
US 7,020,304 to Alattar et al.;
US 7,068,809 to Stach;
US 6,381,341 to Rhoads;
US 6,950,532 to Schumann et al.;
US 7,035,427 to Rhoads; and
WO 02/07362 of Digimarc Corporation.
The disclosures of all references mentioned above and throughout the present specification, as well as the disclosures of all references mentioned in those references, are hereby incorporated herein by reference.
Summary of the invention
The present invention seeks to provide an improved system and method for embedding data in a target, the target including, but not limited to, digital video. During data embedding, every pixel of each frame in which data is to be embedded undergoes a mathematical transformation of the triple of its three color components (R, G, B), based on the pixel's position on the screen and on the input information. For example and without limiting the generality of the foregoing, the input information comprises a unique owner ID encoded as a two-dimensional vector. During detection of the embedded information, the color-component values of the pixels in each frame, including frames comprising embedded data, are summed to determine a value referred to as the color mass. The embedded information can be extracted by comparing the determined values with expected results, using the equations given herein.
There is thus provided, in accordance with a preferred embodiment of the present invention, a method comprising: receiving label information; representing the label information as a 2-coordinate vector, denoted ω, the coordinates being denoted α and β respectively, so that ω = (α, β); providing a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, where p = (x, y), x and y comprising the coordinates of pixel p, the plurality of pixels being represented as triples of color elements, the color elements being denoted R, G, B respectively; and marking the video frame by transforming each pixel of the plurality of pixels according to R′(p) = R(p) + ⟨p, ω_R⟩, G′(p) = G(p) + ⟨p, ω_G⟩, and B′(p) = B(p) + ⟨p, ω_B⟩, where ⟨p, ω_R⟩ denotes the dot product of p and ω_R, ⟨p, ω_G⟩ denotes the dot product of p and ω_G, and ⟨p, ω_B⟩ denotes the dot product of p and ω_B.
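To make the transformation concrete, the marking step can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: frames are NumPy arrays (one per color element), pixel coordinates are centered symmetrically on the frame (an assumption consistent with the notation introduced for Fig. 2), truncation to the allowed range is omitted, and all function names are invented for illustration.

```python
import numpy as np

def mark_component(C, omega):
    """Apply C'(p) = C(p) + <p, omega> to one color element of a frame.

    C is an H x W array of color values; omega = (alpha, beta) is the
    2-coordinate vector carrying the label information.
    """
    H, W = C.shape
    xs = np.arange(W) - (W - 1) / 2.0  # x coordinate of each column, centered
    ys = np.arange(H) - (H - 1) / 2.0  # y coordinate of each row, centered
    X, Y = np.meshgrid(xs, ys)
    alpha, beta = omega
    return C + alpha * X + beta * Y    # <p, omega> = alpha*x + beta*y

def mark_frame(R, G, B, omega_R, omega_G, omega_B):
    """Mark all three color elements of a frame."""
    return (mark_component(R, omega_R),
            mark_component(G, omega_G),
            mark_component(B, omega_B))
```

The added term ⟨p, ω⟩ is a linear gradient across the frame, which is the kind of gradient Fig. 6 depicts.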
Further, in accordance with a preferred embodiment of the present invention, the label information comprises information identifying a rendering device.
Still further, in accordance with a preferred embodiment of the present invention, the information identifying the rendering device comprises a unique device identifier.
Additionally, in accordance with a preferred embodiment of the present invention, the label information comprises a copyright mark.
Further, in accordance with a preferred embodiment of the present invention, the label information comprises access-rights data.
Moreover, in accordance with a preferred embodiment of the present invention, the access-rights data comprise playback/copying permissions.
Still further, in accordance with a preferred embodiment of the present invention, at least one color element comprises a red-green-blue color element.
Additionally, in accordance with a preferred embodiment of the present invention, at least one color element comprises a chrominance/luminance color element.
Further, in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YCbCr chrominance/luminance color element.
Moreover, in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YPbPr chrominance/luminance color element.
Still further, in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YDbDr chrominance/luminance color element.
Additionally, in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises an xvYCC chrominance/luminance color element.
Further, in accordance with a preferred embodiment of the present invention, R′(p), G′(p) and B′(p) do not exceed a maximum value allowed for each color element in a color rendering system.
Moreover, in accordance with a preferred embodiment of the present invention, any of R′(p), G′(p) and B′(p) is truncated in order to ensure that none of R′(p), G′(p) and B′(p) exceeds the maximum value.
Still further, in accordance with a preferred embodiment of the present invention, the color rendering system comprises a red-green-blue color rendering system.
Additionally, in accordance with a preferred embodiment of the present invention, the color rendering system comprises a chrominance/luminance color rendering system.
Further, in accordance with a preferred embodiment of the present invention, R′(p), G′(p) and B′(p) are not less than a minimum value allowed for each color element in the color rendering system.
Moreover, in accordance with a preferred embodiment of the present invention, any of R′(p), G′(p) and B′(p) is truncated in order to ensure that none of R′(p), G′(p) and B′(p) is less than the minimum value.
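A sketch of the truncation just described, assuming an 8-bit color rendering system with an allowed range of 0 to 255 (the range and the function name are illustrative assumptions, not taken from the text):

```python
import numpy as np

def truncate_component(C_marked, lo=0.0, hi=255.0):
    """Clamp marked color values so none falls below lo or exceeds hi."""
    return np.clip(C_marked, lo, hi)
```

Note that truncation alters values only at pixels driven past the allowed range, so keeping the magnitude of ω small limits how much the embedded linear gradient is distorted at saturated pixels.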
Still further, in accordance with a preferred embodiment of the present invention, the color rendering system comprises a red-green-blue color rendering system.
Additionally, in accordance with a preferred embodiment of the present invention, the color rendering system comprises a chrominance/luminance color rendering system.
Further, in accordance with a preferred embodiment of the present invention, representing the label information as a 2-coordinate vector comprises: representing the label information as a bit string; subdividing the bit string into a plurality of bit substrings; and converting each bit substring of the plurality of bit substrings into a corresponding 2-coordinate vector.
Additionally, in accordance with a preferred embodiment of the present invention, each bit substring of the plurality of bit substrings comprises a string of three bits.
Still further, in accordance with a preferred embodiment of the present invention, each bit substring of the plurality of bit substrings comprises a string of two bits.
In accordance with another preferred embodiment of the present invention, there is also provided a method comprising: capturing a video stream comprising embedded data; dividing the video stream into a plurality of video frames comprised therein; locating a color mass, denoted C′, for each color element of each single video frame comprised in the plurality of video frames, by summing the color-value coordinates of the given color element over that single video frame; locating a color mass, denoted C, for each color element of a corresponding single video frame, the corresponding single video frame corresponding to a video frame not comprising embedded data; subtracting C from C′; and deriving, from the result of the subtraction, the values of a first coordinate and a second coordinate, the first coordinate and the second coordinate comprising the coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the single video frame.
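The detection steps above can be sketched as follows. One reading, assumed here, is that the "color mass" is the coordinate-weighted sum C = Σ_p p·R(p), a 2-vector per color element; with symmetrically centered coordinates (where Σx = Σy = Σxy = 0), marking with R′(p) = R(p) + ⟨p, ω⟩ then gives C′ − C = (α·Σx², β·Σy²), from which α and β follow directly. The interpretation, the centering convention, and all names are assumptions of this sketch.

```python
import numpy as np

def centered_grid(H, W):
    """Symmetrically centered pixel coordinates for an H x W frame."""
    xs = np.arange(W) - (W - 1) / 2.0
    ys = np.arange(H) - (H - 1) / 2.0
    return np.meshgrid(xs, ys)

def color_mass(C):
    """Coordinate-weighted sum of one color element: sum_p C(p) * p."""
    X, Y = centered_grid(*C.shape)
    return np.array([(C * X).sum(), (C * Y).sum()])

def recover_omega(C_marked, C_original):
    """Derive (alpha, beta) from the difference of the two color masses."""
    X, Y = centered_grid(*C_original.shape)
    delta = color_mass(C_marked) - color_mass(C_original)
    # With centered coordinates sum(X) = sum(Y) = sum(X*Y) = 0, so
    # delta = (alpha * sum(X**2), beta * sum(Y**2)).
    return delta / np.array([(X ** 2).sum(), (Y ** 2).sum()])
```

A round trip with the embedding transform recovers ω exactly under these assumptions; in practice the reference color mass C would come from an unmarked copy of the frame, as the text describes.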
Further, in accordance with a preferred embodiment of the present invention, label information is reconstructed as a result of the step of deriving the values of the first coordinate and the second coordinate.
Still further, in accordance with a preferred embodiment of the present invention, a unique user ID is identified as a result of reconstructing the label information.
In accordance with still another preferred embodiment of the present invention, there is also provided a system comprising: a label information receiver; a 2-coordinate vector, denoted ω, the coordinates being denoted α and β respectively, so that ω = (α, β), the 2-coordinate vector representing the label information; a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, where p = (x, y), x and y comprising the coordinates of pixel p, the plurality of pixels being represented as triples of color elements, the color elements being denoted R, G, B respectively; and a video frame marker, which marks the video frame by transforming each pixel of the plurality of pixels according to: R′(p) = R(p) + ⟨p, ω_R⟩; G′(p) = G(p) + ⟨p, ω_G⟩; and B′(p) = B(p) + ⟨p, ω_B⟩, where ⟨p, ω_R⟩ denotes the dot product of p and ω_R, ⟨p, ω_G⟩ denotes the dot product of p and ω_G, and ⟨p, ω_B⟩ denotes the dot product of p and ω_B.
In accordance with still another preferred embodiment of the present invention, there is also provided a system comprising: a captured video stream comprising embedded data; a video stream divider, which divides the captured video stream into a plurality of video frames comprised therein; a first color mass locator, which locates a first color mass, denoted C′, for each color element of each single video frame comprised in the plurality of video frames, by summing the color-value coordinates of the given color element over that single video frame; a second color mass locator, which locates a second color mass, denoted C, for each color element of a corresponding single video frame, the corresponding single video frame corresponding to a video frame not comprising embedded data; a processor, which subtracts C from C′; and a second processor, which derives, from the result of the subtraction, the values of a first coordinate and a second coordinate, the first coordinate and the second coordinate comprising the coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the single video frame.
In accordance with still another preferred embodiment of the present invention, there is also provided a signal comprising a video stream comprising a plurality of video frames, each video frame of the plurality of video frames comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, where p = (x, y), x and y comprising the coordinates of pixel p, the plurality of pixels being represented as triples of color elements, the color elements being denoted R, G, B respectively, wherein label information is represented as a 2-coordinate vector, denoted ω, the coordinates being denoted α and β respectively, so that ω = (α, β), and the label information has been applied to transform each pixel of the plurality of pixels according to: R′(p) = R(p) + ⟨p, ω_R⟩; G′(p) = G(p) + ⟨p, ω_G⟩; and B′(p) = B(p) + ⟨p, ω_B⟩, where ⟨p, ω_R⟩ denotes the dot product of p and ω_R, ⟨p, ω_G⟩ denotes the dot product of p and ω_G, and ⟨p, ω_B⟩ denotes the dot product of p and ω_B.
In accordance with still another preferred embodiment of the present invention, there is also provided a storage medium comprising a video stream comprising a plurality of video frames, each video frame of the plurality of video frames comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, where p = (x, y), x and y comprising the coordinates of pixel p, the plurality of pixels being represented as triples of color elements, the color elements being denoted R, G, B respectively, wherein label information is represented as a 2-coordinate vector, denoted ω, the coordinates being denoted α and β respectively, so that ω = (α, β), and the label information has been applied to transform each pixel of the plurality of pixels according to: R′(p) = R(p) + ⟨p, ω_R⟩; G′(p) = G(p) + ⟨p, ω_G⟩; and B′(p) = B(p) + ⟨p, ω_B⟩, where ⟨p, ω_R⟩ denotes the dot product of p and ω_R, ⟨p, ω_G⟩ denotes the dot product of p and ω_G, and ⟨p, ω_B⟩ denotes the dot product of p and ω_B.
Brief description of the drawings
The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings, in which:
Fig. 1 is a simplified block-diagram illustration of a video data embedding system constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 2 is a simplified illustration of a typical frame into which data is to be embedded in the system of Fig. 1;
Fig. 3 is a depiction of a preferred embodiment of a method for injecting label information into the typical frame of Fig. 2;
Fig. 4 is a depiction of the typical frame of Fig. 2 overlaid with eight vectors;
Fig. 5 is a simplified illustration depicting an exemplary frame in the system of Fig. 1, showing the color elements and pixel coordinates of a plurality of pixels comprised in the exemplary frame before data embedding;
Fig. 6 is a simplified illustration of a typical color gradient within a frame produced by a preferred embodiment of the present invention, and of the 2-coordinate vector ω; and
Figs. 7 and 8 are simplified flowcharts of preferred methods of operation of the system of Fig. 1.
Detailed description of preferred embodiments
Reference is now made to Fig. 1, which is a simplified block-diagram illustration of a video data embedding system constructed and operative in accordance with a preferred embodiment of the present invention. The system of Fig. 1 comprises a content rendering device 10. The content rendering device 10 preferably comprises label information 15 and a data embedding system 20.
The label information 15 preferably comprises any appropriate information, for example and without limiting the generality of the foregoing, information identifying the rendering device 10, preferably a unique device ID of the content rendering device 10. Alternatively and preferably, the label information 15 comprises a copyright mark or other access-rights data, for example and without limiting the generality of the foregoing, playback/copying permissions obeyed by the content rendering device 10. For example and without limiting the generality of the foregoing, those skilled in the art will appreciate that copyright information may be a single bit indicating copyrighted/not copyrighted. Alternatively, copyright may be expressed by several bits, for example and without limiting the generality of the foregoing, permission to copy but not to burn to CD. Authorized playback devices are assumed to obey such signals, while it is assumed that unauthorized playback devices do not obey such signals. It is appreciated that combinations of appropriate types of identifying information may alternatively be used as the label information 15.
The data embedding system 20 is preferably operative to inject embedded data, depicted in Fig. 1 as asterisks "*", into frames 30, 40, 50 of a video stream 60.
The operation of the system of Fig. 1 is now described. The video stream 60 is depicted as comprising three different types of video frames:
frames 30 into which data has not yet been embedded;
a frame 40 into which data is presently being embedded; and
frames 50 into which data has already been embedded.
The data embedding system 20 preferably receives the label information 15 as input, produces embedded data, depicted as asterisks "*", and injects a watermark (referred to herein by the term "WM") into the frame 40 into which data is presently being embedded.
Content comprising the video stream 60, now comprising a plurality of frames 50 into which data has been embedded, may be uploaded to a content sharing network 70, or may otherwise be made available on the content sharing network 70. The content sharing network 70 typically comprises either a streaming content sharing network or a peer-to-peer content sharing network. Alternatively, the content sharing network 70 may comprise any appropriate type of online and/or offline content distribution scheme, for example and without limiting the generality of the foregoing, the retail sale of pirated DVDs. A second device 80 may subsequently acquire the video stream 60 from the content sharing network 70.
A broadcaster, a content owner, or an agent of another appropriate authority may also acquire the video stream 60 from the content sharing network 70. Having acquired the video stream 60 from the content sharing network 70, the broadcaster, content owner, or other interested stakeholder preferably inputs the video stream 60 into a detection device 90. The detection device 90 preferably extracts the embedded data, depicted as asterisks "*", from each frame 50 comprised in the video stream 60 into which data has been embedded. The extracted embedded data is subsequently input into an embedded data detection system 95. The embedded data detection system 95 is preferably able to determine the injected label information 15 from the input embedded data.
Reference is now made to Fig. 2, which is a simplified illustration of a typical frame into which data is to be embedded in the system of Fig. 1. Those skilled in the art will appreciate that each frame into which data is to be embedded comprises a plurality of pixels. Each of the plurality of pixels can be represented as a tuple comprising the set of color elements present in that pixel. For example and without limiting the generality of the foregoing, in the RGB color system (hereinafter R, G, B, where R represents red, G represents green, and B represents blue, whether used jointly or separately), each color element of each of the plurality of pixels can be expressed as a value between 0 and 255.
Those skilled in the art will appreciate that pixel colors may alternatively be represented in any appropriate color space, such as any of the well-known chrominance/luminance systems (for example, YCbCr; YPbPr; YDbDr), or according to the xvYCC standard, IEC 61966-2-4. For simplicity of discussion, pixel colors are expressed herein, in a non-limiting manner, as RGB triples.
The term "inject", in all of its grammatical forms, is used herein interchangeably with the term "embed", in all of its grammatical forms.
The following notation is used in the discussion below and in the claims; portions of it are depicted, for illustrative purposes, in Fig. 2:
W — the frame width, in pixels
H — the frame height, in pixels
p = (x, y) — the position of a pixel relative to the frame center; for example, the top-left pixel is (−W/2, −H/2)
R(p), G(p), B(p) — the original red, green, and blue components of pixel p
R′(p), G′(p), B′(p) — the red, green, and blue components of pixel p after data embedding
R* = Σ R(p) — the sum of R(p) over every pixel p in the frame
Similarly for G, G* = Σ G(p), and for B, B* = Σ B(p). For simplicity of discussion, further examples are limited to the R component.
ω = (α, β) — the injected information, expressed as a 2-coordinate vector. As noted above with reference to Fig. 1, the injected information preferably depends on appropriate identifying information, preferably information identifying the rendering device 10 (Fig. 1), and preferably a unique device ID of the content rendering device 10 (Fig. 1).
⟨A, B⟩ = Σ (A_i · B_i) — the dot product of vectors A and B
Reference is now made to Figs. 3 and 4. Fig. 3 is a depiction of a preferred embodiment of a method for injecting the label information 15 (Fig. 1) into the typical frame of Fig. 2. Fig. 4 is a depiction of the typical frame of Fig. 2 overlaid with eight vectors. As noted above with reference to Fig. 1, the label information 15 (Fig. 1) preferably comprises any appropriate information, for example and without limiting the generality of the foregoing, information identifying the rendering device 10 (Fig. 1), preferably a unique device ID of the content rendering device 10 (Fig. 1). In Fig. 3, for this non-limiting example, the label information 300 is depicted as an arbitrary 32-bit number.
The label information 300 is depicted as divided into a plurality of 3-bit triples. Each 3-bit triple is depicted in association with a specific 2-coordinate vector ω. Specifically:
Each 3-bit triple is associated with one of the 8 vectors a-h depicted in Fig. 4. A preferred scheme for associating each 3-bit triple is depicted, as the matrix of results 320, at the bottom of Fig. 3. Specifically:
Vector Bit value
a 000
b 001
c 010
d 011
e 100
f 101
g 110
h 111
It is appreciated that the method of dividing the identifying information into groups of 3 bits is arbitrary; any appropriate alternative method of division is also valid.
It is appreciated that the choice of the vectors a-h is arbitrary; any alternative group of vectors whose origin is at the center of the viewing screen is a valid group of vectors for the preferred embodiments of the present invention.
It is appreciated that the association of bit values with vectors is arbitrary, and any alternative scheme is also valid. For example, but without limiting the generality of the foregoing, the following table describes an alternative association of each 3-bit triple with a vector:
Vector Bit value
a 111
b 110
c 101
d 100
e 011
f 010
g 001
h 000
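The splitting and vector-association steps described above can be sketched as follows. This is an illustrative sketch only: the particular assignment of 3-bit values to eight evenly spaced unit vectors, and the helper names `VECTORS`, `to_vectors`, and `split` behavior, are assumptions made for this example; as the text stresses, any consistent assignment of bit triples to center-origin vectors is valid.

```python
import math

# eight evenly spaced unit vectors standing in for the vectors a-h of Fig. 4
VECTORS = {format(i, "03b"): (math.cos(i * math.pi / 4),
                              math.sin(i * math.pi / 4))
           for i in range(8)}

def to_vectors(bits):
    """Pad a bit string to a multiple of 3 with zeros, split it into 3-bit
    triples, and map each triple to its associated 2-coordinate vector."""
    padded = bits.ljust(-(-len(bits) // 3) * 3, "0")
    triples = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return [VECTORS[t] for t in triples]
```

For example, a 32-bit marking string is padded to 33 bits and yields 11 vectors, matching the 33-bit rounding discussed below.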
It is appreciated that although the marking information 300 in Fig. 3 is depicted as a 32-bit number, a 33-bit or 36-bit number may be needed in order to have a complete group of vectors ω_R3, ω_G3, ω_B3, and possibly ω_R4, ω_G4, ω_B4. In Fig. 3, the missing required bits are indicated by the empty box 330. In order to have a complete group of 33 or 36 bits, one or four padding bits must be added to the 32-bit marking information 300, using techniques well known in the art. For example, but without limiting the generality of the foregoing, a 4-bit checksum may be added as padding bits, the last 4 bits may be repeated as padding bits, or an arbitrary 4-bit sequence (for example, any one of 0000, 0101, 1010, or 1111) may be added as padding bits, thereby rounding the 32-bit marking information 300 up to 36 bits. Similar techniques may be used to round the 32-bit marking information 300 up to 33 bits.
Each group of three vectors ω_Rn, ω_Gn, ω_Bn is preferably used to embed data into a limited number of frames, as described below. For example, but without limiting the generality of the foregoing, in the example depicted in Figs. 3 and 4, ω_R2, ω_G2, ω_B2 are used to embed data into frames 1801-3600.
After all 33 or 36 bits have been used to embed data into a group of frames, the marking information 300 repeats.
The marking information 15 (Fig. 1) is preferably encoded as three 2-dimensional vectors ω_R, ω_G, ω_B over the real numbers, subject to the constraints described below.
In order to inject the data ω_R, ω_G, ω_B, each pixel p in a frame is transformed as follows:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>.
It is appreciated that, regardless of the values of R'(p), G'(p), and B'(p), the values of R, G, and B may never exceed the maximum value imposed by the video color rendering system. For example, but without limiting the generality of the foregoing, in a system where RGB values range between 0 and 255, R, G, and B may never exceed the maximum value of 255. Similarly, regardless of the values of R'(p), G'(p), and B'(p), the values of R, G, and B may never fall below the minimum value of 0. For example, but without limiting the generality of the foregoing, if G'(p) = 258, then G'(p) is clipped down to 255; similarly, if B'(p) = -2, then B'(p) is raised to 0.
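The per-pixel transform and the clipping described above can be sketched as follows. This is an illustrative sketch only, not the patent's reference implementation; representing a frame as a dict keyed by center-origin coordinates is an assumption made for brevity.

```python
def embed(frame, omega_r, omega_g, omega_b):
    """Add <p, omega> to each color component of each pixel.

    frame: dict mapping center-origin coordinates (x, y) to (R, G, B) triples.
    omega_*: 2-coordinate vectors (alpha, beta), one per color component.
    """
    def dot(p, w):
        return p[0] * w[0] + p[1] * w[1]

    def clamp(v):
        # never exceed 255 or fall below 0, as described above
        return max(0, min(255, v))

    return {p: (clamp(r + dot(p, omega_r)),
                clamp(g + dot(p, omega_g)),
                clamp(b + dot(p, omega_b)))
            for p, (r, g, b) in frame.items()}
```

With omega_r = (2, 0), for example, the pixel at p = (1, 1) has its red component raised by <(1, 1), (2, 0)> = 2, subject to clipping at 255.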
Reference is now made to Fig. 5, which is a sketch depicting an example frame, according to the system of Fig. 1, showing the color components and the pixel coordinates of the plurality of pixels comprised in the example frame before data embedding. Figs. 3 and 4 were discussed as an example of a preferred embodiment of the present invention. It is appreciated that all values are provided for illustrative purposes only and should never be interpreted as limiting. For ease of depiction, the example frame depicted in Fig. 5 comprises only 16 pixels. The following table lists the example values depicted in the example frame of Fig. 5:
Pixel      R    G    B
p(-2,2)    112  27   19
p(-1,2)    113  26   25
p(1,2)     111  27   19
p(2,2)     110  29   19
p(-2,1)    110  26   21
p(-1,1)    114  24   18
p(1,1)     110  24   23
p(2,1)     108  23   25
p(-2,-1)   108  23   23
p(-1,-1)   108  22   25
p(1,-1)    100  20   27
p(2,-1)    98   20   30
p(-2,-2)   103  19   27
p(-1,-2)   100  17   29
p(1,-2)    96   13   32
p(2,-2)    94   11   35

R* = ΣR(p) = 1695    G* = ΣG(p) = 351    B* = ΣB(p) = 397
Several examples of embedding data are now provided. For ease of description, assume a frame of 3 pixels by 3 pixels. Each pixel is labeled P_n, and each pixel is assigned the following coordinates:
P1(-1,-1)  P2(0,-1)  P3(1,-1)
P4(-1,0)   P5(0,0)   P6(1,0)
P7(-1,1)   P8(0,1)   P9(1,1)
As noted above, the top-left pixel is (-W/2, -H/2), so the top half of the coordinate system has negative values of y.
As noted above, each pixel P1-P9 comprises an RGB value. The RGB values below are provided as an example:
P1(191,27,0)    P2(188,25,220)  P3(212,6,194)
P4(123,203,86)  P5(212,38,161)  P6(35,89,121)
P7(20,194,19)   P8(104,76,199)  P9(62,149,131)
Assume ω_RGB = (α, β) = (2, 0), and multiply the coordinates (x, y) of each pixel by (α, β), obtaining (α * x) + (β * y) = (2 * x) + (0 * y) = 2 * x, which yields the correction to be added to each color element of each pixel:
P1(-2)  P2(0)  P3(2)
P4(-2)  P5(0)  P6(2)
P7(-2)  P8(0)  P9(2)
Adding the correction to each color element of each pixel, as described above, gives:
P'1(189,25,0)    P'2(188,25,220)  P'3(214,8,196)
P'4(121,201,84)  P'5(212,38,161)  P'6(37,91,123)
P'7(18,192,17)   P'8(104,76,199)  P'9(64,151,133)
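The first worked example can be checked mechanically. The sketch below is an illustrative aid, not part of the patent text; it recomputes the corrections 2 * x for the 3-by-3 grid and the clipped result for P1, whose blue component 0 - 2 = -2 is raised to 0:

```python
# pixel coordinates of the 3x3 example frame, in reading order
coords = [(-1, -1), (0, -1), (1, -1),
          (-1, 0),  (0, 0),  (1, 0),
          (-1, 1),  (0, 1),  (1, 1)]
alpha, beta = 2, 0  # omega_RGB = (2, 0)

# one correction per pixel, added to all three color components
corrections = [alpha * x + beta * y for x, y in coords]

p1 = (191, 27, 0)
marked_p1 = tuple(max(0, min(255, c + corrections[0])) for c in p1)
```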
As a second example, assume a frame of 5 pixels by 5 pixels:
P1(209,54,9)     P2(144,165,59)   P3(97,88,158)    P4(112,87,92)    P5(35,191,8)
P6(118,184,246)  P7(204,18,51)    P8(60,253,35)    P9(20,116,54)    P10(111,76,177)
P11(137,116,184) P12(145,79,254)  P13(254,139,112) P14(7,96,98)     P15(151,45,193)
P16(142,85,214)  P17(123,193,146) P18(64,41,196)   P19(231,60,231)  P20(69,56,174)
P21(53,241,229)  P22(16,179,88)   P23(22,130,219)  P24(36,132,117)  P25(174,72,122)
Each pixel is labeled P_n, and each pixel is assigned the following coordinates:
P1(-2,-2)   P2(-1,-2)   P3(0,-2)   P4(1,-2)   P5(2,-2)
P6(-2,-1)   P7(-1,-1)   P8(0,-1)   P9(1,-1)   P10(2,-1)
P11(-2,0)   P12(-1,0)   P13(0,0)   P14(1,0)   P15(2,0)
P16(-2,1)   P17(-1,1)   P18(0,1)   P19(1,1)   P20(2,1)
P21(-2,2)   P22(-1,2)   P23(0,2)   P24(1,2)   P25(2,2)
Assume ω_RGB = (α, β) = (-1, 1), and multiply the coordinates (x, y) of each pixel by (α, β), obtaining (α * x) + (β * y) = (-1 * x) + (1 * y), which yields the correction to be added to each color element of each pixel:
P1(-2,-2) = 0   P2(-1,-2) = -1  P3(0,-2) = -2  P4(1,-2) = -3  P5(2,-2) = -4
P6(-2,-1) = 1   P7(-1,-1) = 0   P8(0,-1) = -1  P9(1,-1) = -2  P10(2,-1) = -3
P11(-2,0) = 2   P12(-1,0) = 1   P13(0,0) = 0   P14(1,0) = -1  P15(2,0) = -2
P16(-2,1) = 3   P17(-1,1) = 2   P18(0,1) = 1   P19(1,1) = 0   P20(2,1) = -1
P21(-2,2) = 4   P22(-1,2) = 3   P23(0,2) = 2   P24(1,2) = 1   P25(2,2) = 0
Adding the correction to each color element of each pixel, as described above, gives:
P'1(209,54,9)     P'2(143,164,58)   P'3(95,86,156)    P'4(109,84,89)    P'5(31,187,4)
P'6(119,185,247)  P'7(204,18,51)    P'8(59,252,34)    P'9(18,114,52)    P'10(108,73,174)
P'11(139,118,186) P'12(146,80,255)  P'13(254,139,112) P'14(6,95,97)     P'15(149,43,191)
P'16(145,88,217)  P'17(125,195,148) P'18(65,42,197)   P'19(231,60,231)  P'20(68,55,173)
P'21(57,245,233)  P'22(19,182,91)   P'23(24,132,221)  P'24(37,133,118)  P'25(174,72,122)
Reference is now made to Fig. 6, which is a sketch of a typical color gradient in a frame 620, produced by a preferred embodiment of the present invention, and of a 2-coordinate vector ω 610. As described below, p is largest at the corners of the screen; therefore, the dot product <p, ω> is largest for the largest length of p. Accordingly, pixel 630 is depicted as substantially less dark than pixel 640. It is appreciated that, in this example, ω 610 depicts the influence of ω on any single RGB component.
One skilled in the art will appreciate that a video signal or other appropriate signal can comprise video comprising embedded data as described above with reference to Figs. 1-6. One skilled in the art will also appreciate that video comprising embedded data as described above with reference to Figs. 1-6 can be stored on a compact disk (CD), a digital versatile disk (DVD), in flash memory, or on another appropriate storage medium.
The detection of embedded data is now described. For ease of description, the description below focuses on the red component only. It is appreciated that detection of the embedded data in the other color components is identical to detection in the red component. The detection device 90 (Fig. 1) typically receives content 60 from the content sharing network 70.
In the summations below, all sums are taken over all pixels in the frame under examination, unless explicitly stated otherwise.
As noted above, before data is embedded into a given frame, the color mass of component R is expressed as R* = ΣR(p). The sum R* = ΣR(p) is the sum of the values of a single color element over every pixel in a single frame.
It is appreciated that after data is embedded into a frame, the color mass remains unchanged:
ΣR'(p) = ΣR(p) + <Σp, ω_R> = ΣR(p) + 0 = R*
One skilled in the art will appreciate that <Σp, ω_R> = 0 because, for each pixel p = (x, y), there exists a corresponding pixel -p = (-x, -y); therefore, every summand of Σ<p, ω> has an equal summand of opposite sign.
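This symmetry argument is easy to confirm numerically. The small check below assumes, for illustration, a frame whose pixel coordinates are symmetric about the center, like the 16-pixel example frame above:

```python
# every coordinate p = (x, y) is paired with -p = (-x, -y), so the
# per-pixel corrections <p, omega> cancel and the color mass is preserved
coords = [(x, y) for x in (-2, -1, 1, 2) for y in (-2, -1, 1, 2)]
alpha, beta = 3, -5  # an arbitrary omega_R

total_correction = sum(alpha * x + beta * y for x, y in coords)
```

Since Σx = 0 and Σy = 0 over such a grid, `total_correction` is exactly zero, so ΣR'(p) = ΣR(p) = R*.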
Let C' denote the color mass center of a frame comprising embedded data. For the red component, the color mass center of the frame is defined as the normalized two-dimensional vector:
C'(R) = (Σ R'(p) * p) / R*
The difference between the color mass center of the frame after data embedding and the color mass center of the original frame is determined by subtraction:
D(R) = (Σ R'(p) * p) / (Σ R'(p)) - (Σ R(p) * p) / (Σ R(p))
     = (Σ R(p) * p + Σ <p, ω_R> * p - Σ R(p) * p) / R*
     = (Σ <p, ω_R> * p) / R*
Since p = (x, y) and ω_R = (α, β),
Σ <p, ω_R> * p = (Σ x*(αx + βy), Σ y*(αx + βy))
Opening the brackets and cancelling summands whose sum is 0:
Σ x*(αx + βy) = αΣx² + βΣxy, and
βΣxy = β(Σx)(Σy) = 0.
Therefore,
αΣx² ≈ α * 2 * (1/3) * (W/2)³ * H = αHW³/12
For the exact sums of squares, the following formula is used:
Σ_{k=0..n} k² = (2n³ + 3n² + n)/6
From the above, the following exact and approximate equations can be derived:
αΣx² = α * 2H * Σ_{x=0..W/2} x² = α * 2H * (2(W/2)³ + 3(W/2)² + W/2)/6 ≈ αHW³/12
βΣy² = β * 2W * Σ_{y=0..H/2} y² = β * 2W * (2(H/2)³ + 3(H/2)² + H/2)/6 ≈ βWH³/12
Therefore:
D(R) ≈ (HW / (12 R*)) * (αW², βH²),
and α and β are obtained from the following approximate equations:
α ≈ <D(R), (1, 0)> * 12R* / (HW³)
β ≈ <D(R), (0, 1)> * 12R* / (H³W)
For clarity of description, the approximate expressions have been used above. One skilled in the art will appreciate that, in an actual extraction process of embedded data, the exact equations should be used in place of the approximations.
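As a sketch of the exact extraction for the red component, α and β can be recovered from the mass-center difference using the exact sums Σx² and Σy². This is an illustrative sketch only; the dict-based frame layout and the function names are assumptions, not the patent's implementation, and the marked frame is assumed clipping-free so that the color mass R* is unchanged by embedding:

```python
def extract_omega(original, marked):
    """Recover (alpha, beta) of omega_R from an original and a marked frame.

    original, marked: dicts mapping center-origin (x, y) -> (R, G, B).
    """
    r_star = sum(rgb[0] for rgb in original.values())  # unchanged by embedding

    def mass_center(frame):
        cx = sum(rgb[0] * x for (x, y), rgb in frame.items()) / r_star
        cy = sum(rgb[0] * y for (x, y), rgb in frame.items()) / r_star
        return cx, cy

    # D(R): difference of the two color mass centers
    dx = mass_center(marked)[0] - mass_center(original)[0]
    dy = mass_center(marked)[1] - mass_center(original)[1]

    sum_x2 = sum(x * x for x, y in original)  # exact sums, no approximation
    sum_y2 = sum(y * y for x, y in original)
    return dx * r_star / sum_x2, dy * r_star / sum_y2
```

Embedding ω_R = (2, 0) into a small uniform frame and running `extract_omega` returns (2.0, 0.0) up to floating-point error.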
In order to ensure that the viewing experience is not degraded, ω is preferably selected so that the color component correction does not exceed a certain threshold. The inventors of the present invention suggest a threshold of 2%, or about 4 on a scale of 0 to 255. Since the dot product <p, ω> is linear, it is maximal for the maximal length of p; specifically, p is maximal at the corners of the screen. Therefore, the following constraint is preferably applied as an upper bound, ensuring that, on a scale of about 0-255, the threshold of 2% is not exceeded:
αW/2 + βH/2 < (2/100) * 255
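The corner-pixel constraint can be checked directly; the helper below is assumed for illustration and evaluates the largest correction, attained at a screen corner:

```python
def correction_ok(alpha, beta, width, height, threshold=0.02 * 255):
    """True if the largest |<p, omega>| over the frame, reached at a corner
    p = (width/2, height/2), stays under the visibility threshold."""
    return abs(alpha) * width / 2 + abs(beta) * height / 2 < threshold
```

For a 1920x1080 frame, ω = (0.002, 0.002) gives a corner correction of 3.0, safely under the suggested bound of about 5.1.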
The inventors of the present invention believe that the color-mass data embedding technique described herein is highly resistant to known attacks. Specifically:
Filtering: in its preferred embodiment, the present invention typically has very little influence, or none at all, anywhere but at the extremely low frequencies of the image and video frequency domain. Therefore, the proposed WM technique can neither be detected nor removed with standard low-pass filters, video color balancing tools, and similar tools whose target noise and signals lie at high frequencies.
Resizing (stretching), rotation, and shearing: since, put simply, α and β are the coordinates of a vector overlaid on the screen, stretching or rotating a video comprising embedded data is expected to cause only a linear change of the α and β values in the encoded data. Furthermore, an encoding method can be selected, for example, but without limiting the generality of the foregoing, by selecting the group of vectors ω such that the minimal angle between any two possible vectors is much larger than the maximal rotation caused by an attack, thereby defeating resizing (stretching), rotation, and shearing attacks.
Collusion attacks: a collusion attack typically proceeds by averaging several video signals comprising a WM, or by selecting individual frames from several frames comprising WMs, thereby producing a WM combining data from all of the examined signals. Specifically, frequency analysis of the combined signal usually reveals all of the injected frequencies. If, as described above, the data embedding system 20 (Fig. 1) pauses between the injections of separate bytes, so that only one original WM is present at any one time, then the resulting signal preferably comprises intervals allowing signal separation. Standard error-correction techniques, as known in the art, applied in both injection and detection, are preferably utilized to help separate multiple WMs.
Cropping: cropping a video comprising embedded data causes a loss of color information, thereby changing the values of α and β in the encoded data. The change in the values of α and β is proportional to the decline in perceived video quality relative to the quality of the original video.
Collusion attack by averaging: a collusion attack that averages several video signals comprising embedded data usually produces a WM combining the data of all of the averaged original signals; the resulting α and β are the averages of the original α and β values. For example, but without limiting the generality of the foregoing, the present invention, in its preferred embodiment, preferably avoids the loss of information by having each injector turn data embedding on for a random span of time.
Collusion attack by selection: a collusion attack that selects different frames from different video signals comprising embedded data produces a WM carrying data from all of the originally selected signals. The resulting α and β values still identify each participating source individually. In other words, a selection attack is useless.
Reference is now made to Figs. 7-8, which are simplified flowcharts of preferred methods of operation of the system of Fig. 1. Figs. 7-8 are believed to be self-explanatory in light of the above discussion.
It is appreciated that software components of the present invention may, if so desired, be implemented in ROM (read-only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined only by the claims which follow.

Claims (36)

1. A method comprising:
receiving marking information;
expressing the marking information as a 2-coordinate vector, the 2-coordinate vector being denoted ω, wherein the 2 coordinates are denoted α and β, respectively, such that ω = (α, β);
providing a video frame to be marked, the video frame comprising a plurality of pixels, every pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being expressed as triples of color elements, the color elements being denoted R, G, B, respectively; and
marking the video frame by transforming every pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product of p and ω_R;
<p, ω_G> denotes a dot product of p and ω_G; and
<p, ω_B> denotes a dot product of p and ω_B.
2. The method according to claim 1, wherein the marking information comprises information identifying a rendering device.
3. The method according to claim 2, wherein the information identifying a rendering device comprises a unique device identifier.
4. The method according to claim 1, wherein the marking information comprises a copyright mark.
5. The method according to claim 1, wherein the marking information comprises access rights data.
6. The method according to claim 5, wherein the access rights data comprises a playback/copying permission.
7. The method according to any of claims 1-6, wherein at least one color element comprises a Red-Green-Blue color element.
8. The method according to any of claims 1-6, wherein at least one color element comprises a chrominance/luminance color element.
9. The method according to claim 8, wherein the chrominance/luminance color element comprises a YCbCr chrominance/luminance color element.
10. The method according to claim 8, wherein the chrominance/luminance color element comprises a YPbPr chrominance/luminance color element.
11. The method according to claim 8, wherein the chrominance/luminance color element comprises a YDbDr chrominance/luminance color element.
12. The method according to claim 8, wherein the chrominance/luminance color element comprises an xvYCC chrominance/luminance color element.
13. The method according to any of claims 1-12, wherein none of R'(p), G'(p), and B'(p) exceeds a maximum value allowed for each said color element in a color rendering system.
14. The method according to claim 13, wherein any of R'(p), G'(p), and B'(p) is clipped in order to ensure that none of R'(p), G'(p), and B'(p) exceeds the maximum value.
15. The method according to claim 13 or claim 14, wherein the color rendering system comprises a Red-Green-Blue color rendering system.
16. The method according to claim 13 or claim 14, wherein the color rendering system comprises a chrominance/luminance color rendering system.
17. The method according to any of claims 1-16, wherein none of R'(p), G'(p), and B'(p) falls below a minimum value allowed for each said color element in a color rendering system.
18. The method according to claim 17, wherein any of R'(p), G'(p), and B'(p) is clipped in order to ensure that none of R'(p), G'(p), and B'(p) falls below the minimum value.
19. The method according to claim 17 or claim 18, wherein the color rendering system comprises a Red-Green-Blue color rendering system.
20. The method according to claim 17 or claim 18, wherein the color rendering system comprises a chrominance/luminance color rendering system.
21. The method according to any of claims 1-20, wherein expressing the marking information as a 2-coordinate vector comprises:
expressing the marking information as a string of bits;
subdividing the string of bits into a plurality of bit substrings; and
translating each bit substring of the plurality of bit substrings into a corresponding 2-coordinate vector.
22. The method according to claim 21, wherein each bit substring of the plurality of bit substrings comprises a string of three bits.
23. The method according to claim 21, wherein each bit substring of the plurality of bit substrings comprises a string of two bits.
24. A method comprising:
capturing a video stream comprising embedded data;
dividing the video stream into a plurality of video frames comprised therein;
locating a color mass, denoted C', for each color element of every single video frame comprised in the plurality of video frames, by summing color-value coordinates of a given color element in the single video frame;
locating a color mass, denoted C, for each color element of a corresponding single video frame, the corresponding single video frame corresponding to a video frame not comprising embedded data;
subtracting C from C'; and
deriving values of a first coordinate and a second coordinate from a result of the subtracting, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a string of bits, the string of bits comprising information embedded in the single video frame.
25. The method according to claim 24, wherein marking information is reconstructed as a result of deriving the values of the first coordinate and the second coordinate.
26. The method according to claim 24 or claim 25, wherein a unique user ID is identified as a result of reconstructing the marking information.
27. A system comprising:
a marking information receiver;
a 2-coordinate vector, denoted ω, wherein the 2 coordinates are denoted α and β, respectively, such that ω = (α, β), the 2-coordinate vector representing the marking information;
a video frame to be marked, the video frame comprising a plurality of pixels, every pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being expressed as triples of color elements, the color elements being denoted R, G, B, respectively; and
a video frame marker, which marks the video frame by transforming every pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product of p and ω_R;
<p, ω_G> denotes a dot product of p and ω_G; and
<p, ω_B> denotes a dot product of p and ω_B.
28. A system comprising:
a captured video stream comprising embedded data;
a video stream divider, which divides the captured video stream into a plurality of video frames comprised therein;
a first color mass locator, which locates a first color mass, denoted C', for each color element of every single video frame comprised in the plurality of video frames, by summing color-value coordinates of a given color element in the single video frame;
a second color mass locator, which locates a second color mass, denoted C, for each color element of a corresponding single video frame, the corresponding single video frame corresponding to a video frame not comprising embedded data;
a processor, which subtracts C from C'; and
a second processor, which derives values of a first coordinate and a second coordinate from a result of the subtraction, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a string of bits, the string of bits comprising information embedded in the single video frame.
29. A signal comprising:
a video stream comprising a plurality of video frames, every video frame of the plurality of video frames comprising a plurality of pixels, every pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being expressed as triples of color elements, the color elements being denoted R, G, B, respectively, wherein:
marking information is expressed as a 2-coordinate vector, the 2-coordinate vector being denoted ω, wherein the 2 coordinates are denoted α and β, respectively, such that ω = (α, β), the marking information having been used to transform every pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product of p and ω_R;
<p, ω_G> denotes a dot product of p and ω_G; and
<p, ω_B> denotes a dot product of p and ω_B.
30. A storage medium comprising:
a video stream comprising a plurality of video frames, every video frame of the plurality of video frames comprising a plurality of pixels, every pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being expressed as triples of color elements, the color elements being denoted R, G, B, respectively, wherein:
marking information is expressed as a 2-coordinate vector, the 2-coordinate vector being denoted ω, wherein the 2 coordinates are denoted α and β, respectively, such that ω = (α, β), the marking information having been used to transform every pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product of p and ω_R;
<p, ω_G> denotes a dot product of p and ω_G; and
<p, ω_B> denotes a dot product of p and ω_B.
31. Apparatus according to any of claims 27-30 and substantially as described hereinabove.
32. Apparatus according to any of claims 27-30 and substantially as shown in the drawings.
33. A method according to any of claims 1-26 and substantially as described hereinabove.
34. A method according to any of claims 1-26 and substantially as shown in the drawings.
35. A system according to any of claims 27-30 and substantially as described hereinabove.
36. A system according to any of claims 27-30 and substantially as shown in the drawings.
CN200880003825.1A 2007-02-05 2008-01-13 System and method for embedding and detecting data Expired - Fee Related CN101601068B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
IL181167A IL181167A0 (en) 2007-02-05 2007-02-05 System for embedding data
IL181167 2007-02-05
IL183841 2007-06-11
IL183841A IL183841A0 (en) 2007-06-11 2007-06-11 System for embedding data
PCT/IB2008/050104 WO2008096281A1 (en) 2007-02-05 2008-01-13 System for embedding data

Publications (2)

Publication Number Publication Date
CN101601068A true CN101601068A (en) 2009-12-09
CN101601068B CN101601068B (en) 2012-12-19

Family

ID=41421582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200880003825.1A Expired - Fee Related CN101601068B (en) 2007-02-05 2008-01-13 System and method for embedding and detecting data

Country Status (2)

Country Link
CN (1) CN101601068B (en)
IL (1) IL181167A0 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098453A (en) * 2010-09-13 2013-05-08 杜比实验室特许公司 Data transmission using out-of-gamut color coordinates

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0766468B1 (en) * 1995-09-28 2006-05-03 Nec Corporation Method and system for inserting a spread spectrum watermark into multimedia data
US5825892A (en) * 1996-10-28 1998-10-20 International Business Machines Corporation Protecting images with an image watermark
US5960081A (en) * 1997-06-05 1999-09-28 Cray Research, Inc. Embedding a digital signature in a video sequence
CN100370481C (en) * 2004-10-10 2008-02-20 北京华旗数码影像技术研究院有限责任公司 Method of using vulnerable watermark technology for digital image fidelity

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098453A (en) * 2010-09-13 2013-05-08 杜比实验室特许公司 Data transmission using out-of-gamut color coordinates
US9190014B2 (en) 2010-09-13 2015-11-17 Dolby Laboratories Licensing Corporation Data transmission using out-of-gamut color coordinates
CN103098453B (en) * 2010-09-13 2016-12-21 杜比实验室特许公司 Use the data transmission of the outer color coordinates of colour gamut

Also Published As

Publication number Publication date
CN101601068B (en) 2012-12-19
IL181167A0 (en) 2008-01-06

Similar Documents

Publication Publication Date Title
US6961444B2 (en) Time and object based masking for video watermarking
US9996891B2 (en) System and method for digital watermarking
EP2115694B1 (en) System for embedding data
US20080226125A1 (en) Method of Embedding Data in an Information Signal
JP2003134483A (en) Method and system for extracting watermark signal in digital image sequence
Jiang et al. Adaptive spread transform QIM watermarking algorithm based on improved perceptual models
Ahuja et al. Video watermarking scheme based on candidates I-frames for copyright protection
CN113179407B (en) Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation
US9111340B2 (en) Frequency-modulated watermarking
Biswas et al. MPEG-2 digital video watermarking technique
Kay et al. Robust content based image watermarking
CN101601068B (en) System and method for embedding and detecting data
Unno et al. Invisibility and readability of temporally and spatially intensity-modulated metaimage for information hiding on digital signage display system
KR20100007616A (en) The adaptive watermarking method for converting digital to analog and analog to digital
Qin et al. A new JPEG image watermarking method exploiting spatial JND model
Tang et al. Improved spread transform dither modulation using luminance-based JND model
Das et al. An invisible color watermarking framework for uncompressed video authentication
Sulaiman et al. Fractal based fragile watermark
El Allali et al. Object based video watermarking sheme using feature points
Fu et al. RAWIW: RAW Image Watermarking robust to ISP pipeline
Bae et al. A New Mobile Watermarking Scheme Based on Display-capture
Pandya et al. Digital Video Watermarking for Educational Video Broadcasting and Monitoring Application
Biswas et al. Compressed video watermarking technique
Bahrushin et al. A video watermarking scheme resistant to synchronization attacks
Atomori et al. Picture Watermarks Surviving General Affine Transformation and Random Distortion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1138669

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1138669

Country of ref document: HK

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121219

CF01 Termination of patent right due to non-payment of annual fee