EP1683104A2 - Image-watermarking method and device - Google Patents
- Publication number
- EP1683104A2 (application EP04805313A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- sub-bands
- coefficients
- component
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 34
- 239000013598 vector Substances 0.000 claims description 178
- 238000000354 decomposition reaction Methods 0.000 claims description 51
- 238000001514 detection method Methods 0.000 claims description 12
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000004590 computer program Methods 0.000 claims description 3
- 238000004422 calculation algorithm Methods 0.000 description 28
- 230000006870 function Effects 0.000 description 7
- 238000003780 insertion Methods 0.000 description 5
- 230000037431 insertion Effects 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 230000009466 transformation Effects 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0052—Embedding of the watermark in the frequency domain
Definitions
- the present invention relates to a method and a device for watermarking a mark in an image comprising at least three components. It also relates to a method and a device for detecting a signature inserted in an image comprising at least three components.
- the invention lies in the field of watermarking images comprising several components, such as for example color images. These watermarking techniques can for example be used for the protection of copyright in a digital image.
- the mark embedded in an image must be robust to the various manipulations the image may undergo, such as compression.
- the mark embedded in an image must also be imperceptible, in order to preserve the quality of the image.
- the watermarking of a mark in the image is carried out according to an insertion rule taking into account the relative positions of three component vectors.
- the embedding of the mark is carried out with a marking strength which can be adapted as a function of the colorimetric or texture characteristics of the image.
- the inventors of the present invention have observed that the watermark, as presented in that application, can in certain cases become visible when one seeks to increase the robustness of the mark. Conversely, the mark is not very robust when one seeks to make the watermark invisible.
- the invention aims to remedy these drawbacks of the prior art by proposing a method and a device for watermarking a mark in an image in which the mark is both invisible and more robust than in the state of the art.
- the invention provides a method of watermarking a mark composed of a plurality of binary information items in an image comprising at least three components, characterized in that the method comprises the steps of: - decomposition of at least one component of the image into detail sub-bands in different directions and comprising coefficients, each coefficient being characterized by its position in the detail sub-band to which it belongs and by its amplitude, - determination, for each position, of information representative of variations of local amplitudes in different directions from the amplitudes of the coefficients at the position in the different detail sub-bands and of the coefficients close to the position in the different detail sub-bands, - determination of a marking strength at each position from the information representative of variations of local amplitudes in different directions determined for the position, - formation, for each position and for each component, of a vector whose coordinates are the amplitudes of the coefficients at the position in the different detail sub-bands of the component,
- the invention relates to a device for watermarking a mark composed of a plurality of binary information items in an image comprising at least three components, characterized in that the device comprises: - means for decomposing at least one component of the image into detail sub-bands in different directions and comprising coefficients, each coefficient being characterized by its position in the detail sub-band to which it belongs and by its amplitude, - means for determining, for each position, information representative of variations of local amplitudes in different directions from the amplitudes of the coefficients at the position in the different detail sub-bands and of the coefficients close to the position in the different detail sub-bands, - means for determining a marking strength at each position from the information representative of variations of local amplitudes in different directions determined for the position, - means for forming, for each position and for each component, a vector whose coordinates are the amplitudes of the coefficients at the position in the different detail sub-bands of the component, - means for selecting, for each position,
- each component of the image is decomposed into detail sub-bands in different directions, the information representative of variations in amplitudes is determined for each component and each position, and the marking strength is determined at each position of each component.
- the determination of the information representative of variations of local amplitudes for each component and each position breaks down into squaring the amplitude of each coefficient of each detail sub-band of each component, and computing a median value, from the squared amplitudes of the coefficient and of the coefficients close to it, for each coefficient of each detail sub-band of each component.
- by squaring the amplitudes of the coefficients, a notion of energy is introduced for the coefficients of the sub-band. They can then be considered as indicators of the evolution of energy in the image.
- thus, the areas of the sub-bands comprising large variations in amplitude are highlighted, while possible false detections at contour breaks are avoided.
- the determination of the marking strength at each position of each component is carried out by forming a vector whose coordinates are the median values calculated in each detail sub-band, by grouping the vectors whose coordinates are similar into predetermined classes, and by assigning a marking strength to each position according to the predetermined class to which the vector of the position belongs.
- the predetermined classes are: the class grouping the vectors representative of areas having no variations, and/or the class grouping the vectors representative of areas of the image comprising predominantly horizontal variations, and/or the class grouping the vectors representative of areas of the image comprising predominantly vertical variations, and/or the class grouping the vectors representative of areas of the image comprising predominantly diagonal variations, and/or the class grouping the vectors representative of very strongly textured areas of the image with no preferred direction.
- the marking strength is also assigned as a function of the component of the image, and the decomposition is a decomposition into Haar wavelets.
- the decomposition is a decomposition into Haar wavelets.
- the Haar wavelet is particularly well suited to texture detection because it does not introduce edge effects in the image, while being simple to implement.
- each component of the image is decomposed, according to another sub-band decomposition, into sub-bands comprising coefficients, each coefficient being characterized by its position in the sub-band to which it belongs and by its amplitude; the method further comprises a step of reconstructing the image from the coefficients of the sub-bands and from the coefficients whose amplitudes have been modified.
- the invention also relates to a method for detecting a signature inserted in an image comprising at least three components, characterized in that the method comprises the steps of: decomposing at least one component of the image into detail sub-bands in different directions and comprising coefficients, each coefficient being characterized by its position in the detail sub-band to which it belongs and by its amplitude; determining, for each position, information representative of variations of local amplitudes in different directions from the amplitudes of the coefficients at this position in the different detail sub-bands and from the coefficients close to this position in the different detail sub-bands; and detecting the signature from at least part of the binary information inserted at a plurality of positions of the image and from the information representative of the variations of local amplitudes in different directions corresponding to the positions of the binary information.
- the invention provides a device for detecting a signature inserted in an image comprising at least three components, characterized in that the device comprises: means for decomposing at least one component of the image into detail sub-bands in different directions and comprising coefficients, each coefficient being characterized by its position in the detail sub-band to which it belongs and by its amplitude; means for determining, for each position, information representative of variations of local amplitudes in different directions from the amplitudes of the coefficients at this position in the different detail sub-bands and of the coefficients close to this position in the different detail sub-bands; and means for detecting the signature from at least part of the binary information inserted at a plurality of positions of the image and from the information representative of the variations of local amplitudes in different directions corresponding to the positions of the binary information.
- the binary information used for the detection is the binary information included at positions of the image for which the information representative of the variations of local amplitudes in different directions corresponds to predetermined information representative of variations of local amplitudes.
- weights are assigned to at least part of the binary information, the weights being assigned according to the information representative of the amplitude variations at the positions corresponding to the positions of the binary information.
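- a minimal sketch of such a weighted detection score (the agreement-ratio rule is an illustrative assumption; the patent only states that the weights follow the information representative of the local amplitude variations):

```python
def weighted_signature_score(extracted_bits, signature_bits, weights):
    """Weighted agreement between the bits extracted from the image and the
    expected signature bits; bits at positions whose local texture is judged
    more reliable receive a larger weight."""
    total = sum(weights)
    if total == 0:
        return 0.0
    # sum the weights of the positions where the extracted bit matches
    agree = sum(w for b, s, w in zip(extracted_bits, signature_bits, weights)
                if b == s)
    return agree / total  # 1.0 = perfect match, about 0.5 = random image
```

a decision would then compare this score against a threshold chosen for an acceptable false-positive rate; the threshold value itself is not specified at this point of the text.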
- the invention also relates to a computer program stored on an information medium, said program comprising instructions for implementing the watermarking and/or signature-detection methods described above when it is loaded into and executed by a computer system.
- the invention also relates to an information medium comprising at least one image watermarked according to the watermarking method of the present invention.
- Fig. 2 represents an algorithm for determining, for a plurality of positions of the image, information representative of variations in local amplitudes in different directions
- Fig. 3 represents the algorithm for watermarking a mark in a color image
- Fig. 4 represents a detailed algorithm of the step of inserting binary information at a position of the lowest resolution level of the image to be watermarked
- Fig. 5 is an example of a mark created by redundancy of a signature generated pseudo-randomly from a key
- Fig. 6 shows a table comprising the different marking strengths used by the present invention as a function of the information representative of variations of local amplitudes in different directions
- Fig. 7 shows the algorithm for detecting a signature inserted in an image by the watermarking algorithm of the present invention.
- Fig. 1 represents a block diagram of a device for watermarking a mark in a color image and/or for detecting a signature inserted in a color image.
- the device 10 for watermarking a mark in a color image and/or for detecting a signature inserted in a color image according to the present invention is, for example, a computer.
- the watermarking device 10 can also be integrated into a mobile telephone handset comprising image-capture means.
- the watermarking device 10 comprises a communication bus 101 to which are connected a processor 100, a read-only memory (ROM) 102, a random access memory (RAM) 103, a screen 104 and a keyboard 105 serving as a man-machine interface, a hard disk 108, a reader/writer 109 for information on a removable medium such as a compact disc, and an interface 106 making it possible to transfer images watermarked according to the present invention to a telecommunications network 150 and/or to receive color images in order to detect whether a mark has been included in them.
- the read-only memory 102 stores the programs implementing the invention.
- the programs according to the present invention are transferred into the random access memory 103 which then contains the executable code of the algorithms which will be described later with reference to FIGS. 2, 3, 4 and 7.
- the ROM 102 also includes a table which will be described later with reference to FIG. 6.
- the processor 100 executes the instructions stored in the RAM 103.
- the watermarking device 10 comprises a screen 104 and a keyboard 105 making it possible to select images to be watermarked according to the present invention, or to modify, for example, the number of classes used by the present invention to determine the variations of local amplitudes in different directions around a position of the image, or to modify the values of the marking strengths included in the table of FIG. 6.
- Fig. 2 represents an algorithm for determining, for a plurality of positions of the image, information representative of variations in local amplitudes in different directions.
- the processor 100 of the device 10 for watermarking a mark and/or for detecting a signature reads, from the ROM 102, the instructions of the program corresponding to steps E200 to E214 of FIG. 2.
- the image to be watermarked is a color image made up of pixels and has several components. These components are, for example, chromatic components such as the Red, Green and Blue components. Other color components, such as yellow, cyan and magenta, can also be used.
- the image can also be represented in the form of a luminance component and two chromatic components.
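- the split into a luminance component and two chromatic components can be sketched with the standard ITU-R BT.601 conversion (one common choice; the patent does not impose a particular color space):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion of 8-bit RGB values into one luminance
    component (Y) and two chromatic components (Cb, Cr)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0  # blue-difference chroma
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0   # red-difference chroma
    return y, cb, cr
```

with this representation, the texture analysis described below could be carried out on the luminance component alone, as the variant at the end of the Fig. 2 algorithm suggests.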
- the processor 100 takes a first component of the color image to be processed and performs, in step E201, a transformation of this component of the image. This transformation is a wavelet decomposition (Discrete Wavelet Transform).
- the wavelet decomposition is preferably a Haar wavelet decomposition.
- the Haar wavelet is particularly well suited for the determination of information representative of variations of local amplitudes of an image because it does not show edge effects on the image while being simple in its implementation.
- the decomposition of an image, more precisely of a component of the image, is carried out by applying to the image two digital filters, respectively low-pass and high-pass, which filter the signal in a first direction, for example horizontal. After filtering, the two filtered images are decimated by two. Each decimated image is then applied respectively to a low-pass and a high-pass filter, which filter it in a second direction, for example vertical. Each resulting filtered signal is then decimated by two, forming four sub-bands of the first resolution level.
- a sub-band includes the low frequency coefficients along the two directions of the image signal.
- This sub-band is conventionally called the low sub-band of the first level of decomposition.
- the other three so-called detail sub-bands include the high frequency wavelet coefficients in the respective horizontal, vertical and diagonal directions.
- Each of these detail sub-bands, constructed from the original image, contains information corresponding to a respectively vertical, horizontal and diagonal orientation of the image, in a given frequency band.
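- as an illustration, one level of this decomposition can be sketched with the Haar filters (pairwise sums and differences of samples; the normalisation by 2 and the sub-band naming are illustrative choices):

```python
def haar_level(img):
    """One level of 2-D Haar wavelet decomposition.

    Returns (LL, LH, HL, HH): the low sub-band and the three detail
    sub-bands. Assumes a list-of-lists image with even height and width.
    """
    w = len(img[0])
    # low-pass / high-pass filtering along rows, decimated by two
    lo = [[(row[2 * j] + row[2 * j + 1]) / 2.0 for j in range(w // 2)] for row in img]
    hi = [[(row[2 * j] - row[2 * j + 1]) / 2.0 for j in range(w // 2)] for row in img]

    def filter_cols(mat, sign):
        # same filtering along columns, decimated by two
        return [[(mat[2 * i][j] + sign * mat[2 * i + 1][j]) / 2.0
                 for j in range(len(mat[0]))]
                for i in range(len(mat) // 2)]

    LL = filter_cols(lo, +1)  # low frequencies in both directions
    LH = filter_cols(lo, -1)  # detail: low along rows, high along columns
    HL = filter_cols(hi, +1)  # detail: high along rows, low along columns
    HH = filter_cols(hi, -1)  # detail in the diagonal direction
    return LL, LH, HL, HH
```

applying the same function recursively to the returned low sub-band LL yields the second, third and fourth decomposition levels described below.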
- the decomposition performed is such that a sub-band of a given resolution is decomposed into four sub-bands of lower resolution, and therefore has four times more coefficients than each of the sub-bands of the immediately lower resolution.
- to four coefficients around a position (2x, 2y) of a sub-band of a given resolution corresponds one coefficient, at position (x, y), in each sub-band of the immediately lower resolution.
- to a coefficient located at a given position (x, y) in a sub-band correspond the coefficients at the same position (x, y) in the other sub-bands of the same decomposition level.
- to a coefficient located at a given position (x, y) correspond four coefficients around position (2x, 2y) in the low-frequency sub-band of the immediately higher resolution level.
- to a position in a sub-band corresponds a position (a pixel) in the original image and/or a position in a sub-band of a different decomposition level.
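- these correspondences between levels reduce to simple index arithmetic, which can be sketched as:

```python
def children(x, y):
    """The four positions at the next finer (higher-resolution) level
    that correspond to position (x, y) at the current level."""
    return [(2 * x, 2 * y), (2 * x + 1, 2 * y),
            (2 * x, 2 * y + 1), (2 * x + 1, 2 * y + 1)]

def parent(x, y):
    """The single position at the next coarser level that corresponds
    to position (x, y) at the current level."""
    return (x // 2, y // 2)
```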
- the low sub-band of the first decomposition level is decomposed again, according to the same decomposition as described previously, to form a low sub-band of the second decomposition level and three detail sub-bands comprising high-frequency wavelet coefficients in the respective horizontal, vertical and diagonal directions.
- at each level, the low sub-band of the lowest decomposition level reached is used to form a new low sub-band of lower decomposition level and three detail sub-bands comprising high-frequency wavelet coefficients in the respective horizontal, vertical and diagonal directions.
- the decomposition is carried out on four levels of decomposition to obtain four sub-bands of fourth level of decomposition.
- for each detail sub-band of the fourth decomposition level, the processor 100 stores the high-frequency wavelet coefficients in the respective horizontal, vertical and diagonal directions.
- the processor 100 considers, in step E202, a first of the detail sub-bands obtained previously.
- in step E203, the processor 100 squares the amplitude of each coefficient of the sub-band considered. This introduces a notion of energy for the coefficients of the sub-band: they can then be considered as indicators of the evolution of energy in the image.
- in step E204, the processor 100 determines a median value for each coefficient of the sub-band considered, whose amplitude has previously been squared.
- this median value is, for example, calculated on a support of size three by three; that is to say, the median value is determined from the amplitudes of the coefficients close to the coefficient for which the determination is made. This median calculation has the effect of highlighting the areas of the detail sub-band that show large variations in a direction, that is to say textured areas.
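- these two steps can be sketched as follows (clamping the 3×3 support at the borders of the sub-band is an assumption; the patent does not specify border handling):

```python
from statistics import median

def local_energy_median(band):
    """Square each coefficient (energy), then take the median of the
    squared amplitudes over a 3x3 support around each position."""
    h, w = len(band), len(band[0])
    sq = [[c * c for c in row] for row in band]  # energy of each coefficient
    out = [[0.0] * w for _ in range(h)]
    for yy in range(h):
        for xx in range(w):
            # 3x3 neighbourhood, clamped at the sub-band borders
            neigh = [sq[j][i]
                     for j in range(max(0, yy - 1), min(h, yy + 2))
                     for i in range(max(0, xx - 1), min(w, xx + 2))]
            out[yy][xx] = median(neigh)
    return out
```

note that an isolated energy peak (a contour break rather than a texture) is suppressed by the median, whereas a uniformly textured neighbourhood keeps its energy.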
- the processor 100 goes to the next step E205 and checks whether all the detail sub-bands have been processed. If not, the processor 100 goes to step E206 and repeats the loop made up of steps E203 to E206 until the three detail sub-bands are processed. When the three detail sub-bands comprising the high-frequency wavelet coefficients in the respective horizontal, vertical and diagonal directions have been processed, the processor 100 goes to the next step E207.
- in step E207, the processor 100 forms, for each position (x, y) of the detail sub-bands of the fourth decomposition level, a vector of dimension three representative of the intensity of the local fluctuations.
- the vector formed has as coordinates the median value determined previously at position (x, y) for the detail sub-band comprising high-frequency wavelet coefficients in the horizontal direction, the median value determined previously at position (x, y) for the detail sub-band comprising high-frequency wavelet coefficients in the vertical direction, and the median value determined previously at position (x, y) for the detail sub-band comprising high-frequency wavelet coefficients in the diagonal direction.
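- forming these vectors is then a simple gathering, per position, of the three median maps (a sketch; the argument names are illustrative):

```python
def texture_vectors(med_h, med_v, med_d):
    """For each position (x, y), a vector (H, V, D) built from the median
    energies of the horizontal, vertical and diagonal detail sub-bands."""
    h, w = len(med_h), len(med_h[0])
    return [[(med_h[y][x], med_v[y][x], med_d[y][x]) for x in range(w)]
            for y in range(h)]
```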
- the processor 100 defines in step E208 the classes according to which the vectors formed in the previous step will be classified. For example, five classes are defined.
- the class denoted class 1 groups the vectors representative of areas comprising no variations, that is to say the vectors whose coordinates all have small values.
- class 2 groups the vectors representative of areas of the image comprising predominantly horizontal variations, that is to say the vectors whose coordinate given by the median value calculated in the detail sub-band comprising high-frequency wavelet coefficients in the horizontal direction has a high value, while the median values calculated in the other detail sub-bands have low values.
- class 3 groups the vectors representative of areas of the image comprising predominantly vertical variations, that is to say the vectors whose coordinate given by the median value calculated in the detail sub-band comprising high-frequency wavelet coefficients in the vertical direction has a high value, while the median values calculated in the other detail sub-bands have low values.
- class 4 groups the vectors representative of areas of the image comprising predominantly diagonal variations, that is to say the vectors whose coordinate given by the median value calculated in the detail sub-band comprising high-frequency wavelet coefficients in the diagonal direction has a high value, while the median values calculated in the other detail sub-bands have low values.
- class 5 groups the vectors representative of very strongly textured areas of the image with no preferred direction, that is to say the vectors whose coordinates all have significant values.
- the number of classes can be reduced.
- classes 2, 3 and 4 may alternatively be grouped into a single class.
- the number of classes can also be increased.
- Other classes grouping the vectors representative of areas of the image comprising variations in two directions can also be formed.
- the use of five classes is preferred according to the present invention and allows precise detection of the textures included in the image to be processed. Even textures whose direction is mainly diagonal can be detected, although, conventionally, the energy associated with these diagonals is lower than that associated with the other directions.
- the processor 100 will thus group the vectors whose coordinates are similar in the five predetermined classes.
- the method used is, for example, the dynamic clouds method (a k-means-type clustering). Of course, other methods can also be used.
- the processor 100 determines in step E209 an initial center for each of the classes 1 to 5.
- the processor 100 takes the zero vector, denoted g0, as the initial center.
- the processor 100 forms an initial center g1 whose coordinates are (MaximumH, 0, 0), where MaximumH is the maximum median value for the detail sub-band comprising high-frequency wavelet coefficients in the horizontal direction.
- the processor 100 forms an initial center g2 whose coordinates are (0, MaximumV, 0) where MaximumV is the maximum median value for the detail sub-band comprising high frequency wavelet coefficients in the vertical direction.
- the processor 100 forms an initial center g3 whose coordinates are (0, 0, MaximumD) where MaximumD is the maximum median value for the detail sub-band comprising high frequency wavelet coefficients in the diagonal direction.
- the processor 100 forms an initial center g4 whose coordinates are (MaximumH, MaximumV, MaximumD).
- once the initial centers are defined, the processor 100 goes to the next step E210. In this step, the processor 100 constructs a partition of the image into zones by assigning each vector formed in step E207 to the class whose initial center is closest to it.
- in step E211, the processor 100 determines a new center representative of each zone from the vectors which have been assigned to that zone. This operation carried out, the processor 100 determines whether the quality of the partition has improved. If so, the processor 100 returns to step E210 and repeats steps E210 to E212 as long as the quality of the partition improves. When the quality of the partition no longer improves, the processor 100 goes to step E213. In step E213, the processor 100 checks whether all the components of the image have been processed. If not, the processor 100 goes to step E214, takes another component and returns to step E201 to process the new component of the color image. When all the components have been processed, a class is associated with each position (x, y) of each detail sub-band of each component.
- this class corresponds to a local determination of the texture of the image at a position (x, y). It should be noted here that the segmentation performed by the algorithm of FIG. 2 aims to delimit the areas of the image having textures, regardless of the main direction of the textures included in the image. It should also be noted that, as a variant, only one component may be processed. For example, when the image is formed of a luminance component and of chromatic components, the present algorithm is carried out only on the luminance component.
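- the grouping of steps E209 to E212 can be sketched as a dynamic-clouds (k-means-type) loop seeded with the initial centers g0 to g4; the stopping test used here (the centers no longer move) is an illustrative stand-in for the quality criterion of step E212:

```python
def classify_texture_vectors(vectors, max_iter=20):
    """Group 3-D texture vectors into the five classes of step E208.

    vectors: list of (H, V, D) median-energy triples, one per position.
    Returns one class index per vector: 0 = flat, 1 = horizontal,
    2 = vertical, 3 = diagonal, 4 = strongly textured.
    """
    max_h = max(v[0] for v in vectors)
    max_v = max(v[1] for v in vectors)
    max_d = max(v[2] for v in vectors)
    # initial centers g0..g4 (step E209 and following)
    centers = [(0.0, 0.0, 0.0), (max_h, 0.0, 0.0), (0.0, max_v, 0.0),
               (0.0, 0.0, max_d), (max_h, max_v, max_d)]

    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    labels = [0] * len(vectors)
    for _ in range(max_iter):
        # step E210: assign each vector to the class of the nearest center
        labels = [min(range(5), key=lambda k: dist2(v, centers[k]))
                  for v in vectors]
        # step E211: recompute each center from the vectors assigned to it
        new_centers = []
        for k in range(5):
            members = [v for v, lab in zip(vectors, labels) if lab == k]
            if members:
                new_centers.append(tuple(sum(m[i] for m in members) / len(members)
                                         for i in range(3)))
            else:
                new_centers.append(centers[k])
        if new_centers == centers:  # partition no longer improves (step E212)
            break
        centers = new_centers
    return labels
```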
- the processor 100 of the watermarking device has thus determined, for each position of the detail sub-bands of the lowest resolution level, information representative of the variations of the local amplitudes in different directions of the coefficients of the detail sub-bands, from the amplitudes of the coefficients at this position and the amplitudes of the coefficients close to this position.
- the processor 100 has also determined, from the lowest resolution level, different texture zones. From these determined texture zones, it is then possible to determine, for the detail sub-bands of higher resolution levels, for the original image, and for each position of the image, information representative of the variations in local amplitudes in different directions.
- the present algorithm defines for each position of an image or of a decomposition sub-band the variations of the local amplitudes of the image or of a sub-band.
- the present algorithm also makes it possible to assign a class to each position of an image or of a decomposition sub-band.
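The class assignment described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: hypothetical three-dimensional texture vectors (one per position, built from the horizontal, vertical and diagonal detail-coefficient amplitudes) are grouped into five classes by a plain k-means loop mirroring the reassignment of steps E210 to E212. All names, the iteration count and the random data are assumptions.

```python
import numpy as np

def kmeans(vectors, k=5, iters=20, seed=0):
    """Plain k-means, mirroring the reassignment loop of steps E210-E212:
    assign each vector to its nearest centre, then recompute centres."""
    rng = np.random.default_rng(seed)
    centres = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # step E210: assignment
        for j in range(k):                     # step E211: new centres
            if np.any(labels == j):
                centres[j] = vectors[labels == j].mean(axis=0)
    return labels

# hypothetical texture vectors: local amplitude variations in the
# horizontal, vertical and diagonal detail sub-bands, one per position
vecs = np.abs(np.random.default_rng(1).normal(size=(200, 3)))
classes = kmeans(vecs, k=5)
```

Each entry of `classes` then plays the role of the class associated with one position (x, y) of the detail sub-bands.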
- Fig. 3 represents the algorithm for watermarking a mark in a color image.
- The processor 100 of the device 10 for watermarking a mark in a color image reads, from the ROM 102, the instructions of the program corresponding to steps E300 to E313 of FIG. 3 and loads them into RAM 103 to execute them.
- The image to be watermarked is a color image made up of pixels and includes several chromatic components, for example the Red, Green and Blue components.
- the image can also be represented in the form of a luminance component and two chromatic components.
- the processor 100 takes a first component of the color image to be processed and performs, in step E301, a transformation of this component of the image.
- This transformation is a decomposition into wavelets.
- The wavelet decomposition is preferably a Daubechies wavelet decomposition.
- Daubechies wavelets use filters with a larger number of taps than the filters used for a Haar wavelet decomposition. They thus give better results for the decomposition of images into sub-bands.
- the wavelet decomposition is performed on four decomposition levels.
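As a rough sketch of such a four-level decomposition, the following implements a one-level separable 2-D DWT with the 4-tap Daubechies (db2) analysis filters and periodic boundary extension, then iterates it on the approximation band. This is an illustration only: a real implementation would more likely rely on a library such as PyWavelets, and the filter normalization and orientation labels used here are one common convention, not taken from the patent.

```python
import numpy as np

# Daubechies 4-tap (db2) analysis filters, one common normalization
_S3 = np.sqrt(3.0)
LO = np.array([1 + _S3, 3 + _S3, 3 - _S3, 1 - _S3]) / (4 * np.sqrt(2.0))
HI = LO[::-1] * np.array([1.0, -1.0, 1.0, -1.0])  # quadrature mirror filter

def analyze_1d(x, f):
    """Periodic filtering followed by dyadic downsampling."""
    y = np.zeros(len(x))
    for k in range(len(f)):
        y += f[k] * np.roll(x, -k)
    return y[::2]

def dwt2(img):
    """One separable 2-D DWT level: returns LL and the (H, V, D) details."""
    rows_lo = np.apply_along_axis(analyze_1d, 1, img, LO)
    rows_hi = np.apply_along_axis(analyze_1d, 1, img, HI)
    ll = np.apply_along_axis(analyze_1d, 0, rows_lo, LO)  # approximation
    lh = np.apply_along_axis(analyze_1d, 0, rows_lo, HI)  # "horizontal" details
    hl = np.apply_along_axis(analyze_1d, 0, rows_hi, LO)  # "vertical" details
    hh = np.apply_along_axis(analyze_1d, 0, rows_hi, HI)  # "diagonal" details
    return ll, (lh, hl, hh)

def wavedec2(img, levels=4):
    """Iterate dwt2 on the approximation band (four levels in the patent)."""
    details = []
    approx = np.asarray(img, dtype=float)
    for _ in range(levels):
        approx, d = dwt2(approx)
        details.append(d)
    return approx, details

coarse, details = wavedec2(np.ones((64, 64)), levels=4)
```

On a constant image the detail bands come out (numerically) zero and only the approximation band carries energy, which is a quick sanity check of the filter pair.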
- the processor 100 goes to the next step E302 and checks whether all the components of the color image have been decomposed into wavelets. If not, the processor 100 goes to step E303, considers the next component and returns to step E301 to decompose a new component of the color image in the same manner as that previously described. The processor 100 repeats the loop made up of steps E301 to E302 until all the components of the image have been broken down.
- the processor 100 goes to step E304.
- the processor 100 takes, for each component of the image to be processed, the first corresponding coefficient of each sub-band of details of the last level of decomposition. These first coefficients correspond to the first position (x, y) processed.
- In step E305, the processor 100 forms, for the position (x, y) of the detail sub-bands of the last level of decomposition, three-dimensional vectors whose coordinates are the values of the high-frequency wavelet coefficients of the detail sub-bands for each component of the image.
- A vector is determined for each component of the color image to be processed. For example, when the color image is in the form of three components Red, Green and Blue, the vectors are of the form:
- Vi(x, y) = (CoeffH4i(x, y), CoeffV4i(x, y), CoeffD4i(x, y)), where:
- i denotes the Red, Green or Blue component,
- CoeffH4i(x, y) is the coefficient of the fourth detail sub-band in the horizontal direction,
- CoeffV4i(x, y) is the coefficient of the fourth detail sub-band in the vertical direction,
- CoeffD4i(x, y) is the coefficient of the fourth detail sub-band in the diagonal direction.
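A minimal sketch of this vector formation follows; the dictionary layout, component labels and random sub-band data are assumptions for illustration, not the patent's data structures.

```python
import numpy as np

def form_vectors(detail_bands, x, y):
    """One 3-D vector per colour component at position (x, y); its
    coordinates are the H, V and D detail coefficients of the last level."""
    return {comp: np.array([h[y, x], v[y, x], d[y, x]])
            for comp, (h, v, d) in detail_bands.items()}

# hypothetical 8 x 8 fourth-level detail sub-bands for three components
rng = np.random.default_rng(0)
bands = {c: tuple(rng.normal(size=(8, 8)) for _ in range(3))
         for c in ("R", "G", "B")}
vectors = form_vectors(bands, 2, 3)
```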
- Once the vectors are formed, the processor 100 goes to step E306 and calculates, for the current position (x, y), the Euclidean distance between each pair of the three vectors: DRB(x, y) between VR(x, y) and VB(x, y), DRG(x, y) between VR(x, y) and VG(x, y), and DBG(x, y) between VB(x, y) and VG(x, y).
- In step E307, the processor 100 determines, for the current position (x, y), the greatest of the distances DRB(x, y), DRG(x, y) and DBG(x, y) previously calculated in step E306.
- In step E309, the processor 100 determines, for the current position (x, y), the vectors serving as references for the marking as well as the vector, among the three vectors VR(x, y), VG(x, y) and VB(x, y), which will be used for the marking or watermarking. If DRB(x, y) > DRG(x, y) and DRB(x, y) > DBG(x, y), the vector VG(x, y) is selected as the vector VM comprising the mark and the vectors VR(x, y) and VB(x, y) are taken as reference vectors, denoted Vref1 and Vref2 respectively.
- If DRG(x, y) > DRB(x, y) and DRG(x, y) > DBG(x, y), the vector VB(x, y) is selected as the vector VM comprising the mark and the vectors VR(x, y) and VG(x, y) are taken as reference vectors, denoted Vref1 and Vref2 respectively.
- If DBG(x, y) > DRB(x, y) and DBG(x, y) > DRG(x, y), the vector VR(x, y) is selected as the vector VM comprising the mark and the vectors VB(x, y) and VG(x, y) are taken as reference vectors, denoted Vref1 and Vref2 respectively.
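The selection rule of steps E306 to E309 (the two most distant vectors become the references, the remaining one carries the mark) can be sketched as follows; the tie-breaking order is an assumption, since the patent does not specify it.

```python
import numpy as np

def select_marking_vector(vr, vg, vb):
    """The two most distant vectors become Vref1/Vref2 and the remaining
    one carries the mark (steps E306-E309); ties go to the first test."""
    d_rb = np.linalg.norm(vr - vb)
    d_rg = np.linalg.norm(vr - vg)
    d_bg = np.linalg.norm(vb - vg)
    if d_rb >= d_rg and d_rb >= d_bg:
        return "G", vg, (vr, vb)   # R and B farthest apart: mark G
    if d_rg >= d_rb and d_rg >= d_bg:
        return "B", vb, (vr, vg)   # R and G farthest apart: mark B
    return "R", vr, (vb, vg)       # B and G farthest apart: mark R

name, vm, (vref1, vref2) = select_marking_vector(
    np.array([0.0, 0.0, 0.0]),    # VR
    np.array([1.0, 0.0, 0.0]),    # VG
    np.array([10.0, 0.0, 0.0]))   # VB
```

With these toy vectors, R and B are farthest apart, so the green vector is the one selected to carry the mark.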
- The processor 100 then goes to step E311 and checks whether the current position is the last of the positions to be processed. If not, the processor 100 takes in step E312, for each component of the image to be processed, the coefficient at the next position of each detail sub-band of the last level of decomposition. As long as all the coefficients of the detail sub-bands have not been processed, the processor 100 repeats the loop made up of steps E305 to E312. When all the coefficients of the detail sub-bands have been processed, the processor 100 goes to step E313 and reconstructs the image by taking into account the coefficients modified by the insertion of the mark. This operation carried out, the algorithm stops and resumes at step E300 when a new mark must be embedded in a new image.
- Fig. 4 represents a detailed algorithm of step E310 of FIG. 3, the insertion of binary information at a position of the lowest resolution level of the image to be watermarked.
- the algorithm of FIG. 4 describes the modification of the coordinates of the vector VM determined in step E309 of FIG. 3 as comprising binary information for a position of the detail sub-bands.
- the processor 100 determines the binary information to be inserted at the current position during processing. Each position of the detail sub-bands corresponds to binary information to be inserted.
- a pseudo-random signature S is generated by means of a key. This signature S denoted 50 in FIG. 5 consists of a series of binary information of size N * N.
- This signature can also be representative of the name of the author, the name of the owner of the image, the content of the image or any type of information.
- Binary information is shown in Fig. 5 with black or white squares.
- the black squares of the signatures represent binary information at binary value one while the white squares represent binary information at binary value zero.
- The signature S contains a limited number of bits relative to the number of positions in the detail sub-bands.
- The signature S is duplicated so that each position of the lowest-resolution sub-bands, i.e. each vector VM, is associated with binary information to be inserted. Indeed, the more times the signature is redundantly inserted in the detail sub-bands, the more robust the marking.
- The duplication can be carried out bit by bit, like the redundancy noted 51 of FIG. 5.
- The number of repetitions of the signature 50 is determined by the ratio between the size of the lowest-level decomposition sub-band and the size of the signature.
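Assuming the sub-band size is an exact multiple of the signature size, this duplication is a simple tiling; the toy sizes below are illustrative only.

```python
import numpy as np

def duplicate_signature(signature, subband_shape):
    """Tile the N x N signature over the detail sub-band so that every
    position (x, y) receives one bit to embed."""
    reps_y = subband_shape[0] // signature.shape[0]
    reps_x = subband_shape[1] // signature.shape[1]
    return np.tile(signature, (reps_y, reps_x))

sig = np.array([[1, 0], [0, 1]])          # toy 2 x 2 signature
bits = duplicate_signature(sig, (8, 8))   # 16 copies over an 8 x 8 band
```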
- The processor therefore determines at this stage the binary value, zero or one, which must be inserted at this position.
- In step E401, the processor 100 determines, among the two reference vectors Vref1 and Vref2, the marking reference vector, denoted Vrefm, used for the modification of the vector VM determined in step E309 of FIG. 3 as comprising the mark for a position of the detail sub-bands.
- The marking according to the present invention is carried out by modifying the vector VM so as to bring it closer to the reference vector Vref1 or Vref2, according to the value of the binary information to be inserted and according to certain predefined conventions. More specifically, the vector VM is modified in such a way that its distance from the marking reference vector Vrefm is less than its distance from the other reference vector. For example, the following convention can be adopted: if VM is the vector VR(x, y) and the value of the binary information to be inserted is zero, the vector VM must be brought closer to the vector VG(x, y). The vector Vrefm is then the vector VG(x, y).
- If VM is the vector VR(x, y) and the value of the binary information to be inserted is one, the vector VM must be brought closer to the vector VB(x, y).
- The vector Vrefm is then the vector VB(x, y). If VM is the vector VG(x, y) and the value of the binary information to be inserted is zero, the vector VM must be brought closer to the vector VR(x, y). The vector Vrefm is then the vector VR(x, y). If VM is the vector VG(x, y) and the value of the binary information to be inserted is one, the vector VM must be brought closer to the vector VB(x, y).
- The vector Vrefm is then the vector VB(x, y). If VM is the vector VB(x, y) and the value of the binary information to be inserted is zero, the vector VM must be brought closer to the vector VR(x, y). The vector Vrefm is then the vector VR(x, y). If VM is the vector VB(x, y) and the value of the binary information to be inserted is one, the vector VM must be brought closer to the vector VG(x, y). The vector Vrefm is then the vector VG(x, y). Once the marking reference vector has been determined, the processor 100 goes to the next step E402. At this stage, the processor 100 determines the marking force F of the binary information to be inserted.
- The processor 100 obtains the class to which the vector determined in step E208 belongs for the current position. As a function of this class and of the vector VM, the processor 100 determines, for example from a table stored in the ROM 102 of the marking device 10, the marking strength of the binary information. This table conforms, for example, to the table shown in FIG. 6.
- the table in FIG. 6 comprises five lines denoted 61 to 65.
- Line 61 comprises, for the various components of the image, the marking forces associated with class 1 representative of zones considered to be uniform.
- Line 62 includes, for the various components of the image, the marking forces associated with class 2 representative of textured areas in a predominantly horizontal direction.
- Line 63 includes, for the various components of the image, the values representative of the marking forces associated with class 3 representative of textured zones in a predominantly vertical direction.
- Line 64 includes, for the various components of the image, the values representative of the marking forces associated with class 4 representative of textured areas in a predominantly diagonal direction.
- Line 65 includes, for the various components of the image, the values representative of the marking forces associated with class 5 representative of highly textured areas.
- the table in FIG. 6 has as many columns as there are components of the color image. According to our example, the table has three columns denoted 66 to 68. Column 66 corresponds according to our example to the red component of the image to be processed and includes for each of the classes of the algorithm of FIG. 2 a value representative of the marking force for the red component.
- Column 67 corresponds according to our example to the green component of the image to be processed and includes for each of the classes of the algorithm of FIG. 2 a value representative of the marking force for the green component.
- Column 68 corresponds according to our example to the blue component of the image to be processed and includes for each of the classes of the algorithm of FIG. 2 a value representative of the marking force for the blue component.
- the processor 100 determines the component of the image to be processed. This component is the component of the vector which has been determined as a vector VM carrying the mark. For example, if the vector V B (x, y) was determined in step E309 of the algorithm of FIG. 3 as a vector VM comprising the mark, the component determined is the blue component.
- The processor 100 determines, for the determined component, the class to which the position being processed belongs. This class was previously determined according to the algorithm of FIG. 2. Depending on the determined component and the class to which the position being processed belongs, the processor 100 thus determines the value of the marking force F to be applied to this position. It should be noted that, as a variant, the marking forces can be identical for each of the components. Alternatively, the marking strength can also be determined by establishing the class to which the position being processed belongs for the other components. The processor 100, according to this variant, checks whether the classes defined for each of the components are consistent. If, for example, at this position the vectors formed in step E207 are all considered to belong to the same class, the value of the marking force is increased.
- The processor 100 determines whether the distance between the vector VM comprising the mark and the marking reference vector Vrefm is greater than the distance between the vector VM and the reference vector (Vref1 or Vref2) which was not taken as the marking reference vector. If not, the processor 100 goes to step E405.
- the processor 100 goes to step E404.
- The processor 100 modifies the vector VM so that the distance between the vector VM comprising the mark and the marking reference vector Vrefm is less than the distance between the vector VM and the reference vector Vref1 or Vref2 not taken as the marking reference vector. This modification is carried out in such a way as to keep the change to the vector VM minimal. This operation carried out, the processor 100 goes to the next step E405.
- In this formula, VM is the vector determined in step E309 of the algorithm of FIG. 3, or the vector VM displaced in step E403; F is the marking force and Vrefm the marking reference vector.
- The marking force F takes values between zero and one.
- When the marking force is zero, that is to say when the position being processed is in an area considered to be uniform, the marked vector is equal to the vector VM: no mark is inserted for this position. Indeed, the insertion of marks in uniform areas of an image creates visually discernible disturbances.
- the algorithm according to the invention therefore does not insert any mark at the positions corresponding to the uniform zones.
- When the marking force F is close to unity, the vector VM comprising the mark is almost entirely replaced by the marking reference vector Vrefm.
- This marking is particularly robust and resists any further processing such as compression of the image.
- However, this marking creates discernible disturbances in the image.
- When the marking force F is equal to one half, the marked vector Vwm is equal to the average of the two vectors VM and Vrefm. This provides a good compromise between visibility and robustness of the mark for areas textured in a preferred direction.
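The text does not reproduce the marking formula itself, but the limit cases above (F = 0 leaves VM unchanged, F = 0.5 yields the average of VM and Vrefm, F close to 1 almost replaces VM by Vrefm) are consistent with a simple linear interpolation, sketched here as an assumption.

```python
import numpy as np

def mark_vector(vm, vrefm, force):
    """Assumed marking rule: linear interpolation of VM toward Vrefm.
    F = 0 leaves VM unchanged, F = 1 replaces it by Vrefm, and
    F = 0.5 gives the average of the two vectors, matching the text."""
    return (1.0 - force) * vm + force * vrefm

vm = np.array([2.0, 0.0, 0.0])
vrefm = np.array([0.0, 4.0, 0.0])
```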
- The marking force applied to the green component is, whatever the class to which the position being processed belongs, less than one half. Indeed, the green component makes it possible to better protect a mark from possible attacks, but the human visual system is more sensitive to variations in this component. It should be noted here that the marking force applied to the red or blue component is more than one half when the class to which the position being processed belongs is a texture class. Indeed, the human visual system is less sensitive to variations in these components, so the mark can be inserted with a greater marking force.
- Once the vector Vwm is calculated, the processor 100 goes to step E406 and modifies the coefficients of the detail sub-bands of the modified component so that they correspond to the coordinates of the marked vector Vwm.
- the processor 100 returns to step E311 of the algorithm of FIG. 3.
- The watermarking of the image is carried out by inserting binary information at each position. The algorithm of FIG. 2 does not define zones which must include the mark; rather, it allows the determination, for each position of an image, of information representative of variations in local amplitudes in different directions, and the definition of a marking force to be applied to each position.
- Fig. 7 shows the algorithm for detecting a signature inserted in an image by the watermarking algorithm of the present invention.
- the processor 100 of the device 10 for detecting a signature reads, from the ROM 102, the program instructions corresponding to steps E700 to E716 of FIG. 7 and loads them into RAM 103 to execute them.
- In step E700, the processor 100 determines, for each position of the image in which a mark has been embedded, information representative of variations in local amplitudes in different directions.
- This step corresponds to the algorithm of FIG. 2 previously described. It will not be further explained.
- Step E701 as well as steps E702, E703 and E704 are identical to steps E300, E301, E302 and E303 respectively. They will not be described further.
- the wavelet decompositions having been carried out, the processor 100 goes to the next step E705.
- the processor 100 takes, for each component of the image to be processed, the first corresponding coefficient of each sub-band of details of the last level of decomposition.
- In step E706, the processor 100 forms, for the position (x, y) of the detail sub-bands of the last level of decomposition, a vector for each component of the image to be processed. These three-dimensional vectors have as coordinates the values of the high-frequency wavelet coefficients of the detail sub-bands for the respective components of the image. This step is identical to step E305 of FIG. 3. It will not be further explained.
- the vectors formed, the processor 100 goes to step E707 and calculates for the current position (x, y) the Euclidean distance between each of the three vectors determined in step E706 taken two by two. This step is identical to step E306 of FIG. 3. It will not be further explained.
- The processor 100 then goes to step E708 and determines, for the current position (x, y), the greatest of the distances previously calculated in step E707.
- In step E709, the processor 100 determines, for the current position (x, y), the vectors Vref1 and Vref2 serving as references for the marking of the vector to be marked. These vectors Vref1 and Vref2 are the two vectors separated by the greatest distance determined in step E708.
- This operation carried out, the processor 100 goes to the next step E710 and determines the vector VM which has been used for watermarking as well as the marking reference vector denoted Vrefm used for the modification of the vector VM.
- the vector VM is the vector that was not used as a reference vector in the previous step.
- The marking reference vector Vrefm is determined by choosing, from the reference vectors Vref1 and Vref2, the vector which is the closest to VM. This operation performed, the processor 100 determines in step E711 the class determined for the current position in step E700. For this, the processor 100 determines the class to which the vector formed in step E700, having the same component as the vector VM, belongs. Five classes are used in the present invention; these classes are identical to classes 1, 2, 3, 4 and 5 explained above with reference to FIG. 2. In the next step E712, the processor 100 determines the marking convention used when embedding the mark in the image.
- the marking is carried out by modifying the vector VM so as to bring it closer to the reference vector Vrefl or Vref2 according to the value of the binary information to be inserted.
- The processor 100 thus deduces, according to the convention used, whether the value of the binary information inserted on the vector VM is equal to one or to zero. This operation performed, the processor 100 stores in step E713 the value of the determined binary information as well as the class determined in the previous step E711. In the next step E714, the processor 100 checks whether each position of the detail sub-bands has been processed. If not, the processor 100 goes to step E715 and takes, for each component of the image to be processed, the next coefficient of each detail sub-band of the last level of decomposition.
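Bit extraction at the detector can be sketched as a nearest-reference test; the mapping of each bit value to a reference vector is whatever convention was fixed at embedding time, and the concrete vectors below are illustrative assumptions.

```python
import numpy as np

def extract_bit(vm, vref_for_zero, vref_for_one):
    """Steps E710-E712 sketch: the bit is read off from which reference
    vector VM is now closest to, under the embedding convention (the
    mapping of bit values to reference vectors is an assumption here)."""
    d0 = np.linalg.norm(vm - vref_for_zero)
    d1 = np.linalg.norm(vm - vref_for_one)
    return 0 if d0 <= d1 else 1

v_zero = np.array([0.0, 0.0, 0.0])
v_one = np.array([10.0, 0.0, 0.0])
```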
- The processor 100 then returns to step E706 previously described and repeats steps E706 to E714 until all the positions of the detail sub-bands have been processed.
- In step E716, the processor 100 obtains a signature from the binary information stored in the previous step E713.
- The signature S has been duplicated so that binary information is associated with each vector VM. Knowing the duplication rule, the processor 100 obtains at least part of the binary information inserted for each bit of the signature and determines, for each bit of the signature, its average value.
- Class 1 groups the non-textured image positions. When binary information is inserted into these non-textured areas, it is inserted with a low marking force. Given the manipulations that can be carried out on the image comprising the mark, the risk of determining erroneous binary information at these positions is significant. In order to guarantee proper detection of the signature, the binary information contained, or likely to be contained, at these positions is therefore not taken into account.
- the processor 100 weights each binary information by a confidence factor determined as a function of the class corresponding to the position at which the binary information was obtained.
- the binary information obtained at positions corresponding to class 5 has been inserted with a greater marking force than the other classes.
- the risk of determining erroneous binary information at positions corresponding to class 5 is low.
- more weight is given to the binary information obtained at positions corresponding to class 5 when calculating the average values than to other binary information.
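The class-dependent weighting can be sketched as a weighted average per signature bit; the confidence factors below (class 1 ignored, class 5 counted double) are illustrative assumptions, not values from the patent.

```python
import numpy as np

# assumed confidence factors: class 1 (uniform) ignored, class 5 doubled
CONFIDENCE = {1: 0.0, 2: 1.0, 3: 1.0, 4: 1.0, 5: 2.0}

def average_bit(copies, classes):
    """Weighted mean of the repeated copies of one signature bit, each copy
    weighted by the confidence factor of its texture class (step E716)."""
    w = np.array([CONFIDENCE[c] for c in classes], dtype=float)
    if w.sum() == 0.0:
        return 0.5  # only unreliable (class 1) copies: undecided
    return float(np.average(copies, weights=w))
```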
- The processor 100 then calculates the rate of resemblance between the original signature S in its possession and the signature S' obtained from the previously calculated mean values.
- The calculation of the likelihood rate cc(S, S') is, for example, of the form:
- The processor 100 determines whether or not the detected signature corresponds to the original signature. If the likelihood rate is greater than a predetermined threshold, for example equal to the numerical value 0.7, the detected signature corresponds to the original signature. At the end of this step, the processor 100 returns to step E700 and waits for an image to be processed.
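Since the patent's exact cc(S, S') expression is not reproduced in this text, the sketch below uses plain bit agreement as the resemblance rate, with the 0.7 threshold from the text; this is one plausible choice, not the patented formula.

```python
import numpy as np

def likeness_rate(s, s_prime):
    """Resemblance between original signature S and detected S' as the
    fraction of agreeing bits (a stand-in for the patent's cc(S, S'))."""
    return float(np.mean(np.asarray(s) == np.asarray(s_prime)))

def signature_present(s, s_prime, threshold=0.7):
    """Detection decision: likeness rate above the predetermined threshold."""
    return likeness_rate(s, s_prime) > threshold

s = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # toy signature
```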
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Editing Of Facsimile Originals (AREA)
- Image Processing (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0313171 | 2003-11-10 | ||
PCT/FR2004/002756 WO2005048189A2 (en) | 2003-11-10 | 2004-10-26 | Image-watermarking method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1683104A2 true EP1683104A2 (en) | 2006-07-26 |
Family
ID=34586259
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP04805313A Withdrawn EP1683104A2 (en) | 2003-11-10 | 2004-10-26 | Image-watermarking method and device |
Country Status (3)
Country | Link |
---|---|
US (1) | US7729505B2 (en) |
EP (1) | EP1683104A2 (en) |
WO (1) | WO2005048189A2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040070267A (en) * | 2001-12-21 | 2004-08-06 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Increasing integrity of watermarks using robust features |
FR2853748A1 (en) * | 2003-04-11 | 2004-10-15 | France Telecom | METHOD FOR TATOTING A VECTOR-APPROACHING COLOR IMAGE, METHOD FOR DETECTING A TATTOO MARK, DEVICES, IMAGE AND CORRESPONDING COMPUTER PROGRAMS |
FR3059446B1 (en) * | 2016-11-25 | 2019-07-05 | Institut Mines-Telecom / Telecom Bretagne | METHOD OF INSERTING DATA TO THE STREAM IN A TATUE DATA BASE AND ASSOCIATED DEVICE. |
AU2019433629B2 (en) | 2019-03-12 | 2021-11-04 | Citrix Systems, Inc. | Tracking image senders on client devices |
US11537690B2 (en) * | 2019-05-07 | 2022-12-27 | The Nielsen Company (Us), Llc | End-point media watermarking |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6590996B1 (en) * | 2000-02-14 | 2003-07-08 | Digimarc Corporation | Color adaptive watermarking |
US6332030B1 (en) * | 1998-01-15 | 2001-12-18 | The Regents Of The University Of California | Method for embedding and extracting digital data in images and video |
US6385329B1 (en) * | 2000-02-14 | 2002-05-07 | Digimarc Corporation | Wavelet domain watermarks |
US6993149B2 (en) * | 2001-09-25 | 2006-01-31 | Digimarc Corporation | Embedding digital watermarks in spot colors |
US6633654B2 (en) * | 2000-06-19 | 2003-10-14 | Digimarc Corporation | Perceptual modeling of media signals based on local contrast and directional edges |
US6940993B2 (en) * | 2000-12-13 | 2005-09-06 | Eastman Kodak Company | System and method for embedding a watermark signal that contains message data in a digital image |
US7072487B2 (en) * | 2001-01-26 | 2006-07-04 | Digimarc Corporation | Watermark detection using adaptive color projections |
US7376242B2 (en) * | 2001-03-22 | 2008-05-20 | Digimarc Corporation | Quantization-based data embedding in mapped data |
US20030068068A1 (en) * | 2001-09-28 | 2003-04-10 | Nam-Deuk Kim | Content based digital watermarking using wavelet based directionality measures |
FR2846828B1 (en) * | 2002-10-31 | 2005-03-11 | France Telecom | METHOD FOR TATOTING A VIDEO SIGNAL, SYSTEM AND DATA CARRIER FOR IMPLEMENTING SAID METHOD, METHOD FOR EXTRACTING THE TATTOO OF A VIDEO SIGNAL, SYSTEM FOR IMPLEMENTING SAID METHOD |
US7203669B2 (en) * | 2003-03-17 | 2007-04-10 | Intel Corporation | Detector tree of boosted classifiers for real-time object detection and tracking |
FR2853792A1 (en) * | 2003-04-11 | 2004-10-15 | France Telecom | Digital video sequence tattooing process, involves selecting optimal tattooed displacement vector, based on preset criteria, such that modified coordinates of displacement vector are coordinates of optimal vector |
FR2853748A1 (en) * | 2003-04-11 | 2004-10-15 | France Telecom | METHOD FOR TATOTING A VECTOR-APPROACHING COLOR IMAGE, METHOD FOR DETECTING A TATTOO MARK, DEVICES, IMAGE AND CORRESPONDING COMPUTER PROGRAMS |
-
2004
- 2004-10-26 US US10/578,337 patent/US7729505B2/en not_active Expired - Fee Related
- 2004-10-26 EP EP04805313A patent/EP1683104A2/en not_active Withdrawn
- 2004-10-26 WO PCT/FR2004/002756 patent/WO2005048189A2/en active Application Filing
Non-Patent Citations (1)
Title |
---|
See references of WO2005048189A2 * |
Also Published As
Publication number | Publication date |
---|---|
US20070140523A1 (en) | 2007-06-21 |
US7729505B2 (en) | 2010-06-01 |
WO2005048189A2 (en) | 2005-05-26 |
WO2005048189A3 (en) | 2007-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20100063978A1 (en) | Apparatus and method for inserting/extracting nonblind watermark using features of digital media data | |
Shen et al. | A robust associative watermarking technique based on vector quantization | |
US9639910B2 (en) | System for embedding data | |
EP1473944A2 (en) | Digital video watermarking method with adaptive selection of the watermarking area, watermarking detection method, device, corresponding computer readable storage medium and computer program product. | |
EP3832535A1 (en) | Method for detecting at least one visible element of interest in an input image by means of a convolutional neural network | |
CA3043090C (en) | Character recognition process | |
FR2905188A1 (en) | Input image e.g. palm imprint image, density converting method for e.g. image processing improving system, involves converting minimum local and maximum local values into minimum and maximum common values, and reconstructing input image | |
EP1416737B1 (en) | Method, system and data support for video watermarking, method and system for extracting this watermaking | |
EP1683104A2 (en) | Image-watermarking method and device | |
US20060117180A1 (en) | Method of extracting a watermark | |
EP1330110B1 (en) | Method and system for watermark decoding | |
Manaf et al. | Genetic audio steganography | |
Zamani et al. | A novel approach for audio watermarking | |
FR2853748A1 (en) | METHOD FOR TATOTING A VECTOR-APPROACHING COLOR IMAGE, METHOD FOR DETECTING A TATTOO MARK, DEVICES, IMAGE AND CORRESPONDING COMPUTER PROGRAMS | |
FR2775812A1 (en) | METHOD FOR DISSIMULATING BINARY INFORMATION IN A DIGITAL IMAGE | |
FR2769733A1 (en) | PROCESS FOR AUTOMATICALLY EXTRACTING PRINTED OR HANDWRITED INSCRIPTIONS ON A BACKGROUND, IN A MULTI-LEVEL DIGITAL IMAGE | |
FR2986890A1 (en) | METHOD FOR INSERTING A DIGITAL MARK IN AN IMAGE, AND CORRESPONDING METHOD FOR DETECTING A DIGITAL MARK IN AN IMAGE TO BE ANALYZED | |
WO2021123209A1 (en) | Method for segmenting an input image showing a document containing structured information | |
FR3118243A1 (en) | METHOD FOR EXTRACTING A SIGNATURE FROM A FINGERPRINT AND DEVICE IMPLEMENTING SUCH METHOD | |
WO2006037867A1 (en) | Method and device for determining reference points in an image | |
EP2763426B1 (en) | Method for recognizing video contents or pictures in real-time | |
TWI447669B (en) | System and method for removing watermarks from an image | |
FR2929431A1 (en) | METHOD AND DEVICE FOR CLASSIFYING SAMPLES REPRESENTATIVE OF AN IMAGE DIGITAL SIGNAL | |
JP2007525127A (en) | Digital watermark detection by correlation shape analysis | |
Aboalsamh et al. | Steganalysis of JPEG images: an improved approach for breaking the F5 algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20060418 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL HR LT LV MK |
|
DAX | Request for extension of the european patent (deleted) | ||
PUAK | Availability of information related to the publication of the international search report |
Free format text: ORIGINAL CODE: 0009015 |
|
17Q | First examination report despatched |
Effective date: 20080821 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: FRANCE TELECOM |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ORANGE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20150501 |