GB2390251A - Detecting and characterising attacks on a watermarked digital object - Google Patents

Publication number
GB2390251A
GB2390251A (application GB0306047A)
Authority
GB
United Kingdom
Prior art keywords
watermark
attack
digital
estimates
digitally watermarked
Prior art date
Legal status
Withdrawn
Application number
GB0306047A
Other versions
GB0306047D0 (en)
Inventor
Dominique Albert Winne
David Roger Bull
Cedric Nishan Canagarajah
Henry David Knowles
Current Assignee
University of Bristol
Original Assignee
University of Bristol
Priority date
Filing date
Publication date
Application filed by University of Bristol filed Critical University of Bristol
Publication of GB0306047D0
Priority to PCT/GB2003/002470 (published as WO2004003841A2)
Priority to AU2003244781A1
Publication of GB2390251A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G06T1/005 Robust watermarking, e.g. average attack or collusion attack resistant
    • G06T1/0078 Robust watermarking, e.g. average attack or collusion attack resistant, using multiple thresholds


Abstract

A method of detecting an attack on a digital object into which a first watermark has been embedded using a watermarking algorithm comprising the steps of: estimating the first watermark, embedding a second watermark using the same algorithm, estimating the second watermark and using information derived from the estimates to determine whether the watermarked image has been attacked. Preferably a library of possible attacks is provided and the information is compared with data contained in the library to determine the type of attack which has taken place. The information derived from the estimates of the watermarks may include average coherence of the estimated watermark with the original watermark in wavelet sub-bands. A Bayesian classifier may be used to determine the most likely attack. A method of estimating a watermark comprising the steps of setting a range of threshold values, estimating the watermark using each threshold value, comparing the estimates with a known watermark block and storing the estimate that is most similar to the known watermark block is also disclosed.

Description

DOUBLE WATERMARKING
TECHNICAL FIELD OF THE INVENTION
This invention relates to watermarking digital objects, and in particular to a method and system for detecting particular attacks on a digital object, such as a digital image.
BACKGROUND TO THE INVENTION
A watermark is a visible or invisible structure in an image, which can be recovered after it has been embedded. A digital watermark is a digital pattern inserted into a digital creation, such as a digital image. The process of inserting a watermark into a digital image (the embedding procedure) can be done directly in the spatial or transformed domain. The watermark can be inserted by altering certain image coefficients in a way that minimises the resulting perceptual distortion of the image.
The imperceptibility of the watermark is the first line of defence since, if an image is not visibly watermarked, it is more difficult to avoid the watermark by tampering with the image undetectably.
One of the aims when inserting the watermark into the image is to maximise the 'energy' of the watermark.
That is, the magnitudes of the changes should be maximised. However, neither this nor the visibility constraint is essential, although both are highly desirable.
As stated above, a watermark is added to an image at the embedding stage. At this time, the integrity of the image is assumed, i.e. no attacks on the image have taken place. The watermarked image is then stored, and whilst in storage may or may not be subject to some form of tampering or attack.
When the authenticity of the image is to be verified, a detection process is used. The aims of a detection process are to determine whether or not an image has been tampered with, to determine the location of any such tampering on the image, to determine the method of attack used and (ideally) to be able to restore the attacked area(s) in the watermarked image.
Conventional watermarking systems are able to locate areas of an image that have been attacked, but cannot provide an indication of the type of attack used.
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide a watermarking system and method that is able to detect whether a watermarked image has been attacked, and to provide an indication as to the type of attack used.
For many watermarking algorithms, it has been found that, when the algorithm is applied sequentially, i.e. adding a first digital watermark to an object to form a first watermarked object, and adding a second digital watermark to the first watermarked object to form a second watermarked object, the strength with which the second digital watermark is embedded is approximately equal to the strength with which the first digital watermark is embedded.
However, in the event that a watermarked object has been tampered with, any estimate of the first digital watermark will also include the effects of the tampering with the first watermarked object, and there will be a significant difference between the estimate of the first digital watermark and the estimate of the second digital watermark which indicates tampering with the first watermarked object.
Therefore, according to a first aspect of the present invention, there is provided a method of detecting an attack on a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm to form a digitally watermarked object (I'), the method comprising the steps of: estimating the first digital watermark embedded in the digitally watermarked object (I'); embedding a second watermark into the digitally watermarked object (I') using said watermarking algorithm; estimating the second watermark embedded in the digitally watermarked object (I'); comparing the estimate of the first digital watermark and the estimate of the second digital watermark; and determining whether the digitally watermarked object (I') has been attacked on the basis of the comparison.
According to a second aspect of the present invention, there is provided a method of characterizing attacks used on a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm to form a digitally watermarked object (I'), the method comprising the steps of: estimating the first digital watermark embedded in the digitally watermarked object (I'); embedding a second watermark into the digitally watermarked object (I') using said watermarking algorithm; estimating the second watermark embedded in the digitally watermarked object (I'); and determining the types of attack used on the basis of information derived from the estimates of the first and second watermarks.
According to a third aspect of the present invention, there is provided a method of estimating a watermark in a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm to form a digitally watermarked object (I'), wherein the first digital watermark block is known, the method comprising the steps of: setting a range of threshold values; estimating the embedded watermark using each of the threshold values; comparing the estimates of the embedded watermark with the known watermark block; and storing the estimate of the embedded watermark that is most similar to the known watermark block.
According to other aspects of the present invention, there are provided computer devices for performing the methods described above.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, and to show how it may be put into effect, reference will now be made to the accompanying drawings, in which:
Figure 1 is a flow chart illustrating the detection process according to the present invention;
Figure 2 illustrates a preferred method of embedding a watermark in an image;
Figure 3 illustrates a preferred method of extracting embedded watermarks;
Figure 4 illustrates a method of generating histograms for attack determination according to the first embodiment of the present invention;
Figure 5 illustrates a method of training the parametric models used in alternative embodiments of the present invention;
Figure 6 illustrates a generalized step of determining the most likely attack according to the present invention;
Figure 7 illustrates the step of determining the most likely attack in accordance with the first embodiment of the invention; and
Figure 8 illustrates the step of determining the most likely attack using parametric models in accordance with alternative embodiments of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A digital object (hereinafter referred to as an image, although the invention is applicable to other objects) has been generated, for example by a digital camera, and it is necessary to preserve the authenticity of the image. The image data may be in any form and, for example, may be a set of luminance and chrominance values associated with respective pixels of the image.
A watermark is embedded in the image data and, advantageously, the embedding procedure is carried out near to the source of the image. For example, the watermarking procedure can be carried out within a digital camera.
The watermarking process can be carried out by a general-purpose computer, operating under the control of suitable software, or by another hardware device, such as a DSP or an ASIC, or other integrated circuit.
The watermarked image is then stored, perhaps on a computer hard disc or other storage medium, and whilst in storage may or may not be subject to some form of tampering or attack.
Figure 1 shows a flow chart illustrating the detection process according to the present invention.
At step 101, the watermarked image is retrieved from storage and the embedded watermark is extracted.
The extraction process comprises determining an estimate of the embedded watermark.
Conventionally, the estimated watermark is then compared with the original watermark, and the differences between the two are used to determine whether particular parts of the image have been attacked.
However, the correlation between the estimated watermark and the original watermark is dependent not only on the attack but on the statistics of the image itself. Therefore, a reference is required to determine whether the correlation observed is due to tampering or due to the statistics of the image.
Therefore, at step 103, in accordance with the invention, a second watermark is added to the watermarked image, using the same watermarking algorithm which was used to embed the first watermark.
At step 105, the second watermark is extracted from the doubly watermarked image using a procedure similar to that used in step 101.
As the second watermark is extracted immediately after being embedded, it is known that no further attacks have occurred on the watermarked image.
Therefore, the correlation observed between the estimate of the second watermark and the second watermark will be caused by the statistics of the image. Therefore, this second watermark estimate provides a reference to determine whether the correlation observed between the estimate of the first watermark and the first watermark is due to tampering or the statistics of the image.
If the estimate of the first watermark and the estimate of the second watermark are similar, then it is likely that no attack has occurred. However, if the estimates are significantly different, then the watermarked image has probably been attacked.
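This comparison can be sketched numerically. The sketch below is illustrative only: the function names, the use of normalised correlation as the coherence measure, and the margin of 0.2 are assumptions, not part of the patent. It flags an attack when the first watermark estimate agrees with the original markedly less well than the freshly embedded reference estimate.

```python
import numpy as np

def coherence(a, b):
    """Normalised correlation between two watermark signals."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attack_suspected(first_estimate, second_estimate, original_wm, margin=0.2):
    """Flag an attack when the first watermark estimate matches the
    original markedly worse than the fresh second (reference) estimate."""
    c1 = coherence(first_estimate, original_wm)
    c2 = coherence(second_estimate, original_wm)
    return (c2 - c1) > margin

rng = np.random.default_rng(1)
wm = np.sign(rng.standard_normal(64))              # stand-in watermark block
clean_est = wm + 0.1 * rng.standard_normal(64)     # estimate with no attack
attacked_est = 0.2 * wm + rng.standard_normal(64)  # estimate after tampering

print(attack_suspected(clean_est, clean_est, wm))      # identical estimates: no attack
print(attack_suspected(attacked_est, clean_est, wm))   # degraded first estimate
```

The second watermark estimate plays exactly the reference role described above: it reflects only the image statistics, so a large gap between the two coherences points to tampering.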
At step 107, the data relating to the estimate of the first watermark and the estimate of the second watermark is converted into an appropriate form to allow a determination of the type, or types, of attack that have been used on the watermarked image.
In a first embodiment of the invention, the data is converted into values that correspond to the elements of a histogram.
In alternative embodiments, a prior model is assumed for each attack and the parameters of the model are stored.
In other alternative embodiments, neural networks or support vector machines (SVMs) are used. These transform the data so that a measure relating to the error of the classification is minimized.
At step 109, the determination of the most likely attack(s) used on the watermarked image is carried out.
The methods used in this determination stage depend upon the type of classifier being used, and several methods are described below with reference to Figures 6, 7 and 8.
Whilst in storage, the whole, or part, of a watermarked image may have been attacked. Indeed, a number of different attacks may have been carried out on different areas of the watermarked image. To determine which attacks have taken place, if any, the watermarked image is considered as a number of elements, and the determination in step 109 can provide an indication as to which attack(s) were most likely to have occurred in each element.
In the first embodiment, each possible attack has an associated histogram stored in a library, and the values generated in step 107 are compared with the histograms associated with the possible attacks. The library also contains a histogram relating to a 'no attack' attack. 'No attack' is defined as no tampering with the watermarked image. A probability can then be calculated that indicates which of the attacks was most likely to have been carried out on the watermarked image.
This probability is denoted as p(P|Ai), or "the probability of the parameters P taking a particular value given attack Ai has taken place". For the image in question, the parameters P are compiled, and for each attack Ai, the probability that those particular parameter values would have been obtained is calculated. The attack for which this probability is greatest is deemed the most likely attack to have occurred.
Preferably, a non-parametric Bayesian classifier is used to determine the type of attack that has occurred. The Bayes classifier attempts to find a solution to the equation:
Â = argmax_{Ai ∈ A} P(Ai|P)   (1)
where Â is the estimated attack which maximises the right-hand side of the equation, Ai is a member of A, the set of all attacks in the library, and P is the set of input parameters.
Bayes' theorem states that:
P(Ai|P) = p(P|Ai)·P(Ai) / p(P)   (2)
Using Equation 2, and observing that p(P) is constant, Equation 1 can be manipulated to:
Â = argmax_{Ai ∈ A} [p(P|Ai)·P(Ai)]   (3)
where P(Ai) may be assumed to be a uniform prior.
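Equation 3 amounts to a simple argmax over the library of attacks. A minimal sketch follows; the attack names, likelihood tables and single-bin parameter vector are invented for illustration and would in practice come from the simulation-trained histograms:

```python
import numpy as np

# Hypothetical likelihood tables p(P|Ai) for three attacks over a
# discretised parameter vector P (here a single histogram bin index).
likelihoods = {
    "no_attack": np.array([0.70, 0.20, 0.10]),
    "jpeg":      np.array([0.10, 0.60, 0.30]),
    "crop":      np.array([0.05, 0.25, 0.70]),
}
priors = {"no_attack": 1 / 3, "jpeg": 1 / 3, "crop": 1 / 3}  # uniform P(Ai)

def most_likely_attack(bin_index):
    """MAP decision of Equation 3: argmax over attacks of p(P|Ai) * P(Ai);
    the evidence p(P) is constant across attacks and can be dropped."""
    return max(likelihoods, key=lambda a: likelihoods[a][bin_index] * priors[a])

print(most_likely_attack(0))  # -> no_attack
print(most_likely_attack(2))  # -> crop
```

Raising the prior of a particular attack, or lowering the prior of 'no attack' when false negatives are a concern, changes the decision exactly as the following paragraph describes.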
Alternatively, P(Ai) may be varied. For example, if a particular attack Ai is more likely, P(Ai) can be increased. Alternatively, if false negatives are of particular concern, the assumed probability of no attack may be decreased.
As no models exist which accurately describe the relationship between the input parameters P for a given attack Ai, these values can be generated through simulation. Briefly, for a given attack from the library of attacks, each image from a library of images is attacked and the attacked images are used to generate histograms using the input parameters P. These histograms approximate p(P|Ai). The generation of histograms is described in more detail below, with reference to Figure 4.
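The simulation step can be sketched as follows. The attack names, the stand-in Gaussian parameter model and the bin edges are assumptions for illustration; in the patent, the parameter P would be measured from genuinely attacked images:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_parameter(attack_strength, n=5000):
    """Stand-in for measuring the input parameter P (e.g. a coherence
    difference) on many attacked training images."""
    return rng.normal(loc=attack_strength, scale=0.1, size=n)

edges = np.linspace(-0.5, 1.5, 21)  # fixed histogram bin edges

# Approximate p(P|Ai) by a normalised histogram per attack.
hist_library = {}
for attack, strength in [("no_attack", 0.0), ("blur", 0.4), ("crop", 0.9)]:
    counts, _ = np.histogram(simulate_parameter(strength), bins=edges)
    hist_library[attack] = counts / counts.sum()

def likelihood(attack, p_value):
    """Look up the histogram bin containing the observed parameter value."""
    bin_idx = int(np.clip(np.digitize(p_value, edges) - 1, 0, len(edges) - 2))
    return hist_library[attack][bin_idx]

best = max(hist_library, key=lambda a: likelihood(a, 0.42))
print(best)  # the observation 0.42 falls squarely in the 'blur' histogram
```

The fixed bin edges here also illustrate the weaknesses (bin size, sparsity) that motivate the parametric models described later.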
Although a Bayesian classifier is described as the preferred classifier, it will be appreciated that many other types of classifiers may be used, depending upon the type of image to be analysed and the types of attacks that are reasonably expected to occur.
Specifically, some classifiers have a lower misclassification rate for a particular type of attack, or for a variety of attacks on a particular type of image.
According to an alternative embodiment of the present invention, models are constructed for each attack and the models are stored, rather than generating and storing the histograms as described above. These models provide the advantage over non-parametric techniques (such as those described above) that histograms are not necessary for classification.
The number of features used as inputs to the classifier can be increased, which provides superior classification performance, without imposing significant extra memory or computational requirements.
The improvements in performance due to not having a fixed histogram bin size, or issues with histogram sparsity (see the description of histogram generation with reference to Figure 4 below), outweigh the disadvantages due to a mismatch between the model and the observed data.
In the alternative embodiments, a Bayesian framework is constructed, and assumptions are made regarding the underlying distribution.
As with the histogram model, p(P|A) is not known, either in terms of specific parameters or an appropriate general model. As histograms grow exponentially in size with dimensionality, it is preferable to assume a parametric model as describing the distribution completely. In the second embodiment, the distribution is assumed to be Gaussian and is therefore fully described by its mean and covariance matrix.
The general multivariate normal density function is given by
p(x) = (2π)^(-d/2) · |Σ|^(-1/2) · exp[-(1/2)(x - μ)^t Σ^(-1) (x - μ)]   (4)
where d is the number of dimensions, x is the d-component input vector, Σ is the d-by-d covariance matrix, μ is the d-dimensional mean vector, and (x - μ)^t indicates the transpose of (x - μ). From this, the likelihood discriminant function is written as
g_l(x) = -(1/2)(x - μ_l)^t Σ_l^(-1) (x - μ_l) - (1/2) ln(det(Σ_l))   (5)
where x is assigned to the class l for which g_l(x) is a maximum. It has been shown that, when the data has a multivariate normal distribution, the use of (5) gives a performance equal to that of the Bayes classifier in (3). The data for the model is built up by estimating the d-by-d covariance matrix Σ and the d-dimensional mean vector μ by simulating different attacks on various images. This training process is described with reference to Figure 5 below.
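Equation 5 can be written directly in code. The class means and covariances below are invented stand-ins for values that would be estimated by the training process of Figure 5:

```python
import numpy as np

def gaussian_discriminant(x, mean, cov):
    """Likelihood discriminant g_l(x) of Equation 5:
    g_l(x) = -1/2 (x - mu)^t Sigma^{-1} (x - mu) - 1/2 ln det(Sigma)."""
    diff = x - mean
    return float(-0.5 * diff @ np.linalg.solve(cov, diff)
                 - 0.5 * np.log(np.linalg.det(cov)))

# Two hypothetical attack classes (means and covariances are assumed).
classes = {
    "no_attack": (np.array([0.0, 0.0]), np.eye(2)),
    "jpeg":      (np.array([2.0, 2.0]), 0.5 * np.eye(2)),
}

x = np.array([1.8, 2.1])  # observed parameter vector P
best = max(classes, key=lambda c: gaussian_discriminant(x, *classes[c]))
print(best)  # -> jpeg
```

Only the per-class mean and covariance are stored, which is the memory advantage over the d-dimensional histograms noted above.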
In a third embodiment, the distribution is assumed to be skew-normal, and the distribution is therefore described by its mean, covariance matrix and skew. The inclusion of the skew parameter enables more accurate modelling of the distribution, and therefore an improved error performance compared to a standard normal distribution.
The basic multivariate skew-normal distribution for a random variable Z has the density function
f(z) = 2 φ_k(z; Ω_ν) Φ(α^t z),  z ∈ R^k   (6)
where φ_k(z; Ω_ν) is the k-dimensional normal density function with zero mean and correlation matrix Ω_ν, Φ(·) is the N(0,1) cumulative distribution function, and α is a k-dimensional vector controlling the skew. For the case where α = 0, (6) reduces to φ_k(z; Ω_ν), i.e. there is no skew. The effect of the skew is such that both positive and negative half-normal distributions are achievable. It is often convenient to write Z ~ SN_k(Ω_ν, α).
The theory can be extended to include scale and location parameters, such that
Y = ξ + ωZ   (7)
where ξ = (ξ_1, ..., ξ_k)^t and ω = diag(ω_1, ..., ω_k). The density of the new skew-normal variable Y may be written as
f(y) = 2 φ_k(y - ξ; Ω) Φ(α^t ω^(-1) (y - ξ))   (8)
where Ω = ωΩ_νω, so we may write Y ~ SN_k(ξ, Ω, α).
Therefore, using the Bayesian framework described in (3), we assume that p(P|Ai) ~ SN_k(ξ_i, Ω_i, α_i).
Often, for example in the case of the Gaussian distribution, discriminant functions may be derived. For the skew-normal distributions, no such functions exist for cases other than when the Ω_i and α_i are equal. Therefore, (8) must be evaluated in full.
The data for the model is built up by estimating ξ_i, Ω_i and α_i from (8) by simulating different attacks on various images. Again, this is described with reference to Figure 5.
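Equation 8 can be evaluated in full as follows. This is a sketch of the Azzalini-form density the text appears to use, with illustrative parameter values; with α = 0 it reduces to the plain multivariate normal, matching the remark after Equation 6:

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(t):
    """Phi(t): N(0,1) cumulative distribution function via erf."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def mvn_pdf(x, mean, cov):
    """phi_k(x - mean; cov): k-dimensional normal density."""
    k = len(mean)
    diff = np.asarray(x, dtype=float) - mean
    quad = diff @ np.linalg.solve(cov, diff)
    return float(np.exp(-0.5 * quad)
                 / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov)))

def skew_normal_pdf(y, xi, Omega, alpha):
    """Equation 8: f(y) = 2 phi_k(y - xi; Omega) Phi(alpha^t omega^{-1}(y - xi)),
    where omega = diag(sqrt(diag(Omega))) holds the scale parameters."""
    omega_inv = np.diag(1.0 / np.sqrt(np.diag(Omega)))
    z = omega_inv @ (np.asarray(y, dtype=float) - xi)
    return 2.0 * mvn_pdf(y, xi, Omega) * std_normal_cdf(float(alpha @ z))

# Sanity check: with alpha = 0 the skew term is Phi(0) = 1/2, and the
# density at the location parameter equals the plain bivariate normal.
xi = np.zeros(2)
val = skew_normal_pdf(xi, xi, np.eye(2), np.zeros(2))
print(round(val, 6))  # 1/(2*pi) = 0.159155
```

Because no closed-form discriminant exists when the Ω_i and α_i differ between classes, classification evaluates this density once per candidate attack and takes the argmax of Equation 3.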
Although the two preferred parametric model embodiments described above use Gaussian and skew-normal distributions, it will be appreciated that many other types of distribution may be used.
The watermarking technique of the present invention may also be used on images that have been compressed after the first watermark has been applied but before the watermarked image is stored.
The present invention (using non-parametric histograms, parametric models or support vector machines) may be used to detect attacks on watermarked images that have been compressed using a compression algorithm such as JPEG, JPEG2000, Embedded Zerotree Wavelet (EZW) or Set Partitioning in Hierarchical Trees (SPIHT). It will be appreciated that the invention is also applicable to other compression algorithms.
In the case where a skew-normal distribution is used on images that have been JPEG compressed after the first watermark has been applied, the performance of the classifier improves.
Further, multiple classifiers (including both parametric and non-parametric classifiers) may be used in determining attacks on an image, in order to capitalize on the varying performance of different classifiers.
Figure 2 illustrates a preferred method of embedding a watermark in an image according to the invention.
This method of embedding a watermark is generally conventional, and will be described only briefly as the details thereof will be known to the person skilled in the art. Moreover, although this is a preferred algorithm, it will be appreciated that the present invention is applicable to any additive watermarking
algorithm where there is a dependence on the image being watermarked.
In step 201, the watermark is created. A bipolar (±1) block of size block_size x block_size is generated using a random number generator. A user key acts as a seed number (seed) for the random number generator.
The bipolar block is doubled in size by replicating each bit horizontally, vertically and diagonally such that, e.g.
( 1 -1
 -1  1 )
becomes
( 1  1 -1 -1
  1  1 -1 -1
 -1 -1  1  1
 -1 -1  1  1 )
This is known as the watermark block WM. WM is then tiled over the whole of the image I to be watermarked.
At step 203, the Discrete Wavelet Transform (DWT) is applied to the whole of the image I. At step 205, the Discrete Wavelet Transform (DWT) is applied to the tiled WM.
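The block generation and tiling of steps 201-203 (excluding the DWT itself) can be sketched as follows; the function names and choice of random generator are assumptions:

```python
import numpy as np

def make_watermark_block(seed, block_size):
    """Bipolar (+/-1) block seeded by the user key, with each bit then
    replicated horizontally, vertically and diagonally (2x2)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(block_size, block_size)) * 2 - 1
    return np.repeat(np.repeat(bits, 2, axis=0), 2, axis=1)

def tile_over_image(wm_block, image_shape):
    """Tile the watermark block WM over the whole image to be watermarked."""
    reps = (-(-image_shape[0] // wm_block.shape[0]),   # ceiling division
            -(-image_shape[1] // wm_block.shape[1]))
    return np.tile(wm_block, reps)[:image_shape[0], :image_shape[1]]

wm = make_watermark_block(seed=42, block_size=4)
print(wm.shape)  # (8, 8) after the 2x2 replication
tiled = tile_over_image(wm, (10, 10))
print(tiled.shape)  # (10, 10)
```

The 2x2 replication gives the watermark a coarser spatial structure, which helps it survive the downsampling inherent in each DWT level.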
At step 207, the Noise Visibility Function (NVF) of each subband of the DWT of the image I is calculated, where
NVF(i,j) = 1 / (1 + θ·σ²(i,j)),  θ = D / σ²_max   (9)
σ²(i,j) is the local subband variance calculated using a 3-by-3 sliding window, D is a user-defined variable in the range 50-100 and σ²_max is the maximum local variance for the subband.
The embedding strength for all of the wavelet coefficients, S_{L,O}(i,j), is then determined using
S_{L,O}(i,j) = Se_{L,O} · (1 - NVF(i,j)) + Sf_{L,O} · NVF(i,j)   (10)
where Se and Sf are matrices that denote the embedding strength in edge and flat regions of the image I respectively. Se and Sf may be defined by the user depending on the required trade-off between visibility and watermark energy. The subscripts L and O signify the decomposition level and orientation respectively of the wavelet subbands, and i and j refer to the position of the current coefficient in a two-dimensional space.
This ensures that the watermark is embedded in the image with as much energy as possible without the watermark becoming visible.
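Equations 9 and 10 can be sketched as follows, assuming scalar embedding strengths for the edge and flat regions (in the patent, Se and Sf are user-defined matrices) and a hard-coded D:

```python
import numpy as np

def nvf(subband, D=75):
    """Equation 9: NVF(i,j) = 1 / (1 + theta * var(i,j)), theta = D / var_max,
    with var(i,j) the local variance in a 3x3 sliding window."""
    padded = np.pad(subband.astype(float), 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    local_var = windows.var(axis=(-2, -1))
    theta = D / local_var.max()
    return 1.0 / (1.0 + theta * local_var)

def embedding_strength(nvf_map, s_edge=8.0, s_flat=2.0):
    """Equation 10: strong embedding where NVF -> 0 (textured/edge regions),
    weak where NVF -> 1 (flat regions, where changes are most visible)."""
    return s_edge * (1.0 - nvf_map) + s_flat * nvf_map

rng = np.random.default_rng(0)
band = rng.normal(size=(16, 16))       # stand-in wavelet subband
s = embedding_strength(nvf(band))
print(s.min() >= 2.0 and s.max() <= 8.0)  # True: strength stays within [Sf, Se]
```

Since NVF lies in (0, 1], the strength interpolates between the flat-region and edge-region settings, which is exactly the visibility/energy trade-off described above.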
At step 209, the watermark is embedded into the image I by multiplying the DWT of the tiled watermark block WM by S_{L,O}(i,j) and adding it to the DWT of the image I to form the watermarked image I' in the transform domain, i.e.
I' = I + WM · S_{L,O}(i,j)   (11)
At step 211, an inverse Discrete Wavelet Transform is performed on I' to obtain the watermarked image in the spatial domain.
Then, I' is rounded to take integer values in an appropriate range.
The image has now been watermarked and can be stored in a suitable device.
Figure 3 illustrates a method of extracting the embedded watermark in accordance with the invention.
At this stage, the watermarked image I' has been stored, and it is not known whether it has been attacked. The (possibly) attacked watermarked image is denoted I'.
The watermark block WM and the watermark block size block_size are known.
At step 301, WM is tiled so that it is the same size as the (possibly) attacked watermarked image I'.
The Discrete Wavelet Transform (DWT) is applied to the tiled watermark block WM.
At step 303, the Discrete Wavelet Transform is applied to the watermarked image I'.
Then the transformed tiled watermark block WM is split back into blocks of size block_size/(2^level) x block_size/(2^level) and, for each transformed watermark block, the process in steps 307 to 321 is performed.
It should be noted that the block size decreases in area by a factor of 4 in the wavelet domain each time the DWT is applied. 'Level' indicates the number of times the image has been decomposed using the DWT to get the current wavelet subband.
At step 307, a range of possible thresholds is set. The range depends upon whether soft or hard thresholding is used.
The process of estimating the watermark works by taking coefficient values with a magnitude below the threshold. By varying the threshold, different estimates of the watermark are obtained.
Then, for each threshold value, steps 311 to 319 are performed.
At step 311, the embedded watermark is estimated using the current threshold value.
The thresholded image block, comprising the watermarked image coefficients that are larger than the threshold, is subtracted from the image block to give an estimate of the watermark WM. An image block is defined as a block of the image I' of size block_size/(2^level) x block_size/(2^level).
The estimation process involves first thresholding the image to give an estimate of the image without the watermark. The difference between the estimate of the watermark-free image and the watermarked image is an estimate of the watermark.
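The estimation step can be sketched as follows, using hard thresholding on a one-dimensional block of coefficients for clarity (the function name and the sample values are illustrative):

```python
import numpy as np

def estimate_watermark(block, threshold):
    """Coefficients with magnitude above the threshold are kept as the
    estimate of the watermark-free image; subtracting that estimate from
    the block leaves the small coefficients, an estimate of the watermark."""
    image_estimate = np.where(np.abs(block) > threshold, block, 0.0)
    return block - image_estimate

block = np.array([5.0, -0.3, 4.2, 0.4, -6.1, 0.2])  # image + weak watermark
wm_est = estimate_watermark(block, threshold=1.0)
print(wm_est)  # only the sub-threshold coefficients survive
```

The rationale: large coefficients are dominated by the image content, while the low-magnitude residue carries most of the additive watermark energy.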
At step 313, the coherence between the estimate of the watermark block and the original transformed watermark block WM is calculated.
At step 315, it is determined whether the coherence of the estimate of the watermark block is greater than the maximum coherence obtained with any other threshold value used in calculating previous estimates.
If the current coherence is greater than the maximum coherence obtained thus far, the process moves to step 317. Here, the estimate of the watermark WM and its coherence are stored as the maximum. The process moves to step 319, where steps 309 to 317 are repeated for another threshold value.
If the current coherence is less than the maximum, the process moves to step 319, where steps 309 to 317 are repeated for the remaining threshold values.
By comparing the coherence of the current estimated watermark block WM with the maximum coherence obtained thus far, the estimate of the watermark that is most similar to the original is obtained.
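Steps 307 to 319 amount to a search over thresholds, keeping the estimate with maximum coherence. A sketch follows; the toy coefficients and threshold range are invented for illustration:

```python
import numpy as np

def coherence(a, b):
    """Normalised correlation; the small epsilon guards the zero estimate."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_estimate(block, known_wm, thresholds):
    """Estimate the watermark at each threshold and keep the estimate that
    is most coherent with the known watermark block."""
    best = (-np.inf, None)
    for t in thresholds:
        est = np.where(np.abs(block) > t, 0.0, block)  # keep sub-threshold part
        c = coherence(est, known_wm)
        if c > best[0]:
            best = (c, est)
    return best

wm = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0])       # known bipolar block
image = 10.0 * np.array([1.0, 0, -1.0, 0, 1.0, 0])     # strong image content
block = image + 0.5 * wm                               # weakly watermarked block
c, est = best_estimate(block, wm, thresholds=[0.1, 1.0, 5.0, 100.0])
print(round(c, 3))  # 0.707: the mid threshold isolates the watermark best
```

Too low a threshold discards the watermark along with the image; too high a threshold lets the image content swamp the estimate; the coherence criterion picks the compromise automatically.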
Once all of the threshold values for a particular watermark block WM have been examined, the process is repeated for each watermark block tile WM in the image I'.
At step 323, an inverse DWT is performed on a composite of the estimated watermark blocks WM to convert them into the spatial domain.
The method of extracting watermarks described above is used for extracting both the first watermark and the second watermark.
If the object was tampered with, the tampering would affect the estimates of the watermark.
Nevertheless, it should be noted that, irrespective of the possibility of tampering, the estimate of the watermark that is most similar to the original watermark is still taken as the best estimate for these purposes.
There are potentially an infinite number of different attacks that can be performed on a particular image, and it is not possible to determine a definite mathematical formula to describe how different attacks affect particular watermarks. Therefore, in accordance with an embodiment of the invention, histograms are generated for different attacks based on observed data.
As attacks can affect different images in different ways, a wide range of different images are used in compiling the histograms.
Figure 4 illustrates a method of generating histograms for attack determination in accordance with the first embodiment of the invention.
A histogram is generated for each possible attack stored in a library. The histogram is constructed from the analysis of the results of attacks on a number of sample images stored in a database.
Each image in the database is watermarked, one or more times, and subjected to each attack stored in the library in turn. For each attack, the individual elements in the watermarked (and attacked) image are analyzed. It should be noted that an element might be a pixel, a block of pixels, a transform coefficient or a block of transform coefficients.
At step 407, the values contained in the data are converted into an appropriate form for compiling a histogram. For all values contained in the data, the histogram index corresponding to these values is calculated, in order that the value stored in each of the calculated histogram indices, or bins, may be incremented.
For each element, a number of different parameters can be used as inputs to the histogram. For example, the parameters may include: the possibly attacked watermarked image; the estimated first watermark; the estimated second watermark; the average coherence of the estimated first watermark with the original in Level 1 wavelet subbands; the average coherence of the estimated first watermark with the original in Level 2 wavelet subbands; the sum of the absolute values of the wavelet coefficients for each watermark block, averaged over all Level 1 subbands; the average of the masking function strength for all subbands in the Level 1 wavelet decomposition; the average of the masking function strength for all subbands in the Level 2 wavelet decomposition; the average difference in watermark coherence (or correlation coefficient) for the horizontal, vertical and diagonal orientations in Level 1 (or Level 2) of the wavelet decomposition; the variance of the watermarked image; or the variance of individual subbands.
It will be appreciated that the above parameters are but a small selection of the total number of parameters that may be used in constructing the histograms, and that other parameters will be readily apparent to a person skilled in the art.
Some parameters may vary on a block-by-block basis, while others may vary on a pixel-by-pixel basis.
For a parameter that varies on a block-by-block basis, the parameter value is replicated block_size x block_size times.
The level is the number of times the image has been decomposed using the DWT, where for each decomposition, apart from the first time when the input is the original image, the input to the DWT is the low frequency subband from the previous decomposition.
Within each level, there are 4 different subbands or orientations. These are: the approximation, which contains the low frequency information mentioned previously, and the horizontal, vertical and diagonal subbands, which correspond to components of the image that vary horizontally, vertically and diagonally respectively. The average difference in coherence for the horizontal, vertical and diagonal orientations for a given level is defined thus. It is the difference between the coherence of the first extracted watermark with its original, and the second extracted watermark with its original, averaged over the appropriate subbands. It is possible to use separate inputs relating to each of these orientations, or to reduce the number of inputs by taking an average of them.
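The average coherence difference just defined might be sketched as below. This is a hedged illustration: "coherence" is taken to be the correlation coefficient (as the description suggests), and the subbands are assumed to be passed as dicts keyed 'H', 'V', 'D'.

```python
import numpy as np

def coherence(w_est, w_orig):
    """Correlation coefficient between an extracted watermark and its original."""
    return np.corrcoef(w_est.ravel(), w_orig.ravel())[0, 1]

def avg_coherence_difference(est1, orig1, est2, orig2):
    """Average, over the horizontal, vertical and diagonal subbands of one
    level, of the difference between the first- and second-watermark
    coherences with their respective originals."""
    diffs = [coherence(est1[s], orig1[s]) - coherence(est2[s], orig2[s])
             for s in ('H', 'V', 'D')]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
orig1 = {s: rng.normal(size=(8, 8)) for s in 'HVD'}
orig2 = {s: rng.normal(size=(8, 8)) for s in 'HVD'}
# Perfect extraction of both watermarks gives a zero average difference.
d = avg_coherence_difference(orig1, orig1, orig2, orig2)
```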
The number of inputs determines the number of dimensions in the histogram, and so reducing the number of inputs reduces the data handling requirements.
The number of dimensions in the histogram, and the particular features used therein, should be carefully chosen. It would be expected that the inclusion of an extra feature in the input vector, P (i.e. increasing the dimension of the histogram), would at worst leave the results of the classification unaffected. However, this is not the case. It has been found that, for certain classifiers, the best performance occurs when the number of features used is between 3 and 6 (i.e. the histograms have between 3 and 6 dimensions). An increase of dimension beyond 6 usually impairs the performance of the classifier. This is known as the 'Curse of Dimensionality'.
This may be understood in terms of the population of the histograms. As the number of features increases, the number of bins will rise exponentially.
As the training set (the set of images used to construct the histograms) is finite, the histograms will become increasingly sparse. Therefore, the likelihood of a particular P corresponding to a bin that has been filled sufficiently to be representative of the true distribution is small. Indeed, many of the bins may be zero. Thus, in the high sparsity case, for all attacks in A, for a given P, it is likely that that particular value of P was not generated during the training process, so it may not be possible to make a decision. However, it is also noted that many of the classifiers having a poorer performance use only two features. Thus, a compromise must be struck: too many parameters and the histograms become too sparse to be reliable; too few parameters and the samples from the different classes are not sufficiently separated to allow accurate classification.
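The exponential growth of empty bins can be demonstrated numerically. This toy experiment (not from the patent; sample counts and bin counts are illustrative) fills histograms of increasing dimension from the same fixed-size training set:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, bins = 5000, 10
empty_fractions = []
for d in (2, 4, 6):
    x = rng.uniform(0, 1, size=(n_samples, d))
    hist, _ = np.histogramdd(x, bins=[np.linspace(0, 1, bins + 1)] * d)
    empty_fractions.append(float(np.mean(hist == 0)))
    print(f"d={d}: {empty_fractions[-1]:.1%} of {bins**d} bins are empty")
```

With 5000 samples, a 2-dimensional histogram of 100 bins is densely populated, while a 6-dimensional histogram of a million bins is almost entirely empty — the sparsity problem described above.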
Although the best performance for some classifiers is obtained when the number of features used as histogram inputs is between 3 and 6, it will be appreciated that other classifiers may have an optimum dimension of less than 3, or greater than 6.
Further, different classifiers provide their optimum performance using different sets of parameters, as well as having an optimum dimension.
It has been found that, in one preferred embodiment of the invention, using one particular library of images and one particular library of attacks, the optimal selection of features for the non-parametric Bayes classifier is: the estimated second watermark, the average coherence of the estimated first watermark with the original in Level 1 wavelet subbands, the average coherence of the estimated first watermark with the original in Level 2 wavelet subbands and the average of the masking function strength for all subbands in the Level 2 wavelet decomposition.
Returning to Figure 4, values are obtained for each of the selected parameters, and the values are assigned to appropriate histogram bins. That is, for each parameter, there are a number of bins, each relating to a range of possible values for the parameter. The appropriate bin is then incremented.
Once this process has been completed, there will exist, for each particular attack, a histogram that indicates the probability of the values that the various parameters are likely to take in the event that that particular attack has been used on an image.
Figure 5 illustrates a method of training the parametric models used in alternative embodiments of the present invention.
Here, a parametric model is derived for each of the attacks contained in the library of possible attacks. Again, data for a 'No Attack' attack is collected, and an appropriate model generated. As with the histograms above, the models are constructed on the basis of the results of attacks on a number of sample images stored in a database.
This generalized method of training the parametric models applies to both the Gaussian and Skew-Normal distributions.
Steps 502, 503 and 505 are repeated for each attack stored in the library.
At step 502, training data for the model is obtained by watermarking each image in the database and attacking the watermarked image using the current attack. The training data comprises values of various parameters that will be used in the model. The parameters may be similar to those used in constructing the histograms above, or may be different. The values are obtained by analysing the elements of the attacked images and from analysing the effects of the application of the second watermark.
At step 503, a percentage of the data derived from the results of a particular attack used on all of the images in the database is loaded into memory.
Since the amount of data derived from an attacked image can be very large (depending upon the number of parameters being used in the model and the size of the attacked images) and the determination of a suitable model can be very complex (depending upon the model being used), usually only a percentage of the total data obtained is loaded into memory.
The selection of the percentage of the total data to be used may be performed manually, i.e. by selecting particular parameters to be used, or by selecting the data for parameters obtained from a subset of the images in the database, or, preferably, the selection may be performed randomly.
It will be appreciated that if sufficient computing power is available, all of the training data may be used in training the parametric model.
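The preferred random selection of a percentage of the training data might be sketched as below; the array shape and the 10% fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
data = np.arange(100_000).reshape(-1, 5)   # hypothetical training vectors, 5 features each
fraction = 0.1                             # assumed percentage to load into memory
idx = rng.choice(len(data), size=int(fraction * len(data)), replace=False)
subset = data[idx]                         # the randomly selected training subset
```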
At step 505, the classifier is trained based upon the data loaded in step 503.
According to the second and third embodiments of the present invention, this step comprises estimating the appropriate model parameters from the loaded data.
In the second embodiment, where a Gaussian distribution is used, this step comprises estimating Σ (the d-by-d covariance matrix) and μ (the d-dimensional mean vector).
In the third embodiment, where a Skew-Normal distribution is used, this step comprises estimating Σ (the d-by-d covariance matrix), μ (the d-dimensional mean vector) and α (the k-dimensional vector controlling the skew).
According to the fourth embodiment of the present invention where neural networks or support vector machines (SVMs) are used, this step comprises constructing a decision boundary that minimises the number of errors in the classification.
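For the Gaussian (second-embodiment) case, the training step reduces to standard sample estimates. A minimal sketch, assuming the loaded training feature vectors are stacked row-wise:

```python
import numpy as np

def fit_gaussian(training_vectors):
    """Estimate the d-dimensional mean vector and the d-by-d covariance
    matrix from the loaded training data for one attack."""
    mu = training_vectors.mean(axis=0)
    sigma = np.cov(training_vectors, rowvar=False)
    return mu, sigma

rng = np.random.default_rng(2)
# Hypothetical training data for one attack: 2 features per element.
data = rng.normal(loc=[1.0, -2.0], scale=[0.5, 1.5], size=(20000, 2))
mu, sigma = fit_gaussian(data)
```

The Skew-Normal case additionally estimates the skew vector, and the neural-network/SVM case replaces parameter estimation with decision-boundary fitting.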
Using these histograms or models, the step of determining the most likely attack (step 109 in Figure 1) can be carried out.
Figure 6 illustrates a generalized step of determining the most likely attack according to the present invention.
Each element of the possibly attacked watermarked image is considered in turn (steps 601 to 605). As above, an element may be a pixel, a block of pixels, a transform coefficient or a block of transform coefficients. For each element in the image, the most likely attack is calculated (step 603).
Preferably, this comprises assigning a probability to each attack (including the 'No Attack' attack) in the library of possible attacks. This probability indicates the likelihood that this attack has been carried out on that particular element of the image.
The identity of the most likely attack is stored in an array, where each element of the array corresponds to an element in the image.
This array forms the output of the determination process and indicates which attack is most likely to have occurred on the corresponding element in the image. To provide a higher degree of certainty in classifying the types of attack used, multiple classifiers may be used. In this case, output arrays for each classifier are generated and the arrays are compared. Parts of the arrays indicating the same type of attack reinforce the likelihood that that attack has taken place.
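The comparison of the output arrays of multiple classifiers might be sketched as follows; encoding attack identities as integer labels is an assumption for the example.

```python
import numpy as np

def agreement_map(label_arrays):
    """True where every classifier assigned the same most-likely attack
    to an image element; agreement reinforces the classification."""
    stacked = np.stack(label_arrays)
    return np.all(stacked == stacked[0], axis=0)

a = np.array([[0, 1], [2, 2]])   # classifier 1: per-element attack labels
b = np.array([[0, 1], [3, 2]])   # classifier 2 disagrees on one element
mask = agreement_map([a, b])
```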
A further step in the detection process can compare the indications of the most likely attack across the whole image. This may be done so that higher confidence can be placed in an answer if it indicates that many contiguous elements of the image have been the subject of a particular attack, rather than if it indicates that isolated elements have been the subject of different attacks.
Figure 7 illustrates the step of determining the most likely attack in accordance with the first embodiment of the present invention.
The process contained in steps 703 to 713 is repeated for each element in the (possibly) attacked watermarked image.
At step 703, the values contained in the data for each element are converted into an appropriate form for indexing a histogram. Thus, values are obtained for each of the parameters used in the histograms.
As described above, a number of different parameters may be used, depending upon the complexity of the histogram.
If the average difference in coherence for the horizontal, vertical and diagonal orientations in Level 1 (or Level 2) of the wavelet decomposition is used, these values are calculated from the coherences of the first and second watermarks with their estimates.
Then, in steps 705 to 711, the histogram data for each element in the (possibly) attacked watermarked image is used to determine, via the histograms in the library, which attack is most likely. Specifically, for the histogram index corresponding to each element in the (possibly) attacked watermarked image, it is determined which of the histograms has the highest bin value for that index.
In step 707, it is determined whether the current attack is the most likely to have been used on the element in question. This is achieved by comparing the current histogram bin value to the maximum obtained so far. The higher the histogram bin value, the more likely it is that this attack has occurred. So the attack that is most likely to have occurred will correspond to the histogram that has the highest bin value corresponding to the current input value.
If the current bin value is the highest obtained so far, the process moves to step 709 where the current attack from the library is stored as the most likely.
The identity of the attack is stored in an output array, where each element of the array corresponds to an element in the image.
If the current bin value is not greater than the maximum obtained so far, the current attack is not stored, and the process returns to step 705, where the next attack in the library is evaluated.
Once all attacks in the library have been evaluated for all elements in the image I', the process ends.
The output of the process is an array where each element of the array corresponds to an element of the image, and the content of each element in the array indicates what attack is most likely to have occurred on the corresponding element in the image.
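The per-element histogram look-up of steps 705 to 711 might be sketched as below. The data structures (integer bin indices per element, a dict of per-attack histograms) and the toy 1-dimensional histograms are assumptions for illustration.

```python
import numpy as np

def classify_elements(bin_indices, attack_histograms):
    """For each element's histogram index, return the attack whose
    histogram holds the highest bin value at that index."""
    names = list(attack_histograms)
    # Bin value of every attack's histogram at every element's index.
    scores = np.stack([attack_histograms[a][tuple(bin_indices.T)]
                       for a in names])
    return [names[i] for i in scores.argmax(axis=0)]

hists = {'No Attack': np.array([5.0, 1.0]),   # toy 1-D histograms
         'JPEG':      np.array([1.0, 9.0])}
labels = classify_elements(np.array([[0], [1]]), hists)
```

The vectorised look-up replaces the explicit maximum-so-far loop of Figure 7 but produces the same per-element argmax, and its output is the array described in the paragraph above.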
The further step of comparing these indications across the whole image can now be carried out.
Figure 8 illustrates the step of determining the most likely attack using parametric models in accordance with alternative embodiments of the present invention. The process contained in steps 803 to 813 is repeated for each element in the (possibly) attacked watermarked image.
Steps 805 to 811 are repeated for each attack in the library of possible attacks.
As with the histograms above, values of various parameters will have been obtained for each element in the possibly attacked watermarked image. The actual parameters will depend upon the model being used.
At step 805, the probability of the current attack being used on the watermarked image is calculated based upon the input vector (the vector containing the values of the various parameters).
The probability of the current attack being used on the current element is calculated by considering the probability of the parameters P taking their measured values in view of the model parameters (which were obtained during the training stage).
For example, in the second embodiment where a Gaussian distribution is used, the probability of occurrence of the values of parameters P is calculated for the estimated Σ (the d-by-d covariance matrix) and μ (the d-dimensional mean vector). The probability function is given by Equation (4) above.
In the third embodiment where a Skew-Normal distribution is used, the probability of occurrence of the values of parameters P is calculated for the estimated Σ (the d-by-d covariance matrix), μ (the d-dimensional mean vector) and α (the k-dimensional vector controlling the skew). The probability function is given by Equation (8) above.
At step 807, it is determined whether the probability calculated for the current attack on the current element of the image in step 805 is greater than the respective probabilities calculated for the other possible attacks on the current element of the image. If the probability for the current attack is greater than the maximum probability calculated for other attacks already considered, the process moves to step 809 where the current attack is stored as the most likely attack, and then to step 811 where the probability is stored as the maximum probability obtained for that element so far. The process then moves to step 813.
If the probability for the current attack is not greater than the maximum probability calculated for other attacks already considered, the process moves to step 813.
If all of the attacks in the library of possible attacks have not been considered for the current element, the process returns to step 805 from step 813.
Once all attacks in the library have been considered, the process moves to step 815.
As described above, the steps of 803 to 815 are repeated for each element in the image.
Once all attacks in the library have been evaluated for all elements in the image I', the process ends. As with the generalized process in Figure 6, the output of the process is an array, with the identity of the attack that has been found to be most likely to have occurred on a particular element stored in an appropriate element of the output array.
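The per-element maximum-probability selection of steps 805 to 813 for the Gaussian embodiment might be sketched as follows. The log of the density is compared rather than Equation (4) itself, which preserves the argmax; the model parameters and feature values are illustrative assumptions.

```python
import numpy as np

def gaussian_log_pdf(p, mu, sigma):
    """Log multivariate-normal density of feature vector p."""
    d = len(mu)
    diff = p - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.inv(sigma) @ diff)

def most_likely_attack(p, models):
    """Return the attack whose (mu, sigma) model gives p the highest probability."""
    return max(models, key=lambda a: gaussian_log_pdf(p, *models[a]))

models = {'No Attack': (np.array([0.9, 0.9]), np.eye(2) * 0.01),
          'JPEG':      (np.array([0.9, 0.2]), np.eye(2) * 0.01)}
# High coherence of both extracted watermarks favours 'No Attack'.
best = most_likely_attack(np.array([0.85, 0.80]), models)
```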
Again, the further step of comparing these indications across the whole image can now be carried out. There is therefore described a method of detecting and characterising attacks on a watermarked image.

Claims (58)

1. A method of characterizing attacks used on a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm to form a digitally watermarked object (I'), the method comprising the steps of: estimating the first digital watermark embedded in the digitally watermarked object (I'); embedding a second watermark into the digitally watermarked object (I') using said watermarking algorithm; estimating the second watermark embedded in the digitally watermarked object (I'); and determining the types of attack used on the basis of information derived from the estimates of the first and second watermarks.
2. A method as claimed in claim 1 further comprising the step of: providing a library of possible attacks, wherein the library contains data relating to the effects of possible attacks on a variety of candidate digital objects; and the step of determining comprises comparing the information derived from the estimates of the first and second watermarks with the data contained in the library of possible attacks.
3. A method as claimed in claim 2 wherein the library of possible attacks further contains data relating to the effects of no attacks on the variety of candidate digital objects.
4. A method as claimed in any preceding claim wherein the information derived from the estimates of the first and second watermarks includes the average coherence of the estimated first watermark with the original first watermark in Level 1 wavelet subbands.
5. A method as claimed in one of claims 1 to 4 wherein the information derived from the estimates of the first and second watermarks includes the average coherence of the estimated first watermark with the original first watermark in Level 2 wavelet subbands.
6. A method as claimed in any preceding claim wherein the information derived from the estimates of the first and second watermarks includes the average coherence of the estimated second watermark with the original second watermark in Level 1 wavelet subbands.
7. A method as claimed in one of claims 1 to 4 wherein the information derived from the estimates of the first and second watermarks includes the average coherence of the estimated second watermark with the original second watermark in Level 2 wavelet subbands.
8. A method as claimed in one of claims 1 to 4 wherein the information derived from the estimates of the first and second watermarks includes the sum of the absolute value of the wavelet coefficients for each watermark block averaged over all Level 1 subbands.
9. A method as claimed in one of claims 1 to 4 wherein the information derived from the estimates of the first and second watermarks includes the averaging of the masking function strength for all subbands in the Level 1 wavelet decomposition.
10. A method as claimed in one of claims 1 to 4 wherein the information derived from the estimates of the first and second watermarks includes the averaging
of the masking function strength for all subbands in the Level 2 wavelet decomposition.
11. A method as claimed in any preceding claim, wherein the step of determining the type of attack used comprises assigning an a priori probability value to each possible attack stored in the library.
12. A method as claimed in any preceding claim, wherein the step of determining the types of attack used comprises considering elements of the digitally watermarked object (I') in turn to provide an indication of the areas of the digital object (I) that have been subjected to attack.
13. A method as claimed in claim 12 wherein an element is a pixel.
14. A method as claimed in claim 12 wherein an element is a block of pixels.
15. A method as claimed in claim 12 wherein an element is a transform coefficient.
16. A method as claimed in claim 12 wherein an element is a block of transform coefficients.
17. A method as claimed in one of claims 12 to 16, wherein the step of determining the types of attack used comprises assigning an a posteriori probability value for each possible attack stored in the library to each element of the digitally watermarked object (I').
18. A method as claimed in any one of claims 12 to 17, wherein an array is produced that indicates which types of attack are most likely to have been used in each element of the digitally watermarked object (I').
19. A method as claimed in claim 2 or in one of claims 3 to 18 when dependent on claim 2, wherein the library of possible attacks comprises data relating to the effects of possible attacks on a variety of candidate digital objects in the form of histograms for each attack, and the step of determining the types of attack used comprises: deriving values for features from the estimates of the first and second watermarks, wherein the features correspond to the inputs to the histograms; and comparing the values of features to the histogram for each attack to determine the type of attack used.
20. A method as claimed in claim 19 wherein the step of comparing comprises using a Bayesian Classifier to determine the most likely attack.
21. A method as claimed in claim 20 wherein the estimated second watermark, the average coherence of the estimated first watermark with the original in Level 1 wavelet subbands, the average coherence of the estimated first watermark with the original in Level 2 wavelet subbands and the average of the masking function strength for all subbands in the Level 2 wavelet decomposition are the input features of the histograms.
22. A method as claimed in claim 20 wherein the estimated second watermark, the average coherence of the estimated first watermark with the original in Level 1 wavelet subbands, the average coherence of the estimated first watermark with the original in Level 2 wavelet subbands, the average coherence of the estimated second watermark with the original second watermark in Level 1 wavelet subbands and the average coherence of the estimated second watermark with the original second watermark in Level 2 wavelet subbands are the input features of the histograms.
23. A method as claimed in claim 18, 19 or 20 wherein the number of inputs to the histograms is between 3 and 6.
24. A method as claimed in claim 2, or in one of claims 3 to 16 when dependent on claim 2, wherein the library of possible attacks comprises data relating to the effects of possible attacks on a variety of candidate digital objects in the form of a parametric model for each attack, and the step of determining the types of attack used comprises: deriving values of features from the estimates of the first and second watermarks, wherein the features correspond to the features used in the models for each attack; and calculating the probability that each attack has occurred based on the values of the features.
25. A method as claimed in claim 24 wherein the models are constructed using a Bayesian framework.
26. A method as claimed in claim 25 wherein the underlying distribution is assumed to be Gaussian and the model for each attack is constructed by: attacking a variety of candidate digital objects with the attack; estimating the mean vector and covariance matrix from the attacked objects;
deriving the model for the attack by estimating the model parameters from the estimates of the mean vector and covariance matrix.
27. A method as claimed in claim 25 wherein the underlying distribution is assumed to be skew-normal and the model for each attack is constructed by: attacking a variety of candidate digital objects with the attack; estimating the mean vector, the covariance matrix and the skew vector from the attacked objects; deriving the model for the attack by estimating the model parameters from the estimates of the mean vector, the covariance matrix and the skew vector.
28. A method as claimed in any preceding claim, wherein the step of determining the types of attack used is repeated using a different classification technique.
29. A method as claimed in any preceding claim, wherein the digitally watermarked object (I') is compressed using a compression algorithm immediately after the first digital watermark has been applied.
30. A method as claimed in claim 29, wherein the digitally watermarked object (I') is compressed using JPEG compression.
31. A method as claimed in claim 29, wherein the digitally watermarked object (I') is compressed using JPEG2000 compression.
32. A method as claimed in claim 29, wherein the digitally watermarked object (I') is compressed using Embedded Zerotree Wavelet compression.
33. A method as claimed in claim 29, wherein the digitally watermarked object (I') is compressed using Set Partitioning in Hierarchical Trees compression.
34. A computer system, programmed to carry out a method as claimed in any one of claims 1 to 33.
35. A computer software product, containing code for carrying out a method as claimed in any one of claims 1 to 33.
36. A hardware device adapted to carry out a method as claimed in any one of claims 1 to 33.
37. A method of detecting an attack on a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm, to form a digitally watermarked object (I'), the method comprising the steps of: estimating the first digital watermark embedded in the digitally watermarked object (I'); embedding a second watermark into the digitally watermarked object (I') using said watermarking algorithm; estimating the second watermark embedded in the digitally watermarked object (I'); comparing the estimate of the first digital watermark and the estimate of the second digital watermark; and determining whether the digitally watermarked object (I') has been attacked on the basis of the comparison.
38. A method as claimed in claim 37, wherein it is determined that the digitally watermarked object (I')
has not been attacked if the estimates of the first and second digital watermarks are similar.
39. A method as claimed in claim 37, wherein it is determined that the digitally watermarked object (I') has been attacked if the estimates of the first and second digital watermarks are substantially different.
40. A method as claimed in claim 37, wherein the step of comparing the estimates comprises: determining a measure of the differences between the estimate of the first digital watermark and the first digital watermark; determining a measure of the differences between the estimate of the second digital watermark and the second digital watermark; and comparing the measure of the differences between the estimate of the first digital watermark and the first digital watermark and the measure of the differences between the estimate of the second digital watermark and the second digital watermark.
41. A method as claimed in claim 40, wherein it is determined that the digitally watermarked object (I') has not been attacked if the measure of the differences between the estimate of the first digital watermark and the first digital watermark and the measure of the differences between the estimate of the second digital watermark and the second digital watermark are similar.
42. A method as claimed in claim 41 wherein the watermarking algorithm generates digital watermarks using properties of the digital object (I) and the step of comparing indicates whether the measure of the differences are due to properties of the digital object (I) or due to one or more attacks.
43. A method as claimed in claim 42, wherein it is determined that the digitally watermarked object (I') has not been attacked if the estimates of the first and second watermarks have common differences from their respective originals.
44. A method as claimed in claim 42, wherein it is determined that the digitally watermarked object (I') has been attacked if the estimate of the first watermark differs from its original more substantially than the estimate of the second watermark differs from its original.
45. A method as claimed in one of claims 37 to 44, wherein the steps of estimating the first and second watermarks comprise considering elements of the digitally watermarked object (I') in turn to provide an indication of the areas of the digital object (I) that have been subjected to attack.
46. A method as claimed in claim 45 wherein an element is a pixel.
47. A method as claimed in claim 45 wherein an element is a block of pixels.
48. A method as claimed in claim 45 wherein an element is a transform coefficient.
49. A method as claimed in claim 45 wherein an element is a block of transform coefficients.
50. A computer system, programmed to carry out a method as claimed in any one of claims 37 to 49.
51. A computer software product, containing code for carrying out a method as claimed in any one of claims 37 to 49.
52. A hardware device adapted to carry out a method as claimed in any one of claims 37 to 49.
53. A method of estimating a watermark in a digital object (I) into which a first digital watermark has been embedded using a watermarking algorithm to form a digitally watermarked object (I'), wherein the first digital watermark block is known, the method comprising the steps of: setting a range of threshold values; estimating the embedded watermark using each of the threshold values; comparing the estimates of the embedded watermark with the known watermark block; and storing the estimate of the embedded watermark that is most similar to the known watermark block.
54. A method as claimed in claim 53 wherein the step of comparing the estimates comprises calculating the coherence between the estimates and the known watermark block.
55. A method as claimed in claim 53 or 54, further comprising the steps of: embedding a second watermark into the digitally watermarked object (I') using said watermarking algorithm; estimating the second watermark embedded in the digitally watermarked object (I') by: setting a second range of threshold values;
estimating the second embedded watermark using each of the second range of threshold values; comparing the estimates of the second embedded watermark with the known second watermark block; and storing the estimate of the second embedded watermark that is most similar to the known second watermark block.
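The threshold-sweep estimation of claims 53 to 55 might be sketched as follows. This is an illustration only: the `extract` callable standing in for the threshold-dependent extractor, and the toy extractor in the example, are assumptions, and coherence is taken to be the correlation coefficient as in claim 54.

```python
import numpy as np

def estimate_watermark(extract, thresholds, known_block):
    """Extract a candidate watermark at each threshold and keep the one
    most coherent (most correlated) with the known watermark block."""
    best, best_coh = None, -np.inf
    for t in thresholds:
        cand = extract(t)
        coh = np.corrcoef(cand.ravel(), known_block.ravel())[0, 1]
        if coh > best_coh:
            best, best_coh = cand, coh
    return best, best_coh

rng = np.random.default_rng(3)
known = rng.normal(size=(8, 8))
noise = rng.normal(size=(8, 8))
# Toy extractor: larger thresholds corrupt the estimate more.
best, best_coh = estimate_watermark(lambda t: known + t * noise,
                                    [0.0, 0.5, 1.0], known)
```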
56. A computer system, programmed to carry out a method as claimed in any one of claims 53 to 55.
57. A computer software product, containing code for 15 carrying out a method as claimed in any one of claims 53 to 55.
58. A hardware device, adapted to carry out a method as claimed in any one of claims 53 to 55.
GB0306047A 2002-06-27 2003-03-17 Detecting and characterising attacks on a watermarked digital object Withdrawn GB2390251A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/GB2003/002470 WO2004003841A2 (en) 2002-06-27 2003-06-09 Image attack characterisation by double watermarking
AU2003244781A AU2003244781A1 (en) 2002-06-27 2003-06-09 Image attack characterisation by double watermarking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0214985A GB2390246A (en) 2002-06-27 2002-06-27 Method of characterising attacks on a watermarked object

Publications (2)

Publication Number Publication Date
GB0306047D0 GB0306047D0 (en) 2003-04-23
GB2390251A true GB2390251A (en) 2003-12-31

Family

ID=9939474

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0214985A Withdrawn GB2390246A (en) 2002-06-27 2002-06-27 Method of characterising attacks on a watermarked object
GB0306047A Withdrawn GB2390251A (en) 2002-06-27 2003-03-17 Detecting and characterising attacks on a watermarked digital object

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB0214985A Withdrawn GB2390246A (en) 2002-06-27 2002-06-27 Method of characterising attacks on a watermarked object

Country Status (1)

Country Link
GB (2) GB2390246A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679572A (en) * 2017-09-29 2018-02-09 深圳大学 A kind of image discriminating method, storage device and mobile terminal

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN101105857B (en) * 2007-07-20 2010-09-29 北京交通大学 High capacity reversible water mark method based on predication and companding technology

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2002032031A1 (en) * 2000-10-11 2002-04-18 Digimarc Corporation Watermarks carrying content dependent signal metrics for detecting and characterizing signal alteration
GB2374995A (en) * 2001-04-25 2002-10-30 Univ Bristol Watermarking using representative values

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6654479B1 (en) * 1999-08-19 2003-11-25 Academia Sinica Cocktail watermarking on images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002032031A1 (en) * 2000-10-11 2002-04-18 Digimarc Corporation Watermarks carrying content dependent signal metrics for detecting and characterizing signal alteration
GB2374995A (en) * 2001-04-25 2002-10-30 Univ Bristol Watermarking using representative values

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679572A (en) * 2017-09-29 2018-02-09 Shenzhen University Image discrimination method, storage device and mobile terminal
CN107679572B (en) * 2017-09-29 2021-05-04 Shenzhen University Image discrimination method, storage device and mobile terminal

Also Published As

Publication number Publication date
GB0214985D0 (en) 2002-08-07
GB2390246A (en) 2003-12-31
GB0306047D0 (en) 2003-04-23

Similar Documents

Publication Publication Date Title
Karampidis et al. A review of image steganalysis techniques for digital forensics
Wang et al. Optimized feature extraction for learning-based image steganalysis
US7167574B2 (en) Method and apparatus for content-based image copy detection
CN102063907B (en) Steganalysis method for audio spread-spectrum steganography
CN101151622A (en) System and method for steganalysis
Ustubioglu et al. A new copy move forgery detection technique with automatic threshold determination
Zong et al. Blind image steganalysis based on wavelet coefficient correlation
CN108596823A (en) A kind of insertion of the digital blind watermark based on sparse transformation and extracting method
Shi et al. Steganalysis versus splicing detection
Luo et al. Detection of quantization artifacts and its applications to transform encoder identification
WO2004003841A2 (en) Image attack characterisation by double watermarking
Wang et al. Contourlet domain locally optimum image watermark decoder using Cauchy mixtures based vector HMT model
Cho et al. Block-based image steganalysis for a multi-classifier
GB2390251A (en) Detecting and characterising attacks on a watermarked digital object
Pevný et al. Multi-class blind steganalysis for JPEG images
Fadoua et al. Medical video watermarking scheme for telemedicine applications
Chen et al. Identifying computer generated and digital camera images using fractional lower order moments
Progonov Information-Theoretic Estimations of Cover Distortion by Adaptive Message Embedding
Malik Steganalysis of qim steganography using irregularity measure
Li et al. An intelligent watermark detection decoder based on independent component analysis
Sharma et al. Robust Prediction of Copy-Move Forgeries using Dual-Tree Complex Wavelet Transform and Principal Component Analysis
Yu et al. Cumulant-based image fingerprints
CN112348816B (en) Brain magnetic resonance image segmentation method, storage medium, and electronic device
Jin et al. Adaptive digital image watermark scheme based on Fuzzy Neural Network for copyright protection
Xu et al. Segmentation of synthetic aperture radar image using multiscale information measure-based spectral clustering

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)