GB2486987A - Classifying images using enhanced sensor noise patterns
- Publication number: GB2486987A (application GB1200030.3)
- Authority: GB, United Kingdom
- Legal status: Granted
Classifications
- G06F18/24 — Pattern recognition; classification techniques
- G06F18/23213 — Non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
- G06F2218/04 — Signal preprocessing; denoising
- G06V10/20 — Image preprocessing
- G06V10/30 — Noise filtering
- G06V10/42 — Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/75 — Image or video pattern matching; organisation of the matching processes, e.g. simultaneous or sequential comparisons, coarse-fine approaches
- G06V20/90 — Identifying an image sensor based on its output data
- Legacy codes: G06K9/40, G06K9/6267, G06K9/64
Abstract
A method of classifying a plurality of images taken by one or more known or unknown image capture devices. The method comprises the steps of: for each image, extracting an initial Sensor Noise Pattern (SNP) for the image; enhancing the initial SNP to create an enhanced SNP by applying a correcting model, wherein the correcting model scales the initial SNP by a factor which decreases as the magnitude of the signal intensity of the initial SNP increases; identifying a subset of images from the plurality of images; and forming an image classifier based on the subset of images by identifying one or more clusters of images which have identical or similar SNPs. The identification is based on a similarity measure between the enhanced SNP for a given image and those of one or more different images in the subset. The remaining images that were not part of the initial subset are then classified by calculating a comparative measure between each remaining image and each cluster identified in the image classifier, and determining whether the remaining image belongs to an identified cluster based on the comparative measure. The enhanced SNP reduces contamination from the image scene.
Description
Methods for identifying imaging devices and classifying images acquired by unknown imaging devices
Technical field of invention
The invention relates to a method of extracting and enhancing a Sensor Noise Pattern (SNP), or fingerprint, from an image, the SNP being an intrinsic characteristic of the camera which took the image, and comparing the extracted SNP with other SNPs in order to identify other images that contain an identical or similar SNP. In particular, but not exclusively, the present invention is used in the field of digital forensics in order to identify images taken by the same camera.
Background to the invention
It is known, and desirable, in the field of digital forensics to identify images taken by a particular camera. This is of particular importance in fields such as criminal investigation, where an investigator wishes to prove that an individual image, or images, was taken using a specific camera. The classification of images from one or more devices also has applications in commercial sectors, such as classifying and cataloguing images and managing image processing. For example, if a photo is of an indecent nature, e.g. child pornography, being able to prove that a camera was used to take that photo may be used as evidence against the owner of the camera. In particular, if a link between an individual and a particular camera can be established, e.g. where the camera is recovered in a raid, being able to prove that a specific photo, or photos, originates from that camera allows investigators to establish a causal link between the owner and the images.
A method of identifying a camera is via information contained in the metadata of a digital photo. Such metadata may often contain information such as the time and date a photo was taken, as well as a device identifier such as a camera name. However, criminals taking indecent photos will often remove such data in order to subvert the identification process.
Some types of camera will automatically embed a watermark or hash mark into the photos taken with the camera. However, not all cameras have this ability and therefore this identification method is limited to images taken with those particular makes and models. It is therefore desirable to be able to extract a signal that is present in all makes and models of device and that is not easily subverted.
In particular it is desirable, given a set of digital imaging devices such as cameras and scanners, to identify one of the devices that have been used in the acquisition of an image under investigation or return a negative report indicating that the image is taken by an unknown device.
It is known that each camera will have a unique intrinsic sensor noise pattern (SNP) which results from the inhomogeneities of the camera's sensor. These inhomogeneities are specific to a particular sensor and therefore allow for the unique identification of a camera via its sensor or CCD. The terms fingerprint and SNP will be used interchangeably in this specification.
This SNP is present in every image taken by a device, though without processing of the image it is often indistinguishable from the detail of the image.
WO/2007/094856 identifies a method of extracting the SNP present in an image, and comparing the extracted SNP with a set of reference SNPs. These reference SNPs are constructed from imaging devices that are accessible by the investigator. Each reference SNP is constructed by taking the averaged version of the SNPs extracted from a number (of the order of several tens) of low-variation images (e.g., blue sky images) taken by the same device.
A disadvantage of WO/2007/094856 is that the SNPs extracted from images may be highly contaminated by the details from the scene and as a result the misclassification rate is high.
To compensate for the influence from the details of the scene, the whole image has to be analysed in order to achieve an acceptable identification rate. This may result in unacceptably high demands on computational resources. A further disadvantage is that the construction of the reference SNP requires several low-variation images, which, without possession of the originating device, may not be possible to obtain.
Additionally, during a digital forensic investigation the image set that needs to be analysed may contain several thousand images taken by an unknown number of unknown devices. The method of comparison in WO/2007/094856 is a pair-wise comparison method which becomes prohibitively expensive for large data sets.
Typically, a forensic investigator will want to identify, or cluster, images that have been taken by the same device. Some of the many challenges in such a scenario are: the forensic investigator does not have the imaging devices that have taken the images to generate clean reference device fingerprints (such as the reference SNP) for comparison; there is no prior knowledge about the number and types of the imaging devices; the similarity comparison is pair-wise, and with a large dataset exhaustive comparison is computationally prohibitive; and given the sheer number of images, analysing each image at its full size is computationally infeasible.
WO/2007/094856 may seem like a candidate method for the first and simpler task of fingerprint extraction. However, the influence from the details of the scene and the absence of the imaging devices prevent the investigator from acquiring a clean reference SNP. Therefore, unless the investigator has a number of "clean" images from which to extract an SNP (which, unless they are in possession of the originating device, would be incredibly unlikely), this document has limited application. Additionally, this document is unable to perform the clustering task of identifying images taken by the same, possibly unknown, device.
Summary of the invention
To mitigate at least some of the above problems in the prior art, there is provided a method of extracting an SNP from a single image and removing the contaminants from said image to allow identification of other images that have the same or similar SNP.
There is also provided a method for classifying a large number of images based on their SNP, where the number of originating devices is unknown.
Further aspects, features and advantages of the present invention will be apparent from the following description and appended claims.
Brief description of the drawings
An embodiment of the invention will now be described by way of example only, with reference to the following drawings, in which:
Figure 1 is a schematic of the apparatus used in the present invention;
Figure 2 is a flow chart of the processes of extracting an SNP and identifying a potential match according to an aspect of the invention;
Figure 3 shows an example of the extraction of an image fingerprint;
Figure 4 shows the functions of the models;
Figure 5 is a flow chart of the processes of classifying a large number of images according to their SNP according to another aspect of the invention;
Figure 6 shows an example of a population of unclassified fingerprints; and
Figure 7 shows the classified fingerprints of Figure 6.
Detailed description of an embodiment of the invention
Figure 1 shows a schematic of the apparatus used in the present invention. There is shown an image source 2, a database 4, a Sensor Noise Pattern (SNP) extractor 6, an SNP enhancer 8, a similarity calculator 10, a classifier trainer 12 and a classifier 14.
The image source 2 is any known source, e.g. an image capture device such as a digital camera, the internet, a mobile telephone etc. The present invention is able to work with any image taken with a device that has a sensor to detect an image, as the SNP is an intrinsic property of the imaging device.
Images from the image source 2 are downloaded to a database 4. The term database 4 is used as a generic term to describe any memory store for a computer. A stored image is passed from the database 4 to the SNP extractor 6, where an initial SNP is extracted from the image.
This process is discussed in further detail with respect to Figure 2 and its corresponding text.
The initial SNP is then passed from the SNP extractor 6 to the SNP enhancer 8. Optionally, the initial SNP may be stored on a form of writeable memory or database 4 for reference. The SNP enhancer 8 enhances the initial SNP. This process is described in detail with reference to Figures 2, 3 and 4 and their associated text. This enhanced SNP is also stored in the database 4. The enhanced SNPs for individual images are compared by the similarity calculator 10.
The similarity calculator 10 is described further with reference to Figure 2 and Equations 7 and 8 and their associated text.
The classifier 14 and the classifier trainer 12 contain suitable processing means to group images according to criteria based on their SNP. The functions of the classifier trainer 12 and the classifier 14 are described in detail with reference to Figure 5. Once a group of images has been identified based on their SNP characteristics, they are preferentially stored in the database 4, with some means, e.g. metadata, to identify the groups.
The present invention may be performed on a suitable computing device such as a desktop or personal computer, which is enabled to analyse and perform the calculations described herein. The skilled man would understand that the present invention may be implemented on a single computer, a network of computers or over the internet without deviating from the inventive concepts.
Figure 2 describes the overall process of extracting a SNP for a given image and identifying the originating device of the image or other images that originate from the same device according to an aspect of the invention.
There is shown the step of collecting the images at step S102, initial SNP extraction at step S104, enhancing the SNP at step S106, and calculating a similarity measure at step S108.
The collecting of images at step S102 may occur by any known method of image collection.
In the field of forensic analysis this may involve downloading images from the hard drive of a seized personal computer or images found on a website.
Once an image has been collected at S102, an initial SNP is extracted at step S104. The method used to extract the initial SNP is that described in WO/2007/094856. The strength of the SNP is dependent on the sensor itself and the conditions in which the image was taken; the contribution of the SNP varies for each individual image and therefore needs to be determined for each image. The model used to extract the SNP, n, from an image I is

n = I − F(I) (1)

where F is a denoising function which filters out the sensor noise pattern. The choice of the denoising function F is critical in the extraction of the SNP.
Various denoising filters may be used as F, and in the preferred embodiment the wavelet-based denoising filter described in Appendix A of Lukáš et al., "Digital Camera Identification from Sensor Pattern Noise," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 205-214, June 2006, is used. This filter has been found to be effective, though other denoising filters may be used. This wavelet-based denoising filter filters the images in the frequency domain. Other frequency-based or spatial denoising filters may also be used. A key limitation of Eq. (1) is that the SNP is highly contaminated by the details from the scene. The extent of this limitation is apparent in Figures 3(a), (b) and (c).
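By way of illustration, the following Python sketch shows one way Eq. (1) might be implemented for a single greyscale image. The function name and parameters are assumptions, and the soft-thresholding wavelet denoiser used here is only a simplified stand-in for the adaptive Lukáš et al. filter named above.

```python
import numpy as np
import pywt  # PyWavelets


def extract_initial_snp(image: np.ndarray, wavelet: str = "db8",
                        level: int = 4, threshold: float = 5.0) -> np.ndarray:
    """Initial SNP per Eq. (1): n = I - F(I), for a 2-D greyscale image.

    F is approximated by soft-thresholding the wavelet detail coefficients;
    this only approximates the patent's preferred Lukas et al. filter.
    """
    img = image.astype(np.float64)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    denoised = [coeffs[0]]  # keep the coarse approximation band untouched
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, threshold, mode="soft")
                              for c in detail))
    f_img = pywt.waverec2(denoised, wavelet)
    f_img = f_img[:img.shape[0], :img.shape[1]]  # waverec2 may pad odd sizes
    return img - f_img
```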
Figure 3(a) shows the reference SNP taken from a camera using the average SNP of 50 images taken of a blue sky, Figure 3(b) shows a natural scene taken using the same camera and Figure 3 (c), the SNP extracted from Figure 3 (b) using the method of WO/2007/094856.
Figure 3(d) shows the enhanced SNP extracted from Figure 3(b).
Multiple images of blue sky, or any other set of images of flat featureless surfaces, are ideal as they provide flat images with low signal variation, making SNP extraction a relatively trivial task. However, most images contain detail, from which the SNP is more difficult to extract. In Figure 3(b) this detail is present in the form of a building. It is immediately apparent in Figure 3(c) that the extracted SNP is highly contaminated by the original signal of the relatively banal image shown in Figure 3(b). Therefore, it is clear that the initial extraction of the SNP at step S104 does not provide a sufficiently "clean" SNP from which an accurate comparative measure may be made.
Therefore, unless the image contains only featureless, low-variation content, e.g. a blue sky or a white wall, the initial SNP is of limited use as the contaminants from the scene dominate the SNP. Additionally, even for featureless images, several images must be averaged to produce an uncontaminated SNP. This is of limited practical value as such images are not routinely taken.
Therefore, it is necessary to manipulate the initial SNP to obtain an enhanced SNP, which occurs at step S106. The key insight of this process is the realisation that, in the vast majority of images where some form of detail is present, the stronger a SNP component in n is, the less trustworthy that component should be. An enhanced fingerprint n_e can therefore be obtained by assigning weighting factors that are inversely proportional to the magnitude of the initially extracted SNP components.
The invention can use a number of different models to filter the image, whose functions are based on the above premise. In the preferred embodiment, in conjunction with the wavelet-based denoising filter, the following five models are used as these are found to be the most effective:

Model 1:
n_e(x,y) = e^(−0.5 n²(x,y)/α²), if 0 ≤ n(x,y)
n_e(x,y) = −e^(−0.5 n²(x,y)/α²), otherwise (2)

Model 2:
n_e(x,y) = 1 − n²(x,y)/α², if 0 ≤ n(x,y) ≤ α
n_e(x,y) = −1 + n²(x,y)/α², if −α ≤ n(x,y) < 0
n_e(x,y) = 0, otherwise (3)

Model 3:
n_e(x,y) = 1 − e^(−n(x,y)), if 0 ≤ n(x,y) ≤ α
n_e(x,y) = (1 − e^(−α)) e^(−(n(x,y)−α)), if n(x,y) > α
n_e(x,y) = −1 + e^(n(x,y)), if −α ≤ n(x,y) < 0
n_e(x,y) = (−1 + e^(−α)) e^(n(x,y)+α), if n(x,y) < −α (4)

Model 4:
n_e(x,y) = n²(x,y)/α², if 0 ≤ n(x,y) ≤ α
n_e(x,y) = e^(−0.5 (n(x,y)−α)²/α²), if n(x,y) > α
n_e(x,y) = −n²(x,y)/α², if −α ≤ n(x,y) < 0
n_e(x,y) = −e^(−0.5 (n(x,y)+α)²/α²), if n(x,y) < −α (5)

Model 5:
n_e(x,y) = n(x,y)/α, if 0 ≤ n(x,y) ≤ α
n_e(x,y) = e^(−0.5 (n(x,y)−α)²/α²), if n(x,y) > α
n_e(x,y) = n(x,y)/α, if −α ≤ n(x,y) < 0
n_e(x,y) = −e^(−0.5 (n(x,y)+α)²/α²), if n(x,y) < −α (6)

where n(x,y) and n_e(x,y) are the (x,y)th components of n and n_e, respectively, and α is a scaling factor which determines the scaling rate, as the models are not linear. It is empirically observed that α = 7 is the optimal value for all five models. In the preferred embodiment the default setting is Model 1 (as defined by Eq. 2) with α = 7, though other models, including those not listed, and other values of α may also be used.
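A minimal sketch of the preferred enhancer, assuming the reconstruction of Model 1 (Eq. 2) given above; the function name and the use of NumPy are illustrative:

```python
import numpy as np


def enhance_snp_model1(n: np.ndarray, alpha: float = 7.0) -> np.ndarray:
    """Model 1 of Eq. (2): each SNP component is mapped to a weight that is
    1 at n = 0 and decays as |n| grows, with the sign of n preserved.
    alpha = 7 is the empirically optimal scaling factor."""
    weight = np.exp(-0.5 * n ** 2 / alpha ** 2)
    return np.where(n >= 0.0, weight, -weight)
```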
The functions of the five models are shown graphically in Figures 4(a) to (e), which show Models 1 to 5 respectively. The horizontal and vertical axes represent the contaminated fingerprint n and the enhanced fingerprint n_e respectively. All five models follow the basic premise that the initial SNP is weighted by an inverse function.
Models 1 and 2 allow the magnitude of n_e to decrease monotonically with respect to the magnitude of n. Models 3, 4 and 5 allow the magnitude of n_e to grow monotonically in accordance with the magnitude of n if |n| ≤ α, and to decrease monotonically and rapidly with respect to n if |n| > α. From Eq. (2)-(3) and Figures 4(a)-(b) we can see that α determines the decreasing rate. In Eq. (4)-(6) and Figures 4(c)-(e), α determines the point where the magnitude of n_e(x,y) starts decreasing, and the increasing and decreasing rates before and after α.
Figure 3(d) shows the enhanced SNP extracted from Figure 3(b) by applying model 1 with a = 7 to the initial SNP as shown in Figure 3(c). It is immediately apparent that the contamination seen in Figure 3(c) has been removed. A further advantage of the use of a model to enhance the SNP is that an uncontaminated SNP can be extracted from a single image.
The implementation of the wavelet denoising filter and enhancing the SNP by the use of the models is performed using known computing methods for image manipulation.
In an embodiment of the invention, the initial SNP and an enhanced SNP for a given image are stored, preferably in a database 4, with metadata detailing the parent image from which they were extracted so that the parent image may be identified.
In an embodiment of the invention the SNP for each image is compared so that identical or similar SNPs may be identified.
The SNP for a given device, say a digital camera, is stable and therefore will not change significantly over time. Therefore, all images taken with the same device will have substantially the same SNP, thereby allowing images taken with the same device to be identified via their SNP. In order to identify images with the same SNP, a similarity measure is required. Similarity is quantified by the metric defined in Equation 7, which compares an extracted enhanced SNP against a given reference SNP. A single SNP, or multiple SNPs, may be used as the reference SNPs, in which case a similarity measure for each reference SNP is obtained. This score is preferably stored in a database 4 with metadata associating it with the originating image and enhanced SNPs.
To identify the source imaging device that has taken an image I under investigation from D devices, we use the correlation ρ_d, as formulated in Equation 7, between the enhanced sensor noise pattern n_e of image I and the reference SNP P_d of device d, d ∈ {1, 2, ..., D}, as a similarity metric.
ρ_d = ((n_e − n̄_e) · (P_d − P̄_d)) / (‖n_e − n̄_e‖ ‖P_d − P̄_d‖) (7)

where n̄_e and P̄_d are the means of n_e and P_d, respectively. The larger the value of ρ_d, the higher the likelihood that a given image I was taken by a given device d. In the case where D > 1, the device d̂ that yields the highest correlation ρ_d is identified as the device that has taken image I, if that correlation is greater than a threshold τ set by the user, i.e., d̂ = argmax_d(ρ_d), d ∈ {1, 2, ..., D}.
Otherwise, the identifier should report that the image was taken by an unknown device. It is noted that a value of τ = 0.01 is found to produce an accurate match. The similarity metric may be computed by comparing the entire image or the same subsection of two or more images.
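The following sketch illustrates the normalised correlation of Eq. (7) and the thresholded argmax identification described above. The function names and the list-of-references interface are assumptions, not part of the specification:

```python
import numpy as np


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised correlation of Eq. (7)/(8) between two flattened SNPs."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify_device(n_e: np.ndarray, references: list[np.ndarray],
                    tau: float = 0.01) -> int | None:
    """Return the index of the reference SNP with the highest correlation,
    or None ('unknown device') if no correlation exceeds the threshold tau."""
    scores = [similarity(n_e, p_d) for p_d in references]
    best = int(np.argmax(scores))
    return best if scores[best] > tau else None
```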
The time taken for the similarity metric to be calculated is clearly dependent on the size of the image used and the number of reference devices D for which there is an SNP. This can quickly become prohibitively expensive in terms of the number of computations for large images and/or a large reference data set D. The size of the similarity metric is also dependent on the dimensions of the images used, e.g. 1024 x 2048 pixels, and therefore can rapidly become unacceptably large in terms of memory and storage requirements.
In forensic analysis the number of images that form a dataset may be many thousands, and therefore performing the similarity calculation for all reference devices may become unfeasible.
Therefore, it is not desirable to obtain a similarity measure for all images, nor is it desirable to use the whole of the image. The present invention therefore also provides an optimised method of classifying images according to their SNP.
There is provided a method of overcoming this problem with an unsupervised digital image classification method.
Additionally, images may be cropped or rotated before they are released. The similarity metric as described above requires the same subsection of two or more images to be compared, or else a match will not be returned even if they have identical SNPs (as the comparison would be between different areas of an SNP, which are unlikely to be the same).
Therefore, there is also a need to be able to compensate for potential manipulation of the image.
Figure 5 shows a flow process of the invention including the steps of classifying the images according to the preferred embodiment.
There is shown the step of initialisation of the dataset at step S200, extraction and enhancement of the SNP at step S202, establishing a similarity matrix for a training set at step S204, assigning a random class label at step S206, calculating a reference similarity at step S208, establishing membership criteria at step S210, updating class labels at step S212, determining if the stop criteria have been met at step S214, classifying the remaining images in the dataset using the trained classifier at step S216 and classifying any abnormal photos at step S218.
The above steps may be broadly classified as four separate modules: digital fingerprint extraction and enhancement, similarity measurement of the images of the training set, classifier training and image clustering. The purposes of each module are described as follows.
Digital fingerprint extraction and enhancement: This is as described above.
Similarity measurement of the images of the training set: Given the unacceptably large costs in terms of computation and storage that would occur when using the full dataset (potentially containing thousands or millions of images), a training set is established which is representative of the total set. Analysis is performed on this training set to provide the ensuing classifier training module with an M x M similarity matrix ρ, with element ρ(i,j) indicating the similarity between SNPs i and j, where M is the number of images used to form the training set.
Classifier training: The training set is used to determine potential membership classes and the criteria for these classes. Once determined, the similarity comparison need only be performed against each class and not all the images. This allows for the reduction of the number of comparisons that need to be made in order to identify the originating device.
The classifier training module, in the preferred embodiment, takes a small sub-set of the images of the entire dataset at random and assigns them to a number of classes according to the similarity provided by the previous sub-task. Each class corresponds to one imaging device (known or unknown), and a centroid for each class is then calculated. The centroid is equivalent to the "average" SNP for that class.
In the preferred embodiment the number of classes is inferred and the class assignments are made adaptively without the user providing a threshold. This allows for the unsupervised classification of the images. In further embodiments the user sets thresholds to classify images. This may occur, for example, where the user is aware of the originating device and can cluster a number of images taken from that device, thereby avoiding the need to define a class or device assignment. However, in practice such a situation is rare and therefore an unsupervised definition of the classes is preferred, as it also removes any user biases, which are often unquantifiable.
The entire dataset could be provided to this module so that the next image clustering module can be excluded. However, doing so incurs unacceptable memory costs for storing the similarity matrix, and unacceptable computational cost, when the dataset is large. The size of the training set depends on the size and the anticipated diversity of the entire dataset. Therefore no theoretical backing for determining the size of the training set is available, though it is found that a training set of 300 images, regardless of the size of the actual set (assuming naturally that the data set is larger than 300), is sufficient.
Image clustering: Given the class centroids provided by the classifier trainer, this module assigns each image's SNP i in the non-training set to the class with the centroid most similar to i's SNP. As discussed previously, it is immediately apparent that by training the classifier with a sample of the entire image dataset the number of calculations required, and therefore the computational requirements, are greatly reduced.
In order to compensate for the potentially random orientation of the images, the following embodiment describes a method to overcome this problem. The system is initialised at step S200 where the parameters of the system are determined.
Most photos are taken with detectors whose dimensions are powers of 2 pixels, e.g. 256, 512, 1024 etc., or produce images of known dimensions, e.g. 4288 x 2848, 2544 x 1696, 1728 x 1152 pixels etc. These dimensions change between makes and models of camera, but cameras of the same make and model (sometimes several models) produce the same image dimensions. In an embodiment of the invention, images that do not conform to these known dimensions, or are of a different size, are flagged and removed from the dataset. Due to their non-standard features, and in order to increase the efficiency of the invention, these are considered separately at the end of the process, at step S218.
Optionally, the user of the invention may specify information to reduce the data set of images to be processed or potentially increase the likely matches. If in an investigation it is known that pictures were taken at a particular time and/or date, then in order to reduce the size of the data set to be processed, the metadata present in images may be used to find associated data within the data set. Metadata on a typical photo will comprise information such as a time, date and camera identifiers, though this information is not always present or may be subverted. By reading this metadata, it is possible to filter photos by the information stored in it. This may include the time, date, camera make or model, or combinations of this data.
In another embodiment, the photos may be tagged with one or more keywords to describe the content of the photo. This information is preferably stored in the database 4 along with a reference to the originating photo. These keywords may be, for example, "building", "aeroplane", "crowd", "bus station", "child", "football match" etc. During the initialising stage, the images to be identified may be selected by these keywords. The tags used need not be general terms but may also be specific, e.g. a car license plate, an individual person etc. In a further embodiment, the images may be identified using known scene or facial recognition software to automatically assign tags that relate to the scene.
Clearly, such filtering is advantageous in reducing the data set and therefore the computer time required to analyse it.
Preferably, images from cameras that are identified as being of the same make and model are processed together, as matches are more likely within these subsets than, say, between images taken by two different brands of camera. Further reductions in the data set may be made by only searching image sizes supported by a camera, e.g. only analysing images that are 4288 x 2848 pixels, typical of a Sony™ digital camera, and ignoring images that are 3872 x 2592 pixels, as may be found on a Pentax™ camera.
However, as the metadata may be changed with relative ease, it is possible that images that have identical metadata, purporting to originate from the same device may in fact originate from different devices. The present invention is able to identify such cases as the SNPs for these images would not match.
As digital images are, in general, rectangular, the degeneracy in the number of possible relative orientations of the SNP is two (i.e. it is impossible to tell if the SNP of an image is the "right way up" or "upside down" for a given orientation). An initialising step is therefore to orientate all images so that the horizontal axis is the longest. Horizontally oriented images are left intact and vertically oriented images are turned 90 degrees clockwise. This set, including those images which are left intact, is called DATASET 1. DATASET 2 is then created by rotating each image in DATASET 1 by 180 degrees. Whilst this increases the number of images in the dataset, it also ensures that images with the same SNP are guaranteed to have an SNP that is orientated in the same relative direction.
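As an illustration of this initialising step, a sketch of the orientation normalisation might look as follows (the function name is an assumption; np.rot90 with k=-1 performs the 90-degree clockwise turn):

```python
import numpy as np


def build_orientation_datasets(images: list[np.ndarray]):
    """Normalise orientation: landscape images are kept, portrait images are
    turned 90 degrees clockwise (DATASET 1); DATASET 2 is DATASET 1 rotated
    by 180 degrees."""
    dataset1 = [img if img.shape[1] >= img.shape[0] else np.rot90(img, k=-1)
                for img in images]
    dataset2 = [np.rot90(img, k=2) for img in dataset1]
    return dataset1, dataset2
```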
Once the two datasets are created, fingerprint extraction can begin. However, as mentioned previously, it is undesirable to perform this extraction on the whole image and on the whole dataset. Therefore, a small subsection of each image is used in the classification process. The subsection to be sampled is taken from the same place in each picture; for ease this is taken to be the centre of the image, as it is found that the centre of the image is less likely to be saturated, thereby allowing the extraction of the SNP, though any subsection may be sampled. It is found that a block size of greater than or equal to 256 x 512 pixels (or 512 x 256) is large enough to provide a sufficient sample of a given SNP to be able to determine if a match is present to a high degree of confidence. Larger block sizes are indeed preferable but also result in an increase in the computational requirements.
Further initialising steps include the selection of the enhancing model and the value of α used. As discussed with respect to Figure 2, the preferred model is Model 1 as defined by Equation 2, with a value of α = 7.
The step of selecting the subset of images from the dataset to be used as the training set is also performed at this stage. The size of the training set is M images, where M may be specified by the user or taken as a pre-set number. The value of M depends on the size and the anticipated diversity of the entire dataset; therefore, no theoretical backing for determining the size of the training set is available. If M is set by the user, computing resource and time constraints should be considered. Moreover, since there is no ground truth in real forensic cases, good practice to ensure that the classifier provides an "accurate" result is to execute the classifier multiple times and verify the consistency of the results. It is found that approximately 300 images form a sufficiently diverse sample for a given image set.
Once the system is initialised at step S200, the SNP for all M images of the training set is extracted at step S202. The extraction and enhancing of the SNP occur as described previously with reference to Figure 2. The extraction only occurs for the subsection of the images as defined in step S200 (e.g. the 256 x 512 block in the top left corner of all images). This reduces the number of calculations required to extract and enhance the SNP, thereby reducing computational requirements.
Once all the enhanced SNPs have been extracted for all M images of the training set, the similarity of the SNPs in the training set is determined at step S204. The similarity between any two enhanced digital fingerprints i and j is calculated using a slightly modified version of Equation (7):
ρ(i,j) = ((n_i − n̄_i) · (n_j − n̄_j)) / (‖n_i − n̄_i‖ ‖n_j − n̄_j‖), i, j ∈ {1, 2, 3, ..., M} (8)
As these values are frequently reused during the ensuing classifier training stage, to reduce computational cost an M x M similarity matrix ρ is established. Element ρ(i,j) therefore indicates the similarity between fingerprints i and j. This matrix is stored, and thus the matrix ρ can be queried at element ρ(i,j) for future references to the similarity between SNPs i and j, thereby avoiding the need for repeated calculation.
In order to overcome the random orientation problem when calculating the similarity between two images i and j, four combinations need to be taken into account (i.e. (i of DATASET 1, j of DATASET 1), (i of DATASET 1, j of DATASET 2), (i of DATASET 2, j of DATASET 1) and (i of DATASET 2, j of DATASET 2)). Therefore the invention calculates the similarity for each combination and takes the maximum (i.e. the most likely match) of the four similarity values as the (i,j)th and (j,i)th elements of the similarity matrix.
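A sketch of the similarity-matrix construction with this four-way orientation maximum, reusing the similarity() helper from the earlier sketch (the function name and list-based interface are assumptions):

```python
import numpy as np


def similarity_matrix(snp1: list[np.ndarray],
                      snp2: list[np.ndarray]) -> np.ndarray:
    """M x M matrix p; element (i, j) is the maximum similarity over the
    four DATASET 1 / DATASET 2 orientation combinations."""
    m = len(snp1)
    p = np.eye(m)  # self-similarity is 1 on the diagonal
    for i in range(m):
        for j in range(i + 1, m):
            p[i, j] = p[j, i] = max(
                similarity(snp1[i], snp1[j]), similarity(snp1[i], snp2[j]),
                similarity(snp2[i], snp1[j]), similarity(snp2[i], snp2[j]))
    return p
```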
Once the matrix ρ has been calculated, the invention commences the classifier training module. The purpose of the classifier is to assign each image of the training set to an optimal class based on the similarity measurements calculated in step S204. Formally, this may be expressed as follows: there are K classes of images in the training set, with the value of K unknown. Denote D = {d_k | k = 1, 2, ..., K} as the set of class labels and f_i, f_i ∈ D, as the class label/class membership of SNP i. The objective of the classifier trainer is to assign an optimal class label d_k to each SNP i, in an iterative manner, until a set of stop criteria are met.
The first step is to assign a unique class label to each SNP at step S206. For example, if the training set consists of 300 images, 300 different class labels are required. Formally, K is unknown, and therefore each fingerprint i is treated as a singleton class (i.e., we assume that K = M), with each assigned a unique class label (i.e., f_i = d_i, i ∈ {1, 2, 3, ..., M}). Thus, the starting condition is that there are M singleton clusters, where M is the size of the training set.
At step S208 a reference similarity is calculated for each SNP. Whilst there are K unknown classes (or devices) in a set, it is possible to determine the probable number of devices, or the value of K, using the following premise: the similarity between fingerprints of the same class (the intra-class similarity) is expected to be greater than the similarity between fingerprints of different classes (the inter-class similarity). For each SNP i, using a known k-means algorithm, the M-1 similarity values between i and the rest of the training set are clustered into two groups, an intra-class and an inter-class group. The separation of the average centroids of the two clusters, as defined by the k-means algorithm, is calculated and stored as the reference similarity r_i. Although the similarity values are both scene- and device-dependent, the step of enhancing the SNP (S202) reduces this dependency. For a given SNP i, its reference similarity r_i may be used to distinguish between intra- and inter-class members. It is found that most intra-class similarity values are greater than r_i while most inter-class similarity values are less than r_i.
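The specification does not give the exact form of r_i. The sketch below assumes a simple 1-D two-means clustering of the M-1 similarity values and takes r_i as the midpoint between the two centroids, which matches the stated behaviour that intra-class values fall above r_i and inter-class values below it:

```python
import numpy as np


def reference_similarity(sims: np.ndarray, iters: int = 50) -> float:
    """1-D 2-means over the M-1 similarity values for SNP i.

    r_i is taken here (an assumption) as the midpoint between the
    intra-class and inter-class centroids."""
    c_lo, c_hi = sims.min(), sims.max()  # initial centroids
    for _ in range(iters):
        mask = np.abs(sims - c_hi) < np.abs(sims - c_lo)
        hi, lo = sims[mask], sims[~mask]
        if len(hi) and len(lo):
            c_hi, c_lo = hi.mean(), lo.mean()
    return float((c_hi + c_lo) / 2.0)
```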
At step S210 a membership committee C_i for each SNP i in the training set is established.
The membership committee contains the SNPs that are the most similar to an SNP i, i.e. have the highest similarity measures as calculated at step S204. The size of the committee, i.e. the number of similar SNPs that are chosen, can be M-1 or a subset of the training set. In the preferred embodiment a subset of the training set is chosen, i.e. |C_i| < M-1, though a value of |C_i| = M-1 may be used.
During the first iteration each SNP i is still assigned the unique class label from step S206. The class labels of the membership committee C_i are used to define a new label for the committee. Therefore a high value of |C_i| is potentially computationally expensive, and therefore a subset is preferred.
Once the committee C_i for each SNP i has been established, the class label f_i is updated at step S212. The following process allows groups that have similar SNPs to be identified and assigned the same class label.
For each SNP i, a cost p_i(l) (defined below) is calculated for each committee member j of C_i (i.e., j ∈ C_i); the cost is evaluated for the class label f_j of each member j. Therefore, if |C_i| is large the computational cost is high, and thus in the preferred embodiment |C_i| < M-1 at step S210.
Once all the costs for the class labels f_j have been calculated, the class label with the lowest associated cost, f_(lowest cost), for the committee is determined. The originally assigned class label f_i is then updated with the value of f_(lowest cost). This process is repeated for all SNPs, that is to say all i.
Let L denote the set of class labels currently assigned to the members of C_i and to i itself (i.e., L = {l | l ∈ {f_i} ∪ {f_j | j ∈ C_i}}). The cost p_i(l) is defined as

p_i(l) = Σ_{j=1..c} s(l,j) (ρ(i,j) − r_i) (9)

where ρ(i,j) is the similarity (as defined in Equation (8)) between i and the jth member of C_i, r_i is the reference similarity (as calculated at step S208), c is the number of members of C_i, and s(l,j) is a sign function defined as

s(l,j) = +1, if l ≠ f_j
s(l,j) = −1, if l = f_j (10)

where l is an arbitrary class label in L whose cost is being calculated and f_j is the class label of the jth member of C_i. From Eq. (9) and (10) we can see that:
* When ρ(i,j) > r_i, ρ(i,j) is an intra-class similarity value: fingerprints i and j are expected to belong to the same class, representing the case where the SNPs might be expected to originate from the same device. In this case:
a) If class label l ≠ f_j, which is inconsistent with the expectation, the value of s(l,j) = 1 results in a positive value (i.e., a penalty) added to the cost p_i(l).
b) If class label l = f_j, which is consistent with the expectation, the value of s(l,j) = −1 results in a negative value (i.e., a gain) added to the cost p_i(l).
* When ρ(i,j) < r_i, ρ(i,j) is an inter-class similarity value and fingerprints i and j are expected to belong to different classes. In this case:
c) If class label l ≠ f_j, which is consistent with the expectation, the value of s(l,j) = 1 results in a negative value (i.e., a gain) added to the cost p_i(l).
d) If class label l = f_j, which is inconsistent with the expectation, the value of s(l,j) = −1 results in a positive value (i.e., a penalty) added to the cost p_i(l).
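A sketch of this relabelling step, assuming the reconstruction of Eq. (9) and (10) above; the data layout and function names are illustrative:

```python
import numpy as np


def label_cost(l: int, i: int, committee: list[int], labels: list[int],
               p: np.ndarray, r_i: float) -> float:
    """Cost p_i(l) of Eq. (9): sum of s(l, j) * (p(i, j) - r_i) over the
    committee C_i, with the sign function s of Eq. (10)."""
    cost = 0.0
    for j in committee:
        s = -1.0 if l == labels[j] else 1.0  # Eq. (10)
        cost += s * (p[i, j] - r_i)
    return cost


def update_label(i: int, committee: list[int], labels: list[int],
                 p: np.ndarray, r_i: float) -> int:
    """Return the candidate label from L with the lowest cost (step S212)."""
    candidates = {labels[i]} | {labels[j] for j in committee}
    return min(candidates,
               key=lambda l: label_cost(l, i, committee, labels, p, r_i))
```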
The skilled man would understand that step S212, for each SNP i, identifies similar SNPs as determined by their similarity scores. Step S212 therefore clusters similar SNPs together, eventually allowing for the identification of images that originated from the same device, by virtue of each member of the cluster having the same class label.
Once step S212 has been performed for all SNPs i, the invention checks whether the stop criterion has been met at step S214. The stop criterion in the preferred embodiment is when there are no changes of class label for any fingerprint in x consecutive iterations. It is found that when using a training set of M = 300 fingerprints, setting x = 1 is enough. Clearly on the first pass of step S212 this criterion will not be met, and therefore step S212 is performed again.
Those skilled in the art will understand that upon each successive pass of step S212, clusters of similar SNPs will form and will be identified as having the same class label (which will be f_(lowest cost)). A visual representation of the clustering and classifying steps is shown in Figures 6 and 7.
Figure 6 shows a synthetic dataset of 150 data points. There is shown the synthetic data points plotted on arbitrary three-dimensional axes.
The data dimensions of the similarity metric are determined by the size of the data block used to extract the SNP as defined in the initialising step S200, e.g. 256 x 512, which cannot be represented in 2-D. Therefore, a synthetic plot is used to illustrate the clustering techniques used. As can be seen from Figure 6, there are some clusters of data points which may be identified by eye. However, such identification results in undesirable and unquantifiable biases. Steps S212 and S214 provide an unbiased method of identifying clusters.
Figure 7 shows the same data set as in Figure 6 where the invention has classified the data and assigned class labels to the clusters according to the method described above. Points which share the same class labels are circled.
Once the iterative process of steps S212 and S214 has met the desired criteria, the stage of training the classifier is complete. With the trained classifier, the remaining images that did not form part of the training set may be classified at step S216.
The centroids of the clusters (as identified as having the same class label) are calculated and these centroids are used to classify the remaining images that did not form the training set.
The enhanced SNP for these images has already been extracted at step S202. Each image will have two SNPs, to overcome the orientation problem described earlier. To classify an SNP i, we compare the similarity of its two fingerprints (one associated with DATASET 1 and the other with DATASET 2) to each centroid of the clusters as identified in the trained classifier.
The similarity is calculated as described previously according to Equation (8). This returns two similarity values for each SNP (one for each orientation), and the greater of the two values (i.e. the most similar) is retained. Once the similarity measure for each cluster has been determined, the SNP i is assigned to the cluster with the highest similarity measure.
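A sketch of this assignment step (step S216), reusing the similarity() helper from earlier; the names and interface are assumptions:

```python
import numpy as np


def classify_image(snp_d1: np.ndarray, snp_d2: np.ndarray,
                   centroids: list[np.ndarray]) -> int:
    """Assign an unseen image to the cluster whose centroid is most similar
    to either of the image's two orientation fingerprints."""
    scores = [max(similarity(snp_d1, c), similarity(snp_d2, c))
              for c in centroids]
    return int(np.argmax(scores))
```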
During the image clustering process, the centroids of the classifier can either be fixed throughout the entire process or updated when new images are assigned to the corresponding classes. The update is accomplished by recalculating the average fingerprint of the classes which take in new members. The update operation has a negative impact on the efficiency of the classifier without necessarily improving classification accuracy. It is found that there is no need to update the centroid of classes with more than 20 members.
The invention is advantageously able to identify new clusters in the preferred embodiment. If the similarity between a fingerprint and the most similar centroid is less than a threshold set by the user, a new class with that fingerprint as the founding member is created and allowed to attract new members, just like the classes identified by the trainer. Therefore, even if the training set does not contain images which originate from the device of the new founding member, it will be identified as a new device. This adaptability therefore allows the present invention to work successfully without knowledge of the devices or indeed the number of devices that are present. This is particularly advantageous in the application of digital forensics, where little or no knowledge of the devices is available.
Those skilled in the art will also realise that the above process allows for unsupervised classification without the user specifying/guessing the similarity threshold r_i or the number of classes K, because: * The fact that, for each fingerprint, the similarity values between it and the rest of the training set can be grouped into intra-class and inter-class values, as described in step S208, facilitates adaptive determination of r_i automatically by the trainer. This adaptivity also makes the classifier applicable to new databases without tuning any parameters.
* The trainer starts with a class label space/set as big as the training set (i.e., the worst case, with each fingerprint as a singleton class) and the most similar fingerprints are always kept in i's membership committee C_i, so the classes can merge and converge quickly to a certain number of final clusters in a few iterations. The term ρ(i,j) − r_i in Eq. (9) also provides adaptivity and helps the trainer to converge, because it gives the fingerprints with a similarity value ρ(i,j) farther away from the reference similarity more say in determining the class label for the fingerprint in question. That is to say, Eq. (9) allows the trainer to exploit the power of the discriminative/influential fingerprints and maintain a high degree of immunity to errors due to the less discriminative ones.
Once the remaining images have been classified using the trained classifier at step S216, the invention considers the images that were flagged as "abnormal" during the initialisation step S200.
These special cases are advantageously considered at step S218 as it allows for all the clusters identified during steps S212 and S216 to be used to determine their likely origin.
Additionally, due to the high computational costs associated with this step, it is desirable to perform it as few times as possible. Therefore, whilst analysis of the abnormal photos may occur alongside analysis of all other photos, it is more efficient to do so only after all, or the majority, of the photos have been analysed.
As the objective is to determine which class these abnormal cases belong to, it is clearly more efficient to perform this step once all the classes have been identified. However, a problem is that, as these abnormal images are often cropped, there is no guarantee that the area sampled for the SNP will be present in the image. Therefore, the entire SNP for each cluster (which is equivalent to a device) must be extracted. This may be taken as the average SNP of all the SNPs that form a cluster, or of a sample of each cluster; e.g. a maximum of 20 SNPs per cluster may be used to determine the average SNP for the cluster.
The SNP for the abnormal image is extracted using the previously explained method. As the image has been manipulated in some way, it is unknown which part of the image is present, e.g. whether the image has been cut from the centre, the top left edge, the bottom right edge etc. The invention samples the averaged cluster SNP in blocks the size of the abnormal image, across a number of different parts of the image. These samples of the average SNP and the extracted SNP of the abnormal image are compared, as described for the standard images at step S204, using the similarity metric. The highest similarity score represents the closest match, and therefore the most likely overlap. If desired, the coordinates of the block with the highest similarity score are used as the starting point to further sample the average SNP to try and improve the overlap match. This process is repeated for the average SNP of each cluster.
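A sketch of this block search for abnormal images, assuming a simple strided scan and reusing the similarity() helper (the stride, names and interface are illustrative; the specification leaves the sampling positions open):

```python
import numpy as np


def best_overlap(crop_snp: np.ndarray, cluster_snp: np.ndarray,
                 stride: int = 64) -> tuple[float, tuple[int, int]]:
    """Slide a block the size of the cropped image's SNP across a cluster's
    average SNP; return the best similarity score and its (row, col) offset,
    which can seed a finer search as described above."""
    h, w = crop_snp.shape
    best, at = -1.0, (0, 0)
    for y in range(0, cluster_snp.shape[0] - h + 1, stride):
        for x in range(0, cluster_snp.shape[1] - w + 1, stride):
            s = similarity(crop_snp, cluster_snp[y:y + h, x:x + w])
            if s > best:
                best, at = s, (y, x)
    return best, at
```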
The cluster with the highest similarity score therefore represents the best match, or, if the match level is below a threshold, a potential new cluster is created as described in step S216.
Clearly, these special cases require many more calculations, as the similarity measurements must be performed for each cluster and at a number of positions within each individual cluster.
As scaling an image, e.g. the magnification or shrinking of an image, will affect the SNP, images that originate from the same device and have been scaled in some way will not be present in the same cluster as images from the same device that have not been scaled, due to the differences in their SNPs. Images from the same device which have been scaled in the same way will be identified as a cluster, as they will have similar SNPs.
Whilst the preferred embodiment has been described with particular reference to Equations 1 to 9, it should be noted that equations of a similar function but of a different form may also be used without deviating from the concept of the invention.
It is immediately apparent to the skilled man that, whilst this invention has been described as a method able to identify images that were taken by the same device where there are an unknown number of originating devices, the present invention is also able to identify which device an image came from if the SNP for that device is available.
Further aspects of the invention.
1. A method of classifying an image taken by an image capture device, the method comprising the steps of:
extracting an initial Sensor Noise Pattern (SNP) for the image;
enhancing the initial SNP to create an enhanced SNP by applying a correcting model, wherein the correcting model scales the initial SNP by a factor inversely proportional to the signal intensity of the initial SNP;
determining a similarity measure between the enhanced SNP for said image and one or more previously calculated enhanced SNPs for one or more different images; and
classifying the image in a group of one or more images with similar or identical SNPs based on the determined similarity measure.
2. A method of classifying a plurality of images taken by one or more known or unknown image capture devices, the method comprising the steps of:
for each image, extracting an initial Sensor Noise Pattern (SNP) for the image;
enhancing the initial SNP to create an enhanced SNP by applying a correcting model;
identifying a subset of images from the plurality of images;
forming an image classifier based on the subset of images by identifying one or more clusters of images which have identical or similar SNPs, wherein the identification is based on a similarity measure between the enhanced SNP for a given image and those of one or more different images in the subset of images; and
classifying one or more of the remaining images that were not part of the initial subset by calculating a comparative measure between the remaining images and each cluster as identified in the image classifier, and determining if said remaining image belongs to an identified cluster based on the comparative measure.
3. The method of 2 wherein the correcting model scales the initial SNP by a factor inversely proportional to the signal intensity of the initial SNP.
4. The method of 2 or 3 wherein the identification of the clusters in the subset of images comprises, for each image of the subset, the steps of:
calculating a similarity matrix to determine a comparative measure of the similarity of the enhanced SNPs for each image that forms the subset;
identifying an initial set of intra-cluster members and inter-cluster members of the subset of images by means of a k-means algorithm;
determining a reference similarity based on a measure of the difference between the identified intra- and inter-clusters;
identifying a membership committee and, for each image in the membership committee:
calculating a cost score based on their comparative measure and reference similarity, and updating the intra-cluster members based on the cost score, thereby identifying groups of images which have similar or identical enhanced SNPs.
5. The method of 4 further comprising:
assigning each image in the subset a unique class label; and
updating the class labels of images which are identified by the cost score as having similar or identical SNPs so that they share the same class label.
6. The method of any of 2 to 5 where images which have a similarity measure below a threshold are identified as not belonging to any predetermined cluster and become founding members of a new cluster.
7. The method of any preceding claim where the initial SNP extraction, and subsequent enhancement and comparison, are performed on a subsection of the input and comparison images.
8. The method of any preceding claim wherein the initial SNP is identified with a wavelet-based de-noising filter and the correcting model is one or more of the following functions:

Model 1:
\[ n_e(i,j) = \begin{cases} e^{-0.5\,n^2(i,j)/\alpha^2}, & \text{if } 0 \le n(i,j) \\ -e^{-0.5\,n^2(i,j)/\alpha^2}, & \text{otherwise} \end{cases} \]

Model 2:
\[ n_e(i,j) = \begin{cases} 1 - n^2(i,j)/\alpha^2, & \text{if } 0 \le n(i,j) \le \alpha \\ -1 + n^2(i,j)/\alpha^2, & \text{if } -\alpha \le n(i,j) < 0 \\ 0, & \text{otherwise} \end{cases} \]

Model 3:
\[ n_e(i,j) = \begin{cases} 1 - e^{-n(i,j)/\alpha}, & \text{if } 0 \le n(i,j) \le \alpha \\ (1 - e^{-1})\,e^{-(n(i,j)-\alpha)/\alpha}, & \text{if } n(i,j) > \alpha \\ -1 + e^{n(i,j)/\alpha}, & \text{if } -\alpha \le n(i,j) < 0 \\ (-1 + e^{-1})\,e^{(n(i,j)+\alpha)/\alpha}, & \text{if } n(i,j) < -\alpha \end{cases} \]

Model 4:
\[ n_e(i,j) = \begin{cases} n(i,j)/\alpha, & \text{if } 0 \le n(i,j) \le \alpha \\ e^{-0.5\,(n(i,j)-\alpha)^2/\alpha^2}, & \text{if } n(i,j) > \alpha \\ n(i,j)/\alpha, & \text{if } -\alpha \le n(i,j) < 0 \\ -e^{-0.5\,(n(i,j)+\alpha)^2/\alpha^2}, & \text{if } n(i,j) < -\alpha \end{cases} \]

Model 5:
\[ n_e(i,j) = \begin{cases} n(i,j)/\alpha, & \text{if } 0 \le n(i,j) \le \alpha \\ e^{-(n(i,j)-\alpha)/\alpha}, & \text{if } n(i,j) > \alpha \\ n(i,j)/\alpha, & \text{if } -\alpha \le n(i,j) < 0 \\ -e^{(n(i,j)+\alpha)/\alpha}, & \text{if } n(i,j) < -\alpha \end{cases} \]

where n(i,j) and n_e(i,j) are the (i,j)th components of n and n_e, respectively.
9. The method of 8 wherein \( \alpha = 7 \).
10. The method of any preceding claim wherein the similarity measurement is based on a comparative measure of the input and reference images.
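Assuming the piecewise definitions above have been reconstructed correctly from the source, the models translate directly into vectorised code. Models 2 and 4 are shown as illustrative examples (Model 1 appeared in the earlier pipeline sketch); the function names are our own.

```python
import numpy as np

def model2(n, alpha=7.0):
    """Model 2: inverted-parabola attenuation inside [-alpha, alpha],
    zero outside that band."""
    inside = np.abs(n) <= alpha
    out = np.where(n >= 0, 1.0, -1.0) * (1.0 - n**2 / alpha**2)
    return np.where(inside, out, 0.0)

def model4(n, alpha=7.0):
    """Model 4: linear inside [-alpha, alpha], Gaussian decay beyond."""
    out = n / alpha
    out = np.where(n > alpha, np.exp(-0.5 * (n - alpha)**2 / alpha**2), out)
    out = np.where(n < -alpha, -np.exp(-0.5 * (n + alpha)**2 / alpha**2), out)
    return out
```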
11. The method of 10 wherein the comparative measure is vector based.
12. The method of 11 wherein the comparative measure has the form:
\[ \rho = \frac{n_e^{(k)} \cdot n_e^{(l)}}{\lVert n_e^{(k)} \rVert\, \lVert n_e^{(l)} \rVert} \]
where \( n_e^{(k)} \) and \( n_e^{(l)} \) are the enhanced SNP vectors of the two images being compared.
13. The method of any of 2 to 12 wherein the identification of groups of images with identical or similar SNPs occurs by a k-means clustering algorithm based on the comparative measure.
14. The method of any of 2 to 13 wherein the reference similarity is a measure of the distance between the centroids defined by the intra-cluster members and inter-cluster members.
15. The method of any of 2 to 14 wherein the centroids are updated when the intra-cluster members are updated.
16. The method of any of 2 to 15 further comprising the step of identifying images taken from a given source image capture device by identifying images which have a similar or identical SNP to that of the source image capture device.
17. The method of claim 16 further comprising the steps of:
creating a reference SNP for the source image capture device;
determining a similarity measure between the reference SNP for the source image capture device and the identified clusters; and
identifying a cluster of images, if any, which originate from said source image capture device based on the similarity measure.
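In code, this device-attribution step reduces to comparing the reference SNP against one representative pattern per cluster. The sketch below reuses the `similarity` helper from the first sketch and makes two assumptions of its own: the per-cluster mean is used as the representative, and the acceptance threshold is a placeholder value.

```python
import numpy as np

def find_device_cluster(reference_snp, clusters, threshold=0.01):
    """Return the index of the cluster most similar to the reference SNP
    of a known source device, or None if no cluster is similar enough."""
    centroids = [np.mean(members, axis=0) for members in clusters]
    scores = [similarity(reference_snp, c) for c in centroids]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```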
18. The method of any of 2 to 17 further comprising the steps of identifying images that show similar or identical characteristics to create a reduced subset of images from the plurality of images; and performing the analysis on the reduced subset.
19. The method of 18 where the characteristics are based on one or more of the following: make of camera, model of camera, image size, time that the image was taken, and date that the image was taken as identified by some form of metadata associated with the image.
20. The method of 18 or 19 wherein the characteristics are based on one or more of the following: keywords or tags associated with the image, said keywords or tags being indicative of the content of the image.
21. The method of 20 wherein the keywords or tags are assigned to an image by a user after inspection of the image.
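A pre-filtering step of this kind is straightforward with EXIF metadata. The snippet below, a sketch using the Pillow library (the choice of library and of the Make/Model fields is ours, not the patent's), groups image files by camera make and model before any SNP work is done, keeping the expensive pairwise comparisons within plausibly related subsets.

```python
from collections import defaultdict
from PIL import Image
from PIL.ExifTags import TAGS

def group_by_camera(paths):
    """Bucket image files by (make, model) read from EXIF metadata;
    files without those fields fall into a catch-all bucket."""
    groups = defaultdict(list)
    for path in paths:
        exif = Image.open(path).getexif()
        named = {TAGS.get(k, k): v for k, v in exif.items()}
        key = (named.get("Make", "unknown"), named.get("Model", "unknown"))
        groups[key].append(path)
    return groups
```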
22. Apparatus for classifying an image taken by an image capture device, the apparatus comprising:
a processor enabled to extract an initial Sensor Noise Pattern (SNP) from the image;
an enhancer enabled to enhance the initial SNP to create an enhanced SNP by applying a correcting model, wherein the correcting model scales the initial SNP by a factor inversely proportional to the signal intensity of the initial SNP;
a similarity measurer to determine the similarity between the enhanced SNP for said image and one or more previously calculated enhanced SNPs for one or more different images; and
a classifier to group one or more images with similar or identical SNPs based on the determined similarity measure.
23. Apparatus of 22 enabled to perform the method of any of 1 to 21.
24. Computer program product comprising computer readable instructions encoded thereon to enable the steps of any of methods 1 to 21.
Claims (14)
- Claims
1. A method of classifying a plurality of images taken by one or more known or unknown image capture devices, the method comprising the steps of:
for each image, extracting an initial Sensor Noise Pattern (SNP) for the image;
enhancing the initial SNP to create an enhanced SNP by applying a correcting model, wherein the correcting model scales the initial SNP by a factor which decreases as the magnitude of the signal intensity of the initial SNP increases;
identifying a subset of images from the plurality of images;
forming an image classifier based on the subset of images by identifying one or more clusters of images which have identical or similar SNPs, wherein the identification is based on a similarity measure between the enhanced SNP for a given image and one or more different images in the subset of images;
classifying one or more of the remaining images that were not part of the initial subset by calculating a comparative measure between the remaining images and each cluster as identified in the image classifier; and
determining if said remaining image belongs to an identified cluster based on the comparative measure.
- 2. The method of claim 1 wherein the identification of the clusters in the subset of images comprises the steps of, for each image of the subset:
calculating a similarity matrix to determine a comparative measure of the similarity of the enhanced SNPs for each image that forms the subset;
identifying an initial set of intra-cluster members and inter-cluster members of the subset of images by means of a k-means algorithm;
determining a reference similarity based on a measure of the difference between the identified intra- and inter-clusters;
identifying a membership committee and, for each image in the membership committee: calculating a cost score based on their comparative measure and reference similarity, and updating the intra-cluster members based on the cost score;
thereby identifying groups of images which have similar or identical enhanced SNPs.
- 3. The method of claim 2 further comprising:
assigning each image in the subset a unique class label; and
updating the class labels of images which are identified by the cost score as having similar or identical SNPs so that they share the same class label.
- 4. The method of any preceding claim where images which have a similarity measure below a threshold are identified as not belonging to any predetermined cluster and become founding members of a new cluster.
- 5. The method of any preceding claim where the initial SNP extraction, and subsequent enhancement and comparison, are performed on a subsection of the input and comparison images.
- 6. The method of any preceding claim wherein the initial SNP is identified with a wavelet-based de-noising filter and the correcting model is one or more of the following functions:

Model 1:
\[ n_e(i,j) = \begin{cases} e^{-0.5\,n^2(i,j)/\alpha^2}, & \text{if } 0 \le n(i,j) \\ -e^{-0.5\,n^2(i,j)/\alpha^2}, & \text{otherwise} \end{cases} \]

Model 2:
\[ n_e(i,j) = \begin{cases} 1 - n^2(i,j)/\alpha^2, & \text{if } 0 \le n(i,j) \le \alpha \\ -1 + n^2(i,j)/\alpha^2, & \text{if } -\alpha \le n(i,j) < 0 \\ 0, & \text{otherwise} \end{cases} \]

Model 3:
\[ n_e(i,j) = \begin{cases} 1 - e^{-n(i,j)/\alpha}, & \text{if } 0 \le n(i,j) \le \alpha \\ (1 - e^{-1})\,e^{-(n(i,j)-\alpha)/\alpha}, & \text{if } n(i,j) > \alpha \\ -1 + e^{n(i,j)/\alpha}, & \text{if } -\alpha \le n(i,j) < 0 \\ (-1 + e^{-1})\,e^{(n(i,j)+\alpha)/\alpha}, & \text{if } n(i,j) < -\alpha \end{cases} \]

Model 4:
\[ n_e(i,j) = \begin{cases} n(i,j)/\alpha, & \text{if } 0 \le n(i,j) \le \alpha \\ e^{-0.5\,(n(i,j)-\alpha)^2/\alpha^2}, & \text{if } n(i,j) > \alpha \\ n(i,j)/\alpha, & \text{if } -\alpha \le n(i,j) < 0 \\ -e^{-0.5\,(n(i,j)+\alpha)^2/\alpha^2}, & \text{if } n(i,j) < -\alpha \end{cases} \]

Model 5:
\[ n_e(i,j) = \begin{cases} n(i,j)/\alpha, & \text{if } 0 \le n(i,j) \le \alpha \\ e^{-(n(i,j)-\alpha)/\alpha}, & \text{if } n(i,j) > \alpha \\ n(i,j)/\alpha, & \text{if } -\alpha \le n(i,j) < 0 \\ -e^{(n(i,j)+\alpha)/\alpha}, & \text{if } n(i,j) < -\alpha \end{cases} \]

where n(i,j) and n_e(i,j) are the (i,j)th components of n and n_e, respectively.
- 7. The method of claim 6 wherein \( \alpha = 7 \).
- 8. The method of any preceding claim wherein the similarity measurement is based on a comparative measure of the input and reference images.
- 9. The method of claim 8 wherein the comparative measure is vector based.
- 10. The method of claim 9 wherein the comparative measure has the form:
\[ \rho = \frac{n_e^{(k)} \cdot n_e^{(l)}}{\lVert n_e^{(k)} \rVert\, \lVert n_e^{(l)} \rVert} \]
where \( n_e^{(k)} \) and \( n_e^{(l)} \) are the enhanced SNP vectors of the two images being compared.
- 11. The method of any preceding claim wherein the identification of groups of images with identical or similar SNPs occurs by a k-means clustering algorithm based on the comparative measure.
- 12. The method of any preceding claim wherein the reference similarity is a measure of the distance between the centroids defined by the intra-cluster members and inter-cluster members.
- 13. The method of any preceding claim wherein the centroids are updated when the intra-cluster members are updated.
- 14. The method of any preceding claim further comprising the step of identifying images taken from a given source image capture device by identifying images which have a similar or identical SNP to that of the source image capture device.
- 15. The method of claim 14 further comprising the steps of:
creating a reference SNP for the source image capture device;
determining a similarity measure between the reference SNP for the source image capture device and the identified clusters; and
identifying a cluster of images, if any, which originate from said source image capture device based on the similarity measure.
- 16. The method of any preceding claim further comprising the steps of:
identifying images that show similar or identical characteristics to create a reduced subset of images from the plurality of images; and
performing the analysis on the reduced subset.
- 17. The method of claim 16 where the characteristics are based on one or more of the following: make of camera, model of camera, image size, time that the image was taken, and date that the image was taken, as identified by some form of metadata associated with the image.
- 18. The method of claims 16 or 17 wherein the characteristics are based on one or more of the following: keywords or tags associated with the image, said keywords or tags being indicative of the content of the image.
- 19. The method of claim 18 wherein the keywords or tags are assigned to an image by a user after inspection of the image.
- 20. Apparatus for classifying an image taken by an image capture device, the apparatus comprising:
a processor enabled to extract an initial Sensor Noise Pattern (SNP) from the image;
an enhancer enabled to enhance the initial SNP to create an enhanced SNP by applying a correcting model, wherein the correcting model scales the initial SNP by a factor inversely proportional to the signal intensity of the initial SNP;
a similarity measurer to determine the similarity between the enhanced SNP for said image and one or more previously calculated enhanced SNPs for one or more different images; and
a classifier to group one or more images with similar or identical SNPs based on the determined similarity measure.
- 21. Apparatus of claim 20 enabled to perform the method of any of claims 1 to 19.
- 22. Computer program product comprising computer readable instructions encoded thereon to enable the steps of any of method claims 1 to 19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1200030.3A GB2486987B (en) | 2012-01-03 | 2012-01-03 | Methods for automatically clustering images acquired by unknown devices |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1200030.3A GB2486987B (en) | 2012-01-03 | 2012-01-03 | Methods for automatically clustering images acquired by unknown devices |
Publications (3)
Publication Number | Publication Date |
---|---|
GB201200030D0 GB201200030D0 (en) | 2012-02-15 |
GB2486987A true GB2486987A (en) | 2012-07-04 |
GB2486987B GB2486987B (en) | 2013-09-04 |
Family
ID=45755682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1200030.3A Expired - Fee Related GB2486987B (en) | 2012-01-03 | 2012-01-03 | Methods for automatically clustering images acquired by unknown devices |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2486987B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016207774A1 (en) * | 2015-06-23 | 2016-12-29 | Politecnico Di Torino | Method and device for searching images |
WO2017037478A1 (en) * | 2015-09-03 | 2017-03-09 | Functional Technologies Ltd | Clustering images based on camera fingerprints |
WO2023091105A1 (en) * | 2021-11-18 | 2023-05-25 | Bursa Uludağ Üni̇versi̇tesi̇ | Source camera sensor fingerprint generation method from panorama photos |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1408449A2 (en) * | 2002-10-04 | 2004-04-14 | Sony Corporation | Method and apparatus for identifying a photographic camera by correlating two images |
WO2006017031A1 (en) * | 2004-07-13 | 2006-02-16 | Eastman Kodak Company | Matching of digital images to acquisition devices |
WO2006017011A1 (en) * | 2004-07-13 | 2006-02-16 | Eastman Kodak Company | Identification of acquisition devices from digital images |
WO2006091928A2 (en) * | 2005-02-24 | 2006-08-31 | Dvip Multimedia Incorporated | Digital video identification and content estimation system and method |
WO2007094856A2 (en) * | 2005-12-16 | 2007-08-23 | The Research Foundation Of State University Of New York | Method and apparatus for identifying an imaging device |
-
2012
- 2012-01-03 GB GB1200030.3A patent/GB2486987B/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1408449A2 (en) * | 2002-10-04 | 2004-04-14 | Sony Corporation | Method and apparatus for identifying a photographic camera by correlating two images |
WO2006017031A1 (en) * | 2004-07-13 | 2006-02-16 | Eastman Kodak Company | Matching of digital images to acquisition devices |
WO2006017011A1 (en) * | 2004-07-13 | 2006-02-16 | Eastman Kodak Company | Identification of acquisition devices from digital images |
WO2006091928A2 (en) * | 2005-02-24 | 2006-08-31 | Dvip Multimedia Incorporated | Digital video identification and content estimation system and method |
WO2007094856A2 (en) * | 2005-12-16 | 2007-08-23 | The Research Foundation Of State University Of New York | Method and apparatus for identifying an imaging device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016207774A1 (en) * | 2015-06-23 | 2016-12-29 | Politecnico Di Torino | Method and device for searching images |
IL256105A (en) * | 2015-06-23 | 2018-02-28 | Torino Politecnico | Method and device for searching images |
US10515112B2 (en) | 2015-06-23 | 2019-12-24 | Politecnico Di Torino | Method and device for searching images |
WO2017037478A1 (en) * | 2015-09-03 | 2017-03-09 | Functional Technologies Ltd | Clustering images based on camera fingerprints |
US10460205B2 (en) | 2015-09-03 | 2019-10-29 | Functional Technologies Ltd. | Clustering images based on camera fingerprints |
WO2023091105A1 (en) * | 2021-11-18 | 2023-05-25 | Bursa Uludağ Üni̇versi̇tesi̇ | Source camera sensor fingerprint generation method from panorama photos |
Also Published As
Publication number | Publication date |
---|---|
GB2486987B (en) | 2013-09-04 |
GB201200030D0 (en) | 2012-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2752632C (en) | Methods for identifying imaging devices and classifying images acquired by unknown imaging devices | |
Munisami et al. | Plant leaf recognition using shape features and colour histogram with K-nearest neighbour classifiers | |
Zhao et al. | Passive image-splicing detection by a 2-D noncausal Markov model | |
CN107153817B (en) | Pedestrian re-identification data labeling method and device | |
Fan et al. | General-purpose image forensics using patch likelihood under image statistical models | |
Suresh et al. | Image texture classification using gray level co-occurrence matrix based statistical features | |
Li | Unsupervised classification of digital images using enhanced sensor pattern noise | |
Liao et al. | Texture classification by using advanced local binary patterns and spatial distribution of dominant patterns | |
US11816946B2 (en) | Image based novelty detection of material samples | |
Tuna et al. | Image description using a multiplier-less operator | |
Mursi et al. | An improved SIFT-PCA-based copy-move image forgery detection method | |
GB2486987A (en) | Classifying images using enhanced sensor noise patterns | |
Doegar et al. | Image forgery detection based on fusion of lightweight deep learning models | |
Chergui et al. | Kinship verification using BSIF and LBP | |
Zhong et al. | Copy-move forgery detection using adaptive keypoint filtering and iterative region merging | |
Cozzolino et al. | Image forgery detection based on the fusion of machine learning and block-matching methods | |
Rouhi et al. | User profiles’ image clustering for digital investigations | |
Debbarma et al. | Keypoints based copy-move forgery detection of digital images | |
Chowdhury et al. | Copy-move forgery detection using SIFT and GLCM-based texture analysis | |
Li et al. | Does Capture Background Influence the Accuracy of the Deep Learning Based Fingerphoto Presentation Attack Detection Techniques? | |
CN113919421A (en) | Method, device and equipment for adjusting target detection model | |
Shri et al. | Video Analysis for Crowd and Traffic Management | |
Gazzah et al. | Digital Image Forgery Detection with Focus on a Copy-Move Forgery Detection: A Survey | |
Po et al. | Leveraging genetic algorithm and neural network in automated protein crystal recognition | |
Lourembam et al. | A robust image copy detection method using machine learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 20190213 |