WO2023091105A1 - Source camera sensor fingerprint generation method from panorama photos - Google Patents

Source camera sensor fingerprint generation method from panorama photos

Info

Publication number
WO2023091105A1
Authority
WO
WIPO (PCT)
Prior art keywords
noise
fingerprint
cluster
panorama
camera
Prior art date
Application number
PCT/TR2021/051391
Other languages
French (fr)
Inventor
Ahmet KARAKÜÇÜK
Ahmet Emir DİRİK
Original Assignee
Bursa Uludağ Üniversitesi
Priority date
Filing date
Publication date
Application filed by Bursa Uludağ Üniversitesi
Publication of WO2023091105A1 (in English)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/30 Noise filtering
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/762 Arrangements using clustering, e.g. of similar faces in social networks
    • G06V10/7625 Hierarchical techniques, i.e. dividing or merging patterns to obtain a tree-like representation; Dendrograms

Abstract

The present invention relates to a method for generating a "PRNU fingerprint" in hardware and/or software on a computer, a mobile phone, or another computing environment. The fingerprint is used to determine, identify and verify the source of panorama photographs produced inside a camera or with the aid of a computer, and thus to identify and verify the source camera of panorama photographs obtained with a camera.

Description

SOURCE CAMERA SENSOR FINGERPRINT GENERATION METHOD FROM PANORAMA PHOTOS
Field of the Invention
The present invention relates to a method for obtaining a fingerprint identifying the source camera with which the panorama photographs are taken.
In particular, the present invention relates to a method for generating a "PRNU fingerprint" in hardware and/or software on a computer, a mobile phone, or another computing environment. The fingerprint is used to determine, identify and verify the source of panorama photographs produced within a camera or with the aid of a computer, and thus to identify and verify the source camera of panorama photographs obtained with a camera.
State of the Art
Panorama photos are composite photos obtained by combining a series of photos taken with a camera from different angles in a way that visually extends their coverage. Various image processing methods are used in the combining process: the visual relationship among the individual photographs that make up the scene is modeled, the positional difference between the photos is calculated with this model, and a geometric transformation corresponding to the calculated positional difference is applied. Such photographs are also referred to as "stitched photographs" or "composite photographs". Panorama photos can be obtained by means of a computer, a mobile phone or any electronic device equipped with a computer, without the need for special user expertise.
There are various studies on camera source identification and verification with PRNU in the patent and non-patent literature. Many of these studies address source camera identification from photos and videos with fixed aspect ratio and resolution. This literature offers recommendations for both the generation and the testing of the camera fingerprint, for source camera identification and verification using images obtained with fixed ratios such as 1:1, 4:3, 16:9 and 19:10 (i.e., normal photo shooting) and image frames extracted from videos. Source camera identification and verification involves two process steps. The first is obtaining a fingerprint of the camera. The second is testing the similarity between this fingerprint and the photograph whose source is being determined or verified.
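The standard two-step test can be sketched as follows. This is a minimal illustration of the prior-art procedure described above, not of the inventive method: the fingerprint is averaged from equally sized photos, and the decision is taken by thresholding a normalized correlation. The Wiener denoiser and the threshold value are illustrative assumptions.

import numpy as np
from scipy.signal import wiener

def noise_residual(img):
    """W = I - D(I): residual noise left after denoising a grayscale image."""
    img = img.astype(np.float64)
    return img - wiener(img, (3, 3))

def estimate_fingerprint(images):
    """Average the residuals of equally sized, untransformed photos (prior-art style)."""
    return np.mean([noise_residual(i) for i in images], axis=0)

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_camera(query_img, fingerprint, threshold=0.01):
    """Step two: attribute the query photo to the camera if the correlation exceeds the threshold."""
    return normalized_correlation(noise_residual(query_img), fingerprint) > threshold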
There are various difficulties in producing camera fingerprints from panorama photos. The first difficulty is that the aspect ratio and resolution of panorama photos are variable. The second difficulty, and the one to be overcome, is that panorama photographs are obtained by going through an unknown number and kind of geometric transformations.
A successful fingerprint cannot be obtained from panorama photos by known methods; therefore, successful identification and verification of the source camera of such photographs cannot be performed by known methods. The studies in the literature that offer suggestions for the difficulties described above are summarized in the following.
In the literature search, the patent numbered US7787030 was encountered. In this study, it is suggested that the camera PRNU fingerprint be generated by obtaining noise matrices from normal (non-panorama) photographs and averaging the noise extracted from multiple normal photographs of equal resolution and aspect ratio. For the method described in application US7787030 to work, the image obtained with the camera must be the same size as the camera fingerprint obtained for the relevant camera and must not have undergone any geometric transformation. In panorama photos, the sizes of the photos used in source camera PRNU fingerprint generation are not equal and the photos are combined using geometric transformations. Therefore, this work offers no solution for how to generate fingerprints from images that have undergone such processes.
Another study, numbered US9525866, explains how to generate camera fingerprints from cropped and corrupted photographs. A cropped photo refers to the presence of only a portion of a normal photo; a corrupted photo refers to a damaged photo file that can only be partially read (with unreadable parts of the image). A method that can be used to obtain a source camera PRNU fingerprint from these two types of photographs is described, so the study advances the state of the art by specifying how camera PRNU fingerprints can be produced from photos of different resolutions. On the other hand, the relevant invention includes no solution for how to produce fingerprints from images that have undergone geometric transformations. For this method to work, the photographs used in fingerprint production must not have undergone geometric transformations; low performance is achieved when it is applied to panoramas, which are obtained by combining photographs that have undergone geometric transformations.
In another study (Karaküçük, 2015), a method for source camera verification with photographs that have undergone geometric transformations was explained. To do so, it is recommended to try possible geometric transformation parameters for each test photo and, in each trial, to measure the similarity between the camera PRNU fingerprint obtained from normal photographs and the geometrically transformed test photograph using a correlation measure. The pre-transformation state of a photo can thus be obtained (i.e. the transformation can be reversed) by using the transformation parameters of the trial in which the highest correlation value was obtained, and the source camera of the restored photo can then be detected by known methods. The processing load and duration of the aforementioned trial process are quite high. In the relevant method, the transformation parameters must be determined by repeating this process separately for each panorama photograph whose source is questioned, because the method does not suggest how to obtain a fingerprint from panorama photos.
Panorama photos are obtained by taking a series of photos of a scene successively at different angles and positions and combining them with image processing methods so as to obtain a comprehensive photo of the scene. Geometric transformations are among these image processing methods. To combine multiple photos, it is necessary to determine the positions and angles at which they were taken relative to the scene and to place them on a plane corresponding to those positions and angles. The photos also need to be positioned using the visual repetitions they contain as clues, so that they can be placed without gaps, and they must be transformed geometrically in order to be transferred onto a plane. The parameters of the geometric transformations applied here vary according to the visual repetition relationship between each photograph and the nearby photograph or photographs, and according to the camera hardware and the image processing software used inside or outside the hardware to produce the panorama photo; in other words, they vary with each panorama creation system. Cylindrical, spherical, linear, affine, projective and a variety of other plane transformation types can be used for each photo that makes up a panorama photo in panorama shooting systems. Said transformation types are also applied with different transformation parameters depending on the repetition relationship between the images; that is, each of the photographic parts that make up a panorama can be transformed with a different parameter set. Therefore, to obtain panorama PRNU fingerprints from such photographs, those photographs that are similar in terms of the geometric transformation applied should be brought together. There are great difficulties in applying this process with known techniques. The first difficulty is processing cost. The second difficulty is the iterative testing the process requires.
As a result, due to the abovementioned disadvantages and the insufficiency of the current solutions regarding the subject matter, a development in the relevant technical field is required.
Aim of the Invention
The invention aims to solve the abovementioned disadvantages, taking the current conditions as its starting point.
The main aim of the invention is to create a method that enables the generation of fingerprints from panorama photographs.
Another aim of the invention is to provide a method that separates panorama photos into parts, clusters the parts by the geometric transformations applied to them according to the PRNU noise similarities of each part, obtains fingerprints of the relevant camera after the clustering process, and determines by a low-cost comparison whether the fingerprints obtained match the sensor noise obtained from the panorama images whose source is questioned.
The inventive method proposes using the panorama images themselves in the generation of the source camera fingerprint for the panorama modes of cameras. If such images are used directly in fingerprint production as in the state of the art, the amount of noise in the fingerprint increases, since parts of the image processed with different geometric transformation parameters within a panorama image are combined in the same fingerprint. Due to this noise, the fingerprint's ability to distinguish the source of panorama photos weakens, and the accuracy of source camera recognition decreases. Moreover, in fingerprints obtained from panorama photos with the known method, fingerprints of different regions of the sensor overlap. This overlap also reduces the distinctiveness of the fingerprint with respect to the camera. It is not possible to produce a fingerprint for panorama photos using the prior-art method of detecting the reverse transformation; that process must be repeated during the query for each photograph whose source is questioned, which requires a large number of trials per panorama photograph. In the relevant study, the number of trials to be made is specified as 41x41 even for an image of 512x512 pixels. The inventive method overcomes these three weaknesses of the state of the art. Fragmenting and clustering the panorama photographs, or the noise obtained from these photographs, creates fingerprint clusters related to the differently transformed regions of the panorama photos. Thus, information about different regions is not mingled as in the state of the art. Information that would be lost, or would become noisy and reduce recognition and verification performance if the state of the art were applied, can be used to produce a camera fingerprint within the scope of the inventive method without such disadvantages. Moreover, once a fingerprint has been generated for panorama photographs, a single comparison per panorama photograph whose source camera is to be detected is sufficient.
The following are recommended in the inventive method: fragmenting each panorama image, or the sensor noise obtained from it, into one or more parts; obtaining the camera sensor noise of these parts; and clustering these noises according to the similarity between them, with non-clustered noise fragments not included in the fingerprint. As a result of this clustering process, one or more fingerprints can be created to be used to identify the source camera of the panorama photos of a camera. The number of panorama fingerprints produced here depends on the inner operation of the respective camera's panorama shooting method and may vary from camera to camera. With the proposed method it is not necessary to make a preliminary description of the inner operation of the camera's shooting method; in other words, the model of the image transformation applied in the relevant panorama shooting device does not need to be known in advance, nor does any costly reverse-transformation parameter need to be determined experimentally. Criteria regarding the similarity of sensor noise among the photographs are used in grouping the images.
The inventive method can be summarized as follows, without limitation: a) each panorama photograph or camera sensor noise matrix is fragmented into parts, b) the camera sensor noise is accessed for these parts, c) pairwise (mutual) similarity values of the sensor noises are measured, d) the photo fragments are divided into clusters according to their pairwise similarity values, e) the sensor noise matrices in each cluster, starting from the noise matrix that is closest (has the highest similarity value) to the other elements of the cluster in terms of sensor noise similarity and proceeding to the noise matrix with the least similarity, are merged by shifting each one to the matrix coordinate where the sensor noise similarity is measured to be highest.
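At a high level, steps (a) to (e) can be sketched as the following pipeline. The helper names (fragment, noise_residual, pairwise_similarity, cluster, merge_cluster) are hypothetical placeholders for the operations elaborated in the detailed description further below; this is an outline of the flow, not the patented implementation.

def build_panorama_fingerprints(panoramas, threshold):
    parts = [part for pano in panoramas for part in fragment(pano)]   # (a) fragment photos or noise
    noises = [noise_residual(part) for part in parts]                 # (b) sensor noise per part
    sims = pairwise_similarity(noises)                                # (c) pairwise similarity values
    clusters = cluster(noises, sims, threshold)                       # (d) unclustered parts are dropped
    return [merge_cluster(members) for members in clusters]           # (e) one fingerprint per cluster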
The inventive method makes it possible to generate a characteristic fingerprint related to camera sensor noise from the sensor noise of panorama images, from panorama photos taken with the relevant camera systems, and from other types of merged photos (for example, from the three-dimensional cameras used for virtual reality, from which surround images are obtained, or from image-stitching software used to bring single images together through geometric transformations with variable aspect ratio), and to use these fingerprints to identify and verify the sources of images taken with the relevant camera.
When a PRNU fingerprint is obtained with the inventive method, the similarity of every tested panorama photo to the PRNU fingerprint produces a value above the decision threshold in the case of a match, and every photo produces a value below the decision threshold in the case of no match. Moreover, the similarity value distributions for the matched and unmatched situations are clearly separated. Therefore, it is possible to identify and verify the source of panorama photographs with the camera fingerprints obtained using the proposed method instead of the known methods.
When the advantages of the inventive method are examined with the criterion called Cohen's distance (Figure 8), the advantage of the method over known methods is revealed in another way. According to these results, the fingerprint obtained from panorama images by the inventive method performs 30% better than the closest competing method in terms of source identification and verification performance on panorama images.
In order to fulfill the abovementioned aims, the present invention is a method that enables the identification, recognition, association and verification of the sources of panorama photographs by producing the source camera fingerprint from the panorama photographs. Accordingly, the method comprises the following process steps:
• providing access to panorama photos obtained with a camera
• fragmentation of accessed panorama photos or camera sensor noise associated with these panorama photos,
• obtaining and/or reading noise matrices for parts and calculating their pairwise similarity
• hierarchical clustering of noise matrices according to the calculated pairwise similarities,
• thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster and gathering these matrices and determining the matrix obtained as a result of calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster,
• obtaining the PRNU fingerprint matrix for each cluster and recording the resulting camera fingerprint on a recording medium.
The structural and characteristic features of the present invention will be understood clearly by the following drawings and the detailed description made with reference to these drawings and therefore the evaluation shall be made by taking these figures and the detailed description into consideration.
Figures Clarifying the Invention
Figure 1 gives the main flow diagram of the inventive method.
Figure 2 gives a schematic view of the process of fragmenting the camera sensor noise of the accessed panorama photos and/or panorama photos.
Figure 3 gives a schematic view of the process step of reading the noise matrices of the panorama photographs and calculating the pairwise similarity.
Figure 4 gives a schematic view of the hierarchical clustering of noise matrices according to the calculated pairwise similarities.
Figure 4.1 gives a visual example of the result of the clustering operation.
Figure 5 gives the schematic flow view of the process step of thresholding the noise matrices within the cluster or clusters with the cross-correlation method and positioning them according to the photo coordinate with the highest correlation and summing the overlapping noise matrix values on a pixel basis and dividing the same by the number of overlapping pixels.
Figure 5.1 is a sample image of a noise matrix in the first layer, a noise matrix in the second layer and a noise matrix in the third layer.
Figure 6 gives the first comparison graph of the invention against studies developed in the state of the art.
Figure 7 gives the second comparison graph of the invention against studies developed in the state of the art.
Figure 8 gives the third comparison graph of the invention against studies developed in the state of the art.
Description of the Part References
1. Providing access to panorama photos obtained with a camera
2. Fragmentation of accessed panorama photos or Fragmentation of camera sensor noise associated with these panorama photos
21. Photo example-1
22. Photo example-2
23. Photo example-3
P1. Part-1
P2. Part-2
P3. Part-3
3. Obtaining and/or reading noise matrices for parts and calculating their pairwise similarity
31. Fragmented photos
32. Sensor noise extraction
33. Pairwise similarity value matrix
4. Hierarchical clustering of noise matrices according to the calculated pairwise similarities,
41. Similarity value table
42. Clustering similarity threshold
43. Fragmentation of noise matrices into clusters according to similarity values
431. Visual example of the result of the clustering operation
432. Cluster-1
433. Cluster-2
434. Noise matrix outside the cluster-A
435. Noise matrix outside the cluster-B
44. Cluster match list
5. Thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster and gathering these matrices and determining the matrix obtained as a result of calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster,
51. Providing access to the noise matrices in the cluster
52. Registering the first of the matrices listed for the cluster as the first layer of the primary fingerprint structure
53. Checking if there is a noise matrix in the cluster that has not yet been added to the primary fingerprint
54. Accessing the next noise matrix
55. Shifting the noise matrix to the coordinate where it has the highest match with the mean of the primary fingerprint
56. Adding the shifted noise matrix to the primary fingerprint as a layer and updating the primary fingerprint
561. Noise matrix in the first layer
562. Noise matrix in the second layer
563. Noise matrix in the third layer
57. Calculating the mean of the primary fingerprint
58. Recording a cluster fingerprint
6. Obtaining the PRNU fingerprint matrix for each cluster and recording the resulting camera fingerprint on a recording medium
Detailed Description of the Invention
In this detailed description, the preferred embodiments of the inventive method are described by means of examples only for clarifying the subject matter.
The main operation of the method is as follows:
Access to panorama photos taken with any camera is provided (1). The accessed panorama photos (Photo example-1 (21), Photo example-2 (22), Photo example-3 (23)), or the camera sensor noises associated with these panorama photos, are fragmented into parts (Part-1 (P1), Part-2 (P2), Part-3 (P3)) (2).
Images are denoised using any noise removal filter in the process step of obtaining and/or reading noise matrices for parts and calculating their pairwise similarity (3). Noise matrices are obtained by subtracting the denoised photo from the image, and pairwise similarities between these matrices are calculated with any similarity function. The noise matrices are clustered hierarchically according to their pairwise similarity in the process step of hierarchical clustering of noise matrices according to the calculated pairwise similarities (4). Cluster membership is determined for the noise matrix obtained from each photo fragment using the pairwise similarity measures and the clustering similarity threshold (T). The number of clusters formed is a function of the similarities among the noise matrices themselves. If the similarity between all noise matrices is at least the clustering similarity threshold value, a single cluster is formed; if there are two groups of noise matrices whose mutual similarity is below the threshold value, two clusters may be formed. Likewise, if there are one or more noise matrices that are not sufficiently similar to any other noise matrix, these noise matrices can be excluded from clustering. The cluster and noise matrix association information obtained as a result of the clustering process is used in the next step of the process.
In the process step of thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster and gathering these matrices and determining the matrix obtained as a result of calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster (5), the noise matrices of each cluster are combined, starting from the noise matrix at the center of that cluster and proceeding in order to the farthest noise matrix. As a result of the combining process, there remain as many combined cluster noise matrices as there are clusters. These cluster noise matrices are defined as the "panorama fingerprint" for the respective panorama shooting system.
The flow is completed by obtaining the PRNU fingerprint matrix for each cluster and recording the resulting camera fingerprint on a recording medium (6).
The process step of obtaining and/or reading noise matrices for parts and calculating their pairwise similarity (3) has the following details:
The fragmented photographs (31) obtained as a result of the fragmentation process, examples of which are shown in Figure-2 as Part-1 (P1), Part-2 (P2), Part-3 (P3), are transmitted. Sensor noise extraction from the fragmented photographs (31) is obtained by the equation (32) Nx = Px - D(Px). Herein, D denotes any noise removal function, such as a two-dimensional Wiener, multi-resolution wavelet, or two-dimensional median filter; any piece of a panorama photo is denoted by Px, and the noise matrix obtained from that piece is denoted by Nx. For the pairwise similarity between noise matrices, Nx and Ny being any two noise matrices, the pairwise similarity value matrix (33) is calculated with the formula Similarity = B(Nx, Ny). This process is repeated for all incoming fragmented photograph (31) fragments. The function B denotes any measure of similarity, such as mean squared difference or correlation.
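A minimal sketch of the noise extraction (32) and the pairwise similarity function B (33) follows. The 2-D Wiener filter stands in for D and a normalized correlation over the common region stands in for B; both are only one of the admissible choices named above, and cropping to a common top-left region is an illustrative assumption.

import numpy as np
from scipy.signal import wiener

def extract_noise(part):
    """Nx = Px - D(Px) for one panorama fragment (grayscale, float)."""
    part = part.astype(np.float64)
    return part - wiener(part, (3, 3))

def similarity_B(nx, ny):
    """B(Nx, Ny): normalized correlation over the common top-left region of the two noise matrices."""
    h = min(nx.shape[0], ny.shape[0])
    w = min(nx.shape[1], ny.shape[1])
    a = nx[:h, :w] - nx[:h, :w].mean()
    b = ny[:h, :w] - ny[:h, :w].mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))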
Table 1 (example pairwise similarity values for four noise matrices; table image not reproduced)
Table-1 gives an example pairwise similarity table over 4 noise matrices obtained from 4 image segments. It is not necessary to calculate the similarity values between all noise matrices with the similarity function. When the same noise matrix is given to both inputs of the similarity function, the value it will produce can be known without performing any computation. For example, if the correlation function is used as the similarity function, the result of an operation such as B(Nx, Nx) can be known without performing the operation. Similarly, many similarity functions operate independently of the input order (they are commutative); thus, it may be known in advance that the results of B(Nx, Ny) and B(Ny, Nx) will be equal. As a result, the process of filling the similarity table, an example of which is shown in Table-1, can be performed with fewer calls to the similarity function by using the mathematical properties of the selected similarity function.
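As a sketch, the similarity table can be filled with roughly half the calls to B by exploiting the two properties just mentioned; the self-similarity value of 1.0 assumes a correlation-type B and is an illustrative choice.

import numpy as np

def similarity_table(noises, B, self_value=1.0):
    n = len(noises)
    table = np.empty((n, n))
    for i in range(n):
        table[i, i] = self_value              # B(Nx, Nx) is known without computing it
        for j in range(i + 1, n):
            table[i, j] = B(noises[i], noises[j])
            table[j, i] = table[i, j]         # commutativity: B(Nx, Ny) == B(Ny, Nx)
    return table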
The details of the process step of hierarchical clustering of noise matrices according to the calculated pairwise similarities (4): The output example of the similarity value table (41) shown in Table 1 is transferred to the clustering process. The clustering similarity threshold T (42) is used for clustering these similarity values. The clustering similarity threshold (42) is selected according to the similarity value function used in the pairwise similarity value matrix (33). Alternatively, the similarity values can be recalculated with a different similarity measure. A hierarchical clustering is performed using the clustering similarity threshold (42) and the similarity value table (41). Noise matrices with the closest similarity are assigned to clusters in hierarchical clustering. The noise matrices are divided into clusters according to their similarity values (43). For example, if all the noise matrices produce a high similarity to each other, a single cluster will form; if there are two groups of noise matrices that differ from each other by the threshold value T, two separate clusters will form.

The T value is a neighborhood distance value, and it can be selected differently according to the similarity criterion used in the process step of obtaining the noise matrices and calculating the pairwise similarity (3). For example, if the similarity measurement is made with the "peak correlation energy ratio", the clustering similarity threshold (42) T can be selected as 200 units. The "peak correlation energy ratio" criterion is a measure of similarity obtained from the cross-correlation of any two data: the ratio of the average correlation in a small section (for example, 11 pixels square) around the coordinate with the highest cross-correlation to the mean correlation of the region outside this section. As another alternative, if the correlation criterion is used as the similarity function in the calculation of the pairwise similarity value matrix (33), a value of 0.51 can be chosen as the clustering similarity threshold. Different T threshold values can be used for other alternative criteria.

As a result of the operation of separating the noise matrices into clusters according to their similarity values (43), the noise matrices are positioned according to their similarity to each other. A visual example (431) of this positioning and the result of the clustering is given. This example shows the locations of 9 different noise matrices, and two clusters were formed, cluster-1 (432) and cluster-2 (433). There are 4 noise matrices in cluster-1 (432) and 3 noise matrices in cluster-2 (433). The similarities between the noise matrices in cluster-1 (432) are equal to or greater than the threshold value T. Likewise, the similarities between the noise matrices in cluster-2 (433) are equal to or greater than T. The similarity between each noise matrix outside the clusters and every other noise matrix is lower than T. In this case, the noise matrix outside the cluster-A (434) and the noise matrix outside the cluster-B (435) were left out of the clusters. As a result of the clustering process, the cluster match list for the noise matrices is created (44). For each cluster formed in the process of separating the noise matrices into clusters according to their similarity values (43), the part information of all the noise matrices in that cluster is placed in an ordered list within the cluster match list (44).
The ranking runs from the noise matrix having the highest mean similarity with the other noise matrices in the cluster to the noise matrix with the lowest. There is no intersection between clusters, so each noise matrix can be assigned to only one cluster.
A sample of the cluster matching for the list formed in the cluster match list (44) is shown in Figure-4.1. The example shown corresponds to the result of the clustering operation illustrated with the sample elements numbered 431-435. This information can be structured in different ways. For example, it can be in a matrix structure. It can be in a human-readable structure, or in a binary form that can be read by a computer. It can be in a structure that can be processed by any application programming interface (API). It can be produced with additional data to comply with the requirements of the hardware or software base on which the system will be installed.
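A minimal sketch of the clustering step and the resulting cluster match list (44) follows, assuming the similarity table from the previous sketch. SciPy's hierarchical clustering works on distances, so similarity is converted to a distance and the threshold T is converted the same way; average linkage, the similarity-to-distance conversion, and dropping single-member groups as "outside the cluster" are illustrative assumptions rather than requirements of the method.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_noises(sim_table, T):
    """Return a cluster match list: cluster id -> member indices ordered by mean similarity."""
    dist = sim_table.max() - sim_table            # convert similarity to a distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=sim_table.max() - T, criterion="distance")

    groups = {}
    for idx, lab in enumerate(labels):
        groups.setdefault(lab, []).append(idx)

    match_list = {}
    for lab, members in groups.items():
        if len(members) < 2:                      # single members stay outside the clusters
            continue
        mean_sim = {m: sim_table[m, [x for x in members if x != m]].mean() for m in members}
        match_list[lab] = sorted(members, key=mean_sim.get, reverse=True)
    return match_list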
The details of the process step of thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster and gathering these matrices and determining the matrix obtained as a result of calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster (5):
The combining of the clustered noise matrices and the creation of the cluster fingerprint are performed in this step.
For each cluster, the noise matrices within the cluster are combined, following the detailed processing steps given in Figure 5 and using the cluster/noise-matrix relations created in the cluster match list (44).
In case there is a plurality of clusters, the formed clusters come to the combining step one by one, in any order. Access to the noise matrices in the cluster is provided within the combining element (51). A primary fingerprint is formed first using these matrices. The primary fingerprint is a layered structure in which multiple noise matrices of different dimensions can coexist; each noise matrix is expressed as a layer in the primary fingerprint. As an example, representative images of a noise matrix (561) in the first layer, a noise matrix (562) in the second layer, and a noise matrix (563) in the third layer are given.
The first of the matrices listed for the cluster is recorded as the first layer of the primary fingerprint structure (52). The second and subsequent noise matrices in the cluster will be recorded in other layers of the primary fingerprint structure. To this end, it is checked whether there is a noise matrix in the cluster that has not yet been added to the primary fingerprint (53). If such a noise matrix exists, it is accessed (54) and the amount of shift that gives the highest match with the mean of the primary fingerprint is measured. The amount of shift with the highest match can be measured with a template matching function. For example, the normalized cross-correlation function can be used as a template matching function. In this case, the similarity between the associated noise matrix and the primary fingerprint mean is measured as follows: the primary fingerprint mean is multiplied by the noise matrix shifted, one by one, through all possible shift amounts, and the products are summed. Thus, a total value is obtained for each shift amount. The shift amount for which the total value is highest is determined as the shift amount that produces the highest match between the mean of the primary fingerprint and the noise matrix. As a result of this operation, the noise matrix is shifted by the determined amount, that is, it is shifted (55) to the coordinate where it shows the highest match with the mean of the primary fingerprint. The shifted state of the noise matrix is added to the primary fingerprint as a layer and the primary fingerprint is updated (56). These processes continue until all the noise matrices in the relevant cluster have been added as layers to the primary fingerprint.
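A minimal sketch of steps 52-56 follows: the first noise matrix seeds the primary fingerprint, and each further matrix is shifted to the offset where it best matches the current fingerprint mean before being added as a layer. Plain zero-mean cross-correlation (via FFT) stands in for the normalized cross-correlation named above, a layer is stored as a (matrix, row offset, column offset) triple, and layer_mean is the averaging of steps 55/57 sketched after the next paragraphs; all of these representation choices are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def best_shift(mean_fp, noise):
    """Offset (dy, dx) of the noise matrix relative to the fingerprint mean with the highest match."""
    corr = fftconvolve(mean_fp - mean_fp.mean(),
                       (noise - noise.mean())[::-1, ::-1], mode="full")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # convert indices of the "full" correlation back to top-left offsets
    return dy - (noise.shape[0] - 1), dx - (noise.shape[1] - 1)

def merge_cluster(noises):
    layers = [(noises[0], 0, 0)]                       # step 52: first layer at the origin
    for noise in noises[1:]:                           # steps 53-54: remaining matrices in order
        dy, dx = best_shift(layer_mean(layers), noise) # step 55: shift of highest match
        layers.append((noise, dy, dx))                 # step 56: add as a new layer
    return layer_mean(layers)                          # step 57: cluster fingerprint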
The average of the values of the noise matrices in the primary fingerprint is calculated (57) at the end of the combining process and the matrix obtained as a result of this calculation is recorded as a cluster fingerprint (58).
The calculation of the mean of the primary fingerprint is performed in steps 55 and 57. This calculation is performed by adding the overlapping noise matrix values in the layers of the primary fingerprint and dividing by the number of overlapping values. For example, considering the three layers shown as the noise matrix (561) in the first layer, the noise matrix (562) in the second layer and the noise matrix (563) in the third layer, the panorama fingerprint obtained as a result of the calculation in process step 57 can be expressed as follows: in the region where all three layers overlap, the corresponding region of the panorama fingerprint contains the average of the values of the three noise matrices; the leftmost values consist of the values of the noise matrix in the layer marked as the noise matrix (563) in the third layer, and the values at the far right and top consist of the values in the layer marked as the noise matrix (561) in the first layer. The values for the regions where two layers intersect consist of the average of the values of those two layers in those regions.
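A sketch of this averaging follows, using the (matrix, row offset, column offset) layer representation assumed in the previous sketch: overlapping layer values are summed and divided by the number of layers covering each pixel.

import numpy as np

def layer_mean(layers):
    """Average the layers of the primary fingerprint over their overlapping pixels."""
    y0 = min(dy for _, dy, _ in layers)
    x0 = min(dx for _, _, dx in layers)
    H = max(dy - y0 + m.shape[0] for m, dy, _ in layers)
    W = max(dx - x0 + m.shape[1] for m, _, dx in layers)
    total = np.zeros((H, W))
    count = np.zeros((H, W))
    for m, dy, dx in layers:
        r, c = dy - y0, dx - x0
        total[r:r + m.shape[0], c:c + m.shape[1]] += m
        count[r:r + m.shape[0], c:c + m.shape[1]] += 1
    return total / np.maximum(count, 1)     # divide by the number of overlapping values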
Details of the process step (6) of obtaining the PRNU fingerprint matrix for each cluster and recording the resulting camera fingerprint on a recording medium:
As many cluster fingerprints are produced as there are clusters resulting from the process steps of hierarchical clustering of noise matrices according to the calculated pairwise similarities (4) and of thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster, gathering these matrices and determining the matrix obtained by calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster (5). All of these cluster fingerprints can be used with high accuracy in the source identification and verification of panorama photos of the relevant camera. Each cluster fingerprint captures the fingerprint of the camera sensor noise that undergoes a particular transformation in the panorama photos that can be taken with the relevant camera, or in panorama photos produced with panorama creation software. Therefore, all of these different cluster fingerprints together are considered and recorded as the panorama fingerprint of the respective photographing device. This record can be in a file type readable by a separate computer, or it can be saved as a single computer-readable file in any data type in which a plurality of files can be expressed together.
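How the recorded cluster fingerprints might be applied in a query can be sketched as follows. The decision rule of taking the best match over all cluster fingerprints and comparing it to the decision threshold is an assumption made for illustration; the text above only states that all cluster fingerprints are used in identification and verification.

def source_matches(query_noise, cluster_fingerprints, B, decision_threshold):
    """Attribute the query panorama to the camera if any cluster fingerprint matches well enough."""
    best = max(B(query_noise, fp) for fp in cluster_fingerprints)
    return best >= decision_threshold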
As seen in Figure-6, although the panorama photos and the photos used to obtain the PRNU fingerprints were taken with the same camera (the match condition), the source of the panorama photos cannot be verified with the camera fingerprints obtained by Known Method 1 (US7787030). The similarity between the camera fingerprint and the panorama image obtained with Known Method 2 (US9525866) is measured as four times lower than with the inventive method; that the similarity measure is this much higher demonstrates the advantage of the inventive method. When the camera fingerprint obtained by the method described in Known Method 1 (US7787030) is used, false negative results are produced for all panorama photos. When Known Method 2 (US9525866) is used with a decision threshold value of 200, erroneous results are produced for 25% of the images. On the other hand, when the fingerprint obtained by the inventive method is used, all of the images were matched correctly; that is, panorama images are associated with the source camera with 100% accuracy.
Figure-7 shows the measured similarities in the Mismatch condition. Similarity values obtained with Known Method 1 (US7787030), Known Method 2 (US9525866) and the inventive method show similar performance in case of mismatch. While the camera fingerprint obtained by the inventive method measures the similarity much better than other methods in case of matching, it does not increase the false-acceptance rate in case of mismatch.
REFERENCES
US7787030: Fridrich, J., Goljan, M., & Lukas, J. (2010). U.S. Patent No. 7,787,030. Washington, DC: U.S. Patent and Trademark Office.
US9525866: Charlton, S. T., & Martin, C. D. (2016). Building a digital camera fingerprint from cropped or corrupted images. U.S. Patent No. 9,525,866. National Security Agency.
(Karakucuk, 2015): Karaküçük, A., Dirik, A. E., Sencar, H. T., & Memon, N. D. (2015, March). Recent advances in counter PRNU based source attribution and beyond. In Media Watermarking, Security, and Forensics 2015 (Vol. 9409, p. 94090N). International Society for Optics and Photonics.

Claims

1. A method that enables the identification, recognition, association, and verification of the sources of panorama photographs by producing the source camera fingerprint from the panorama photographs, characterized by comprising the following process steps;
• providing access to panorama photos obtained with a camera (1),
• fragmentation of accessed panorama photos and/or fragmentation of camera sensor noise associated with these panorama photos (2),
• obtaining or reading noise matrices for parts and calculating their pairwise similarity (3),
• hierarchical clustering of noise matrices according to the calculated pairwise similarities (4),
• thresholding and positioning the noise matrices within the cluster using similarity criteria for each cluster and gathering these matrices and determining the matrix obtained as a result of calculating an average according to the number of overlapping pixels as a camera fingerprint for the relevant cluster (5),
• obtaining the PRNU fingerprint matrix for each cluster and recording the resulting camera fingerprint on a recording medium (6).
2. Method according to claim 1, characterized by comprising the following process step, in which D is any noise removal function, such as two-dimensional Wiener, multi-resolution Wavelet, or two-dimensional Median; Px is any panorama photo or a fragment thereof; and Nx is the noise matrix for a panorama photo or fragment; obtaining the sensor noise extraction by the equation (32) Nx = Px - D(Px).
3. Method according to claim 1, characterized by comprising the following process step; using a clustering similarity threshold T (42) selected according to the similarity value function used in the pairwise similarity value matrix (33) in clustering similarity values.
4. Method according to claim 1, characterized by comprising the following process step; assigning the most similar noise matrices to clusters in hierarchical clustering.
5. Method according to claim 1, characterized by comprising the following process steps;
• providing access to the noise matrices in the cluster (51),
• registering the first of the matrices listed for the cluster as the first layer of the primary fingerprint structure (52),
• checking if there is a noise matrix in the cluster that has not yet been added to the primary fingerprint (53),
• if there is a noise matrix in the cluster that has not yet been added to the primary fingerprint,
o accessing the next noise matrix (54),
o shifting the noise matrix to the coordinate where it has the highest match with the mean of the primary fingerprint (55),
o adding the shifted noise matrix to the primary fingerprint as a layer and updating the primary fingerprint (56),
• if there is no noise matrix in the cluster that has not yet been added to the primary fingerprint,
o calculating the mean of the primary fingerprint (57),
o recording a cluster fingerprint (58).
6. Method according to claim 5, characterized by comprising the following process step; calculating the mean of the primary fingerprint as the arithmetic mean of the overlapping noise matrix values present in the layers of the primary fingerprint, by summing these values and dividing by the number of overlapping values, or with any other type of statistical average.
7. Method according to claim 1, characterized by comprising the following process step; generating the source camera fingerprint for each type of photograph that is combined through geometric transformation processes, and using the same in detecting, recognizing, associating and verifying photographic sources.
PCT/TR2021/051391 2021-11-18 2021-12-10 Source camera sensor fingerprint generation method from panorama photos WO2023091105A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR2021/018005 2021-11-18
TR2021/018005A TR2021018005A2 (en) 2021-11-18 2021-11-18 SOURCE CAMERA SENSOR FINGERPRINT GENERATION FROM PANORAMA PHOTOS

Publications (1)

Publication Number Publication Date
WO2023091105A1 true WO2023091105A1 (en) 2023-05-25

Family

ID=85113948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2021/051391 WO2023091105A1 (en) 2021-11-18 2021-12-10 Source camera sensor fingerprint generation method from panorama photos

Country Status (2)

Country Link
TR (1) TR2021018005A2 (en)
WO (1) WO2023091105A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006017011A1 (en) * 2004-07-13 2006-02-16 Eastman Kodak Company Identification of acquisition devices from digital images
US20120087589A1 (en) * 2009-02-13 2012-04-12 Li Chang-Tsun Methods for identifying imaging devices and classifying images acquired by unknown imaging devices
GB2486987A (en) * 2012-01-03 2012-07-04 Forensic Pathways Ltd Classifying images using enhanced sensor noise patterns
US20170169293A1 (en) * 2014-07-21 2017-06-15 Politecnico Di Torino Improved method for fingerprint matching and camera identification, device and system
CN108319986A (en) * 2018-02-08 2018-07-24 深圳市华云中盛科技有限公司 The identification method and its system of image sources based on PRNU
CN112367457A (en) * 2020-04-08 2021-02-12 齐鲁工业大学 Video PRNU noise extraction method and camera source detection method

Also Published As

Publication number Publication date
TR2021018005A2 (en) 2021-12-21

Similar Documents

Publication Publication Date Title
Ardizzone et al. Copy–move forgery detection by matching triangles of keypoints
US8160366B2 (en) Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
RU2647670C1 (en) Automated methods and systems of identifying image fragments in document-containing images to facilitate extraction of information from identificated document-containing image fragments
US8374386B2 (en) Sensor fingerprint matching in large image and video databases
Battiato et al. Multimedia forensics: discovering the history of multimedia contents
Zhang et al. Detecting and extracting the photo composites using planar homography and graph cut
KR101706216B1 (en) Apparatus and method for reconstructing dense three dimension image
Irschara et al. Towards wiki-based dense city modeling
US20140093122A1 (en) Image identifiers and methods and systems of presenting image identifiers
Gaborini et al. Multi-clue image tampering localization
EP2136319A2 (en) Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
CN106169064A (en) The image-recognizing method of a kind of reality enhancing system and system
Sharma et al. Comprehensive analyses of image forgery detection methods from traditional to deep learning approaches: an evaluation
Maiwald Generation of a benchmark dataset using historical photographs for an automated evaluation of different feature matching methods
Nawaz et al. Single and multiple regions duplication detections in digital images with applications in image forensic
Liu et al. Overview of image inpainting and forensic technology
CN106851140B (en) A kind of digital photo images source title method using airspace smothing filtering
CN110728296B (en) Two-step random sampling consistency method and system for accelerating feature point matching
Zheng et al. The augmented homogeneous coordinates matrix-based projective mismatch removal for partial-duplicate image search
WO2023091105A1 (en) Source camera sensor fingerprint generation method from panorama photos
CN111860486B (en) Card identification method, device and equipment
Jaafar et al. New copy-move forgery detection algorithm
Kulkarni et al. Source camera identification using GLCM
Yılmaz et al. Solving Double‐Sided Puzzles: Automated Assembly of Torn‐up Banknotes Evidence
Manda et al. Image stitching using RANSAC and Bayesian refinement