CN116778305A - Image copy-move forgery detection method based on keypoint filtering - Google Patents

Image copy-move forgery detection method based on keypoint filtering

Info

Publication number
CN116778305A
Authority
CN
China
Prior art keywords
filtering
key point
image
key
keypoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310749649.2A
Other languages
Chinese (zh)
Inventor
董云云
岳广宇
周维
段清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN202310749649.2A priority Critical patent/CN116778305A/en
Publication of CN116778305A publication Critical patent/CN116778305A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/95 Pattern authentication; Markers therefor; Forgery detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Abstract

The invention provides an image copy-move forgery detection method based on keypoint filtering, comprising the following steps: S1, extracting keypoint features and descriptor information from the original image and from copies enlarged once and twice; S2, formatting the keypoint features; S3, matching and filtering the keypoints of the original image; S4, merging the filtering result with the keypoints of the once-enlarged image, then matching and filtering again; S5, merging that filtering result with the keypoints of the twice-enlarged image, then matching and filtering again; S6, integrating and de-duplicating the keypoint filtering results; S7, image forgery localization. The invention uses an improved AdaLAM algorithm to match and filter SIFT keypoints, which yields a stronger keypoint filtering effect and accurately localizes the feature differences between the forged region and other regions, thereby realizing image copy-move forgery detection.

Description

Image copy-move forgery detection method based on keypoint filtering
Technical Field
The invention belongs to the technical field of image detection, and particularly relates to an image copy-move forgery detection method based on keypoint filtering.
Background
Digital media manipulation and information counterfeiting have become a serious problem in modern information systems. The advent of easy-to-use photo editing software has greatly lowered the barrier to tampering with digital photographs. As a result, digital image forgery incidents occur frequently, casting serious doubt on the authenticity of digital images. Most everyday modifications to images are standard and widely accepted, such as cropping, rotation, or horizon correction. Corrections of this kind serve aesthetic or entertainment purposes, do not affect the authenticity of the image, and do not fall into the category of forgery. Forgery analysis therefore asks whether an alteration changes what the picture originally conveyed. If analysis shows, for example, that key original pixels have been modified so that the content deviates from its original meaning, the forged image may have potentially serious consequences in daily life. Accordingly, image forgery detection, covering splicing, retouching, and copy-move operations, has received considerable attention in important digital image applications. Among these, copy-move forgery, in which one or more regions are copied and pasted elsewhere within the same image, is one of the most common and hardest types to detect. Typical motivations for such forgery include hiding elements of the image or emphasizing specific objects.
In the prior art, exhaustively searching all possible image regions of different sizes and locations is computationally infeasible, since the cloned region may lie anywhere and take any shape. Furthermore, because the pasted region comes from the same image, its features (e.g., color and noise statistics) are consistent with the rest of the image, making the forgery hard to detect by looking for feature differences between the forged region and other regions. Copy-move forgery is therefore harder to detect than other forgery types (e.g., splicing and retouching). Designing a reliable and effective copy-move forgery detection method is thus challenging work of real practical significance for image content screening, digital forensics, and related applications.
Disclosure of Invention
The embodiments of the invention aim to provide an image copy-move forgery detection method based on keypoint filtering, so as to strengthen the method's keypoint extraction capability and to address the poor generalization of prior-art methods across images of different styles, which stems from a lack of adaptability when filtering and describing qualifying keypoints.
To solve these technical problems, the invention adopts the following technical scheme. The image copy-move forgery detection method based on keypoint filtering comprises the following steps:
S1, extracting keypoint features and descriptor information from the original image and from copies enlarged once and twice;
S2, formatting the keypoint features;
S3, matching and filtering the keypoints of the original image;
S4, merging the filtering result with the keypoints of the once-enlarged image, then matching and filtering again;
S5, merging that filtering result with the keypoints of the twice-enlarged image, then matching and filtering again;
S6, integrating and de-duplicating the keypoint filtering results;
S7, image forgery localization.
Further, the S1 extraction of keypoint features and descriptor information from the original image and its once- and twice-enlarged copies is specifically as follows:
S11, image preprocessing: first apply Gaussian smoothing to the original image and to the once- and twice-enlarged copies, and compute the gradient magnitude and direction of each pixel;
S12, keypoint detection and descriptor extraction: locate keypoints with salient features in the original image and the enlarged copies, and extract a descriptor for each keypoint from the gradient, direction, pixel color values, and other information at its pixel.
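The preprocessing in S11 can be sketched in a few lines of NumPy. This is an illustrative stand-in only: the function names and the nearest-neighbour upscaling are ours, not the patent's, and the actual method presumably relies on a full SIFT implementation for detection and description.

```python
import numpy as np

def gradient_magnitude_and_direction(img):
    """Per-pixel gradient magnitude and direction (as in step S11).

    `img` is a 2-D grayscale array; a complete implementation would
    first apply Gaussian smoothing, as the patent describes.
    """
    gy, gx = np.gradient(img.astype(float))   # central differences
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)            # radians, in (-pi, pi]
    return magnitude, direction

def upscale(img, factor):
    """Nearest-neighbour upscaling, a crude stand-in for the enlarged
    copies from which extra keypoints are extracted in S1."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

On a horizontal intensity ramp the gradient is unit magnitude pointing along the x axis, which is a quick sanity check for the sign conventions above.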
Further, the S2 keypoint feature formatting is specifically as follows:
For each keypoint P_j extracted into the keypoint set KP, local binary features are extracted to form a 132-dimensional keypoint feature descriptor:
KP = {P_1, P_2, P_3, …, P_s} (1)
P_j = {PT_j, A_j, S_j, D_j} (2)
where s is the number of keypoints; j denotes the j-th keypoint, j ∈ {1, 2, …, s}; PT_j is a 1×2 matrix giving the two-dimensional coordinates of the current keypoint; A_j is a 1×1 matrix giving the gradient direction near the keypoint; S_j is a 1×1 matrix giving the importance of the current keypoint; and D_j is a 1×128 feature vector.
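A minimal sketch of the 132-dimensional packing described by equations (1)-(2), assuming the four fields are simply concatenated (2 coordinates + 1 gradient direction + 1 importance score + 128 SIFT dimensions = 132). The function name and layout are illustrative, not from the patent text.

```python
import numpy as np

def format_keypoint(pt, angle, score, descriptor):
    """Pack one keypoint P_j = {PT_j, A_j, S_j, D_j} into a single
    132-dimensional vector: PT_j (1x2), A_j (1x1), S_j (1x1), and
    D_j (1x128), mirroring equations (1)-(2)."""
    pt = np.asarray(pt, dtype=float).reshape(2)           # PT_j
    d = np.asarray(descriptor, dtype=float).reshape(128)  # D_j
    return np.concatenate([pt, [float(angle)], [float(score)], d])
```

The concatenated length is 2 + 1 + 1 + 128 = 132, matching the dimensionality quoted throughout the description.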
Further, the S3 matching and filtering of the original image's keypoints is specifically as follows:
S31, copy the keypoint set KP obtained in S2 and add the image height to each ordinate, yielding the keypoint set KP_R; mark the original keypoint set KP as KP_L;
S32, select a limited number of strongly representative, well-distributed keypoints;
S33, run a highly parallel random sample consensus (RANSAC) algorithm with a sample-adaptive inlier threshold, verifying local affine consistency within each keypoint's neighborhood;
S34, output the union of all keypoint inliers, namely the filtered keypoint sets, denoted KP_L^(1) and KP_R^(1).
Further, the screening method for strongly representative, well-distributed keypoints is as follows:
For each keypoint in a set, take the 132-dimensional descriptor as the standard and compute the Euclidean distances to the other keypoints, sorted in ascending order, as the distance vector D = {d_1, d_2, …, d_{s-1}}, where s is the number of keypoints; then compute the adjacent-distance ratios M_i = d_i / d_{i+1} in turn, where i ∈ {1, 2, …, s-2}. If there exists 1 ≤ i ≤ s-2 such that M_i < τ and M_{i+1} ≥ τ, where τ is a threshold and i an intermediate parameter, the candidate point is matched with the corresponding feature points, i.e., it is a strongly representative, well-distributed keypoint.
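The ratio test above can be sketched as follows, assuming the distance vector is already sorted in ascending order. The function `g2nn_match` and its return convention are our own illustration; τ = 0.64 is the AdaLAM default quoted later in the description.

```python
import numpy as np

def g2nn_match(distances, tau=0.64):
    """Generalized 2NN test on a sorted distance vector
    D = {d_1, ..., d_{s-1}} (ascending order).

    Returns the first index i (0-based) at which the adjacent-distance
    ratio M_i = d_i / d_{i+1} satisfies M_i < tau while M_{i+1} >= tau,
    or None if the keypoint should be rejected as unrepresentative.
    """
    d = np.asarray(distances, dtype=float)
    m = d[:-1] / d[1:]                 # adjacent-distance ratios M_i
    for i in range(len(m) - 1):
        if m[i] < tau and m[i + 1] >= tau:
            return i
    return None
```

A keypoint whose nearest descriptor distances are all of similar magnitude produces ratios near 1 and is rejected; a sharp drop followed by a plateau passes the test.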
Further, the S4 merging of the filtering result with the once-enlarged keypoints and subsequent matched filtering is specifically: the filtered keypoint sets KP_L^(1) and KP_R^(1) output by S3 are merged with the keypoint information of the once-enlarged image, and the merged result is put through steps S32-S34 again, increasing the number of keypoints and their positional aggregation during filtering and yielding the keypoint sets KP_L^(2) and KP_R^(2).
Further, the S5 merging of the filtering result with the twice-enlarged keypoints and subsequent matched filtering is specifically:
the filtered keypoint sets KP_L^(2) and KP_R^(2) output by S4 are merged with the keypoint information of the twice-enlarged image, and the merged result is put through steps S32-S34 again, increasing the number of keypoints and their positional aggregation during re-filtering and yielding the keypoint sets KP_L^(3) and KP_R^(3).
Further, the S6 integration and de-duplication of the keypoint filtering results is specifically:
for the two point sets KP_L^(3) and KP_R^(3) obtained by the two rounds of merging and filtering, replace the ordinates of both sets with the mean of the corresponding points' ordinates, then remove duplicate point coordinates and take the union of the two sets, giving the final keypoint set DP;
DP_x = {(X_0, Y_0), (X_1, Y_1), …, (X_M, Y_M)} (3)
where x indexes the two keypoint sets KP_R or KP_L, X and Y denote the abscissa and ordinate of points in the set, and M is the number of points in the current set.
Further, the S7 image forgery localization is specifically: map each point of the keypoint set DP onto the original picture. If the total number of filtered keypoints in DP is less than 10, judge the current image authentic and leave the original picture unmarked; if the total is 10 or more, mark the image by filling in the pixels at the keypoint coordinates. The more keypoints marked and the denser the marks, the higher the probability that the current region has been copy-move forged.
The beneficial effects of the invention are as follows:
(1) The invention matches and filters SIFT keypoints with an improved AdaLAM algorithm, obtaining a better keypoint filtering effect.
(2) The workflow iterates three operations, enlarging the image, extracting SIFT keypoints, and matching and filtering, which both enriches the number of extracted keypoints and concentrates them in the copy-move forged region.
(3) Compared with the prior art, the method can map and localize the feature differences between the forged region and other regions, realizing detection while avoiding detection errors caused by noise and image clutter.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a block diagram of an embodiment of the keypoint-filtering-based image copy-move forgery detection method of the present invention;
FIG. 2 is a flow chart of a keypoint extraction process based on keypoint filtering;
FIG. 3 is a flow chart of a keypoint matching and filtering process based on keypoint filtering;
FIG. 4 is a graph showing the comparison of the effects of different keypoint-based detection methods.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a block diagram of an embodiment of the keypoint-filtering-based image copy-move forgery detection method of the present invention. As shown in Fig. 1, the specific steps of the method are as follows:
S1, extracting SIFT keypoint features and descriptor information from the original image and from copies enlarged once and twice:
Fig. 2 is a flow chart of the keypoint extraction process. As the figure shows, the invention extracts SIFT keypoint sets from the original image and from the once- and twice-enlarged copies separately and combines them, thereby obtaining enough keypoints.
SIFT (Scale-Invariant Feature Transform) is an image processing algorithm whose main purpose is to detect and describe features in an image. The algorithm is highly robust and works normally under image rotation and scale change. SIFT can also detect fine features in an image, so it plays an important role in visual object detection and image retrieval. Its processing flow can be briefly described as follows:
S11, image preprocessing: the image is first Gaussian-smoothed, and the gradient magnitude and direction of each pixel are computed.
S12, key point detection: key points are found in the image, which have prominent visual features such as edges, corner points, etc.
S13, feature description: a descriptor is extracted for each keypoint from the gradient, direction, pixel color values, and other information at its pixel; the descriptor represents statistical information about the pixels around the keypoint.
S14, feature matching: descriptors of keypoints in the two images are matched to find the best matches.
S2, formatting key point features:
For each keypoint P_j extracted into the keypoint set KP, local binary pattern features are extracted to form a 132-dimensional keypoint feature descriptor, as detailed in equations (1) and (2):
KP = {P_1, P_2, P_3, …, P_s} (1)
P_j = {PT_j, A_j, S_j, D_j} (2)
where s in equation (1) is the number of keypoints and j denotes the j-th keypoint, j ∈ {1, 2, …, s}. In equation (2), PT_j is a 1×2 matrix giving the two-dimensional coordinates of the current keypoint; A_j is a 1×1 matrix giving the gradient direction near the keypoint, computed with the SIFT algorithm; S_j is a 1×1 matrix giving the importance of the current keypoint; and finally D_j is the 1×128 feature vector computed by the SIFT algorithm.
S3, matching and filtering the keypoints of the original image:
As shown in Fig. 3, the invention applies the AdaLAM algorithm to keypoint matching and filtering. AdaLAM is an image matching algorithm for finding similar parts of two images: first the two images are input and their SIFT keypoint feature information is extracted; then a limited number of strongly representative, well-distributed keypoints are selected based on neighboring compatible correspondences using the g2NN algorithm. For each keypoint in a set, the 132-dimensional descriptor is taken as the standard and the Euclidean distances to the other keypoints, sorted in ascending order, form the distance vector D = {d_1, d_2, …, d_{s-1}}, where s is the number of keypoints. Searching with the cyclic-neighbor criterion, the ratios M_i = d_i / d_{i+1} are computed in turn, where i ∈ {1, 2, …, s-2}; if there exists 1 ≤ i ≤ s-2 such that M_i < τ and M_{i+1} ≥ τ, where τ is a threshold (τ = 0.64 in the AdaLAM algorithm), the candidate point is matched with the corresponding feature points. Next, a highly parallel RANSAC is run with a sample-adaptive inlier threshold, verifying local affine consistency within each keypoint's neighborhood. Finally the union of all keypoint inliers is output; each keypoint set must provide a sufficiently strong number of supports within the specific inlier threshold.
On the input side, the invention copies the previously obtained KP and adds the image height to each ordinate, calling the result KP_R; the earlier copy is called KP_L. The effect of this duplication is that two vertically stacked pictures with identical information are fed to the AdaLAM algorithm. When matching points, consider a point p on the copied region in one KP set: the other set contains the corresponding point p' of the copied region in the offset image, whose descriptor information is identical to p's except for its coordinates. For copy-move forgery detection, p and p' must not be matched to each other, because this trivial self-correspondence carries no evidence; the genuinely forged counterpart of a point is in fact the second-best match in the other set. To correctly match points of the source region in one set to points of the forged region in the other set, the invention therefore matches each point to its second-best suitable counterpart, achieving the desired matching effect. The other steps still follow the AdaLAM procedure.
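The duplication-and-second-match trick can be illustrated with a brute-force sketch. The function, the toy descriptors, and the O(n²) distance matrix are ours for illustration; AdaLAM proper is far more efficient.

```python
import numpy as np

def second_nearest_matches(coords, descs, img_height):
    """Sketch of the duplication trick: the keypoint set is copied with
    the image height added to each ordinate (KP_L vs. KP_R), and every
    point is matched to the *second*-nearest descriptor overall, since
    the nearest one is always its own offset copy with an identical
    descriptor."""
    descs = np.asarray(descs, dtype=float)
    coords = np.asarray(coords, dtype=float)
    kp_r = coords + np.array([0.0, img_height])          # offset copy
    # pairwise descriptor distances, self-match excluded
    dist = np.linalg.norm(descs[:, None] - descs[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nearest = dist.argmin(axis=1)   # nearest *other* point = 2nd-best match
    return [(tuple(coords[i]), tuple(kp_r[j]))
            for i, j in enumerate(nearest)]
```

With two pairs of near-identical descriptors, each point pairs up with its clone's counterpart in the offset set, which is exactly the correspondence a copy-move detector wants to surface.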
S4, merging the filtering result with the once-enlarged keypoints, then matching and filtering:
First, the filtered keypoint sets KP_L^(1) and KP_R^(1) are merged with the keypoint information of the once-enlarged image, and the merged result is fed into the algorithm again, increasing the number of keypoints and their positional aggregation during filtering and yielding the point sets KP_L^(2) and KP_R^(2).
S5, merging the filtering result with the twice-enlarged keypoints, then matching and filtering:
The filtered keypoint sets KP_L^(2) and KP_R^(2) are merged with the keypoint information of the twice-enlarged image, and the merged result is fed into the algorithm again, increasing the number of keypoints and their positional aggregation during re-filtering and yielding the keypoint sets KP_L^(3) and KP_R^(3).
S6, integrating and de-duplicating the keypoint filtering results:
Only KP_L^(3) and KP_R^(3) remain at this point; taking the coordinate information of these two keypoint sets gives DP_L and DP_R, each holding a number of points in the copy-move forged region. Each point set is represented as follows:
DP_x = {(X_0, Y_0), (X_1, Y_1), …, (X_M, Y_M)} (3)
where x indexes the two keypoint sets KP_R or KP_L, X and Y denote the abscissa and ordinate of points in the set, and M is the number of points in the current set.
During matching and filtering, the algorithm actually matches points between the two stacked pictures, whose information is identical except for the offset ordinate, yet only one final point set is wanted. So in this step the two sets are integrated by replacing the ordinates of both point sets with the mean of the corresponding points' ordinates. After averaging, some point coordinates are duplicated, so de-duplication is also performed by taking the union of the two sets. This yields the final completed keypoint set DP.
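A minimal sketch of the S6 integration, assuming `dp_l[i]` and `dp_r[i]` are matched pairs and, for simplicity, averaging both coordinates of each pair (the text averages only the offset coordinate); the function name and the set-based de-duplication are our own choices.

```python
import numpy as np

def merge_and_dedup(dp_l, dp_r):
    """Integrate DP_L and DP_R (step S6): corresponding points are
    replaced by the mean of their coordinates, then duplicates are
    removed by taking the set union, giving the final set DP."""
    dp_l = np.asarray(dp_l, dtype=float)
    dp_r = np.asarray(dp_r, dtype=float)
    averaged = (dp_l + dp_r) / 2.0
    # union with duplicate removal, sorted for a deterministic result
    return sorted({tuple(p) for p in averaged})
```

Pairs that average to the same coordinates collapse into a single DP entry, which is the de-duplication effect the description calls for.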
S7, image forgery localization:
Each point of the keypoint set DP is mapped onto the original picture to judge whether the current image is forged and to identify the forged region. Specifically, if the total number of filtered points in DP is less than 10, the current image is judged authentic and the original picture is left unmarked; otherwise the image is marked by filling in the pixels at the keypoint coordinates. The more keypoints identified and the denser they are, the greater the likelihood that the current region has undergone a copy-move forgery operation.
Table 1 compares the image-level detection results of the present invention with other methods, reporting precision, recall, and F1 score. Precision measures, among the samples predicted positive, how many are truly positive; recall measures, among all truly positive samples, how many are predicted positive. The two indices evaluate the detection capability of the current algorithm from two complementary angles, one with respect to the predicted samples and one with respect to the original samples, so in practice they cannot both reach high values at the same time; there is a trade-off between them. To evaluate the overall performance of the algorithm, the F1 score is used here: as the harmonic mean of precision and recall, it accounts for both indices simultaneously.
Table 1. Image-level detection comparison between the present invention and other methods

Method         Precision   Recall   F1 score
BusterNet      0.554       0.453    0.498
HFPM           0.529       0.474    0.500
DOA-GAN        0.585       0.630    0.607
AdaLAM         0.775       0.609    0.682
The invention  0.807       0.640    0.714
As Table 1 shows, the BusterNet method performs worst at the image level. On the CASIA-CMFD dataset, DOA-GAN and AdaLAM reach F1 scores of 0.607 and 0.682 respectively, with AdaLAM's overall performance superior to the first three methods. The method of the invention leads comprehensively on all three indices: precision, recall, and F1 score.
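The F1 scores in Table 1 are consistent with the harmonic-mean definition given above; a quick check:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall, the F1 score reported
    in Table 1."""
    return 2 * precision * recall / (precision + recall)

# Table 1 rows for the invention and AdaLAM:
# f1(0.807, 0.640) rounds to 0.714, f1(0.775, 0.609) rounds to 0.682
```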
In addition, this embodiment also compares how the copy-move forged region is displayed. Fig. 4 compares the effects of different keypoint-based detection methods. SIFT+RANSAC is the keypoint filtering method most commonly used in engineering. As can be seen, the present method not only concentrates the filtered keypoints more tightly in the copy-move forged region but also effectively filters out some misjudged points, making the detection result clearer.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, with reference to the description of method embodiments in part.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (9)

1. An image copy-move forgery detection method based on keypoint filtering, characterized by comprising the following steps:
S1, extracting keypoint features and descriptor information from the original image and from copies enlarged once and twice;
S2, formatting the keypoint features;
S3, matching and filtering the keypoints of the original image;
S4, merging the filtering result with the keypoints of the once-enlarged image, then matching and filtering again;
S5, merging that filtering result with the keypoints of the twice-enlarged image, then matching and filtering again;
S6, integrating and de-duplicating the keypoint filtering results;
S7, image forgery localization.
2. The keypoint-filtering-based image copy-move forgery detection method according to claim 1, wherein the S1 extraction of keypoint features and descriptor information from the original image and its once- and twice-enlarged copies is specifically as follows:
S11, image preprocessing: first apply Gaussian smoothing to the original image and to the once- and twice-enlarged copies, and compute the gradient magnitude and gradient direction of each pixel;
S12, keypoint detection and descriptor extraction: locate keypoints with salient features in the original image and the enlarged copies, and extract a descriptor for each keypoint from the gradient, direction, pixel color values, and other information at its pixel.
3. The keypoint-filtering-based image copy-move forgery detection method according to claim 1, wherein the S2 keypoint feature formatting is specifically as follows:
For each keypoint P_j extracted into the keypoint set KP, local binary features are extracted to form a 132-dimensional keypoint feature descriptor:
KP = {P_1, P_2, P_3, …, P_s} (1)
P_j = {PT_j, A_j, S_j, D_j} (2)
where s is the number of keypoints; j denotes the j-th keypoint, j ∈ {1, 2, …, s}; PT_j is a 1×2 matrix giving the two-dimensional coordinates of the current keypoint; A_j is a 1×1 matrix giving the gradient direction near the keypoint; S_j is a 1×1 matrix giving the importance of the current keypoint; and D_j is a 1×128 feature vector.
4. The keypoint-filtering-based image copy-move forgery detection method according to claim 1, wherein the S3 matching and filtering of the original image's keypoints is specifically as follows:
S31, copy the keypoint set KP obtained in S2 and add the image height to each ordinate, yielding the keypoint set KP_R; mark the original keypoint set KP as KP_L;
S32, select a limited number of strongly representative, well-distributed keypoints;
S33, run a highly parallel random sample consensus algorithm with a sample-adaptive inlier threshold, verifying local affine consistency within each keypoint's neighborhood;
S34, output the union of all keypoint inliers, namely the filtered keypoint sets, denoted KP_L^(1) and KP_R^(1).
5. The keypoint-filtering-based image copy-move forgery detection method according to claim 4, wherein the screening method for strongly representative, well-distributed keypoints is as follows:
For each keypoint in a set, take the 132-dimensional descriptor as the standard and compute the Euclidean distances to the other keypoints, sorted in ascending order, as the distance vector D = {d_1, d_2, …, d_{s-1}}, where s is the number of keypoints; then compute the adjacent-distance ratios M_i = d_i / d_{i+1} in turn, where i ∈ {1, 2, …, s-2}. If there exists 1 ≤ i ≤ s-2 such that M_i < τ and M_{i+1} ≥ τ, where τ is a threshold and i an intermediate parameter, the candidate point is matched with the corresponding feature points, i.e., it is a strongly representative, well-distributed keypoint.
6. The keypoint-filtering-based image copy-move forgery detection method according to claim 1 or 4, wherein the S4 merging of the filtering result with the once-enlarged keypoints and subsequent matched filtering is specifically: the filtered keypoint sets KP_L^(1) and KP_R^(1) are merged with the keypoint information of the once-enlarged image, and the merged result is put through steps S32-S34 again, increasing the number of keypoints and their positional aggregation during filtering and yielding the keypoint sets KP_L^(2) and KP_R^(2).
7. The keypoint-filtering-based image copy-move forgery detection method according to claim 1 or 4, wherein the S5 merging of the filtering result with the twice-enlarged keypoints and subsequent matched filtering is specifically:
the filtered keypoint sets KP_L^(2) and KP_R^(2) are merged with the keypoint information of the twice-enlarged image, and the merged result is put through steps S32-S34 again, increasing the number of keypoints and their positional aggregation during re-filtering and yielding the keypoint sets KP_L^(3) and KP_R^(3).
8. The method for detecting image copy-move forgery based on key point filtering according to claim 1, wherein the integration and de-duplication of the S6 key point filtering results is specifically: for the two point sets obtained by the two rounds of filtering and superposition, all coordinates in the two sets are changed to the average of the coordinates of the corresponding points, duplicate point coordinates are then removed, and the union of the two sets is taken to obtain the final key point set DP;

DP_x = {(X_0, Y_0), (X_1, Y_1), …, (X_M, Y_M)}    (3)

where x refers to one of the two key point sets KP_R or KP_L, X and Y denote the abscissa and ordinate in the point set, and M is the number of points in the current point set.
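The integration step of claim 8 can be sketched as below. Two assumptions are made for illustration: points are (x, y) tuples, and "corresponding points" in the two sets are paired by index; the function name is hypothetical.

```python
def integrate_point_sets(set_a, set_b):
    """Integrate and de-duplicate two filtered key point sets (a sketch
    of the claim-8 step; index-wise correspondence is an assumption)."""
    # Replace each pair of corresponding points by their coordinate average
    averaged = [((xa + xb) / 2.0, (ya + yb) / 2.0)
                for (xa, ya), (xb, yb) in zip(set_a, set_b)]
    # After averaging, both sets coincide, so de-duplication plus union
    # reduces to the set of distinct averaged points
    return sorted(set(averaged))
```

Averaging first makes the two sets identical, so the subsequent union contributes no new points; the `set()` pass removes any coordinates that collide after averaging.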
9. The method for detecting image copy-move forgery based on key point filtering according to claim 1, wherein the S7 image forgery localization is specifically: each point in the key point set DP is mapped onto the original picture; when the total number of key points in DP after filtering is less than 10, the current image is judged to be authentic and the original picture is not marked; if the total number is greater than or equal to 10, the image is marked by filling the pixels at the key point coordinates; the more numerous and denser the marked key points, the higher the probability that the current region has been copy-move forged.
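The localization decision of claim 9 can be sketched as follows, assuming a grayscale NumPy image and white (255) pixel fills for the marks; the function name and return convention are illustrative.

```python
import numpy as np

def localize_forgery(image, dp, min_points=10):
    """Forgery localization (a sketch of the claim-9 step): if fewer than
    `min_points` key points survive filtering, the image is judged
    authentic; otherwise the surviving key point pixels are filled to
    mark the suspected copy-move region."""
    if len(dp) < min_points:
        return image, False                  # authentic: left unmarked
    marked = image.copy()
    h, w = marked.shape[:2]
    for x, y in dp:
        r, c = int(round(y)), int(round(x))  # (x, y) -> (row, col)
        if 0 <= r < h and 0 <= c < w:
            marked[r, c] = 255               # fill the key point pixel
    return marked, True
```

Denser clusters of filled pixels then indicate regions more likely to have been copied and moved, matching the claim's interpretation of the marks.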
CN202310749649.2A 2023-06-25 2023-06-25 Image copying-moving fake detection method based on key point filtering Pending CN116778305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310749649.2A CN116778305A (en) 2023-06-25 2023-06-25 Image copying-moving fake detection method based on key point filtering

Publications (1)

Publication Number Publication Date
CN116778305A true CN116778305A (en) 2023-09-19

Family

ID=88005950

Country Status (1)

Country Link
CN (1) CN116778305A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination