CN105740858A - Region-of-interest extraction based image copy detection method - Google Patents


Info

Publication number
CN105740858A
Authority
CN
China
Prior art keywords
element blocks
image
query image
color
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610052702.3A
Other languages
Chinese (zh)
Other versions
CN105740858B (en)
Inventor
李黎
吴国峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Fenglijian Information Technology Co Ltd
Original Assignee
Nanjing Fenglijian Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Fenglijian Information Technology Co Ltd filed Critical Nanjing Fenglijian Information Technology Co Ltd
Priority to CN201610052702.3A priority Critical patent/CN105740858B/en
Publication of CN105740858A publication Critical patent/CN105740858A/en
Application granted granted Critical
Publication of CN105740858B publication Critical patent/CN105740858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F21/16Program or content traceability, e.g. by watermarking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an image copy detection method based on region-of-interest extraction. The method comprises the following steps: step 1, query image region-of-interest extraction: segmenting the query image Q into a series of sub-images using a region-of-interest extraction algorithm; and step 2, matching the sub-images obtained by segmenting the query image one by one against the images in a database to obtain a detection result. The method extracts the important regions of the query image with an improved region-of-interest extraction algorithm, segments the query image into a series of sub-images, and matches these sub-images one by one against the images in the database, so that attacked images can be identified accurately; in particular, under strong cropping, picture-in-picture, text insertion and similar attacks, the accuracy of the result is still ensured and the method is highly robust.

Description

An image copy detection method based on region-of-interest extraction
Technical field
The invention belongs to the technical field of multimedia processing and relates to an image copy detection method based on region-of-interest extraction.
Background art
In recent years, with the development of multimedia technology, the transmission of digital images has become easier and easier. Although this technological progress has made people's lives more convenient, the ease with which digital images are disseminated has also made it increasingly difficult to protect their copyright, which can cause heavy losses for image copyright owners. A technique is therefore needed with which an original author can detect whether his or her images are being used illegally.
Image copy detection is a technique for identifying the copyright of an image. In a copy detection system, the system collects images to build a database. If the copyright owner of an image suspects that the image is being used illegally, a detection query can be submitted to the system. Current image copy detection methods fall mainly into two classes: copy detection methods based on global features and copy detection methods based on local features. Global copy detection methods usually divide the original image into a series of 8*8 blocks and extract a feature of each block, based on the DCT, as the feature of the image. Such methods, however, often show poor robustness against large cropping attacks. Local copy detection methods are usually based on feature points and mainly comprise three steps: (1) feature point detection, (2) feature point region normalization, and (3) feature point descriptor computation. After the descriptors have been computed, the feature points of the two images are matched, and the RANSAC algorithm is used to remove incorrectly matched feature point pairs.
However, illegal pirates often apply further processing to the copied digital image, such as geometric transformation, brightness change, contrast change, color change, blurring, text insertion and so on. Some pirates also cut a part out of the original digital image and embed that part in another image for use; this attack is called a picture-in-picture attack. At present, strong cropping and picture-in-picture attacks remain difficult problems for copy detection methods, and few publications can resist these attacks. Lin et al. (Lin, C.C., Klara, N., Hung, C.J.: An image copy detection scheme based on edge features. In: Multimedia and Expo, 2008 IEEE International Conference on. IEEE, 665-668 (2008)) proposed a copy detection method based on image edge features that can resist small-amplitude cropping attacks and image mosaic attacks. A careful analysis of strong cropping attacks and picture-in-picture attacks shows that both attacks cut away the unimportant regions of the original image while retaining its important regions. Therefore, information about the important regions of the original image should be exploited when addressing these two attacks. On this basis, and to solve the existing problems, the present invention proposes an image copy detection method based on region-of-interest extraction.
Summary of the invention
The object of the present invention is to address the deficiencies of the prior art by proposing an image copy detection method based on region-of-interest extraction. The present invention can accurately identify attacked images and, in particular under strong cropping, picture-in-picture, text insertion and similar attacks, still ensures the accuracy of the result and achieves high robustness.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step 1, query image region-of-interest extraction:
Use a region-of-interest extraction algorithm to segment the query image Q into a series of sub-images;
Step 2, match the sub-images obtained by segmenting the query image one by one against the images in the database, thereby obtaining the detection result.
The detailed process of step 1 is as follows:
1-1. Divide the query image Q into a series of element blocks E;
1-2. Compute the color distance metric between element blocks in the Lab color space; the color distance metric D(E_k, E_i) between element block E_k and element block E_i is given by formula (1):

$$D(E_k, E_i) = \sum_{a=1}^{n_k} \sum_{b=1}^{n_i} f(E_k, a)\, f(E_i, b)\, D(E_k, a, E_i, b) \quad (1)$$

where n_k is the number of color categories in element block E_k, n_i is the number of color categories in element block E_i, f(E_k, a) is the probability of occurrence of the a-th color in element block E_k, f(E_i, b) is the probability of occurrence of the b-th color in element block E_i, and D(E_k, a, E_i, b) is the Euclidean distance, computed over the three Lab channels, between the a-th color of element block E_k and the b-th color of element block E_i;
1-3. Compute the brightness change of each element block on the L channel of the Lab color space; the brightness change D_{E_k} of element block E_k is given by formula (2):

$$D_{E_k} = \left( \sum_{i=1}^{N(E_k)} d_E^i \right) \Big/ N(E_k) \quad (2)$$

where N(E_k) is the number of pixels contained in element block E_k, d_E^i is the convolution of the 3×3 neighborhood of pixel i in the element block with the template $\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$, and k is the index of the element block;
1-4. Compute the saliency value of each element block; taking element block E_k as an example, its saliency value S(E_k) is given by formula (3), and the saliency values of the other element blocks are computed in the same way:

$$S(E_k) = D_{E_k} \sum_{E_k \neq E_i} e^{\frac{D_s(E_k, E_i)}{-\sigma_s^2}}\, w(E_i)\, D(E_k, E_i) \quad (3)$$

where σ_s controls the weight given to the spatial distance between two element blocks, with σ_s^2 = 0.4; w(E_i) is the weight of element block E_i, computed from the number of pixels it contains; D_s(E_k, E_i) is the spatial distance between element blocks E_k and E_i; D(E_k, E_i) is the color distance metric between element blocks E_k and E_i, computed by formula (1); and D_{E_k} is the brightness change of element block E_k, computed by formula (2);
1-5. Compute the saliency value of each pixel in the query image; the saliency value of each pixel is the saliency value of the element block to which the pixel belongs;
1-6. Convert the query image into a contour image in which the regions of interest stand out, according to a threshold T; the threshold T is computed by formula (4):

$$T = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} S(i, j)}{n \times m} \quad (4)$$

where n is the number of pixel rows of the query image, m is the number of pixel columns of the query image, and S(i, j) is the saliency value of each pixel of the query image;

The query image is converted into the black-and-white contour image C in which the regions of interest stand out by formula (5):

$$C(i, j) = \begin{cases} 1, & S(i, j) \geq T \\ 0, & S(i, j) < T \end{cases} \quad (5)$$
1-7. Compute the minimum bounding rectangle of each region of interest in the black-and-white contour image C, then segment the query image into a series of sub-images according to the minimum bounding rectangle information of all regions of interest, each sub-image containing one region of interest.
The specific implementation process of step 2 is as follows:
2-1. Extract the feature points of each sub-image of the query image and of each image in the database using the SURF feature point extraction method;
2-2. Use the SURF feature point descriptors to match the sub-images of the query image against the images in the database;
2-3. Use the RANSAC algorithm to remove incorrectly matched feature point pairs;
2-4. Further remove incorrectly matched feature point pairs;
2-5. Determine a threshold Thres and use it to judge whether an image in the database is a copy of the query image, thereby obtaining the detection result.
The specific implementation process of the further removal of incorrectly matched feature point pairs in step 2-4 is as follows:
Let (Q_i, I_i) be a feature point pair matched after the RANSAC processing; centered on Q_i and I_i, extract 5*5 blocks Block_Q_i and Block_I_i at the corresponding scale of the feature points from the sub-image of the query image and from the database image I respectively, then normalize Block_Q_i and Block_I_i to the orientation of the feature points; compute the structural similarity of the two blocks in the RGB color space, obtaining the structural similarities of the three channels (ssim_r_i, ssim_g_i, ssim_b_i), as shown in formula (6):

$$(\mathrm{ssim\_r}_i, \mathrm{ssim\_g}_i, \mathrm{ssim\_b}_i) = \mathrm{SSIM}(Block\_Q_i, Block\_I_i) \quad (6)$$

Compare the color values of each channel in RGB space and obtain the final structural similarity ssim_c between Block_Q_i and Block_I_i according to formula (7):

$$\mathrm{ssim\_c} = \begin{cases} \mathrm{ssim\_r}, & Q_i\_r > Q_i\_g \ \text{and}\ Q_i\_r > Q_i\_b \\ \mathrm{ssim\_g}, & Q_i\_g > Q_i\_r \ \text{and}\ Q_i\_g > Q_i\_b \\ \mathrm{ssim\_b}, & Q_i\_b > Q_i\_g \ \text{and}\ Q_i\_b > Q_i\_r \end{cases} \quad (7)$$

where Q_i_r, Q_i_g and Q_i_b are the color values of the point Q_i on the R, G and B color channels respectively.
When the threshold for ssim_c is chosen as 0.55, higher recall and precision can be obtained; if the ssim_c value between a feature point pair (Q_i, I_i) is greater than the threshold 0.55, the pair is regarded as a correctly matched feature point pair, otherwise as an incorrectly matched one.
If, in the matching result between an image in the database and a certain sub-image of the query image, the number of correctly matched feature point pairs is greater than the threshold Thres, the image in the database is regarded as a copy of the query image.
When the threshold Thres is set to 10, higher recall and precision can be obtained.
The present invention has the following beneficial effects:
The method of the invention extracts the important regions of the query image with an improved region-of-interest extraction algorithm, segments the query image into a series of sub-images, and matches these sub-images one by one against the images in the database. Therefore, even if an illegal pirate crops out and uses part of the original copyrighted digital image, or embeds that partial image in another image, this detection method can still identify the attacked image as a copy of the query image. At the same time, an improved image matching procedure that accurately removes incorrectly matched feature point pairs ensures the robustness of the method under various conventional attacks (geometric transformation, contrast change, brightness change, color change, blurring, text insertion and so on). Compared with existing image copy detection techniques, this method has the following advantages:
At present, most copy detection methods are poorly robust against strong cropping attacks, picture-in-picture attacks and some conventional attacks. The main reasons are that strong cropping and picture-in-picture attacks both cut away the unimportant regions of the original image while retaining its important regions, and that most current copy detection methods take the whole query image as the detection input. In addition, image matching mostly relies on the RANSAC algorithm to remove incorrectly matched point pairs, but this algorithm can still retain some incorrectly matched feature point pairs. Both factors can make the detection result inaccurate. The present method first extracts the important regions of the query image with a region-of-interest extraction algorithm and segments these important regions into a series of sub-images used as the detection input; afterwards, during image matching, it further removes incorrectly matched feature point pairs, thereby ensuring the accuracy of the detection result. This method can simultaneously resist strong cropping attacks, picture-in-picture attacks and some conventional attacks, and has good practical value in digital image copyright protection and piracy tracking.
Brief description of the drawings
Fig. 1 is an overall schematic diagram of the image copy detection method based on region-of-interest extraction proposed by the present invention;
Figs. 2(a), 2(b), 2(c) and 2(d) are schematic diagrams of segmenting the query image into a series of sub-images using the region-of-interest extraction method;
Fig. 3 shows the selection of the threshold used when removing incorrectly matched point pairs;
Fig. 4 shows the selection of the threshold on the number of correctly matched point pairs;
Fig. 5 is the original query image set;
Fig. 6 shows the recall and precision curves of this method compared with other methods;
Fig. 7 shows the recall-precision curve of this method under picture-in-picture attacks.
Detailed description of the invention
The present invention uses an improved region-of-interest extraction algorithm to segment the query image into a series of sub-images that are matched, as input, one by one against the images in the database, and further removes incorrectly matched feature point pairs on top of the RANSAC algorithm, thereby providing an image copy detection method that is robust to strong cropping attacks, picture-in-picture attacks and several conventional attacks. Specific embodiments of the present invention are described in detail below with an example and with reference to the accompanying drawings. As shown in Fig. 1, the method of the invention comprises two steps, query image region-of-interest extraction and image matching: the region-of-interest extraction segments the query image into a series of sub-images, and these sub-images are matched one by one against the images in the database to obtain the copy detection result. The present invention is described in detail with an example as follows:
Step 1, query image region-of-interest extraction:
Use the region-of-interest extraction algorithm to segment the query image Q into a series of sub-images; the detailed process is as follows.
1-1. Divide the query image Q into a series of element blocks E; the query image is shown in Fig. 2(a).
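For illustration only, the division into element blocks might be sketched in Python as below; the patent does not specify how the element blocks are formed, so the regular grid of fixed-size tiles and the 16-pixel block size are assumptions.

```python
import numpy as np

def divide_into_element_blocks(image, block_size=16):
    """Divide the query image into a grid of element blocks.

    Returns a list of (row_slice, col_slice) pairs, one per block.
    The regular grid and the block size are illustrative assumptions;
    the patent only states that the image is divided into element blocks.
    """
    h, w = image.shape[:2]
    blocks = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            blocks.append((slice(r, min(r + block_size, h)),
                           slice(c, min(c + block_size, w))))
    return blocks
```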
1-2. Compute the color distance metric between element blocks in the Lab color space. The color distance metric D(E_k, E_i) between element block E_k and element block E_i is given by formula (1):

$$D(E_k, E_i) = \sum_{a=1}^{n_k} \sum_{b=1}^{n_i} f(E_k, a)\, f(E_i, b)\, D(E_k, a, E_i, b) \quad (1)$$

where n_k is the number of color categories in element block E_k, n_i is the number of color categories in element block E_i, f(E_k, a) is the probability of occurrence of the a-th color in element block E_k, f(E_i, b) is the probability of occurrence of the b-th color in element block E_i, and D(E_k, a, E_i, b) is the Euclidean distance, computed over the three Lab channels, between the a-th color of element block E_k and the b-th color of element block E_i.
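A minimal Python sketch of the color distance metric of formula (1) follows. The uniform quantization of Lab values into bins (to obtain the color categories and their frequencies) is an assumption for illustration; the patent does not state how colors are quantized. The Lab image is assumed to come from, e.g., cv2.cvtColor(img, cv2.COLOR_BGR2LAB) with 8-bit channels.

```python
import numpy as np

def block_color_histogram(lab_image, block, bins=8):
    """Quantize the Lab colors of one element block and return
    (colors, probs): the representative color of each occupied bin
    and its occurrence frequency f(E, a). The bin count is illustrative."""
    pixels = lab_image[block].reshape(-1, 3).astype(np.float32)
    step = 256.0 / bins
    keys, counts = np.unique(np.floor(pixels / step), axis=0, return_counts=True)
    colors = (keys + 0.5) * step          # bin-center Lab color per category
    probs = counts / counts.sum()
    return colors, probs

def color_distance(colors_k, probs_k, colors_i, probs_i):
    """Formula (1): probability-weighted sum of the Euclidean distances
    between every pair of colors of the two element blocks."""
    diff = colors_k[:, None, :] - colors_i[None, :, :]   # n_k x n_i x 3
    dist = np.linalg.norm(diff, axis=2)                  # D(E_k,a, E_i,b)
    return float(probs_k @ dist @ probs_i)
```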
1-3. Compute the brightness change of each element block on the L channel of the Lab color space. The brightness change D_{E_k} of element block E_k is given by formula (2):

$$D_{E_k} = \left( \sum_{i=1}^{N(E_k)} d_E^i \right) \Big/ N(E_k) \quad (2)$$

where N(E_k) is the number of pixels contained in element block E_k, d_E^i is the convolution of the 3×3 neighborhood of pixel i in the element block with the template $\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$, and k is the index of the element block.
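The brightness change of formula (2) amounts to averaging the response of the 3×3 Laplacian template over the pixels of a block on the L channel; a sketch under that reading, with scipy's convolution and 'nearest' border handling as illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=np.float32)

def brightness_change(l_channel, block):
    """Formula (2): mean Laplacian response d_E^i over the pixels of
    one element block, computed on the Lab L channel."""
    responses = convolve(l_channel.astype(np.float32), LAPLACIAN, mode='nearest')
    return float(np.mean(responses[block]))
```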
1-4. Compute the saliency value of each element block. Taking element block E_k as an example, its saliency value S(E_k) is given by formula (3), and the saliency values of the other element blocks are computed in the same way:

$$S(E_k) = D_{E_k} \sum_{E_k \neq E_i} e^{\frac{D_s(E_k, E_i)}{-\sigma_s^2}}\, w(E_i)\, D(E_k, E_i) \quad (3)$$

where σ_s controls the weight given to the spatial distance between two element blocks, with σ_s^2 = 0.4; w(E_i) is the weight of element block E_i, computed from the number of pixels it contains; D_s(E_k, E_i) is the spatial distance between element blocks E_k and E_i; D(E_k, E_i) is the color distance metric between element blocks E_k and E_i, computed by formula (1); and D_{E_k} is the brightness change of element block E_k, computed by formula (2).
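A sketch of the per-block saliency of formula (3). The block centers, pixel-count weights w(E_i), pairwise color distances D(E_k, E_i) and brightness changes D_{E_k} are assumed to be precomputed; block centers in coordinates normalized to [0, 1] are an assumption (suggested, but not stated, by σ_s² = 0.4).

```python
import numpy as np

def block_saliency(k, centers, weights, color_dist, brightness, sigma_s2=0.4):
    """Formula (3): saliency of element block k as its brightness change
    times the spatially weighted sum of color contrasts to all other blocks.

    centers    : (n, 2) block centers, assumed normalized to [0, 1]
    weights    : (n,) w(E_i), e.g. the pixel count of each block
    color_dist : (n, n) D(E_k, E_i) from formula (1)
    brightness : (n,) D_{E_k} from formula (2)
    """
    s = 0.0
    for i in range(len(weights)):
        if i == k:
            continue
        d_s = np.linalg.norm(centers[k] - centers[i])
        s += np.exp(d_s / (-sigma_s2)) * weights[i] * color_dist[k, i]
    return brightness[k] * s
```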
1-5. Compute the saliency value of each pixel in the query image. The saliency value of each pixel is the saliency value of the element block to which it belongs. The resulting saliency map of the query image Q is shown in Fig. 2(b).
1-6. Convert the query image into a contour image in which the regions of interest stand out, according to a threshold T. The threshold T is computed by formula (4):

$$T = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} S(i, j)}{n \times m} \quad (4)$$

where n is the number of pixel rows of the query image, m is the number of pixel columns of the query image, and S(i, j) is the saliency value of each pixel of the query image.

The query image is converted into the black-and-white contour image C in which the regions of interest stand out by formula (5):

$$C(i, j) = \begin{cases} 1, & S(i, j) \geq T \\ 0, & S(i, j) < T \end{cases} \quad (5)$$

The resulting contour image of the query image Q is shown in Fig. 2(c).
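Formulas (4) and (5) together amount to thresholding the per-pixel saliency map at its mean, as in this short sketch:

```python
import numpy as np

def saliency_to_contour(saliency_map):
    """Formula (4): threshold T is the mean saliency; formula (5): the
    binary contour image C marks pixels whose saliency is at least T."""
    T = saliency_map.mean()
    return (saliency_map >= T).astype(np.uint8)
```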
1-7. Compute the minimum bounding rectangle of each region of interest in the black-and-white contour image C, then segment the query image into a series of sub-images according to the minimum bounding rectangle information of all regions of interest, each sub-image containing one region of interest. The resulting series of sub-images of the query image Q is shown in Fig. 2(d).
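Step 1-7 can be sketched with OpenCV as below; reading the "minimum bounding rectangle" as the axis-aligned bounding rectangle of each connected region, and the use of findContours, are illustrative choices not named in the patent.

```python
import cv2

def extract_sub_images(query_image, contour_image):
    """Step 1-7: find each connected region of interest in the binary
    contour image C, take its bounding rectangle, and crop the
    corresponding sub-image out of the query image."""
    contours, _ = cv2.findContours(contour_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    sub_images = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        sub_images.append(query_image[y:y + h, x:x + w])
    return sub_images
```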
Step 2, match the sub-images obtained by segmenting the query image one by one against the images in the database to obtain the detection result; the specific implementation process is as follows.
2-1. Extract the feature points of each sub-image of the query image and of each image in the database using the SURF feature point extraction method;
2-2. Use the SURF feature point descriptors to match the sub-images of the query image against the images in the database;
2-3. Use the RANSAC algorithm to remove incorrectly matched feature point pairs;
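Steps 2-1 to 2-3 form a standard SURF plus RANSAC matching pipeline; a hedged OpenCV sketch is given below. The descriptor ratio test (0.7) and the RANSAC reprojection threshold (5.0) are illustrative values not given in the patent, and SURF requires the opencv-contrib package.

```python
import cv2
import numpy as np

def match_with_ransac(sub_image, db_image, ratio=0.7):
    """Steps 2-1 to 2-3: SURF keypoints and descriptors, descriptor
    matching, and RANSAC-based removal of mismatched pairs."""
    surf = cv2.xfeatures2d.SURF_create()
    kp_q, des_q = surf.detectAndCompute(sub_image, None)
    kp_i, des_i = surf.detectAndCompute(db_image, None)
    if des_q is None or des_i is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_q, des_i, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return []

    src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_i[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return []
    return [(kp_q[m.queryIdx], kp_i[m.trainIdx])
            for m, keep in zip(good, mask.ravel()) if keep]
```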
2-4. Further remove incorrectly matched feature point pairs, specifically as follows:
Let (Q_i, I_i) be a feature point pair matched after the RANSAC processing. Centered on Q_i and I_i, extract 5*5 blocks Block_Q_i and Block_I_i at the corresponding scale of the feature points from the sub-image of the query image and from the database image I respectively, then normalize Block_Q_i and Block_I_i to the orientation of the feature points. Compute the structural similarity of the two blocks in the RGB color space, obtaining the structural similarities of the three channels (ssim_r_i, ssim_g_i, ssim_b_i), as shown in formula (6):

$$(\mathrm{ssim\_r}_i, \mathrm{ssim\_g}_i, \mathrm{ssim\_b}_i) = \mathrm{SSIM}(Block\_Q_i, Block\_I_i) \quad (6)$$

Compare the color values of each channel in RGB space and obtain the final structural similarity ssim_c between Block_Q_i and Block_I_i according to formula (7):

$$\mathrm{ssim\_c} = \begin{cases} \mathrm{ssim\_r}, & Q_i\_r > Q_i\_g \ \text{and}\ Q_i\_r > Q_i\_b \\ \mathrm{ssim\_g}, & Q_i\_g > Q_i\_r \ \text{and}\ Q_i\_g > Q_i\_b \\ \mathrm{ssim\_b}, & Q_i\_b > Q_i\_g \ \text{and}\ Q_i\_b > Q_i\_r \end{cases} \quad (7)$$

where Q_i_r, Q_i_g and Q_i_b are the color values of the point Q_i on the R, G and B color channels respectively.
Statistics over a large number of experiments, as shown in Fig. 3, show that choosing the threshold for ssim_c as 0.55 yields higher recall and precision and therefore ensures a good detection result. If the ssim_c value between a feature point pair (Q_i, I_i) is greater than the threshold, the pair is regarded as a correctly matched feature point pair, otherwise as an incorrectly matched one.
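A sketch of step 2-4 (formulas (6) and (7)): per-channel SSIM of the two 5*5 blocks and selection of the channel by the dominant color value of Q_i, compared against the 0.55 threshold. The blocks are assumed to have already been extracted at the keypoint scale and normalized to the keypoint orientation, which is not shown here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def is_correct_match(block_q, block_i, q_point_rgb, thresh=0.55):
    """Formula (6): SSIM of the two 5x5 RGB patches per channel;
    formula (7): keep the SSIM of the channel with the largest color
    value at Q_i; the pair is kept if ssim_c exceeds the 0.55 threshold."""
    ssim_rgb = [structural_similarity(block_q[:, :, c], block_i[:, :, c],
                                      win_size=5, data_range=255)
                for c in range(3)]
    ssim_c = ssim_rgb[int(np.argmax(q_point_rgb))]
    return ssim_c > thresh
```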
2-5. Determine a threshold Thres and use it to judge whether an image in the database is a copy of the query image. The threshold Thres on the number of correctly matched feature point pairs is determined by statistics over a large number of experiments. According to the experiments, as shown in Fig. 4, setting Thres to 10 yields higher recall and precision and therefore a good detection result. If, in the matching result between an image in the database and a certain sub-image of the query image, the number of correctly matched feature point pairs is greater than Thres, the image in the database is regarded as a copy of the query image.
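The final decision of step 2-5 then reduces to a count comparison, as in this sketch (Thres = 10 as stated above; the per-sub-image match counts are assumed to come from the matching steps):

```python
def is_copy(correct_match_counts, thres=10):
    """Step 2-5: a database image is judged a copy of the query image if,
    for some sub-image of the query, the number of correctly matched
    feature point pairs exceeds Thres."""
    return any(count > thres for count in correct_match_counts)
```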
Experimental results are given below from the aspect of detection accuracy, verifying empirically that the method of the invention achieves higher recall and precision and is more robust than other copy detection methods.
In the experiments of the present invention, 10000 images downloaded from the Internet are used as the test image library; the image sizes are random but all smaller than 800*800, and the images are stored in JPEG format. Ten images are chosen from the test image library to form the original query image set, as shown in Fig. 5. Every image in the query set is attacked with Photoshop and the StirMark software using 20 kinds of attacks to generate the copy image set. The attacks and the corresponding sample numbers are: JPEG compression (15), mean filtering (1), PSNR processing (10), scaling (10), cropping (13), rotation (18), affine transformation (8), pixel row deletion (10), rotation plus scaling (10), rotation plus cropping (10), noise addition (2), seam carving (3), color change (1), brightness change (1), contrast change (1), image flipping (1), text insertion (1), watercolor (1) and mosaic (1), where the number in parentheses is the number of samples of the corresponding attack.
Finally, this method (method 5) is compared with the method of Xu (method 1; Xu Z, Ling H, Zou F, et al. A novel image copy detection scheme based on the local multi-resolution histogram descriptor [J]. Multimedia Tools & Applications, 2011, 52(2-3): 445-463), the method of Kim (method 2; Kim C. Content-based image copy detection [J]. Signal Processing: Image Communication, 2003, 18: 169-184), the method of Wu (method 3; Wu M N, Lin C C, Chang C C. Novel image copy detection with rotating tolerance [J]. Journal of Systems & Software, 2007, 80(7): 1057-1069) and the method of Baber (method 4; Baber J, Satoh S, Keatmanee C, et al. Improving the performance of SIFT and CSLBP for image copy detection [C]. Telecommunications and Signal Processing (TSP), 2013 36th International Conference on, 2013: 803-807). The resulting recall and precision are shown in Fig. 6; whether measured by recall or by precision, this method achieves good results. In addition, this method also achieves higher recall and precision under picture-in-picture attacks, as shown in Fig. 7.

Claims (7)

1. An image copy detection method based on region-of-interest extraction, characterized by comprising the following steps:
Step 1, query image region-of-interest extraction:
using a region-of-interest extraction algorithm to segment the query image Q into a series of sub-images;
Step 2, matching the sub-images obtained by segmenting the query image one by one against the images in the database, thereby obtaining the detection result.
2. The image copy detection method based on region-of-interest extraction according to claim 1, characterized in that the detailed process of step 1 is as follows:
1-1. dividing the query image Q into a series of element blocks E;
1-2. computing the color distance metric between element blocks in the Lab color space; the color distance metric D(E_k, E_i) between element block E_k and element block E_i being as shown in formula (1):
$$D(E_k, E_i) = \sum_{a=1}^{n_k} \sum_{b=1}^{n_i} f(E_k, a)\, f(E_i, b)\, D(E_k, a, E_i, b) \quad (1)$$
where n_k is the number of color categories in element block E_k, n_i is the number of color categories in element block E_i, f(E_k, a) is the probability of occurrence of the a-th color in element block E_k, f(E_i, b) is the probability of occurrence of the b-th color in element block E_i, and D(E_k, a, E_i, b) is the Euclidean distance, computed over the three Lab channels, between the a-th color of element block E_k and the b-th color of element block E_i;
1-3. computing the brightness change of each element block on the L channel of the Lab color space; the brightness change D_{E_k} of element block E_k being as shown in formula (2):
$$D_{E_k} = \left( \sum_{i=1}^{N(E_k)} d_E^i \right) \Big/ N(E_k) \quad (2)$$
where N(E_k) is the number of pixels contained in element block E_k, d_E^i is the convolution of the 3×3 neighborhood of pixel i in the element block with the template $\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$, and k is the index of the element block;
1-4. computing the saliency value of each element block; taking element block E_k as an example, its saliency value S(E_k) being as shown in formula (3), and the saliency values of the other element blocks being computed in the same way:
$$S(E_k) = D_{E_k} \sum_{E_k \neq E_i} e^{\frac{D_s(E_k, E_i)}{-\sigma_s^2}}\, w(E_i)\, D(E_k, E_i) \quad (3)$$
where σ_s controls the weight given to the spatial distance between two element blocks, with σ_s^2 = 0.4; w(E_i) is the weight of element block E_i, computed from the number of pixels it contains; D_s(E_k, E_i) is the spatial distance between element blocks E_k and E_i; D(E_k, E_i) is the color distance metric between element blocks E_k and E_i, computed by formula (1); and D_{E_k} is the brightness change of element block E_k, computed by formula (2);
1-5. computing the saliency value of each pixel in the query image; the saliency value of each pixel being the saliency value of the element block to which the pixel belongs;
1-6. converting the query image into a contour image in which the regions of interest stand out, according to a threshold T; the threshold T being computed by formula (4):
$$T = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} S(i, j)}{n \times m} \quad (4)$$
where n is the number of pixel rows of the query image, m is the number of pixel columns of the query image, and S(i, j) is the saliency value of each pixel of the query image;
the query image being converted into the black-and-white contour image C in which the regions of interest stand out by formula (5):
$$C(i, j) = \begin{cases} 1, & S(i, j) \geq T \\ 0, & S(i, j) < T \end{cases} \quad (5)$$
1-7. computing the minimum bounding rectangle of each region of interest in the black-and-white contour image C, then segmenting the query image into a series of sub-images according to the minimum bounding rectangle information of all regions of interest, each sub-image containing one region of interest.
3. The image copy detection method based on region-of-interest extraction according to claim 1, characterized in that the specific implementation process of step 2 is as follows:
2-1. extracting the feature points of each sub-image of the query image and of each image in the database using the SURF feature point extraction method;
2-2. using the SURF feature point descriptors to match the sub-images of the query image against the images in the database;
2-3. using the RANSAC algorithm to remove incorrectly matched feature point pairs;
2-4. further removing incorrectly matched feature point pairs;
2-5. determining a threshold Thres and using it to judge whether an image in the database is a copy of the query image, thereby obtaining the detection result.
4. The image copy detection method based on region-of-interest extraction according to claim 3, characterized in that the specific implementation process of the further removal of incorrectly matched feature point pairs in step 2-4 is as follows:
let (Q_i, I_i) be a feature point pair matched after the RANSAC processing; centered on Q_i and I_i, extract 5*5 blocks Block_Q_i and Block_I_i at the corresponding scale of the feature points from the sub-image of the query image and from the database image I respectively, then normalize Block_Q_i and Block_I_i to the orientation of the feature points; compute the structural similarity of the two blocks in the RGB color space, obtaining the structural similarities of the three channels (ssim_r_i, ssim_g_i, ssim_b_i), as shown in formula (6):
$$(\mathrm{ssim\_r}_i, \mathrm{ssim\_g}_i, \mathrm{ssim\_b}_i) = \mathrm{SSIM}(Block\_Q_i, Block\_I_i) \quad (6)$$
compare the color values of each channel in RGB space and obtain the final structural similarity ssim_c between Block_Q_i and Block_I_i according to formula (7):
$$\mathrm{ssim\_c} = \begin{cases} \mathrm{ssim\_r}, & Q_i\_r > Q_i\_g \ \text{and}\ Q_i\_r > Q_i\_b \\ \mathrm{ssim\_g}, & Q_i\_g > Q_i\_r \ \text{and}\ Q_i\_g > Q_i\_b \\ \mathrm{ssim\_b}, & Q_i\_b > Q_i\_g \ \text{and}\ Q_i\_b > Q_i\_r \end{cases} \quad (7)$$
where Q_i_r, Q_i_g and Q_i_b are the color values of the point Q_i on the R, G and B color channels respectively.
5. The image copy detection method based on region-of-interest extraction according to claim 4, characterized in that when the threshold for ssim_c is chosen as 0.55, higher recall and precision can be obtained; if the ssim_c value between a feature point pair (Q_i, I_i) is greater than the threshold 0.55, the pair is regarded as a correctly matched feature point pair, otherwise as an incorrectly matched one.
6. The image copy detection method based on region-of-interest extraction according to claim 4, characterized in that if, in the matching result between an image in the database and a certain sub-image of the query image, the number of correctly matched feature point pairs is greater than the threshold Thres, the image in the database is regarded as a copy of the query image.
7. The image copy detection method based on region-of-interest extraction according to claim 4, characterized in that when the threshold Thres is set to 10, higher recall and precision can be obtained.
CN201610052702.3A 2016-01-26 2016-01-26 Image copy detection method based on region-of-interest extraction Active CN105740858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610052702.3A CN105740858B (en) 2016-01-26 2016-01-26 Image copy detection method based on region-of-interest extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610052702.3A CN105740858B (en) 2016-01-26 2016-01-26 Image copy detection method based on region-of-interest extraction

Publications (2)

Publication Number Publication Date
CN105740858A true CN105740858A (en) 2016-07-06
CN105740858B CN105740858B (en) 2018-12-25

Family

ID=56247664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610052702.3A Active CN105740858B (en) 2016-01-26 2016-01-26 Image copy detection method based on region-of-interest extraction

Country Status (1)

Country Link
CN (1) CN105740858B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460623A (en) * 2018-11-22 2019-03-12 上海华力微电子有限公司 Similar domain judgment method
CN112348024A (en) * 2020-10-29 2021-02-09 北京信工博特智能科技有限公司 Image-text identification method and system based on deep learning optimization network


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1570969A (en) * 2003-07-23 2005-01-26 西北工业大学 An image retrieval method based on marked interest point
CN102629325A (en) * 2012-03-13 2012-08-08 深圳大学 Image characteristic extraction method, device thereof, image copy detection method and system thereof
CN104182973A (en) * 2014-08-11 2014-12-03 福州大学 Image copying and pasting detection method based on circular description operator CSIFT (Colored scale invariant feature transform)
CN104881668A (en) * 2015-05-13 2015-09-02 中国科学院计算技术研究所 Method and system for extracting image fingerprints based on representative local mode

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贾福运, 陈明志, 肖传奇, 查昊迅: "基于圆形描述算子CSIFT的图像复制粘贴检测算法" (Image copy-paste detection algorithm based on the circular description operator CSIFT), 《信息网络安全》 (Netinfo Security) *


Also Published As

Publication number Publication date
CN105740858B (en) 2018-12-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant