CN105354558A - Face image matching method - Google Patents
- Publication number
- CN105354558A CN105354558A CN201510820897.7A CN201510820897A CN105354558A CN 105354558 A CN105354558 A CN 105354558A CN 201510820897 A CN201510820897 A CN 201510820897A CN 105354558 A CN105354558 A CN 105354558A
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- point
- pixel
- surf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face image matching method relating to the processing of image data. The method is based on two passes of SURF and on shape context: coarse matching with the SURF algorithm yields scale-difference and direction-difference information, this information is then used for precise matching with the SURF algorithm, and the shape context algorithm removes mismatches from the resulting matches. The method comprises the steps of determining the face region, generating a reconstructed integral image, performing two passes of SURF feature matching, and generating shape context descriptors to remove mismatches, thereby completing the face image matching. The method overcomes the defects of existing face image matching methods, namely few feature points, few matched points, and low accuracy.
Description
Technical field
The technical solution of the present invention relates to the processing of image data, and specifically to a face image matching method.
Background art
Face image matching is an important branch of the image matching field. With the arrival of the automated information age, face image matching has found more and more applications in everyday life. Because facial information is unique, hard to forge and easy to collect, it is widely used in access control systems, video surveillance, and identity verification.
Most existing face image matching algorithms are based on extracting local facial features and matching with them. Principal component analysis (hereinafter PCA) is the most commonly used local facial feature extraction method. In 1991, Turk et al. used the PCA method to propose the classic "eigenface" face image matching algorithm and achieved good results. However, PCA considers only the second-order statistics of the image data, fails to exploit the higher-order statistics in the data, and ignores nonlinear dependencies among pixels. In 2004, David Lowe proposed the Scale-Invariant Feature Transform (hereinafter SIFT): potential scale- and rotation-invariant keypoints are identified with a difference-of-Gaussian function, position and scale are determined by fitting a fine model, and one or more orientations are assigned to each keypoint position from local image gradients, achieving invariance to scale and rotation; the local image gradients at the selected scale in a neighborhood of each keypoint give a SIFT representation that tolerates significant local deformation and illumination variation. However, the SIFT algorithm is inefficient and slow. In 2006, Bay et al. in Switzerland proposed the Speeded-Up Robust Features algorithm (hereinafter SURF) as an improvement over SIFT: feature points are detected at maxima of the Hessian-matrix determinant, and the convolutions in the DoH (determinant of Hessian) computation are simplified by integral-image box filtering, greatly improving efficiency. Nevertheless, SURF still detects few feature points and obtains few matched pairs.
Because existing face image matching methods perform feature extraction with PCA, SIFT or SURF, they suffer from few feature points, few matched points, and low accuracy; in particular, for facial images with pose, expression and illumination variation, the number of feature points and the accuracy leave much room for improvement. Research into face image matching methods with many feature points and high accuracy is therefore of great significance.
Summary of the invention
The technical problem to be solved by the invention is to provide a face image matching method based on two passes of the SURF algorithm and shape context, abbreviated TSURF+SC (Twice SURF + Shape Context): SURF coarse matching yields scale-difference and direction-difference information, this information is then used for SURF precise matching, and the shape context algorithm removes mismatches from the resulting matches, overcoming the defects of few feature points, few matched points and low accuracy in existing face image matching methods.
The technical solution adopted by the present invention to solve this technical problem is a face image matching method based on two passes of SURF and shape context, with the following concrete steps:
The first step: determine the face region:
Input two facial images of the same size and the same person. The image with expression, pose and illumination variation serves as the image to be matched, and the standard frontal face image serves as the template image. Both images are first scaled down to 1/8 of their original size. A 20 × 20 pixel face-detection search box then scans each image from left to right and top to bottom; each scanned sub-image is judged by OpenCV's built-in frontal face detector, and if it is a facial image it is marked as a face region. After each full scan of the two images, the search box is enlarged by 10% and the scan is repeated, until the search box grows to half the size of the image. All marked face sub-images are then converted from RGB to the YCrCb color space, and the Cr and Cb components of every pixel are checked against the skin-color condition shown in formula (1),
133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),
where Cr and Cb are the chrominance components of the image in the YCrCb color space.
A scanned region in which more than 40% of the pixels satisfy formula (1) is confirmed as a face region, i.e. the region of interest. The face regions determined in the image to be matched and in the template image are then magnified 8 times, restoring them to their original size;
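A minimal sketch of this first step in Python, assuming OpenCV's bundled Haar cascade stands in for the built-in frontal face detector; the function `find_face_region` and its parameters are illustrative, not part of the patent:

```python
import cv2
import numpy as np

def find_face_region(img_bgr):
    small = cv2.resize(img_bgr, None, fx=0.125, fy=0.125)  # scale to 1/8
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    # scaleFactor=1.1 mirrors the 10% search-box enlargement per pass
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minSize=(20, 20))
    for (x, y, w, h) in faces:
        patch = small[y:y+h, x:x+w]
        ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
        cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
        skin = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)  # formula (1)
        if skin.mean() > 0.4:            # >40% skin pixels confirms a face region
            return tuple(8 * v for v in (x, y, w, h))  # magnify 8x to original frame
    return None
```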
The second step: generate the reconstructed integral image:
Convert the face region determined in the first step back to RGB space, then convert it to a grayscale image using formula (2),
X = 0.299R + 0.587G + 0.114B (2),
where R, G and B are the red, green and blue channels of RGB space, and X is the gray value in the grayscale image;
Then compute the significance factor of each pixel in the grayscale image to obtain the significance factor map, as shown in formula (3),
σ(X_c) = magn × arctan(V / X_c) (3),
where magn is an amplification coefficient, σ(X_c) is the significance factor of pixel X_c in the facial image, and V is the gray-level difference between pixel X_c and its eight-neighborhood X_i (i = 0, …, 7) centered on X_c, computed as shown in formula (4),
The pixel value at each point of the reconstructed integral image is the sum of all pixel values of the significance factor map within the rectangle formed by the origin at its upper-left corner and that point, generating the reconstructed integral image as shown in formula (5),
IN(X_c) = Σ_{0 ≤ i ≤ x} Σ_{0 ≤ j ≤ y} σ(i, j) (5),
where IN(X_c) is the pixel value at X_c in the reconstructed integral image, the coordinates of X_c are (x, y), and the value of IN(X_c) equals the sum of all pixel values of the significance factor map in the rectangle formed by the points (0, 0), (x, 0), (0, y) and (x, y);
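A short sketch of this second step; formula (4) is not reproduced in this text, so taking V as the mean absolute gray difference over the 8-neighborhood is an assumption, with magn = 10 as in the preferred embodiment:

```python
import numpy as np

def reconstructed_integral(gray, magn=10.0):
    g = gray.astype(np.float64) + 1e-6           # avoid division by zero
    pad = np.pad(g, 1, mode="edge")
    v = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            v += np.abs(pad[1+dy:1+dy+g.shape[0], 1+dx:1+dx+g.shape[1]] - g)
    v /= 8.0                                     # assumed form of formula (4)
    sigma = magn * np.arctan(v / g)              # formula (3): significance map
    return sigma.cumsum(axis=0).cumsum(axis=1)   # formula (5): integral image
```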
The third step: two-pass SURF feature matching:
The matching process first detects SURF features and generates descriptors, then performs one pass of coarse matching to obtain scale-difference and direction-difference information, and finally uses this information to perform one pass of precise matching; the concrete steps are as follows:
(1) Generate SURF descriptors:
Using box filter templates of different sizes and the reconstructed integral image obtained in the second step, compute the Hessian-determinant response images of the significance factor map at different scales; 3D non-maximum suppression is then applied to these response images, points with local response maxima are taken as feature points, and the scale of each feature point is the scale of its response image. If the box filter template size is L × L pixels, the original size is L = 9 pixels with corresponding response-image scale s = 1.2; template sizes of L = 15, 21 and 27 pixels are then used in turn, and the corresponding response-image scale s is computed by formula (6),
s = 1.2 × L/9 (6),
After the position and scale s of each feature point are obtained, Haar wavelet templates of size 4s × 4s pixels are applied to the significance factor map within a circular region of radius 6s centered on the feature point (s rounded to an integer here). A fan-shaped sliding window with apex angle π/3, centered on the feature point, is then rotated around it in steps of 0.2 radian; at each position the horizontal and vertical Haar wavelet responses dx and dy of the image inside the window are accumulated into ∑dx + ∑dy, and the direction with the maximum accumulated response is taken as the principal direction of the feature point. After the principal direction is obtained, an image region of 20s × 20s pixels centered on the feature point and aligned with the principal direction is divided into 4 × 4 sub-blocks; in each sub-block, Haar templates of size 2s × 2s pixels compute the responses, and for the horizontal direction x and the vertical direction y the cumulative sums and cumulative absolute sums ∑dx, ∑|dx|, ∑dy, ∑|dy| are accumulated, forming the feature vector, i.e. the SURF descriptor; each feature point thus yields a 4 × 4 × 4 = 64-dimensional SURF descriptor;
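As a rough stand-in for this detector/descriptor stage, stock SURF from opencv-contrib can be run on the significance factor map; this is an approximation, since the patent builds its own response images from the reconstructed integral image, which stock SURF does not:

```python
import cv2
import numpy as np

def surf_features(sigma):
    # `sigma` is the float significance map from the step above
    img8 = cv2.normalize(sigma, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # requires opencv-contrib with the (formerly nonfree) xfeatures2d module
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)
    kps, des = surf.detectAndCompute(img8, None)   # 64-dim descriptors
    # each cv2.KeyPoint carries pt, size (scale) and angle (principal direction)
    return kps, des
```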
(2) SURF coarse matching:
For each feature point in the image to be matched, compute the Euclidean distance between its SURF descriptor and those of all feature points in the template image, recording the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2; when d1/d2 < th1 and d1 < th2, the point and its nearest neighbor are recorded as a matched pair and stored in the initial match set, completing SURF coarse matching, where th1 and th2 are preset thresholds;
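A sketch of this coarse pass, using the embodiment's preset thresholds th1 = 0.6 and th2 = 0.3:

```python
import numpy as np

def coarse_match(des1, des2, th1=0.6, th2=0.3):
    pairs = []
    for i, d in enumerate(des1):
        dist = np.linalg.norm(des2 - d, axis=1)  # Euclidean distances to template
        j, k = np.argsort(dist)[:2]              # nearest / second-nearest neighbor
        if dist[j] / dist[k] < th1 and dist[j] < th2:
            pairs.append((i, j))                 # record matched pair
    return pairs
```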
(3) SURF precise matching:
Compute and accumulate the scale differences and angle differences between matched pairs in the initial match set obtained in step (2), where the scale is the feature-point scale and the angle is the feature-point principal direction; then compute the mean ds and standard deviation dc of the scale differences over all matched pairs and the mean dO of the angle differences, and empty the initial match set. For each feature point in the image to be matched, compute the scale difference tds and angle difference tdO between it and every feature point in the template image; since the scale and angle differences between corresponding parts of the image to be matched and the template image should be consistent, those of true matches should fall within a certain range. If tds and tdO satisfy the condition of formula (7),
(ds − 1.5dc) < tds < (ds + 1.5dc) ∩ (dO − π/6) < tdO < (dO + π/6) (7),
then compute the Euclidean distance between the two descriptors and repeat the coarse-matching step (2) above, storing the resulting matched pairs in the match set; if formula (7) is not satisfied, the candidate pair is skipped and its Euclidean distance is not computed;
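A sketch of the precise pass under the assumptions above, with the embodiment's relaxed thresholds th1 = 0.7, th2 = 0.4 (angles in degrees as OpenCV reports them, π/6 rad = 30°; wrap-around of angles ignored for brevity):

```python
import numpy as np

def precise_match(kp1, des1, kp2, des2, pairs, th1=0.7, th2=0.4):
    sd = [kp1[i].size - kp2[j].size for i, j in pairs]    # scale differences
    ad = [kp1[i].angle - kp2[j].angle for i, j in pairs]  # angle differences (deg)
    ds, dc, dO = np.mean(sd), np.std(sd), np.mean(ad)
    out = []
    for i, k1 in enumerate(kp1):
        # formula (7): gate candidates by scale and angle difference
        cand = [j for j, k2 in enumerate(kp2)
                if ds - 1.5*dc < k1.size - k2.size < ds + 1.5*dc
                and dO - 30 < k1.angle - k2.angle < dO + 30]
        if len(cand) < 2:
            continue
        dist = np.linalg.norm(des2[cand] - des1[i], axis=1)
        o = np.argsort(dist)[:2]
        if dist[o[0]] / dist[o[1]] < th1 and dist[o[0]] < th2:
            out.append((i, cand[o[0]]))
    return out
```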
The fourth step: remove mismatches:
The shape context algorithm is used to reject mismatches from the matching result of the third step, as follows:
(1) Generate shape context descriptors:
The feature points belonging to the image to be matched and to the template image in the match set obtained in the third step are taken as the sample points of the two images; for each sample point, compute and record the distance and angle from every other sample point in the same image to it, divide the normalized distances into 6 bins and the angle range [0, 2π] into 12 bins, so that the 6 distance bins and 12 angle bins form 72 blocks, and count the number of sample points falling in each block, yielding a 72-dimensional shape context descriptor;
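A sketch of this 72-bin descriptor (6 distance bins × 12 angle bins; uniform bins are assumed, since the text normalizes plain distances rather than log-polar ones):

```python
import numpy as np

def shape_context(pts):
    pts = np.asarray(pts, dtype=float)   # (N, 2) sample points
    n = len(pts)
    desc = np.zeros((n, 72))
    for i in range(n):
        d = np.delete(pts, i, axis=0) - pts[i]
        r = np.linalg.norm(d, axis=1)
        r = r / r.max()                                  # normalize distances
        theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
        rb = np.minimum((r * 6).astype(int), 5)          # 6 distance bins
        tb = np.minimum((theta / (2 * np.pi) * 12).astype(int), 11)  # 12 angle bins
        hist = np.zeros((6, 12))
        np.add.at(hist, (rb, tb), 1)                     # count points per block
        desc[i] = hist.ravel()                           # 72-dim descriptor
    return desc
```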
(2) Reject mismatches and complete face image matching:
Compute the Euclidean distance d_sc between the shape context descriptors of each matched pair in the match set obtained in the third step, and obtain the mean w and standard deviation f of these distances; matches whose d_sc does not satisfy formula (8) are weeded out as mismatches, and the remaining pairs form the final match set,
d_sc ≤ w + f (8),
This completes the face image matching.
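And the rejection rule of formula (8) as a one-liner sketch, taking the descriptors from `shape_context` and the pairs from the precise pass:

```python
import numpy as np

def reject_mismatches(sc1, sc2, pairs):
    d = np.array([np.linalg.norm(sc1[i] - sc2[j]) for i, j in pairs])
    w, f = d.mean(), d.std()
    return [p for p, dist in zip(pairs, d) if dist <= w + f]  # keep d_sc <= w + f
```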
In the above face image matching method, the amplification coefficient in formula (3) is magn = 10.
In the above face image matching method, the preset thresholds in SURF coarse matching are th1 = 0.6 and th2 = 0.3.
In the above face image matching method, when the coarse-matching step (2) is repeated during SURF precise matching, th1 = 0.7 and th2 = 0.4 are used.
The beneficial effects of the invention are as follows: compared with the prior art, the outstanding substantive features and notable progress of the present invention are:
(1) The method of the invention is a face image matching method based on the SURF algorithm and shape context: SURF coarse matching yields scale-difference and direction-difference information, this information is then used for SURF precise matching, and the shape context algorithm removes mismatches from the result, overcoming the defects of few feature points, few matched points and low accuracy in existing face image matching methods.
(2) The method sets the face region as the region of interest; matching only within the face region saves considerable time and improves matching efficiency.
(3) The method applies a reconstruction transformation to the image when generating the integral image, making positions rich in facial features, such as the eyes, eyebrows, nose and mouth, more prominent, so that more feature points are found and the number of valid feature points increases.
(4) The method uses the scale-difference and direction-difference information obtained by coarse matching to carry out precise matching, obtaining more matched points with improved accuracy.
(5) The method uses the shape context algorithm to reject mismatches from the precise-matching result, further improving the accuracy rate.
The following embodiments further demonstrate the outstanding substantive features and notable progress of the present invention.
Brief description of the drawings
The present invention is further described below with reference to the drawings and embodiments.
Fig. 1 is a schematic flow diagram of the method of the invention.
Fig. 2 shows the number of matched pairs obtained by the TSURF+SC method of the invention and by existing face image matching methods under different thresholds.
Fig. 3 shows the accuracy of the TSURF+SC method of the invention and of existing face image matching methods under different thresholds.
Detailed description
As shown in Fig. 1, the flow of the method of the invention is: determine the face region → generate the reconstructed integral image → two-pass SURF feature matching → generate shape context descriptors and remove mismatches, completing the face image matching.
As shown in Fig. 2, for different thresholds th1 with th2 = 0.4, the TSURF+SC method of the invention yields the largest number of correct matches among the compared face image matching methods, indicating that it performs better than the existing methods.
As shown in Fig. 3, for different thresholds th1 with th2 = 0.4, the TSURF+SC method of the invention achieves the highest matching accuracy among the compared face image matching methods, again indicating that it performs better than the existing methods.
Embodiment
The face image matching method of this embodiment is a face image matching method based on two passes of SURF and shape context, with the following concrete steps:
The first step: determine the face region:
Input two facial images of the same size and the same person. The image with expression, pose and illumination variation serves as the image to be matched, and the standard frontal face image serves as the template image. Both images are first scaled down to 1/8 of their original size. A 20 × 20 pixel face-detection search box then scans each image from left to right and top to bottom; each scanned sub-image is judged by OpenCV's built-in frontal face detector, and if it is a facial image it is marked as a face region. After each full scan of the two images, the search box is enlarged by 10% and the scan is repeated, until the search box grows to half the size of the image. All marked face sub-images are then converted from RGB to the YCrCb color space, and the Cr and Cb components of every pixel are checked against the skin-color condition shown in formula (1),
133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),
where Cr and Cb are the chrominance components of the image in the YCrCb color space.
A scanned region in which more than 40% of the pixels satisfy formula (1) is confirmed as a face region, i.e. the region of interest. The face regions determined in the image to be matched and in the template image are then magnified 8 times, restoring them to their original size;
The second step: generate the reconstructed integral image:
Convert the face region determined in the first step back to RGB space, then convert it to a grayscale image using formula (2),
X = 0.299R + 0.587G + 0.114B (2),
where R, G and B are the red, green and blue channels of RGB space, and X is the gray value in the grayscale image;
Then compute the significance factor of each pixel in the grayscale image to obtain the significance factor map, as shown in formula (3),
σ(X_c) = magn × arctan(V / X_c) (3),
where magn is the amplification coefficient, magn = 10; σ(X_c) is the significance factor of pixel X_c in the facial image; and V is the gray-level difference between pixel X_c and its eight-neighborhood X_i (i = 0, …, 7) centered on X_c, computed as shown in formula (4),
The pixel value at each point of the reconstructed integral image is the sum of all pixel values of the significance factor map within the rectangle formed by the origin at its upper-left corner and that point, generating the reconstructed integral image as shown in formula (5),
IN(X_c) = Σ_{0 ≤ i ≤ x} Σ_{0 ≤ j ≤ y} σ(i, j) (5),
where IN(X_c) is the pixel value at X_c in the reconstructed integral image, the coordinates of X_c are (x, y), and the value of IN(X_c) equals the sum of all pixel values of the significance factor map in the rectangle formed by the points (0, 0), (x, 0), (0, y) and (x, y);
The third step: two-pass SURF feature matching:
The matching process first detects SURF features and generates descriptors, then performs one pass of coarse matching to obtain scale-difference and direction-difference information, and finally uses this information to perform one pass of precise matching; the concrete steps are as follows:
(1) Generate SURF descriptors:
Using box filter templates of different sizes and the reconstructed integral image obtained in the second step, compute the Hessian-determinant response images of the significance factor map at different scales; 3D non-maximum suppression is then applied to these response images, points with local response maxima are taken as feature points, and the scale of each feature point is the scale of its response image. If the box filter template size is L × L pixels, the original size is L = 9 pixels with corresponding response-image scale s = 1.2; template sizes of L = 15, 21 and 27 pixels are then used in turn, and the corresponding response-image scale s is computed by formula (6),
s = 1.2 × L/9 (6),
After the position and scale s of each feature point are obtained, Haar wavelet templates of size 4s × 4s pixels are applied to the significance factor map within a circular region of radius 6s centered on the feature point (s rounded to an integer here). A fan-shaped sliding window with apex angle π/3, centered on the feature point, is then rotated around it in steps of 0.2 radian; at each position the horizontal and vertical Haar wavelet responses dx and dy of the image inside the window are accumulated into ∑dx + ∑dy, and the direction with the maximum accumulated response is taken as the principal direction of the feature point. After the principal direction is obtained, an image region of 20s × 20s pixels centered on the feature point and aligned with the principal direction is divided into 4 × 4 sub-blocks; in each sub-block, Haar templates of size 2s × 2s pixels compute the responses, and for the horizontal direction x and the vertical direction y the cumulative sums and cumulative absolute sums ∑dx, ∑|dx|, ∑dy, ∑|dy| are accumulated, forming the feature vector, i.e. the SURF descriptor; each feature point thus yields a 4 × 4 × 4 = 64-dimensional SURF descriptor;
(2) SURF coarse matching:
For each feature point in the image to be matched, compute the Euclidean distance between its SURF descriptor and those of all feature points in the template image, recording the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2; when d1/d2 < th1 and d1 < th2, the point and its nearest neighbor are recorded as a matched pair and stored in the initial match set, completing SURF coarse matching, where th1 and th2 are the preset thresholds, th1 = 0.6, th2 = 0.3;
(3) SURF precise matching:
Compute and accumulate the scale differences and angle differences between matched pairs in the initial match set obtained in step (2), where the scale is the feature-point scale and the angle is the feature-point principal direction; then compute the mean ds and standard deviation dc of the scale differences over all matched pairs and the mean dO of the angle differences, and empty the initial match set. For each feature point in the image to be matched, compute the scale difference tds and angle difference tdO between it and every feature point in the template image; since the scale and angle differences between corresponding parts of the two images should be consistent, those of true matches should fall within a certain range. If tds and tdO satisfy the condition of formula (7),
(ds − 1.5dc) < tds < (ds + 1.5dc) ∩ (dO − π/6) < tdO < (dO + π/6) (7),
then compute the Euclidean distance between the two descriptors and repeat the coarse-matching step (2) above, with th1 = 0.7 and th2 = 0.4, storing the resulting matched pairs in the match set; if formula (7) is not satisfied, the candidate pair is skipped and its Euclidean distance is not computed;
The fourth step: remove mismatches:
The shape context algorithm is used to reject mismatches from the matching result of the third step, as follows:
(1) Generate shape context descriptors:
The feature points belonging to the image to be matched and to the template image in the match set obtained in the third step are taken as the sample points of the two images; for each sample point, compute and record the distance and angle from every other sample point in the same image to it, divide the normalized distances into 6 bins and the angle range [0, 2π] into 12 bins, so that the 6 distance bins and 12 angle bins form 72 blocks, and count the number of sample points falling in each block, yielding a 72-dimensional shape context descriptor;
(2) Reject mismatches and complete face image matching:
Compute the Euclidean distance d_sc between the shape context descriptors of each matched pair in the match set obtained in the third step, and obtain the mean w and standard deviation f of these distances; matches whose d_sc does not satisfy formula (8) are weeded out as mismatches, and the remaining pairs form the final match set,
d_sc ≤ w + f (8),
thus completing the face image matching.
This embodiment was implemented on the VS2005 and OpenCV 2.0 platform. Matching experiments were carried out on images with expression changes, pose changes, illumination changes, and simultaneous expression and pose changes from the Technical University of Denmark IMM face database and the Georgia Tech face database. The IMM database contains 40 subjects with 6 images each, of size 640 × 480; the Georgia Tech face database contains 50 subjects with 15 images each, also of size 640 × 480. The experiments used an Intel Core i3 processor with 4 GB of memory. To verify the advantage of the method in feature-point quantity and accuracy, this embodiment compares the classical SURF algorithm, SURF combined with RANSAC, an improved SURF combined with RANSAC, and the present TSURF+SC method, recording for each group of test images the number of matched points and the number of erroneous matches under different thresholds th1 and th2. The experimental data show that the method of this embodiment yields on average over 80% more matched points than the compared algorithms, and the proportion of erroneous matches is also lower than for the compared algorithms; however, its running time is about 80% longer than that of the original SURF algorithm, so efficiency needs improvement in future research. Table 1 lists the matching data for a group of IMM database face images with pose variation at the empirical thresholds th1 = 0.7, th2 = 0.4; Table 2 lists the matching data for a group of Georgia Tech database face images with illumination variation at the same thresholds.
Table 1. Matching results under pose variation
Table 2. Matching results under illumination variation
TSURF+SC in the tables denotes the method of this embodiment.
The results show that the method of this embodiment is superior to the three compared algorithms in the number of matched pairs and in accuracy; although its matching time is higher than that of the first two algorithms, it provides more correct matches, and its efficiency remains to be improved in future research.
Claims (4)
1. A face image matching method, characterized in that it is a face image matching method based on two passes of SURF and shape context, with the following concrete steps:
The first step: determine the face region:
Input two facial images of the same size and the same person. The image with expression, pose and illumination variation serves as the image to be matched, and the standard frontal face image serves as the template image. Both images are first scaled down to 1/8 of their original size. A 20 × 20 pixel face-detection search box then scans each image from left to right and top to bottom; each scanned sub-image is judged by OpenCV's built-in frontal face detector, and if it is a facial image it is marked as a face region. After each full scan of the two images, the search box is enlarged by 10% and the scan is repeated, until the search box grows to half the size of the image. All marked face sub-images are then converted from RGB to the YCrCb color space, and the Cr and Cb components of every pixel are checked against the skin-color condition shown in formula (1),
133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),
where Cr and Cb are the chrominance components of the image in the YCrCb color space;
a scanned region in which more than 40% of the pixels satisfy formula (1) is confirmed as a face region, i.e. the region of interest, and the face regions determined in the image to be matched and in the template image are then magnified 8 times, restoring them to their original size;
The second step: generate the reconstructed integral image:
Convert the face region determined in the first step back to RGB space, then convert it to a grayscale image using formula (2),
X = 0.299R + 0.587G + 0.114B (2),
where R, G and B are the red, green and blue channels of RGB space, and X is the gray value in the grayscale image;
Then compute the significance factor of each pixel in the grayscale image to obtain the significance factor map, as shown in formula (3),
σ(X_c) = magn × arctan(V / X_c) (3),
where magn is an amplification coefficient, σ(X_c) is the significance factor of pixel X_c in the facial image, and V is the gray-level difference between pixel X_c and its eight-neighborhood X_i (i = 0, …, 7) centered on X_c, computed as shown in formula (4),
The pixel value at each point of the reconstructed integral image is the sum of all pixel values of the significance factor map within the rectangle formed by the origin at its upper-left corner and that point, generating the reconstructed integral image as shown in formula (5),
IN(X_c) = Σ_{0 ≤ i ≤ x} Σ_{0 ≤ j ≤ y} σ(i, j) (5),
where IN(X_c) is the pixel value at X_c in the reconstructed integral image, the coordinates of X_c are (x, y), and the value of IN(X_c) equals the sum of all pixel values of the significance factor map in the rectangle formed by the points (0, 0), (x, 0), (0, y) and (x, y);
The third step: two-pass SURF feature matching:
The matching process first detects SURF features and generates descriptors, then performs one pass of coarse matching to obtain scale-difference and direction-difference information, and finally uses this information to perform one pass of precise matching; the concrete steps are as follows:
(1) Generate SURF descriptors:
Using box filter templates of different sizes and the reconstructed integral image obtained in the second step, compute the Hessian-determinant response images of the significance factor map at different scales; 3D non-maximum suppression is then applied to these response images, points with local response maxima are taken as feature points, and the scale of each feature point is the scale of its response image. If the box filter template size is L × L pixels, the original size is L = 9 pixels with corresponding response-image scale s = 1.2; template sizes of L = 15, 21 and 27 pixels are then used in turn, and the corresponding response-image scale s is computed by formula (6),
s = 1.2 × L/9 (6),
After the position and scale s of each feature point are obtained, Haar wavelet templates of size 4s × 4s pixels are applied to the significance factor map within a circular region of radius 6s centered on the feature point (s rounded to an integer here). A fan-shaped sliding window with apex angle π/3, centered on the feature point, is then rotated around it in steps of 0.2 radian; at each position the horizontal and vertical Haar wavelet responses dx and dy of the image inside the window are accumulated into ∑dx + ∑dy, and the direction with the maximum accumulated response is taken as the principal direction of the feature point. After the principal direction is obtained, an image region of 20s × 20s pixels centered on the feature point and aligned with the principal direction is divided into 4 × 4 sub-blocks; in each sub-block, Haar templates of size 2s × 2s pixels compute the responses, and for the horizontal direction x and the vertical direction y the cumulative sums and cumulative absolute sums ∑dx, ∑|dx|, ∑dy, ∑|dy| are accumulated, forming the feature vector, i.e. the SURF descriptor; each feature point thus yields a 4 × 4 × 4 = 64-dimensional SURF descriptor;
(2) SURF coarse matching:
For each feature point in the image to be matched, compute the Euclidean distance between its SURF descriptor and those of all feature points in the template image, recording the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2; when d1/d2 < th1 and d1 < th2, the point and its nearest neighbor are recorded as a matched pair and stored in the initial match set, completing SURF coarse matching, where th1 and th2 are preset thresholds;
(3) SURF precise matching:
Compute and accumulate the scale differences and angle differences between matched pairs in the initial match set obtained in step (2), where the scale is the feature-point scale and the angle is the feature-point principal direction; then compute the mean ds and standard deviation dc of the scale differences over all matched pairs and the mean dO of the angle differences, and empty the initial match set. For each feature point in the image to be matched, compute the scale difference tds and angle difference tdO between it and every feature point in the template image; since the scale and angle differences between corresponding parts of the two images should be consistent, those of true matches should fall within a certain range. If tds and tdO satisfy the condition of formula (7),
(ds − 1.5dc) < tds < (ds + 1.5dc) ∩ (dO − π/6) < tdO < (dO + π/6) (7),
then compute the Euclidean distance between the two descriptors and repeat the coarse-matching step (2) above, storing the resulting matched pairs in the match set; if formula (7) is not satisfied, the candidate pair is skipped and its Euclidean distance is not computed;
The fourth step: remove mismatches:
The shape context algorithm is used to reject mismatches from the matching result of the third step, as follows:
(1) Generate shape context descriptors:
The feature points belonging to the image to be matched and to the template image in the match set obtained in the third step are taken as the sample points of the two images; for each sample point, compute and record the distance and angle from every other sample point in the same image to it, divide the normalized distances into 6 bins and the angle range [0, 2π] into 12 bins, so that the 6 distance bins and 12 angle bins form 72 blocks, and count the number of sample points falling in each block, yielding a 72-dimensional shape context descriptor;
(2) Reject mismatches and complete face image matching:
Compute the Euclidean distance d_sc between the shape context descriptors of each matched pair in the match set obtained in the third step, and obtain the mean w and standard deviation f of these distances; matches whose d_sc does not satisfy formula (8) are weeded out as mismatches, and the remaining pairs form the final match set,
d_sc ≤ w + f (8),
thus completing the face image matching.
2. The face image matching method according to claim 1, characterized in that: the amplification coefficient in said formula (3) is magn = 10.
3. The face image matching method according to claim 1, characterized in that: in said SURF coarse matching, the preset thresholds are th1 = 0.6 and th2 = 0.3.
4. The face image matching method according to claim 1, characterized in that: in said SURF precise matching, th1 = 0.7 and th2 = 0.4 are used when the coarse-matching step (2) is repeated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510820897.7A CN105354558B (en) | 2015-11-23 | 2015-11-23 | Face image matching method
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510820897.7A CN105354558B (en) | 2015-11-23 | 2015-11-23 | Face image matching method
Publications (2)
Publication Number | Publication Date |
---|---|
CN105354558A true CN105354558A (en) | 2016-02-24 |
CN105354558B CN105354558B (en) | 2018-09-28 |
Family
ID=55330525
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510820897.7A Expired - Fee Related CN105354558B (en) | 2015-11-23 | 2015-11-23 | Humanface image matching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105354558B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023187A (en) * | 2016-05-17 | 2016-10-12 | 西北工业大学 | Image registration method based on SIFT feature and angle relative distance |
CN106971164A (en) * | 2017-03-28 | 2017-07-21 | 北京小米移动软件有限公司 | Shape of face matching process and device |
CN107301718A (en) * | 2017-06-20 | 2017-10-27 | 深圳怡化电脑股份有限公司 | A kind of image matching method and device |
CN108171846A (en) * | 2017-12-30 | 2018-06-15 | 南京陶特思软件科技有限公司 | There is the access control system of fast verification |
CN109084898A (en) * | 2018-07-02 | 2018-12-25 | 北京印刷学院 | A method of it establishing observer and bores Cellular spectroscopic receptance function |
CN109086718A (en) * | 2018-08-02 | 2018-12-25 | 深圳市华付信息技术有限公司 | Biopsy method, device, computer equipment and storage medium |
CN109165657A (en) * | 2018-08-20 | 2019-01-08 | 贵州宜行智通科技有限公司 | A kind of image feature detection method and device based on improvement SIFT |
CN110210341A (en) * | 2019-05-20 | 2019-09-06 | 深圳供电局有限公司 | Identity card authentication method based on face recognition, system thereof and readable storage medium |
CN110852319A (en) * | 2019-11-08 | 2020-02-28 | 深圳市深视创新科技有限公司 | Rapid universal roi matching method |
CN110941989A (en) * | 2019-10-18 | 2020-03-31 | 北京达佳互联信息技术有限公司 | Image verification method, image verification device, video verification method, video verification device, equipment and storage medium |
CN111598176A (en) * | 2020-05-19 | 2020-08-28 | 北京明略软件系统有限公司 | Image matching processing method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070076922A1 (en) * | 2005-09-30 | 2007-04-05 | Sony United Kingdom Limited | Object detection |
CN104809731A (en) * | 2015-05-05 | 2015-07-29 | 北京工业大学 | Gradient binaryzation based rotation-invariant and scale-invariant scene matching method |
CN104851095A (en) * | 2015-05-14 | 2015-08-19 | 江南大学 | Workpiece image sparse stereo matching method based on improved-type shape context |
-
2015
- 2015-11-23 CN CN201510820897.7A patent/CN105354558B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070076922A1 (en) * | 2005-09-30 | 2007-04-05 | Sony United Kingdom Limited | Object detection |
CN104809731A (en) * | 2015-05-05 | 2015-07-29 | 北京工业大学 | Gradient binaryzation based rotation-invariant and scale-invariant scene matching method |
CN104851095A (en) * | 2015-05-14 | 2015-08-19 | 江南大学 | Workpiece image sparse stereo matching method based on improved-type shape context |
Non-Patent Citations (2)
Title |
---|
- YANG GUI et al.: "Point-pattern matching method using SURF and Shape Context", OPTIK *
- ZHUANG Xuanyi: "Behavior recognition based on shape context and SURF interest points", China Masters' Theses Full-text Database, Information Science & Technology *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023187A (en) * | 2016-05-17 | 2016-10-12 | 西北工业大学 | Image registration method based on SIFT feature and angle relative distance |
CN106971164B (en) * | 2017-03-28 | 2020-02-04 | 北京小米移动软件有限公司 | Face shape matching method and device |
CN106971164A (en) * | 2017-03-28 | 2017-07-21 | 北京小米移动软件有限公司 | Shape of face matching process and device |
CN107301718A (en) * | 2017-06-20 | 2017-10-27 | 深圳怡化电脑股份有限公司 | A kind of image matching method and device |
CN107301718B (en) * | 2017-06-20 | 2019-07-26 | 深圳怡化电脑股份有限公司 | A kind of image matching method and device |
CN108171846A (en) * | 2017-12-30 | 2018-06-15 | 南京陶特思软件科技有限公司 | There is the access control system of fast verification |
CN109084898A (en) * | 2018-07-02 | 2018-12-25 | 北京印刷学院 | A method of it establishing observer and bores Cellular spectroscopic receptance function |
CN109086718A (en) * | 2018-08-02 | 2018-12-25 | 深圳市华付信息技术有限公司 | Biopsy method, device, computer equipment and storage medium |
CN109165657A (en) * | 2018-08-20 | 2019-01-08 | 贵州宜行智通科技有限公司 | A kind of image feature detection method and device based on improvement SIFT |
CN110210341A (en) * | 2019-05-20 | 2019-09-06 | 深圳供电局有限公司 | Identity card authentication method based on face recognition, system thereof and readable storage medium |
CN110210341B (en) * | 2019-05-20 | 2022-12-06 | 深圳供电局有限公司 | Identity card authentication method based on face recognition, system thereof and readable storage medium |
CN110941989A (en) * | 2019-10-18 | 2020-03-31 | 北京达佳互联信息技术有限公司 | Image verification method, image verification device, video verification method, video verification device, equipment and storage medium |
US11625819B2 (en) | 2019-10-18 | 2023-04-11 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and device for verifying image and video |
CN110852319A (en) * | 2019-11-08 | 2020-02-28 | 深圳市深视创新科技有限公司 | Rapid universal roi matching method |
CN111598176A (en) * | 2020-05-19 | 2020-08-28 | 北京明略软件系统有限公司 | Image matching processing method and device |
CN111598176B (en) * | 2020-05-19 | 2023-11-17 | 北京明略软件系统有限公司 | Image matching processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN105354558B (en) | 2018-09-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20180928 |
CF01 | Termination of patent right due to non-payment of annual fee |