CN107145820A - Eyes localization method based on HOG features and FAST algorithms - Google Patents
- Publication number
- CN107145820A CN107145820A CN201710155361.7A CN201710155361A CN107145820A CN 107145820 A CN107145820 A CN 107145820A CN 201710155361 A CN201710155361 A CN 201710155361A CN 107145820 A CN107145820 A CN 107145820A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
An eyes localization method based on HOG features and FAST algorithms: a trained SVM model file is loaded; the i-th frame image to be processed is obtained and copied as image midImage; image midImage is preprocessed; spot detection is performed with the FAST algorithm to obtain the candidate-region center-point vector points; each candidate region is judged in turn as to whether it is an eye region, yielding the eye-region center-point vector pointsTru; the points in pointsTru are screened to obtain a final eye-region center-point vector of size at most 2; if only one eye is detected, the result is repaired; the eye-region center-point vector pointsFnl is returned and traversed, and the rectangular region cropped around each point of pointsFnl is taken as an eye region. The present invention adapts stably to environmental change, with high robustness to the environment, high accuracy and a low false-alarm rate.
Description
Technical field
The present invention relates to the technical fields of computer vision, image processing and pattern recognition, and provides a method for rapidly locating the eyes in face images captured by an infrared single-camera device. The method can be used in public places such as schools, banks, prisons and factories, and is equally applicable to access control for private residences and their surrounding areas.
Background technology
Biometric identification is the use of automatic techniques to detect an individual's physiological or behavioral characteristics for identity authentication; it has been widely applied in commercial, military and criminal-investigation fields. Among the many biometric traits, iris recognition, with its uniqueness, stability, collectability and non-invasiveness, has important scientific research value and broad application prospects. It has developed rapidly in recent decades and is now widely used in China in customs, public security, finance, the military, airports, border crossings and other important industries and fields, as well as in commercial markets such as intelligent access control, door locks, attendance systems, mobile phones, digital cameras and intelligent toys. In practice, however, iris recognition still faces many challenges. Quickly and accurately locating the eye positions in a face image captured by a single-camera device is the prerequisite for the subsequent steps of iris localization, feature extraction and recognition.
Because iris recognition is applied so widely, the camera device may be placed in an arbitrary position and is affected by many environmental factors, so the captured pictures contain complex, non-constant background noise. In addition, the proportion of people wearing glasses in daily life is increasing, so occlusion by glasses is ever more common. These disturbing factors raise both the difficulty and the required precision of eye localization and affect the accuracy of the subsequent iris recognition in the whole authentication system, making eye localization a key problem to be solved urgently in iris recognition technology.
As research on iris recognition in real-life scenes deepens, eye localization methods for noisy backgrounds acquire important theoretical significance and application value. They can, for example, be applied to fatigue-driving detection: the driver's state can be analyzed quickly from the located eyes, reducing the frequency of accidents.
Content of the invention
To overcome the shortcomings of existing eye localization methods, which cannot adapt stably to environmental change, whose iris-recognition accuracy is strongly affected by complex backgrounds, and whose false-alarm rate is high, the present invention provides an eyes localization method based on HOG features and FAST algorithms that adapts stably to environmental change, with high robustness to the environment, high accuracy and a low false-alarm rate.
The technical solution adopted by the present invention to solve its technical problem is as follows:
An eyes localization method based on HOG features and FAST algorithms, comprising the following steps:
1) load the trained SVM model file;
2) obtain the i-th frame image srcImage to be processed and copy it as image midImage, i being a positive integer;
3) preprocess image midImage;
4) perform spot detection on image midImage with the FAST algorithm to obtain the candidate-region center-point vector points;
5) judge each candidate region on image midImage in turn as to whether it is an eye region, obtaining the eye-region center-point vector pointsTru;
6) screen the points in vector pointsTru to obtain the final eye-region center-point vector pointsFnl, of size at most 2;
7) if only one eye is detected, i.e. the size of vector pointsFnl is 1, perform a repair;
8) return the eye-region center-point vector pointsFnl, traverse it, and crop the rectangular region centered on each pointsFnl[i] as an eye region.
Further, in step 3), the preprocessing proceeds as follows:
3.1) apply a grayscale transformation to image midImage, converting it to a gray-scale map;
3.2) apply Gaussian smoothing to image midImage to filter out noisy points;
3.3) apply a dilation operation to image midImage to enlarge the light-spot points.
Further, in step 4), the FAST algorithm proceeds as follows:
4.1) set a threshold t for comparing the gray-value difference between surrounding pixels and the central pixel;
4.2) choose pixels P in image midImage in turn and let the gray value of the point be I(P). On the circle of radius 3 pixels centered on P, take 16 pixels. Number the pixel directly above P as No. 1 and number the 16 pixels clockwise as P'[1], P'[2], ..., P'[16];
4.3) examine the pixels P'[1], P'[5], P'[9] and P'[13]; if at least three of these four have gray values that are all greater than I(P)+t or all less than I(P)-t, go to step 4.4), otherwise return to step 4.2);
4.4) tentatively judge point P to be a corner. Traverse pixels No. 1 to No. 16, P'[i] (i = 1, 2, ..., 16), whose gray values are I(P'[i]). If there exist 9 contiguous pixels whose gray values I(P'[i]) are all greater than I(P)+t or all less than I(P)-t, judge point P to be a corner and add it to the temp_points vector; otherwise return to step 4.2);
4.5) repeat steps 4.2)-4.4) until all pixels in image midImage have been traversed, obtaining the temp_points vector;
4.6) apply non-maximum suppression to the corners in vector temp_points and rebuild the temp_points vector;
4.7) choose the first point in the temp_points vector and add it to the points vector;
4.8) continue choosing corners TP from the temp_points vector in turn and compare each with every point CP[i] in the points vector, i being a positive integer;
4.9) if there exists a CP[i] such that CP[i] lies within the 20*20 pixel neighborhood of TP, return directly to step 4.7); otherwise add point TP to the points vector and return to step 4.7), until the traversal of the temp_points vector ends;
4.10) output the points vector.
Further, in step 4.6), the non-maximum suppression proceeds as follows:
4.6.1) take the points TP in temp_points in turn;
4.6.2) take the 3*3 pixel neighborhood centered on corner TP and compute a score V for each corner P in the neighborhood, where V is the sum of the absolute differences between I(P) and I(P'[i]) (i = 1, 2, ..., 16):
V = Σ_{i=1}^{16} |I(P) - I(P'[i])|
4.6.3) keep the corner P with the largest V as the maximum corner in the neighborhood and delete the other points of the neighborhood from the temp_points vector.
In step 5), the eye-region center-point vector pointsTru is obtained as follows:
5.1) obtain the j-th candidate-region center-point coordinate center, j being a positive integer;
5.2) crop a rectangular candidate-region image cndImage of length 2*a and width 2*b centered on center; if the candidate region crosses the image border, translate it toward the image center by the overshoot;
5.3) compute the HOG feature vector of image cndImage;
5.4) if this is the first HOG feature computation for an image, compute the HOG feature-vector dimension and initialize the image feature-vector matrix featureMat, with 1 row and as many columns as the HOG feature-vector dimension;
5.5) copy the computed feature vector of image cndImage into the feature-vector matrix featureMat;
5.6) classify the feature vector of image cndImage with the trained SVM classifier;
5.7) if the classifier returns true, put the center point center into vector pointsTru.
Step 6) proceeds as follows:
6.1) put the 0th point of vector pointsTru into vector pointsFnl and mark it as point pointsTru[0];
6.2) read points pointsTru[k] in turn in a loop;
6.3) if the x coordinate of point pointsTru[k] differs sufficiently from that of point pointsTru[0], while its y coordinate differs sufficiently little from that of point pointsTru[0], consider pointsTru[k] to be the other eye, distinct from pointsTru[0], and put it into vector pointsFnl, k being a positive integer;
6.4) if the size of vector pointsFnl equals 2, exit the loop.
In step 7), the repair proceeds as follows:
7.1) obtain the eye-region center-point vector pointsFnl; from prior knowledge of facial proportions and the value of pointsFnl[0], judge whether the detected single eye is the left or the right eye, mark it, and call it the reference eye;
7.2) according to the coordinate of the reference eye-region center point, delimit in the symmetric region (i.e. the right half of the image if the reference eye is marked as the left eye) a rectangular region R of length L and height H whose center-point ordinate equals that of the reference eye-region center point, satisfying:
H = 8a·tan 10°
where imglen denotes the length of image srcImage;
7.3) slide a rectangle r of length 2*a and width 2*b linearly within rectangular region R with step d, cropping repair samples called paired eyes, and save the result images into a set T;
7.4) centered on the coordinate (x, y) of pointsFnl[0], cut a small P*Q gray-scale block A_c from the original gray-scale image, and reverse image A_c by 180° along its edge to obtain B_c;
7.5) for each result T_i in set T, cut from its center a small P*Q gray-scale block denoted T_ic, i = 0, 1, ..., k, where k is the size of set T;
7.6) for each T_ic, compute its similarity l_i with B_c;
take the image with the largest l_i; if l_i > 0.7, put its center-point coordinate into vector pointsFnl, otherwise discard it.
In step 8), the rectangular region of length 2*a and width 2*b cropped around pointsFnl[i] is the eye region.
The present invention proposes an eye localization algorithm for an infrared single-camera iris capture instrument which can perform fast and accurate eye localization on pictures collected under arbitrarily complex backgrounds, so that iris information can be collected in the located region for identity authentication. The invention provides a detection method that is strongly robust to environmental change, illumination, noise and picture quality, and that adapts to the wearing of glasses. Exploiting the characteristic that the pupil of an eye captured by an infrared single-camera device contains a light spot, candidate regions are supplied by the FAST algorithm and the real eye regions are selected by classification with a support vector machine, effectively reducing the system's false-alarm rate and improving its robustness to the environment and its accuracy.
The beneficial effects of the present invention are mainly: stable adaptation to environmental change, high robustness to the environment, high accuracy, and a low false-alarm rate.
Brief description of the drawings
Fig. 1 is the basic flowchart of the eyes localization method based on HOG features and FAST algorithms;
Fig. 2 is the detailed flowchart of the eyes localization method based on HOG features and FAST algorithms.
Embodiment
The invention is further described below with reference to the accompanying drawings.
Referring to Figures 1 and 2, an eyes localization method based on HOG features and FAST algorithms comprises the following steps:
1) load the trained SVM model file;
2) obtain the i-th frame image srcImage to be processed and copy it as image midImage, i being a positive integer;
3) preprocess image midImage; the detailed process is as follows:
3.1) apply a grayscale transformation to image midImage, converting it to a gray-scale map;
3.2) apply Gaussian smoothing to image midImage to filter out noisy points;
3.3) apply a dilation operation to image midImage to enlarge the light-spot points.
4) perform spot detection on image midImage with the FAST algorithm to obtain the candidate-region center-point vector points; the detailed process is as follows:
4.1) set a threshold t for comparing the gray-value difference between surrounding pixels and the central pixel; in this patent, t is set to 10;
4.2) choose pixels P in image midImage in turn and let the gray value of the point be I(P). On the circle of radius 3 pixels centered on P, take 16 pixels. Number the pixel directly above P as No. 1 and number the 16 pixels clockwise as P'[1], P'[2], ..., P'[16];
4.3) examine the pixels P'[1], P'[5], P'[9] and P'[13]; if at least three of these four have gray values that are all greater than I(P)+t or all less than I(P)-t, go to step 4.4), otherwise return to step 4.2);
4.4) tentatively judge point P to be a corner. Traverse pixels No. 1 to No. 16, P'[i] (i = 1, 2, ..., 16), whose gray values are I(P'[i]). If there exist 9 contiguous pixels whose gray values I(P'[i]) are all greater than I(P)+t or all less than I(P)-t, judge point P to be a corner and add it to the temp_points vector; otherwise return to step 4.2);
4.5) repeat steps 4.2)-4.4) until all pixels in image midImage have been traversed, obtaining the temp_points vector;
4.6) apply non-maximum suppression to the corners in vector temp_points and rebuild the temp_points vector; the non-maximum suppression proceeds as follows:
4.6.1) take the points TP in temp_points in turn;
4.6.2) take the 3*3 pixel neighborhood centered on corner TP and compute a score V for each corner P in the neighborhood, where V is the sum of the absolute differences between I(P) and I(P'[i]) (i = 1, 2, ..., 16):
V = Σ_{i=1}^{16} |I(P) - I(P'[i])|
4.6.3) keep the corner P with the largest V as the maximum corner in the neighborhood and delete the other points of the neighborhood from the temp_points vector.
4.7) choose the first point in the temp_points vector and add it to the points vector;
4.8) continue choosing corners TP from the temp_points vector in turn and compare each with every point CP[i] in the points vector;
4.9) if there exists a CP[i] such that CP[i] lies within the 20*20 pixel neighborhood of TP, return directly to step 4.7); otherwise add point TP to the points vector and return to step 4.7), until the traversal of the temp_points vector ends;
4.10) output the points vector.
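The segment test of steps 4.2)-4.4) can be sketched in pure Python as follows. The Bresenham-circle offsets, the border handling, and the synthetic test image are illustrative choices, not details taken from the patent:

```python
# Hypothetical minimal sketch of the FAST segment test (steps 4.1-4.4) on a
# plain 2D list of gray values.

# 16 offsets of the radius-3 Bresenham circle, numbered clockwise starting
# from the pixel directly above P (negative dy = "above" in image coordinates).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=10):
    """Return True if pixel (x, y) passes the FAST test with threshold t."""
    ip = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    # Quick rejection (step 4.3): of P'[1], P'[5], P'[9], P'[13] (ring indices
    # 0, 4, 8, 12), at least three must all be brighter than I(P)+t or all
    # darker than I(P)-t.
    probe = [ring[i] for i in (0, 4, 8, 12)]
    if sum(v > ip + t for v in probe) < 3 and sum(v < ip - t for v in probe) < 3:
        return False
    # Full test (step 4.4): 9 contiguous ring pixels, with wrap-around, all
    # brighter than I(P)+t or all darker than I(P)-t.
    doubled = ring + ring
    for start in range(16):
        seg = doubled[start:start + 9]
        if all(v > ip + t for v in seg) or all(v < ip - t for v in seg):
            return True
    return False

# Synthetic 17x17 image: uniform gray 50 with one bright pixel at the center,
# mimicking a corneal glint far brighter than its surrounding ring.
img = [[50] * 17 for _ in range(17)]
img[8][8] = 200
print(is_corner(img, 8, 8))  # the ring is entirely darker than I(P)-t -> True
print(is_corner(img, 4, 4))  # flat region -> False
```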
5) judge each candidate region on image midImage in turn as to whether it is an eye region, obtaining the eye-region center-point vector pointsTru; the detailed process is as follows:
5.1) obtain the j-th candidate-region center-point coordinate center, j being a positive integer;
5.2) crop a rectangular candidate-region image cndImage of length 2*a and width 2*b centered on center (if the candidate region crosses the image border, translate it toward the image center by the overshoot);
5.3) compute the HOG feature vector of image cndImage;
5.4) if this is the first HOG feature computation for an image, compute the HOG feature-vector dimension and initialize the image feature-vector matrix featureMat (with 1 row and as many columns as the HOG feature-vector dimension);
5.5) copy the computed feature vector of image cndImage into the feature-vector matrix featureMat;
5.6) classify the feature vector of image cndImage with the trained SVM classifier;
5.7) if the classifier returns true, put the center point center into vector pointsTru.
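Step 5.6) classifies each HOG vector with a trained SVM loaded from file. As a hypothetical illustration (not the patent's model), a trained *linear* SVM reduces at prediction time to the sign of a dot product plus a bias; the weights and feature vectors below are made up:

```python
# Illustrative sketch of step 5.6: run-time decision of a linear SVM.
# w, b and the feature vectors are invented for demonstration only.

def svm_predict(w, b, x):
    """Return True ("eye region") when the decision value w.x + b is positive."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return score > 0

w = [0.8, -0.5, 0.3]   # hypothetical weights learned offline
b = -0.2               # hypothetical bias
print(svm_predict(w, b, [1.0, 0.2, 0.5]))  # 0.8 - 0.1 + 0.15 - 0.2 = 0.65 > 0 -> True
print(svm_predict(w, b, [0.0, 1.0, 0.0]))  # -0.5 - 0.2 = -0.7 <= 0 -> False
```

In the patent the real model is an SVM trained on HOG vectors of eye and non-eye samples; only the decision step is sketched here.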
6) screen the points in vector pointsTru to obtain the final eye-region center-point vector pointsFnl, of size at most 2; the detailed process is as follows:
6.1) put the 0th point of vector pointsTru into vector pointsFnl and mark it as point pointsTru[0];
6.2) read points pointsTru[k] in turn in a loop;
6.3) if the x coordinate of point pointsTru[k] differs sufficiently from that of point pointsTru[0], while its y coordinate differs sufficiently little from that of point pointsTru[0], consider pointsTru[k] to be the other eye, distinct from pointsTru[0], and put it into vector pointsFnl, k being a positive integer;
6.4) if the size of vector pointsFnl equals 2, exit the loop.
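The screening of step 6) can be sketched as follows. The thresholds DX_MIN and DY_MAX are assumed values, since the patent only says the x coordinates must differ "sufficiently" and the y coordinates "sufficiently little":

```python
# Sketch of steps 6.1-6.4: keep the reference point plus at most one point
# that is horizontally far from it but vertically close to it.

DX_MIN, DY_MAX = 40, 15  # assumed pixel thresholds, not stated in the patent

def pick_eye_pair(points_tru):
    points_fnl = [points_tru[0]]          # step 6.1: reference point
    x0, y0 = points_tru[0]
    for xk, yk in points_tru[1:]:         # steps 6.2-6.3
        if abs(xk - x0) >= DX_MIN and abs(yk - y0) <= DY_MAX:
            points_fnl.append((xk, yk))
        if len(points_fnl) == 2:          # step 6.4: at most two eyes
            break
    return points_fnl

candidates = [(100, 120), (110, 200), (170, 125), (220, 123)]
print(pick_eye_pair(candidates))  # -> [(100, 120), (170, 125)]
```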
7) if only one eye is detected, i.e. the size of vector pointsFnl is 1, perform a repair; the detailed process is as follows:
7.1) obtain the eye-region center-point vector pointsFnl; from prior knowledge of facial proportions and the value of pointsFnl[0], judge whether the detected single eye is the left or the right eye, mark it, and call it the reference eye;
7.2) according to the coordinate of the reference eye-region center point, delimit in the symmetric region (i.e. the right half of the image if the reference eye is marked as the left eye) a rectangular region R of length L and height H whose center-point ordinate equals that of the reference eye-region center point, satisfying:
H = 8a·tan 10°
where imglen denotes the length of image srcImage;
7.3) slide a rectangle r of length 2*a and width 2*b linearly within rectangular region R with step d, cropping repair samples called paired eyes, and save the result images into a set T;
7.4) centered on the coordinate (x, y) of pointsFnl[0], cut a small P*Q gray-scale block A_c from the original gray-scale image, and reverse image A_c by 180° along its edge to obtain B_c;
7.5) for each result T_i in set T, cut from its center a small P*Q gray-scale block denoted T_ic (i = 0, 1, ..., k, where k is the size of set T);
7.6) for each T_ic, compute its similarity l_i with B_c;
take the image with the largest l_i; if l_i > 0.7, put its center-point coordinate into vector pointsFnl, otherwise discard it.
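Steps 7.4)-7.6) can be sketched as follows. The patent's similarity formula for l_i is not reproduced in this text, so zero-mean normalized cross-correlation is used here as a plausible stand-in, and the tiny image blocks are made up:

```python
# Sketch of steps 7.4-7.6: mirror the reference eye block, then score each
# paired-eye block by a similarity measure. NCC is an assumption, not the
# patent's (unreproduced) formula.
import math

def flip180(block):
    """Rotate a 2D block by 180 degrees (the patent's 'reversal along the edge')."""
    return [row[::-1] for row in block[::-1]]

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size blocks, in [-1, 1]."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) * sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0

base = [[10, 20], [30, 40]]          # stand-in for the reference block A_c
mirrored = flip180(base)             # stand-in for B_c: [[40, 30], [20, 10]]
print(mirrored)
print(round(ncc(base, flip180(mirrored)), 3))  # flipping twice restores base -> 1.0
```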
8) return the eye-region center-point vector pointsFnl, traverse it, and crop the rectangular region of length 2*a and width 2*b centered on each pointsFnl[i] as an eye region.
Image preprocessing based on Gaussian smoothing and dilation: Gaussian smoothing effectively reduces the noise present in an image. At the same time, the two-dimensional Gaussian function is rotationally symmetric, i.e. the filter's smoothness is identical in all directions, so even if the edge orientations of the image to be processed are not known in advance, subsequent edge detection will not be biased toward any direction. The two-dimensional Gaussian function can be expressed as:
G(x, y) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²))
where the smoothed pixel value I(x, y) at point (x, y) is obtained by convolving the image with G, and σ denotes the standard deviation.
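As a small illustration of the two-dimensional Gaussian above, the following sketch builds a normalized smoothing kernel; the kernel radius and σ are arbitrary choices:

```python
# Build a normalized (2k+1)x(2k+1) Gaussian smoothing kernel from the
# formula G(x, y) ~ exp(-(x^2 + y^2) / (2 sigma^2)).
import math

def gaussian_kernel(k=1, sigma=1.0):
    raw = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            for x in range(-k, k + 1)] for y in range(-k, k + 1)]
    s = sum(sum(row) for row in raw)        # normalize so the weights sum to 1
    return [[v / s for v in row] for row in raw]

kern = gaussian_kernel()
print(round(sum(sum(row) for row in kern), 6))  # -> 1.0
print(kern[1][1] > kern[0][0])  # center weight exceeds corner weight -> True
```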
Dilation expands the bright parts of an image, so that the result has larger highlighted regions than the original; it can thus enlarge the light-spot portion of the eye image and extend the coverage of the candidate regions. The dilation operation is:
dst(x, y) = max { src(x + dx, y + dy) + B(dx, dy) | (dx, dy) ∈ D_B }
where dst(x, y) is the dilated gray-scale image, src(x, y) is the original gray-scale image, B is the structuring element, dx and dy denote the components in the image x and y directions, and their range falls within the structuring-element domain D_B. Dilation chooses, within the neighborhood block determined by the structuring element, the maximum of the sums of the image values and the structuring-element values.
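The dilation formula can be illustrated with a flat 3*3 structuring element (B = 0 everywhere), which reduces dilation to a sliding maximum filter; the input values are made up:

```python
# Sketch of dst(x, y) = max{ src(x+dx, y+dy) | (dx, dy) in D_B } with a flat
# (2k+1)x(2k+1) structuring element, clipped at the image border.

def dilate(img, k=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[yy][xx]
                            for yy in range(max(0, y - k), min(h, y + k + 1))
                            for xx in range(max(0, x - k), min(w, x + k + 1)))
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(dilate(img))  # the single bright pixel grows to fill the 3x3 block
```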
Candidate-region detection based on the FAST feature detection algorithm: FAST is a feature detection algorithm based on gray-value comparison. It compares the gray value of a candidate feature point with the gray values of the pixels on the circle around it to determine whether the candidate point is a feature point:
V = Σ_{p' ∈ C_p, |I(p') - I(p)| > ε_d} |I(p') - I(p)|
where p' is any point on the circle C_p centered on p, I(p') is the gray value of p', I(p) is the gray value of the circle center p, and ε_d is the threshold on gray-value differences; p is considered a feature point if V exceeds a given threshold. On this basis, to increase speed, this patent uses a four-neighborhood acceleration: of the four points above, below, left and right of the examined point, at least 3 must differ sufficiently in gray value from the candidate point for it to remain a feature-point candidate; if this condition is not met, the point is discarded directly. This patent uses a radius of 3, so 16 surrounding pixels need to be compared, which reduces run time while still guaranteeing that feature points are detected.
This patent uses the FAST algorithm to detect, in the eye images gathered by the infrared iris collector, the light spots formed by reflection; these spots generally appear at the pupil position and its neighborhood. Testing shows that the pupil region is selected into the candidate regions quite accurately, but many feature points crowd together, so an operation is needed to screen the feature points within the same neighborhood. This patent sorts the candidate points: according to the image size, only one feature point is kept within a 20*20 pixel range, which prevents large deviations in the feature-point detection results and at the same time greatly reduces the number of candidate points.
Exclusion of non-eye regions based on HOG features and an SVM classifier: the candidate regions obtained in the previous step are uniformly scaled to a normal size, HOG features are extracted, and non-eye regions are excluded with the SVM classifier. The HOG features are extracted as follows:
Step 1: normalize the Gamma space and color space
To reduce the influence of illumination, the color space of the input image is normalized with Gamma correction. Local surface exposure contributes a large proportion of the texture strength of an image; Gamma correction adjusts the image contrast, effectively reduces the influence of local shadow and illumination variation, and suppresses noise interference. Because color information contributes little, the image is generally first converted to a gray-scale map. The Gamma correction formula is: I'(x, y) = I(x, y)^gamma, where I(x, y) is the pixel value of the sample image at coordinate (x, y). When the Gamma value is less than 1, the overall brightness of the image is raised and the contrast at low gray levels is increased, which makes image detail at low gray values easier to distinguish. In this patent, the Gamma value is 0.5.
Step 2: compute the image gradient
Compute the gradients in the abscissa and ordinate directions of the image, and from them the gradient orientation value of each pixel position; the derivative operation not only captures contours, silhouettes and some texture information, but also further weakens the influence of illumination. The gradient of pixel (x, y) in the image is:
G_x(x, y) = I(x + 1, y) - I(x - 1, y)
G_y(x, y) = I(x, y + 1) - I(x, y - 1)
where G_x(x, y), G_y(x, y) and I(x, y) denote, respectively, the horizontal gradient, the vertical gradient and the pixel value at pixel (x, y) of the input image. The gradient magnitude and gradient direction at pixel (x, y) are respectively:
G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²)
α(x, y) = arctan(G_y(x, y) / G_x(x, y))
The most common method is: first convolve the original image with the [-1, 0, 1] gradient operator to obtain the gradient component gradscalx in the x direction (horizontal, positive to the right), then convolve the original image with the [1, 0, -1]^T gradient operator to obtain the gradient component gradscaly in the y direction (vertical, positive upward); then compute the gradient magnitude and direction of the pixel with the formulas above.
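The gradient formulas above can be sketched with central differences at an interior pixel; the sample image is made up:

```python
# Sketch of step 2: central-difference gradients Gx, Gy and the magnitude and
# direction at one interior pixel, matching the [-1, 0, 1] operator.
import math

def gradient(img, x, y):
    gx = img[y][x + 1] - img[y][x - 1]
    gy = img[y + 1][x] - img[y - 1][x]
    mag = math.hypot(gx, gy)                 # sqrt(Gx^2 + Gy^2)
    ang = math.degrees(math.atan2(gy, gx))   # gradient direction in degrees
    return gx, gy, mag, ang

img = [[0, 0, 0],
       [0, 5, 10],
       [0, 20, 0]]
gx, gy, mag, ang = gradient(img, 1, 1)
print(gx, gy)         # 10 - 0 = 10, 20 - 0 = 20
print(round(mag, 3))  # sqrt(100 + 400) ~ 22.361
```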
Step 3: build cell gradient orientation histograms
The purpose is to provide an encoding of the local image region while keeping low sensitivity to the pose and appearance of the target object in the image. The image is divided into connected regions called cells, each containing n*n pixels. N histogram bins are used to count the gradient information of these n*n pixels, i.e. the cell's 360-degree range of gradient orientations is divided into N orientation blocks, and each pixel within the cell is projected into the histogram weighted by its gradient (mapped to a fixed angular range), which yields the cell's gradient orientation histogram, i.e. an N-dimensional feature vector for the cell.
Step 4: group cells into blocks and normalize the in-block gradient histograms
The gradient strengths are normalized; normalization further compresses illumination, shadow and edges. The method adopted is: combine the cells into large, spatially connected intervals (blocks). Concatenating the feature vectors of all cells in a block yields the HOG features of that block; the normalized block vector is called the HOG feature vector.
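Steps 3 and 4 can be sketched as follows: a magnitude-weighted orientation histogram per cell (hard binning, a simplification of the weighted projection described above) followed by L2 normalization of a block vector. The bin count and sample gradients are illustrative:

```python
# Sketch of the cell-histogram and block-normalization steps of HOG.
import math

def cell_histogram(grads, n_bins=9):
    """grads: list of (magnitude, angle_deg) with angle in [0, 360)."""
    hist = [0.0] * n_bins
    width = 360.0 / n_bins
    for mag, ang in grads:
        hist[int(ang % 360 // width) % n_bins] += mag  # hard assignment sketch
    return hist

def l2_normalize(vec, eps=1e-9):
    norm = math.sqrt(sum(v * v for v in vec)) + eps
    return [v / norm for v in vec]

cell = cell_histogram([(3.0, 10.0), (4.0, 100.0), (1.0, 15.0)])
print(cell)  # bin 0 (0-40 deg) gets 3 + 1 = 4, bin 2 (80-120 deg) gets 4
block = l2_normalize(cell + cell)  # a "block" of two identical cells
print(round(sum(v * v for v in block), 6))  # unit length -> 1.0
```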
Single-eye missed-detection repair based on image block matching: in actual detection, cases occur where only one eye is detected. Possible causes are: a) the FAST detection algorithm did not provide a candidate region correctly containing the other eye; b) the FAST detection algorithm did provide a sample correctly containing the other eye region, but the trained classifier misclassified it. In this patent, single-eye missed-detection repair based on image block matching is used to resolve this situation; the specific steps are as follows:
Step 1: determine the benchmark eye
From the eye that has been detected, obtain the coordinates of the center point of that eye region; using prior knowledge of facial-feature proportions, judge whether the detected single eye is the left or the right eye and mark it accordingly. This eye is called the benchmark eye.
Step 2: define the sliding window size
According to the coordinates of the benchmark eye region's center point, delimit in the symmetric region (i.e. the right half of the image if the benchmark is marked as the left eye) a rectangular region R of length L and height H whose center point has the same ordinate as the benchmark eye region's center point, satisfying the condition:
H = 8a·tan 10°
where imglen denotes the image length, x denotes the abscissa of the benchmark eye region's center point, and a is half the length of the previously defined eye-region rectangle. A rectangle r of length 2*a and width 2*b slides linearly within the rectangular region R with step size d; the intercepted repair samples, called paired eyes, are saved as result images into the set T.
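The window generation of step 2 can be sketched as follows (a simplified sketch; describing R by its top-left corner, and the function name, are assumptions not stated in the text):

```python
def sliding_windows(x0, y0, L, H, a, b, d):
    """Top-left corners of the 2a-by-2b paired-eye windows obtained by
    sliding a rectangle with step d inside the search region R, given
    by its top-left corner (x0, y0), length L and height H."""
    corners = []
    y = y0
    while y + 2 * b <= y0 + H:          # window must fit vertically
        x = x0
        while x + 2 * a <= x0 + L:      # window must fit horizontally
            corners.append((x, y))
            x += d
        y += d
    return corners
```

Each corner then indexes a crop of the source image that is saved into the set T.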
Step 3: compute the similarity between the paired eyes and the benchmark eye
First cut a small gray image Ac of size P*Q from the center of the benchmark eye region, and flip Ac by 180° to obtain Bc. For each paired eye Ti in the set T, denote by Tic (i = 0, 1, ..., k, where k is the size of set T) the small P*Q gray image cut from its center, and compute the similarity li between each Tic and Bc:
li = Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Tic(p,q) − T̄ic][Bc(p,q) − B̄c] / sqrt( Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Tic(p,q) − T̄ic]² · Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Bc(p,q) − B̄c]² )
A larger li value indicates a higher similarity between the paired eye and the benchmark eye.
Take the paired eye T corresponding to the maximum l among all li values. If l is greater than the threshold ε, T is the repaired other eye; if l is not greater than ε, discard the paired eye T and perform no repair. In this patent the threshold ε is set to 0.7.
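The mirroring and similarity computation of step 3 can be sketched as follows (a minimal sketch; reading the "flip by 180°" as a horizontal mirror that maps a left eye onto a right eye is an assumption, and li is the zero-mean normalized cross-correlation written out above):

```python
import math

def mirror(patch):
    """Horizontal mirror of a patch (list of rows): one reading of the
    'flip Ac by 180 degrees' step that produces Bc from Ac."""
    return [row[::-1] for row in patch]

def similarity(T, B):
    """Zero-mean normalized cross-correlation l_i between two equal-size
    gray patches; values near 1 mean a close match."""
    vals_t = [v for row in T for v in row]
    vals_b = [v for row in B for v in row]
    mt = sum(vals_t) / len(vals_t)
    mb = sum(vals_b) / len(vals_b)
    num = sum((t - mt) * (b - mb) for t, b in zip(vals_t, vals_b))
    den = math.sqrt(sum((t - mt) ** 2 for t in vals_t) *
                    sum((b - mb) ** 2 for b in vals_b))
    return num / den if den else 0.0
```

Because both patches are mean-subtracted and scale-normalized, the score is insensitive to uniform brightness and contrast changes between the two eye regions.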
Claims (7)
1. An eye localization method based on HOG features and FAST algorithms, characterized by comprising the following steps:
1) load the trained SVM model file;
2) obtain the i-th frame image srcImage to be localized and copy it to image midImage, where i is a positive integer;
3) preprocess the image midImage;
4) perform spot detection on image midImage with the FAST algorithm to obtain the candidate-region center-point coordinate vector points;
5) judge in turn whether each candidate region on image midImage is an eye region, obtaining the eye-region center-point vector pointsTru;
6) screen the points in vector pointsTru to obtain the finally returned eye-region center-point vector pointsFnl, whose size is at most 2;
7) if only one eye is detected, i.e. the size of vector pointsFnl is 1, perform repair;
8) return the eye-region center-point vector pointsFnl, traverse pointsFnl, and intercept the rectangular region centered on each point of pointsFnl as the eye region.
2. The eye localization method based on HOG features and FAST algorithms according to claim 1, characterized in that in step 3), the preprocessing process is as follows:
3.1) perform a grayscale transformation on image midImage to convert it into a gray-scale image;
3.2) perform Gaussian smoothing on image midImage to filter out noise points;
3.3) perform a dilation operation on image midImage to amplify the spot points.
3. The eye localization method based on HOG features and FAST algorithms according to claim 1 or 2, characterized in that in step 4), the process of the FAST algorithm is as follows:
4.1) set a threshold t for comparing the gray-level difference between the surrounding pixels and the central pixel;
4.2) select pixels P in image midImage in turn and let the gray value of the point be I(P); on the circle of radius 3 pixels centered at P, take 16 pixels; numbering clockwise with the pixel directly above point P as number 1, denote them in turn P'[1], P'[2], ..., P'[16];
4.3) examine the pixels P'[1], P'[5], P'[9] and P'[13]; if at least three of these four pixels have gray values that are simultaneously greater than I(P)+t or simultaneously less than I(P)-t, go to step 4.4), otherwise return to step 4.2);
4.4) preliminarily judge point P to be a corner; traverse pixels 1 to 16, P'[i] (i = 1, 2, ..., 16), with gray values I(P'[i]); if there exist 9 consecutive pixels whose gray values I(P'[i]) are simultaneously greater than I(P)+t or simultaneously less than I(P)-t, judge point P to be a corner and add point P to the vector temp_points, otherwise return to step 4.2);
4.5) repeat steps 4.2)-4.4) until all pixels in image midImage have been traversed, obtaining the vector temp_points;
4.6) perform non-maximum suppression on the corners in vector temp_points and update the vector temp_points;
4.7) select the first point in vector temp_points and add it to vector points;
4.8) continue to select corners TP in vector temp_points in turn and compare each with every point CP[i] in vector points, where i is a positive integer;
4.9) if there exists a CP[i] such that CP[i] lies in the 20*20 pixel neighborhood of TP, return directly to step 4.8); otherwise add point TP to vector points and return to step 4.8), until the traversal of vector temp_points ends;
4.10) output the vector points.
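The segment test of steps 4.2)-4.4) can be sketched as follows (a simplified sketch that omits the fast four-pixel screen of step 4.3); the exact clockwise layout of the 16 circle pixels, taken here as the standard radius-3 FAST circle, is an assumption):

```python
# Offsets of the 16 circle pixels of radius 3, clockwise starting from
# the pixel directly above P (standard FAST-9 layout; assumed here).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_corner(img, x, y, t):
    """FAST-9 segment test: P is a corner if 9 consecutive circle
    pixels are all brighter than I(P)+t or all darker than I(P)-t."""
    ip = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):                 # brighter arc, then darker arc
        flags = [sign * (v - ip) > t for v in ring]
        run = 0
        for f in flags + flags:          # doubled list handles wrap-around
            run = run + 1 if f else 0
            if run >= 9:
                return True
    return False
```

The 20*20-neighborhood deduplication of steps 4.7)-4.9) would then be applied to the surviving corners.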
4. The eye localization method based on HOG features and FAST algorithms according to claim 3, characterized in that in step 4.6), the non-maximum suppression process is as follows:
4.6.1) take the points TP in temp_points in turn;
4.6.2) take the 3*3 pixel neighborhood centered on corner TP, and compute for each corner P in the neighborhood the value V of the score function; V is the sum of the absolute values of the differences between I(P) and I(P'[i]) (i = 1, 2, ..., 16), with the formula:
V = Σ_{i=1}^{16} |I(P) − I(P'[i])|
4.6.3) take the corner P with the largest V value and retain it as the maximal corner in the neighborhood; delete the other points of the neighborhood from vector temp_points.
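The score function V and the suppression of step 4.6.3) can be sketched as follows (a simplified sketch; passing the 16 circle gray values in directly, and the brute-force neighborhood scan, are illustrative choices):

```python
def corner_score(ip, ring):
    """Score V of step 4.6.2): sum over i of |I(P) - I(P'[i])|,
    where ip is I(P) and ring holds the 16 circle gray values."""
    return sum(abs(ip - v) for v in ring)

def non_max_suppress(corners, scores, radius=1):
    """Keep, within each (2*radius+1)-square neighborhood, only the
    corner with the largest V (step 4.6.3); a brute-force sketch."""
    kept = []
    for (x, y), v in zip(corners, scores):
        if all(v >= w for (x2, y2), w in zip(corners, scores)
               if abs(x2 - x) <= radius and abs(y2 - y) <= radius):
            kept.append((x, y))
    return kept
```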
5. The eye localization method based on HOG features and FAST algorithms according to claim 1 or 2, characterized in that in step 5), the process of obtaining the eye-region center-point vector pointsTru is as follows:
5.1) obtain the center-point coordinates center of the j-th candidate region, where j is a positive integer;
5.2) intercept the rectangular candidate-region image cndImage of length 2*a and width 2*b centered on point center; if the candidate region crosses the image boundary, translate it toward the image center by the corresponding amount;
5.3) compute the HOG feature vector of image cndImage;
5.4) if this is the first time a HOG feature vector is computed for an image, compute the HOG feature vector dimension and initialize the image feature vector matrix featureMat, with 1 row and as many columns as the HOG feature vector dimension;
5.5) copy the computed feature vector of image cndImage into the feature vector matrix featureMat;
5.6) classify the feature vector of image cndImage with the trained SVM classifier;
5.7) if the result returned by the classifier is true, put the center point center into vector pointsTru.
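At prediction time the decision of step 5.6) reduces, for a linear SVM (an assumption about the trained model, which the text does not specify), to the sign of w·x + b; a minimal sketch, with weights and bias assumed to come from the loaded model file:

```python
def svm_is_eye(hog_vec, weights, bias):
    """Linear-SVM decision for step 5.6): the candidate is classified
    as an eye region when the decision value w.x + b is positive.
    weights/bias are hypothetical values read from the SVM model file."""
    decision = sum(w * x for w, x in zip(weights, hog_vec)) + bias
    return decision > 0
```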
6. The eye localization method based on HOG features and FAST algorithms according to claim 1 or 2, characterized in that the process of step 6) is as follows:
6.1) put the 0th point of vector pointsTru into vector pointsFnl, marking it as point pointsTru[0];
6.2) read the points pointsTru[k] in turn in a loop, where k is a positive integer;
6.3) if the x coordinate of point pointsTru[k] differs sufficiently from the x coordinate of point pointsTru[0], and the y coordinate of point pointsTru[k] differs sufficiently little from the y coordinate of point pointsTru[0], then pointsTru[k] is considered to be the other eye, distinct from pointsTru[0], and is put into vector pointsFnl;
6.4) if the size of vector pointsFnl equals 2, exit the loop.
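The geometric test of step 6.3) can be sketched as follows (a minimal sketch; the text only says "sufficiently large" and "sufficiently small", so the two thresholds are assumed tunable parameters):

```python
def is_other_eye(p0, pk, min_dx, max_dy):
    """Screening test of step 6.3): pk counts as the other eye when its
    x coordinate differs enough from p0's (eyes are horizontally apart)
    and its y coordinate differs little enough (eyes are roughly level).
    min_dx and max_dy are hypothetical threshold parameters."""
    return (abs(pk[0] - p0[0]) >= min_dx and
            abs(pk[1] - p0[1]) <= max_dy)
```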
7. The eye localization method based on HOG features and FAST algorithms according to claim 1 or 2, characterized in that in step 7), the repair process is as follows:
7.1) obtain the eye-region center-point vector pointsFnl; according to prior knowledge of facial-feature proportions and the value of pointsFnl[0], judge whether the detected single eye is the left or the right eye and mark it; it is called the benchmark eye;
7.2) according to the coordinates of the benchmark eye region's center point, delimit in the symmetric region (i.e. the right half of the image if the benchmark is marked as the left eye) a rectangular region R of length L and height H whose center point has the same ordinate as the benchmark eye region's center point, satisfying the condition:
H = 8a·tan 10°
where imglen refers to the length of image srcImage;
7.3) define a rectangle r of length 2*a and width 2*b that slides linearly within the rectangular region R with step size d; intercept the repair samples, called paired eyes, and save the result images into the set T;
7.4) centered on the coordinates (x, y) of pointsFnl[0], cut out a small gray image Ac of size P*Q from the original gray-scale image, and flip Ac by 180° to obtain Bc;
7.5) for each result Ti in set T, cut out from its center the small gray image of size P*Q, denoted Tic, i = 0, 1, ..., k, where k is the size of set T;
7.6) for each Tic, compute its similarity with Bc:
l_i = Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Tic(p,q) − T̄ic][Bc(p,q) − B̄c] / sqrt( Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Tic(p,q) − T̄ic]² · Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} [Bc(p,q) − B̄c]² ), i = 0, 1, ..., k
where T̄ic and B̄c denote the mean gray values of Tic and Bc, respectively.
Take the image with the largest li value; if li > 0.7, put its center-point coordinates into vector pointsFnl, otherwise discard it.
In step 8), the rectangular region of length 2*a and width 2*b centered on pointsFnl[i] is intercepted as the eye region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710155361.7A CN107145820B (en) | 2017-03-16 | 2017-03-16 | Binocular positioning method based on HOG characteristics and FAST algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107145820A true CN107145820A (en) | 2017-09-08 |
CN107145820B CN107145820B (en) | 2020-11-17 |
Family
ID=59784141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710155361.7A Expired - Fee Related CN107145820B (en) | 2017-03-16 | 2017-03-16 | Binocular positioning method based on HOG characteristics and FAST algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107145820B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960040A (en) * | 2018-05-07 | 2018-12-07 | 国网浙江省电力有限公司信息通信分公司 | Eye detection method |
CN109509345A (en) * | 2017-09-15 | 2019-03-22 | 富士通株式会社 | Vehicle detection apparatus and method |
CN109961407A (en) * | 2019-02-12 | 2019-07-02 | 北京交通大学 | Facial image restorative procedure based on face similitude |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140050370A1 (en) * | 2012-08-15 | 2014-02-20 | International Business Machines Corporation | Ocular biometric authentication with system verification |
CN104463128A (en) * | 2014-12-17 | 2015-03-25 | 智慧眼(湖南)科技发展有限公司 | Glass detection method and system for face recognition |
CN104463080A (en) * | 2013-09-16 | 2015-03-25 | 展讯通信(天津)有限公司 | Detection method of human eye state |
CN105512630A (en) * | 2015-12-07 | 2016-04-20 | 天津大学 | Human eyes detection and positioning method with near real-time effect |
CN105868731A (en) * | 2016-04-15 | 2016-08-17 | 山西天地科技有限公司 | Binocular iris characteristic obtaining method, binocular iris characteristic obtaining device, identity identification method and identity identification system |
CN105913487A (en) * | 2016-04-09 | 2016-08-31 | 北京航空航天大学 | Human eye image iris contour analyzing and matching-based viewing direction calculating method |
US20160360186A1 (en) * | 2015-06-03 | 2016-12-08 | University Of Connecticut | Methods and systems for human action recognition using 3d integral imaging |
Non-Patent Citations (7)
Title |
---|
BATOUL HASHEM等: "Pedestrian Detection by Using FAST- HOG Features", 《PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION》 * |
C.-H. KUO等: "Face recognition based on a two-view projective transformation using one sample per subject", 《IET COMPUTER VISION》 * |
DAVID MONZO等: "Precise eye localization using HOG descriptors", 《MACHINE VISION AND APPLICATIONS》 * |
ELDHO ABRAHAM等: "HOG descriptor based Registration(A New Image Registration Technique)", 《2013 15TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING TECHNOLOGIES(ICACT)》 * |
GOUTAM MAJUMDER等: "Automatic Eye Detection Using Fast Corner Detector of North East Indian (NEI) Face Images", 《PROCEDIA TECHNOLOGY》 * |
TUMPA DEY等: "Facial Landmark Detection using FAST Corner Detector of UGC-DDMC Face Database of Tripura Tribes", 《PROCEEDINGS OF THE 2015 THIRD INTERNATIONAL CONFERENCE ON COMPUTER, COMMUNICATION, CONTROL AND INFORMATION TECHNOLOGY (C3IT)》 * |
王先梅等: "基于2阶段区域匹配的驾驶员眼睛细定位算法", 《数据采集与处理》 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201117 |