CN100345153C - Face image recognition method based on face geometric size normalization - Google Patents

Face image recognition method based on face geometric size normalization

Info

Publication number
CN100345153C
CN100345153C (application CNB200510067962XA)
Authority
CN
China
Prior art keywords
face image
face
geometric size
coordinate position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB200510067962XA
Other languages
Chinese (zh)
Other versions
CN1687959A
Inventor
苏光大
孟凯
杜成
王俊艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB200510067962XA priority Critical patent/CN100345153C/en
Publication of CN1687959A publication Critical patent/CN1687959A/en
Application granted granted Critical
Publication of CN100345153C publication Critical patent/CN100345153C/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a face image recognition method based on face geometric size normalization, belonging to the field of image processing technology. In the method, the coordinates of the left and right eyeballs are located on an input face image and the image is rotated until the eye line is horizontal, yielding face image 2. The coordinates of the left eyeball, right eyeball, and jaw point of face image 2 are then determined, the normalized geometric dimensions of the face image are prescribed, and face image 2 is enlarged or reduced to obtain face image 3, which satisfies the standard distance. Face image 3 is cropped according to the coordinate positions of its left eyeball, right eyeball, and jaw point, giving a standard normalized face image. Geometrically normalized face images are formed from the training-set face images, the known face images, and the face image to be identified; facial features are extracted; and the face to be identified is recognized against the known face database by computing similarities and ranking them. The present invention improves the visual quality of face images and markedly raises the recognition rate.

Description

Face image recognition method based on face geometric size normalization
Technical field
The invention belongs to the technical field of image processing, and in particular to a method for improving the face recognition rate.
Background technology
Face recognition draws on many disciplines, including image processing, computer vision, and pattern recognition, and is closely related to physiological and biological research on the structure of the human brain. The generally acknowledged difficulties of face recognition are:
(1) changes in the face caused by aging;
(2) variation of the face caused by pose;
(3) plastic deformation of the face caused by expression;
(4) the multiplicity of facial appearance caused by factors such as glasses and makeup;
(5) differences between face images caused by illumination.
In typical face recognition algorithms the face image is not geometrically normalized, yet geometric size normalization affects not only the face recognition rate but also the visual quality of the face images in the database. Existing face geometric normalization methods mainly use the distance between the two eyes as the reference; however, the interocular distance is unstable, and in face images rotated in the horizontal plane this instability is especially pronounced.
Summary of the invention
To raise the face recognition rate, the present invention proposes a face image recognition method based on face geometric size normalization. The method normalizes the size of the face image using, as its reference, the vertical distance from any point on the jaw line (the horizontal line through the jaw point) to the line joining the two eyes, and extracts facial features on that basis. It can, to a certain extent, raise the face recognition rate and improve the visual quality of the face images in the database.
The present invention proposes a face image recognition method based on face geometric size normalization, comprising two parts, face geometric size normalization and face recognition, characterized in that the face geometric size normalization comprises the following steps:
1) On the input face image, determine the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, draw the straight line L1 through the two points A and B, and determine the coordinates (x0, y0) of the jaw point C0;
2) Calculate the angle α between the straight line L1 and the horizontal;
The angle α between L1 and the horizontal is obtained from the formula below, where (x1, y1) and (x2, y2) are the left and right eyeball coordinates respectively:
α = arctan((y2 − y1) / (x2 − x1))
3) Rotate the face image by the angle −α to obtain face image 2;
The rotation expression is:
x′ = x·cos α + y·sin α
y′ = −x·sin α + y·cos α
where x, y are the coordinates of a point on the input face image and x′, y′ are the coordinates of the corresponding point on face image 2;
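The rotation in steps 2)–3) can be sketched numerically; the eye coordinates below are hypothetical example values, not taken from the patent:

```python
import math

def rotation_angle(x1, y1, x2, y2):
    # Angle between the eye line and the horizontal (the arctan formula above).
    return math.atan2(y2 - y1, x2 - x1)

def rotate_point(x, y, alpha):
    # The rotation expression: [x', y'] = [[cos a, sin a], [-sin a, cos a]] [x, y].
    return (x * math.cos(alpha) + y * math.sin(alpha),
            -x * math.sin(alpha) + y * math.cos(alpha))

# Hypothetical eye coordinates; after the rotation both eyes share one y value.
alpha = rotation_angle(100, 120, 200, 140)
_, y_left = rotate_point(100, 120, alpha)
_, y_right = rotate_point(200, 140, alpha)
```

After the transform y_left and y_right coincide, which is exactly the horizontal-eye-line condition the method requires before scaling and cropping.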
4) On face image 2, determine the coordinate position (x3, y3) of a point C on the left eyeball and the coordinate position (x4, y4) of a point D on the right eyeball, draw the straight line L2 through the two points C and D, and determine the coordinate position (x5, y5) of the jaw point E of face image 2;
5) Prescribe the dimensions of the geometrically normalized face image: the width is W and the height is H; the standard value of the vertical distance from any point on the jaw line to the line joining the two eyes is H0, the standard value of the vertical distance from the jaw line to the lower border of the image is H1, and the standard value of the vertical distance from the eye line to the upper border of the image is H2;
6) Obtain the vertical distance hy from point E to the straight line L2, and compute the image zoom factor K = H0 / hy;
where the vertical distance hy from E to L2 is
hy = y5 − (y3 + y4) / 2
7) Enlarge or reduce face image 2 according to the zoom factor K to obtain face image 3, which satisfies the standard distance H0;
8) On face image 3, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate y8 of the jaw point P;
9) Crop face image 3 to obtain the standard normalized face image: cut away the parts of face image 3 whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and whose y coordinate is less than (y7 − H2) or greater than (y8 + H1). If the width of the cropped image is less than W or its height is less than H, pad the width to W or the height to H by interpolation;
The face recognition comprises the following steps:
10) Apply steps 1)–9) to each face image in the training set to form geometrically normalized face images, and extract facial features from the normalized images;
11) Apply steps 1)–9) to each known person's face image to form geometrically normalized face images, extract facial features from the normalized images, and build a database of personal identity records containing the features of the known faces, their compressed face images, and the identities of the known persons;
12) Apply steps 1)–9) to the face image to be identified to form a geometrically normalized face image, and extract its facial features;
13) Recognize the face to be identified against the known face database by computing similarities and ranking them.
Steps 10)–13) of the recognition part can be realized with existing mature techniques.
Characteristics of the present invention and effect
The present invention is characterized by choosing, as its reference, key facial feature points that can be located precisely and that remain stable while the face moves, thereby achieving face geometric size normalization with a good normalization effect and improving the visual quality of the face images in the database. Compared with common face image recognition algorithms, the recognition rate of the face image recognition method based on face geometric size normalization is substantially higher.
Description of drawings
Fig. 1 is the original input face image.
Fig. 2 is a schematic diagram of the eyeball candidate region divided into 9 sub-image regions.
Fig. 3 is a schematic diagram of the original face image showing the located eyes and jaw point, the straight line L1, and its angle α with the horizontal.
Fig. 4 is face image 2, obtained by rotating the original face image, with the positions of the eyes and jaw point marked.
Fig. 5 is a schematic diagram of the definition and standard geometric dimensions of the face image in the embodiment of the invention.
Fig. 6 is face image 3, obtained by scaling face image 2, with the positions of the eyes and jaw point marked.
Fig. 7 is the normalized face image finally obtained by the present invention.
Fig. 8 is a schematic diagram of the five kinds of components extracted from the normalized face image: bare face, eyebrows + eyes, eyes, nose, and mouth.
Embodiment
An embodiment of the face image recognition method based on face geometric size normalization proposed by the present invention is described in detail with reference to the accompanying drawings. The method comprises the following steps:
1) On the input face image (shown in Fig. 1), determine the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, draw the straight line L1 through the two points A and B, and determine the coordinates (x0, y0) of the jaw point C0.
The coordinate positions of the points A and B on the left and right eyeballs can be obtained in two ways: either mark them directly on the face image with the mouse, or locate them automatically with an algorithm combining integral projection and feature-space analysis. This embodiment adopts the second way, which comprises the following steps:
1. Detect the face region:
Face region detection determines the top, bottom, left, and right edge positions of the face region in the image. This embodiment applies the Sobel operator to the input image to detect edges, and determines the position of the face region by analyzing the integral projections of the edge image in the horizontal and vertical directions. The integral projections of the edge map in the horizontal and vertical directions are computed by formulas (1) and (2):
H(y) = Σx=0..M E(x, y)   (1)
V(x) = Σy=0..N E(x, y)   (2)
where E(x, y) = 1 if the point (x, y) is a detected edge point and E(x, y) = 0 otherwise.
The left and right edges xl, xr of the face region are determined by:
xl = min{ x : V(x) > V(x0)/3 }   (3)
xr = max{ x : V(x) > V(x0)/3 }   (4)
where x0 is the x coordinate of the maximum of the vertical integral projection; that is, the smallest and largest x values whose vertical integral projection exceeds 1/3 (an empirical value) of the maximum are taken as the left and right edges of the face region. The top and bottom edges yt, yb of the face region are determined by formulas (5) and (6):
yt = min{ y : H(y) > (xr − xl)/10 } + (xr − xl)/3   (5)
yb = yt + (xr − xl) × 0.8   (6)
The term of 1/3 of the face-region width (an empirical value) added in formula (5) serves to reduce the influence of hair on the positioning result. Although this step determines the face region only roughly, it guarantees that both eyes are contained within it.
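Formulas (1)–(6) can be sketched as follows; the function name and the synthetic edge map are illustrative, not from the patent:

```python
import numpy as np

def face_region(edge):
    # edge: binary edge map E, shape (rows, cols); edge[y, x] = 1 at edge points.
    H = edge.sum(axis=1)                 # horizontal integral projection, formula (1)
    V = edge.sum(axis=0)                 # vertical integral projection, formula (2)
    x0 = int(np.argmax(V))               # abscissa of the maximal vertical projection
    cols = np.nonzero(V > V[x0] / 3)[0]  # empirical 1/3 threshold, formulas (3)-(4)
    xl, xr = int(cols.min()), int(cols.max())
    rows = np.nonzero(H > (xr - xl) / 10)[0]
    yt = int(rows.min()) + (xr - xl) // 3  # formula (5); offset reduces hair influence
    yb = yt + int((xr - xl) * 0.8)         # formula (6)
    return xl, xr, yt, yb

# Synthetic edge map: a dense rectangular blob of edge points standing in for a face.
edge = np.zeros((100, 100), dtype=int)
edge[20:80, 30:70] = 1
region = face_region(edge)
```

On this toy map the left/right edges recover the blob's columns, while the top edge is shifted down by a third of the width, as formula (5) prescribes.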
2. Determine the eyeball candidate points:
The candidate points of the eyeball positions are determined by analyzing the gray-level and gradient distributions of the eye-region image. In the gradient projection histogram, the row maximizing the difference between the integral projection sums on its two sides is selected as the initial candidate ordinate yO of the eyeballs, as in formula (7):
yO = argmaxy ( Σi=1..15 H(y + i) − Σi=1..15 H(y − i) )   (7)
After yO is determined, this embodiment takes all points within the region up to 30 pixels (an empirical value) above yO as initial candidate points for the eyeball positions. For each candidate point, the gray-level distribution of the 30 × 30 image region centered on it is examined. This embodiment divides this region into 9 sub-image regions as shown in Fig. 2 and computes the gray-level integral of each sub-region, as in formula (8):
Si = Σ(x, y) ∈ region i I(x, y)   (8)
where I(x, y) denotes the gray value at point (x, y). Since the gray values of the eyeball are generally smaller than those of its surroundings, a candidate point is removed if the gray integral Si of any surrounding sub-region, i = 1, 2, 3, 4, 6, 7, 8, 9, is smaller than the gray integral S5 of the central sub-region (region 5). The remaining points serve as the final eyeball-position candidates.
Because the eyeball is reflective, many face images contain small bright spots inside the eyeball, which can cause good candidate points to be removed by mistake. Therefore, before the gray integral projections are computed, small bright spots are removed from the face image: the gray value of each point is replaced by the minimum gray value of the 9 points in its 3 × 3 neighborhood.
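The bright-spot removal described above is a 3 × 3 grayscale minimum filter; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def remove_bright_spots(img):
    # Replace each pixel by the minimum of the 9 pixels in its 3x3 neighborhood,
    # suppressing small specular highlights inside the eyeball.
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 3, x:x + 3].min()
    return out

# A lone bright pixel in a dark patch vanishes; uniform areas are unchanged.
img = np.full((5, 5), 10)
img[2, 2] = 255
filtered = remove_bright_spots(img)
```

This is grayscale erosion with a 3 × 3 structuring element; any spot smaller than the element is flattened to the surrounding gray level.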
3. Determine the eyeball positions:
For the detected candidate points, the present invention determines the final eyeball positions by feature-space analysis (PCA). The eye-region images of 9 groups of face images with different poses are selected as the training set (the eye regions can be determined by manual location), and 18 feature spaces are trained, for the left eye and the right eye respectively. For each eyeball-position candidate Ci, its corresponding sub-image is projected onto each of the 18 feature spaces, giving projection vectors Pi, i = 1, 2, ..., 18; the matching error of each projection vector is defined as:
E(Ci) = Σk=1..D pk² / λk   (9)
where pk is the value of the k-th dimension of the projection vector, λk is the eigenvalue of the corresponding k-th eigenvector, and D is the number of retained eigenvector dimensions. The matching error of the image region corresponding to each feature point is defined as the minimum of its matching errors over the 18 feature-space projection vectors. Among all candidate points, the one with the smallest matching error is selected as the position of the first eyeball. To avoid both eyeballs landing on the same eye, the candidate with the smallest matching error among those whose distance from the first eyeball exceeds a set value is selected as the other eyeball position. Once the two positions are determined, the one with the smaller x coordinate is taken as the left eyeball A (x1, y1) and the one with the larger x coordinate as the right eyeball B (x2, y2).
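Formula (9) can be sketched with a toy feature space; the mean vector, components, and eigenvalues below are fabricated illustrations, not trained values:

```python
import numpy as np

def matching_error(patch, mean, components, eigvals):
    # Project the candidate patch onto the retained eigenvectors and weight the
    # squared coefficients by the inverse eigenvalues, per formula (9).
    p = components @ (patch.ravel() - mean)   # projection vector of dimension D
    return float(np.sum(p ** 2 / eigvals))

# Toy 2x2 "eye patch" space with D = 2 retained components (fabricated values).
mean = np.zeros(4)
components = np.eye(4)[:2]
eigvals = np.array([2.0, 1.0])
err_match = matching_error(np.zeros((2, 2)), mean, components, eigvals)
err_off = matching_error(np.array([[1.0, 0.0], [0.0, 0.0]]), mean, components, eigvals)
```

A patch equal to the mean has zero error, and the error grows as the projection coefficients grow, so the candidate with the smallest error is the best match to the trained eye appearance.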
4. Determine the coordinates (x0, y0) of the jaw point C0: two methods can be used; the first marks it directly on the face image with the mouse, and the second combines integral projection with the proportions of the facial organs.
This embodiment adopts the method combining integral projection with facial-organ proportions: the jaw point is determined from the horizontal integral projection of the face region, the eyeball positions, and the proportional relationships among the facial organs. The method comprises two steps: first, detect candidate valley points (valley points of the horizontal integral projection curve) that may correspond to organs; second, determine which candidate valley point each organ corresponds to.
The horizontal integral projection curve is first mean-filtered, and its second derivative is then taken. The extrema of the second derivative correspond to the peaks and valleys of the projection curve; in particular its maxima correspond to the valley points of the curve. The peaks of the second derivative are detected as candidate ordinates of the facial organs. For the resulting candidates, a cyclic search strategy finds the assignment that best matches the organ proportions, which is taken as the final result. This yields the ordinate of each organ, including the ordinate y0 of the jaw point C0. The abscissa x0 of C0 is determined by the midpoint of A and B, i.e. x0 = (x1 + x2)/2. As shown in Fig. 3, A, B, and C0 are the determined coordinate points of the left eye, right eye, and jaw point; in the figure, the straight line L1 is the line through A and B, L0 is the horizontal line, and α is the angle between L1 and L0.
2) Calculate the angle α between the straight line L1 and the horizontal line L0.
The angle α between L1 and L0 is obtained from formula (10), where A (x1, y1) and B (x2, y2) are the left and right eyeball coordinates:
α = arctan((y2 − y1) / (x2 − x1))   (10)
3) Rotate the face image by the angle −α to obtain face image 2, as shown in Fig. 4. In Fig. 4, C and D are the left and right eyeball points, the straight line L2 is the line through the two points C and D, E is the jaw point, L3 is the horizontal jaw line, and hy is the vertical distance from point E to the straight line L2.
Let the width and height of the input face image be SrcWidth and SrcHeight, and the width and height of the rotated face image 2 be w and h respectively.
Because the actual image rotation translates the coordinate origin, the coordinates must be offset-corrected. Let the offsets in the horizontal and vertical directions be dx and dy. Depending on the sign of α, the following relations hold:
When α > 0:
w = INT(SrcWidth × cos α + SrcHeight × sin α);
h = INT(SrcWidth × sin α + SrcHeight × cos α);
dx = 0;
dy = SrcWidth × sin α;   (10.1)
When α < 0:
w = INT(SrcWidth × cos α − SrcHeight × sin α);
h = INT(−SrcWidth × sin α + SrcHeight × cos α);
dx = −SrcHeight × sin α;
dy = 0;   (10.2)
For every point (i, j) in face image 2, let (io, jo) be its corresponding point in the original image. When 0 ≤ io < SrcWidth and 0 ≤ jo < SrcHeight, a point (io, jo) in the original image corresponds to (i, j); taking the offsets dx, dy into account:
io = INT((i − dx) × cos α − (j − dy) × sin α)
jo = INT((i − dx) × sin α + (j − dy) × cos α)   (11)
Then Image2[j][i] = OriginalImage[jo][io]; otherwise (io, jo) falls in the blank part of face image 2, which is assigned Image2[j][i] = 0.
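Formulas (10.1)–(11) amount to an inverse-mapping rotation; a sketch for a single-channel image (the function name is an assumption):

```python
import math
import numpy as np

def rotate_face_image(src, alpha):
    # Rotate src by -alpha onto an enlarged canvas, correcting the origin shift
    # with the offsets dx, dy of formulas (10.1)/(10.2); each destination pixel
    # (i, j) looks up its source pixel (io, jo) via formula (11), and misses
    # (the blank corners) are left at 0.
    sh, sw = src.shape
    ca, sa = math.cos(alpha), math.sin(alpha)
    if alpha > 0:
        w, h = int(sw * ca + sh * sa), int(sw * sa + sh * ca)
        dx, dy = 0.0, sw * sa
    else:
        w, h = int(sw * ca - sh * sa), int(-sw * sa + sh * ca)
        dx, dy = -sh * sa, 0.0
    dst = np.zeros((h, w), dtype=src.dtype)
    for j in range(h):
        for i in range(w):
            io = int((i - dx) * ca - (j - dy) * sa)
            jo = int((i - dx) * sa + (j - dy) * ca)
            if 0 <= io < sw and 0 <= jo < sh:
                dst[j, i] = src[jo, io]
    return dst

src = np.arange(12).reshape(3, 4)
same = rotate_face_image(src, 0.0)   # alpha = 0 leaves the image unchanged
```

Inverse mapping (iterating over destination pixels) is the usual way to avoid holes in the rotated image; the integer truncation of INT matches the formulas as printed.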
4) On face image 2, determine the coordinate position (x3, y3) of a point C on the left eyeball and the coordinate position (x4, y4) of a point D on the right eyeball, draw the straight line L2 through the two points C and D, and determine the coordinate position (x5, y5) of the jaw point E of face image 2.
The coordinate positions of the points C and D can be obtained in three ways: the first marks them directly on the face image with the mouse; the second locates them with the same procedure used above to determine A and B; the third computes them from the coordinates of A, B and the angle α. This embodiment adopts the third way:
x3 = INT(x1 × cos α + y1 × sin α + dx + 0.5);
y3 = INT(−x1 × sin α + y1 × cos α + dy + 0.5);
x4 = INT(x2 × cos α + y2 × sin α + dx + 0.5);
y4 = INT(−x2 × sin α + y2 × cos α + dy + 0.5);   (12)
The coordinate position (x5, y5) of the jaw point E of face image 2 can likewise be obtained in three ways: the first marks it directly with the mouse; the second uses the same procedure used above to determine the jaw point C0; the third computes it from the coordinates of C0 and the angle α. This embodiment adopts the third way:
x5 = INT(x0 × cos α + y0 × sin α + dx + 0.5);
y5 = INT(−x0 × sin α + y0 × cos α + dy + 0.5);   (13)
5) Prescribe the dimensions of the geometrically normalized face image, as shown in Fig. 5: in Fig. 5, 1 is the upper border of the image, 2 is the straight line L2 determined by the two eyes, 3 is the jaw line L3, 4 is the lower border of the image, 5 is the left border of the image, and 6 is the right border of the image.
In this embodiment the prescribed image width (the distance between 5 and 6) is W = 360 pixels and the height (the distance between 1 and 4) is H = 480 pixels. The standard value of the vertical distance from any point on the jaw line to the eye line (the distance between 2 and 3) is H0 = 200 pixels, the standard value of the vertical distance from the jaw line to the lower border of the image (the distance between 3 and 4) is H1 = 28 pixels, and the standard value of the vertical distance from the eye line to the upper border of the image (the distance between 1 and 2) is H2 = 252 pixels.
6) Obtain the vertical distance hy from point E to the straight line L2:
hy = y5 − (y3 + y4)/2   (14)
and compute the image zoom factor K = H0 / hy.
7) Enlarge or reduce face image 2 according to the zoom factor K to obtain face image 3, which satisfies the standard distance H0, as shown in Fig. 6. Fig. 6 is the result of shrinking Fig. 4; in the figure, M and N are the coordinate points of the left and right eyeballs, L4 is the horizontal jaw line, and P is the jaw point.
The width w3 and height h3 of face image 3 are:
w3 = INT(K × w)
h3 = INT(K × h)   (15)
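Steps 6)–7) reduce to a little arithmetic, taking K = H0 / hy so that the rescaled eye-to-jaw distance equals H0; the coordinates below are hypothetical example values:

```python
def zoom_factor(y3, y4, y5, H0=200):
    # Formula (14): vertical distance from the jaw point E to the eye line L2,
    # then the zoom factor K that scales that distance to the standard H0.
    hy = y5 - (y3 + y4) / 2
    return hy, H0 / hy

# Hypothetical face image 2: eyes at y = 150, jaw point at y = 550.
hy, K = zoom_factor(150, 150, 550)
w3, h3 = int(K * 800), int(K * 1000)   # formula (15) for a hypothetical 800x1000 image
```

Note that K × hy then equals H0 exactly, so the eye-to-jaw distance of face image 3 matches the standard distance.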
8) On face image 3, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate of the jaw point P.
The coordinates of the midpoint MidPoint of the two eye points M and N satisfy:
MidPoint.x = (x6 + x7)/2 = INT(K × (x3 + x4)/2)
MidPoint.y = (y6 + y7)/2 = INT(K × (y3 + y4)/2)   (16)
The ordinate y8 of point P satisfies:
y8 = MidPoint.y + H0   (17)
9) Crop face image 3 to obtain the standard normalized face image.
According to the size of the standard normalized image, cut away the parts of face image 3 whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and whose y coordinate is less than (y7 − H2) or greater than (y8 + H1).
In the concrete implementation, a cutting rectangle CropRect is defined, with left, right, top, and bottom boundary coordinates CropRect.left, CropRect.right, CropRect.top, CropRect.bottom. The cutting rectangle, clamped to the image borders, is:
CropRect.left = max(MidPoint.x − W/2, 0)
CropRect.right = min(MidPoint.x + W/2, w3 − 1)
CropRect.top = max(MidPoint.y − H2, 0)
CropRect.bottom = min(y8 + H1, h3 − 1)   (18)
Face image 3 is cropped to this rectangle. If the width of the cropped image is less than W or its height is less than H, the width is padded to W or the height to H by interpolation, giving the standard-size normalized face image shown in Fig. 7.
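The clamping of formula (18), with the embodiment's standard sizes as defaults, can be sketched as:

```python
def crop_rect(mid_x, mid_y, y8, w3, h3, W=360, H2=252, H1=28):
    # Formula (18): the cutting rectangle around the eye midpoint, clamped to
    # the borders of face image 3 (sizes W, H2, H1 from the embodiment).
    left = max(mid_x - W // 2, 0)
    right = min(mid_x + W // 2, w3 - 1)
    top = max(mid_y - H2, 0)
    bottom = min(y8 + H1, h3 - 1)
    return left, right, top, bottom

# Hypothetical eye midpoint (400, 300) on a 900x700 face image 3; y8 = 300 + 200.
inside = crop_rect(400, 300, 500, 900, 700)
# Near the left border the rectangle is clamped at 0 instead of going negative.
clamped = crop_rect(100, 300, 500, 900, 700)
```

The clamped case is exactly the situation where the cropped result comes out narrower than W and must be padded back by interpolation.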
10) Apply steps 1)–9) to each face image in the training set to form geometrically normalized face images, and extract from each normalized image the five kinds of facial components: bare face, eyebrows + eyes, eyes, nose, and mouth. For the five kinds of components extracted from the training-set faces, the eigenface method of principal component analysis is used to form the feature bare face, feature (eyes + eyebrows), feature eyes, feature nose, and feature mouth respectively; Fig. 8 illustrates these five kinds of components. The feature extraction and recognition algorithm of this embodiment adopts patent No. 01136577.3: a multi-mode face recognition method based on component principal component analysis.
11) Apply steps 1)–9) to each known person's face image to form geometrically normalized face images. For the five kinds of components (bare face, eyebrows + eyes, eyes, nose, mouth) extracted from each known normalized face, the feature-projection analysis of the principal component method is used to extract the projection feature values of the five components, and a database of personal identity records is built containing the projection feature values of the five components of each known face, the compressed image of the known face, and the identity of the known person. The feature extraction and recognition algorithm of this step also adopts patent No. 01136577.3: a multi-mode face recognition method based on component principal component analysis.
12) Apply steps 1)–9) to the face image to be identified to form a geometrically normalized face image. For the five kinds of components (bare face, eyebrows + eyes, eyes, nose, mouth) extracted from the normalized face to be identified, the projection-feature analysis of the principal component method is used to extract the projection feature values of the bare face, eyes + eyebrows, eyes, nose, and mouth of the face to be identified. The feature extraction and recognition algorithm of this step also adopts patent No. 01136577.3.
13) Face recognition is performed with both global and local recognition. The recognition process is: compare the features of the face to be identified with the features of the faces stored in the database and compute their similarities; sort the faces in the database by their similarity to the face to be identified, from large to small; and display, in that order, the photographs, personal identity records, and similarities of the persons found, thereby revealing the identity of the person to be identified or the identities of the people who most resemble them. The similarity between the face to be identified and a known face is computed by formula (19).
R = 1 − ‖A − B‖ / (‖A‖ + ‖B‖)   (19)
where A is the feature-projection value string of the face to be identified and B is the feature-projection value string of a known face in the database.
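Formula (19) in code; a minimal sketch:

```python
import numpy as np

def similarity(A, B):
    # Formula (19): R = 1 - ||A - B|| / (||A|| + ||B||). Identical feature
    # strings give 1; opposite ones give 0.
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    return 1.0 - np.linalg.norm(A - B) / (np.linalg.norm(A) + np.linalg.norm(B))

s_same = similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
s_opposite = similarity([1.0, 0.0], [-1.0, 0.0])
```

The measure is bounded in [0, 1], which makes the subsequent large-to-small ranking of database faces straightforward.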
In the global recognition method of this embodiment, the feature projection values of the bare face, eyes + eyebrows, eyes, nose, and mouth of each known face are weighted in the ratio 5 : 6 : 4 : 3 : 2, the feature projection values of the same five components of the face to be identified are weighted in the same ratio, and the similarity is then computed by formula (19).
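One plausible reading of the 5 : 6 : 4 : 3 : 2 weighting is to scale each component's projection string before applying formula (19); the component names and the concatenation rule here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

WEIGHTS = {'bare_face': 5.0, 'eyes_brows': 6.0, 'eyes': 4.0, 'nose': 3.0, 'mouth': 2.0}

def global_similarity(feats_a, feats_b):
    # Weight each component's projection values 5:6:4:3:2, concatenate them,
    # and apply the formula-(19) similarity to the combined strings.
    a = np.concatenate([w * np.asarray(feats_a[k], float) for k, w in WEIGHTS.items()])
    b = np.concatenate([w * np.asarray(feats_b[k], float) for k, w in WEIGHTS.items()])
    return 1.0 - np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b))

face = {k: [1.0, -0.5] for k in WEIGHTS}     # toy projection values
s_identical = global_similarity(face, face)  # a face matched against itself
```

The eyes + eyebrows component gets the largest weight, consistent with the eye region being the most discriminative component in the ratio given above.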
In the local recognition method, any combination of the bare face, eyes + eyebrows, eyes, nose, and mouth is selected interactively, giving 120 recognition modes in total. The feature projection values of the five components are still weighted in the ratio 5 : 6 : 4 : 3 : 2. This step also adopts patent No. 01136577.3: a multi-mode face recognition method based on component principal component analysis.

Claims (10)

1. A face image recognition method based on face geometric size normalization, comprising two parts, face geometric size normalization and face recognition, characterized in that the face geometric size normalization comprises the following steps:
1) on the input face image, determining the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, drawing the straight line L1 through the two points A and B, and determining the coordinates (x0, y0) of the jaw point C0;
2) calculating the angle α between the straight line L1 and the horizontal;
the angle α between L1 and the horizontal being obtained from the formula
α = arctan((y2 − y1) / (x2 − x1))
where (x1, y1) and (x2, y2) are the coordinates of the left and right eyeball points A and B respectively;
3) rotating the face image by the angle −α to obtain a second face image;
the rotation expression being:
x′ = x·cos α + y·sin α
y′ = −x·sin α + y·cos α
where x, y are the coordinates of a point on the input face image and x′, y′ are the coordinates of the corresponding point on the second face image;
4) on second facial image, determine a left side epibulbar 1 C coordinate position (x 3, y 3), the coordinate position (x of right epibulbar 1 D 4, y 4), be straight line L by C, D at 2 2, and determine the coordinate position (x of the lower jaw point E of second facial image 5, y 5);
5) numerical value of the physical dimension of the facial image of regulation geometric size normalization, wherein width is of a size of W, and height is of a size of H; The standard value of the vertical range of the line of any point to two on the regulation jaw rolls off the production line is H 0, be H to the standard value of the vertical range of image lower frame 1, two standard values that are wired to the vertical range of image upper side frame are H 2
6) Obtain the vertical distance hy from the point E to the straight line L2 and compute the image scaling coefficient K = hy / H0, where the vertical distance from the point E to the straight line L2 is

hy = y5 − (y3 + y4) / 2;
7) Enlarge or reduce the second facial image according to the scaling coefficient K to obtain the third facial image, which satisfies the standard distance H0;
8) On the third facial image, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate y8 of the lower jaw point P, where y8 = MidPoint.y + H0 and MidPoint.y = (y6 + y7) / 2 is the ordinate of the midpoint of the line MN;
9) Crop the third facial image to obtain the standard normalized facial image: cut away the part of the third facial image whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and the part whose y coordinate is less than (y7 − H2) or greater than (y8 + H1); if the width of the cropped image is less than W or its height is less than H, pad it to the width W or the height H by interpolation;
The face recognition comprises the following steps:
10) Apply steps 1) to 9) to every facial image in the training set to form geometric-size-normalized facial images, and extract face features from the normalized facial images;
11) Apply steps 1) to 9) to every known person's facial image to form geometric-size-normalized facial images, extract face features from the normalized facial images, and build a database comprising the features of the known faces, the compressed images of the known faces, and the personal identification files of the known persons;
12) Apply steps 1) to 9) to each facial image to be identified to form a geometric-size-normalized facial image, and extract face features from the normalized facial image;
13) Perform face recognition on the face to be identified in the known face database by calculating similarity and ranking by similarity.
2. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 1), the coordinate positions of the points A and B on the left and right eyeballs are read directly with a mouse on the facial image.
3. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 1), the coordinate positions of the points A and B on the left and right eyeballs are determined on the facial image by a method combining integral projection with feature space analysis.
4. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 4), the coordinate positions of the points C and D on the left and right eyeballs are read directly with a mouse on the facial image.
5. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 4), the coordinate positions of the points C and D on the left and right eyeballs are determined on the facial image by a method combining integral projection with feature space analysis.
6. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 4), the coordinates of the points C and D on the left and right eyeballs are calculated from the coordinates of the points A and B and the angle α.
7. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in steps 1) and 4), the coordinate positions of the lower jaw points C0 and E are read directly with a mouse on the facial image.
8. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in steps 1) and 4), the coordinate positions of the lower jaw points C0 and E are determined by a method combining integral projection with human face proportion, comprising two steps: first, detecting the candidate valley points corresponding to the facial organs; second, determining which candidate valley point corresponds to each organ.
9. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 4), the coordinate position of the lower jaw point E is calculated from the coordinate of C0 and the angle α.
10. The facial image recognition method based on man face geometric size normalization as claimed in claim 1, characterized in that, in step 5), the numerical values of the geometric size of the geometric-size-normalized facial image are: the width W = 360 pixels and the height H = 480 pixels; the standard value of the vertical distance from the lower jaw point to the line joining the two eyes is H0 = 200 pixels, the standard value of the vertical distance from the lower jaw point to the lower frame of the image is H1 = 28 pixels, and the standard value of the vertical distance from the line joining the two eyes to the upper frame of the image is H2 = 252 pixels.
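The geometric normalization of claim 1 can be sketched at the coordinate level as follows. This is an illustrative reading of the claims, not the patented code: the function names and the coordinate-only treatment are assumptions, actual pixel resampling is omitted, and the constants are the values prescribed in claim 10.

```python
import math

# Coordinate-level sketch of claim 1, steps 1)-9) (illustrative only).
W, H = 360, 480              # normalized image width and height, claim 10
H0, H1, H2 = 200, 28, 252    # eye-line-to-jaw, jaw-to-bottom, eye-line-to-top

def rotation_angle(a, b):
    """Step 2): angle alpha of the eye line AB against the horizontal."""
    (x1, y1), (x2, y2) = a, b
    return math.atan2(y2 - y1, x2 - x1)

def rotate_point(p, alpha):
    """Step 3): the claim's rotation matrix, i.e. a rotation by -alpha
    that brings the tilted eye line to the horizontal."""
    x, y = p
    return (x * math.cos(alpha) + y * math.sin(alpha),
            -x * math.sin(alpha) + y * math.cos(alpha))

def scale_factor(c, d, e):
    """Steps 6)-7): K = hy / H0, with hy the vertical distance from the
    jaw point E to the eye line CD; the second image is presumably
    resized by 1/K so the eye-to-jaw distance becomes exactly H0."""
    (_, y3), (_, y4), (_, y5) = c, d, e
    h_y = y5 - (y3 + y4) / 2.0
    return h_y / H0

def crop_window(m, n):
    """Steps 8)-9): the crop rectangle (left, top, right, bottom) around
    the eyeball points M and N on the third facial image."""
    (x6, y6), (x7, y7) = m, n
    cx = (x6 + x7) / 2.0            # horizontal midpoint of the eyes
    y8 = (y6 + y7) / 2.0 + H0       # ordinate of the lower jaw point P
    return (cx - W / 2.0, y7 - H2, cx + W / 2.0, y8 + H1)
```

With the claim-10 constants the window is exactly W = 360 pixels wide and, when the eyes are level (y6 = y7), exactly H2 + H0 + H1 = 480 = H pixels high, which is why the interpolation padding in step 9) is only needed when the crop runs off the image border.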
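The recognition stage of steps 10) to 13) reduces to comparing feature vectors and ranking the matches. A minimal sketch, assuming cosine similarity as the similarity measure (the claim only says "calculating similarity" and "ranking by similarity", without specifying the measure) and plain Python lists as feature vectors:

```python
import math

# Illustrative sketch of steps 10)-13): score the face to be identified
# against every known face and rank by similarity. Cosine similarity is
# an assumed measure, not specified by the claims.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(query, database):
    """database maps a known person's identity to a stored feature
    vector; returns (identity, similarity) pairs sorted from most to
    least similar, as in step 13)."""
    scored = [(name, cosine_similarity(query, feat))
              for name, feat in database.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

In the patent's setting the stored vectors would be the features extracted from the geometric-size-normalized known faces of step 11), and `query` the features of the normalized face to be identified from step 12).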
CNB200510067962XA 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization Expired - Fee Related CN100345153C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200510067962XA CN100345153C (en) 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization


Publications (2)

Publication Number Publication Date
CN1687959A CN1687959A (en) 2005-10-26
CN100345153C true CN100345153C (en) 2007-10-24

Family

ID=35306000

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200510067962XA Expired - Fee Related CN100345153C (en) 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization

Country Status (1)

Country Link
CN (1) CN100345153C (en)


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100440246C (en) * 2006-04-13 2008-12-03 北京中星微电子有限公司 Positioning method for human face characteristic point
CN101033955B (en) * 2007-04-18 2010-10-06 北京中星微电子有限公司 Method, device and display for implementing eyesight protection
CN101393597B (en) * 2007-09-19 2011-06-15 上海银晨智能识别科技有限公司 Method for identifying front of human face
CN101615241B (en) * 2008-06-24 2011-10-12 上海银晨智能识别科技有限公司 Method for screening certificate photos
CN101383001B (en) * 2008-10-17 2010-06-02 中山大学 Quick and precise front human face discriminating method
CN101751559B (en) * 2009-12-31 2012-12-12 中国科学院计算技术研究所 Method for detecting skin stains on face and identifying face by utilizing skin stains
JP5434708B2 (en) * 2010-03-15 2014-03-05 オムロン株式会社 Collation apparatus, digital image processing system, collation apparatus control program, computer-readable recording medium, and collation apparatus control method
JP5500194B2 (en) * 2012-03-22 2014-05-21 日本電気株式会社 Captured image processing apparatus and captured image processing method
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN102968775B (en) * 2012-11-02 2015-04-15 清华大学 Low-resolution face image rebuilding method based on super-resolution rebuilding technology
CN103035049A (en) * 2012-12-12 2013-04-10 山东神思电子技术股份有限公司 FPGA (Field Programmable Gate Array)-based face recognition entrance guard device and FPGA-based face recognition entrance guard method
CN105279473B (en) * 2014-07-02 2021-08-03 深圳Tcl新技术有限公司 Face image correction method and device and face recognition method and system
CN105989331B (en) * 2015-02-11 2019-10-08 佳能株式会社 Face feature extraction element, facial feature extraction method, image processing equipment and image processing method
CN105147264A (en) * 2015-08-05 2015-12-16 上海理工大学 Diagnosis and treatment system
CN108875515A (en) * 2017-12-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and capture machine
CN109934948B (en) * 2019-01-10 2022-03-08 宿迁学院 Novel intelligent sign-in device and working method thereof
CN113158914B (en) * 2021-04-25 2022-01-18 胡勇 Intelligent evaluation method for dance action posture, rhythm and expression

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1207532A (en) * 1997-07-31 1999-02-10 三星电子株式会社 Apparatus and method for retrieving image information in computer
JPH11144067A (en) * 1997-11-07 1999-05-28 Nec Corp System and method for image layout and recording medium
US20030133599A1 (en) * 2002-01-17 2003-07-17 International Business Machines Corporation System method for automatically detecting neutral expressionless faces in digital images
US20050058369A1 (en) * 2003-09-09 2005-03-17 Fuji Photo Film Co., Ltd. Apparatus, method and program for generating photo card data


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3617843A1 (en) * 2012-12-10 2020-03-04 Samsung Electronics Co., Ltd. Mobile device, control method thereof, and ui display method
US11134381B2 (en) 2012-12-10 2021-09-28 Samsung Electronics Co., Ltd. Method of authenticating user of electronic device, and electronic device for performing the same
US20220007185A1 (en) 2012-12-10 2022-01-06 Samsung Electronics Co., Ltd. Method of authenticating user of electronic device, and electronic device for performing the same
US11930361B2 (en) 2012-12-10 2024-03-12 Samsung Electronics Co., Ltd. Method of wearable device displaying icons, and wearable device for performing the same


Similar Documents

Publication Publication Date Title
CN100345153C (en) Man face image identifying method based on man face geometric size normalization
CN1455374A (en) Apparatus and method for generating 3-D cartoon
CN101059836A (en) Human eye positioning and human eye state recognition method
CN100347718C (en) Iris identification system and method of identifying a person throagh iris recognition
CN1276389C (en) Graph comparing device and graph comparing method
CN1894703A (en) Pattern recognition method, and device and program therefor
CN1260680C (en) method and apparatus for digital image segmentation
CN1758264A (en) Biological authentification system register method, biological authentification system and program thereof
CN1395220A (en) Character locating method and device in picture of digital camera
CN101034481A (en) Method for automatically generating portrait painting
CN1710593A (en) Hand-characteristic mix-together identifying method based on characteristic relation measure
CN1928886A (en) Iris identification method based on image segmentation and two-dimensional wavelet transformation
CN1977286A (en) Object recognition method and apparatus therefor
CN101038629A (en) Biometric authentication method and biometric authentication apparatus
KR20050081850A (en) Face indentification apparatus, face identification method, and face indentification program
CN101032405A (en) Safe driving auxiliary device based on omnidirectional computer vision
CN1910613A (en) Method for extracting person candidate area in image, person candidate area extraction system, person candidate area extraction program, method for judging top and bottom of person image, system for j
CN1866271A (en) AAM-based head pose real-time estimating method and system
CN1503194A (en) Status identification method by using body information matched human face information
CN1822024A (en) Positioning method for human face characteristic point
CN109902758A (en) The data set scaling method of lane region recognition based on deep learning
CN101038626A (en) Method and device for recognizing test paper score
CN1643540A (en) Comparing patterns
CN1658224A (en) Combined recognising method for man face and ear characteristics
CN1776712A (en) Human face recognition method based on human face statistics

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20071024

Termination date: 20150430

EXPY Termination of patent right or utility model