CN1474357A - Accurately automatically positioning method for centre of human face and eyes in digital grey scale image - Google Patents


Info

Publication number
CN1474357A
Authority
CN
China
Prior art keywords
white pixel
black
white
eye center
positioning method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA031318827A
Other languages
Chinese (zh)
Other versions
CN1219275C (en)
Inventor
Zhou Zhihua (周志华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN 03131882 priority Critical patent/CN1219275C/en
Publication of CN1474357A publication Critical patent/CN1474357A/en
Application granted granted Critical
Publication of CN1219275C publication Critical patent/CN1219275C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The method accurately and automatically locates human-face eye centers in a digital gray-level image. Its steps are: set the distance between the two eye centers; compare each pixel against its 8 neighborhoods; turn oversized white pixel blocks black; turn white pixels with too low a proportion of white neighbors black; turn black pixels with too low a proportion of black neighbors white; turn white pixel blocks that are too large, too small, or oriented too vertically black; pair white pixel blocks until two blocks satisfy the eye-window requirement; build a rectangular eye window around each of the two centroids; compute the hybrid projection functions of each window; determine the precise eye positions from the hybrid projection functions; then scale the original image and repeat all steps except the first.

Description

Accurate automatic positioning of human-face eye centers in gray-level images
1. Technical field
The present invention relates to digital face-image detection and recognition devices, and in particular to a method for accurately and automatically locating the eye centers of a human face in a gray-level image.
2. Background art
Digital face-image detection and recognition devices are widely applicable to identity-document verification, building access control, computer login control, credit-card holder authentication, criminal tracking, accident-victim identification, and similar tasks. Compared with identity verification based on other biometrics such as fingerprints or the iris, face-based verification is friendlier and more convenient. Current digital face-recognition techniques are all built on knowing the size and location of the face region: before recognition, a face-detection stage must determine whether a given image or image sequence contains a face and, if so, report the position and size of the face region. Eye localization is the key to face detection: once the positions of the left and right eyes are fixed, the location of the face is essentially determined, and from the length and direction of the line joining the two eyes the size and orientation of the face region can also be roughly estimated. Existing eye-localization methods can only provide the approximate location of the eye centers, not a precise one. Because eye localization sits at the front of the subsequent detection and recognition pipeline, any localization error is amplified in later processing and degrades the performance of the whole digital face-image detection and recognition device.
3. Summary of the invention
1. The object of the invention is to address the limitation that the prior art can only locate eye centers coarsely, by providing an accurate automatic eye-center localization method that helps improve the performance of digital face-image detection and recognition devices.
2. To achieve this object, the invention provides a method for accurately and automatically locating the eyes of a human face in a gray-level image, comprising the following steps: (1) set the distance between the two eye centers; (2) compare each pixel with its 8 neighborhoods: if the pixel's gray value is lower than the mean gray value of most of its neighborhood pixels, mark it white, otherwise black; (3) if a white pixel block is too large, turn it black; (4) if the proportion of white pixels in the neighborhood of a white pixel is too low, turn it black; (5) if the proportion of black pixels in the neighborhood of a black pixel is too low, turn it white; (6) if a white pixel block is too large, too small, or oriented too vertically, turn it black; (7) pair the white pixel blocks; (8) for the two white pixel blocks of each pairing, if they satisfy the eye-window requirement, go to step 9; otherwise examine another pairing and repeat step 8; (9) take the two block centroids as coarse eye centers and build a rectangular eye window around each; (10) for each eye window, compute the hybrid projection functions specially designed for this invention; (11) within each eye window, determine the precise eye-center position from the change rate of the hybrid projection functions; (12) scale the original image and repeat steps 2 to 11, to locate eye centers in faces of different sizes.
4. Brief description of the drawings
Fig. 1 is the workflow of a digital face-image detection and recognition device.
Fig. 2 is the flow chart of the method of the invention.
Fig. 3 is a schematic of the 8 neighborhoods of a center pixel.
Fig. 4 is the initial digital gray-level image obtained from the input device.
Fig. 5 is the result after step 2.
Fig. 6 is the result after step 3.
Fig. 7 is the result after step 4.
Fig. 8 is the result after step 5.
Fig. 9 shows the result after step 6 overlaid on the initial image.
Fig. 10 is a schematic of an eye window; the circle marks the coarse eye center and the cross marks the precise eye center.
Fig. 11 is a schematic of obtaining the precise eye center from the hybrid projection functions.
5. Detailed description of the embodiments
As shown in Fig. 1, the digital face-image detection and recognition device obtains a gray-level image from a digital image input device; the image is then processed by the eye-localization part.
The method of the invention is shown in Fig. 2. Step 10 is the initial action. Step 11 sets the distance d between the two eye centers; this distance can be the average obtained by measuring a large number of face images of the same size, in whole pixels. According to anthropometric findings, the average horizontal distance between the two eyes on a face is roughly twice the eye length, and the eye width is roughly half the eye length. Therefore, once d is obtained, an eye can be estimated to be about 0.5d wide and 0.25d high.
Step 12 of Fig. 2 performs the 8-neighborhood comparison. A pixel and its 8 neighborhoods are shown in Fig. 3, where the black region is the pixel itself and the remaining regions are its respective neighborhoods; the numerical unit in Fig. 3 is pixels. For each pixel in the image, step 12 first computes the mean gray value of each neighborhood and compares it with the gray value of the pixel itself; if the pixel's gray value is lower than the mean gray value of at least 6 of the 8 neighborhoods, the pixel may belong to the inside of an eye and is marked white, otherwise it is marked black. The initial image in Fig. 4 becomes Fig. 5 after step 12.
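The neighborhood comparison above can be sketched as follows. This is an illustrative sketch, not the patent's exact procedure: the neighborhood geometry of Fig. 3 is not reproduced here, so each of the 8 neighborhoods is approximated by a small square patch offset from the pixel in one of the 8 compass directions, and the function name `mark_candidate_pixels` and `block` size are assumptions.

```python
import numpy as np

def mark_candidate_pixels(img, block=3, min_darker=6):
    """Step 12 sketch: mark a pixel white (True) when its gray value is
    below the mean gray of at least `min_darker` of 8 surrounding
    neighborhoods.  Each neighborhood is approximated by a block x block
    patch offset in one of the 8 compass directions; the real geometry
    is given by Fig. 3 of the patent."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    offsets = [(dy, dx) for dy in (-block, 0, block)
                        for dx in (-block, 0, block) if (dy, dx) != (0, 0)]
    margin = block + block // 2            # keep every patch inside the image
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            darker = 0
            for dy, dx in offsets:
                patch = img[y + dy - block // 2 : y + dy + block // 2 + 1,
                            x + dx - block // 2 : x + dx + block // 2 + 1]
                if img[y, x] < patch.mean():
                    darker += 1
            out[y, x] = darker >= min_darker
    return out
```

A dark pixel surrounded by bright neighborhoods is marked white; a pixel of uniform background is not.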
Step 13 of Fig. 2 analyzes each white pixel block: if the width of a block's bounding rectangle is greater than d, or its height is greater than 0.5d, the block is much larger than an eye and cannot be an eye region, so it is turned black. For a pixel block, the bounding-rectangle width is the distance between the two farthest-apart pixels in the block, and the bounding-rectangle height is the distance between the two farthest-apart pixels in the direction perpendicular to the width. Fig. 5 becomes Fig. 6 after step 13.
Step 14 of Fig. 2 removes sparse white pixels: if white pixels account for less than 20% of the pixels in the 8-neighborhood of a white pixel, that pixel cannot belong to an eye region and is turned black. Fig. 6 becomes Fig. 7 after step 14. Step 15 removes sparse black pixels: if black pixels account for less than 65% of the pixels in the 8-neighborhood of a black pixel, that pixel is likely to belong to an eye region and is turned white. Fig. 7 becomes Fig. 8 after step 15.
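Steps 14 and 15 amount to a single local filter over the binary map. A minimal sketch, with the mask as a boolean array (True = white); leaving border pixels unchanged is a simplifying choice not specified by the patent:

```python
import numpy as np

def clean_sparse(mask, white_thresh=0.20, black_thresh=0.65):
    """Steps 14-15 sketch: isolated white pixels become black, and black
    pixels surrounded mostly by white become white."""
    out = mask.copy()
    h, w = mask.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = mask[y - 1:y + 2, x - 1:x + 2]
            white = nb.sum() - mask[y, x]       # white count among the 8 neighbors
            if mask[y, x] and white / 8.0 < white_thresh:
                out[y, x] = False               # sparse white -> black
            elif not mask[y, x] and (8 - white) / 8.0 < black_thresh:
                out[y, x] = True                # sparse black -> white
    return out
```

An isolated white speck is erased, while a single black hole inside a white block is filled in.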
Step 16 of Fig. 2 analyzes each white pixel block again: if a block's bounding-rectangle width is greater than d or less than 0.25d, or its height is greater than 0.5d or less than 0.125d, or its height-to-width ratio is greater than 0.8, then the block is much larger than an eye, much smaller than an eye, or oriented too vertically, so it cannot be an eye region and is turned black. After step 16, Fig. 8 gives the result shown overlaid on the initial image in Fig. 9.
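Step 16's geometric filter needs the bounding rectangle of every white block. A sketch using plain BFS labelling — the patent does not prescribe how blocks are found, and this sketch uses axis-aligned bounding boxes rather than the width-direction-relative rectangle defined in step 13:

```python
import numpy as np
from collections import deque

def filter_blocks(mask, d):
    """Step 16 sketch: turn black every 8-connected white block whose
    bounding box is wider than d, narrower than 0.25d, taller than 0.5d,
    shorter than 0.125d, or whose height/width ratio exceeds 0.8."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = mask.copy()
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # BFS over one white block, collecting its pixels
            pixels, q = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                pixels.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            bw = max(xs) - min(xs) + 1      # block width
            bh = max(ys) - min(ys) + 1      # block height
            if bw > d or bw < 0.25 * d or bh > 0.5 * d or bh < 0.125 * d or bh / bw > 0.8:
                for y, x in pixels:         # fails the eye-shape test
                    out[y, x] = False
    return out
```

With d = 8, a 6-wide, 2-high block survives (plausible eye) while a 1-pixel speck is removed.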
Step 17 of Fig. 2 pairs all white pixel blocks two by two; with N white pixel blocks there are N(N-1) pairings, each containing two blocks. For each white pixel block, the intersection of the diagonals of its bounding rectangle is its centroid.
Steps 18 to 23 are executed for each pairing. First, step 18 checks whether the two white pixel blocks of the pairing satisfy the condition for building eye windows. Let the coordinates of the two centroids be $(x_1, y_1)$ and $(x_2, y_2)$. If $0.75d < \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2} < 1.25d$ and $|x_1 - x_2| < 0.25d$, the condition is satisfied and step 20 is executed; otherwise the condition fails, the pairing is discarded, and control passes to step 23.
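The pairing test of step 18 is a direct computation on the two centroids. A sketch; the $|x_1 - x_2|$ constraint follows the coordinate convention of the patent text, and `valid_pair` is an assumed helper name:

```python
import math

def valid_pair(c1, c2, d):
    """Step 18 sketch: accept a pairing when the centroid distance lies
    in (0.75d, 1.25d) and the first-coordinate offset is below 0.25d."""
    dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return 0.75 * d < dist < 1.25 * d and abs(c1[0] - c2[0]) < 0.25 * d
```

For d = 100, centroids (0, 0) and (10, 99) pass (distance about 99.5, offset 10), while (0, 0) and (60, 80) fail on the offset condition despite the right distance.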
In step 20, the centroids of the two white pixel blocks are taken as coarse eye centers, and a rectangular eye window 0.8d wide and 0.4d high is built around each; the long side of the rectangle is parallel to the line joining the two centroids, and the coarse eye center lies at the intersection of the rectangle's diagonals, as shown in Fig. 10. Note that this window is slightly larger than the estimated eye size of 0.5d by 0.25d.
Step 21 then computes the hybrid projection functions in each eye window. The hybrid projection function is specially designed for this invention and comprises a vertical hybrid projection function and a horizontal hybrid projection function. Let $I(x, y)$ denote the gray value of the pixel at point $(x, y)$. Within the eye window determined by the upper-left corner $(x_1, y_1)$ and the lower-right corner $(x_2, y_2)$, the vertical and horizontal hybrid projection function values at point $(x, y)$ are, respectively:

$$HPF_v(x) = \frac{0.4}{y_2-y_1}\int_{y_1}^{y_2} I(x,y)\,dy + \frac{0.6}{y_2-y_1}\sum_{y_i=y_1}^{y_2}\left[I(x,y_i)-\frac{1}{y_2-y_1}\int_{y_1}^{y_2} I(x,y)\,dy\right]^2$$

$$HPF_h(y) = \frac{0.4}{x_2-x_1}\int_{x_1}^{x_2} I(x,y)\,dx + \frac{0.6}{x_2-x_1}\sum_{x_i=x_1}^{x_2}\left[I(x_i,y)-\frac{1}{x_2-x_1}\int_{x_1}^{x_2} I(x,y)\,dx\right]^2$$
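In discrete form the integral becomes a column (or row) mean, and the bracketed deviation term — read here as a squared deviation, an assumption consistent with the statement below that the function reflects both the mean and the variance — becomes a column (or row) variance. A sketch for a window given as a 2-D array (rows indexed by y, columns by x; function names are assumptions):

```python
import numpy as np

def hpf_vertical(win):
    """Vertical hybrid projection function of an eye window: for each
    column x, 0.4 * (mean gray of the column) + 0.6 * (variance of the
    column).  `win` is a 2-D float array with rows = y, columns = x."""
    mean = win.mean(axis=0)                  # integral term (column mean)
    var = ((win - mean) ** 2).mean(axis=0)   # deviation term (column variance)
    return 0.4 * mean + 0.6 * var

def hpf_horizontal(win):
    """Horizontal counterpart, computed along each row y."""
    mean = win.mean(axis=1)
    var = ((win - mean[:, None]) ** 2).mean(axis=1)
    return 0.4 * mean + 0.6 * var
```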
Because the hybrid projection functions reflect how the mean and variance of the image gray values change along a direction, step 22 uses them to determine the exact position of the eye center. In each eye window, the change rates of the vertical and horizontal hybrid projection functions are computed. The following approximate algorithm can be used: for the vertical hybrid projection function HPF_v, compute |HPF_v(x+1) - HPF_v(x)| for each abscissa x in the eye window and take it as the change rate of HPF_v at x; for the horizontal hybrid projection function HPF_h, compute |HPF_h(y+1) - HPF_h(y)| for each ordinate y in the eye window and take it as the change rate of HPF_h at y. Then, for the vertical function, find the 4 positions with the largest change rate and take the mean abscissa of the two outermost of them as the abscissa of the final precise eye center; for the horizontal function, find the 2 positions with the largest change rate and take their mean ordinate as the ordinate of the final precise eye center, as shown in Fig. 11.
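The change-rate rule of step 22 can be sketched directly with forward differences. Indices are window-relative, and `precise_center` is an assumed helper name:

```python
import numpy as np

def precise_center(hpf_v, hpf_h):
    """Step 22 sketch: from the two projection curves of one eye window,
    take the 4 largest |forward differences| of the vertical curve and
    average the positions of the two outermost; take the 2 largest of
    the horizontal curve and average their positions."""
    dv = np.abs(np.diff(hpf_v))
    xs = np.sort(np.argsort(dv)[-4:])   # 4 largest change rates, in position order
    cx = (xs[0] + xs[-1]) / 2.0         # mean of the two outermost
    dh = np.abs(np.diff(hpf_h))
    ys = np.argsort(dh)[-2:]            # 2 largest change rates
    cy = ys.mean()
    return cx, cy
```

For a vertical curve with jumps at its two ends and a horizontal curve with one central spike, the returned center lands between the jump pairs.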
In step 23, if there are still uninspected white-pixel-block pairings, control returns to step 18; otherwise it proceeds to step 24. In step 24, if the current image has already been scaled 7 times, the method enters the end state, step 25; otherwise the image is scaled at a ratio of 1.2, that is, enlarged or reduced by a factor of 1.2, and control returns to step 12. The purpose of scaling the image is to find the eyes of face images of different sizes; using 7 scalings is a trade-off between detection rate and time overhead, and the 7 scales comprise the original image once, 3 enlargements, and 3 reductions.
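The scaling loop of steps 23-24 drives the whole pipeline. A sketch in which `detect_once` stands for the single-scale pipeline (steps 2-11) and a nearest-neighbour `zoom` stands in for whatever resampling the device actually uses; both names and the resampling method are assumptions:

```python
import numpy as np

def zoom(img, s):
    """Nearest-neighbour rescale of a 2-D gray image by factor s
    (a stand-in for the device's actual resampling)."""
    h, w = img.shape
    ny, nx = max(1, int(h * s)), max(1, int(w * s))
    yi = (np.arange(ny) / s).astype(int).clip(0, h - 1)
    xi = (np.arange(nx) / s).astype(int).clip(0, w - 1)
    return img[np.ix_(yi, xi)]

def multiscale_search(image, detect_once, ratio=1.2, steps=3):
    """Steps 12-24 sketch: run the single-scale detector at 7 scales --
    the original image plus 3 enlargements and 3 reductions by powers of
    `ratio`.  Each result is tagged with its scale so coordinates can be
    mapped back to the original image."""
    results = []
    for k in range(-steps, steps + 1):
        s = ratio ** k
        results.append((s, detect_once(zoom(image, s))))
    return results
```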

Claims (8)

1. An accurate automatic positioning method for human-face eye centers in a gray-level image, characterized in that the method comprises the following steps:
(1) set the distance d between the two eye centers;
(2) compare each pixel with its 8 neighborhoods: if the pixel's gray value is lower than the mean gray value of most of its neighborhood pixels, mark it white, otherwise black;
(3) if a white pixel block is too large, turn it black;
(4) if the proportion of white pixels in the neighborhood of a white pixel is too low, turn it black;
(5) if the proportion of black pixels in the neighborhood of a black pixel is too low, turn it white;
(6) if a white pixel block is too large, too small, or oriented too vertically, turn it black;
(7) pair the white pixel blocks;
(8) for the two white pixel blocks of each pairing, if they satisfy the eye-window requirement, go to step 9; otherwise examine another pairing and repeat step 8;
(9) take the two block centroids as coarse eye centers and build a rectangular eye window around each;
(10) for each eye window, compute the hybrid projection functions;
(11) within each eye window, determine the precise eye-center position from the change rate of the hybrid projection functions;
(12) scale the original image and repeat steps 2 to 11, to locate eye centers in faces of different sizes.
2. The method according to claim 1, characterized in that in step (2), if a pixel's gray value is lower than the mean gray value of no fewer than 6 of its neighborhoods, it is marked white, otherwise black.
3. The method according to claim 1, characterized in that in step (3), if the width of a white pixel block is greater than d, or its height is greater than 0.5d, it is turned black.
4. The method according to claim 1, characterized in that in steps (4) and (5), if white pixels account for less than 20% of the pixels in the 8-neighborhood of a white pixel, it is turned black; and if black pixels account for less than 65% of the pixels in the 8-neighborhood of a black pixel, it is turned white.
5. The method according to claim 1, characterized in that in step (6), if the bounding-rectangle width of a white pixel block is greater than d or less than 0.25d, or its height is greater than 0.5d or less than 0.125d, or its height-to-width ratio is greater than 0.8, it is turned black.
6. The method according to claim 1, characterized in that in step (8), with the coordinates of the two white-pixel-block centroids being $(x_1, y_1)$ and $(x_2, y_2)$, the eye-window condition is satisfied if $0.75d < \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2} < 1.25d$ and $|x_1 - x_2| < 0.25d$.
7. The method according to claim 1, characterized in that in step (10), the hybrid projection functions comprise the following vertical and horizontal hybrid projection functions:

$$HPF_v(x) = \frac{0.4}{y_2-y_1}\int_{y_1}^{y_2} I(x,y)\,dy + \frac{0.6}{y_2-y_1}\sum_{y_i=y_1}^{y_2}\left[I(x,y_i)-\frac{1}{y_2-y_1}\int_{y_1}^{y_2} I(x,y)\,dy\right]^2$$

$$HPF_h(y) = \frac{0.4}{x_2-x_1}\int_{x_1}^{x_2} I(x,y)\,dx + \frac{0.6}{x_2-x_1}\sum_{x_i=x_1}^{x_2}\left[I(x_i,y)-\frac{1}{x_2-x_1}\int_{x_1}^{x_2} I(x,y)\,dx\right]^2$$

where $I(x, y)$ is the gray value of the pixel at point $(x, y)$, and $(x_1, y_1)$ and $(x_2, y_2)$ are the upper-left and lower-right corner points of the eye window containing point $(x, y)$.
8. The method according to claim 1, characterized in that in step (12), the image is scaled 7 times at a ratio of 1.2.
CN 03131882 2003-06-13 2003-06-13 Accurately automatically positioning method for centre of human face and eyes in digital grey scale image Expired - Fee Related CN1219275C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 03131882 CN1219275C (en) 2003-06-13 2003-06-13 Accurately automatically positioning method for centre of human face and eyes in digital grey scale image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 03131882 CN1219275C (en) 2003-06-13 2003-06-13 Accurately automatically positioning method for centre of human face and eyes in digital grey scale image

Publications (2)

Publication Number Publication Date
CN1474357A true CN1474357A (en) 2004-02-11
CN1219275C CN1219275C (en) 2005-09-14

Family

ID=34153906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 03131882 Expired - Fee Related CN1219275C (en) 2003-06-13 2003-06-13 Accurately automatically positioning method for centre of human face and eyes in digital grey scale image

Country Status (1)

Country Link
CN (1) CN1219275C (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100336071C (en) * 2005-08-19 2007-09-05 清华大学 Method of robust accurate eye positioning in complicated background image
CN101573714B (en) * 2006-10-02 2012-11-14 强生消费者公司 Method and apparatus for identifying facial regions
CN101799866A (en) * 2010-03-31 2010-08-11 拓维信息系统股份有限公司 Method for positioning facial organs of cartoon character on mobile phone
CN102184543A (en) * 2011-05-16 2011-09-14 苏州两江科技有限公司 Method of face and eye location and distance measurement
CN104704531A (en) * 2012-10-12 2015-06-10 皇家飞利浦有限公司 System for accessing data of a face of a subject
CN104704531B (en) * 2012-10-12 2017-09-12 皇家飞利浦有限公司 For the system for the facial data for accessing object
CN106897662A (en) * 2017-01-06 2017-06-27 北京交通大学 The localization method of the face key feature points based on multi-task learning
CN106897662B (en) * 2017-01-06 2020-03-10 北京交通大学 Method for positioning key feature points of human face based on multi-task learning

Also Published As

Publication number Publication date
CN1219275C (en) 2005-09-14

Similar Documents

Publication Publication Date Title
US7436430B2 (en) Obstacle detection apparatus and method
US8099213B2 (en) Road-edge detection
EP2434431A1 (en) Method and device for classifying image
US20080309516A1 (en) Method for detecting moving objects in a blind spot region of a vehicle and blind spot detection device
EP2555159A1 (en) Face recognition device and face recognition method
US20100195899A1 (en) Detection of people in real world videos and images
EP2698762B1 (en) Eyelid-detection device, eyelid-detection method, and program
EP2717219B1 (en) Object detection device, object detection method, and object detection program
KR101246120B1 (en) A system for recognizing license plate using both images taken from front and back faces of vehicle
US7876361B2 (en) Size calibration and mapping in overhead camera view
Guo et al. Lane detection and tracking in challenging environments based on a weighted graph and integrated cues
CN1219275C (en) Accurately automatically positioning method for centre of human face and eyes in digital grey scale image
EP2234388A1 (en) Object checking apparatus and method
CN102609724A (en) Method for prompting ambient environment information by using two cameras
CN104112141A (en) Method for detecting lorry safety belt hanging state based on road monitoring equipment
CN103475800A (en) Method and device for detecting foreground in image sequence
CN104599291A (en) Structural similarity and significance analysis based infrared motion target detection method
Simić et al. Driver monitoring algorithm for advanced driver assistance systems
CN102750522A (en) Method for tracking targets
CN101393597A (en) Method for identifying front of human face
CN103927523B (en) Fog level detection method based on longitudinal gray features
CN104615985B (en) A kind of recognition methods of human face similarity degree
CN113487649B (en) Vehicle detection method and device and computer storage medium
Lu et al. Monocular multi-kernel based lane marking detection
Hermawati et al. A real-time license plate detection system for parking access

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20050914

Termination date: 20100613