MXPA97009328A - Localization filter - Google Patents

Localization filter

Info

Publication number
MXPA97009328A
MXPA97009328A MXPA/A/1997/009328A MX9709328A
Authority
MX
Mexico
Prior art keywords
eyes
image
eye
positions
mask image
Prior art date
Application number
MXPA/A/1997/009328A
Other languages
Spanish (es)
Other versions
MX9709328A (en)
Inventor
Fang Ming
Singh Ajit
Chiu Mingyee
Original Assignee
Siemens Corporate Research Inc
Application filed by Siemens Corporate Research Inc
Publication of MX9709328A
Publication of MXPA97009328A

Abstract

A system for rapid eye localization is described, based on a filter that uses the relatively high horizontal contrast density of the eye region to determine the eye positions in a gray scale image of a human face. The system comprises a camera to scan an individual and a processor to perform the required filtering. The filtering includes a horizontal contrast computation filter, a horizontal contrast density determination filter, facial geometry reasoning, and eye position determination, and works with various eye shapes, face orientations and other factors such as glasses, and even when the eyes are closed.

Description

EYE LOCALIZATION FILTER
BACKGROUND OF THE INVENTION
FIELD OF THE INVENTION
The present invention relates to determining eye positions, and more particularly to using the relatively high horizontal contrast density of the eye region to determine eye positions in the gray scale image of a face.
DESCRIPTION OF THE PRIOR ART
For many visual verification and monitoring applications, it is important to determine the positions of the human eyes from a sequence of images containing a human face. Once the eye positions are determined, all other important facial features, such as the positions of the nose and mouth, can easily be determined. Basic facial geometric information, such as the distance between the two eyes, the nose and mouth sizes, etc., can additionally be extracted. This geometric information can then be used for a variety of tasks, such as recognizing a face from a given face database. An eye tracking system can also be used directly to detect the drowsy behavior of a car driver. There are some existing techniques for eye localization based on the Hough transform, geometric and symmetry checks, and deformable models. Most of these techniques are not robust enough against changes in shape. These systems also require an extensive amount of computer processing time. In addition, none of these existing systems can locate the eyes when the eyes are closed.
BRIEF DESCRIPTION OF THE INVENTION
The present invention is a system for rapid eye localization, which is based on filters that use the relatively high horizontal contrast density of the eye region to determine the eye positions in a gray scale image of a human face. The system comprises a camera that scans an individual and is linked to a processor, which performs the required filtering. The filtering comprises a horizontal contrast computation filter, a filter for determining horizontal contrast density, facial geometry reasoning and eye position determination.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates one embodiment of the present invention. Figure 2 illustrates a signal flow diagram of the filtering of the present invention. Figure 3 illustrates the horizontal contrast filter used in the present invention. Figure 4 illustrates the horizontal contrast density determination. Figure 5 illustrates the results of the horizontal contrast filter and the horizontal contrast density determination. Figure 6 illustrates the facial geometry reasoning. Figure 7 illustrates another embodiment of the facial geometry reasoning. Figure 8 illustrates the eye position determination. Figure 9 illustrates eye localization for representative faces. Figure 10 illustrates three representative frames of a typical video sequence. Figure 11 illustrates examples showing how the system works with and without glasses.
DETAILED DESCRIPTION OF THE INVENTION
The present invention uses the relatively high horizontal contrast of the eye regions to locate the eye positions.
The basic system, as shown in Figure 1, comprises a camera 11 that scans an individual 12 and is connected to a processor 13, which performs the required filtering of the scanned image. The filtering includes a horizontal contrast computation, a horizontal contrast density determination, facial geometry reasoning and eye position determination. The signal flow diagram of the filtering of the present invention is shown in Figure 2. In Figure 2, the gray scale image of the face is the input to the horizontal contrast filter. The output of the horizontal contrast filter, the filtered image, is then sent to the horizontal contrast density filter for additional filtering. The output of the horizontal contrast density filter flows into the facial geometry reasoning section of the system. The output of the facial geometry reasoning section is sent to the eye position determination section of the system. The output of the eye position determination section, which is the output of the present invention, is the pair of left and right eye positions. The operation of the horizontal contrast filter, the horizontal contrast density filter, the facial geometry reasoning and the eye position determination is described below. The signal flow diagram of the horizontal contrast filter is shown in Figure 3. The horizontal contrast filter operates as follows. In a small local window of size m pixels by n pixels in the image, a sum over m pixels in the horizontal direction is computed first, to smooth the vertical structures within the filter window. Then, the maximum difference between the m-pixel sum values is calculated. If this maximum difference is greater than a given threshold, the pixel is classified as a pixel with high horizontal contrast. If the horizontal contrast is high and the sum values s1, ..., sn are in decreasing order, the filter output is "1", which represents a "white" pixel in the output image.
Otherwise, the filter output is "0", which corresponds to a "black" pixel in the output image. In practice, a window of 3 x 3 pixels or 5 x 5 pixels is sufficient for an input image of 256 by 256 pixels. A typical gray scale image of a face and the corresponding output image of the horizontal contrast filter, the binary mask image, are shown in Figures 5a and 5b, respectively. It is important to note that the horizontal contrast filter described above is only one of many possible embodiments. Existing horizontal edge detection techniques can also be used with some minor modifications. Two observations can be made from the binary mask image that is the output of the horizontal contrast filter. First, the output of the horizontal contrast filter is "1" near the eyes and hair, as well as near the nose and lips. Second, the filter gives some false responses in regions that do not correspond to facial features. In order to clean the binary mask image and to generate an image better suited to eye localization, the horizontal contrast density determination is required.
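The filter just described can be sketched in Python with NumPy. The m x n window and the decreasing-order condition on the row sums follow the text above; the threshold value of 20 is a hypothetical choice, since the document does not specify one.

```python
import numpy as np

def horizontal_contrast_filter(gray, m=3, n=3, threshold=20):
    """Binary mask of pixels with high horizontal contrast.

    For each m x n window, m pixels are summed along each of the n rows
    (smoothing vertical structure); the pixel is marked "white" (1) when
    the maximum difference between the row sums exceeds `threshold` and
    the sums s1..sn are in decreasing order, as the text describes.
    The threshold value is a hypothetical choice.
    """
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h - n + 1):
        for x in range(w - m + 1):
            win = gray[y:y + n, x:x + m].astype(np.int32)
            s = win.sum(axis=1)                  # s1..sn: one sum per row
            high = s.max() - s.min() > threshold
            if high and np.all(np.diff(s) <= 0): # sums in decreasing order
                mask[y, x] = 1                   # "white" pixel
    return mask
```

The nested loops keep the sketch close to the per-window description; a production version would vectorize the row sums with a separable convolution.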
The horizontal contrast density determination is shown in Figure 4. The binary mask image output by the horizontal contrast filter is sent to the horizontal contrast density filter. A search for "white" pixels is performed on the binary mask image. A relatively large window, such as 30 by 15 pixels, is used to count the number of "white" pixels within this window for each "white" pixel in the binary mask image shown in Figure 5(b). In other words, for each "white" pixel, the number of "white" pixels in its vicinity within the window is counted. Since the number of "white" pixels within the local window can be seen as the density of pixels with high horizontal contrast, this number is referred to as the horizontal contrast density. Next, a threshold is applied to remove the output pixels with a contrast density below the threshold, to clean out the effects of noise and irrelevant features. Figure 5(c) shows the gray scale mask image representing the output of the horizontal contrast density filter. Figure 6 illustrates the facial geometry reasoning, where a priori information about the geometry of the facial features is used to verify the eye positions. Since the eyes usually have a very high (and probably maximum) horizontal contrast density, the maximum intensity in a given area of the gray scale mask image received from the horizontal contrast density filter is searched for as the first estimate. For most images, it can be assumed that the eyes are not located in the upper quarter of the image. Therefore, the upper quarter of the mask image can be skipped when searching for the eye locations. By eliminating these regions, the computational cost of the present invention is reduced. After the maximum pixel in the mask image is located, it is verified that this position really corresponds to one of the two eye positions.
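A minimal sketch of the density determination described above: the 30 by 15 window comes from the text, while the density threshold of 50 is a hypothetical value (none is given in the document).

```python
import numpy as np

def contrast_density_filter(mask, win_w=30, win_h=15, density_thresh=50):
    """Gray scale mask of horizontal contrast density.

    For every "white" pixel of the binary mask, count the white pixels
    inside a win_w x win_h neighbourhood centred on it; counts below
    density_thresh are suppressed as noise.  The intensity of the
    output image is the surviving count.
    """
    h, w = mask.shape
    density = np.zeros((h, w), dtype=np.int32)
    half_w, half_h = win_w // 2, win_h // 2
    for y, x in zip(*np.nonzero(mask)):          # visit white pixels only
        y0, y1 = max(0, y - half_h), min(h, y + half_h + 1)
        x0, x1 = max(0, x - half_w), min(w, x + half_w + 1)
        density[y, x] = mask[y0:y1, x0:x1].sum()
    density[density < density_thresh] = 0        # threshold out sparse responses
    return density
```

Restricting the count to white pixels, as the text specifies, keeps the cost proportional to the mask's white area rather than to the whole image.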
Using the fact that the two eyes should be located in the same image row, a horizontal strip with a width of 2k + 1 pixels (allowing for a small tilt of the head) is used. The column-wise sum (projection) of the pixels in this strip is then calculated. The result is a one-dimensional (1-D) curve C1, which has two significant peaks corresponding to the eye regions. If two significant peaks are not found, the search area is changed and the procedure is carried out again. Figure 7 illustrates a second embodiment of the facial geometry reasoning. This embodiment uses more information about the facial geometry to refine the verification procedure for the eye locations. One possible approach is to use additional information from the mouth to make the verification more robust. As shown in Figure 5(c), the horizontal contrast density filter usually has a strong response near the eyes, as well as near the mouth. After detecting the peaks in C1, the system looks for a strong mouth response below the eyes. Since the distance between the two peaks in curve C1 indicates the approximate distance between the two eyes, an approximate region for the mouth can be calculated. A one-dimensional (1-D) curve C2 for this region is then generated. A strong peak in C2 verifies the position of the mouth, which in turn verifies the position of the eyes. Figure 8 illustrates the eye position determination, which refines the approximate eye positions provided by the facial geometry reasoning of Figure 6 or 7. The original gray scale image of the face and the approximate eye positions provide the required inputs. A low pass filter is applied to the original gray scale image within small windows around the approximate eye positions. Then, a search for the minimum is performed within the small windows around the approximate eye locations, and the minimum positions, the output, are the positions of the irises. The present invention has been tested on video sequences of different people.
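The strip projection of Figure 6 and the refinement of Figure 8 can be sketched together as follows. The top-quarter skip, the 2k + 1 strip, the column projection and the minimum search follow the text; picking one peak per image half is a simplistic stand-in for the document's peak analysis, and the low pass smoothing step is omitted for brevity.

```python
import numpy as np

def locate_eyes(gray, density, k=5, win=7):
    """Estimate and refine the two eye positions.

    `density` is the gray scale mask from the contrast density filter,
    `gray` the original face image.  Returns (row, col) pairs for the
    left and right eye.
    """
    h, w = density.shape
    search = density[h // 4:, :]                  # skip the upper quarter
    row = h // 4 + int(np.argmax(search)) // w    # row of the density maximum
    strip = density[max(0, row - k):row + k + 1, :]
    proj = strip.sum(axis=0)                      # 1-D projection curve C1
    left_col = int(np.argmax(proj[:w // 2]))      # one peak per image half
    right_col = w // 2 + int(np.argmax(proj[w // 2:]))
    eyes = []
    for col in (left_col, right_col):
        y0, y1 = max(0, row - win), min(h, row + win + 1)
        x0, x1 = max(0, col - win), min(w, col + win + 1)
        patch = gray[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmin(patch), patch.shape)
        eyes.append((y0 + int(dy), x0 + int(dx))) # darkest pixel ~ iris
    return eyes
```

On a real mask the two projection peaks would be validated, and optionally cross-checked against a mouth peak as in Figure 7, before the refinement step.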
The test results were recorded under different indoor lighting conditions with minimal constraints. All the images were sub-sampled to a resolution of 256 x 256 pixels. The system needed approximately 200 msec on a SUN SPARC 10 workstation to locate both eyes in a 256 x 256 image. Figure 9 illustrates facial images of different people with a crosshair indicating the eye positions determined by the present invention. Figures 10a, 10b and 10c illustrate three representative frames of a typical video sequence with eye closure and variation in head size and orientation. Figure 10a represents the case where both eyes are closed. Figure 10b shows a change in the size of the head and a slight change in the orientation of the head. Figure 10c represents a change in the orientation of the head. Figure 11 illustrates the operation of the system with and without glasses. The present invention is simple, fast and robust against different eye shapes, face orientations and other factors such as glasses. Another distinct and important aspect of the present invention is that the system can detect the eye regions even when the eyes are closed. The system can operate very quickly on a general-purpose computer. As an example, for a facial image of 256 x 256 pixels, the system uses only 200 msec on a SUN SPARC 10 workstation. The present invention can also be implemented with specialized hardware for real-time operation. It is not intended that the present invention be limited to the hardware or software arrangement, or operational procedures, shown and described. This invention includes all alterations and variations thereof that are encompassed within the scope of the claims that follow.

Claims (18)

1. An eye localization filter, characterized in that it comprises: image means for scanning an individual to generate a gray scale image; and processing means for locating the positions of the two eyes of the individual based on the gray scale image, the processing means comprising: horizontal contrast computing filter means for generating a binary mask image from the gray scale image; horizontal contrast density determination filter means, provided with the binary mask image of the horizontal contrast computing filter means, to generate a gray scale mask image; facial geometry reasoning means, provided with the gray scale mask image of the horizontal contrast density determination filter means, to determine estimated positions of the two eyes; and eye position determination means, provided with the estimated positions of the two eyes of the facial geometry reasoning means, to determine the positions of the two eyes.
2. The eye localization filter according to claim 1, characterized in that the horizontal contrast computing filter means comprise: adder means, receiving the gray scale image of a face, to attenuate the vertical structures within a local filter window; and calculation means for horizontal structures, to calculate the maximum difference between the sum values, to analyze the maximum difference and to provide the binary mask image.
3. The eye localization filter according to claim 2, characterized in that the horizontal contrast density determination filter means comprise: pixel search means for searching for white pixels in the binary mask image; counter means for counting a number of white pixels within a local window for each white pixel; and threshold means for removing the output pixels with a contrast density below a threshold and for providing the gray scale mask image.
4. The eye localization filter according to claim 3, characterized in that the facial geometry reasoning means comprise: determining means for establishing a row having a maximum pixel value in a selected search area in the gray scale mask image; calculation means for calculating the column-wise sum of the pixels in a strip; and analysis means for establishing whether the strip has two peaks and for providing the estimated positions of the two eyes.
5. The eye localization filter according to claim 3, characterized in that the facial geometry reasoning means comprise: determining means for establishing a row having a maximum pixel value in a selected search area in the gray scale mask image; first calculation means for calculating the column-wise sum of the pixels in a first strip; first analysis means for establishing whether the first strip has two peaks; second calculation means for calculating the column-wise sum of the pixels in a second strip below the first strip; and second analysis means for establishing whether the second strip has a peak and for providing the estimated positions of the two eyes.
6. The eye localization filter according to claim 4, characterized in that the eye position determination means comprise: low pass filter means for filtering the gray scale image within small windows around the estimated positions of the two eyes; and search means for searching for the minimum pixel value within the small windows around the estimated positions of the two eyes and for outputting the positions of the two eyes.
7. The eye localization filter according to claim 5, characterized in that the eye position determination means comprise: low pass filter means for filtering the gray scale image within small windows around the estimated positions of the two eyes; and search means for searching for the minimum within the small windows around the estimated positions of the two eyes and for outputting the positions of the two eyes.
8. An eye localization filter, comprising: image forming means for scanning an individual to generate a gray scale image of the individual's face; and processor means connected to the image forming means, wherein the processor means comprise: horizontal contrast computing filter means for receiving the gray scale image of the face from the image forming means and for providing a binary mask image; horizontal contrast density determination filter means for receiving the binary mask image and for providing a gray scale mask image; facial geometry reasoning means for receiving the gray scale mask image and for providing approximate positions of the two eyes within the gray scale image; and eye position determination means for receiving the gray scale image of the face and the approximate positions of the two eyes and for providing the positions of the two eyes.
9. The eye localization filter according to claim 8, characterized in that the horizontal contrast computing filter means comprise: adder means, receiving the gray scale image of a face, to attenuate the vertical structures within a filter window; and calculation means for horizontal structures, to calculate the maximum difference between the sum values, to analyze the maximum difference and to provide the binary mask image.
10. The eye localization filter according to claim 9, characterized in that the horizontal contrast density determination filter means comprise: pixel search means for searching for white pixels in the binary mask image; counter means for counting a number of white pixels within a local window for each white pixel; and threshold means for removing the output pixels with a contrast density below a threshold and for providing the gray scale mask image.
11. The eye localization filter according to claim 10, characterized in that the facial geometry reasoning means comprise: determination means for establishing a row having a maximum pixel value in a selected search area in the gray scale mask image; calculation means for calculating the column-wise sum of the pixels in a strip; and analysis means for establishing whether the strip has two peaks and for providing the estimated positions of the two eyes.
12. The eye localization filter according to claim 10, characterized in that the facial geometry reasoning means comprise: determination means for establishing a row having a maximum pixel value in a selected search area in the gray scale mask image; first calculation means for calculating the column-wise sum of the pixels in a first strip; first analysis means for establishing whether the first strip has two peaks; second calculation means for calculating the column-wise sum of the pixels in a second strip below the first strip; and second analysis means for establishing whether the second strip has a peak and for providing the estimated positions of the two eyes.
13. The eye localization filter according to claim 12, characterized in that the eye position determination means comprise: low pass filter means for filtering the gray scale image within small windows around the estimated positions of the two eyes; and search means for searching for the minimum pixel value within the small windows around the estimated positions of the two eyes and for outputting the positions of the two eyes.
14. A method for locating eyes, characterized in that it comprises the steps of: scanning an individual with a camera to generate a gray scale image of the individual's face; and processing the scanned image, wherein the processing step comprises: performing horizontal contrast computation filtering of the gray scale image of the face to provide a binary mask image; performing horizontal contrast density determination filtering of the binary mask image to provide a gray scale mask image; performing facial geometry reasoning on the gray scale mask image to provide approximate positions of the two eyes; and performing eye position determination from the gray scale image of the face and the approximate positions of the two eyes to provide the positions of the two eyes.
15. The method for locating eyes according to claim 14, characterized in that the horizontal contrast computation filtering comprises the steps of: performing a sum in the horizontal direction over the gray scale image of the face to thereby smooth the vertical structures within a filter window; calculating the maximum difference between the sum values; analyzing the maximum difference; and providing the binary mask image.
16. The method for locating eyes according to claim 15, characterized in that the horizontal contrast density determination filtering comprises the steps of: searching for white pixels in the binary mask image; counting the number of white pixels within a local window for each white pixel; removing the output pixels with a contrast density below a threshold; and providing the gray scale mask image.
17. The method for locating eyes according to claim 16, characterized in that the facial geometry reasoning comprises the steps of: establishing a row having a maximum pixel value in a selected search area in the gray scale mask image; calculating the column-wise sum of the pixels in a strip; analyzing whether the strip has two peaks; and providing the approximate positions of the two eyes.
18. The method for locating eyes according to claim 17, characterized in that the eye position determination comprises the steps of: filtering the gray scale image within small windows around the approximate positions of the two eyes; searching for the minimum within the small windows around the approximate positions of the two eyes; and outputting the positions of the two eyes.
MXPA/A/1997/009328A 1995-06-02 1997-12-01 Localization filter MXPA97009328A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US46061095A 1995-06-02 1995-06-02
US460610 1995-06-02

Publications (2)

Publication Number Publication Date
MX9709328A (en) 1998-08-30
MXPA97009328A (en) 1998-11-12
