WO2016106617A1 - Human eye positioning method and device (人眼定位方法及装置) - Google Patents

Human eye positioning method and device (人眼定位方法及装置)

Info

Publication number
WO2016106617A1
WO2016106617A1 (PCT application PCT/CN2014/095742)
Authority
WO
WIPO (PCT)
Prior art keywords
mask
human eye
point
image
corner point
Prior art date
Application number
PCT/CN2014/095742
Other languages
English (en)
French (fr)
Inventor
何康
罗莎莎
Original Assignee
深圳Tcl数字技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳Tcl数字技术有限公司
Publication of WO2016106617A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/193 - Preprocessing; feature extraction

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a human eye positioning method and apparatus.
  • face recognition technology plays an increasingly important role; in particular, richly shaped facial parts such as the eyes and mouth can be analyzed to realize expression recognition and age recognition.
  • the existing technology mainly relies on edges or contours, for example using the left and right corner points plus the contour for positioning. However, the contour is easily disturbed by external factors, especially when glasses are worn: because glasses affect the contour of the human eye, the positioning becomes inaccurate. In addition, when the background is complicated or the lighting varies, positioning is affected further, and the positioning effect is unsatisfactory.
  • the main object of the present invention is to provide a human eye positioning method and device, which aims to solve the technical problem that the positioning of the human eye is not accurate enough.
  • the present invention provides a human eye positioning method, and the human eye positioning method includes the following steps:
  • the feature point including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
  • the position of the feature point of the human eye in the human eye image is acquired based on the coarse position information of the feature point of the human eye acquired in advance from the human eye image and the position information of the strong corner point.
  • the step of establishing a mask according to the symmetrical feature of the feature points of the human eye comprises:
  • a mask corresponding to the shape of the human eye is established, wherein the number of pixels in the mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point of the mask toward the center of the mask, and the number of pixels corresponding to the inner corner point is likewise in an arithmetic progression from the edge of the inner corner point toward the center;
  • the step of establishing a mask according to the symmetrical feature of the feature points of the human eye comprises:
  • a first mask including only the outer corner point;
  • a second mask including only the inner corner point;
  • a third mask including only the highest corner point;
  • a fourth mask including only the lowest corner point;
  • the first mask, the second mask, the third mask, and the fourth mask combine to correspond to the shape of a human eye, wherein the number of pixels in the first mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point of the first mask toward the center of the first mask, and the number of pixels in the second mask corresponding to the inner corner point is in an arithmetic progression from the edge of the inner corner point of the second mask toward the center of the second mask.
  • the step of acquiring a mask image based on the pre-acquired human eye image and the mask includes:
  • the center position and the center of gravity position of the mask image are calculated, and the mask image is corrected according to the center position and the position of the center of gravity.
  • the step of moving the mask on the human eye image and performing pixel value comparison, and acquiring valid pixel points on the human eye image according to the comparison result includes:
  • I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask;
  • I(o) is the pixel value of the human eye image at the currently calculated candidate feature point corresponding to the mask;
  • C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask;
  • i is the pixel position within the mask;
  • t is the pixel-difference threshold.
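The comparison formula itself is not reproduced in this text. From the definitions above, one plausible reading (in the spirit of the SUSAN-style comparison) is that C(i) = 1 when |I(i) - I(o)| <= t and 0 otherwise. The sketch below implements that assumed rule; the function name, the sample patch, and the threshold value are illustrative, not the patent's exact formula.

```python
import numpy as np

def effective_pixels(image_patch, nucleus_value, t=20):
    """Mark pixels whose grey value is within t of I(o), the value at the
    candidate feature point (the "nucleus"), as effective pixels.

    image_patch   -- 2-D array of grey values under the mask
    nucleus_value -- I(o), the grey value at the candidate feature point
    t             -- pixel-difference threshold (assumed rule)
    Returns a binary array C with C[i] == 1 for effective pixels.
    """
    diff = np.abs(image_patch.astype(int) - int(nucleus_value))
    return (diff <= t).astype(np.uint8)

# Small illustrative patch: low values resemble the nucleus, high ones do not.
patch = np.array([[10, 12, 200],
                  [11, 10, 180],
                  [ 9, 13,  11]])
c = effective_pixels(patch, nucleus_value=10, t=5)
```

Pixels with C(i) = 1 would then be collected per corner point into the effective pixel areas described below.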
  • the step of acquiring position information of the strong corner point from the human eye image according to the mask image comprises:
  • the position information of the strong corner points corresponding to the feature points is obtained from the human eye image according to the effective pixel points of the mask image and the preset strong corner point algorithm.
  • the step of acquiring position information of the strong corner point from the human eye image according to the mask image comprises:
  • the position information of the strong corner points corresponding to the feature points is obtained from the human eye image according to the effective pixel points of the mask image after false-corner removal processing and a preset strong corner point algorithm.
  • the step of acquiring the position of the feature points of the human eye in the human eye image based on the rough position information of the feature points of the human eye acquired in advance from the human eye image and the position information of the strong corner points includes:
  • the present invention further provides a human eye positioning device, the human eye positioning device comprising:
  • an establishing module configured to establish a mask according to a symmetry feature of the feature points of the human eye, where the feature points include an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
  • a first acquiring module configured to acquire a mask image based on the pre-acquired human eye image and the mask
  • a strong corner point acquiring module configured to obtain position information of a strong corner point from a human eye image according to the mask image
  • a second acquiring module configured to acquire a position of the feature point of the human eye in the human eye image based on the coarse position information of the feature point of the human eye acquired in advance from the human eye image and the position information of the strong corner point.
  • the establishing module is further configured to establish a mask corresponding to the shape of the human eye, wherein the number of pixels in the mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point of the mask toward the center of the mask, and the number of pixels corresponding to the inner corner point is in an arithmetic progression from the edge of the inner corner point of the mask toward the center of the mask.
  • the establishing module is further configured to respectively establish a first mask including only the outer corner point, a second mask including only the inner corner point, a third mask including only the highest corner point, and a fourth mask including only the lowest corner point, wherein the first mask, the second mask, the third mask, and the fourth mask combine to correspond to the shape of the human eye; the number of pixels in the first mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point of the first mask toward its center, and the number of pixels in the second mask corresponding to the inner corner point is in an arithmetic progression from the edge of the inner corner point of the second mask toward its center.
  • the first obtaining module comprises:
  • a comparison unit configured to move the mask on the human eye image and perform pixel value comparison, and obtain valid pixel points on the human eye image according to the comparison result
  • a processing unit configured to perform statistics on the effective pixel points, perform thresholding and non-maximum value suppression processing, and acquire a mask image
  • a correction unit configured to calculate a center position and a center of gravity position of the mask image, and correct the mask image according to the center position and the center of gravity position.
  • the comparing unit is specifically configured to calculate C(i) and obtain a pixel point with a value of C(i) of 1 as the effective pixel point:
  • I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask;
  • I(o) is the pixel value of the human eye image at the currently calculated candidate feature point corresponding to the mask;
  • C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask;
  • i is the pixel position within the mask;
  • t is the pixel-difference threshold.
  • the strong corner point acquiring module is specifically configured to acquire position information of the strong corner point corresponding to the feature point from the human eye image according to the effective pixel point of the mask image and the preset strong corner point algorithm.
  • the strong corner point acquiring module is specifically configured to perform false-corner removal processing on the mask image, and to obtain the position information of the strong corner points corresponding to the feature points from the human eye image according to the effective pixel points of the processed mask image and a preset strong corner point algorithm.
  • the second obtaining module comprises:
  • a calculation unit configured to calculate the center position of the pupil based on the rough position information of the feature points of the human eye acquired in advance from the human eye image;
  • an obtaining unit configured to obtain, for each strong corner point, the distance to the line connecting the corresponding human eye feature point and the pupil center, the position of the strong corner point with the smallest distance being used as the position of the feature point of the human eye in the human eye image.
  • the selected feature points of the human eye include an outer corner point, an inner corner point, a highest corner point, and a lowest corner point. A mask is established according to the symmetry of these four corner points, so that the shape of the mask is close to the shape of the human eye; a mask image is then obtained from the human eye image and the mask, strong corner points are obtained from the human eye image according to the mask image, and the optimal pixel point is selected according to the rough positions of the feature points and the positions of the strong corner points, its position being taken as the position of the feature point of the human eye. Because the present invention establishes the mask from a plurality of corner points, and the area of the mask is smaller than the human eye image, the feature points can still be obtained through the mask under external interference or when glasses are worn, so the feature points of the human eye are positioned accurately.
  • FIG. 1 is a schematic flow chart of an embodiment of a human eye positioning method according to the present invention.
  • FIG. 2 is a schematic view showing characteristic points of a human eye of the present invention.
  • FIG. 3 is a schematic view of a mask plate established in accordance with the present invention.
  • FIG. 4 is a schematic flowchart of the refinement of step S102 in FIG. 1;
  • FIG. 5 is a schematic flowchart of the refinement of step S104 in FIG. 1;
  • FIG. 6 is a schematic diagram of functional modules of an embodiment of a human eye positioning device according to the present invention.
  • FIG. 7 is a schematic diagram of a refinement function module of the first acquisition module in FIG. 6;
  • FIG. 8 is a schematic diagram of a refinement function module of the second acquisition module in FIG. 6.
  • the human eye positioning method includes:
  • Step S101 establishing a mask according to a symmetrical feature of a feature point of the human eye, the feature point including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
  • the positions of the two eyes are each represented by four feature points. As shown in FIG. 2, taking the person's right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03, and the lowest corner point is 04.
  • the left eye and the right eye are similarly designed and processed.
  • the present invention needs to accurately position the four feature points.
  • the mask is designed according to the physical shape distribution characteristics of the human eye, that is, the symmetry.
  • specifically, a window template corresponding to the shape of the human eye is built pixel by pixel, wherein the pixels at the positions corresponding to the outer corner point and the inner corner point in the window template are in an arithmetic progression from the edge to the center. The completed window template is the mask, and the area of the mask is smaller than the area of the human eye image.
  • the mask of the present embodiment is approximately rhombic (the shaded portion), established to correspond to the shape of the human eye, with 01, 02, 03, and 04 corresponding to the outer corner point, inner corner point, highest corner point, and lowest corner point respectively.
  • the mask is a binary image of preset (fixed) size; the relative positions of the four corner points in the template are fixed and conform to the shape characteristics of the human eye. The area enclosed by the four corner points on the mask is the effective pixel area with pixel value 1, while pixels outside the area have value 0, so that interference from non-eye regions of the human eye image is removed through the effective pixels.
  • the number of pixels in the area of the mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point toward the center of the mask (the portion enclosed by the dotted line), and likewise for the area corresponding to the inner corner point. For the outer corner point 01 (or inner corner point 02) of the right eye, the number of pixels in the first column is 1, in the second column 3, in the third column 5, and so on; the first column is the edge nearest the corner point, and the third column is nearer the center of the mask. The number of pixel rows need not equal the number of columns, nor need it form an arithmetic progression.
  • the number of column pixels of the mask is larger than the number of row pixels, mainly because the curvature of the human eye at the highest corner point (or lowest corner point) is greater than that at the outer corner point (or inner corner point).
  • alternatively, either a single mask including the outer corner point, inner corner point, highest corner point, and lowest corner point may be established, or a separate mask may be established for each corner point: one for the outer corner point, one for the inner corner point, one for the highest corner point, and one for the lowest corner point, four masks in total, which merge to correspond to the shape of the human eye. Similarly, the number of pixels of the two masks corresponding to the outer and inner corner points is in an arithmetic progression from the edge of the corner-point position on the mask toward the center.
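As an illustration of the pixel layout described above, the sketch below builds a small, roughly rhombic binary mask whose columns near the outer and inner corner edges contain 1, 3, 5, ... effective pixels, i.e. an arithmetic progression toward the center. The dimensions and the `eye_mask` helper are illustrative assumptions, not the patent's exact template.

```python
import numpy as np

def eye_mask(width=15, height=7):
    """Build a roughly rhombic binary mask approximating an eye shape.

    Columns near the left/right edges (the outer and inner corner points)
    contain 1, 3, 5, ... effective pixels, growing toward the center.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    cy = height // 2
    for x in range(width):
        # distance from the nearer vertical edge; 0 at the corner columns
        d = min(x, width - 1 - x)
        half = min(d, cy)                 # half-height of this column
        mask[cy - half: cy + half + 1, x] = 1
    return mask

m = eye_mask()
```

The column sums of `m` near either edge are 1, 3, 5, 7, matching the arithmetic progression in the text; rows are not constrained to the same progression.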
  • Step S102 acquiring a mask image based on the pre-acquired human eye image and the mask
  • the mask is moved on the human eye image, and the pixels on the mask are compared with the pixel values of the pixels on the human eye image to obtain effective pixel points in the human eye image, wherein
  • the set of effective pixel points of each corner point constitutes an effective pixel area of the corresponding corner point in the human eye image.
  • the effective pixel points in the human eye image are thresholded to obtain a mask image; however, the mask image contains a large number of pseudo features, so it must be corrected in order to finally obtain a mask image containing only a small number of effective pixels.
  • Step S103 acquiring position information of a strong corner point from the human eye image according to the mask image
  • this step includes two ways of obtaining location information of strong corner points.
  • the first manner of acquiring the position information of the strong corner points in step S103 obtains, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the effective pixel points of the mask image and a preset strong corner point algorithm. That is, the positions of the effective pixel points in the human eye image are determined from their pixel positions in the mask image, and the preset strong corner point algorithm is then applied to the pixels at those positions in the human eye image to determine the position information of the strong corner points.
  • the second manner of acquiring the position information of the strong corner points in step S103 first performs false-corner removal on the mask image: a preset strong corner algorithm is applied to the mask image to compute a response function value, and the response value is compared with a preset threshold to decide whether each effective pixel in the mask image is a strong corner point. This further screens the effective pixels, eliminating a large number of pseudo corner points and yielding the positions of the strong corner points on the mask image. The strong corner points of the processed mask image are then used as its effective pixel points, and the preset strong corner point algorithm obtains the position information of the strong corner points of the human eye image from the human eye image.
  • obtaining the position information of the strong corner points corresponding to the feature points from the human eye image according to a preset strong corner point algorithm includes: for the rough positions of the outer corner point, inner corner point, highest corner point, and lowest corner point of the human eye, computing a corner response function value with a preset strong corner point algorithm (such as the Harris algorithm), defining a local non-maximum suppression range according to the mask image and performing non-maximum suppression, and finally deciding the strong corner points of the human eye image by whether the response function value of each pixel meets a preset corner point threshold.
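A minimal sketch of this step, assuming a plain Harris response followed by thresholding and 3x3 non-maximum suppression, optionally restricted to the effective pixels of a mask image. The window size, the constant k, the threshold ratio, and the synthetic test image are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def box_sum3(a):
    """3x3 box filter implemented with padded shifts (no SciPy needed)."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # gradients along rows (y) and cols (x)
    Sxx = box_sum3(Ix * Ix)
    Syy = box_sum3(Iy * Iy)
    Sxy = box_sum3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2

def strong_corners(img, mask=None, k=0.04, ratio=0.1):
    """Threshold the response and apply 3x3 non-maximum suppression,
    optionally restricted to the effective pixels of a mask image."""
    R = harris_response(img, k)
    if mask is not None:
        R = np.where(mask > 0, R, -np.inf)   # keep only effective pixels
    t = ratio * R.max()
    pts = []
    h, w = R.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if R[y, x] > t and R[y, x] == R[y - 1:y + 2, x - 1:x + 2].max():
                pts.append((x, y))
    return pts

# Synthetic demo: a bright square whose four corners should respond strongly.
demo = np.zeros((20, 20))
demo[5:15, 5:15] = 255.0
pts = strong_corners(demo)
```

In practice the mask argument would carry the effective pixels of the mask image from step S102, so that only candidate corner regions are scanned.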
  • Step S104 acquiring the position of the feature point of the human eye in the human eye image based on the coarse position information of the feature point of the human eye acquired in advance from the human eye image and the position information of the strong corner point.
  • a face detection algorithm (such as one based on Haar features) may be used first, and an AAM (Active Appearance Model) then used to obtain the rough position information of the eight feature points of the left and right eyes together with the center position of the pupil. Alternatively, AdaBoost (iterative training) may be used to train an eye corner detector: the position information of the inner and outer corner points is obtained from the classifier, and the positions of the highest corner point and the lowest corner point are then obtained by back projection.
  • using the point-to-line distance between each strong corner point and the connecting line, the optimal pixel point is selected from the strong corner points described above: the strong corner point closest to the line is taken as the feature point of the human eye, and its position as the position of that feature point.
  • in summary, the selected feature points of the human eye include an outer corner point, an inner corner point, a highest corner point, and a lowest corner point. A mask is established according to the symmetry of these four corner points so that its shape is close to the shape of the human eye; a mask image is obtained from the human eye image and the mask, strong corner points are obtained from the human eye image according to the mask image, and the optimal pixel point is selected according to the rough positions of the feature points and the positions of the strong corner points, its position being taken as the position of the feature point of the human eye. Because a plurality of corner points is used and the area of the mask is smaller than the human eye image, the feature points can still be obtained through the mask under external interference or when glasses are worn, so the feature points of the human eye are positioned accurately.
  • the foregoing step S102 includes:
  • Step S1021 moving the mask on the human eye image and performing pixel value comparison, and acquiring effective pixel points on the human eye image according to the comparison result;
  • Step S1022 performing statistics on the effective pixel points and performing thresholding and non-maximum value suppression to obtain a mask image;
  • Step S1023 calculating a center position and a center of gravity position of the mask image, and correcting the mask image according to the center position and the position of the center of gravity.
  • the mask and the human eye image use the same coordinates, and the positions of the pixels of the two correspond.
  • I(i) refers to the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask;
  • I(o) refers to the pixel value of the human eye image at the currently calculated candidate feature point corresponding to the mask;
  • C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask; when C(i) equals 1, the pixel is an effective pixel. In this embodiment, the threshold g takes a value in [1, 55].
  • the mask image mask(o) obtained in the above manner includes a large number of pseudo features; it is therefore corrected using the center position and the center of gravity of the mask image mask(o). The position of the center of gravity is calculated as follows:
  • (x0, y0) is the position of the center of gravity
  • x is the abscissa (horizontal coordinate)
  • y is the ordinate (vertical coordinate)
  • m is the total number of all pixel points of the mask image mask(o).
  • a threshold g is set according to the number of effective pixels; when the distance between the center position of the mask image mask(o) and the position of its center of gravity is larger than the threshold, the mask image mask(o) is corrected.
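The centroid formula is not reproduced above, but from the definitions ((x0, y0) as the center of gravity and m the number of pixels counted) it is the mean coordinate over the effective pixels. A sketch of the correction test follows; the helper names and the default threshold are illustrative assumptions.

```python
import numpy as np

def mask_centroid(mask_img):
    """Center of gravity (x0, y0) of a binary mask image:
    the mean x and y over all effective (value-1) pixels."""
    ys, xs = np.nonzero(mask_img)
    return float(xs.mean()), float(ys.mean())

def needs_correction(mask_img, g=2.0):
    """Compare the geometric center of the mask image with its centroid;
    if they differ by more than the threshold g, the mask image should
    be corrected (e.g. pseudo features trimmed)."""
    h, w = mask_img.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0      # geometric center
    x0, y0 = mask_centroid(mask_img)
    return bool(np.hypot(x0 - cx, y0 - cy) > g)
```

A symmetric mask image passes the test unchanged; one whose effective pixels cluster to one side triggers correction.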
  • the foregoing step S104 includes:
  • Step S1041 calculating a center position of the pupil based on the coarse position information of the feature points of the human eye acquired in advance from the human eye image;
  • Step S1042 Obtain a distance between each strong corner point and a line connecting the feature point of the corresponding human eye and the center position of the pupil, and the position of the strong corner point corresponding to the smallest distance is used as the human eye in the human eye image. The location of the feature points.
  • after the screening, relatively few candidate pixel points remain, and the best corner point among them is extracted as the eye feature point. This includes: acquiring the center position of the pupil, and selecting among the strong corner points according to the distance between each strong corner point and the line through the pupil center and the rough position of the corresponding feature point. The larger the distance L, the lower the weight, and the candidate can be removed; the smaller the distance L, the larger the weight of the corresponding strong corner point.
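The selection rule above (the strong corner with the smallest point-to-line distance wins) can be sketched as follows; the function names and the sample coordinates are illustrative assumptions.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

def best_strong_corner(candidates, rough_pt, pupil_center):
    """Pick the strong corner closest to the line joining the rough
    feature-point position and the pupil center."""
    return min(candidates,
               key=lambda c: point_line_distance(c, rough_pt, pupil_center))

# Illustrative call: line along the x-axis, three candidate strong corners.
best = best_strong_corner([(5, 3), (6, 1), (2, 4)], (0, 0), (10, 0))
```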
  • the rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the human eye is then estimated from this value.
  • the human eye positioning device includes:
  • an establishing module 101 configured to establish a mask according to a symmetry feature of the feature points of the human eye, where the feature points include an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
  • the positions of the two eyes are each represented by four feature points. As shown in FIG. 2, taking the person's right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03, and the lowest corner point is 04.
  • the left eye and the right eye are similarly designed and processed.
  • the present invention needs to accurately position the four feature points.
  • the mask is designed according to the physical shape distribution characteristic of the human eye, that is, its symmetry. Specifically, a window template corresponding to the shape of the human eye is built pixel by pixel, wherein the pixels at the positions corresponding to the outer corner point and the inner corner point in the window template are in an arithmetic progression from the edge to the center. The completed window template is the mask, and the area of the mask is smaller than the area of the human eye image.
  • the mask of the present embodiment is approximately rhombic (the shaded portion), established to correspond to the shape of the human eye, with 01, 02, 03, and 04 corresponding to the outer corner point, inner corner point, highest corner point, and lowest corner point respectively.
  • the mask is a binary image of preset (fixed) size; the relative positions of the four corner points in the template are fixed and conform to the shape characteristics of the human eye. The area enclosed by the four corner points on the mask is the effective pixel area with pixel value 1, while pixels outside the area have value 0, so that interference from non-eye regions of the human eye image is removed through the effective pixels.
  • the number of pixels in the area of the mask corresponding to the outer corner point is in an arithmetic progression from the edge of the outer corner point toward the center of the mask (the portion enclosed by the dotted line), and likewise for the area corresponding to the inner corner point. For the outer corner point 01 (or inner corner point 02) of the right eye, the number of pixels in the first column is 1, in the second column 3, in the third column 5, and so on; the first column is the edge nearest the corner point, and the third column is nearer the center of the mask. The number of pixel rows need not equal the number of columns, nor need it form an arithmetic progression.
  • the number of column pixels of the mask is larger than the number of row pixels, mainly because the curvature of the human eye at the highest corner point (or lowest corner point) is greater than that at the outer corner point (or inner corner point).
  • alternatively, either a single mask including the outer corner point, inner corner point, highest corner point, and lowest corner point may be established, or a separate mask may be established for each corner point: one for the outer corner point, one for the inner corner point, one for the highest corner point, and one for the lowest corner point, four masks in total, which merge to correspond to the shape of the human eye. Similarly, the number of pixels of the two masks corresponding to the outer and inner corner points is in an arithmetic progression from the edge of the corner-point position on the mask toward the center.
  • the first obtaining module 102 is configured to acquire a mask image based on the pre-acquired human eye image and the mask;
  • the mask is moved on the human eye image, and the pixels on the mask are compared with the pixel values of the pixels on the human eye image to obtain effective pixel points in the human eye image, wherein
  • the set of effective pixel points of each corner point constitutes an effective pixel area of the corresponding corner point in the human eye image.
  • the effective pixel points in the human eye image are thresholded to obtain a mask image; however, the mask image contains a large number of pseudo features, so it must be corrected in order to finally obtain a mask image containing only a small number of effective pixels.
  • A strong-corner obtaining module 103 is configured to obtain position information of strong corner points from the human eye image according to the mask image.
  • The strong-corner obtaining module 103 can obtain the position information of the strong corner points in two ways.
  • The first way comprises: obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm; that is, the positions of the valid pixel points in the human eye image are determined according to their pixel positions in the mask image, and the preset strong-corner algorithm is then applied to the pixels at those positions to determine the position information of the strong corner points in the human eye image.
  • The second way comprises: the strong-corner obtaining module 103 first performs pseudo-feature removal on the mask image. This specifically comprises applying a predetermined strong-corner algorithm to the mask image to compute its response function values, and judging, by comparing the response function values with a preset threshold, whether the valid pixel points in the mask image are strong corner points, thereby further screening the valid pixel points, eliminating a large number of pseudo corner points, and obtaining the positions of the strong corner points on the mask image; the strong corner points of the de-falsified mask image are then used as its valid pixel points, and the position information of the strong corner points of the human eye image is obtained from the human eye image with the preset strong-corner algorithm.
  • In both ways, obtaining the position information of the strong corner points corresponding to the feature points from the human eye image with the preset strong-corner algorithm comprises: for the rough positions of the outer, inner, highest, and lowest corner points of the human eye, computing the corner response function values with the Harris algorithm, defining the local range of non-maximum suppression according to the mask image and performing non-maximum suppression, and finally determining the strong corner points of the human eye image by whether the response function value of a pixel satisfies a preset corner threshold.
  • The second obtaining module 104 is configured to obtain the positions of the feature points of the human eye in the human eye image based on the rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points.
  • A face detection algorithm such as one based on Haar features may first be used to obtain the position of the face, from which the rough positions of the feature points and the center of the pupil are obtained; alternatively, a cascaded eye-corner detector trained with AdaBoost (iterative training) locates the eye corners (i.e., the outer corner points), the position information of the inner and outer corner points is obtained from the classifier, and the positions of the highest and lowest corner points of the eye are then obtained by back projection.
  • For the line between the center of the pupil and the rough position of each feature point, the best pixel point is selected from the above strong corner points according to the point-to-line distance between that line and each strong corner point, and its position is taken as the position of the feature point of the human eye, where the strong corner point with the smallest distance is the feature point of the human eye.
  • The first obtaining module 102 comprises:
  • a comparing unit 1021 configured to move the mask over the human eye image, perform pixel-value comparison, and obtain valid pixel points on the human eye image according to the comparison result;
  • a processing unit 1022 configured to accumulate the valid pixel points and perform thresholding and non-maximum suppression to obtain the mask image;
  • a correcting unit 1023 configured to calculate the center position and the center-of-gravity position of the mask image and correct the mask image according to them.
  • The mask and the human eye image use the same coordinate system, and their pixel positions correspond.
  • I(i) refers to the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask.
  • I(o) refers to the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask.
  • C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask, as follows:
  • a pixel with C(i) = 1 is a valid pixel point; in this embodiment,
  • g is a threshold between [1, 55].
  • The mask image mask(o) obtained in the above manner contains a large number of pseudo features, so it needs to be corrected according to its center position and center-of-gravity position, the center of gravity being computed with:
  • (x0, y0), the center-of-gravity position;
  • x, the abscissa axis;
  • y, the ordinate axis;
  • m, the total number of pixel points of the mask image mask(o).
  • A threshold g is set, g being a number of effective pixels; when the distance between the center position and the center-of-gravity position of the mask image mask(o) is larger than this threshold, the mask image mask(o) is corrected.
  • The second obtaining module 104 comprises:
  • a calculating unit 1041 configured to calculate the center position of the pupil based on the rough position information of the feature points of the human eye previously obtained from the human eye image;
  • an obtaining unit 1042 configured to obtain, for each strong corner point, the distance to the line through the corresponding feature point of the human eye and the center of the pupil, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
  • After the strong corner points are obtained, relatively few candidate pixel points remain, and the best corner point is extracted from these candidates as the eye feature point. This comprises: obtaining the center position of the pupil based on the feature image, and selecting the feature point according to the distance between each strong corner point and the line connecting the pupil center with the rough position of the corresponding feature point: the larger the distance L, the lower the weight, and the point can be removed; the smaller the distance L, the larger the weight, and the corresponding strong corner point is retained.
  • The rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the human eye can be calculated from this value.
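The selection rule above — keep the strong corner point closest to the line through the pupil center and a feature point's rough position — is plain 2-D geometry. A minimal Python sketch; the function names and point values are illustrative assumptions, not the patent's implementation:

```python
import math

def point_to_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b
    (here: the pupil centre and a feature point's rough position)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den

def best_strong_corner(corners, pupil, rough):
    """Pick the strong corner with the smallest point-to-line distance,
    i.e. the candidate with the highest weight in the patent's terms."""
    return min(corners, key=lambda c: point_to_line_distance(c, pupil, rough))

# example: pupil at the origin, rough feature position on the x-axis
print(best_strong_corner([(5, 3), (5, 1), (5, -4)], (0, 0), (10, 0)))
```

The weight described in the text is simply the inverse ranking by this distance, so taking the minimum-distance candidate is equivalent to taking the maximum-weight one.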


Abstract

The present invention discloses a human eye positioning method and apparatus. The human eye positioning method comprises the following steps: establishing a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point; acquiring a mask image based on a pre-acquired human eye image and the mask; obtaining position information of strong corner points from the human eye image according to the mask image; and obtaining the positions of the feature points of the human eye in the human eye image based on rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points. The present invention can accurately locate the feature points of the human eye.

Description

Human Eye Positioning Method and Apparatus — Technical Field
The present invention relates to the field of image processing technologies, and in particular to a human eye positioning method and apparatus.
Background
In the field of human-computer interaction, face recognition technology plays an increasingly important role, especially for feature-rich parts of the face such as the eyes and the mouth, which can be analyzed to realize expression recognition, age recognition, and the like. At present, human eye positioning techniques mainly rely on edges or contours, for example using the left and right corner points together with the eye contour. However, the contour is easily disturbed by external factors; in particular, when the person wears glasses, the glasses affect the eye contour and make the positioning inaccurate. In addition, complex backgrounds and illumination further degrade the positioning, so the positioning effect is unsatisfactory.
Summary of the Invention
The main objective of the present invention is to provide a human eye positioning method and apparatus, aiming to solve the technical problem that human eye positioning is not sufficiently accurate.
To achieve the above objective, the present invention provides a human eye positioning method, comprising the following steps:
establishing a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
acquiring a mask image based on a pre-acquired human eye image and the mask;
obtaining position information of strong corner points from the human eye image according to the mask image;
obtaining the positions of the feature points of the human eye in the human eye image based on rough position information of the feature points of the human eye previously obtained from the human eye image and the position information of the strong corner points.
Preferably, the step of establishing a mask according to the symmetric features of the feature points of the human eye comprises:
establishing a mask corresponding to the shape of the human eye, wherein the number of pixels at the position in the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the mask toward the center of the mask, and the number of pixels at the position in the mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the mask toward the center of the mask.
Preferably, the step of establishing a mask according to the symmetric features of the feature points of the human eye comprises:
separately establishing a first mask containing only the outer corner point, a second mask containing only the inner corner point, a third mask containing only the highest corner point, and a fourth mask containing only the lowest corner point, wherein the first, second, third, and fourth masks, when merged, correspond to the shape of the human eye; the number of pixels at the position in the first mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the first mask toward the center of the first mask; and the number of pixels at the position in the second mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the second mask toward the center of the second mask.
Preferably, the step of acquiring a mask image based on the pre-acquired human eye image and the mask comprises:
moving the mask over the human eye image and performing pixel-value comparison, and obtaining valid pixel points on the human eye image according to the comparison result;
accumulating the valid pixel points and performing thresholding and non-maximum suppression to obtain the mask image;
calculating the center position and the center-of-gravity position of the mask image, and correcting the mask image according to the center position and the center-of-gravity position.
Preferably, the step of moving the mask over the human eye image, performing pixel-value comparison, and obtaining the valid pixel points on the human eye image according to the comparison result comprises:
calculating C(i) and taking the pixel points for which the value of C(i) is 1 as the valid pixel points:
Figure PCTCN2014095742-appb-000001
where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask; I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask; C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask; i is the pixel position within the mask; t is a pixel-difference threshold; and pixel points with C(i) = 1 are the valid pixel points,
Figure PCTCN2014095742-appb-000002
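A minimal sketch of the C(i) comparison for one placement of the mask. Since the formula itself is reproduced only as an image in this publication, the exact comparison rule (absolute pixel difference against the threshold t) and the parameter values below are assumptions inferred from the surrounding text:

```python
import numpy as np

def valid_pixel_count(patch, mask, t=10):
    """Count the C(i) = 1 pixels inside one mask placement.

    patch : grayscale image window, same shape as the mask
    mask  : binary mask (1 = effective pixel)
    t     : pixel-difference threshold (assumed value; the text says to pick
            a smaller t for low-contrast regions)
    """
    # I(o): image value at the currently computed feature-point position,
    # taken here as the window centre (an assumption for this sketch)
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    # C(i) = 1 where the mask is effective and |I(i) - I(o)| <= t
    c = (np.abs(patch.astype(int) - int(center)) <= t) & (mask == 1)
    return int(c.sum())
```

Summing these counts over all placements and thresholding them is what produces the mask image mask(o) described later in the text.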
Preferably, the step of obtaining position information of strong corner points from the human eye image according to the mask image comprises:
obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm.
Preferably, the step of obtaining position information of strong corner points from the human eye image according to the mask image comprises:
performing pseudo-feature removal on the mask image;
obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image after pseudo-feature removal and a preset strong-corner algorithm.
Preferably, the step of obtaining the positions of the feature points of the human eye in the human eye image based on the rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points comprises:
calculating the center position of the pupil based on the rough position information of the feature points previously obtained from the human eye image;
for each strong corner point, obtaining its distance to the line through the corresponding feature point of the human eye and the center of the pupil, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
In addition, to achieve the above objective, the present invention further provides a human eye positioning apparatus, comprising:
an establishing module configured to establish a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
a first obtaining module configured to acquire a mask image based on a pre-acquired human eye image and the mask;
a strong-corner obtaining module configured to obtain position information of strong corner points from the human eye image according to the mask image;
a second obtaining module configured to obtain the positions of the feature points of the human eye in the human eye image based on rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points.
Preferably, the establishing module is further configured to establish a mask corresponding to the shape of the human eye, wherein the number of pixels at the position in the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the mask toward the center of the mask, and the number of pixels at the position in the mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the mask toward the center of the mask.
Preferably, the establishing module is further configured to separately establish a first mask containing only the outer corner point, a second mask containing only the inner corner point, a third mask containing only the highest corner point, and a fourth mask containing only the lowest corner point, wherein the first, second, third, and fourth masks, when merged, correspond to the shape of the human eye; the number of pixels at the position in the first mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the first mask toward the center of the first mask; and the number of pixels at the position in the second mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the second mask toward the center of the second mask.
Preferably, the first obtaining module comprises:
a comparing unit configured to move the mask over the human eye image, perform pixel-value comparison, and obtain valid pixel points on the human eye image according to the comparison result;
a processing unit configured to accumulate the valid pixel points and perform thresholding and non-maximum suppression to obtain the mask image;
a correcting unit configured to calculate the center position and the center-of-gravity position of the mask image and correct the mask image according to the center position and the center-of-gravity position.
Preferably, the comparing unit is specifically configured to calculate C(i) and take the pixel points for which the value of C(i) is 1 as the valid pixel points:
Figure PCTCN2014095742-appb-000003
where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask; I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask; C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask; i is the pixel position within the mask; t is a pixel-difference threshold; and pixel points with C(i) = 1 are the valid pixel points,
Figure PCTCN2014095742-appb-000004
Preferably, the strong-corner obtaining module is specifically configured to obtain, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm.
Preferably, the strong-corner obtaining module is specifically configured to perform pseudo-feature removal on the mask image, and to obtain, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image after pseudo-feature removal and the preset strong-corner algorithm.
Preferably, the second obtaining module comprises:
a calculating unit configured to calculate the center position of the pupil based on the rough position information of the feature points previously obtained from the human eye image;
an obtaining unit configured to obtain, for each strong corner point, the distance to the line through the corresponding feature point of the human eye and the center of the pupil, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
In the human eye positioning method and apparatus of the present invention, the selected feature points of the human eye include the outer corner point, the inner corner point, the highest corner point, and the lowest corner point. A mask is established according to the symmetric characteristics of these four corner points, so that the shape of the mask is closer to the shape of the human eye. A mask image is then obtained from the human eye image and the mask, strong corner points are obtained from the human eye image according to the mask image, and the best pixel point is selected according to the rough positions of the feature points and the positions of the strong corner points; its position is taken as the position of the feature point, thereby positioning the feature points of the human eye. Because the present invention uses multiple corner points to establish the mask, and the area of the mask is smaller than the area of the human eye image, the feature points inside the spectacle frame can still be obtained through the mask even under external interference or when the person wears glasses, so the feature points of the human eye can be positioned accurately.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the human eye positioning method of the present invention;
FIG. 2 is a schematic diagram of the feature points of the human eye according to the present invention;
FIG. 3 is a schematic diagram of the mask established by the present invention;
FIG. 4 is a detailed flowchart of step S102 in FIG. 1;
FIG. 5 is a detailed flowchart of step S104 in FIG. 1;
FIG. 6 is a schematic diagram of the functional modules of an embodiment of the human eye positioning apparatus of the present invention;
FIG. 7 is a schematic diagram of the detailed functional modules of the first obtaining module in FIG. 6;
FIG. 8 is a schematic diagram of the detailed functional modules of the second obtaining module in FIG. 6.
The realization of the objectives, the functional features, and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit the present invention.
The present invention provides a human eye positioning method. Referring to FIG. 1, in one embodiment the method comprises:
Step S101: establishing a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point.
In this embodiment, when analyzing the input human eye image, the position of each eye is described by four feature points. As shown in FIG. 2, taking the right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03, and the lowest corner point is 04; the left eye is designed and processed in the same way. The present invention needs to position these four feature points accurately.
In this embodiment, the mask is designed according to the physical shape distribution characteristic of the human eye, namely its symmetry. Specifically, a window template corresponding to the shape of the human eye is built from pixel points, wherein the numbers of pixels at the positions corresponding to the outer and inner corner points form arithmetic progressions from the edge toward the center. The completed window template is the mask, and the area of the mask is smaller than the area of the human eye image.
As shown in FIG. 3, the mask of this embodiment is approximately rhombus-shaped (the shaded part) and corresponds to the shape of the human eye; 01, 02, 03, and 04 correspond to the outer corner point, the inner corner point, the highest corner point, and the lowest corner point, respectively. The mask is a binarized image of a preset size, i.e., a fixed image size; the relative positions of the four corner points in the template are fixed and conform to the shape of the human eye. The region enclosed by the four corner points on the mask is the effective pixel region with pixel value 1, while pixels outside the region have value 0, so that the interference of non-eye regions in the human eye image can be removed through the effective pixels. The number of pixels in the region of the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position toward the center of the mask (the part enclosed by the dashed line), and likewise for the inner corner point (the part enclosed by the dashed line). For the outer corner point 01 (or the inner corner point) of the right eye, the first column has 1 pixel, the second column 3 pixels, and the third column 5 pixels, forming an arithmetic progression, where the first column is the edge near the outer corner position and the third column is near the center of the mask (i.e., where the pupil lies on the mask); at the highest corner point (or the lowest corner point), the numbers of pixels per row may or may not form an arithmetic progression. In addition, when designing the mask, the number of column pixels should be larger than the number of row pixels, mainly because the curvature of the human eye at the highest corner point (or lowest corner point) is larger than that at the outer corner point (or inner corner point).
In this embodiment, either a single mask including the outer corner point, inner corner point, highest corner point, and lowest corner point may be established, or one mask may be established per corner point, i.e., a mask for the outer corner point, a mask for the inner corner point, a mask for the highest corner point, and a mask for the lowest corner point, four masks in total; the four masks, when merged, correspond to the shape of the human eye, and likewise the pixel counts of the two masks corresponding to the outer and inner corner points form arithmetic progressions from the edge of the corner position on the mask toward the center.
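As a rough illustration of the arithmetic-progression design above, a small binary mask whose column pixel counts grow as 1, 3, 5, … from each corner edge toward the center can be built as follows; the mask dimensions here are illustrative assumptions, not the patent's preset mask size:

```python
import numpy as np

def corner_mask(width=7, height=5):
    """Binary rhombus-like mask: column pixel counts form the arithmetic
    progression 1, 3, 5, ... from the outer/inner corner edge toward the
    mask centre, capped at the mask height."""
    mask = np.zeros((height, width), dtype=np.uint8)
    cy = height // 2
    for col in range(width):
        # distance from the nearer vertical edge drives the column height
        half = min(col, width - 1 - col)       # 0, 1, 2, ... toward the centre
        n = min(2 * half + 1, height)          # 1, 3, 5, ... pixels per column
        mask[cy - n // 2: cy - n // 2 + n, col] = 1
    return mask

m = corner_mask()
print(m)                                       # the shaded effective region
print([int(c) for c in m.sum(axis=0)])         # column pixel counts
```

With the default sizes the column counts are 1, 3, 5 near each corner, matching the progression described for the outer and inner corner regions.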
Step S102: acquiring a mask image based on the pre-acquired human eye image and the mask.
In this embodiment, the mask is moved over the human eye image, and the pixel values of the pixels covered by the mask are compared with those of the human eye image to obtain the valid pixel points in the human eye image, where the set of valid pixel points of each corner point constitutes the effective pixel region of the corresponding corner point in the human eye image.
Then, the valid pixel points in the human eye image are thresholded to obtain the mask image. This mask image, however, contains a large number of pseudo features, so it needs to be corrected to finally obtain a mask image containing a small number of valid pixel points.
Step S103: obtaining position information of strong corner points from the human eye image according to the mask image.
This step includes two ways of obtaining the position information of the strong corner points.
In a first embodiment, the first way comprises: obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm; that is, determining the positions of the valid pixel points in the human eye image according to their pixel positions in the mask image, and then applying the preset strong-corner algorithm to the pixels at those positions to determine the position information of the strong corner points in the human eye image.
In a second embodiment, based on the first, the second way comprises: before obtaining the strong-corner position information from the human eye image according to the mask image, further performing pseudo-feature removal on the mask image. The pseudo-feature removal specifically comprises: applying a predetermined strong-corner algorithm to the mask image to compute its response function values, and judging, by comparing the response function values with a preset threshold, whether the valid pixel points in the mask image are strong corner points, thereby further screening the valid pixel points, eliminating a large number of pseudo corner points, and obtaining the positions of the strong corner points on the mask image; the strong corner points of the de-falsified mask image are then used as its valid pixel points, and the position information of the strong corner points of the human eye image is obtained from the human eye image with the preset strong-corner algorithm.
In both of the above ways, obtaining the position information of the strong corner points corresponding to the feature points from the human eye image with the preset strong-corner algorithm comprises: for the rough positions of the outer, inner, highest, and lowest corner points of the human eye, computing the corner response function values with the preset strong-corner algorithm (e.g., the Harris algorithm), defining the local range of non-maximum suppression according to the mask image and performing non-maximum suppression, and finally determining the strong corner points of the human eye image by whether the response function value of a pixel satisfies a preset corner threshold.
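A minimal sketch of the strong-corner step described above — Harris response computation, restriction to the mask image's valid pixels, thresholding, and local non-maximum suppression. The 3×3 window, the k value, and the threshold are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response: R = det(M) - k * trace(M)^2, with the
    structure tensor M box-summed over a 3x3 neighbourhood."""
    img = img.astype(float)
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def box(a):  # 3x3 box sum via zero padding
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

def strong_corners(img, valid, thresh):
    """Keep responses only at the mask image's valid pixels, apply the corner
    threshold and 3x3 non-maximum suppression, and return (row, col) points."""
    R = harris_response(img) * (valid > 0)
    pts = []
    for y in range(1, R.shape[0] - 1):
        for x in range(1, R.shape[1] - 1):
            if R[y, x] > thresh and R[y, x] == R[y - 1:y + 2, x - 1:x + 2].max():
                pts.append((y, x))
    return pts
```

In the patent the `valid` array would come from the (optionally de-falsified) mask image, which also defines the local range of the non-maximum suppression; here a fixed 3×3 range is used for brevity.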
Step S104: obtaining the positions of the feature points of the human eye in the human eye image based on the rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points.
In this embodiment, the rough positions of the outer, inner, highest, and lowest corner points can be obtained with various existing techniques. For example, a face detection algorithm such as one based on Haar features may first be used to obtain the position of the face, and then the AAM (Active Appearance Model) algorithm is used to obtain the rough position information of the eight feature points of the left and right eyes as well as the center position of the pupil. Alternatively, other methods may be used, such as training a cascaded eye-corner detector with AdaBoost (iterative training) to locate the eye corners (i.e., the outer corner points), obtaining the position information of the inner and outer corner points from the classifier, and then obtaining the positions of the highest and lowest corner points of the eye by back projection.
In this embodiment, for the line between the center position of the pupil and the rough position of each feature point, the best pixel point is selected from the above strong corner points according to the point-to-line distance between that line and each strong corner point, and its position is taken as the position of the feature point of the human eye, where the strong corner point with the smallest distance is the feature point of the human eye.
Compared with the prior art, in this embodiment the selected feature points of the human eye include the outer corner point, the inner corner point, the highest corner point, and the lowest corner point. A mask is established according to the symmetric characteristics of these four corner points, so that the shape of the mask is closer to the shape of the human eye. A mask image is then obtained from the human eye image and the mask, strong corner points are obtained from the human eye image according to the mask image, and the best pixel point is selected according to the rough positions of the feature points and the positions of the strong corner points; its position is taken as the position of the feature point, thereby positioning the feature points of the human eye. Because this embodiment uses multiple corner points to establish the mask, and the area of the mask is smaller than the area of the human eye image, the feature points inside the spectacle frame can still be obtained through the mask even under external interference or when the person wears glasses, so the feature points of the human eye can be positioned accurately.
In a preferred embodiment, as shown in FIG. 4 and based on the embodiment of FIG. 1, step S102 comprises:
Step S1021: moving the mask over the human eye image and performing pixel-value comparison, and obtaining valid pixel points on the human eye image according to the comparison result;
Step S1022: accumulating the valid pixel points and performing thresholding and non-maximum suppression to obtain the mask image;
Step S1023: calculating the center position and the center-of-gravity position of the mask image, and correcting the mask image according to the center position and the center-of-gravity position.
In this embodiment, the mask and the human eye image use the same coordinate system, and their pixel positions correspond. The mask is moved over the human eye image, and during the movement all pixel values of the human eye image within the effective pixels of the mask are compared with the pixel value at each feature-point position on the human eye image, where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask, I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask, and C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask, as follows:
Figure PCTCN2014095742-appb-000005
where i is the pixel position within the mask and t is a pixel-difference threshold; a smaller t is usually selected for regions of relatively low contrast, and conversely a larger threshold t may be selected; pixel points with C(i) = 1 are the valid pixel points. In this embodiment,
Figure PCTCN2014095742-appb-000006
The above valid pixel points are summed, i.e.:
Figure PCTCN2014095742-appb-000007
Then, thresholding and non-maximum suppression are performed to obtain the value mask(o) of the mask image at the corresponding position under the current feature point:
Figure PCTCN2014095742-appb-000008
where g is a threshold between [1, 55].
The mask image mask(o) obtained in the above manner contains a large number of pseudo features, so it needs to be corrected according to its center position and center-of-gravity position; the center of gravity is computed as follows:
Figure PCTCN2014095742-appb-000009
where (x0, y0) is the center-of-gravity position, x is the abscissa axis, y is the ordinate axis, and m is the total number of pixel points of the mask image mask(o); the center position can be computed with existing techniques.
A threshold g is set, g being a number of effective pixels; when the distance between the center position and the center-of-gravity position of the mask image mask(o) is larger than this threshold, the mask image mask(o) is corrected.
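The center-of-gravity computation and the center-versus-centroid correction test above can be sketched as follows. Treating the "center position" as the geometric center of the mask image is an assumption for this sketch, since the text leaves its computation to existing techniques:

```python
import numpy as np

def centroid(mask_img):
    """Centre of gravity (x0, y0) of the nonzero pixels of a mask image:
    x0 and y0 are the means of the x and y coordinates over the m mask pixels."""
    ys, xs = np.nonzero(mask_img)
    return xs.mean(), ys.mean()

def needs_correction(mask_img, g):
    """True when the distance between the geometric centre of the mask image
    and its centre of gravity exceeds the threshold g, i.e. when the mask
    image is dominated by off-centre pseudo features and must be corrected."""
    h, w = mask_img.shape
    cx, cy = (w - 1) / 2, (h - 1) / 2          # assumed geometric centre
    x0, y0 = centroid(mask_img)
    return np.hypot(x0 - cx, y0 - cy) > g
```

A symmetric mask image has its centroid at the center and passes the test; a mask image whose valid pixels cluster on one side fails it and would be corrected.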
In a preferred embodiment, as shown in FIG. 5 and based on the embodiment of FIG. 4, step S104 comprises:
Step S1041: calculating the center position of the pupil based on the rough position information of the feature points previously obtained from the human eye image;
Step S1042: for each strong corner point, obtaining the distance to the line through the corresponding feature point of the human eye and the center of the pupil, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
In this embodiment, after the strong corner points are obtained, relatively few candidate pixel points remain; the best corner point is then extracted from these candidates as the eye feature point. This comprises: obtaining the center position of the pupil based on the feature image, and selecting the feature points of the human eye according to the distance between each strong corner point and the line connecting the center position of the pupil with the rough position of the corresponding feature point.
In this embodiment, the position of the pupil is obtained using the above AAM algorithm, and a straight-line equation is established from the center position of the pupil to the rough position of each feature point; for example, the line through the pupil center and the highest corner point is y = kx + b. The distance L from a strong corner point to this line determines the weight of the best eye feature point: the larger the distance L, the lower the weight, and the point can be removed; the smaller the distance L, the larger the weight, and the corresponding strong corner point is retained.
Then the difference between the distance L from each retained strong corner point to the pupil and the width of the human eye is computed; a strong corner point whose difference is within a certain error becomes the best strong corner point, which is taken as the feature point of the human eye, and its position is then obtained. If there is no best strong corner point, the corresponding rough position is taken as the best feature point of the human eye.
In this embodiment, the rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the human eye can be calculated from this value.
The present invention provides a human eye positioning apparatus. Referring to FIG. 6, in one embodiment the apparatus comprises:
an establishing module 101 configured to establish a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point.
In this embodiment, when analyzing the input human eye image, the position of each eye is described by four feature points. As shown in FIG. 2, taking the right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03, and the lowest corner point is 04; the left eye is designed and processed in the same way. The present invention needs to position these four feature points accurately.
In this embodiment, the mask is designed according to the physical shape distribution characteristic of the human eye, namely its symmetry. Specifically, a symmetric window template corresponding to the shape of the human eye is built from pixel points, wherein the numbers of pixels at the positions corresponding to the outer and inner corner points form arithmetic progressions from the edge toward the center. The completed window template is the mask, and the area of the mask is smaller than the area of the human eye image.
As shown in FIG. 3, the mask of this embodiment is approximately rhombus-shaped (the shaded part) and corresponds to the shape of the human eye; 01, 02, 03, and 04 correspond to the outer corner point, the inner corner point, the highest corner point, and the lowest corner point, respectively. The mask is a binarized image of a preset size, i.e., a fixed image size; the relative positions of the four corner points in the template are fixed and conform to the shape of the human eye. The region enclosed by the four corner points on the mask is the effective pixel region with pixel value 1, while pixels outside the region have value 0, so that the interference of non-eye regions in the human eye image can be removed through the effective pixels. The number of pixels in the region of the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position toward the center of the mask (the part enclosed by the dashed line), and likewise for the inner corner point (the part enclosed by the dashed line). For the outer corner point 01 (or the inner corner point) of the right eye, the first column has 1 pixel, the second column 3 pixels, and the third column 5 pixels, forming an arithmetic progression, where the first column is the edge near the outer corner position and the third column is near the center of the mask (i.e., where the pupil lies on the mask); at the highest corner point (or the lowest corner point), the numbers of pixels per row may or may not form an arithmetic progression. In addition, when designing the mask, the number of column pixels should be larger than the number of row pixels, mainly because the curvature of the human eye at the highest corner point (or lowest corner point) is larger than that at the outer corner point (or inner corner point).
In this embodiment, either a single mask including the outer corner point, inner corner point, highest corner point, and lowest corner point may be established, or one mask may be established per corner point, i.e., a mask for the outer corner point, a mask for the inner corner point, a mask for the highest corner point, and a mask for the lowest corner point, four masks in total; the four masks, when merged, correspond to the shape of the human eye, and likewise the pixel counts of the two masks corresponding to the outer and inner corner points form arithmetic progressions from the edge of the corner position on the mask toward the center.
A first obtaining module 102 is configured to acquire a mask image based on the pre-acquired human eye image and the mask.
In this embodiment, the mask is moved over the human eye image, and the pixel values of the pixels covered by the mask are compared with those of the human eye image to obtain the valid pixel points in the human eye image, where the set of valid pixel points of each corner point constitutes the effective pixel region of the corresponding corner point in the human eye image.
Then, the valid pixel points in the human eye image are thresholded to obtain the mask image. This mask image, however, contains a large number of pseudo features, so it needs to be corrected to finally obtain a mask image containing a small number of valid pixel points.
A strong-corner obtaining module 103 is configured to obtain position information of strong corner points from the human eye image according to the mask image.
The strong-corner obtaining module 103 can obtain the position information of the strong corner points in two ways.
In a first embodiment, the first way comprises: obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm; that is, determining the positions of the valid pixel points in the human eye image according to their pixel positions in the mask image, and then applying the preset strong-corner algorithm to the pixels at those positions to determine the position information of the strong corner points in the human eye image.
In a second embodiment, based on the first, the second way comprises: the strong-corner obtaining module 103 first performs pseudo-feature removal on the mask image. This specifically comprises: applying a predetermined strong-corner algorithm to the mask image to compute its response function values, and judging, by comparing the response function values with a preset threshold, whether the valid pixel points in the mask image are strong corner points, thereby further screening the valid pixel points, eliminating a large number of pseudo corner points, and obtaining the positions of the strong corner points on the mask image; the strong corner points of the de-falsified mask image are then used as its valid pixel points, and the position information of the strong corner points of the human eye image is obtained from the human eye image with the preset strong-corner algorithm.
In both of the above ways, obtaining the position information of the strong corner points corresponding to the feature points from the human eye image with the preset strong-corner algorithm comprises: for the rough positions of the outer, inner, highest, and lowest corner points of the human eye, computing the corner response function values with the Harris algorithm, defining the local range of non-maximum suppression according to the mask image and performing non-maximum suppression, and finally determining the strong corner points of the human eye image by whether the response function value of a pixel satisfies a preset corner threshold.
A second obtaining module 104 is configured to obtain the positions of the feature points of the human eye in the human eye image based on the rough position information of the feature points previously obtained from the human eye image and the position information of the strong corner points.
In this embodiment, the rough positions of the outer, inner, highest, and lowest corner points can be obtained with various existing techniques. For example, a face detection algorithm such as one based on Haar features may first be used to obtain the position of the face, and then the AAM (Active Appearance Model) algorithm is used to obtain the rough position information of the eight feature points of the left and right eyes as well as the center position of the pupil. Alternatively, other methods may be used, such as training a cascaded eye-corner detector with AdaBoost (iterative training) to locate the eye corners (i.e., the outer corner points), obtaining the position information of the inner and outer corner points from the classifier, and then obtaining the positions of the highest and lowest corner points of the eye by back projection.
In this embodiment, for the line between the center position of the pupil and the rough position of each feature point, the best pixel point is selected from the above strong corner points according to the point-to-line distance between that line and each strong corner point, and its position is taken as the position of the feature point of the human eye, where the strong corner point with the smallest distance is the feature point of the human eye.
In a preferred embodiment, as shown in FIG. 7 and based on the embodiment of FIG. 6, the first obtaining module 102 comprises:
a comparing unit 1021 configured to move the mask over the human eye image, perform pixel-value comparison, and obtain valid pixel points on the human eye image according to the comparison result;
a processing unit 1022 configured to accumulate the valid pixel points and perform thresholding and non-maximum suppression to obtain the mask image;
a correcting unit 1023 configured to calculate the center position and the center-of-gravity position of the mask image and correct the mask image according to the center position and the center-of-gravity position.
In this embodiment, the mask and the human eye image use the same coordinate system, and their pixel positions correspond. The mask is moved over the human eye image, and during the movement all pixel values of the human eye image within the effective pixels of the mask are compared with the pixel value at each feature-point position on the human eye image, where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask, I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask, and C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask, as follows:
Figure PCTCN2014095742-appb-000010
where i is the pixel position within the mask and t is a pixel-difference threshold; a smaller t is usually selected for regions of relatively low contrast, and conversely a larger threshold t may be selected; pixel points with C(i) = 1 are the valid pixel points. In this embodiment,
Figure PCTCN2014095742-appb-000011
The above valid pixel points are summed, i.e.:
Figure PCTCN2014095742-appb-000012
Then, thresholding and non-maximum suppression are performed to obtain the value mask(o) of the mask image at the corresponding position under the current feature point:
Figure PCTCN2014095742-appb-000013
where g is a threshold between [1, 55].
The mask image mask(o) obtained in the above manner contains a large number of pseudo features, so it needs to be corrected according to its center position and center-of-gravity position; the center of gravity is computed as follows:
Figure PCTCN2014095742-appb-000014
where (x0, y0) is the center-of-gravity position, x is the abscissa axis, y is the ordinate axis, and m is the total number of pixel points of the mask image mask(o); the center position can be computed with existing techniques.
A threshold g is set, g being a number of effective pixels; when the distance between the center position and the center-of-gravity position of the mask image mask(o) is larger than this threshold, the mask image mask(o) is corrected.
In a preferred embodiment, as shown in FIG. 8 and based on the embodiment of FIG. 6, the second obtaining module 104 comprises:
a calculating unit 1041 configured to calculate the center position of the pupil based on the rough position information of the feature points previously obtained from the human eye image;
an obtaining unit 1042 configured to obtain, for each strong corner point, the distance to the line through the corresponding feature point of the human eye and the center of the pupil, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
In this embodiment, after the strong corner points are obtained, relatively few candidate pixel points remain; the best corner point is then extracted from these candidates as the eye feature point. This comprises: obtaining the center position of the pupil based on the feature image, and selecting the feature points of the human eye according to the distance between each strong corner point and the line connecting the center position of the pupil with the rough position of the corresponding feature point.
In this embodiment, the position of the pupil is obtained using the above AAM algorithm, and a straight-line equation is established from the center position of the pupil to the rough position of each feature point; for example, the line through the pupil center and the highest corner point is y = kx + b. The distance L from a strong corner point to this line determines the weight of the best eye feature point: the larger the distance L, the lower the weight, and the point can be removed; the smaller the distance L, the larger the weight, and the corresponding strong corner point is retained.
Then the difference between the distance L from each retained strong corner point to the pupil and the width of the human eye is computed; a strong corner point whose difference is within a certain error becomes the best strong corner point, which is taken as the feature point of the human eye, and its position is then obtained. If there is no best strong corner point, the corresponding rough position is taken as the best feature point of the human eye.
In this embodiment, the rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the human eye can be calculated from this value.
The above are only preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, shall likewise be included in the patent protection scope of the present invention.

Claims (16)

  1. A human eye positioning method, characterized in that the human eye positioning method comprises the following steps:
    establishing a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
    acquiring a mask image based on a pre-acquired human eye image and the mask;
    obtaining position information of strong corner points from the human eye image according to the mask image;
    obtaining the positions of the feature points of the human eye in the human eye image based on rough position information of the feature points of the human eye previously obtained from the human eye image and the position information of the strong corner points.
  2. The human eye positioning method according to claim 1, characterized in that the step of establishing a mask according to the symmetric features of the feature points of the human eye comprises:
    establishing a mask corresponding to the shape of the human eye, wherein the number of pixels at the position in the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the mask toward the center of the mask, and the number of pixels at the position in the mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the mask toward the center of the mask.
  3. The human eye positioning method according to claim 1, characterized in that the step of establishing a mask according to the symmetric features of the feature points of the human eye comprises:
    separately establishing a first mask containing only the outer corner point, a second mask containing only the inner corner point, a third mask containing only the highest corner point, and a fourth mask containing only the lowest corner point, wherein the first, second, third, and fourth masks, when merged, correspond to the shape of the human eye; the number of pixels at the position in the first mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the first mask toward the center of the first mask; and the number of pixels at the position in the second mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the second mask toward the center of the second mask.
  4. The human eye positioning method according to claim 1, characterized in that the step of acquiring a mask image based on the pre-acquired human eye image and the mask comprises:
    moving the mask over the human eye image and performing pixel-value comparison, and obtaining valid pixel points on the human eye image according to the comparison result;
    accumulating the valid pixel points and performing thresholding and non-maximum suppression to obtain the mask image;
    calculating the center position and the center-of-gravity position of the mask image, and correcting the mask image according to the center position and the center-of-gravity position.
  5. The human eye positioning method according to claim 4, characterized in that the step of moving the mask over the human eye image, performing pixel-value comparison, and obtaining the valid pixel points on the human eye image according to the comparison result comprises:
    calculating C(i) and taking the pixel points for which the value of C(i) is 1 as the valid pixel points:
    Figure PCTCN2014095742-appb-100001
    where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask; I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask; C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask; i is the pixel position within the mask; t is a pixel-difference threshold; and pixel points with C(i) = 1 are the valid pixel points,
    Figure PCTCN2014095742-appb-100002
  6. The human eye positioning method according to claim 4, characterized in that the step of obtaining position information of strong corner points from the human eye image according to the mask image comprises:
    obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm.
  7. The human eye positioning method according to claim 4, characterized in that the step of obtaining position information of strong corner points from the human eye image according to the mask image comprises:
    performing pseudo-feature removal on the mask image;
    obtaining, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image after pseudo-feature removal and a preset strong-corner algorithm.
  8. The human eye positioning method according to claim 1, characterized in that the step of obtaining the positions of the feature points of the human eye in the human eye image based on the rough position information of the feature points of the human eye previously obtained from the human eye image and the position information of the strong corner points comprises:
    calculating the center position of the pupil based on the rough position information of the feature points of the human eye previously obtained from the human eye image;
    for each strong corner point, obtaining its distance to the line through the corresponding feature point of the human eye and the center of the pupil, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
  9. A human eye positioning apparatus, characterized in that the human eye positioning apparatus comprises:
    an establishing module configured to establish a mask according to the symmetric features of the feature points of the human eye, the feature points including an outer corner point, an inner corner point, a highest corner point, and a lowest corner point;
    a first obtaining module configured to acquire a mask image based on a pre-acquired human eye image and the mask;
    a strong-corner obtaining module configured to obtain position information of strong corner points from the human eye image according to the mask image;
    a second obtaining module configured to obtain the positions of the feature points of the human eye in the human eye image based on rough position information of the feature points of the human eye previously obtained from the human eye image and the position information of the strong corner points.
  10. The human eye positioning apparatus according to claim 9, characterized in that the establishing module is further configured to establish a mask corresponding to the shape of the human eye, wherein the number of pixels at the position in the mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the mask toward the center of the mask, and the number of pixels at the position in the mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the mask toward the center of the mask.
  11. The human eye positioning apparatus according to claim 9, characterized in that the establishing module is further configured to separately establish a first mask containing only the outer corner point, a second mask containing only the inner corner point, a third mask containing only the highest corner point, and a fourth mask containing only the lowest corner point, wherein the first, second, third, and fourth masks, when merged, correspond to the shape of the human eye; the number of pixels at the position in the first mask corresponding to the outer corner point forms an arithmetic progression from the edge of the outer corner position on the first mask toward the center of the first mask; and the number of pixels at the position in the second mask corresponding to the inner corner point forms an arithmetic progression from the edge of the inner corner position on the second mask toward the center of the second mask.
  12. The human eye positioning apparatus according to claim 9, characterized in that the first obtaining module comprises:
    a comparing unit configured to move the mask over the human eye image, perform pixel-value comparison, and obtain valid pixel points on the human eye image according to the comparison result;
    a processing unit configured to accumulate the valid pixel points and perform thresholding and non-maximum suppression to obtain the mask image;
    a correcting unit configured to calculate the center position and the center-of-gravity position of the mask image and correct the mask image according to the center position and the center-of-gravity position.
  13. The human eye positioning apparatus according to claim 12, characterized in that the comparing unit is specifically configured to calculate C(i) and take the pixel points for which the value of C(i) is 1 as the valid pixel points:
    Figure PCTCN2014095742-appb-100003
    where I(i) is the pixel value of the human eye image at the position corresponding to the i-th effective pixel of the mask; I(o) is the pixel value of the human eye image at the position corresponding to the currently computed feature point of the mask; C(i) is the pixel comparison result at the position corresponding to the i-th effective pixel of the mask; i is the pixel position within the mask; t is a pixel-difference threshold; and pixel points with C(i) = 1 are the valid pixel points,
    Figure PCTCN2014095742-appb-100004
  14. The human eye positioning apparatus according to claim 12, characterized in that the strong-corner obtaining module is specifically configured to obtain, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image and a preset strong-corner algorithm.
  15. The human eye positioning apparatus according to claim 12, characterized in that the strong-corner obtaining module is specifically configured to perform pseudo-feature removal on the mask image, and to obtain, from the human eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixel points of the mask image after pseudo-feature removal and the preset strong-corner algorithm.
  16. The human eye positioning apparatus according to claim 9, characterized in that the second obtaining module comprises:
    a calculating unit configured to calculate the center position of the pupil based on the rough position information of the feature points of the human eye previously obtained from the human eye image;
    an obtaining unit configured to obtain, for each strong corner point, the distance to the line through the corresponding feature point of the human eye and the center of the pupil, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the human eye image.
PCT/CN2014/095742 2014-12-29 2014-12-31 Human eye positioning method and apparatus WO2016106617A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410837111.8A CN105809085B (zh) 2014-12-29 2014-12-29 Human eye positioning method and apparatus
CN201410837111.8 2014-12-29

Publications (1)

Publication Number Publication Date
WO2016106617A1 true WO2016106617A1 (zh) 2016-07-07

Family

ID=56283892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095742 WO2016106617A1 (zh) 2014-12-29 2014-12-31 人眼定位方法及装置

Country Status (2)

Country Link
CN (1) CN105809085B (zh)
WO (1) WO2016106617A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240112B * 2017-06-28 2021-06-22 Beihang University Method for extracting individual X corner points in complex scenes
CN107808397B * 2017-11-10 2020-04-24 BOE Technology Group Co., Ltd. Pupil positioning device, pupil positioning method and gaze tracking apparatus
CN110443203B * 2019-08-07 2021-10-15 China-Singapore International Joint Research Institute Adversarial sample generation method for face spoofing detection systems based on generative adversarial networks
CN111783621B * 2020-06-29 2024-01-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, device and storage medium for facial expression recognition and model training

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059836A * 2007-06-01 2007-10-24 South China University of Technology Human eye positioning and eye state recognition method
WO2012144020A1 * 2011-04-19 2012-10-26 Aisin Seiki Kabushiki Kaisha Eyelid detection device, eyelid detection method, and program
CN102831399A * 2012-07-30 2012-12-19 Huawei Technologies Co., Ltd. Method and device for determining eye state
CN103136512A * 2013-02-04 2013-06-05 Chongqing Academy of Science and Technology Pupil positioning method and system
CN104063700A * 2014-07-04 2014-09-24 Wuhan Institute of Technology Method for locating eye center points in frontal face images under natural illumination

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8811726B2 (en) * 2011-06-02 2014-08-19 Kriegman-Belhumeur Vision Technologies, Llc Method and system for localizing parts of an object in an image for computer vision applications
CN103514452B * 2013-07-17 2016-09-28 Zhejiang University Fruit shape detection method and device
CN103839050A * 2014-02-28 2014-06-04 Fuzhou University ASM positioning algorithm based on feature point expansion and PCA feature extraction


Also Published As

Publication number Publication date
CN105809085A (zh) 2016-07-27
CN105809085B (zh) 2019-07-26

Similar Documents

Publication Publication Date Title
CN106803067B (zh) Face image quality assessment method and device
CN107506693B (zh) Distorted face image correction method and device, computer equipment and storage medium
CN110147721B (zh) Three-dimensional face recognition method, model training method and device
TWI611353B (zh) Eye tracking method and device
WO2020038254A1 (zh) Image processing method and device for target recognition
US20210089753A1 (en) Age Recognition Method, Computer Storage Medium and Electronic Device
EP2590140A1 (en) Facial authentication system, facial authentication method, and facial authentication program
CN104318262A (zh) Method and system for replacing skin through face photos
US9007481B2 (en) Information processing device and method for recognition of target objects within an image
WO2016106617A1 (zh) Human eye positioning method and apparatus
US9613404B2 (en) Image processing method, image processing apparatus and electronic device
US9854967B2 (en) Gaze detector
KR101786754B1 (ko) Age estimation apparatus and method
US20150049952A1 (en) Systems and methods of measuring facial characteristics
US11475707B2 (en) Method for extracting image of face detection and device thereof
CN109785396A (zh) Binocular-camera-based writing posture monitoring method, system and device
CN109934790A (zh) Non-uniformity correction method for infrared imaging systems with adaptive threshold
Han et al. Research and implementation of an improved canny edge detection algorithm
CN113810611A (zh) Data simulation method and device for an event camera
WO2017088391A1 (zh) Video denoising and detail enhancement method and device
CN105389476B (zh) Interpolation algorithm for dose data of intensity-modulated radiotherapy plans based on gradient features
CN107346544B (zh) Image processing method and electronic device
JP7106296B2 (ja) Image processing apparatus, image processing method, and program
CN112991159A (zh) Face illumination quality assessment method, system, server and computer-readable medium
CN104966271B (zh) Image denoising method based on the receptive field mechanism of biological vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14909426

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/10/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14909426

Country of ref document: EP

Kind code of ref document: A1