CN117281504A - Automatic eye feature measurement method based on facial image acquisition - Google Patents


Info

Publication number
CN117281504A
Authority
CN
China
Prior art keywords
eye
eyelid
iris
image
canthus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311501277.8A
Other languages
Chinese (zh)
Inventor
赵穆欣
陈思
刘俐
雷斯文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Second Hospital of Dalian Medical University
Original Assignee
Second Hospital of Dalian Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Second Hospital of Dalian Medical University filed Critical Second Hospital of Dalian Medical University
Priority to CN202311501277.8A priority Critical patent/CN117281504A/en
Publication of CN117281504A publication Critical patent/CN117281504A/en
Pending legal-status Critical Current

Classifications

    • A61B 5/1079 Measuring physical dimensions, e.g. size of the entire body or parts thereof, using optical or photographic means
    • A61B 3/0016 Apparatus for testing or examining the eyes; operational features thereof
    • A61B 3/0025 Operational features characterised by electronic signal processing, e.g. eye models
    • A61B 3/14 Arrangements specially adapted for eye photography
    • A61B 5/1072 Measuring distances on the body, e.g. measuring length, height or thickness
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification involving training the classification device
    • G06V 10/764 Image or video recognition using machine-learning classification, e.g. of video objects
    • G06V 40/171 Facial local features and components; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/172 Face classification, e.g. identification
    • G06V 40/193 Eye characteristics: preprocessing; feature extraction
    • G06V 40/197 Eye characteristics: matching; classification

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Dentistry (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the technical field of eye feature measurement and provides an automatic eye feature measurement method based on facial image acquisition, comprising the following steps: step 1, acquiring a frontal face image and extracting the eye image; step 2, performing edge detection on the eye image; step 3, extracting the iris region, sclera region, and double-eyelid region from the eye image to obtain a segmented eye image; step 4, taking the line connecting the iris centers of the two eyes as a horizontal reference and correcting the segmented eye image; step 5, obtaining eye feature points from the segmented eye image; and step 6, measuring eye feature data from the obtained feature points. The invention can automatically detect eye features and provide eye landmark measurement data.

Description

Automatic eye feature measurement method based on facial image acquisition
Technical Field
The invention relates to the technical field of eye feature measurement, in particular to an automatic eye feature measurement method based on facial image acquisition.
Background
Eyes have long been regarded as windows to the soul. As the most striking aesthetic subunit of the face, the eyes can to some extent compensate for flaws in other facial features and convey a person's mental state, bearing, and emotion.
With economic development and rising aesthetic standards, individual demand for eye beautification keeps growing. In current clinical practice, plastic surgeons mostly record a patient's eye condition by photograph, which yields no specific eye feature data: some patients find it hard to gain a direct, objective understanding of their baseline eye condition in a short time; intraoperative adjustment relies largely on the surgeon's experience, without objective auxiliary indices; and pre- and postoperative comparison likewise lacks quantitative eye feature metrics.
At present, there is no method for the automatic measurement of eye features.
Disclosure of Invention
The invention mainly solves the technical problem that the prior art lacks a method for automatically measuring eye features, and provides an automatic eye feature measurement method based on facial image acquisition, so as to automatically detect eye landmarks and provide eye feature measurement data.
The invention provides an automatic eye feature measurement method based on facial image acquisition, which comprises the following steps:
step 1, acquiring a frontal face image and extracting the eye image;
step 2, performing edge detection on the eye image;
step 3, extracting the iris region, sclera region, and double-eyelid region from the eye image to obtain a segmented eye image;
step 4, taking the line connecting the iris centers of the two eyes as a horizontal reference and correcting the segmented eye image;
step 5, obtaining eye feature points from the segmented eye image; the eye feature points comprise one or more of the upper eyelid margin contour, lower eyelid margin contour, fitted iris circle contour, intersection points of the iris circle with the upper eyelid margin, inner canthus point, outer canthus point, and pupil center point;
step 6, measuring eye feature data from the obtained feature points; the eye feature data include, but are not limited to, one or more of the upper margin reflex distance (MRD1), lower margin reflex distance (MRD2), iris exposure ratio, palpebral fissure axis inclination angle, palpebral fissure length, width, and height, fissure index, palpebral fissure area, intercanthal distance, pupillary distance, outer canthal distance, canthal index, and double-eyelid area.
Preferably, in step 1, the eye image is extracted by a classifier; the classifier is a Haar cascade classifier.
Preferably, extracting the eye image by the classifier comprises the following steps:
scanning and locating the face region in the image using a Haar cascade classifier trained on faces;
further scanning and locating the eye regions within each face region using a Haar cascade classifier trained on eyes.
Preferably, between step 1 and step 2, the method further comprises: preprocessing the eye image;
the preprocessing includes, but is not limited to, one or more of noise removal, histogram equalization, and grayscale conversion.
Preferably, in step 2, an edge detection algorithm is used to perform edge detection on the eye image, detecting the iris and sclera edge features of the human eye and thereby determining the contours of the iris and sclera.
Preferably, in step 3, extracting the iris region, sclera region, and double-eyelid region from the eye image comprises the following steps:
extracting the iris region by Hough circle transform detection;
performing contour detection on the double-eyelid area, filtering the detected contours, marking the retained contours to determine the boundary and position of the area, and obtaining the double-eyelid region from the marked area;
extracting the sclera region by Otsu thresholding, which separates the sclera from the other regions.
Preferably, in step 4, taking the line connecting the iris centers of the two eyes as a horizontal reference and correcting the segmented eye image comprises the following steps:
computing the angle between the line joining the two iris centers and the horizontal direction of the segmented eye image, and rotating the segmented image by that angle so that the line joining the iris centers becomes parallel to the horizontal direction, thereby correcting the segmented eye image.
Preferably, in step 6, measuring the eye feature data includes the following processes:
the upper margin reflex distance (MRD1) is obtained by counting the pixels along the vertical distance from the iris center to the upper eyelid margin and converting with the iris diameter ruler;
the lower margin reflex distance (MRD2) is obtained by counting the pixels along the vertical distance from the iris center to the lower eyelid margin and converting with the iris diameter ruler;
the iris exposure ratio is obtained as the ratio of the exposed iris pixel area to the total iris area;
the palpebral fissure axis inclination angle is obtained by fitting a straight line through the inner and outer canthus points of each eye and computing its slope;
the palpebral fissure length, width, and height are obtained as follows: the highest point of the upper eyelid contour is the upper eyelid margin peak, and the lowest point of the lower eyelid contour is the lower eyelid margin low point; given the pixel coordinates of the inner and outer canthus points, the fissure length is obtained as the straight-line distance between them, and the fissure width as the difference of their x-coordinates; given the coordinates of the upper margin peak and the lower margin low point, the fissure height is obtained as the difference of their y-coordinates;
the fissure index is obtained as the ratio of the fissure height to the fissure width;
the palpebral fissure area, intercanthal distance, outer canthal distance, and canthal index are obtained as follows: the number of pixels in each fissure region of the image is counted to give the fissure area; then, from the positions of the canthus points and the pupil center coordinates, the horizontal distance between the two inner canthus points is computed as the intercanthal distance, the horizontal distance between the two iris center points as the pupillary distance, and the horizontal distance between the two outer canthus points as the outer canthal distance; the canthal index is obtained as the ratio of the intercanthal distance to the outer canthal distance;
the double-eyelid area is obtained by computing the region bounded by the double-eyelid line, the lower border of the upper eyelid, and the vertical lines through the inner and outer canthus points.
With the automatic eye feature measurement method based on facial image acquisition provided by the invention, the eye image is extracted automatically by a classifier, the eyelid margin contours and double-eyelid region are automatically identified and measured, the eye feature points are automatically identified, and the eye feature data are measured. The iris center replaces the corneal light reflex used in traditional measurement, and the iris diameter ruler replaces the traditional millimetre unit, providing more personalized, comprehensive, and rapid eye landmark data with the iris as reference. The invention provides comprehensive eye feature measurements that can serve as a reference for preoperative design, intraoperative correction, observation of postoperative recovery, and pre/post-operative comparison in eye surgery; it makes patient self-examination and remote consultation possible, and its quantified indices facilitate doctor-patient communication. As an objective and convenient auxiliary diagnosis and treatment tool, it preserves eye data more completely, is of value in judging eye morphology and symmetry, and offers a new approach to eye data analysis, evaluation, and aesthetic measurement.
Drawings
FIG. 1 is a flow chart of the automatic eye feature measurement method based on facial image acquisition provided by the invention;
FIGS. 2a-2e are example images provided by the invention.
Detailed Description
To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the accompanying drawings show only the parts relevant to the invention, not all of it.
As shown in FIG. 1, the automatic eye feature measurement method based on facial image acquisition provided by an embodiment of the invention includes:
Step 1, acquiring a frontal face image and extracting the eye image.
With the patient looking straight ahead, the capture device is held level with the eyes, the lighting on the left and right sides of the face is kept uniform, the frontal face image is captured, and the eye image is extracted by the classifier. The invention supports capture with devices such as mobile phones or cameras, obtaining an eye-level frontal photograph of the patient in a simple way.
The classifier is a Haar cascade classifier and comprises a face classifier and an eye classifier. The Haar cascade classifier is a machine learning technique for object detection and face recognition based on Haar features and the cascade classifier concept. The method uses the Haar features of Haar cascade classifiers for face and eye extraction, i.e., detecting the positions and sizes of faces and eyes in a given image. First, a Haar cascade classifier trained on faces scans and locates the face region in the image, and then a Haar cascade classifier trained on eyes further scans and locates the eye region within each face region. By using the AdaBoost algorithm to combine many weak classifiers into strong ones, the Haar cascade classifier achieves highly accurate target detection while keeping the false detection rate low, and its cascade structure quickly rejects most negative samples without evaluating every possible window position, significantly improving detection speed.
As a preferred option of the invention, between step 1 and step 2 the method may further comprise: preprocessing the eye image.
The eye image is preprocessed to remove noise, enhance contrast, and prepare it for subsequent processing. The preprocessing includes, but is not limited to, one or more of noise removal, histogram equalization, and grayscale conversion.
Noise removal: noise in the image is removed with a filtering technique (Gaussian filtering), a linear smoothing filter suited to removing Gaussian noise and widely used for noise reduction in image processing. Gaussian filtering performs a weighted average over the entire image: the value of each pixel is obtained as a weighted average of itself and the other pixel values in its neighborhood. Concretely, each pixel in the image is scanned with a template (convolution kernel, or mask), and the value at the template's center pixel is replaced by the weighted average gray value of the pixels in the neighborhood the template covers.
Histogram equalization: the contrast of the image is enhanced so that edges and features are detected more easily. Histogram equalization is an image processing method that uses the image histogram to adjust contrast, distributing brightness more evenly across the histogram. It mainly enhances local contrast without affecting overall contrast, which it accomplishes by effectively spreading out the most common brightness values.
Grayscale conversion: since the original image is in color, it is converted to a grayscale image to simplify subsequent processing.
Step 2, performing edge detection on the eye image.
Edge detection is performed on the eye image using an edge detection algorithm (Canny edge detection) to detect the iris and sclera edge features of the human eye, thereby determining the contours of the iris and sclera.
Canny edge detection is a technique that extracts useful structural information from visual objects while greatly reducing the amount of data to process. Its main steps are: (1) compute the gradient magnitude and direction at each pixel of the eye image; (2) apply non-maximum suppression to eliminate spurious edge responses; (3) apply double-threshold detection to determine true and potential edges; (4) finish edge detection by suppressing isolated weak edges.
Step 3, extracting the iris region, sclera region, and double-eyelid region from the eye image.
First, the iris region is extracted by Hough circle transform detection.
The Hough transform uses global features of the image to connect edge pixels into a closed region boundary: the image space is mapped into a parameter space, and points are described in that parameter space, achieving edge detection. The method accumulates votes over all points that may lie on the iris edge contour and decides from the vote statistics how strongly each candidate belongs to the edge. In essence, the Hough circle transform converts image coordinates into parameter coordinates, where the result is easier to identify and detect.
Next, contour detection is performed on the double-eyelid area: the detected contours are filtered, the retained contours are marked to determine the boundary and position of the area, and the double-eyelid region is obtained from the marked area. This can be accomplished by creating a double-eyelid template on the original image and then applying the template to the image to extract the double-eyelid region.
Then the sclera region of the human eye is extracted: Otsu thresholding separates the sclera region (white) from the other regions (black). Derived from the gray-level histogram, the algorithm yields the segmentation that is optimal in the statistical sense. Its basic principle is to split the image's gray values into two classes at the optimal threshold so that the between-class variance is maximized, i.e., separability is maximal, completing the sclera segmentation.
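The between-class-variance criterion just described can be implemented directly in NumPy as a sketch (OpenCV's `cv2.threshold` with `THRESH_OTSU` would do the same in practice):

```python
import numpy as np

def otsu_threshold(gray):
    """Gray level that maximizes between-class variance of the two classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                # gray-level probabilities
    omega = np.cumsum(p)                 # class-0 probability up to threshold t
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def segment_sclera(gray):
    """White (255) where the pixel exceeds the Otsu threshold, else black."""
    t = otsu_threshold(gray)
    return np.where(gray > t, 255, 0).astype(np.uint8)
```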
Step 4, taking the line connecting the two iris centers as the horizontal reference and correcting the segmented eye image.
Taking the line connecting the two iris centers as the horizontal reference, the angle between that line and the horizontal direction of the segmented eye image is computed, and the segmented image is rotated by that angle so that the line joining the iris centers becomes parallel to the horizontal direction, thereby correcting the segmented image. The iris center position replaces the corneal light reflex point.
Step 5, obtaining eye feature points from the segmented eye image.
The eye feature points include, but are not limited to, one or more of the upper eyelid margin contour, lower eyelid margin contour, fitted iris circle contour, intersection points of the iris circle with the upper eyelid margin, inner canthus point, outer canthus point, and pupil center point.
If the double-eyelid line is clear, the invention can also obtain its highest point Us. Here the inner canthus point En denotes the medial junction of the upper and lower eyelids; the outer canthus point Ex denotes the lateral junction of the upper and lower eyelids; the pupil center point Pc denotes the iris center point; the upper eyelid margin peak Ps denotes the highest point at which the upper eyelid margin arc is tangent to the horizontal; the lower eyelid margin low point Pi denotes the lowest point at which the lower eyelid margin arc is tangent to the horizontal; the horizontal line denotes the line joining the two iris centers; and the double-eyelid line peak Us denotes the highest point at which the double-eyelid arc is tangent to the horizontal.
From the contours of the iris and sclera regions, the point matrix on the sclera contour is evaluated: the highest point on the sclera contour gives the upper eyelid margin peak Ps, the lowest point gives the lower eyelid margin low point Pi, and the pixels shared by the sclera contour and the iris circle give the intersection points of the iris circle with the upper eyelid margin.
The earlier image correction guarantees that both eyes lie on the same horizontal line. Given the coordinates (x, y) of the contours in the eye image, the closest points between the left-eye and right-eye sclera contours along the x-axis are selected as the inner canthus points, and likewise the farthest points along the x-axis as the outer canthus points.
The pixel coordinate of the iris circle's center is the pupil center point.
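The canthus selection just described reduces to picking x-extremes of the two sclera contours, sketched here in NumPy. The orientation convention (inner canthi facing each other, with "left"/"right" meaning image sides after leveling) is an assumption for illustration.

```python
import numpy as np

def canthus_points(contour_img_left_eye, contour_img_right_eye):
    """Each contour is an (N, 2) array of (x, y) pixel coordinates."""
    L = np.asarray(contour_img_left_eye)
    R = np.asarray(contour_img_right_eye)
    # Inner canthi face each other across the nose: the max-x point of the
    # image-left eye and the min-x point of the image-right eye; the outer
    # canthi are the opposite x-extremes.
    return {
        "inner": (tuple(L[L[:, 0].argmax()]), tuple(R[R[:, 0].argmin()])),
        "outer": (tuple(L[L[:, 0].argmin()]), tuple(R[R[:, 0].argmax()])),
    }
```

Ps and Pi follow the same pattern along the y-axis (`argmin`/`argmax` of the y column of each sclera contour).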
Step 6, measuring the eye feature data from the obtained eye feature points.
Using the iris diameter ruler, 1/20 of the measured iris diameter in each image (in pixels) is taken as 1 unit (U).
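The iris diameter ruler is a simple unit conversion: one U is one twentieth of the subject's own iris diameter in pixels, so any pixel distance can be reported in subject-relative units rather than millimetres.

```python
def pixels_to_units(pixel_distance, iris_diameter_px):
    """Convert a pixel distance to iris-ruler units (1 U = iris diameter / 20)."""
    unit = iris_diameter_px / 20.0
    return pixel_distance / unit
```

For example, with a 120 px iris, 1 U = 6 px, so a 30 px distance measures 5 U.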
The eye feature data include, but are not limited to, one or more of the upper margin reflex distance MRD1, lower margin reflex distance MRD2, iris exposure ratio, palpebral fissure axis inclination angle, palpebral fissure length, width, and height, fissure index, palpebral fissure area, intercanthal distance, pupillary distance, outer canthal distance, canthal index, and double-eyelid area.
The upper margin reflex distance MRD1 denotes the vertical distance from the iris center to the upper eyelid margin; it is obtained by counting the pixels along that vertical distance and converting with the iris diameter ruler;
the lower margin reflex distance MRD2 denotes the vertical distance from the iris center to the lower eyelid margin; it is obtained by counting the pixels along that vertical distance and converting with the iris diameter ruler;
the iris exposure ratio denotes exposed iris area / iris area; it is obtained as the ratio of the exposed iris pixel area to the total iris area;
the eyelid cleavage axis inclination angle represents the included angle between the inner canthus connecting line and the horizontal line, and is obtained by solving and calculating the inclination of each inner canthus point and each outer canthus point of each human eye through a binary once equation;
the eyelid cleavage length represents the linear distance between the inner canthus and the outer canthus of the ipsilateral eye; the eyelid cleavage width represents the horizontal distance between the inner and outer canthus points of the ipsilateral eye; the lid separation height represents the vertical distance between the highest point of the free edge of the upper lid and the lowest point of the free edge of the lower lid. The eyelid cleavage length, eyelid cleavage width and eyelid cleavage height are obtained through the following processes:
The highest point of the upper eyelid contour is the highest point of the upper eyelid margin, and the lowest point of the lower eyelid contour is the lowest point of the lower eyelid margin. With the pixel coordinates of the inner and outer canthus points known, the fissure width is obtained as the difference of their X-axis (horizontal) coordinate values, and the fissure length as the straight-line distance between the two points. The fissure height is obtained, from the known pixel coordinates of the highest point of the upper eyelid margin and the lowest point of the lower eyelid margin, as the difference of their Y-axis (vertical) coordinate values.
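The three fissure dimensions defined above reduce to two coordinate differences and one Euclidean distance. A sketch assuming each landmark is an (x, y) pixel coordinate from the feature-point step (names illustrative):

```python
import math

def fissure_metrics(inner_canthus, outer_canthus, upper_margin_peak, lower_margin_trough):
    """Return (length, width, height) of the palpebral fissure in pixels:
    length  - straight-line distance between the two canthus points,
    width   - horizontal (X) difference between the canthus points,
    height  - vertical (Y) difference between the eyelid-margin extremes."""
    width = abs(outer_canthus[0] - inner_canthus[0])
    length = math.dist(inner_canthus, outer_canthus)
    height = abs(lower_margin_trough[1] - upper_margin_peak[1])
    return length, width, height
```

The fissure index then follows directly as `height / width`.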
The palpebral fissure index is the ratio of the fissure height to the fissure width, and is obtained by computing that ratio.
The palpebral fissure area is the area enclosed by the eyelid margin contour; the inner canthal distance is the horizontal distance between the two inner canthus points; the interpupillary distance is the horizontal distance between the two iris center points; the outer canthal distance is the horizontal distance between the two outer canthus points; the canthal index is the ratio of the inner canthal distance to the outer canthal distance. These are obtained as follows:
the number of pixels in each palpebral fissure region of the image is counted and converted into the corresponding fissure area; then, from the canthus-point positions and the pupil-center coordinates, the horizontal distance between the two inner canthus points is computed as the inner canthal distance; the horizontal distance between the two iris center points is computed as the interpupillary distance; the horizontal distance between the two outer canthus points gives the outer canthal distance; and the canthal index is obtained as the ratio of the inner to the outer canthal distance.
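The bilateral spacings and the canthal index are likewise plain horizontal differences and one ratio. A sketch assuming (x, y) landmark coordinates for both eyes (names illustrative):

```python
def canthal_metrics(left_inner, right_inner, left_outer, right_outer,
                    left_iris_center, right_iris_center):
    """Return (inner canthal distance, interpupillary distance,
    outer canthal distance, canthal index), all horizontal (X) spans."""
    inner = abs(right_inner[0] - left_inner[0])
    ipd = abs(right_iris_center[0] - left_iris_center[0])
    outer = abs(right_outer[0] - left_outer[0])
    return inner, ipd, outer, inner / outer
```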
The double-eyelid area is the region bounded by the double-eyelid line, the lower border of the upper eyelid, and the vertical lines through the inner and outer canthus points; it is obtained by computing the area of that region.
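If the boundary of the double-eyelid region is available as an ordered list of contour points, its area can be computed with the shoelace formula instead of pixel counting — a sketch, not necessarily the patent's own method:

```python
def polygon_area(points):
    """Shoelace area of a closed region, e.g. the one bounded by the
    double-eyelid line and the upper eyelid margin; `points` must list
    the (x, y) vertices in order around the boundary."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap back to the first vertex
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```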
Through the calculation of step 6, the invention obtains MRD1, MRD2, the iris exposure, the palpebral fissure axis tilt angle, the fissure length, width, height, index, and area, the inner canthal distance, the interpupillary distance, the outer canthal distance, and the canthal index. If a double-eyelid line is present, the double-eyelid area can also be provided.
The invention collects eye image data by a convenient method, extracts the iris, sclera, and double-eyelid regions from the eye image, and derives a comprehensive set of eye feature measurements from the segmented image, thereby avoiding the labor of manual measurement and the delay in obtaining data. The invention does not require the traditional millimeter unit and thus avoids unit-conversion errors.
The invention measures in units of an iris-diameter ruler, which individualizes the eye feature data, facilitates comparison of a subject against themselves, and avoids the influence of individual anatomical differences on the measurement data and on aesthetic analysis.
The invention is further illustrated by way of example:
A frontal face image is acquired and the eye image is extracted. Referring to fig. 2: fig. 2a is the eye image after horizontal correction, fig. 2b the iris semantic segmentation, fig. 2c the sclera semantic segmentation, fig. 2d the double-eyelid-region semantic segmentation, and fig. 2e the output image after recognition. The sides referred to herein are the subject's left and right sides. Note: 1 unit (U) = 1/20 iris diameter. The recognition result table is as follows:
Analysis of the recognition results of fig. 2: MRD1 of both eyes exceeds 6.67 U, i.e. both are normal eyes with no ptosis; iris exposure on both sides is close to 90%, which is ample and gives bright, attractive eyes; the bilateral palpebral fissure axis tilt angles, fissure lengths, heights, indices and areas, and the double-eyelid areas are all close in value, so the two eyes are fairly symmetric.
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents, without the essence of the corresponding technical solution departing from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An automatic eye feature measurement method based on facial image acquisition is characterized by comprising the following steps:
step 1, acquiring a face front image and extracting an eye image;
step 2, edge detection is carried out on the eye image;
step 3, extracting an iris region, a sclera region and a double eyelid region from the eye image to obtain an image after human eye segmentation;
step 4, connecting the centers of the irises on two sides of the human eyes as horizontal lines, and correcting the images after the human eyes are segmented;
step 5, obtaining eye feature points according to the image after human eye segmentation; the eye feature points comprise one or more of an upper eyelid margin contour, a lower eyelid margin contour, an iris fitting circle contour, an iris circle and upper eyelid margin intersection point, an inner canthus point, an outer canthus point and a pupil center point;
step 6, measuring eye feature data according to the obtained eye feature points; the eye feature data include, but are not limited to, one or more of: upper eyelid margin corneal reflex distance, lower eyelid margin corneal reflex distance, iris exposure, palpebral fissure axis tilt angle, palpebral fissure length, palpebral fissure width, palpebral fissure height, palpebral fissure index, palpebral fissure area, inner canthal distance, interpupillary distance, outer canthal distance, canthal index, and double-eyelid area.
2. The automatic measurement method of ocular features based on facial image acquisition according to claim 1, characterized in that in step 1, ocular images are extracted by a classifier; the classifier adopts a Haar cascade classifier.
3. The automatic measurement method of eye features based on facial image acquisition according to claim 2, wherein the extracting of eye images by a classifier comprises the following processes:
scanning and positioning a face region in the image by using a Haar cascade classifier for face training;
the eye regions are further scanned and located within each face region using a Haar cascade classifier trained on the eyes.
4. The automatic measurement method of eye features based on facial image acquisition according to claim 1, further comprising, between step 1 and step 2: preprocessing an eye image;
the preprocessing includes, but is not limited to, one or more of noise removal processing, histogram equalization processing, and graying conversion processing.
5. The method according to claim 1, wherein in step 2, the edge detection algorithm is used to detect the edge characteristics of the iris and sclera of the human eye, so as to determine the outlines of the iris and sclera.
6. The automatic measurement method of eye features based on facial image acquisition according to claim 1, wherein step 3, extracting iris area, sclera area and double eyelid area from the eye image comprises the following procedures:
extracting the iris region: the iris region is obtained by Hough circle transform detection;
extracting the double-eyelid region: contour detection is performed on the double-eyelid area of the human eye, the detected contours are filtered, the retained contours are marked to determine the boundary and position of the region, and the double-eyelid region is obtained from the marked region;
extracting the sclera region: the sclera region is separated from the other regions by Otsu thresholding.
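Otsu's method picks the grey-level threshold that maximizes the between-class variance of the histogram. A self-contained pure-Python sketch of the criterion (a real implementation would typically call OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag instead):

```python
def otsu_threshold(gray_values):
    """Otsu threshold over 8-bit grey levels (0-255): choose t that
    maximizes w_b * w_f * (mean_b - mean_f)^2, the between-class variance.
    Used here conceptually to split the bright sclera from darker regions."""
    hist = [0] * 256
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0   # running sum of grey * count for the background class
    w_b = 0       # running background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```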
7. The automatic measurement method of eye features based on facial image acquisition according to claim 1, wherein in step 4, the line connecting the centers of the irises on both sides of the human eye is a horizontal line, and the image after the human eye segmentation is corrected, comprising the following steps:
taking the line joining the centers of the two irises as the horizontal reference, the angle between that line and the horizontal direction of the segmented image is computed, and the segmented image is rotated according to the computed angle so that the line joining the iris centers becomes parallel to the horizontal, thereby correcting the segmented image.
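The roll correction amounts to rotating by the negative of the angle between the iris-center line and the horizontal. A coordinate-level sketch that rotates landmark points about the midpoint of the iris centers (an image would be rotated the same way, e.g. via OpenCV's `cv2.getRotationMatrix2D`; all names illustrative):

```python
import math

def correct_roll(points, left_iris_center, right_iris_center):
    """Rotate (x, y) landmark coordinates about the midpoint of the two
    iris centers so that the center-to-center line becomes horizontal."""
    (x1, y1), (x2, y2) = left_iris_center, right_iris_center
    angle = math.atan2(y2 - y1, x2 - x1)          # current roll of the eye line
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0     # rotation center
    c, s = math.cos(-angle), math.sin(-angle)     # rotate by the negative angle
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]
```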
8. The automatic measurement method of ocular feature based on facial image acquisition according to claim 1, wherein in step 6, ocular feature data is measured, comprising the following process:
the upper eyelid margin corneal reflex distance is obtained by counting the pixels along the vertical distance from the center of the iris to the upper eyelid margin and converting the count with the iris-diameter ruler;
the lower eyelid margin corneal reflex distance is obtained by counting the pixels along the vertical distance from the center of the iris to the lower eyelid margin and converting the count with the iris-diameter ruler;
the iris exposure is obtained as the ratio of the number of exposed-iris pixels to the total number of iris pixels;
the palpebral fissure axis tilt angle is obtained by solving a linear equation in two variables for the slope of the line through the inner and outer canthus points of each eye;
the palpebral fissure length, width, and height are obtained as follows: the highest point of the upper eyelid contour is the highest point of the upper eyelid margin, and the lowest point of the lower eyelid contour is the lowest point of the lower eyelid margin; with the pixel coordinates of the inner and outer canthus points known, the fissure width is obtained as the difference of their X-axis (horizontal) coordinate values, and the fissure length as the straight-line distance between the two points; the fissure height is obtained, from the known pixel coordinates of the highest point of the upper eyelid margin and the lowest point of the lower eyelid margin, as the difference of their Y-axis (vertical) coordinate values;
the palpebral fissure index is obtained as the ratio of the fissure height to the fissure width;
the palpebral fissure area, inner canthal distance, outer canthal distance, and canthal index are obtained as follows: the number of pixels in each palpebral fissure region of the image is counted and converted into the corresponding fissure area; then, from the canthus-point positions and the pupil-center coordinates, the horizontal distance between the two inner canthus points is computed as the inner canthal distance;
the horizontal distance between the two iris center points is computed as the interpupillary distance; the horizontal distance between the two outer canthus points gives the outer canthal distance; the canthal index is obtained as the ratio of the inner to the outer canthal distance;
the double-eyelid area is obtained by computing the region bounded by the double-eyelid line, the lower border of the upper eyelid, and the vertical lines through the inner and outer canthus points.
CN202311501277.8A 2023-11-13 2023-11-13 Automatic eye feature measurement method based on facial image acquisition Pending CN117281504A (en)

Publications (1)

Publication Number Publication Date
CN117281504A true CN117281504A (en) 2023-12-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination