WO2020019286A1 - 眼睑下垂检测方法及系统 - Google Patents
眼睑下垂检测方法及系统 Download PDFInfo
- Publication number
- WO2020019286A1 (PCT/CN2018/097367)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- eyelid
- eye
- image
- pupil
- edge
- Prior art date
Links
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0041—Operational features thereof characterised by display arrangements
- A61B3/0058—Operational features thereof characterised by display arrangements for multiple images
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/11—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
Definitions
- the invention relates to a detection method and system, and in particular to an eyelid ptosis detection method and system that uses machine vision to derive data for judging eyelid ptosis (drooping) and, from that data, to infer the severity of the ptosis and whether the levator (eye-lifting) muscle function is normal.
- Eyelid ptosis can be divided into congenital eyelid ptosis and acquired eyelid ptosis.
- One cause of congenital eyelid ptosis is levator muscle dysplasia present at birth, while one cause of acquired eyelid ptosis is loss of levator muscle tension, which prevents the upper eyelid from opening to its normal height.
- when the eyelid margin covers the pupil, not only is the visual field affected, but the patient also tends to unconsciously raise the eyebrows or lift the chin to open the upper eyelid, which may further cause forehead wrinkles, shoulder and neck soreness, low back pain, or eye fatigue.
- the present invention aims to provide an eyelid ptosis detection method that can derive, through machine vision, data for judging eyelid ptosis and infer from that data the severity of the ptosis and whether the levator muscle function is normal.
- Another object of the present invention is to provide an eyelid ptosis detection system that can derive, through image processing and machine vision, data for judging eyelid ptosis, and automatically detect from that data whether the levator muscle function is normal and how severe the ptosis is.
- the eyelid ptosis detection method of the present invention includes: capturing an eye image; performing image processing on the eye image to generate an edge image; performing image operations on the eye image and the edge image to obtain a plurality of feature variables; calculating a feature parameter group from the plurality of feature variables; and comparing the feature parameter group with preset eyelid ptosis information to infer the severity of the ptosis and the levator muscle function.
- the eyelid ptosis detection system of the present invention includes: a photographing unit for capturing eye images; a storage unit for storing preset eyelid ptosis information; and a processing unit coupled to the photographing unit and the storage unit.
- the processing unit performs image processing on the eye image to generate an edge image.
- the processing unit performs image operations on the eye image and the edge image to obtain a plurality of feature variables, calculates a feature parameter group from those feature variables, and compares the feature parameter group with the preset eyelid ptosis information to infer the severity of the ptosis and the levator muscle function.
- the eyelid ptosis detection method and system of the present invention can obtain the patient's eye information using image processing technology and machine vision, use that information to derive data for judging eyelid ptosis, and automatically detect from that data the levator muscle function and the severity of the ptosis. This makes operation convenient, greatly shortens measurement time, and improves measurement consistency.
- the eye image includes a pupil head-up image, a pupil forced upward image, and a pupil forced downward image.
- the eyelid droop detection method of the present invention has the effects of being able to simultaneously detect the severity of eyelid droop and the function of lifting the eye muscles.
- the plurality of characteristic variables include the respective position coordinates of an eye contour area, a scleral area, an iris area, a pupil area, a pupil center point, an upper eyelid lower edge curve, a lower eyelid upper edge curve, an upper eyelid lower edge intersection point, and a lower eyelid upper edge intersection point.
- the plurality of characteristic variables further include position coordinates of a left edge point of the eye corner and a right edge point of the eye corner.
- the characteristic parameter group includes a height difference between an iris diameter and a palpebral fissure height, and a maximum moving distance when the pupil is forced upward and downward.
- the eyelid droop detection method of the present invention has the effect of simultaneously detecting the severity of eyelid droop and the function of lifting the eye muscles.
- the feature parameter group further includes a first distance between the pupil center point and the upper eyelid lower edge intersection point, a second distance between the pupil center point and the lower eyelid upper edge intersection point, the palpebral fissure height between those two intersection points, a palpebral fissure width between the left and right eye-corner edge points, and an eyeball surface area derived from the iris area.
- a virtual digital eye is formed from the multiple feature variables and overlapped with the eye image to analyze whether the virtual digital eye deviates significantly from the eye image.
- the method for detecting eyelid droop of the present invention has the effect of improving detection accuracy.
- the eye image includes a pupil head-up image, a pupil forced upward image, and a pupil forced downward image.
- the eyelid droop detection system of the present invention has the effects of being able to simultaneously detect the severity of eyelid droop and the function of lifting the eye muscles.
- the plurality of characteristic variables include the respective position coordinates of an eye contour area, a scleral area, an iris area, a pupil area, a pupil center point, an upper eyelid lower edge curve, a lower eyelid upper edge curve, an upper eyelid lower edge intersection point, and a lower eyelid upper edge intersection point.
- the plurality of characteristic variables further include position coordinates of a left edge point of the eye corner and a right edge point of the eye corner.
- the eyelid droop detection system of the present invention assists in judging the severity of eyelid droop and the efficacy of the function of lifting the eye muscles by using the above parameters.
- the characteristic parameter group includes a height difference between an iris diameter and a palpebral fissure height, and a maximum moving distance when the pupil is forced upward and downward.
- the eyelid droop detection system of the present invention has the effect of simultaneously detecting the severity of eyelid droop and the function of lifting the eye muscles.
- the feature parameter group further includes a first distance between the pupil center point and the upper eyelid lower edge intersection point, a second distance between the pupil center point and the lower eyelid upper edge intersection point, the palpebral fissure height between those two intersection points, a palpebral fissure width between the left and right eye-corner edge points, and an eyeball surface area derived from the iris area.
- the eyelid droop detection system of the present invention has the function of assisting in judging the severity of eyelid droop and the function of raising eye muscles by using the above parameters.
- the processing unit forms a virtual digital eye according to the multiple feature variables, and the processing unit overlaps the virtual digital eye with the eye image to analyze whether the virtual digital eye deviates significantly from the eye image.
- the method for detecting eyelid droop of the present invention has the effect of improving detection accuracy.
- FIG. 1 is a processing flowchart of a preferred embodiment of the present invention
- FIG. 2 is a schematic diagram of an eye image of a pupil head-up image according to a preferred embodiment of the present invention
- FIG. 3 is a schematic diagram of eye images with the pupil forced upward and forced downward according to a preferred embodiment of the present invention
- FIG. 4 is a system architecture diagram of a preferred embodiment of the present invention.
- FIG. 1 is a preferred embodiment of the eyelid droop detection method of the present invention, which includes an image capturing step S1, an image processing step S2, a feature capturing step S3, a feature computing step S4, and a feature analyzing step S5.
- the image capturing step S1 can capture and generate an eye image, and the eye image is a color image.
- the eye image may include a pupil head-up image, a pupil force upward image, and a pupil force downward image.
- the image capturing step S1 can capture and generate a facial image, and select a region of interest (ROI) as the eye image from the facial image.
- the position coordinates of the starting pixel of the rectangle formed by the region of interest, and the length and width values of the rectangle can be set so as to cover the eye parts including the upper eyelid, lower eyelid, sclera, iris, and pupil.
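The ROI selection described above can be sketched as follows. This is an illustrative numpy sketch, not code from the patent; the function name `crop_eye_roi` and the bounds check are assumptions.

```python
import numpy as np

def crop_eye_roi(face_image, x0, y0, width, height):
    """Crop a rectangular region of interest (ROI) from a face image.

    (x0, y0) is the starting pixel of the rectangle; width and height
    are chosen so the ROI covers the upper eyelid, lower eyelid,
    sclera, iris, and pupil.
    """
    h, w = face_image.shape[:2]
    if x0 < 0 or y0 < 0 or x0 + width > w or y0 + height > h:
        raise ValueError("ROI exceeds image bounds")
    # numpy indexing is row-major: rows = y, columns = x
    return face_image[y0:y0 + height, x0:x0 + width]
```

In practice the starting pixel and rectangle size would be chosen per camera setup so that the whole eye fits inside the crop.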
- the image processing step S2 can perform image processing on the eye image to generate an edge image. Specifically, a grayscale process is performed on the eye image to segment the foreground and background of the eye image to generate a grayscale image. In addition, the image processing step S2 retains the part of interest in the eye image, simplifies subsequent image processing procedures, and improves overall computing efficiency.
- the image processing step S2 can perform binarization processing on the grayscale image to generate a binarized image.
- the binarization threshold may be a fixed threshold or an adaptive threshold (e.g., Otsu's method, the bimodal method, the P-parameter method, or an iterative method).
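Otsu's method, one of the adaptive thresholds named above, can be sketched in numpy as follows. This is an illustrative implementation (the patent discloses no code): it picks the threshold that maximizes the between-class variance of the grayscale histogram.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's adaptive threshold for an 8-bit grayscale image,
    found by maximizing the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))   # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # ignore empty classes
    return int(np.argmax(sigma_b))

def binarize(gray, threshold):
    """Pixels above the threshold -> 255, the rest -> 0."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)
```

For an eye image this separates dark structures (pupil, lashes) from bright ones (sclera, skin) without hand-tuning a fixed threshold.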
- the image processing step S2 can then perform edge detection on the binarized image to generate the edge image, which further reduces the data volume of the eye image, removes potentially irrelevant information, and retains the important structural attributes of the eye image. The edge detection may use, for example but not limited to, the Sobel, Prewitt, or Canny algorithm.
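As an illustration of the Sobel option named above, a minimal numpy sketch (the edge threshold of 128 is an assumed placeholder, not a value from the patent):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_edges(gray, threshold=128):
    """Gradient magnitude via Sobel kernels; pixels above the
    threshold are marked as edges (255), the rest as 0."""
    gx = convolve2d(gray.astype(float), SOBEL_X)
    gy = convolve2d(gray.astype(float), SOBEL_Y)
    mag = np.hypot(gx, gy)
    return np.where(mag > threshold, 255, 0).astype(np.uint8)
```

The resulting edge image keeps only eyelid and pupil boundaries, which is what the later feature-extraction steps operate on.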
- the feature extraction step S3 can perform image operations on the eye image and the edge image to obtain a plurality of characteristic variables for analyzing the severity of eyelid droop and the function of the eye-lifting muscle.
- the plurality of characteristic variables may include the respective position coordinates of an eye contour area A, a scleral area A1, an iris area A2, a pupil area A3, a pupil center point P1, an upper eyelid lower edge curve C1, a lower eyelid upper edge curve C2, an upper eyelid lower edge intersection point P2, and a lower eyelid upper edge intersection point P3.
- the plurality of feature variables may further include position coordinates of a left edge point P4 and a right edge point P5 of the eye corner.
- the feature extraction step S3 can perform a symmetry transform on the eye image to obtain the eye region: the transform is applied to each pixel of the eye image to generate a plurality of symmetry-transform results, and the position coordinates of the pixel having the maximum value among those results are used as the initial point for generating the eye contour area A.
- the eye contour area A may include eye features such as the scleral area A1, the iris area A2, the pupil area A3, the upper eyelid and the lower eyelid.
- the feature extraction step S3 can convert the eye image from the RGB color space to the HSV color space to produce an HSV image.
- An S-channel image is obtained from the HSV image, and the pixels in the S-channel image whose saturation is less than the threshold value form the scleral region A1.
- the setting of the threshold value can be understood by a person of ordinary skill in the related fields of the present invention, and details are not described herein.
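The S-channel segmentation just described can be sketched as follows. This is an illustrative numpy sketch; the saturation threshold 0.2 is an assumed placeholder, since the text leaves the threshold to the skilled practitioner.

```python
import numpy as np

def saturation_channel(rgb):
    """S channel of the HSV conversion, S = (max - min) / max per
    pixel (0 where the pixel is black), per the standard definition."""
    rgb = rgb.astype(float) / 255.0
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.where(cmax > 0, (cmax - cmin) / cmax, 0.0)
    return s

def sclera_mask(rgb, s_threshold=0.2):
    """Pixels whose saturation is below the threshold (near-white,
    like the sclera) form the candidate sclera region A1."""
    return saturation_channel(rgb) < s_threshold
```

The sclera is nearly white, so its saturation is close to 0, while iris and skin pixels are more saturated; thresholding S therefore isolates A1.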
- the feature extraction step S3 can perform a Symmetry Transform on the edge image to obtain multiple candidate pupil regions.
- the symmetric transformation may be a Fast Radial Symmetry Transform (FRST).
- the feature extraction step S3 calculates the two projection points of each edge point of the edge image along the gradient direction, and forms from them an Orientation Projection Image and a Magnitude Projection Image, respectively, to obtain multiple radial-symmetry-transform results, that is, the multiple candidate pupil regions.
- a pupil black value ratio is calculated for each candidate pupil area, and the candidate pupil area with the largest ratio is used as the pupil area A3.
- the pupil black value ratio is a ratio occupied by black pixels among all pixels in each candidate pupil area.
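The ratio-based selection defined above can be sketched as follows; this is an illustrative numpy sketch with hypothetical function names, not code from the patent.

```python
import numpy as np

def pupil_black_ratio(binary_img, region_mask):
    """Fraction of black pixels (value 0) among all pixels inside a
    candidate pupil region of a binarized eye image."""
    pixels = binary_img[region_mask]
    if pixels.size == 0:
        return 0.0
    return float(np.count_nonzero(pixels == 0)) / pixels.size

def select_pupil_region(binary_img, candidate_masks):
    """Among candidate pupil regions (boolean masks), pick the one
    with the largest black-pixel ratio as the pupil area A3."""
    ratios = [pupil_black_ratio(binary_img, m) for m in candidate_masks]
    best = int(np.argmax(ratios))
    return candidate_masks[best], ratios[best]
```

Since the pupil is the darkest eye structure, the candidate region that is most uniformly black after binarization is the most plausible pupil.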
- the position coordinates of the pupil center point P1 can also be obtained by positioning in the pupil area A3.
- the feature extraction step S3 can obtain the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2 in the eye contour area A.
- the gradient direction (Gradient Orientation) is used to calculate the tangent slope of each pixel on the boundary of the scleral region A1 with respect to the eye contour area A, and the boundary pixels whose tangent slope relative to the eye contour area A is zero form the eyelid curve.
- the eyelid curve is divided into the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2.
- a vertical line can be extended from the pupil center point P1, perpendicular to the plane formed by the pupil's head-up direction, and intersected with the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2 to obtain the position coordinates of the upper eyelid lower edge intersection point P2 and the lower eyelid upper edge intersection point P3, respectively.
- the feature extraction step S3 can also obtain position coordinates of the left corner point P4 and the right corner point P5 of the eye corner in the eye contour area A.
- a corner distance (Corner Distance) is used to calculate along the boundary formed by the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2, and the position coordinates of the left eye-corner edge point P4 and the right eye-corner edge point P5 are obtained respectively.
- the feature calculation step S4 can be calculated according to the plurality of feature variables to obtain a feature parameter group.
- the feature parameter group includes a height difference PS (Ptosis Severity) between the iris diameter and the palpebral fissure height (PFH), and a maximum moving distance LF (Levator Function) when the pupil is forced upward and downward.
- the characteristic parameter group may further include a first distance MRD1 between the pupil center point P1 and the upper eyelid lower edge intersection point P2, a second distance MRD2 between the pupil center point P1 and the lower eyelid upper edge intersection point P3, the palpebral fissure height PFH between the intersection points P2 and P3, a palpebral fissure width PFW between the eye-corner edge points P4 and P5, and an eyeball surface area OSA derived from the iris area A2.
- the feature analysis step S5 can infer the severity of the eyelid droop and the function of the eye-lift muscle according to the comparison of the characteristic parameter group with the preset eyelid droop information.
- for example, if the iris diameter calculated from the iris area A2 is 11 mm and the palpebral fissure height PFH between the upper eyelid lower edge intersection point P2 and the lower eyelid upper edge intersection point P3 is 8 mm, the height difference PS between the iris diameter and the palpebral fissure height is 3 mm; that is, the severity of the eyelid ptosis is mild.
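The static parameters above can be computed from the extracted points as follows. This is an illustrative sketch assuming P1, P2, P3 are (x, y) coordinates already scaled to millimetres; the function name is hypothetical.

```python
import math

def feature_parameters(p1, p2, p3, iris_diameter_mm):
    """Static ptosis parameters from the pupil center P1, the upper
    eyelid lower edge intersection P2, and the lower eyelid upper
    edge intersection P3 (all in millimetre coordinates):
      MRD1 = distance P1-P2, MRD2 = distance P1-P3,
      PFH  = palpebral fissure height (P2-P3),
      PS   = iris diameter minus PFH.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    pfh = dist(p2, p3)
    return {
        "MRD1": dist(p1, p2),
        "MRD2": dist(p1, p3),
        "PFH": pfh,
        "PS": iris_diameter_mm - pfh,
    }
```

With the text's example values (iris diameter 11 mm, PFH 8 mm) this yields PS = 3 mm, i.e. mild ptosis.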
- the preset eyelid droop information can be shown in Table 1 below:
- the eyelid droop detection method of the present invention may further include a feature overlapping step S6.
- the feature overlapping step S6 forms a virtual digital eye according to the multiple feature variables and overlaps the virtual digital eye with the eye image to analyze whether the virtual digital eye deviates significantly from the eye image.
- the feature overlapping step S6 can set a weight value for the scleral area A1, the pupil area A3, and the eyelid curve, respectively.
- the formula for calculating the weight value of the scleral area A1 can be expressed by the following formula (1):
- P sclera represents a pixel point on the sclera
- P skin represents a pixel point on the skin
- eye total represents all pixels of the pupil
- eye black represents pupil black pixels
- ⁇ indicates the boundary of the eye contour area A
- ⁇ indicates the length of the boundary of the eye contour area A
- ⁇ (x, y) indicates the gradient direction at coordinate (x, y)
- m (x, y) represents the tangent slope of the eye contour area A
- the feature overlapping step S6 can also set another weight value for the left eye-corner edge point P4 and the right eye-corner edge point P5; the formulas for calculating their weight values can be expressed by the following formulas (4) to (5):
- w (x, y) represents the weighted value with the x and y coordinates as the center; G x represents the derivative in the x-axis direction; G y represents the derivative in the y-axis direction; k represents the Harris algorithm parameter.
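The Harris corner response referred to above (with parameter k) can be sketched as follows. This is an illustrative simplified implementation: it uses simple finite differences for Gx, Gy and a box window instead of the Gaussian window usual in full Harris detectors; k = 0.04 and the window radius are assumed values.

```python
import numpy as np

def window_sum(img, r=1):
    """Sum each pixel's (2r+1) x (2r+1) neighborhood (edge-padded)."""
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def harris_response(gray, k=0.04, r=1):
    """Harris response R = det(M) - k * trace(M)^2, where M sums the
    products of the derivatives Gx, Gy over a small window."""
    gy, gx = np.gradient(gray.astype(float))
    sxx = window_sum(gx * gx, r)
    syy = window_sum(gy * gy, r)
    sxy = window_sum(gx * gy, r)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```

R is large and positive where gradients in both directions meet (a corner, such as the eye corners P4 and P5) and negative along straight edges, which is why thresholding R localizes corner points.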
- D ⁇ represents the weight value of the eyelid curve
- D color represents the weight value of the scleral area A1
- D sym represents the weight value of the pupil area A3
- D cor represents the weight value of the corner point of the eye
- ⁇ i represents the optimal parameter values obtained through repeated experiments
- D i represents the respective values of D ⁇ , D color , D sym , and D cor
- ⁇ i represents the average weight of D ⁇ , D color , D sym , and D cor .
- FIG. 4 is a preferred embodiment of the eyelid droop detection system of the present invention, which includes a photographing unit 1, a storage unit 2, and a processing unit 3.
- the processing unit 3 is coupled to the photographing unit 1 and the storage unit 2.
- the photographing unit 1 can be used to shoot and generate a facial image, preferably to generate an eye image.
- the eye image may include a pupil head-up image, a pupil forced upward image, and a pupil forced downward image.
- the photographing unit 1 may be a charge-coupled device (CCD) color camera or a complementary metal-oxide-semiconductor (CMOS) color camera.
- the storage unit 2 may be any storage medium used to store electronic data, such as a hard disk or a memory, but is not limited thereto.
- the storage unit 2 can be used to store preset eyelid sagging information.
- the preset eyelid droop information may be as shown in Table 1 above.
- the processing unit 3 is coupled to the photographing unit 1 and the storage unit 2.
- the processing unit 3 may be a circuit unit having functions such as data processing, signal generation, and control.
- the processing unit 3 may be a microprocessor, a microcontroller, a digital signal processor (DSP), a logic circuit, or an application-specific integrated circuit (ASIC).
- in this embodiment, the processing unit 3 may be a microprocessor, but it is not limited thereto.
- the processing unit 3 can perform image processing on the eye image to generate an edge image.
- the image processing may include performing grayscale, binarization, and edge detection procedures on the eye image to generate the edge image.
- the processing unit 3 can set a region of interest in the facial image as the eye image.
- the position coordinates of the starting pixel of the rectangle formed by the region of interest, and the length and width values of the rectangle can be set so as to cover the eye parts including the upper eyelid, lower eyelid, sclera, iris, and pupil.
- this can be understood by those of ordinary skill in the arts relevant to the present invention and is not repeated here.
- the processing unit 3 can perform image calculations on the eye image and the edge image to obtain a plurality of characteristic variables for analyzing the severity of eyelid droop and the function of the eye-lifting muscle.
- the plurality of characteristic variables may include the respective position coordinates of an eye contour area A, a scleral area A1, an iris area A2, a pupil area A3, a pupil center point P1, an upper eyelid lower edge curve C1, a lower eyelid upper edge curve C2, an upper eyelid lower edge intersection point P2, and a lower eyelid upper edge intersection point P3.
- the plurality of feature variables may further include position coordinates of a left edge point P4 and a right edge point P5 of the eye corner.
- the processing unit 3 performs a symmetric transformation on the eye image to obtain an eye region.
- the processing unit 3 can perform symmetrical transformation on each pixel point of the eye image to generate multiple symmetrical transformation results.
- the processing unit 3 uses the position coordinates of the pixel having the maximum value among the symmetry-transform results as the initial point for generating the eye contour area A.
- the eye contour area A may include eye features such as the scleral area A1, the iris area A2, the pupil area A3, the upper eyelid and the lower eyelid.
- the processing unit 3 converts the eye image from the RGB color space to the HSV color space to generate an HSV image.
- the processing unit 3 obtains an S-channel image from the HSV image; the pixels in the S-channel image whose saturation is less than a threshold value form the scleral region A1.
- the setting of the threshold value can be understood by a person of ordinary skill in the related fields of the present invention, and details are not described herein.
- the processing unit 3 performs a symmetric transformation on the edge image to obtain a plurality of candidate pupil regions.
- the symmetric transformation may be a fast radial symmetric transformation.
- the processing unit 3 calculates the two projection points of each edge point of the edge image along the gradient direction, and obtains multiple radial-symmetry-transform results from the gradient projection image and the gradient magnitude image formed by the two projection points, respectively; that is, it obtains the plurality of candidate pupil regions.
- the processing unit 3 calculates a pupil black value ratio for each of the candidate pupil areas, and uses the candidate pupil area with the largest ratio among the pupil black value ratios as the pupil area A3.
- the pupil black value ratio is a ratio occupied by black pixels among all pixels in each candidate pupil area.
- the processing unit 3 can also obtain the position coordinates of the pupil center point P1 by positioning in the pupil area A3.
- the processing unit 3 can obtain the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2 in the eye contour area A. Specifically, the processing unit 3 uses the gradient direction formula to calculate the tangent slope of each pixel on the boundary of the scleral region A1 with respect to the eye contour area A; the boundary pixels whose tangent slope relative to the eye contour area A is zero form the eyelid curve. The processing unit 3 can then divide the eyelid curve into the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2 according to its position coordinates.
- the processing unit 3 can also extend a vertical line from the pupil center point P1, perpendicular to the plane formed by the pupil head-up direction, and make the vertical line intersect the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2, respectively, to obtain the position coordinates of the upper eyelid lower edge intersection point P2 and the lower eyelid upper edge intersection point P3.
- the processing unit 3 can obtain the left eye-corner edge point P4 and the right eye-corner edge point P5 in the eye contour area A. Specifically, the processing unit 3 calculates along the boundary formed by the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2 using the corner distance formula, and obtains the left eye-corner edge point P4 and the right eye-corner edge point P5. Preferably, the processing unit 3 can also extend a line from the pupil center point P1 and intersect it with the upper eyelid lower edge curve C1 and the lower eyelid upper edge curve C2, respectively, to obtain the position coordinates of the upper eyelid lower edge intersection point P2 and the lower eyelid upper edge intersection point P3.
- the processing unit 3 can calculate according to the plurality of feature variables to obtain a feature parameter group.
- the feature parameter group includes a height difference PS (Ptosis Severity) between the iris diameter and the palpebral fissure height (PFH), and a maximum moving distance LF (Levator Function) when the pupil is forced upward and downward.
- the characteristic parameter group may further include a first distance MRD1 between the pupil center point P1 and the upper eyelid lower edge intersection point P2, a second distance MRD2 between the pupil center point P1 and the lower eyelid upper edge intersection point P3, the palpebral fissure height PFH between the intersection points P2 and P3, a palpebral fissure width PFW between the eye-corner edge points P4 and P5, and an eyeball surface area OSA derived from the iris area A2.
- the processing unit 3 infers the severity of the eyelid ptosis and the levator muscle function by comparing the characteristic parameter group with the preset eyelid ptosis information. For example, the processing unit 3 calculates a first position coordinate P6 of the pupil when forced upward from the feature variables generated from the pupil forced-upward image, and a second position coordinate P7 of the pupil when forced downward from the feature variables generated from the pupil forced-downward image.
- the processing unit 3 then calculates the distance between the first position coordinate P6 and the second position coordinate P7 to generate the maximum moving distance LF.
- if the maximum moving distance LF equals 7 mm, then, taking Table 1 above as an example, the levator muscle function is moderately abnormal.
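The LF computation just described is a simple distance between the two pupil positions; an illustrative sketch (function name hypothetical, coordinates assumed to be in millimetres):

```python
import math

def levator_function_lf(p6, p7):
    """Maximum moving distance LF of the pupil center between maximal
    upward gaze (P6) and maximal downward gaze (P7)."""
    return math.hypot(p6[0] - p7[0], p6[1] - p7[1])
```

The resulting LF value is then graded against the preset eyelid ptosis information (Table 1), e.g. the text's example of LF = 7 mm corresponding to moderately abnormal levator function.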
- the processing unit 3 may further form a virtual digital eye according to the multiple feature variables, and overlap the virtual digital eye with the eye image to analyze whether the virtual digital eye deviates significantly from the eye image. Specifically, the processing unit 3 sets a weight value for each of the scleral region A1, the pupil region A3, and the eyelid curve. Preferably, the processing unit 3 can also set an additional weight value for the left eye-corner edge point P4 and the right eye-corner edge point P5; the formulas for calculating these weight values can be as shown in formulas (1) to (5) above.
- the processing unit 3 can use the weight values as input variables of the weight formula and calculate multiple virtual digital eyes with different weights.
- the processing unit 3 then selects the virtual digital eye with the highest weight among them to replace the original one.
- the calculation formula of the weight value can be shown in the above formula (6).
- the eyelid ptosis detection method and system of the present invention can obtain the patient's eye information using image processing technology and machine vision, use that information to derive data for judging eyelid ptosis, and, based on that data, automatically detect the levator muscle function and the severity of the ptosis. This makes operation convenient, greatly shortens measurement time, and improves measurement consistency.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Physics & Mathematics (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Pathology (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
A method and system for detecting eyelid ptosis, intended to solve the problems that existing manual ptosis examination is time-consuming and that inconsistent measurements between different physicians introduce errors into the results. The method and system include: capturing eye images with a photographing unit; performing image processing on the eye image with a processing unit to generate an edge image; performing image operations on the eye image and the edge image with the processing unit to obtain a plurality of feature variables; calculating a feature parameter group from the plurality of feature variables with the processing unit; and comparing the feature parameter group with preset eyelid ptosis information with the processing unit to infer the severity of the ptosis and the levator muscle function.
Description
本发明是关于一种检测方法及系统,尤其是一种可以通过机器视觉技术推导用以判断眼睑下垂的相关数据,并依该数据推知眼睑下垂严重程度及提眼肌功能正常与否的眼睑下垂检测方法及系统。
眼睑下垂可区分为先天性眼睑下垂及后天性眼睑下垂,造成先天性眼睑下垂的其中一个原因是病患于出生时,其提眼肌发育不良所造成,而造成后天性眼睑下垂的其中一个原因则是提眼肌无张力所引起,导致上眼睑无法张开至正常高度。并且,当病患的眼睑边缘盖住瞳孔后,病患除视野受到影响外,病患为张开上眼睑容易无意识地提高眉毛或抬高下巴,更会造成额头皱纹、肩颈酸痛、腰痛或造成眼睛疲劳等症状。
The choice of surgery for blepharoptosis depends on the degree of droop and/or the state of levator muscle function. Before deciding on an appropriate procedure, a physician therefore measures the patient's static and dynamic eyelid positions manually with a ruler, obtaining data such as the distance from the pupil centre point to the intersection point on the lower edge of the upper eyelid (MRD1), the distance from the pupil centre point to the intersection point on the upper edge of the lower eyelid (MRD2), the ptosis severity, and the levator function, and then analyses the severity of the droop from these data. After the physician has chosen a suitable procedure according to the severity and performed the operation, the same data are measured again manually with a ruler to evaluate the surgical outcome.
However, this existing approach spends a great deal of time measuring data such as MRD1, MRD2, ptosis severity and levator function, and the measurements differ between physicians, easily introducing errors into the results.
In view of this, existing blepharoptosis detection methods indeed still need improvement.
Summary of the Invention
To solve the above problems, an object of the present invention is to provide a blepharoptosis detection method that can derive, through machine vision, the data used to judge blepharoptosis, and from that data infer the severity of the droop and whether the levator muscle functions normally.
Another object of the present invention is to provide a blepharoptosis detection system that can derive, through image processing and machine vision, the data used to judge blepharoptosis, and use that data to automatically detect whether the levator muscle functions normally and how severe the droop is.
The blepharoptosis detection method of the present invention comprises: capturing an eye image; performing image processing on the eye image to produce an edge image; performing image operations on the eye image and the edge image to obtain a plurality of feature variables; computing a feature parameter set from the feature variables; and comparing the feature parameter set against preset blepharoptosis information to infer the severity of the eyelid droop and the function of the levator muscle.
The blepharoptosis detection system of the present invention comprises: a photography unit for capturing an eye image; a storage unit for storing preset blepharoptosis information; and a processing unit coupled to the photography unit and the storage unit. The processing unit performs image processing on the eye image to produce an edge image, performs image operations on the eye image and the edge image to obtain a plurality of feature variables, computes a feature parameter set from the feature variables, and compares the feature parameter set against the preset blepharoptosis information to infer the severity of the eyelid droop and the function of the levator muscle.
Therefore, the blepharoptosis detection method and system of the present invention can obtain the patient's eye information through image processing combined with machine vision, derive from that information the data used to judge blepharoptosis, and use the data to automatically assess levator muscle function and the severity of the droop. Convenient operation, a greatly shortened measurement time and improved measurement consistency can thereby be achieved.
Wherein, the eye image includes a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image. The blepharoptosis detection method of the present invention can thus detect the severity of the droop and the function of the levator muscle at the same time.
Wherein, the feature variables include the respective position coordinates of an eye contour region, a sclera region, an iris region, a pupil region, a pupil centre point, a lower-edge curve of the upper eyelid, an upper-edge curve of the lower eyelid, an upper-eyelid lower-edge intersection point and a lower-eyelid upper-edge intersection point. The method thus provides fairly complete static and dynamic measurement parameters for detecting both the severity of the droop and the levator function.
Wherein, the feature variables further include the respective position coordinates of a left canthus point and a right canthus point, which the method uses to assist in judging the severity of the droop and the levator function.
Wherein, the feature parameter set includes a height difference between an iris diameter and a palpebral fissure height, and a maximum travel distance of the pupil between maximal up-gaze and maximal down-gaze. The method can thus detect the severity of the droop and the levator function at the same time.
Wherein, the feature parameter set further includes a first distance from the pupil centre point to the upper-eyelid lower-edge intersection point, a second distance from the pupil centre point to the lower-eyelid upper-edge intersection point, the palpebral fissure height between the two intersection points, a palpebral fissure width between the left and right canthus points, and an ocular surface area derived from the iris region. These parameters assist the method in judging the severity of the droop and the levator function.
Wherein, a virtual digital eye is formed from the feature variables and overlaid on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image, improving the accuracy of the detection.
Wherein, the eye image includes a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image. The blepharoptosis detection system of the present invention can thus detect the severity of the droop and the function of the levator muscle at the same time.
Wherein, the feature variables include the respective position coordinates of an eye contour region, a sclera region, an iris region, a pupil region, a pupil centre point, a lower-edge curve of the upper eyelid, an upper-edge curve of the lower eyelid, an upper-eyelid lower-edge intersection point and a lower-eyelid upper-edge intersection point. The system thus provides complete static and dynamic measurement parameters for detecting both the severity of the droop and the levator function.
Wherein, the feature variables further include the respective position coordinates of a left canthus point and a right canthus point, which the system uses to assist in judging the severity of the droop and the levator function.
Wherein, the feature parameter set includes a height difference between an iris diameter and a palpebral fissure height, and a maximum travel distance of the pupil between maximal up-gaze and maximal down-gaze. The system can thus detect the severity of the droop and the levator function at the same time.
Wherein, the feature parameter set further includes a first distance from the pupil centre point to the upper-eyelid lower-edge intersection point, a second distance from the pupil centre point to the lower-eyelid upper-edge intersection point, the palpebral fissure height between the two intersection points, a palpebral fissure width between the left and right canthus points, and an ocular surface area derived from the iris region. These parameters assist the system in judging the severity of the droop and the levator function.
Wherein, the processing unit forms a virtual digital eye from the feature variables and overlays it on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image, improving the accuracy of the system's detection.
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1: flow chart of a preferred embodiment of the present invention;
Fig. 2: schematic eye image of a straight-gaze pupil image in a preferred embodiment of the present invention;
Fig. 3: schematic eye images of a maximal up-gaze pupil image and a maximal down-gaze pupil image in a preferred embodiment of the present invention;
Fig. 4: system architecture diagram of a preferred embodiment of the present invention.
Description of reference numerals
S1 image capture step S2 image processing step
S3 feature extraction step S4 feature computation step
S5 feature analysis step S6 feature overlay step
1 photography unit 2 storage unit
3 processing unit LF maximum travel distance
A eye contour region A1 sclera region
A2 iris region A3 pupil region
C1 lower-edge curve of the upper eyelid C2 upper-edge curve of the lower eyelid
P1 pupil centre point P2 upper-eyelid lower-edge intersection point
P3 lower-eyelid upper-edge intersection point P4 left canthus point
P5 right canthus point P6 first position coordinate
P7 second position coordinate
MRD1 first distance MRD2 second distance
PFH palpebral fissure height PFW palpebral fissure width
PS height difference.
In order to make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings:
Referring to Fig. 1, a preferred embodiment of the blepharoptosis detection method of the present invention includes an image capture step S1, an image processing step S2, a feature extraction step S3, a feature computation step S4 and a feature analysis step S5.
Referring also to Figs. 2-3, the image capture step S1 captures an eye image, which is a colour image. Preferably, the eye image can include a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image. Specifically, the image capture step S1 can capture a face image and select a region of interest (ROI) from the face image as the eye image. The starting pixel coordinates and the length and width of the rectangle forming the ROI are set so as to cover eye parts including the upper eyelid, the lower eyelid, the sclera, the iris and the pupil; this is understood by those of ordinary skill in the art and is not described further here.
The image processing step S2 performs image processing on the eye image to produce an edge image. Specifically, greyscale conversion is performed on the eye image to separate its foreground and background, producing a greyscale image. To retain the parts of the eye image of interest, simplify subsequent processing and improve overall computational efficiency, the image processing step S2 binarizes the greyscale image to produce a binary image; by way of example and not limitation, the binarization threshold can be a fixed threshold or an adaptive threshold (e.g. Otsu's method, the bimodal method, the P-tile method or an iterative method). The image processing step S2 then performs edge detection on the binary image to produce the edge image, further reducing the data volume of the eye image substantially and discarding likely irrelevant information while retaining the image's important structural properties; by way of example and not limitation, the edge detection can use an algorithm such as Sobel, Prewitt or Canny.
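As a rough illustration of the chain in step S2, the sketch below implements Otsu thresholding and a Sobel gradient with plain NumPy; the function names and the 8-bit input format are assumptions for demonstration, not part of the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit greyscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_w = np.cumsum(hist)                       # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))   # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2    # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def sobel_edges(binary):
    """Gradient magnitude of a binary image using 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(binary.astype(float), 1, mode="edge")
    h, w = binary.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)
```

In practice a library routine (e.g. an OpenCV equivalent) would replace these loops; the sketch only shows the computation the step describes.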
The feature extraction step S3 performs image operations on the eye image and the edge image to obtain a plurality of feature variables used to analyse the severity of the droop and the levator function. The feature variables can include the respective position coordinates of an eye contour region A, a sclera region A1, an iris region A2, a pupil region A3, a pupil centre point P1, a lower-edge curve C1 of the upper eyelid, an upper-edge curve C2 of the lower eyelid, an upper-eyelid lower-edge intersection point P2 and a lower-eyelid upper-edge intersection point P3. Preferably, the feature variables can further include the respective position coordinates of a left canthus point P4 and a right canthus point P5.
Specifically, the feature extraction step S3 can apply a symmetry transform to the eye image to obtain an eye region. The symmetry transform is applied to each pixel of the eye image to produce a plurality of symmetry transform results, and the position coordinate of the pixel with the maximum value among these results is taken as the initial point from which the eye contour region A is generated. The eye contour region A can include eye features such as the sclera region A1, the iris region A2, the pupil region A3, the upper eyelid and the lower eyelid. Moreover, since the sclera has relatively low colour saturation compared with eye features such as the pupil, the iris, the upper eyelid and the lower eyelid, the feature extraction step S3 can convert the eye image from the RGB colour space to the HSV colour space to produce an HSV image, take the S-channel image from the HSV image, and form the sclera region A1 from the pixels of the S-channel image whose saturation is below a threshold. The setting of this threshold is understood by those of ordinary skill in the art and is not described further.
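The sclera segmentation described above (low saturation in HSV) can be sketched as follows; the 0.2 saturation threshold is an illustrative assumption, since the patent leaves the threshold to the skilled reader.

```python
import numpy as np

def saturation_channel(rgb):
    """S channel of an 8-bit RGB image, with values in [0, 1] as in HSV."""
    rgb = rgb.astype(float) / 255.0
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    # S = (max - min) / max, defined as 0 where max == 0
    return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)

def sclera_mask(rgb, s_thresh=0.2):
    """Pixels whose saturation falls below the threshold are taken as sclera."""
    return saturation_channel(rgb) < s_thresh
```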
On the other hand, the feature extraction step S3 can apply a symmetry transform to the edge image to obtain a plurality of candidate pupil regions; in this embodiment the symmetry transform can be the fast radial symmetry transform (FRST). The feature extraction step S3 computes, for each pixel of the edge image, two projection points along its gradient direction, and obtains a plurality of radial symmetry transform results, i.e. the candidate pupil regions, from the orientation projection image and the magnitude projection image formed from those projection points. A pupil black-value ratio is computed for each candidate pupil region, and the candidate with the largest ratio is taken as the pupil region A3; the pupil black-value ratio is the fraction of black pixels among all pixels of a candidate region. The position coordinate of the pupil centre point P1 can then be located within the pupil region A3.
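The black-value-ratio selection among candidate pupil regions might look like the following sketch. The candidate masks are assumed to come from a radial symmetry transform not shown here, and the black threshold of 50 is an illustrative assumption.

```python
import numpy as np

def black_value_ratio(gray, region_mask, black_thresh=50):
    """Fraction of pixels inside a candidate region that count as black."""
    pix = gray[region_mask]
    return float((pix < black_thresh).mean()) if pix.size else 0.0

def pick_pupil(gray, candidate_masks):
    """Choose the candidate region with the highest black-value ratio and
    return its index together with the centroid used as pupil centre P1."""
    ratios = [black_value_ratio(gray, m) for m in candidate_masks]
    best = int(np.argmax(ratios))
    ys, xs = np.nonzero(candidate_masks[best])
    return best, (xs.mean(), ys.mean())
```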
The feature extraction step S3 can obtain the lower-edge curve C1 of the upper eyelid and the upper-edge curve C2 of the lower eyelid within the eye contour region A. Specifically, using the gradient orientation, the tangent slope of each pixel on the boundary of the sclera region A1 relative to the eye contour region A is computed; the junctions where the boundary pixels of the sclera region A1 meet the eye contour region A with zero tangent slope indicate the eyelid curve. According to the position coordinates of the eyelid curve, it is separated into the lower-edge curve C1 of the upper eyelid and the upper-edge curve C2 of the lower eyelid. In addition, a vertical line can be extended from the pupil centre point P1 perpendicular to the plane of the straight-gaze direction so that it intersects the curves C1 and C2 respectively, obtaining the respective position coordinates of the upper-eyelid lower-edge intersection point P2 and the lower-eyelid upper-edge intersection point P3.
Preferably, the feature extraction step S3 can further obtain the respective position coordinates of the left canthus point P4 and the right canthus point P5 within the eye contour region A. Specifically, a corner distance is used to compute the junctions of the curves C1 and C2, giving the respective position coordinates of P4 and P5.
Referring to Figs. 2-3, the feature computation step S4 computes a feature parameter set from the feature variables. For example, the feature parameter set includes a height difference PS (Ptosis Severity) between the iris diameter and the palpebral fissure height PFH (Palpebral Fissure Height), and a maximum travel distance LF of the pupil between maximal up-gaze and maximal down-gaze. Preferably, the feature parameter set can further include a first distance MRD1 from the pupil centre point P1 to the upper-eyelid lower-edge intersection point P2, a second distance MRD2 from P1 to the lower-eyelid upper-edge intersection point P3, a palpebral fissure height PFH between P2 and P3, a palpebral fissure width PFW (Palpebral Fissure Width) between the canthus points P4 and P5, and an ocular surface area OSA (Ocular Surface Area) derived from the iris region A2.
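A minimal sketch of the static parameter computation in step S4, assuming the landmark coordinates have already been converted to millimetres and that the eyelid intersection points sit vertically above and below the pupil centre (function and argument names are illustrative):

```python
def feature_parameters(p1, p2, p3, p4, p5, iris_diameter):
    """Static ptosis parameters from landmark coordinates (x, y) in mm.
    p1: pupil centre P1; p2/p3: upper/lower eyelid intersection points
    P2/P3; p4/p5: left/right canthus points P4/P5."""
    mrd1 = abs(p2[1] - p1[1])   # pupil centre to upper-lid margin
    mrd2 = abs(p1[1] - p3[1])   # pupil centre to lower-lid margin
    pfh = abs(p2[1] - p3[1])    # palpebral fissure height
    pfw = abs(p5[0] - p4[0])    # palpebral fissure width
    ps = iris_diameter - pfh    # ptosis severity (height difference)
    return {"MRD1": mrd1, "MRD2": mrd2, "PFH": pfh, "PFW": pfw, "PS": ps}
```

With an 11 mm iris and the lid margins 4 mm above and below the pupil centre, this reproduces the document's worked values (PFH = 8 mm, PS = 3 mm).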
The feature analysis step S5 compares the feature parameter set against preset blepharoptosis information to infer the severity of the droop and the levator function. For example, if the iris diameter computed from the iris region A2 is 11 mm, and the palpebral fissure height PFH computed between the intersection points P2 and P3 is 8 mm, then the height difference between the iris diameter and the PFH is 3 mm, i.e. the droop is of mild severity. The preset blepharoptosis information can be as shown in Table 1 below:
Table 1 Preset blepharoptosis information
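A grading lookup in the spirit of Table 1 might be sketched as follows. The table itself does not survive in this text, so the cut-offs below are assumptions chosen only to reproduce the worked example (a 3 mm height difference grades as mild):

```python
def grade_ptosis_severity(ps_mm):
    """Illustrative grading of the iris-diameter/PFH height difference PS.
    The cut-offs are assumptions for demonstration, not the patent's Table 1."""
    if ps_mm <= 0:
        return "none"
    if ps_mm <= 3:
        return "mild"
    if ps_mm <= 5:
        return "moderate"
    return "severe"
```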
The blepharoptosis detection method of the present invention can further include a feature overlay step S6, which forms a virtual digital eye from the feature variables and overlays it on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image. Specifically, the feature overlay step S6 can assign a weight value to each of the sclera region A1, the pupil region A3 and the eyelid curve, where the weight value of the sclera region A1 can be calculated as shown in formula (1) below:
$D_{color} = \alpha \sum \Delta P_{sclera} - \beta \sum \Delta P_{skin}$ (1)

where $P_{sclera}$ denotes a pixel on the sclera; $P_{skin}$ denotes a pixel on the skin; $\alpha$ controls the weight of $P_{sclera}$, $\beta$ controls the weight of $P_{skin}$, and $\alpha + \beta = 1$.
The weight value of the pupil region A3 can be calculated as shown in formula (2) below:

where $eye_{total}$ denotes all pixels of the pupil; $eye_{black}$ denotes the black pixels of the pupil.
The weight value of the eyelid curve can be calculated as shown in formula (3) below:

where $\Omega$ denotes the boundary of the eye contour region A; $|\Omega|$ denotes the length of that boundary; $\theta_{(x,y)}$ denotes the gradient direction at coordinate $(x,y)$; $m_{(x,y)}$ denotes the tangent slope of the eye contour region A.
Preferably, the feature overlay step S6 can further assign a weight value to the left canthus point P4 and the right canthus point P5, calculated as shown in formulas (4)-(5) below:

$D_{cor} = |H| - k \cdot \mathrm{trace}(H)^{2}$ (4)

where $w(x,y)$ denotes the weighting value centred at coordinate $(x,y)$; $G_x$ denotes the derivative along the x axis; $G_y$ denotes the derivative along the y axis; $k$ denotes the Harris algorithm parameter.
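Formula (4) is the Harris corner response. A windowed per-pixel sketch, assuming the derivative images Gx and Gy are already available; the weighting w(x, y) of formula (5) is simplified here to a box window, which is an assumption for demonstration:

```python
import numpy as np

def harris_response(gx, gy, k=0.04, win=1):
    """Per-pixel Harris measure det(H) - k * trace(H)^2, where H is the
    structure matrix summed over a (2*win+1)^2 box window around the pixel."""
    h, w = gx.shape
    A = gx * gx
    B = gy * gy
    C = gx * gy
    resp = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            a = A[i0:i1, j0:j1].sum()
            b = B[i0:i1, j0:j1].sum()
            c = C[i0:i1, j0:j1].sum()
            resp[i, j] = (a * b - c * c) - k * (a + b) ** 2
    return resp
```

Large positive responses mark corner-like points (such as the canthi), while responses along a single-direction edge come out negative.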
The above weight values are used as input variables of the weight formula to compute a plurality of virtual digital eyes with different weights, and the one with the highest weight is selected from among them to replace the virtual digital eye formed from the original feature variables. The weight value can be calculated as shown in formula (6) below:

where $D_{\theta}$ denotes the weight value of the eyelid curve; $D_{color}$ denotes the weight value of the sclera region A1; $D_{sym}$ denotes the weight value of the pupil region A3; $D_{cor}$ denotes the weight value of the canthus points; $\sigma_i$ denotes the optimal parameter value obtained by trial and error; $d_i$ denotes the respective parameter values of $D_{\theta}$, $D_{color}$, $D_{sym}$ and $D_{cor}$; $\mu_i$ denotes the weighted mean of $D_{\theta}$, $D_{color}$, $D_{sym}$ and $D_{cor}$.
Referring to Fig. 4, a preferred embodiment of the blepharoptosis detection system of the present invention includes a photography unit 1, a storage unit 2 and a processing unit 3, the processing unit 3 being coupled to the photography unit 1 and the storage unit 2.
The photography unit 1 can capture a face image, preferably an eye image, which can include a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image. By way of example and not limitation, the photography unit 1 can be a charge-coupled device (CCD) colour camera or a complementary metal-oxide-semiconductor (CMOS) colour camera.
The storage unit 2 can be any storage medium for electronic data, for example a hard disk or a memory, but is not limited thereto. The storage unit 2 stores the preset blepharoptosis information, which can be as shown in Table 1 above.
The processing unit 3 is coupled to the photography unit 1 and the storage unit 2, and can be a circuit unit with data processing, signal generation and control functions, for example a microprocessor, a microcontroller, a digital signal processor, a logic circuit or an application-specific integrated circuit (ASIC); in this embodiment the processing unit 3 can be a microprocessor, but is not limited thereto. The processing unit 3 can perform image processing on the eye image to produce an edge image. Specifically, the image processing can include greyscale conversion, binarization and edge detection performed on the eye image to produce the edge image. When the photography unit 1 captures a face image, the processing unit 3 can set a region of interest within the face image as the eye image. The starting pixel coordinates and the length and width of the rectangle forming the ROI are set so as to cover eye parts including the upper eyelid, the lower eyelid, the sclera, the iris and the pupil; this is understood by those of ordinary skill in the art and is not described further.
The processing unit 3 can perform image operations on the eye image and the edge image to obtain a plurality of feature variables used to analyse the severity of the droop and the levator function. The feature variables can include the respective position coordinates of an eye contour region A, a sclera region A1, an iris region A2, a pupil region A3, a pupil centre point P1, a lower-edge curve C1 of the upper eyelid, an upper-edge curve C2 of the lower eyelid, an upper-eyelid lower-edge intersection point P2 and a lower-eyelid upper-edge intersection point P3. Preferably, the feature variables can further include the respective position coordinates of a left canthus point P4 and a right canthus point P5.
Specifically, the processing unit 3 applies a symmetry transform to the eye image to obtain an eye region. The processing unit 3 can apply the symmetry transform to each pixel of the eye image to produce a plurality of symmetry transform results, and takes the position coordinate of the pixel with the maximum value among these results as the initial point from which the eye contour region A is generated. The eye contour region A can include eye features such as the sclera region A1, the iris region A2, the pupil region A3, the upper eyelid and the lower eyelid. Moreover, since the sclera has relatively low colour saturation compared with eye features such as the pupil, the iris, the upper eyelid and the lower eyelid, the processing unit 3 converts the eye image from the RGB colour space to the HSV colour space to produce an HSV image, takes the S-channel image from the HSV image, and forms the sclera region A1 from the pixels of the S-channel image whose saturation is below a threshold. The setting of this threshold is understood by those of ordinary skill in the art and is not described further.
On the other hand, the processing unit 3 applies a symmetry transform to the edge image to obtain a plurality of candidate pupil regions; in this embodiment the symmetry transform can be the fast radial symmetry transform. The processing unit 3 computes, for each pixel of the edge image, two projection points along its gradient direction, and obtains a plurality of radial symmetry transform results, i.e. the candidate pupil regions, from the orientation projection image and the magnitude projection image formed from those projection points. The processing unit 3 computes a pupil black-value ratio for each candidate pupil region and takes the candidate with the largest ratio as the pupil region A3; the pupil black-value ratio is the fraction of black pixels among all pixels of a candidate region. The processing unit 3 can then locate the position coordinate of the pupil centre point P1 within the pupil region A3.
The processing unit 3 can obtain the lower-edge curve C1 of the upper eyelid and the upper-edge curve C2 of the lower eyelid within the eye contour region A. Specifically, the processing unit 3 uses a gradient-orientation formula to compute the tangent slope of each pixel on the boundary of the sclera region A1 relative to the eye contour region A; the junctions where the boundary pixels of the sclera region A1 meet the eye contour region A with zero tangent slope indicate the eyelid curve. The processing unit 3 can separate the eyelid curve into the curves C1 and C2 according to its position coordinates. In addition, the processing unit 3 can extend a vertical line from the pupil centre point P1 perpendicular to the plane of the straight-gaze direction so that it intersects the curves C1 and C2 respectively, obtaining the respective position coordinates of the upper-eyelid lower-edge intersection point P2 and the lower-eyelid upper-edge intersection point P3.
Preferably, the processing unit 3 can obtain the left canthus point P4 and the right canthus point P5 within the eye contour region A. Specifically, the processing unit 3 computes the junctions of the curves C1 and C2 with a corner-distance formula to obtain P4 and P5. Preferably, the processing unit 3 can further extend a line from the pupil centre point P1 intersecting the curves C1 and C2 respectively, to obtain the respective position coordinates of the intersection points P2 and P3.
The processing unit 3 can compute a feature parameter set from the feature variables. For example, the feature parameter set includes a height difference PS (Ptosis Severity) between the iris diameter and the palpebral fissure height PFH (Palpebral Fissure Height), and a maximum travel distance LF of the pupil between maximal up-gaze and maximal down-gaze. Preferably, the feature parameter set can further include a first distance MRD1 from the pupil centre point P1 to the upper-eyelid lower-edge intersection point P2, a second distance MRD2 from P1 to the lower-eyelid upper-edge intersection point P3, a palpebral fissure height PFH between P2 and P3, a palpebral fissure width PFW (Palpebral Fissure Width) between the canthus points P4 and P5, and an ocular surface area OSA (Ocular Surface Area) derived from the iris region A2.
The processing unit 3 compares the feature parameter set against preset blepharoptosis information to infer the severity of the droop and the levator function. For example, the processing unit 3 computes a first position coordinate P6 of the pupil at maximal up-gaze from the feature variables of the up-gaze image, and a second position coordinate P7 at maximal down-gaze from the feature variables of the down-gaze image, then computes the distance between P6 and P7 to produce the maximum travel distance LF. When LF equals 7 mm, then taking Table 1 above as an example, the levator function is moderately abnormal.
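The LF computation and its grading can be sketched as below; the grading cut-offs are assumptions chosen only to reproduce the worked example (LF = 7 mm reads as moderately abnormal), since Table 1 does not survive in this text:

```python
import math

def levator_function(p6, p7):
    """Maximum pupil travel LF: Euclidean distance between the up-gaze
    position P6 and the down-gaze position P7 (coordinates in mm)."""
    return math.hypot(p6[0] - p7[0], p6[1] - p7[1])

def grade_levator_function(lf_mm):
    """Illustrative grading of LF; the cut-offs are assumptions for
    demonstration, not the patent's Table 1."""
    if lf_mm >= 10:
        return "normal"
    if lf_mm >= 8:
        return "mildly abnormal"
    if lf_mm >= 5:
        return "moderately abnormal"
    return "severely abnormal"
```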
In the blepharoptosis detection system of the present invention, the processing unit 3 can further form a virtual digital eye from the feature variables and overlay it on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image. Specifically, the processing unit 3 assigns a weight value to each of the sclera region A1, the pupil region A3 and the eyelid curve; preferably, the processing unit 3 can additionally assign a weight value to the left canthus point P4 and the right canthus point P5. These weight values can be calculated as shown in formulas (1)-(5) above. The processing unit 3 can use the weight values as input variables of the weight formula and compute a plurality of virtual digital eyes with different weights, and selects the one with the highest weight from among them to replace the virtual digital eye formed from the original feature variables. The weight value can be calculated as shown in formula (6) above.
In summary, the blepharoptosis detection method and system of the present invention can obtain the patient's eye information through image processing combined with machine vision, derive from that information the data used to judge blepharoptosis, and use the data to automatically assess levator muscle function and the severity of the droop. Convenient operation, a greatly shortened measurement time and improved measurement consistency can thereby be achieved.
Claims (14)
- A blepharoptosis detection method, characterized by comprising: capturing an eye image; performing image processing on the eye image to produce an edge image; performing image operations on the eye image and the edge image to obtain a plurality of feature variables; computing a feature parameter set from the feature variables; and comparing the feature parameter set against preset blepharoptosis information to infer the severity of the eyelid droop and the function of the levator muscle.
- The blepharoptosis detection method according to claim 1, characterized in that the eye image includes a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image.
- The blepharoptosis detection method according to claim 2, characterized in that the feature variables include the respective position coordinates of an eye contour region, a sclera region, an iris region, a pupil region, a pupil centre point, a lower-edge curve of the upper eyelid, an upper-edge curve of the lower eyelid, an upper-eyelid lower-edge intersection point and a lower-eyelid upper-edge intersection point.
- The blepharoptosis detection method according to claim 3, characterized in that the feature variables further include the respective position coordinates of a left canthus point and a right canthus point.
- The blepharoptosis detection method according to claim 4, characterized in that the feature parameter set includes a height difference between an iris diameter and a palpebral fissure height, and a maximum travel distance of the pupil between maximal up-gaze and maximal down-gaze.
- The blepharoptosis detection method according to claim 5, characterized in that the feature parameter set further includes a first distance from the pupil centre point to the upper-eyelid lower-edge intersection point, a second distance from the pupil centre point to the lower-eyelid upper-edge intersection point, the palpebral fissure height between the upper-eyelid lower-edge intersection point and the lower-eyelid upper-edge intersection point, a palpebral fissure width between the left canthus point and the right canthus point, and an ocular surface area derived from the iris region.
- The blepharoptosis detection method according to claim 4, characterized in that a virtual digital eye is formed from the feature variables and overlaid on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image.
- A blepharoptosis detection system, characterized by comprising: a photography unit for capturing an eye image; a storage unit for storing preset blepharoptosis information; and a processing unit coupled to the photography unit and the storage unit, the processing unit performing image processing on the eye image to produce an edge image, performing image operations on the eye image and the edge image to obtain a plurality of feature variables, computing a feature parameter set from the feature variables, and comparing the feature parameter set against the preset blepharoptosis information to infer the severity of the eyelid droop and the function of the levator muscle.
- The blepharoptosis detection system according to claim 8, characterized in that the eye image includes a straight-gaze pupil image, a maximal up-gaze pupil image and a maximal down-gaze pupil image.
- The blepharoptosis detection system according to claim 9, characterized in that the feature variables include the respective position coordinates of an eye contour region, a sclera region, an iris region, a pupil region, a pupil centre point, a lower-edge curve of the upper eyelid, an upper-edge curve of the lower eyelid, an upper-eyelid lower-edge intersection point and a lower-eyelid upper-edge intersection point.
- The blepharoptosis detection system according to claim 10, characterized in that the feature variables further include the respective position coordinates of a left canthus point and a right canthus point.
- The blepharoptosis detection system according to claim 11, characterized in that the feature parameter set includes a height difference between an iris diameter and a palpebral fissure height, and a maximum travel distance of the pupil between maximal up-gaze and maximal down-gaze.
- The blepharoptosis detection system according to claim 12, characterized in that the feature parameter set further includes a first distance from the pupil centre point to the upper-eyelid lower-edge intersection point, a second distance from the pupil centre point to the lower-eyelid upper-edge intersection point, the palpebral fissure height between the upper-eyelid lower-edge intersection point and the lower-eyelid upper-edge intersection point, a palpebral fissure width between the left canthus point and the right canthus point, and an ocular surface area derived from the iris region.
- The blepharoptosis detection system according to claim 11, characterized in that the processing unit forms a virtual digital eye from the feature variables and overlays it on the eye image to analyse whether there is a large deviation between the virtual digital eye and the eye image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880095482.XA CN112384127B (zh) | 2018-07-27 | 2018-07-27 | 眼睑下垂检测方法及系统 |
US17/263,428 US11877800B2 (en) | 2018-07-27 | 2018-07-27 | Method and system for detecting blepharoptosis |
PCT/CN2018/097367 WO2020019286A1 (zh) | 2018-07-27 | 2018-07-27 | 眼睑下垂检测方法及系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/097367 WO2020019286A1 (zh) | 2018-07-27 | 2018-07-27 | 眼睑下垂检测方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020019286A1 true WO2020019286A1 (zh) | 2020-01-30 |
Family
ID=69180602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/097367 WO2020019286A1 (zh) | 2018-07-27 | 2018-07-27 | 眼睑下垂检测方法及系统 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11877800B2 (zh) |
CN (1) | CN112384127B (zh) |
WO (1) | WO2020019286A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220383502A1 (en) * | 2021-05-28 | 2022-12-01 | Blinktbi, Inc. | Systems and methods for eyelid localization |
US20230214996A1 (en) * | 2021-12-30 | 2023-07-06 | National Yang Ming Chiao Tung University | Eyes measurement system, method and computer-readable medium thereof |
SE2250299A1 (en) * | 2022-03-04 | 2023-09-05 | Tobii Ab | Eye openness |
CN115281601A (zh) * | 2022-08-18 | 2022-11-04 | 上海市内分泌代谢病研究所 | 一种眼裂宽度测量装置及其使用方法 |
CN115886717B (zh) * | 2022-08-18 | 2023-09-29 | 上海佰翊医疗科技有限公司 | 一种眼裂宽度的测量方法、装置和存储介质 |
WO2024076441A2 (en) * | 2022-10-06 | 2024-04-11 | The George Washington University | Eye segmentation system for telehealth myasthenia gravis physical examination |
WO2024182845A1 (en) * | 2023-03-06 | 2024-09-12 | Newsouth Innovations Pty Limited | Eye models for mental state analysis |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010056228A1 (en) * | 2000-06-27 | 2001-12-27 | Drdc Limited | Diagnosis system, diagnosis data producing method, information processing device, terminal device and recording medium used in the diagnosis data producing method |
CN101264007A (zh) * | 2007-03-14 | 2008-09-17 | 爱信精机株式会社 | 眼睑检测装置及其程序 |
CN101866420A (zh) * | 2010-05-28 | 2010-10-20 | 中山大学 | 一种用于光学体全息虹膜识别的图像前处理方法 |
US8684529B2 (en) * | 2011-04-28 | 2014-04-01 | Carl Zeiss Meditec, Inc. | Systems and methods for improved visual field testing |
CN108053615A (zh) * | 2018-01-10 | 2018-05-18 | 山东大学 | 基于微表情的驾驶员疲劳驾驶状态检测方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3880475B2 (ja) * | 2002-07-12 | 2007-02-14 | キヤノン株式会社 | 眼科装置 |
US8223143B2 (en) * | 2006-10-27 | 2012-07-17 | Carl Zeiss Meditec, Inc. | User interface for efficiently displaying relevant OCT imaging data |
US9298985B2 (en) * | 2011-05-16 | 2016-03-29 | Wesley W. O. Krueger | Physiological biosensor system and method for controlling a vehicle or powered equipment |
CN103493100B (zh) * | 2011-04-19 | 2016-04-06 | 爱信精机株式会社 | 眼睑检测装置、眼睑检测方法 |
US9916538B2 (en) * | 2012-09-15 | 2018-03-13 | Z Advanced Computing, Inc. | Method and system for feature detection |
US9041727B2 (en) * | 2012-03-06 | 2015-05-26 | Apple Inc. | User interface tools for selectively applying effects to image |
US8971617B2 (en) * | 2012-03-06 | 2015-03-03 | Apple Inc. | Method and interface for converting images to grayscale |
US11956414B2 (en) * | 2015-03-17 | 2024-04-09 | Raytrx, Llc | Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing |
CN105559802B (zh) | 2015-07-29 | 2018-11-02 | 北京工业大学 | 基于注意和情感信息融合的抑郁诊断系统及数据处理方法 |
EP3337383B1 (en) * | 2015-08-21 | 2024-10-16 | Magic Leap, Inc. | Eyelid shape estimation |
US20180173011A1 (en) * | 2016-12-21 | 2018-06-21 | Johnson & Johnson Vision Care, Inc. | Capacitive sensing circuits and methods for determining eyelid position using the same |
-
2018
- 2018-07-27 US US17/263,428 patent/US11877800B2/en active Active
- 2018-07-27 CN CN201880095482.XA patent/CN112384127B/zh active Active
- 2018-07-27 WO PCT/CN2018/097367 patent/WO2020019286A1/zh active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021260526A1 (en) * | 2020-06-23 | 2021-12-30 | Mor Research Applications Ltd. | System and method for characterizing droopy eyelid |
KR20220041431A (ko) | 2020-09-25 | 2022-04-01 | 의료법인 성광의료재단 | 다양한 눈꺼풀질환을 진단하기 위한 정보를 제공하는 방법 및 이를 이용한 장치 |
KR102421748B1 (ko) * | 2020-09-25 | 2022-07-15 | 의료법인 성광의료재단 | 다양한 눈꺼풀질환을 진단하기 위한 정보를 제공하는 방법 및 이를 이용한 장치 |
CN114821754A (zh) * | 2022-04-27 | 2022-07-29 | 南昌虚拟现实研究院股份有限公司 | 半闭眼图像生成方法、装置、可读存储介质及电子设备 |
CN115908237A (zh) * | 2022-08-18 | 2023-04-04 | 上海佰翊医疗科技有限公司 | 一种眼裂宽度的测量方法、装置和存储介质 |
CN115908237B (zh) * | 2022-08-18 | 2023-09-08 | 上海佰翊医疗科技有限公司 | 一种眼裂宽度的测量方法、装置和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20210298595A1 (en) | 2021-09-30 |
CN112384127B (zh) | 2023-11-10 |
CN112384127A (zh) | 2021-02-19 |
US11877800B2 (en) | 2024-01-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020019286A1 (zh) | 眼睑下垂检测方法及系统 | |
US11250241B2 (en) | Face image processing methods and apparatuses, and electronic devices | |
JP4307496B2 (ja) | 顔部位検出装置及びプログラム | |
TWI694809B (zh) | 檢測眼球運動的方法、其程式、該程式的記憶媒體以及檢測眼球運動的裝置 | |
EP2888718B1 (en) | Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation | |
CN104463159B (zh) | 一种定位虹膜的图像处理方法和装置 | |
US20150029461A1 (en) | Cycloduction measurement device, cycloduction measurement method, and cycloduction measurement program | |
JP6956986B1 (ja) | 判定方法、判定装置、及び判定プログラム | |
Septiarini et al. | Automatic detection of peripapillary atrophy in retinal fundus images using statistical features | |
WO2018078857A1 (ja) | 視線推定装置、視線推定方法及びプログラム記録媒体 | |
Argade et al. | Automatic detection of diabetic retinopathy using image processing and data mining techniques | |
WO2019073962A1 (ja) | 画像処理装置及びプログラム | |
Nugroho et al. | Automated segmentation of optic disc area using mathematical morphology and active contour | |
TWI673034B (zh) | 眼瞼下垂檢測方法及系統 | |
Bhangdiya | Cholesterol presence detection using iris recognition | |
Akhade et al. | Automatic optic disc detection in digital fundus images using image processing techniques | |
CN111588345A (zh) | 眼部疾病检测方法、ar眼镜及可读存储介质 | |
CN116030042A (zh) | 一种针对医生目诊的诊断装置、方法、设备及存储介质 | |
Singh et al. | Assessment of disc damage likelihood scale (DDLS) for automated glaucoma diagnosis | |
US20240104923A1 (en) | Focus determination device, iris authentication device, focus determination method, and recording medium | |
JP6994704B1 (ja) | 虹彩検出方法、虹彩検出装置、及び虹彩検出プログラム | |
CN114283176A (zh) | 基于人眼视频的瞳孔轨迹生成方法 | |
Sahu et al. | Impact of Novel Pre-Processing Techniques on Retinal Images | |
Nirmala et al. | Adaptive gamma correction enhanced retinal image for automated detection of glaucoma | |
Prabhu et al. | Study of Retinal Biometrics with Respect to Peripheral Degeneration with Clinically Significant Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18927735 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18927735 Country of ref document: EP Kind code of ref document: A1 |