CN106934365A - A reliable glaucoma patient self-detection method - Google Patents

A reliable glaucoma patient self-detection method

Info

Publication number
CN106934365A
CN106934365A CN201710139010.7A
Authority
CN
China
Prior art keywords
face
human eye
detection
iris
pupil center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710139010.7A
Other languages
Chinese (zh)
Inventor
Wang Jun (王军)
Li Rifu (李日富)
Jiang Weixin (江伟鑫)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SYSU CMU Shunde International Joint Research Institute
National Sun Yat Sen University
Original Assignee
SYSU CMU Shunde International Joint Research Institute
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SYSU CMU Shunde International Joint Research Institute and National Sun Yat Sen University
Priority: CN201710139010.7A
Publication of CN106934365A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a reliable glaucoma patient self-detection method. Face localization: a facial image is captured and the face region is identified by skin color segmentation; the face boundary is determined and the face is extracted. Eye detection: after the face is extracted, the eye region is identified. The pupil-center/eye-corner position vector is extracted to judge whether the gaze direction has moved, and, with each principal direction calibrated in advance, the actual gaze direction is estimated in real time from the value of the pupil-center/eye-corner position vector, thereby enabling glaucoma detection. The present invention provides validity verification of detection data for visualFieldseasy, improving the accuracy of judging whether a tester suffers from glaucoma.

Description

A reliable glaucoma patient self-detection method
Technical field
The present invention relates to the field of medical devices, and more particularly to a reliable glaucoma patient self-detection method.
Background technology
Glaucoma is a disease caused by a sustained rise in intraocular pressure. When the internal pressure is too high, irreversible damage is done to the tissue inside the eye, and in later stages it can cause blindness. Treatment in the early stage is therefore necessary to reduce the risk. In China, an annual eye examination is not yet a widespread habit. Some glaucoma patients have very good early vision, but by the time the disease is discovered it is already at a late stage.
At present there is a free, public-interest glaucoma detection application on the market named visualFieldseasy, which users in need can run at any time. It improves the probability of discovering glaucoma at an early stage and reduces the chance of blindness for glaucoma patients, a great contribution to society. visualFieldseasy takes "glaucoma narrows the visual field" as its theoretical basis for detecting glaucoma. While running, it requires the user to cover one eye and fix the other eye on one corner of the screen. The software dynamically produces flickering information points at other positions on the screen and, by testing whether the user reacts to these points, determines the user's field of view and judges whether the user may suffer from glaucoma.
visualFieldseasy has one serious problem: some detection data may be invalid. Because the detection process takes a certain amount of time, the user will more or less be influenced by external information, so that the gaze does not stay fixed, as required, on the small red circle; the test results for the small white circles appearing at those moments are invalid. Invalid detection data can affect the judgment of whether the user suffers from glaucoma, and may even give a glaucoma patient a false judgment of not suffering from glaucoma, causing the patient to miss the window for treatment.
Summary of the invention
The present invention overcomes the invalid-data problem of visualFieldseasy and provides a reliable glaucoma patient self-detection method, improving the accuracy of judging whether a tester suffers from glaucoma.
To solve the above technical problem, the technical scheme of the present invention is as follows:
A reliable glaucoma patient self-detection method comprises the following steps:
S1: Face localization: capture a facial image, identify the face region by skin color segmentation, determine the face boundary, and extract the face;
S2: Eye detection: after the face is extracted, identify the eye region;
S3: Extract the pupil-center/eye-corner position vector, judge whether the gaze direction has moved, and, with each principal direction calibrated in advance, estimate the actual gaze direction in real time from the value of the pupil-center/eye-corner position vector, thereby detecting glaucoma.
In a preferred scheme, in step S1, the specific method of identifying the face region by skin color segmentation is: in YCbCr space, set each pixel to black or white according to the values of its three channels, computed as follows:
m = (41/1024) · (51 + (819(cr - 152) - 614(cb - 109))/32)
n = (73/1024) · (77 + (819(cr - 152) + 614(cb - 109))/32)
pow = m² + n²
where y, cb, cr are the Y-, Cb-, and Cr-channel values of a single image pixel, and value is the binarization result of that pixel: value = 255 indicates a skin point; otherwise the pixel is not a skin point.
In a preferred scheme, in step S1, after binarization an erosion operation is applied to the binary image to remove part of the background noise; its effect on the face part of the result image is small.
In a preferred scheme, in step S1, the specific steps of determining the face boundary and extracting the face include:
1) take the vertical projection of the skin segmentation result and extract the "plateau" part, thereby determining the left and right boundaries of the face;
2) crop the skin segmentation result according to the left and right boundaries of the face;
3) take the horizontal gray projection of the new segmentation result and extract the "plateau" part, thereby determining the upper and lower boundaries of the face.
In a preferred scheme, in step S2, the specific steps of identifying the eye region are:
1) run a region detection algorithm on the face skin segmentation result to detect the black region parts;
2) apply an appropriate area expansion to the region detection results to obtain the set of eye candidate regions;
3) repeatedly feed intercepted parts of each eye candidate region into an AdaBoost classifier for detection, until an eye is detected or the whole candidate region has been covered without containing an eye;
4) when an eye is detected in a candidate region, judge whether the left or right boundary of the detection result intersects a black patch in the skin segmentation result; if so, apply a corresponding area expansion.
In a preferred scheme, in step S3, the specific steps of extracting the pupil-center/eye-corner position vector include:
S3.1: Pupil center localization: after binarizing the eye image, take vertical and horizontal black-pixel-count projections of the picture. The vertical black-pixel projection gives the left and right boundaries of the iris; using these, laterally crop the iris portion out of the eye binary image and take its horizontal black-pixel-count projection, which determines the upper and lower boundaries of the iris. The abscissa of the pupil center point is the median of the iris left and right boundaries, and its ordinate is the median of the iris upper and lower boundaries;
S3.2: Adaptive eye binarization: adaptively adjust the central value of the edge enhancement template with the iris area as the basic criterion, thereby obtaining an adaptive binarization result. The execution steps are as follows:
1) compute the pixel count num1 of the black clump at the iris;
2) process the eye image with the edge enhancement template whose central value is 10.8, convert it to a gray-scale image, and obtain a binary image by OTSU self-balancing binarization; count the number num2 of black pixels in the whole binary image;
3) if num2 > num1 × 1.4, perform 4); otherwise perform 5);
4) process the eye image with the edge enhancement template whose central value is 10, convert it to a gray-scale image, obtain a binary image by OTSU self-balancing binarization, and apply one dilation operation to remove noise;
5) according to the upper, lower, left, and right boundaries of the iris, set the corresponding region of the binary image to black, completely removing the influence caused by bright ambient light reflected at the iris. The result obtained now is the desired binary image;
S3.3: Position vector normalization: after each pupil center localization, normalize using the distance between the two eye-corner points as the reference;
Suppose detection gives the left eye corner coordinate (Lx, Ly), the right eye corner coordinate (Rx, Ry), and the pupil center coordinate (Cx, Cy), and the position vector of the pupil center relative to the left corner point is taken as the gaze criterion; the normalized position vector (Δx, Δy) is then obtained by dividing the components of that relative vector by the distance between the two corner points.
Thus, whether (Δx, Δy) changes indicates whether the gaze direction has moved, and, with each principal direction calibrated in advance, the actual gaze direction is estimated in real time from the value of (Δx, Δy).
Compared with the prior art, the beneficial effect of the technical scheme of the present invention is as follows. The present invention provides a reliable glaucoma patient self-detection method. Face localization: a facial image is captured and the face region is identified by skin color segmentation; the face boundary is determined and the face is extracted. Eye detection: after the face is extracted, the eye region is identified. The pupil-center/eye-corner position vector is extracted to judge whether the gaze direction has moved, and, with each principal direction calibrated in advance, the actual gaze direction is estimated in real time from the value of the pupil-center/eye-corner position vector, thereby enabling glaucoma detection. The present invention provides validity verification of detection data for visualFieldseasy, improving the accuracy of judging whether a tester suffers from glaucoma.
Brief description of the drawings
Fig. 1 is the flow chart of the reliable glaucoma patient self-detection method.
Fig. 2 is the algorithm principle diagram of the AdaBoost cascade classifier.
Fig. 3 is the module diagram of the pupil-center/eye-corner vector gaze detection subsystem.
Fig. 4 is a schematic diagram of detection results and "cluster circles".
Specific embodiment
The technical scheme of the present invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1
As shown in Fig. 1, a reliable glaucoma patient self-detection method comprises the following steps:
S1: Face localization: capture a facial image, identify the face region by skin color segmentation, determine the face boundary, and extract the face;
Skin color is one of the characteristics of the human face, and within every race the skin color across a person's face is concentrated and highly consistent. Under normal circumstances the background is unlikely to resemble skin color, so the face can be separated from the background by skin color. According to existing research results, after transformation into the YCbCr color space the skin colors of the world's races are essentially consistent in the Cb-Cr plane and exhibit a clustering property. The face can therefore be segmented with a skin-color-based segmentation method.
The specific method of identifying the face region by skin color segmentation is: in YCbCr space, set each pixel to black or white according to the values of its three channels, computed as follows:
m = (41/1024) · (51 + (819(cr - 152) - 614(cb - 109))/32)
n = (73/1024) · (77 + (819(cr - 152) + 614(cb - 109))/32)
pow = m² + n²
where y, cb, cr are the Y-, Cb-, and Cr-channel values of a single image pixel, and value is the binarization result of that pixel: value = 255 indicates a skin point; otherwise the pixel is not a skin point.
In a specific implementation, after binarization an erosion operation is applied to the binary image to remove part of the background noise; its effect on the face part of the result image is small.
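The per-pixel skin test above can be sketched in Python with NumPy. The grouping of the fractions follows the layout of the patent's formulas, and the cut-off applied to pow is not stated in this excerpt, so it is left as a parameter; this is an illustrative sketch under those assumptions, not the patent's exact implementation.

```python
import numpy as np

def skin_mask(cb, cr, threshold=1.0):
    """Per-pixel skin test in the Cb-Cr plane, following the patent's m/n formulas.

    cb, cr: float arrays holding the Cb and Cr channel values.
    `threshold` is an assumption: the patent's exact cut-off on `pow` is not
    given in this excerpt, though an ellipse-style test pow <= 1 is typical.
    Returns a binary image: 255 = skin point, 0 = non-skin point.
    """
    m = 41.0 / 1024 * (51 + (819 * (cr - 152) - 614 * (cb - 109)) / 32)
    n = 73.0 / 1024 * (77 + (819 * (cr - 152) + 614 * (cb - 109)) / 32)
    pow_ = m ** 2 + n ** 2
    return np.where(pow_ <= threshold, 255, 0).astype(np.uint8)
```

The erosion step mentioned above would then be applied to this mask before projection.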
The specific steps of determining the face boundary and extracting the face include:
1) take the vertical projection of the skin segmentation result and extract the "plateau" part, thereby determining the left and right boundaries of the face;
2) crop the skin segmentation result according to the left and right boundaries of the face;
3) take the horizontal gray projection of the new segmentation result and extract the "plateau" part, thereby determining the upper and lower boundaries of the face.
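The "plateau" extraction in steps 1) and 3) can be sketched as a projection followed by a longest-run search. The 0.5 plateau fraction is an assumption; the patent does not specify how the plateau is thresholded.

```python
import numpy as np

def plateau_bounds(mask, axis=0, frac=0.5):
    """Project a binary skin mask along `axis` and return the widest contiguous
    run where the projection stays at or above frac * max (the 'plateau').
    With axis=0 this yields the face's left/right boundaries; applied again on
    the cropped mask with axis=1 it yields the upper/lower boundaries."""
    proj = (mask == 255).sum(axis=axis)
    high = proj >= frac * proj.max()
    best, start = (0, -1), None
    for i, h in enumerate(list(high) + [False]):  # sentinel closes a trailing run
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start > best[1] - best[0] + 1:
                best = (start, i - 1)
            start = None
    return best
```

For example, a mask whose skin pixels occupy columns 3 through 7 yields the boundary pair (3, 7).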
S2: Eye detection: after the face is extracted, the eye region is identified. The specific steps are:
1) run a region detection algorithm on the face skin segmentation result to detect the black region parts;
2) apply an appropriate area expansion to the region detection results to obtain the set of eye candidate regions. The preliminary candidate regions may miss some features of the eye; area expansion offsets this loss and improves the success rate of eye recognition.
3) as shown in Fig. 2, repeatedly feed intercepted parts of each eye candidate region into the AdaBoost classifier for detection, until an eye is detected or the whole candidate region has been covered without containing an eye;
In AdaBoost, the base classifiers are trained sequentially, and each base classifier is trained on a weighted data set in which the weight of each data point is determined by the performance of the previous classifier. If a data point was misclassified by the previous classifier, its weight in the current classifier is increased; if it was classified correctly, its weight in the current classifier is decreased.
4) when an eye is detected in a candidate region, judge whether the left or right boundary of the detection result intersects a black patch in the skin segmentation result; if so, apply a corresponding area expansion. This mainly solves the problem of inaccurate AdaBoost localization: the position can be corrected through the intersection with the black patch.
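The weight update described for AdaBoost above matches the standard boosting rule. A minimal sketch of just that reweighting step (not the patent's trained eye classifier):

```python
import math

def adaboost_reweight(weights, correct):
    """One boosting round of the weight update described in the text: compute
    the weighted error of the current base classifier, then up-weight the
    points it got wrong and down-weight the ones it got right."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    err = min(max(err, 1e-12), 1 - 1e-12)          # guard the log
    alpha = 0.5 * math.log((1 - err) / err)        # base classifier's vote weight
    new = [w * math.exp(alpha if not c else -alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return alpha, [w / total for w in new]
```

With uniform weights and one misclassified point out of four, the misclassified point's weight rises to 0.5 after renormalization, exactly the "aggravated" weight the text describes.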
S3: Extract the pupil-center/eye-corner position vector. The specific steps include:
S3.1: Pupil center localization: after binarizing the eye image, take vertical and horizontal black-pixel-count projections of the picture. The vertical black-pixel projection gives the left and right boundaries of the iris; using these, laterally crop the iris portion out of the eye binary image and take its horizontal black-pixel-count projection, which determines the upper and lower boundaries of the iris. The abscissa of the pupil center point is the median of the iris left and right boundaries, and its ordinate is the median of the iris upper and lower boundaries;
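The projection scheme of step S3.1 can be sketched as follows, assuming a pre-binarized eye image in which iris pixels are 0 (black) and everything else is 255:

```python
import numpy as np

def pupil_center(eye_bin):
    """Locate the pupil center via the S3.1 projection scheme: the vertical
    black-pixel projection bounds the iris left/right, the horizontal
    projection of the cropped strip bounds it top/bottom, and the center is
    the median of each boundary pair."""
    black = eye_bin == 0
    vproj = black.sum(axis=0)                     # black count per column
    cols = np.where(vproj > 0)[0]
    left, right = int(cols[0]), int(cols[-1])     # iris left/right boundaries
    hproj = black[:, left:right + 1].sum(axis=1)  # rows inside the iris strip
    rows = np.where(hproj > 0)[0]
    top, bottom = int(rows[0]), int(rows[-1])     # iris upper/lower boundaries
    return (left + right) / 2.0, (top + bottom) / 2.0   # (x, y)
```

This presumes the adaptive binarization of S3.2 has already removed non-iris black clumps.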
S3.2: Adaptive eye binarization: adaptively adjust the central value of the edge enhancement template with the iris area as the basic criterion, thereby obtaining an adaptive binarization result. The execution steps are as follows:
1) compute the pixel count num1 of the black clump at the iris;
2) process the eye image with the edge enhancement template whose central value is 10.8, convert it to a gray-scale image, and obtain a binary image by OTSU self-balancing binarization; count the number num2 of black pixels in the whole binary image;
3) if num2 > num1 × 1.4, perform 4); otherwise perform 5);
4) process the eye image with the edge enhancement template whose central value is 10, convert it to a gray-scale image, obtain a binary image by OTSU self-balancing binarization, and apply one dilation operation to remove noise;
5) according to the upper, lower, left, and right boundaries of the iris, set the corresponding region of the binary image to black, completely removing the influence caused by bright ambient light reflected at the iris. The result obtained now is the desired binary image;
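Steps 2) through 4) rest on OTSU self-balancing binarization and the num2 > num1 × 1.4 test. A minimal OTSU implementation and that retry test might look like this (the edge enhancement and dilation steps are omitted here for brevity):

```python
import numpy as np

def otsu_threshold(gray):
    """Minimal OTSU ('self-balancing') threshold: pick t maximizing the
    between-class variance of the <=t and >t pixel populations."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total, sum_all = hist.sum(), (hist * np.arange(256)).sum()
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def needs_weaker_template(num1, num2):
    """Step 3 of S3.2: if the binary image holds more than 1.4x the iris pixel
    count, too much noise survived, so retry with central value 10 instead of 10.8."""
    return num2 > 1.4 * num1
```

On a clearly bimodal image the threshold lands between the two modes, separating the iris clump from the rest.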
S3.3: Position vector normalization: after each pupil center localization, normalize using the distance between the two eye-corner points as the reference;
Suppose detection gives the left eye corner coordinate (Lx, Ly), the right eye corner coordinate (Rx, Ry), and the pupil center coordinate (Cx, Cy), and the position vector of the pupil center relative to the left corner point is taken as the gaze criterion; the normalized position vector (Δx, Δy) is then obtained by dividing the components of that relative vector by the distance between the two corner points.
Thus, whether (Δx, Δy) changes indicates whether the gaze direction has moved, and, with each principal direction calibrated in advance, the actual gaze direction is estimated in real time from the value of (Δx, Δy).
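Under the reading that both components of the pupil-center-to-left-corner vector are divided by the inter-corner distance (the explicit formula did not survive in this excerpt), the normalization of S3.3 can be sketched as:

```python
import math

def normalized_pccv(left_corner, right_corner, pupil):
    """Normalized pupil-center/eye-corner vector: the vector from the left eye
    corner to the pupil center, divided by the distance between the two
    corners so that frames taken at different scales remain comparable."""
    (Lx, Ly), (Rx, Ry), (Cx, Cy) = left_corner, right_corner, pupil
    d = math.hypot(Rx - Lx, Ry - Ly)
    return (Cx - Lx) / d, (Cy - Ly) / d
```

For corners at (0, 0) and (100, 0) and a pupil at (40, 10), this gives (Δx, Δy) = (0.4, 0.1).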
Embodiment 2
Fig. 3 is the module diagram of the pupil-center/eye-corner vector gaze detection subsystem. As the figure shows, the system's processing of an input picture is divided into four modules: a face localization module, an eye localization module, a position vector extraction module, and a gaze-change judgment module.
The face localization module is responsible for extracting the face part from the input color image. This part is completed mainly by the YCbCr-space skin segmentation method.
The eye localization module is responsible for delineating the eye within the filtered-out face part. This part is completed mainly with the AdaBoost classifier.
The pupil-center/eye-corner position vector extraction module computes the normalized pupil-center/eye-corner position vector from the extracted eye picture. This part mainly uses the interference-type hybrid projection proposed herein to obtain the pupil center point, together with a method combining adaptive binarization and ray detection to locate the two eye corners.
The gaze-change judgment module compares the position vector extracted here with the position vectors accumulated previously and determines whether a significant change has occurred.
When a significant gaze change is judged to have occurred, it can be concluded that at that moment the eye was not looking at the position required by the software; the glaucoma detection data point at the corresponding time is therefore untrusted and must be rejected, or the visualFieldseasy main system must test that point again.
An agreement is needed between the visualFieldseasy main system and this system, so that this system can better assist glaucoma detection and provide a reliability criterion for the detection data of visualFieldseasy.
Protocol contents: when the visualFieldseasy main system obtains a data point in detection, it must send the user's face picture captured at that moment to this system; after this system processes the picture, it returns a true/false value to the visualFieldseasy main system, indicating that the detection data of that moment is trusted/untrusted.
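The agreed per-data-point exchange can be sketched as a single function. The names extract_vector and in_cluster_circle are hypothetical stand-ins for the PCCV pipeline and the calibrated gaze test described in this document; they are not names from the patent.

```python
def validate_data_point(face_frame, extract_vector, in_cluster_circle):
    """Sketch of the described protocol: the main system sends the face picture
    captured when a data point was recorded; this subsystem answers True
    (trusted) or False (untrusted) for that data point."""
    vec = extract_vector(face_frame)          # PCCV extraction; None on failure
    return vec is not None and in_cluster_circle(vec)
```

A frame whose gaze vector cannot be extracted, or whose vector falls outside the calibrated cluster circle, marks the corresponding data point untrusted.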
Owing to the efficiency of the PCCV (pupil-center/eye-corner vector) algorithm of the present invention, the user only needs to gaze in sequence, as required, at each corner of the screen for 1-2 seconds to obtain a large number of position vectors. And because the accuracy of PCCV exceeds 96%, an effective mean position vector can be computed for each of the four corners of the screen gazed at by the user.
The main function of the face localization module is to locate the face position and prevent a complex background from interfering with subsequent eye localization. This part first converts the input color photograph to YCbCr space, binarizes the converted picture according to skin segmentation theory, and then takes vertical and horizontal white-pixel-count projections of the binary image, thereby determining the face region.
The main function of the eye localization module is to determine the eye position on the face color picture. Only one eye needs to be determined here. To improve efficiency, this part converts the face color picture to YCbCr space and binarizes it according to skin segmentation, then determines candidate eye regions by applying region segmentation to the black clumps. If some background noise is first rejected in the binary image with a dilation operation, the efficiency of region segmentation improves and the number of candidate eye regions is reduced. The color-image parts corresponding to the candidate eye regions are then fed into the AdaBoost classifier to determine the real eye. The classifier's localization has a certain inaccuracy, so region extension must be applied to its localization result with reference to the binary image.
The main function of the pupil-center/eye-corner position vector extraction module is to obtain the position vector. When extracting the pupil center, a 3×3 template with central value 12 and boundary values -1 is first used to apply edge enhancement and non-linear brightness lifting to the eye color picture; the processed color picture is then converted to a gray-scale image and binarized with OTSU, which yields a "clean" iris black clump with the other parts of the eye essentially removed. A vertical projection of the binary image then determines the left and right boundaries of the iris, an interfered horizontal projection of the binary image determines the upper and lower boundaries of the iris, and the center of the iris is taken as the pupil center point. When locating the eye corners, a 3×3 template with boundary values -1 and a variable central value is first used to apply edge enhancement and non-linear brightness lifting to the eye color picture, which is then converted to gray-scale and binarized with OTSU, giving an adaptive binarization result; typically the adaptive binarization result retains the black clumps of the iris and the eye corner while removing the other parts, and the corner position can then be determined by ray detection. After the positions of the pupil center and the eye corners are determined, the position vector of the pupil center relative to one of the corners can be calculated; but to make this position vector comparable over time, it must be normalized by the distance between the two eye corners.
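The 3×3 edge enhancement template (boundary values -1, variable central value, 12 for the pupil step) can be sketched as a plain convolution with edge padding. The non-linear brightness lifting mentioned in the text is not modeled here; this is an illustrative sketch only.

```python
import numpy as np

def edge_enhance(gray, central=12.0):
    """Apply the 3x3 template with boundary values -1 and the given central
    value to a gray image; output is clipped to the displayable range [0, 255]."""
    k = -np.ones((3, 3))
    k[1, 1] = central
    padded = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
    return np.clip(out, 0, 255)
```

Note that the kernel's coefficients sum to central - 8, so with central values above 8 flat regions are brightened as well as edges sharpened, consistent with the "brightness lifting" the text pairs with this template.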
The gaze-change judgment module compares the position vector extracted from the current color input frame with the position vectors counted previously, determines whether the position vector has changed significantly, and thereby determines whether the gaze has changed. When the gaze has changed, the main system of the glaucoma inspection software should be notified that the detection data of that moment is untrusted and must be rejected, or the detection of that test point must be carried out anew.
The field of view of a single human eye looking forward can be approximated as a rectangle. We therefore cut a person's field of view into 49 regions and fabricate a test template. During the test, the tester is asked to gaze in turn at 9 of the symmetric colored regions, and the tester's real-time pupil-center/eye-corner position vector data is detected by the pupil-center/eye-corner vector detection subsystem.
Because a person's attention inevitably diverges into the surrounding area when gazing at one point for a long time, the position vectors obtained by this subsystem form a cluster. To distinguish the two gaze states (the gaze diverging while fixed on the designated region, versus the eye gazing at some other region), the boundary of the cluster formed by the position vectors must be calculated. Thus, when an obtained position vector lies within the cluster of the designated gaze direction, the eye is considered to be gazing at the designated direction as required; otherwise the eye is considered not to be gazing at the designated direction as required, in which case the visualFieldseasy test data of that moment is untrusted and the data point must be rejected or retested. The "cluster circle" is obtained by computing the mean of the position vectors within the cluster and the cluster radius. The detection results and "cluster circles" are shown in Fig. 4. Table 1 gives the gaze prediction performance of this method.
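The "cluster circle" construction (mean position vector as center, cluster radius as boundary) can be sketched as follows; taking the radius as the distance to the farthest calibration sample is an assumed reading of "cluster radius":

```python
import math

def cluster_circle(vectors):
    """'Cluster circle' from calibration samples: center = mean position
    vector, radius = distance to the farthest sample in the cluster."""
    cx = sum(v[0] for v in vectors) / len(vectors)
    cy = sum(v[1] for v in vectors) / len(vectors)
    r = max(math.hypot(v[0] - cx, v[1] - cy) for v in vectors)
    return (cx, cy), r

def gaze_on_target(vec, circle):
    """A test-time vector counts as 'gazing as required' only if it falls
    inside the calibrated circle."""
    (cx, cy), r = circle
    return math.hypot(vec[0] - cx, vec[1] - cy) <= r
```

Vectors outside the circle mark the corresponding visualFieldseasy data point as untrusted.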
Table 1
In Fig. 4, color points that fall outside their corresponding "cluster circle" are treated as misjudged points. The analysis of misjudged points during the test process is shown in Table 1. From these data it can be seen that this method judges the gaze direction of the human eye with high accuracy, so using it to provide an effective validity judgment for glaucoma detection data is practicable.

Claims (6)

1. A reliable glaucoma patient self-detection method, characterized by comprising the following steps:
S1: Face localization: capture a facial image, identify the face region by skin color segmentation, determine the face boundary, and extract the face;
S2: Eye detection: after the face is extracted, identify the eye region;
S3: Extract the pupil-center/eye-corner position vector, judge whether the gaze direction has moved, and, with each principal direction calibrated in advance, estimate the actual gaze direction in real time from the value of the pupil-center/eye-corner position vector, thereby detecting glaucoma.
2. The reliable glaucoma patient self-detection method according to claim 1, characterized in that, in step S1, the specific method of identifying the face region by skin color segmentation is: in YCbCr space, set each pixel to black or white according to the values of its three channels, computed as follows:
m = (41/1024) · (51 + (819(cr - 152) - 614(cb - 109))/32)
n = (73/1024) · (77 + (819(cr - 152) + 614(cb - 109))/32)
pow = m² + n²
wherein y, cb and cr denote the values of the Y, Cb and Cr channels of a single image pixel, and value denotes the binarisation result of that pixel: a value of 255 marks the pixel as a skin-colour point, otherwise it is not a skin-colour point.
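The per-pixel rule of claim 2 can be sketched as below. Note two assumptions: the placement of the /32 divisor follows our reading of the formulas above, and the cutoff on pow that separates skin from non-skin is not stated in the patent text, so the `threshold` parameter is ours.

```python
def skin_value(cb, cr, threshold=1.0):
    """Per-pixel skin test in YCbCr space (claim 2). The rotated
    coordinates m and n follow the formulas in the claim; the cutoff on
    pow is NOT given in the patent -- threshold=1.0 is an assumption.
    Returns 255 for a skin-colour pixel, 0 otherwise."""
    m = 41 / 1024 * (51 + (819 * (cr - 152) - 614 * (cb - 109)) / 32)
    n = 73 / 1024 * (77 + (819 * (cr - 152) + 614 * (cb - 109)) / 32)
    pow_ = m * m + n * n
    return 255 if pow_ < threshold else 0
```

Applying this to every pixel yields the black/white segmentation mask used by the later steps.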
3. The reliable glaucoma patient self-detection method according to claim 2, characterised in that, in step S1, after binarisation an erosion operation is applied to the binary image to remove part of the background noise.
4. The reliable glaucoma patient self-detection method according to claim 1, characterised in that, in step S1, the specific steps of determining the face boundary and extracting the face include:
1) project the skin-colour segmentation result vertically, extract the "plateau" portion, and thereby determine the left and right face boundaries;
2) crop the skin-colour segmentation result according to the left and right face boundaries;
3) apply a horizontal grey-level projection to the new segmentation result, extract the "plateau" portion, and thereby determine the upper and lower face boundaries.
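The projection-plateau procedure of claim 4 can be sketched on a binary skin mask (1 = skin). The patent does not say how the "plateau" is thresholded; the fraction-of-peak rule below is an assumption, as are the function names.

```python
def plateau_bounds(profile, frac=0.5):
    """First and last index where a projection profile stays above
    frac * max(profile) -- the 'plateau' of claim 4. The 0.5 fraction
    is an assumed threshold, not given in the patent."""
    peak = max(profile)
    idx = [i for i, v in enumerate(profile) if v >= frac * peak]
    return idx[0], idx[-1]

def face_box(mask):
    """mask: 2-D list of 0/1 skin labels. The column (vertical)
    projection gives the left/right face border; the row (horizontal)
    projection of the cropped strip gives the top/bottom border."""
    cols = [sum(row[j] for row in mask) for j in range(len(mask[0]))]
    left, right = plateau_bounds(cols)
    rows = [sum(row[left:right + 1]) for row in mask]
    top, bottom = plateau_bounds(rows)
    return left, right, top, bottom
```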
5. The reliable glaucoma patient self-detection method according to claim 1, characterised in that, in step S2, the specific steps of identifying the eye region are:
1) run a region-detection algorithm on the face skin-colour segmentation result to detect the black region parts;
2) expand the area of each detected region to obtain the set of eye candidate regions;
3) feed progressively larger portions of each eye candidate region into the AdaBoost classifier for detection, until an eye is detected or the whole candidate region has been examined without containing an eye;
4) when an eye is detected in a candidate region, check whether the left or right boundary of the detection result intersects a black patch in the skin-colour segmentation result; if so, expand the region accordingly.
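The growing-slice scan of step 3) can be sketched as follows. The `classifier` callable stands in for the AdaBoost eye detector (for example an OpenCV cascade); treating it as an injected function, and the step size, are our assumptions.

```python
def scan_candidate(region, classifier, step=4):
    """Claim 5, step 3): feed progressively larger slices of a candidate
    region into the eye classifier until an eye is found or the whole
    region has been examined. `classifier` is a stand-in for the
    AdaBoost detector and returns True when its input contains an eye."""
    for end in range(step, len(region) + 1, step):
        if classifier(region[:end]):
            return region[:end]      # eye found within this slice
    if classifier(region):
        return region                # covers lengths not divisible by step
    return None                      # whole region contains no eye
```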
6. The reliable glaucoma patient self-detection method according to claim 1, characterised in that, in step S3, the specific steps of extracting the pupil-centre-to-eye-corner position vector include:
S3.1: pupil-centre localisation: after binarising the eye image, project the counts of black pixels vertically and horizontally. The vertical black-pixel projection gives the left and right boundaries of the iris, which are used to crop the iris strip laterally from the binary eye image; a second horizontal black-pixel projection on this strip then determines the upper and lower boundaries of the iris. The abscissa of the pupil centre is the median of the left and right iris boundaries, and the ordinate of the pupil centre is the median of the upper and lower iris boundaries;
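The two-projection localisation of S3.1 can be sketched on a binarised eye image (1 = black pixel); the function name is ours.

```python
def pupil_center(binary_eye):
    """S3.1: locate the pupil centre in a binarised eye image (1 = black).
    The vertical projection bounds the iris left/right; a second
    horizontal projection on the cropped strip bounds it top/bottom;
    the centre is the median of each pair of bounds."""
    cols = [sum(row[j] for row in binary_eye) for j in range(len(binary_eye[0]))]
    nz = [j for j, v in enumerate(cols) if v > 0]
    left, right = nz[0], nz[-1]
    rows = [sum(row[left:right + 1]) for row in binary_eye]
    nzr = [i for i, v in enumerate(rows) if v > 0]
    top, bottom = nzr[0], nzr[-1]
    return (left + right) / 2, (top + bottom) / 2
```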
S3.2: adaptive eye binarisation: the iris area is used as the basic criterion to adaptively adjust the centre value of the edge-enhancement template, yielding an adaptive binarisation result; the steps are as follows:
1) compute the number of pixels num1 of the black clump at the iris;
2) process the eye with the edge-enhancement template whose centre value is 10.8, convert the result to a grey-scale image, and obtain a binary image by OTSU self-balancing binarisation; then count the number num2 of black pixels in the whole binary image;
3) if num2 > num1 × 1.4, go to step 4); otherwise go to step 5);
4) process the eye with the edge-enhancement template whose centre value is 10, convert the result to a grey-scale image, obtain a binary image by OTSU self-balancing binarisation, and apply one dilation operation to remove noise;
5) according to the upper, lower, left and right boundaries of the iris, set the corresponding region of the binary image to black, so as to completely eliminate the influence of ambient light reflected by the iris. The result is the desired binary image;
S3.3: position-vector normalisation: each time the pupil centre is located, normalise by the distance between the two eye-corner points;
Assuming detection gives the left eye-corner coordinates (Lx, Ly), the right eye-corner coordinates (Rx, Ry) and the pupil-centre coordinates (Cx, Cy), and taking the position vector of the pupil centre relative to the left eye corner as the gaze criterion, the normalised position vector (Δx, Δy) is obtained as follows:
Δx = (Cx − Lx) / √((Rx − Lx)² + (Ry − Ly)²)
Δy = (Cy − Ly) / √((Rx − Lx)² + (Ry − Ly)²)
Thus, whether the gaze direction has moved is judged from changes in (Δx, Δy), and the actual gaze direction is estimated in real time from the value of (Δx, Δy) with the help of principal directions calibrated in advance.
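The normalisation of S3.3 can be sketched directly from the formulas above; the function name is ours.

```python
import math

def gaze_vector(left_corner, right_corner, pupil):
    """S3.3: position vector of the pupil centre relative to the left
    eye corner, normalised by the distance between the two eye corners,
    giving a scale-invariant gaze criterion (dx, dy)."""
    lx, ly = left_corner
    rx, ry = right_corner
    cx, cy = pupil
    d = math.hypot(rx - lx, ry - ly)
    return (cx - lx) / d, (cy - ly) / d
```

Because the vector is divided by the inter-corner distance, the criterion is unaffected by the subject's distance from the camera.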
CN201710139010.7A 2017-03-09 2017-03-09 A kind of reliable glaucoma patient self-detection method Pending CN106934365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710139010.7A CN106934365A (en) 2017-03-09 2017-03-09 A kind of reliable glaucoma patient self-detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710139010.7A CN106934365A (en) 2017-03-09 2017-03-09 A kind of reliable glaucoma patient self-detection method

Publications (1)

Publication Number Publication Date
CN106934365A true CN106934365A (en) 2017-07-07

Family

ID=59432989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710139010.7A Pending CN106934365A (en) 2017-03-09 2017-03-09 A kind of reliable glaucoma patient self-detection method

Country Status (1)

Country Link
CN (1) CN106934365A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921227A (en) * 2018-07-11 2018-11-30 广东技术师范学院 A kind of glaucoma medical image classification method based on capsule theory
CN108921227B (en) * 2018-07-11 2022-04-08 广东技术师范学院 Glaucoma medical image classification method based on capsule theory
CN109086713A (en) * 2018-07-27 2018-12-25 腾讯科技(深圳)有限公司 Eye recognition method, apparatus, terminal and storage medium
CN109086713B (en) * 2018-07-27 2019-11-15 腾讯科技(深圳)有限公司 Eye recognition method, apparatus, terminal and storage medium
CN109480808A (en) * 2018-09-27 2019-03-19 深圳市君利信达科技有限公司 A kind of heart rate detection method based on PPG, system, equipment and storage medium
CN110598635A (en) * 2019-09-12 2019-12-20 北京大学第一医院 Method and system for face detection and pupil positioning in continuous video frames
CN110969084A (en) * 2019-10-29 2020-04-07 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN110969084B (en) * 2019-10-29 2021-03-05 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN113239754A (en) * 2021-04-23 2021-08-10 泰山学院 Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles

Similar Documents

Publication Publication Date Title
CN106934365A (en) A kind of reliable glaucoma patient self-detection method
CN108615051B (en) Diabetic retina image classification method and system based on deep learning
Ran et al. Cataract detection and grading based on combination of deep convolutional neural network and random forests
US7370969B2 (en) Corneal topography analysis system
CN103632136B (en) Human-eye positioning method and device
CN108734086B (en) Blink frequency and sight line estimation method based on eye area generation network
Dey et al. FCM based blood vessel segmentation method for retinal images
CN104123543B (en) A kind of eye movement recognition methods based on recognition of face
US20200211235A1 (en) Method of modifying a retina fundus image for a deep learning model
CN105224285A (en) Eyes open and-shut mode pick-up unit and method
Wong et al. Intelligent fusion of cup-to-disc ratio determination methods for glaucoma detection in ARGALI
CN102867179A (en) Method for detecting acquisition quality of digital certificate photo
CN116309584B (en) Image processing system for cataract area identification
CN113239805A (en) Mask wearing identification method based on MTCNN
CN103729646B (en) Eye image validity detection method
CN109840484A (en) A kind of pupil detection method based on edge filter, oval evaluation and pupil verifying
CN105488799A (en) Automatic detection method for microaneurysm in color eye fundus image
CN114445666A (en) Deep learning-based method and system for classifying left eye, right eye and visual field positions of fundus images
CN106446805A (en) Segmentation method and system for optic cup in eye ground photo
CN111588345A (en) Eye disease detection method, AR glasses and readable storage medium
CN110310254A (en) A kind of room angle image automatic grading method based on deep learning
US10956735B1 (en) System and method for determining a refractive error from red reflex images of eyes
CN115456974A (en) Strabismus detection system, method, equipment and medium based on face key points
WO2011108995A1 (en) Automatic analysis of images of the anterior chamber of an eye
US10617294B1 (en) System and method for determining the spherical power of eyes based on measured refractive error

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170707
