CN110287797A - A refraction screening method based on a mobile phone - Google Patents
A refraction screening method based on a mobile phone
- Publication number
- CN110287797A (application number CN201910442129.0A)
- Authority
- CN
- China
- Prior art keywords
- target object
- face
- mobile phone
- eyes
- dioptric
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/103—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Image Processing (AREA)
- Telephone Function (AREA)
Abstract
The present invention provides a refraction screening method based on a mobile phone. In the method, the mobile phone detects the face of a target object and thereby extracts feature information about the face of the target object. Based on the angle feature information about the eyes of the target object contained in that feature information, the method then acquires the light-spot image formed when external natural light is refracted by the eyes of the target object. Finally, based on a deep-learning approach, the method performs matching processing on the light-spot image to determine the refraction information of the eyes of the target object.
Description
Technical field
The present invention relates to the technical field of eyesight detection, and in particular to a refraction screening method based on a mobile phone.
Background technique
Eyesight is an important indicator of eye health. People must maintain healthy eye habits in work and daily life in order to preserve good vision. However, as work and life become increasingly dependent on electronic products, people often fail to maintain good eye habits, and their eyesight deteriorates seriously. In addition, because electronic products are widespread while eye-care knowledge is not widely disseminated or taken seriously, vision problems show a trend toward younger onset. Among the many vision problems, myopia and astigmatism are the most common. Myopia can be divided into pseudo-myopia and true myopia. Pseudo-myopia refers to temporary myopia caused by reasons such as excessive eye use and fatigue; it is recoverable, and the eyes can return to a normal state as long as they get sufficient rest and relaxation. True myopia, by contrast, is caused by an irreversible change in the crystalline lens of the eye; it cannot recover on its own and can only be compensated by external means such as wearing glasses. Vision problems have therefore become a serious and widespread health issue.
In order to obtain accurate numerical values for eyesight, the eyes must be measured by a dedicated optometry instrument. Although optometry instruments offer good and accurate measurement performance, they are bulky and expensive, and are usually installed only in institutions such as hospitals or optician's shops. People can learn their vision condition only by visiting these institutions for testing, which is extremely inconvenient for those who need frequent eyesight tests. Moreover, the operation of an optometry instrument is complex and time-consuming and requires a specially trained operator, all of which works against shortening testing time and improving testing efficiency. Existing eyesight-testing modes therefore do not allow people to test their eyesight quickly, anytime and anywhere, which limits both the convenience and the accuracy of eyesight testing.
Summary of the invention
In view of the defects of the prior art, the present invention provides a refraction screening method based on a mobile phone. In the method, the mobile phone detects the face of a target object and thereby extracts feature information about the face; based on the angle feature information about the eyes of the target object contained in that feature information, the method acquires the light-spot image formed when external natural light is refracted by the eyes of the target object; finally, based on a deep-learning approach, the method performs matching processing on the light-spot image to determine the refraction information of the eyes of the target object. The method uses the mobile phone to photograph the light-spot image generated by the refractive action of the eyes when natural light enters them. Both the light-intensity distribution and the shape of the light spot are closely related to the refractive imaging function of the eyes themselves, so by analyzing the light-spot image with a deep-learning approach, the refractive imaging of the eyes can be determined rapidly and accurately, and eyesight-state information such as the refraction information of the eyes can finally be determined. Because the screening method only requires a mobile phone to capture images and perform computational analysis, people can test their eyes anytime and anywhere; compared with the past, when eyesight testing could only be carried out with an optometry instrument, the method offers much higher convenience. Moreover, because the relevant analysis and calculation are performed through deep learning, both the speed of calculating the refraction values of the eyes and the accuracy of the results are effectively improved, meeting the requirements of fast, efficient, and accurate eyesight testing.
The present invention provides a refraction screening method based on a mobile phone, characterized in that the method comprises the following steps:
Step (1): detect the face of a target object with a mobile phone, and thereby extract feature information about the face of the target object;
Step (2): based on the angle feature information about the eyes of the target object contained in the feature information, acquire the light-spot image formed when external natural light is refracted by the eyes of the target object;
Step (3): based on a deep-learning approach, perform matching processing on the light-spot image, and thereby determine the refraction information of the eyes of the target object.
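The three steps above can be sketched as a minimal pipeline. All function names, data shapes, return values, and the toy threshold below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the three-step screening flow. Every name and
# value here is an illustrative assumption.

def extract_face_features(face_image):
    """Step (1): detect the face and return per-eye angle features."""
    # A real implementation would run a face detector here; we return
    # fixed angle features purely for illustration.
    return {"left_eye_angle": 12.0, "right_eye_angle": 11.5}

def capture_spot_image(angle_features):
    """Step (2): use the eye angle features to capture the light-spot
    image formed by natural light refracted through the eye."""
    return [[0.0, 0.4], [0.4, 1.0]]  # toy 2x2 intensity map

def match_refraction(spot_image):
    """Step (3): match the spot image against refraction classes.
    The 0.9 peak threshold is an invented placeholder."""
    peak = max(max(row) for row in spot_image)
    return "refer for refraction check" if peak >= 0.9 else "within normal range"

def screen(face_image):
    features = extract_face_features(face_image)
    spot = capture_spot_image(features)
    return match_refraction(spot)
```

The point of the sketch is only the data flow: face features gate the spot capture, and the captured spot alone drives the matching step.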
Further, in step (1), detecting the face of the target object with the mobile phone specifically comprises:
Step (A101): acquire a face image of the target object with the mobile phone, and perform facial-contour fitting on the face image;
Step (A102): based on the result of the facial-contour fitting, determine whether the facial-feature distribution of the target object satisfies a preset face-state condition, wherein the facial-feature distribution includes at least the distribution of the two eyes of the target object;
Step (A103): if the facial-feature distribution of the target object satisfies a first preset face-state condition, instruct the mobile phone to detect the face of the target object through a first preset detection platform; if the facial-feature distribution satisfies a second preset face-state condition, instruct the mobile phone to detect the face of the target object through a second preset detection platform.
Further, in step (A101), acquiring the face image of the target object with the mobile phone and performing facial-contour fitting on the face image specifically comprises:
Step (A1011): acquire a first face image and a second face image of the target object through a first camera module and a second camera module of the mobile phone respectively, wherein the first camera module and the second camera module are located in different regions of the mobile phone, and the regions of the first face image and the second face image that capture the face of the target object satisfy a preset overlap-ratio condition;
Step (A1012): determine the parallax information between the first face image and the second face image, and perform the facial-contour fitting according to the parallax information, wherein the facial-contour fitting converts the facial contour of the target object into a planar geometric shape of connected points and lines according to the contour information of the target object's face.
Further, in step (A102), determining, based on the result of the facial-contour fitting, whether the facial-feature distribution of the target object satisfies a preset face-state condition specifically comprises:
Step (A1021): determine the contour-depth distribution information corresponding to the facial-feature distribution of the target object;
Step (A1022): if the contour-depth distribution information satisfies a first contour-distribution gradient condition, determine that the facial-feature distribution of the target object satisfies the first preset face-state condition; if the contour-depth distribution information satisfies a second contour-distribution gradient condition, determine that the facial-feature distribution of the target object satisfies the second preset face-state condition; wherein the first preset face-state condition corresponds to a higher gradient-distribution value range than the second preset face-state condition.
Alternatively, in step (A103), the first preset detection platform comprises an mtcnn detection platform or an ncnn detection platform, and the second preset detection platform comprises an opencv detection platform or an openmp detection platform.
Further, in step (1), extracting the feature information about the face of the target object specifically comprises:
Step (B101): based on a network mode combining the opencv detection platform with the ncnn detection platform, or based on an accelerated capture mode of the openmp detection platform, determine facial feature points from the detection result of the face of the target object, wherein the facial feature points include at least the eyes, the nose, or the corners of the mouth;
Step (B102): according to the determined facial feature points, construct a three-dimensional spatial geometric shape corresponding to the facial feature points, and, according to that three-dimensional geometry, determine the angle feature information corresponding to the eyes of the target object within the geometry, which serves as the feature information about the face of the target object.
Further, in step (2), acquiring, based on the angle feature information about the eyes of the target object contained in the feature information, the light-spot image formed when external natural light is refracted by the eyes of the target object specifically comprises:
Step (201): based on the angle feature information, determine a modulation mode for the external natural light;
Step (202): perform adaptive modulation processing on the external natural light according to the determined modulation mode, so that the modulated external natural light enters the eyes of the target object;
Step (203): based on the angle feature information, monitor the eyes of the target object, and, according to the monitoring result, acquire one or more light-spot images formed when the external natural light is refracted by the eyes of the target object.
Further, in step (201), determining, based on the angle feature information, the modulation mode for the external natural light specifically comprises:
Step (2011): based on the angle feature information, determine the effective light-receiving eye-socket area and/or the effective light-receiving pupil area of the target object;
Step (2012): according to the effective light-receiving eye-socket area and/or the effective light-receiving pupil area, determine an intensity-modulation mode and/or a beam-diameter-modulation mode to be applied to the external natural light.
Further, in step (203), monitoring the eyes of the target object based on the angle feature information, and acquiring, according to the monitoring result, one or more light-spot images formed when the external natural light is refracted by the eyes of the target object specifically comprises:
Step (2031): based on the angle feature information, determine a monitoring spatial region within which the eyes of the target object are monitored;
Step (2032): at preset time intervals, instruct the mobile phone to perform image-capture operations on the eyes of the target object when they enter the monitoring spatial region, thereby obtaining one or more eye images of the target object;
Step (2033): based on the angle feature information, perform angle-correction processing on the one or more eye images, thereby obtaining the one or more light-spot images formed when the external natural light is refracted by the eyes of the target object.
Further, in step (2032), instructing the mobile phone to perform image-capture operations on the eyes of the target object entering the monitoring spatial region specifically comprises:
Step (A20321): determine the relative spatial position between the mobile phone and the monitoring spatial region;
Step (A20322): according to the relative spatial position, instruct the gyroscope module of the mobile phone to perform an adaptive angle adjustment, so that the camera module of the mobile phone is aligned with the monitoring spatial region;
Step (A20323): when the camera module of the mobile phone is aligned with the monitoring spatial region, instruct the camera module to execute the image-capture operation.
Alternatively, in step (2032), instructing the mobile phone to perform image-capture operations on the eyes of the target object entering the monitoring spatial region specifically comprises:
Step (B20321): determine the current ambient-brightness value corresponding to the monitoring spatial region, and compare the ambient-brightness value with a preset ambient-brightness range;
Step (B20322): if the ambient-brightness value lies within the preset ambient-brightness range, instruct the mobile phone to perform the image-capture operation on the eyes of the target object entering the monitoring spatial region; otherwise, instruct the mobile phone to suspend the image-capture operation.
Further, in step (3), performing matching processing on the light-spot image based on a deep-learning approach, and thereby determining the refraction information of the eyes of the target object, specifically comprises:
Step (301): based on the deep-learning model, obtain the spot shape and/or the spot light-intensity distribution in the light-spot image;
Step (302): based on the deep-learning model, determine the correlation distribution between the spot-state information of the light-spot image and different eye-refraction information;
Step (303): based on the correlation distribution, perform comparison-and-matching processing between the spot shape and/or spot light-intensity distribution and the different eye-refraction information, thereby determining the refraction information of the eyes of the target object.
Compared with the prior art, the refraction screening method based on a mobile phone detects the face of a target object with the mobile phone and thereby extracts feature information about the face; based on the angle feature information about the eyes of the target object contained in that feature information, it acquires the light-spot image formed when external natural light is refracted by the eyes of the target object; finally, based on a deep-learning approach, it performs matching processing on the light-spot image to determine the refraction information of the eyes of the target object. The method uses the mobile phone to photograph the light-spot image generated by the refractive action of the eyes when natural light enters them. Both the light-intensity distribution and the shape of the light spot are closely related to the refractive imaging function of the eyes themselves, so analyzing the light-spot image with a deep-learning approach makes it possible to determine the refractive imaging of the eyes rapidly and accurately, and finally to determine eyesight-state information such as the refraction information of the eyes. Because the method only requires a mobile phone to capture images and perform computational analysis, people can test their eyes anytime and anywhere; compared with the past, when eyesight testing could only be carried out with an optometry instrument, the method offers much higher convenience. Moreover, because the relevant analysis and calculation are performed through deep learning, both the speed of calculating the refraction values of the eyes and the accuracy of the results are effectively improved, meeting the requirements of fast, efficient, and accurate eyesight testing.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood through implementation of the invention. The objectives and other advantages of the invention can be realized and obtained by the structures specifically pointed out in the written description, the claims, and the accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of a refraction screening method based on a mobile phone provided by the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flow diagram of a refraction screening method based on a mobile phone provided by an embodiment of the present invention, the refraction screening method based on a mobile phone comprises the following steps:
Step (1): detect the face of a target object with a mobile phone, and thereby extract feature information about the face of the target object.
Preferably, in step (1), detecting the face of the target object with the mobile phone specifically comprises:
Step (A101): acquire a face image of the target object with the mobile phone, and perform facial-contour fitting on the face image;
Step (A102): based on the result of the facial-contour fitting, determine whether the facial-feature distribution of the target object satisfies a preset face-state condition, wherein the facial-feature distribution includes at least the distribution of the two eyes of the target object;
Step (A103): if the facial-feature distribution of the target object satisfies a first preset face-state condition, instruct the mobile phone to detect the face of the target object through a first preset detection platform; if the facial-feature distribution satisfies a second preset face-state condition, instruct the mobile phone to detect the face of the target object through a second preset detection platform.
Preferably, in step (A101), acquiring the face image of the target object with the mobile phone and performing facial-contour fitting on the face image specifically comprises:
Step (A1011): acquire a first face image and a second face image of the target object through a first camera module and a second camera module of the mobile phone respectively, wherein the first camera module and the second camera module are located in different regions of the mobile phone, and the regions of the first face image and the second face image that capture the face of the target object satisfy a preset overlap-ratio condition;
Step (A1012): determine the parallax information between the first face image and the second face image, and perform the facial-contour fitting according to the parallax information, wherein the facial-contour fitting converts the facial contour of the target object into a planar geometric shape of connected points and lines according to the contour information of the target object's face.
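Recovering contour depth from two-camera parallax rests on the classic stereo relation depth = focal length x baseline / disparity. The sketch below illustrates that relation and the reduction of fitted 3-D contour points to the planar point-and-line shape described in step (A1012); the focal-length and baseline figures used in the example are invented for illustration, not taken from the patent.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Classic stereo relation: depth = f * B / d.
    Result has the same units as the baseline; disparities are
    clamped away from zero to avoid division by zero."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_mm / np.maximum(d, 1e-6)

def fit_contour(points_3d):
    """Reduce 3-D contour points to a planar point-and-line shape:
    drop the depth coordinate and connect consecutive points."""
    plane = [(x, y) for x, y, _ in points_3d]
    segments = list(zip(plane[:-1], plane[1:]))
    return plane, segments
```

For example, with an assumed focal length of 1500 px and a 12 mm baseline between the two camera modules, a 10 px disparity corresponds to a depth of 1800 mm.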
Preferably, in step (A102), determining, based on the result of the facial-contour fitting, whether the facial-feature distribution of the target object satisfies a preset face-state condition specifically comprises:
Step (A1021): determine the contour-depth distribution information corresponding to the facial-feature distribution of the target object;
Step (A1022): if the contour-depth distribution information satisfies a first contour-distribution gradient condition, determine that the facial-feature distribution of the target object satisfies the first preset face-state condition; if the contour-depth distribution information satisfies a second contour-distribution gradient condition, determine that the facial-feature distribution of the target object satisfies the second preset face-state condition; wherein the first preset face-state condition corresponds to a higher gradient-distribution value range than the second preset face-state condition.
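One way to realize the two gradient conditions above is to compute the mean absolute gradient of the contour-depth profile and compare it against two value ranges, with the first range higher than the second as the claim requires. The numeric ranges below are placeholder assumptions, not values from the patent.

```python
import numpy as np

# Assumed gradient ranges (arbitrary units); the first is higher than
# the second, matching the relation stated in step (A1022).
FIRST_RANGE = (0.5, 2.0)    # first face-state condition
SECOND_RANGE = (0.0, 0.5)   # second face-state condition

def classify_face_state(depth_profile):
    """Return which preset face-state condition the contour-depth
    profile satisfies, based on its mean absolute gradient."""
    mean_grad = float(np.mean(np.abs(np.gradient(depth_profile))))
    if FIRST_RANGE[0] <= mean_grad < FIRST_RANGE[1]:
        return "first"   # route to the first platform (e.g. mtcnn/ncnn)
    if SECOND_RANGE[0] <= mean_grad < SECOND_RANGE[1]:
        return "second"  # route to the second platform (e.g. opencv/openmp)
    return "unclassified"
```

A steep profile such as [0, 1, 2, 3] lands in the first range, while a shallow one such as [0.0, 0.1, 0.2] lands in the second.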
Preferably, in step (A103), the first preset detection platform comprises an mtcnn detection platform or an ncnn detection platform, and the second preset detection platform comprises an opencv detection platform or an openmp detection platform.
Preferably, in step (1), extracting the feature information about the face of the target object specifically comprises:
Step (B101): based on a network mode combining the opencv detection platform with the ncnn detection platform, or based on an accelerated capture mode of the openmp detection platform, determine facial feature points from the detection result of the face of the target object, wherein the facial feature points include at least the eyes, the nose, or the corners of the mouth;
Step (B102): according to the determined facial feature points, construct a three-dimensional spatial geometric shape corresponding to the facial feature points, and, according to that three-dimensional geometry, determine the angle feature information corresponding to the eyes of the target object within the geometry, which serves as the feature information about the face of the target object.
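One plausible angle feature for the eyes is the orientation of the inter-ocular axis within the constructed 3-D geometry. The sketch below computes an in-plane roll and a depth-direction yaw from two assumed 3-D eye landmarks; the patent does not specify which angles are actually used, so both the choice of angles and the coordinate convention are illustrative assumptions.

```python
import numpy as np

def eye_angle_features(left_eye, right_eye):
    """Return roll and yaw (degrees) of the line joining the two eye
    landmarks. Coordinates are assumed to be (x right, y down, z away
    from the camera)."""
    v = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    roll = np.degrees(np.arctan2(v[1], v[0]))                  # in-plane tilt
    yaw = np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))   # depth tilt
    return {"roll_deg": float(roll), "yaw_deg": float(yaw)}
```

With both eyes level and equidistant from the camera the angles are zero; a head tilted 45 degrees in the image plane yields a 45-degree roll.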
Step (2): based on the angle feature information about the eyes of the target object contained in the feature information, acquire the light-spot image formed when external natural light is refracted by the eyes of the target object.
Preferably, in step (2), acquiring, based on the angle feature information about the eyes of the target object contained in the feature information, the light-spot image formed when external natural light is refracted by the eyes of the target object specifically comprises:
Step (201): based on the angle feature information, determine a modulation mode for the external natural light;
Step (202): perform adaptive modulation processing on the external natural light according to the determined modulation mode, so that the modulated external natural light enters the eyes of the target object;
Step (203): based on the angle feature information, monitor the eyes of the target object, and, according to the monitoring result, acquire one or more light-spot images formed when the external natural light is refracted by the eyes of the target object.
Preferably, in step (201), determining, based on the angle feature information, the modulation mode for the external natural light specifically comprises:
Step (2011): based on the angle feature information, determine the effective light-receiving eye-socket area and/or the effective light-receiving pupil area of the target object;
Step (2012): according to the effective light-receiving eye-socket area and/or the effective light-receiving pupil area, determine an intensity-modulation mode and/or a beam-diameter-modulation mode to be applied to the external natural light.
Preferably, in step (203), monitoring the eyes of the target object based on the angle feature information, and acquiring, according to the monitoring result, one or more light-spot images formed when the external natural light is refracted by the eyes of the target object specifically comprises:
Step (2031): based on the angle feature information, determine a monitoring spatial region within which the eyes of the target object are monitored;
Step (2032): at preset time intervals, instruct the mobile phone to perform image-capture operations on the eyes of the target object when they enter the monitoring spatial region, thereby obtaining one or more eye images of the target object;
Step (2033): based on the angle feature information, perform angle-correction processing on the one or more eye images, thereby obtaining the one or more light-spot images formed when the external natural light is refracted by the eyes of the target object.
Preferably, in step (2032), instructing the mobile phone to perform image-capture operations on the eyes of the target object entering the monitoring spatial region specifically comprises:
Step (A20321): determine the relative spatial position between the mobile phone and the monitoring spatial region;
Step (A20322): according to the relative spatial position, instruct the gyroscope module of the mobile phone to perform an adaptive angle adjustment, so that the camera module of the mobile phone is aligned with the monitoring spatial region;
Step (A20323): when the camera module of the mobile phone is aligned with the monitoring spatial region, instruct the camera module to execute the image-capture operation.
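The alignment steps (A20321)-(A20323) can be sketched as computing the yaw and pitch toward the monitoring region from the relative position, then checking the current camera orientation against those targets before shooting. The coordinate convention and the 2-degree tolerance are illustrative assumptions.

```python
import math

ALIGN_TOL_DEG = 2.0  # assumed alignment tolerance, not from the patent

def required_angles(dx, dy, dz):
    """Yaw and pitch (degrees) from the camera axis to the region
    center, given the relative offset (dx right, dy up, dz forward)."""
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, dz))
    return yaw, pitch

def is_aligned(current, target, tol=ALIGN_TOL_DEG):
    """True once every current angle is within tolerance of its
    target, i.e. the camera may execute the capture operation."""
    return all(abs(c - t) <= tol for c, t in zip(current, target))
```

A region straight ahead needs no adjustment, while a region offset to the side yields a non-zero yaw that the gyroscope-driven adjustment must close before capture.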
Preferably, in step (2032), instructing the mobile phone to perform image-capture operations on the eyes of the target object entering the monitoring spatial region specifically comprises:
Step (B20321): determine the current ambient-brightness value corresponding to the monitoring spatial region, and compare the ambient-brightness value with a preset ambient-brightness range;
Step (B20322): if the ambient-brightness value lies within the preset ambient-brightness range, instruct the mobile phone to perform the image-capture operation on the eyes of the target object entering the monitoring spatial region; otherwise, instruct the mobile phone to suspend the image-capture operation.
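The ambient-brightness gate of steps (B20321)-(B20322) amounts to capturing only while the measured brightness stays inside the preset range and suspending otherwise. The 8-bit luminance bounds below are assumed values chosen for illustration.

```python
# Assumed preset ambient-brightness range (8-bit luminance units).
BRIGHTNESS_RANGE = (80, 200)

def should_capture(ambient_brightness,
                   lo=BRIGHTNESS_RANGE[0], hi=BRIGHTNESS_RANGE[1]):
    """Step (B20322): capture only inside the preset range."""
    return lo <= ambient_brightness <= hi

def capture_loop(brightness_readings):
    """Given brightness readings taken at the preset time intervals,
    return the indices at which a shot would actually be taken."""
    return [i for i, b in enumerate(brightness_readings)
            if should_capture(b)]
```

For readings of [50, 100, 250, 150], only the second and fourth intervals fall inside the range, so capture runs then and is suspended otherwise.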
Step (3): based on a deep-learning approach, perform matching processing on the light-spot image, and thereby determine the refraction information of the eyes of the target object.
Preferably, in step (3), performing matching processing on the light-spot image based on a deep-learning approach, and thereby determining the refraction information of the eyes of the target object, specifically comprises:
Step (301): based on the deep-learning model, obtain the spot shape and/or the spot light-intensity distribution in the light-spot image;
Step (302): based on the deep-learning model, determine the correlation distribution between the spot-state information of the light-spot image and different eye-refraction information;
Step (303): based on the correlation distribution, perform comparison-and-matching processing between the spot shape and/or spot light-intensity distribution and the different eye-refraction information, thereby determining the refraction information of the eyes of the target object.
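The matching in steps (301)-(303) is stood in for below by a nearest-template comparison: a simple descriptor (peak intensity and spread) summarizes the spot, and the closest per-refraction reference descriptor wins. The reference table and descriptor are invented for illustration; in the patent the correlation between spot state and refraction information is learned by the deep model rather than fixed by hand.

```python
import numpy as np

# Invented reference descriptors [peak intensity, spread] for a few
# refraction classes; a real system would learn these correlations.
REFERENCES = {
    "emmetropia":  np.array([1.0, 0.1]),
    "myopia":      np.array([0.6, 0.5]),
    "astigmatism": np.array([0.7, 0.9]),
}

def spot_descriptor(spot):
    """Step (301) stand-in: summarize spot shape/intensity as
    [peak intensity, standard deviation of intensity]."""
    s = np.asarray(spot, dtype=float)
    return np.array([s.max(), float(np.std(s))])

def match_refraction(spot):
    """Steps (302)-(303) stand-in: pick the reference descriptor
    closest to the captured spot's descriptor."""
    d = spot_descriptor(spot)
    return min(REFERENCES, key=lambda k: np.linalg.norm(REFERENCES[k] - d))
```

A bright, uniform spot maps to the emmetropia template, while a dimmer, more spread-out spot maps to the myopia template.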
From the above embodiment it can be seen that the refraction screening method based on a mobile phone detects the face of a target object with the mobile phone and thereby extracts feature information about the face; based on the angle feature information about the eyes of the target object contained in that feature information, it acquires the light-spot image formed when external natural light is refracted by the eyes of the target object; finally, based on a deep-learning approach, it performs matching processing on the light-spot image to determine the refraction information of the eyes of the target object. The method uses the mobile phone to photograph the light-spot image generated by the refractive action of the eyes when natural light enters them. Both the light-intensity distribution and the shape of the light spot are closely related to the refractive imaging function of the eyes themselves, so analyzing the light-spot image with a deep-learning approach makes it possible to determine the refractive imaging of the eyes rapidly and accurately, and finally to determine eyesight-state information such as the refraction information of the eyes. Because the method only requires a mobile phone to capture images and perform computational analysis, people can test their eyes anytime and anywhere; compared with the past, when eyesight testing could only be carried out with an optometry instrument, the method offers much higher convenience. Moreover, because the relevant analysis and calculation are performed through deep learning, both the speed of calculating the refraction values of the eyes and the accuracy of the results are effectively improved, meeting the requirements of fast, efficient, and accurate eyesight testing.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.
Claims (10)
1. A mobile-phone-based refractive screening method, characterized in that the method comprises the following steps:
Step (1): detecting the face of a target object by means of a mobile phone, thereby extracting feature information about the face of the target object;
Step (2): based on the angle feature information about the eyes of the target object contained in the feature information, obtaining the light spot image formed when external natural light is refracted by the eyes of the target object;
Step (3): based on a deep learning mode, performing matching processing on the light spot image, thereby determining the refractive information of the eyes of the target object.
2. The mobile-phone-based refractive screening method according to claim 1, characterized in that:
in step (1), detecting the face of the target object by means of the mobile phone specifically includes:
Step (A101): obtaining a face image of the target object through the mobile phone, and performing facial contour fitting processing on the face image;
Step (A102): based on the result of the facial contour fitting processing, determining whether the facial-feature distribution of the target object satisfies a preset facial state condition, wherein the facial-feature distribution includes at least the distribution of the target object's two eyes;
Step (A103): if the facial-feature distribution of the target object satisfies a first preset facial state condition, instructing the mobile phone to detect the face of the target object through a first preset detection platform; if the facial-feature distribution of the target object satisfies a second preset facial state condition, instructing the mobile phone to detect the face of the target object through a second preset detection platform.
3. The mobile-phone-based refractive screening method according to claim 2, characterized in that:
in step (A101), obtaining the face image of the target object through the mobile phone and performing facial contour fitting processing on the face image specifically includes:
Step (A1011): obtaining a first face image and a second face image of the target object through a first camera module and a second camera module of the mobile phone, respectively, wherein the first camera module and the second camera module are located at different positions on the mobile phone, and the regions of the target object's face captured in the first face image and the second face image satisfy a preset overlap-ratio condition;
Step (A1012): determining the parallax information between the first face image and the second face image, and performing the facial contour fitting processing according to the parallax information, wherein the facial contour fitting converts the facial contour of the target object, according to its contour information, into a planar geometric distribution shape composed of points and lines.
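The parallax step (A1012) rests on the standard stereo relation: two camera modules at different positions (step A1011) give a baseline, and depth follows as focal length times baseline over disparity. The sketch below is a minimal illustration of that relation, not the patent's fitting procedure; the focal length and baseline values are invented.

```python
# Standard pinhole-stereo depth recovery: depth = f * B / d, where f is the
# focal length in pixels, B the baseline between the two camera modules in
# metres, and d the disparity of a matched point in pixels.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.012):
    """Depth in metres of a point matched between the two face images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Applying this per matched facial point yields the depth samples from which a contour can then be fitted.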
4. The mobile-phone-based refractive screening method according to claim 2, characterized in that:
in step (A102), determining, based on the result of the facial contour fitting processing, whether the facial-feature distribution of the target object satisfies a preset facial state condition specifically includes:
Step (A1021): determining the contour depth distribution information corresponding to the facial-feature distribution of the target object;
Step (A1022): if the contour depth distribution information satisfies a first contour distribution gradient condition, determining that the facial-feature distribution of the target object satisfies the first preset facial state condition; if the contour depth distribution information satisfies a second contour distribution gradient condition, determining that the facial-feature distribution of the target object satisfies the second preset facial state condition, wherein the first preset facial state condition has a higher gradient distribution value range than the second preset facial state condition;
alternatively,
in step (A103), the first preset detection platform includes an mtcnn detection platform or an ncnn detection platform, and the second preset detection platform includes an opencv detection platform or an openmp detection platform.
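The branching of claims 2 and 4 amounts to routing a face to one of two detection platforms by its contour-depth gradient. A minimal sketch of that routing follows; the threshold value and the single scalar gradient are simplifications invented for the example (the patent speaks of gradient distribution value ranges, not a single number).

```python
# Hypothetical routing rule: faces whose contour-depth gradient falls in the
# higher range go to the first preset platform (mtcnn/ncnn per claim 4);
# the rest go to the second preset platform (opencv/openmp).

def select_platform(gradient_value, first_threshold=0.5):
    """Choose a detection platform from the contour-depth gradient."""
    if gradient_value >= first_threshold:  # first condition: higher gradient range
        return "mtcnn/ncnn"
    return "opencv/openmp"
```

The design intuition would be that strongly contoured faces justify a heavier detector, while flatter depth profiles can be handled by a lighter one.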
5. The mobile-phone-based refractive screening method according to claim 1, characterized in that:
in step (1), extracting the feature information about the face of the target object specifically includes:
Step (B101): performing facial feature point determination on the detection result of the face of the target object, based on a network mode combining the opencv detection platform with the ncnn detection platform, or based on an accelerated capture mode of the openmp detection platform, wherein the facial feature points include at least the eyes, the nose, or the mouth corners;
Step (B102): constructing, from the determined facial feature points, a three-dimensional spatial geometry corresponding to those feature points, and determining, from that geometry, the angle feature information of the target object's eyes within it, which serves as the feature information about the face of the target object.
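Step (B102) derives eye orientation from landmark geometry. The sketch below is a deliberately crude proxy, not the patent's method: it estimates head yaw from the horizontal asymmetry of the nose tip between the two eyes, whereas a real pipeline would fit a full 3D model to the landmarks (e.g. a perspective-n-point solve). Landmark coordinates are assumed to be 2D pixel positions.

```python
# Simplified yaw estimate from three landmarks: when the nose tip sits midway
# between the eyes, yaw is 0; a nose displaced toward one eye indicates the
# head (and eyes) are turned. Purely illustrative geometry.
import math

def estimate_yaw(left_eye, right_eye, nose):
    """Approximate yaw in radians from (x, y) landmark positions."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = right_eye[0] - left_eye[0]
    return math.atan2(nose[0] - mid_x, eye_span)
```

In the claimed method, the corresponding angle feature information then drives both the light modulation of step (2) and the angle correction of step (2033).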
6. The mobile-phone-based refractive screening method according to claim 1, characterized in that:
in step (2), obtaining, based on the angle feature information about the eyes of the target object contained in the feature information, the light spot image formed when external natural light is refracted by the eyes of the target object specifically includes:
Step (201): determining, based on the angle feature information, a modulation mode for the external natural light;
Step (202): performing adaptive modulation processing on the external natural light according to the determined modulation mode, so that the modulated external natural light enters the eyes of the target object;
Step (203): monitoring the eyes of the target object based on the angle feature information, and obtaining, according to the monitoring result, one or more light spot images formed when the external natural light is refracted by the eyes of the target object.
7. The mobile-phone-based refractive screening method according to claim 6, characterized in that:
in step (201), determining, based on the angle feature information, the modulation mode for the external natural light specifically includes:
Step (2011): determining, based on the angle feature information, the effective light-receiving orbit region area and/or the effective light-receiving pupil region area of the target object;
Step (2012): determining, according to the effective light-receiving orbit region area and/or the effective light-receiving pupil region area, an intensity modulation mode and/or a beam-diameter modulation mode to be applied to the external natural light.
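The decision in steps (2011)-(2012) maps two measured areas to one or both modulation modes. The sketch below is a hypothetical rule: the area units, thresholds, and mode names are invented; the patent only specifies that the modes are chosen from the two areas.

```python
# Invented decision rule: a small effective pupil area calls for intensity
# modulation (more light through a small aperture), and a large effective
# orbit area calls for beam-diameter modulation (narrow the beam onto the eye).

def choose_modulation(orbit_area_mm2, pupil_area_mm2):
    """Return the list of modulation modes selected from the two areas."""
    modes = []
    if pupil_area_mm2 < 10.0:      # small pupil: boost intensity
        modes.append("intensity")
    if orbit_area_mm2 > 600.0:     # wide orbit: narrow the beam
        modes.append("beam_diameter")
    return modes or ["none"]
```

Either mode alone, both together, or neither may result, matching the claim's "and/or" phrasing.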
8. The mobile-phone-based refractive screening method according to claim 6, characterized in that:
in step (203), monitoring the eyes of the target object based on the angle feature information and obtaining, according to the monitoring result, the one or more light spot images formed when the external natural light is refracted by the eyes of the target object specifically includes:
Step (2031): determining, based on the angle feature information, a monitoring space region in which the eyes of the target object are monitored;
Step (2032): instructing the mobile phone, at preset time intervals, to perform an image capture operation on the eyes of the target object entering the monitoring space region, thereby obtaining one or more eye images of the target object;
Step (2033): performing angle correction processing on the one or more eye images based on the angle feature information, thereby obtaining the one or more light spot images formed when the external natural light is refracted by the eyes of the target object.
9. The mobile-phone-based refractive screening method according to claim 8, characterized in that:
in step (2032), instructing the mobile phone to perform the image capture operation on the eyes of the target object entering the monitoring space region specifically includes:
Step (A20321): determining the relative spatial position between the mobile phone and the monitoring space region;
Step (A20322): instructing the gyroscope module of the mobile phone to perform an adaptive angle adjustment according to the relative spatial position, so that the camera module of the mobile phone is aligned with the monitoring space region;
Step (A20323): when the camera module of the mobile phone is aligned with the monitoring space region, instructing the camera module to perform the image capture operation;
alternatively,
in step (2032), instructing the mobile phone to perform the image capture operation on the eyes of the target object entering the monitoring space region specifically includes:
Step (B20321): determining the ambient brightness value currently corresponding to the monitoring space region, and comparing the ambient brightness value against a preset ambient brightness range;
Step (B20322): if the ambient brightness value lies within the preset ambient brightness range, instructing the mobile phone to perform the image capture operation on the eyes of the target object entering the monitoring space region; otherwise, instructing the mobile phone to pause the image capture operation.
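The alignment branch (A20321)-(A20323) reduces to a geometry problem: given the offset of the monitoring space region relative to the phone, compute the yaw and pitch the camera must turn to point at it. The sketch below shows that geometry only; the coordinate convention (x right, y up, z forward along the camera axis) is an assumption, and the adaptive gyroscope adjustment itself is not modelled.

```python
# Pointing angles toward an offset (dx, dy, dz) in the camera frame:
# yaw rotates about the vertical axis, pitch about the horizontal axis.
import math

def alignment_angles(dx, dy, dz):
    """Yaw and pitch in radians to aim the camera at the offset; dz > 0 is forward."""
    yaw = math.atan2(dx, dz)
    pitch = math.atan2(dy, math.hypot(dx, dz))
    return yaw, pitch
```

When both angles are (near) zero the camera is aligned with the monitoring space region and, per step (A20323), capture can proceed.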
10. The mobile-phone-based refractive screening method according to claim 1, characterized in that:
in step (3), performing matching processing on the light spot image based on the deep learning mode to determine the refractive information of the eyes of the target object specifically includes:
Step (301): based on the deep learning model, obtaining the corresponding spot shape and/or spot light-intensity distribution in the light spot image;
Step (302): based on the deep learning model, determining the correlation distribution between the spot state information of the light spot image and different ocular refractive information;
Step (303): based on the correlation distribution, performing comparison matching between the spot shape and/or the spot light-intensity distribution and the different ocular refractive information, thereby determining the refractive information of the eyes of the target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910442129.0A CN110287797B (en) | 2019-05-24 | 2019-05-24 | Refractive screening method based on mobile phone |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110287797A true CN110287797A (en) | 2019-09-27 |
CN110287797B CN110287797B (en) | 2020-06-12 |
Family
ID=68002362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910442129.0A Active CN110287797B (en) | 2019-05-24 | 2019-05-24 | Refractive screening method based on mobile phone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287797B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104887176A (en) * | 2015-06-18 | 2015-09-09 | 苏州四海通仪器有限公司 | Handheld independent vision measurement device and method |
CN105326471A (en) * | 2014-07-31 | 2016-02-17 | 汉王科技股份有限公司 | Children visual acuity testing device and testing method |
US20170359495A1 (en) * | 2016-06-13 | 2017-12-14 | Delphi Technologies, Inc. | Sunlight glare reduction system |
CN107890336A (en) * | 2017-12-05 | 2018-04-10 | 中南大学 | Diopter detecting system based on intelligent handheld device |
US20180350070A1 (en) * | 2017-05-31 | 2018-12-06 | Fujitsu Limited | Recording medium storing computer program for pupil detection, information processing apparatus, and pupil detecting method |
CN109725721A (en) * | 2018-12-29 | 2019-05-07 | 上海易维视科技股份有限公司 | Human-eye positioning method and system for naked eye 3D display system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110811537A (en) * | 2019-11-12 | 2020-02-21 | 赵成玉 | Functional glasses system |
CN110811537B (en) * | 2019-11-12 | 2020-10-13 | 赵成玉 | Functional glasses system |
Also Published As
Publication number | Publication date |
---|---|
CN110287797B (en) | 2020-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2012328140B2 (en) | System and method for identifying eye conditions | |
KR102056333B1 (en) | Method and apparatus and computer program for setting the indication of spectacle lens edge | |
CN103431840B (en) | Eye optical parameter detecting system and method | |
CN106022304B (en) | A kind of real-time body's sitting posture situation detection method based on binocular camera | |
CN107184178A (en) | A kind of hand-held vision drop instrument of intelligent portable and optometry method | |
CN109690553A (en) | The system and method for executing eye gaze tracking | |
US10002293B2 (en) | Image collection with increased accuracy | |
CN107890336B (en) | Diopter detecting system based on intelligent handheld device | |
CN113808160B (en) | Sight direction tracking method and device | |
JP3273614B2 (en) | Ophthalmic measurement and inspection equipment | |
CN103558909A (en) | Interactive projection display method and interactive projection display system | |
CN103604412B (en) | Localization method and locating device | |
CN106214118A (en) | A kind of ocular movement based on virtual reality monitoring system | |
CN106461983A (en) | Method of determining at least one parameter of visual behaviour of an individual | |
JPH0782539B2 (en) | Pupil imager | |
CN113692527B (en) | Method and device for measuring the local refractive power and/or the power distribution of an ophthalmic lens | |
CN104661580A (en) | Strabismus detection | |
CN105354825A (en) | Intelligent device for automatically identifying position of reading material in read-write scene and application of intelligent device | |
CN105354822A (en) | Intelligent apparatus for automatically identifying position of read-write element in read-write scene and application | |
CN109008937A (en) | Method for detecting diopter and equipment | |
CN106264443A (en) | A kind of stravismus intelligence Inspection and analysis system | |
CN110287797A (en) | A kind of dioptric screening technique based on mobile phone | |
CN107765840A (en) | A kind of Eye-controlling focus method equipment of the general headset equipment based on binocular measurement | |
CN110287796A (en) | A kind of dioptric screening method based on mobile phone and external equipment | |
CN110598635B (en) | Method and system for face detection and pupil positioning in continuous video frames |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||