CN108197548A - Method, client and server for detecting refractive error of the human eye - Google Patents
- Publication number
- CN108197548A (application number CN201711448361.2A)
- Authority
- CN
- China
- Prior art keywords
- ametropia
- user
- diopter
- human eye
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/103—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/103—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
- A61B3/1035—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes for measuring astigmatism
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
This application relates to a method, client and server for detecting refractive error of the human eye, and belongs to the field of human eye refractive error detection. The application includes constructing an ametropia data set, the ametropia data set comprising facial images of at least one facial angle of ametropic persons and the diopter associated with each facial image; constructing a deep convolutional network; training the deep convolutional network using the ametropia data set; and acquiring a facial image of a user, recognizing the facial image of the user with the trained deep convolutional network, and outputting the user's diopter. With the application, users can detect their own diopter: uploading a facial image of themselves yields diopter data, so that they can learn their specific condition in time at the early stage of ametropia and conveniently take preventive measures.
Description
Technical Field
The application belongs to the field of human eye ametropia detection, and particularly relates to a method, a client and a server for detecting human eye ametropia.
Background
Ametropia (refractive error) means that, when the eye is not accommodating, parallel rays of light cannot be focused into a clear image on the retina after being refracted by the eye; the image instead forms in front of or behind the retina. Refractive errors include hyperopia, myopia and astigmatism.
The degree of refractive error cannot be reduced by non-surgical means; common practice is to slow or stop further deterioration by means such as prescription lenses.
There are currently two main ways in which ametropia is discovered. The first is direct examination and diagnosis by a professional optometrist using an instrument. The second is to measure overall visual acuity with an eye chart and refer anyone with low vision to a specialist ophthalmologist to determine the cause of the vision problem; if it is indeed ametropia, a professional optometrist then determines the diopter. Both methods give accurate detection results, but in practice, because they must be carried out by a professional, people have to go to the professional for detection and arrange time separately, and scheduling conflicts often arise.
Ametropia ranges from slight to serious. Slight ametropia is easier to recover from, while serious ametropia is difficult to reverse, so early detection is of great significance for controlling the development of ametropia.
In the early stage of ametropia, however, detection is often missed. On the one hand, the visual demands of the environment people are in may be low, so early ametropia is not easy to notice; on the other hand, and more importantly, because detection currently requires professionals operating professional equipment, the perceived trouble lowers the willingness to have ametropia detected, and inconvenient scheduling delays the opportunity for early detection and prevention, so the ametropia becomes more serious.
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, the present application provides a method, a client and a server for detecting refractive error of a human eye.
In order to achieve the purpose, the following technical scheme is adopted in the application:
a method of detecting refractive error of an eye comprising
Constructing an ametropia data set comprising facial images of at least one facial angle of an ametropic person and the diopter associated with the facial images of the ametropic person;
constructing a deep convolutional network;
training the deep convolutional network using the ametropia dataset;
and acquiring a facial image of the user, identifying the facial image of the user by using the trained deep convolutional network, and outputting the diopter of the user.
Further, the method for detecting the ametropia of the human eye further comprises preprocessing the facial images of the ametropic persons in the ametropia data set, wherein the preprocessing comprises extracting the face part in the facial image of the ametropic person, adjusting the face part to a preset pixel size, and filling the non-face part of the facial image of the ametropic person with a preset color.
Further, the preprocessing further comprises performing enhancement processing on the facial image of the ametropic person, wherein the enhancement processing comprises one or more of contrast adjustment processing, brightness adjustment processing, saturation adjustment processing, flipping processing and scaling processing.
Further, the deep convolutional network comprises a first convolutional layer module, a second convolutional layer module, a fully-connected layer module and an activation function layer which are sequentially connected; wherein,
the first convolution layer module comprises at least two convolution layers, each convolution layer is configured to be arranged according to a preset sequence, and the output of the previous convolution layer is used as the input of the next convolution layer;
the second convolutional layer module comprises at least three convolutional layers, each convolutional layer is configured to be arranged according to a preset sequence, and the output of each convolutional layer is used as an input to every convolutional layer that follows it;
the fully-connected layer module comprises at least one fully-connected layer; when the fully-connected layer module contains two or more fully-connected layers, the fully-connected layers are arranged in a preset sequence, and the output of the previous fully-connected layer is used as the input of the next fully-connected layer.
Furthermore, the number of the convolutional layers in the first convolutional layer module is five, the number of the convolutional layers in the second convolutional layer module is eight, the number of the fully-connected layers in the fully-connected layer module is two, and the activation function layer adopts a Softmax function.
Further, L1 and L2 regularization is performed on each of the first convolutional layer module and the second convolutional layer module.
Further, the method for detecting refractive error of human eyes further comprises the step of optimizing the deep convolutional network by using an optimizer, wherein the optimizing process comprises
Identifying the facial image of an ametropic person by using the trained deep convolutional network to obtain a test diopter, wherein the optimizer takes the difference between the test diopter and the actual diopter of the ametropic person as the loss, and each convolutional layer in the deep convolutional network is adjusted through back-propagation.
Further, the deep convolutional network is also pre-trained using an ImageNet dataset and/or an MS-Celeb-1M dataset before being trained using the ametropia dataset.
A client for detecting refractive error of a human eye, the client configured to:
the system comprises a server, a face image acquisition unit, a face recognition unit and a display unit, wherein the face image acquisition unit is used for acquiring a face image of at least one face angle of a user and sending the face image of the user to the server;
receiving diopter detected by the server according to facial image recognition of a user;
and feeding back the diopter detected by identification to the user.
A server for detecting ametropia of a human eye includes
An ametropia dataset module configured for constructing an ametropia dataset comprising facial images of at least one facial angle of an ametropic person, and the diopter associated with the facial images of the ametropic person;
a deep convolutional network module configured to
Constructing a deep convolutional network, and training by using the ametropia data set to obtain a trained deep convolutional network;
identifying the facial image of the user by using the trained deep convolutional network, and outputting the diopter of the user;
the receiving module is configured to receive the facial image of the user sent by the client and send the facial image of the user to the deep convolutional network module for identification;
a feedback module configured to feed back the diopter outputted by the deep convolutional network module to the client.
By adopting the above technical solutions, the application has at least the following beneficial effects:
the method comprises the steps of constructing the ametropia data set and the depth convolution network, training the depth convolution network by using the ametropia data set, identifying and detecting a face image of a user by using the trained depth convolution network, and outputting diopter of the user. Through the method and the device, the user can detect the diopter by himself and upload the facial image of the user to obtain diopter data, so that the user can know specific conditions in time at the early stage of ametropia conveniently, and preventive measures can be taken conveniently.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is one embodiment of a method of detecting refractive error of a human eye according to the present application;
fig. 2 is an embodiment of a server for detecting refractive error of a human eye according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide a method, a client and a server for detecting refractive error of a human eye, which are described in detail below with reference to the accompanying drawings.
In one embodiment of the present application, as shown in FIG. 1, there is provided a method of detecting refractive error of a human eye comprising
Constructing an ametropia data set comprising facial images of at least one facial angle of an ametropic person and the diopter associated with the facial images of the ametropic person;
constructing a deep convolutional network;
training the deep convolutional network using the ametropia dataset;
and acquiring a facial image of the user, identifying the facial image of the user by using the trained deep convolutional network, and outputting the diopter of the user.
In the application, when the ametropia data set is constructed, the facial image of an ametropic person may be a photo or a video, may be acquired with a camera or a mobile phone, or may be acquired in other ways such as infrared or ultrasonic imaging; the acquired image may be a frontal facial image or facial images taken from several different angles.
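For illustration only, a minimal acquisition sketch is given below; it grabs a single frontal frame from a local camera with OpenCV, which is merely one of the acquisition channels mentioned above and is not limiting.

```python
# Minimal acquisition sketch (illustrative): one frontal frame from the
# default camera; phone upload, infrared and ultrasonic channels are not shown.
import cv2

def capture_face_image(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()            # one BGR frame; loop here for more angles
    cap.release()
    if not ok:
        raise RuntimeError("camera frame could not be read")
    return frame
```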
In the related art, a deep convolutional network is an algorithm that can automatically learn a suitable representation and decision rule from data.
In the application, the ametropia data set is constructed, the deep convolutional network is constructed, the deep convolutional network is trained with the ametropia data set, the trained deep convolutional network recognizes the facial image of a user, and the user's diopter is output. Through this scheme, users can detect their diopter by themselves and obtain diopter data by uploading their own facial image, so that they can learn their specific condition in time at the early stage of ametropia and conveniently take preventive measures.
In one application scenario, a user learns through the above scheme that he or she has mild ametropia and begins recovery measures. During the recovery stage, the user can also check his or her diopter at any time, as needed, through the same scheme, which makes it convenient to track recovery from mild ametropia. This overcomes the following problems of repeatedly visiting a professional to be examined with professional equipment:
1. Each trip requires separate scheduling, scheduling conflicts may arise, and the number of checks may be limited or even reduced.
2. When the user postpones a re-examination, the opportunity for early detection and prevention may already have been missed and the refractive error may have become more severe.
3. The anticipated trouble reduces people's willingness to have refractive errors detected, which in turn allows the refractive errors to become severe.
In one embodiment of the application, the method for detecting refractive error of human eyes further comprises preprocessing the facial images of the ametropic persons in the ametropia data set, wherein the preprocessing comprises extracting the face part in the facial image of the ametropic person, adjusting the face part to a preset pixel size, and filling the non-face part of the facial image of the ametropic person with a preset color.
With this scheme, after the facial image is preprocessed, the face part is adjusted to a preset pixel size, for example 500 by 500 pixels, which ensures that every sample used for deep convolutional network training and learning is consistent. At the same time, the non-face part is filled with a preset color; filling it with black, for example, removes possible interference from the non-face part of the image during training, which improves the accuracy of the diopter output when the trained deep convolutional network recognizes facial images.
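In a specific application, the preprocessing may for example be implemented as in the following sketch; the Haar-cascade face detector, the 500-by-500 target size and the black fill color are illustrative assumptions rather than values fixed by the application.

```python
# Preprocessing sketch (illustrative): crop the face, resize it to fit a
# 500x500 canvas, and fill every non-face pixel with black.
import cv2
import numpy as np

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(image_bgr: np.ndarray, size: int = 500) -> np.ndarray:
    """Crop the largest detected face, scale it to fit size x size while
    keeping its aspect ratio, and paste it centred on a black canvas."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # keep the largest face
    face = image_bgr[y:y + h, x:x + w]
    scale = size / max(w, h)
    face = cv2.resize(face, (int(w * scale), int(h * scale)))
    canvas = np.zeros((size, size, 3), dtype=np.uint8)      # preset color: black
    fh, fw = face.shape[:2]
    top, left = (size - fh) // 2, (size - fw) // 2
    canvas[top:top + fh, left:left + fw] = face              # centre the face
    return canvas
```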
In one embodiment of the present application, the preprocessing further includes performing enhancement processing on the facial image of the ametropic person, the enhancement processing including one or more of contrast adjustment processing, brightness adjustment processing, saturation adjustment processing, flipping processing, and scaling processing.
By this scheme, enhancing the image of the face part can strengthen the facial features related to ametropia; on the one hand this supports training and learning by the deep convolutional network, and on the other hand it can amplify the differences between features and improve the accuracy with which ametropia features are matched to the corresponding diopters.
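One possible realisation of the enhancement processing, interpreted here as random training-time augmentation with torchvision, is sketched below; the jitter ranges and probabilities are illustrative assumptions.

```python
# Illustrative enhancement pipeline covering the listed operations.
from torchvision import transforms

enhance = transforms.Compose([
    transforms.ColorJitter(brightness=0.2,    # brightness adjustment
                           contrast=0.2,      # contrast adjustment
                           saturation=0.2),   # saturation adjustment
    transforms.RandomHorizontalFlip(p=0.5),   # flipping
    transforms.RandomResizedCrop(500, scale=(0.8, 1.0)),  # scaling
    transforms.ToTensor(),
])
```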
In one embodiment of the present application, the deep convolutional network comprises a first convolutional layer module, a second convolutional layer module, a fully-connected layer module and an activation function layer, which are connected in sequence; wherein,
the first convolution layer module comprises at least two convolution layers, each convolution layer is configured to be arranged according to a preset sequence, and the output of the previous convolution layer is used as the input of the next convolution layer;
the second convolutional layer module comprises at least three convolutional layers, each convolutional layer is configured to be arranged according to a preset sequence, and the output of each convolutional layer is used as an input to every convolutional layer that follows it;
the fully-connected layer module comprises at least one fully-connected layer; when the fully-connected layer module contains two or more fully-connected layers, the fully-connected layers are arranged in a preset sequence, and the output of the previous fully-connected layer is used as the input of the next fully-connected layer.
Through this scheme, among the sequentially arranged convolutional layers of the first convolutional layer module, the output of the previous convolutional layer serves as the input of the next one, each convolutional layer further processes the output of the previous layer to learn a higher-level representation, and information flows in one direction. To help information flow and make information transfer more complete, among the convolutional layers of the second convolutional layer module the output of each convolutional layer serves as an input to every convolutional layer after it, so that each convolutional layer feeds multiple later layers; low-level representations can therefore be used more fully, and the constructed deep convolutional network occupies less storage space.
In an embodiment of the present application, the number of convolutional layers in the first convolutional layer module is five, the number of convolutional layers in the second convolutional layer module is eight, the number of fully-connected layers in the fully-connected layer module is two, and the activation function layer employs a Softmax function.
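For illustration only, a PyTorch sketch of this topology is given below; the channel widths, kernel sizes, strides, the 500-by-500 input and the 20 diopter classes are assumptions, while the layer counts (five sequentially connected convolutional layers, eight densely connected convolutional layers, two fully-connected layers and a Softmax output) follow this embodiment.

```python
# Sketch of the described network; widths and strides are illustrative.
import torch
import torch.nn as nn

class DenseConvBlock(nn.Module):
    """Second module: the output of every conv layer is concatenated into the
    input of every later conv layer (dense connectivity)."""
    def __init__(self, in_channels: int, growth: int = 32, num_layers: int = 8):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            channels += growth
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class RefractionNet(nn.Module):
    def __init__(self, num_diopter_classes: int = 20):
        super().__init__()
        # First module: five conv layers, each feeding only the next one.
        chans = [3, 32, 64, 64, 128, 128]
        first = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            first += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                      nn.ReLU(inplace=True)]
        self.first = nn.Sequential(*first)
        # Second module: eight densely connected conv layers.
        self.second = DenseConvBlock(chans[-1], growth=32, num_layers=8)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fully-connected module: two layers, the first feeding the second.
        self.fc = nn.Sequential(
            nn.Linear(self.second.out_channels, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_diopter_classes))
        self.softmax = nn.Softmax(dim=1)      # activation function layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.first(x)
        x = self.second(x)
        x = self.pool(x).flatten(1)
        return self.softmax(self.fc(x))
```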
In one embodiment of the present application, the method further comprises performing L1 and L2 regularization on each of the first convolutional layer module and the second convolutional layer module.
In the related art, regularization is used to prevent deep convolutional networks from overfitting the data; an overfitted network fits the current data too closely, so it performs well only on the current data set and only moderately on other data sets. Regularization addresses this: L1 regularization produces sparsity, while L2 regularization pushes parameters closer to zero, smooths the coefficient vector and improves generalization. In the present application, applying both L1 and L2 regularization balances sparsity and smoothness.
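Reusing the RefractionNet sketch above, the combined penalty on both convolutional modules may for example be computed as follows; the coefficients are illustrative assumptions.

```python
# Illustrative L1 + L2 penalty over the weights of both convolutional modules.
import torch.nn as nn

def l1_l2_penalty(model: nn.Module, l1: float = 1e-5, l2: float = 1e-4):
    reg = 0.0
    for module in (model.first, model.second):     # both conv modules
        for param in module.parameters():
            reg = reg + l1 * param.abs().sum()      # L1: encourages sparsity
            reg = reg + l2 * param.pow(2).sum()     # L2: keeps weights small
    return reg
```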
In one embodiment of the present application, the method for detecting refractive error of a human eye further comprises performing an optimization process on the deep convolutional network using an optimizer, the optimization process comprising
Identifying the facial image of an ametropic person by using the trained deep convolutional network to obtain a test diopter, wherein the optimizer takes the difference between the test diopter and the actual diopter of the ametropic person as the loss, and each convolutional layer in the deep convolutional network is adjusted through back-propagation.
By this scheme, the accuracy of the diopter output by the deep convolutional network after recognizing the facial image of an ametropic person can be ensured. In a specific application, the optimizer can be an Adam optimizer.
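A minimal training-loop sketch is given below; reading a diopter off the Softmax output through assumed bin centres and using the absolute difference as the loss is an illustrative choice, as are the learning rate, epoch count and bin range.

```python
# Illustrative optimization loop: Adam minimises the gap between the predicted
# and measured diopter, plus the L1/L2 penalty defined above.
import torch

def train(model, loader, epochs: int = 10, device: str = "cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Assumed diopter bin centres, e.g. -10.0 D to -0.5 D in 0.5 D steps.
    bins = torch.linspace(-10.0, -0.5, steps=20, device=device)
    for _ in range(epochs):
        for images, true_diopter in loader:           # preprocessed face batches
            images = images.to(device)
            true_diopter = true_diopter.to(device).float()
            probs = model(images)                      # Softmax probabilities
            pred_diopter = (probs * bins).sum(dim=1)   # test diopter
            loss = (pred_diopter - true_diopter).abs().mean()
            loss = loss + l1_l2_penalty(model)         # L1 and L2 regularisation
            optimizer.zero_grad()
            loss.backward()                            # back-propagation
            optimizer.step()                           # adjust every layer
```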
In one embodiment of the present application, the deep convolutional network is also pre-trained using an ImageNet data set and/or an MS-Celeb-1M data set before training the deep convolutional network using the ametropia data set.
The ImageNet data set and the MS-Celeb-1M data set are existing image recognition databases with very large amounts of data: the ImageNet data set contains over a million real-world pictures in a thousand categories, and the MS-Celeb-1M data set contains on the order of a million face pictures. In practice, the ametropia data set constructed in the application may contain relatively few photos. For general visual representations such as edges, faces and eyes, pre-training the deep convolutional network on the large number of photos in the ImageNet data set and/or the MS-Celeb-1M data set allows the network to learn these general representations more accurately, so that when the ametropia data set is then learned, the diopter recognition results are more accurate.
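A sketch of the pre-train-then-fine-tune idea follows; the cross-entropy-style objective for the generic stage, the placeholder data loaders and the 20 diopter bins are assumptions, since the application only states that ImageNet and/or MS-Celeb-1M are used before the ametropia data set.

```python
# Illustrative pre-training followed by fine-tuning, reusing RefractionNet
# and train() from the sketches above. `imagenet_loader` and
# `ametropia_loader` are placeholder loaders, not defined by the application.
import torch
import torch.nn as nn

def pretrain(model, loader, epochs: int = 5, device: str = "cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.NLLLoss()                     # model already ends in Softmax
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            loss = criterion(model(images).clamp_min(1e-9).log(), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

model = RefractionNet(num_diopter_classes=1000)  # head sized for generic classes
pretrain(model, imagenet_loader)                 # learn general visual features
model.fc[-1] = nn.Linear(256, 20)                # new head for diopter bins
train(model, ametropia_loader)                   # fine-tune on the ametropia data
```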
In one embodiment of the present application, the present application provides a client for detecting refractive error of a human eye, the client being configured to:
the system comprises a server, a face image acquisition unit, a face recognition unit and a display unit, wherein the face image acquisition unit is used for acquiring a face image of at least one face angle of a user and sending the face image of the user to the server;
receiving diopter detected by the server according to facial image recognition of a user;
and feeding back the diopter detected by identification to the user.
In a specific application, the client may be a mobile phone client or a web page client, and the user obtains his or her diopter by uploading a facial image through the client. The user can thus conveniently detect and track his or her diopter as needed. The method details involved have been described in the embodiments above and are not repeated here.
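A minimal client sketch is given below; the endpoint URL and the JSON field names are hypothetical, since the application does not define a wire format.

```python
# Illustrative client: upload one or more facial images and print the diopter
# returned by the server. URL and field names are placeholders.
import requests

def request_diopter(image_paths, server_url="http://example.com/api/diopter"):
    files = [("images", open(p, "rb")) for p in image_paths]
    try:
        response = requests.post(server_url, files=files, timeout=30)
        response.raise_for_status()
        return response.json()["diopter"]        # e.g. {"diopter": -2.5}
    finally:
        for _, f in files:
            f.close()

if __name__ == "__main__":
    print("Estimated diopter:", request_diopter(["front_face.jpg"]))
```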
In one embodiment of the present application, as shown in fig. 2, the present application provides a server 1 for detecting refractive errors of a human eye, comprising
An ametropia dataset module 101 configured for constructing an ametropia dataset comprising facial images of at least one facial angle of an ametropic person, and the diopter associated with the facial images of the ametropic person;
a deep convolutional network module 102 configured to
Constructing a deep convolutional network, and training by using the ametropia data set to obtain a trained deep convolutional network;
identifying the facial image of the user by using the trained deep convolution network, and outputting the diopter of the user;
a receiving module 103 configured to receive the facial image of the user sent by the client 2, and send the facial image of the user to the deep convolutional network module 102 for identification;
a feedback module 104 configured to feed back diopter outputted by the deep convolutional network module 102 to the client 2.
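For illustration only, a minimal Flask-based server sketch reusing the earlier code sketches is given below; the route path, field names and bin-to-diopter mapping are assumptions. The route plays the role of the receiving module 103, the RefractionNet call the role of the deep convolutional network module 102, and the JSON response the role of the feedback module 104.

```python
# Illustrative server: receive a face image, run the trained network, return
# the diopter. Reuses RefractionNet and preprocess_face from earlier sketches.
import io

import numpy as np
import torch
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = RefractionNet(num_diopter_classes=20)   # trained weights assumed loaded
model.eval()
bins = torch.linspace(-10.0, -0.5, steps=20)    # assumed diopter bin centres

@app.route("/api/diopter", methods=["POST"])
def detect_diopter():
    upload = request.files["images"]                             # receiving module
    rgb = np.array(Image.open(io.BytesIO(upload.read())).convert("RGB"))
    face = preprocess_face(rgb[:, :, ::-1].copy())               # BGR for OpenCV
    tensor = torch.from_numpy(face[:, :, ::-1].copy()).permute(2, 0, 1)
    tensor = tensor.float().unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = model(tensor)                                    # network module
    diopter = float((probs * bins).sum())
    return jsonify({"diopter": diopter})                         # feedback module
```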
With regard to the above-mentioned server embodiment, the specific manner in which each module performs operations and the advantages thereof have been described in detail in the above-mentioned embodiment related to the method, and will not be described in detail here.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (10)
1. A method of detecting refractive error in a human eye, comprising:
Constructing an ametropia data set comprising facial images of at least one facial angle of an ametropic person and the diopter associated with the facial images of the ametropic person;
constructing a deep convolutional network;
training the deep convolutional network using the ametropia dataset;
and acquiring a facial image of the user, identifying the facial image of the user by using the trained deep convolutional network, and outputting the diopter of the user.
2. A method of detecting refractive error of a human eye according to claim 1, wherein: the method for detecting the ametropia of the human eyes further comprises preprocessing the facial images of the ametropic persons in the ametropia data set, wherein the preprocessing comprises extracting the face part in the facial image of the ametropic person, adjusting the face part to a preset pixel size, and filling the non-face part of the facial image of the ametropic person with a preset color.
3. A method of detecting refractive error of a human eye according to claim 2, wherein: the preprocessing further comprises performing enhancement processing on the facial image of the ametropic person, wherein the enhancement processing comprises one or more of contrast adjustment processing, brightness adjustment processing, saturation adjustment processing, flipping processing and scaling processing.
4. A method of detecting refractive error of a human eye according to claim 1, wherein: the deep convolutional network comprises a first convolutional layer module, a second convolutional layer module, a full-connection layer module and an activation function layer which are sequentially connected; wherein,
the first convolution layer module comprises at least two convolution layers, each convolution layer is configured to be arranged according to a preset sequence, and the output of the previous convolution layer is used as the input of the next convolution layer;
the second convolutional layer module comprises at least three convolutional layers, each convolutional layer is configured to be arranged according to a preset sequence, and the output of each convolutional layer is used as an input to every convolutional layer that follows it;
the fully-connected layer module comprises at least one fully-connected layer; when the fully-connected layer module contains two or more fully-connected layers, the fully-connected layers are arranged in a preset sequence, and the output of the previous fully-connected layer is used as the input of the next fully-connected layer.
5. The method for detecting refractive error of a human eye according to claim 4, wherein: the number of the convolution layers in the first convolution layer module is five, the number of the convolution layers in the second convolution layer module is eight, the number of the full-connection layers in the full-connection layer module is two, and the activation function layer adopts a Softmax function.
6. The method for detecting refractive error of a human eye according to claim 4 or 5, wherein: further comprising L1 and L2 regularization of each of the first convolutional layer module and the second convolutional layer module.
7. A method of detecting refractive error of a human eye according to claim 1, wherein: the method for detecting refractive errors of the human eye further comprises the step of optimizing the deep convolutional network by using an optimizer, wherein the optimizing process comprises
Identifying the facial image of an ametropic person by using the trained deep convolutional network to obtain a test diopter, wherein the optimizer takes the difference between the test diopter and the actual diopter of the ametropic person as the loss, and each convolutional layer in the deep convolutional network is adjusted through back-propagation.
8. A method of detecting refractive error of a human eye according to claim 1, wherein: the deep convolutional network is also pre-trained using an ImageNet dataset and/or an MS-Celeb-1M dataset before training the deep convolutional network using the ametropia dataset.
9. A client for detecting refractive error of a human eye, wherein the client is configured for:
the system comprises a server, a face image acquisition unit, a face recognition unit and a display unit, wherein the face image acquisition unit is used for acquiring a face image of at least one face angle of a user and sending the face image of the user to the server;
receiving diopter detected by the server according to facial image recognition of a user;
and feeding back the diopter detected by identification to the user.
10. A server for detecting refractive error of a human eye, comprising:
An ametropia dataset module configured for constructing an ametropia dataset comprising facial images of at least one facial angle of an ametropic person, and the diopter associated with the facial images of the ametropic person;
a deep convolutional network module configured to
Constructing a deep convolutional network, and training by using the ametropia data set to obtain a trained deep convolutional network;
identifying the facial image of the user by using the trained deep convolutional network, and outputting the diopter of the user;
the receiving module is configured to receive the facial image of the user sent by the client and send the facial image of the user to the deep convolutional network module for identification;
a feedback module configured to feed back the diopter outputted by the deep convolutional network module to the client.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711448361.2A CN108197548A (en) | 2017-12-27 | 2017-12-27 | Method, client and server for detecting refractive error of the human eye
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711448361.2A CN108197548A (en) | 2017-12-27 | 2017-12-27 | Method, client and server for detecting refractive error of the human eye
Publications (1)
Publication Number | Publication Date |
---|---|
CN108197548A true CN108197548A (en) | 2018-06-22 |
Family
ID=62584814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711448361.2A Pending CN108197548A (en) | Method, client and server for detecting refractive error of the human eye
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108197548A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024060418A1 (en) * | 2022-09-22 | 2024-03-28 | 深圳大学 | Abnormal refractive state recognition method and apparatus based on abnormal eye posture |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105764405A (en) * | 2013-06-06 | 2016-07-13 | 6超越6视觉有限公司 | System and method for measurement of refractive error of eye based on subjective distance metering |
CN106934798A (en) * | 2017-02-20 | 2017-07-07 | 苏州体素信息科技有限公司 | Diabetic retinopathy classification stage division based on deep learning |
CN107423571A (en) * | 2017-05-04 | 2017-12-01 | 深圳硅基仿生科技有限公司 | Diabetic retinopathy identifying system based on eye fundus image |
Non-Patent Citations (8)
Title |
---|
- GAO HUANG ET AL.: "Densely Connected Convolutional Networks", https://arxiv.org/pdf/1608.06993v4.pdf *
- 孔佑磊: "Low-resolution face recognition technology and its application", China Masters' Theses Full-text Database, Information Science and Technology *
- 徐桂从: "Face tracking, recognition and age estimation", China Masters' Theses Full-text Database, Information Science and Technology *
- 李长云 et al.: "Intelligent Sensing Technology and Its Application in Electrical Engineering", 31 May 2017, University of Electronic Science and Technology of China Press *
- 杨眷玉: "Research and implementation of object recognition based on convolutional neural networks", China Masters' Theses Full-text Database, Information Science and Technology *
- 王斌 et al.: "Clinical application of photorefraction with a modified compact digital camera", International Eye Science *
- 苏欣 et al.: "Research on Network Traffic Analysis and Malicious Behavior Detection for Android Mobile Applications", 31 October 2016, Hunan University Press *
- 褚宝增 et al.: "Modern Mathematical Geology", 31 August 2014, China Science and Technology Press *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180622 |