CN113288038A - Self-service vision testing method based on computer vision - Google Patents
- Publication number
- CN113288038A (application CN202110503183.9A)
- Authority
- CN
- China
- Prior art keywords
- module
- vision
- eye
- test
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/02—Subjective types, i.e. testing apparatus requiring the active assistance of the patient
- A61B3/028—Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
- A61B3/032—Devices for presenting test symbols or characters, e.g. test chart projectors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0008—Apparatus for testing the eyes; Instruments for examining the eyes provided with illuminating means
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Ophthalmology & Optometry (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Animal Behavior & Ethology (AREA)
- Artificial Intelligence (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Probability & Statistics with Applications (AREA)
- Eye Examination Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a self-service vision testing method based on computer vision, comprising the following steps: S1, judging whether the subject is in position; S2, judging which eye of the subject is under test; S3, performing the vision test and outputting the result. The invention uses computer vision technology to automatically detect the subject's body position, eye occlusion, and gestures, providing a natural interaction experience throughout the vision test. A vision test report is produced, and by associating results with a user account the subject's vision history is recorded, reflecting how the subject's vision develops over time.
Description
Technical Field
The invention belongs to the technical field of computer vision, and relates to a self-service vision testing method based on computer vision.
Background
Vision testing is one of the most routine physical-examination items. Visual acuity is usually measured by having a subject identify symbols on an eye chart consisting of 14 rows of "E" characters of different sizes and opening directions, graded from 0.1 to 1.5 (or from 4.0 to 5.2 on the logarithmic scale); each row is labeled with its grade. The subject stands 5 meters from the chart with the eyes level with the 1.0 row. An assistant points at "E" characters from large to small, and the subject indicates the opening direction of each "E" by gesture or speech. This traditional procedure is widely adopted.
In recent years, with the spread of smartphones, applications have appeared that move the eye chart onto a phone: the eyes and the phone are kept at a fixed distance, the chart is displayed on the screen, and the user taps the orientation of the letter "E" with a finger to complete the test. There are also dedicated vision-testing instruments designed around the standard logarithmic eye chart: the instrument's host displays "E" characters at random, the subject answers the orientation of each "E" with a remote-control answering device, and the host adjusts the character size according to the answers to determine the smallest character the subject can resolve.
Vision examination is a high-frequency health check, yet the existing methods all have drawbacks. The traditional method needs an assistant to tell the subject whether each judgment is correct, which increases labor cost and prevents a fully self-service test. Testing on a smartphone essentially moves the eye chart onto the phone: the subject reports the recognition result by tapping the screen and the phone indicates whether it is correct, but keeping a fixed eye-to-phone distance is difficult, head deviation degrades accuracy, and the phone test does not conform to the vision-testing standard. Dedicated vision testers are expensive and require a hand-held remote controller to report results; the remote is easily lost and the devices are inconvenient to use. No existing method or device achieves a fully self-service test with natural interaction, and that is the problem this invention addresses.
Disclosure of Invention
In order to solve the above problems, the technical scheme of the invention is a self-service vision testing method based on computer vision, comprising the following steps:
S1, judging whether the subject is in position;
S2, judging which eye of the subject is under test;
S3, performing the vision test and outputting the result;
the self-service vision testing system based on computer vision corresponding to the method comprises a host computer control module, a light ray projection module, a camera module, a display screen module, a voice prompt module, a network storage module, a human body detection module, a human face five sense organs feature positioning module, an eye state detection module and a gesture recognition module, wherein the host computer control module is respectively connected with the light ray projection module, the camera module, the display screen module, the voice prompt module and the network storage module, the camera module is respectively connected with the human body detection module, the human face five sense organs feature positioning module, the eye state detection module and the gesture recognition module, and after the camera module shoots, the position of a human body, the human face five sense organs, the eye state and the gesture recognition are processed and analyzed and fed back to the host computer control module.
Preferably, judging whether the subject is in position comprises the following steps:
S11, the light projection module projects colored light onto the ground at a preset distance from the vision testing system, and the voice prompt module asks the subject to stand at the projected line;
S12, the camera module acquires images in real time, the human body detection module extracts the image region around the projected line, a deep convolutional network recognition algorithm judges whether a subject is standing at the line, and the human body bounding box is detected.
Preferably, the light projection module comprises a laser line lamp.
Preferably, determining which eye of the subject is under test comprises the following steps:
S21, the region of interest of the head image is cropped according to the human body bounding box, the facial-feature localization module locates the positions of the facial feature points with a deep convolutional network, and the network predicts all feature points whether or not the eyes are occluded;
S22, the regions of interest of the left and right eyes are cropped according to the left- and right-eye feature points and fed into the eye-occlusion network in the eye state detection module, which outputs which eye is occluded.
Preferably, performing the vision test and outputting the result comprises the following steps:
S31, the voice prompt module announces entry into the vision-test preparation state; letters "E" of various sizes and orientations are displayed on the screen in turn, and the camera module captures images and sends them to the deep neural network in the gesture recognition module to classify the gestures;
S32, when, at a given letter size level, the subject makes a preset number of consecutive recognition errors, the vision level is determined, the test stops, and a voice prompt directs testing of the other eye;
S33, the vision level is announced by voice or sent by SMS or WeChat, and is stored in the host control module.
The beneficial effects of the invention at least include:
1) the self-service vision testing system of the invention dispenses with the guidance of assisting personnel and achieves a completely self-administered vision examination;
2) computer vision technology automatically detects the subject's body position, eye occlusion, and gestures, providing natural interaction throughout the vision test; a vision test report is produced, and by associating results with a user account the subject's vision history is recorded, reflecting the development of the subject's vision.
Drawings
FIG. 1 is a flow chart of steps of a computer vision-based self-service vision testing method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system for a self-service vision testing method based on computer vision according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a testing process of a self-service vision testing method based on computer vision according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of five gestures of a self-service vision testing method based on computer vision according to an embodiment of the present invention;
fig. 5 is a network module and a network structure diagram of the self-service vision testing method based on computer vision according to the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications, and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description, certain specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent to one skilled in the art that the invention may be practiced without these specific details.
Referring to fig. 1, an embodiment of the invention is a self-service vision testing method based on computer vision, comprising the following steps:
S1, judging whether the subject is in position;
S2, judging which eye of the subject is under test;
S3, performing the vision test and outputting the result;
referring to fig. 2, the self-service vision testing system based on computer vision corresponding to the method comprises a host computer control module 10, a light projection module 20, a camera module 30, a display screen module 40, a voice prompt module 50, a network storage module 60, a human body detection module 31, a human face five sense organ feature positioning module 32, an eye state detection module 33 and a gesture recognition module 34, wherein the host computer control module 10 is respectively connected with the light projection module 20, the camera module 30, the display screen module 40, the voice prompt module 50 and the network storage module 60, the camera module 30 is respectively connected with the human body detection module 31, the human face five sense organ feature positioning module 32, the eye state detection module 33 and the gesture recognition module 34, after the camera module 30 shoots, the position of a human body, the human face five sense organs positioning, the eye state and the gesture recognition are processed and analyzed, and feeds back the results to the host control module, and stores the results in the network storage module 60.
Referring to fig. 3, a schematic flow chart of the system: first the laser line lamp, i.e. the light projection module 20, projects a line onto a specified position, fixing the distance between the subject and the eye chart; the human body detection module 31 crops the image region around the projected line and determines whether a person is present; then the facial-feature localization module 32 and the eye state detection module 33 classify which eye is occluded, and hence which eye is under test; the letters "E" on the eye chart are then presented one by one, and the gesture recognition module 34 recognizes the subject's arm posture to judge whether a correct response was given for the current "E" orientation; finally the vision test result is produced.
S1: determining whether the subject is in position, further comprising:
S11: the light projection module projects green (or other colored) light onto the ground at a set distance from the vision testing system. The distance is adjustable because different tests require different distances; it is generally 5 meters. The line projected on the ground ensures that the subject keeps the standard testing distance from the system; the projected line is unaffected by the environment and removes the need to mark a distance line on the ground manually.
s12: during the operation of the whole system, the camera is in an open state, the state of a testee is monitored in real time, whether the state meets the requirement of vision test or not is judged, the distance of the testee is closely related to the accuracy of vision, the camera captures an image of a ground projection line area, a special deep neural network algorithm is designed to judge whether the testee stands at the edge of a projection light, if the distance exceeds or is too far away from the projection light, a prompt sound is sent out by the system to remind the testee of adjusting the standing position so as to meet the standard requirement. Resnet-18 deep nerves are used in the present inventionThe network identifies whether the testee is standing in the light. Firstly, detecting a projection straight line by adopting image colors and Hough transformation, and deducting an image in a straight line area according to a proportion and sending the image into a network, wherein the network outputs a vector VBody=[Pb,xb,yb,wb,hb]In which P isbIs probability, which represents the probability of whether a person is in the current projection line area, if the probability is larger, it represents that a person wants to perform vision measurement at the position, xb,yb,wb,hbRespectively representing the coordinates of the center point and the length and width of the frame, the position of the tester in the projected line area is determined by the network.
S2: after the human body is detected, determining which eye of the subject is under test, further comprising:
S21: the region of interest of the head is cropped from the body bounding box using the head-to-body ratio, and a ResNet-18 deep convolutional network locates 7 facial feature points: the two eye-corner points of each of the left and right eyes, the two corner points of the mouth, and the nose center point. The network predicts the positions of all feature points whether or not the eyes are occluded.
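The head crop in S21 can be sketched as below. The head-to-body ratio and padding values are illustrative assumptions; the patent only states that a head-to-body proportion is used.

```python
def head_roi(body_box, head_ratio=0.15, pad=0.2):
    """Crop a head region of interest from a body bounding box.

    body_box: (x, y, w, h), with (x, y) the top-left corner in pixels.
    head_ratio: assumed fraction of the body height occupied by the head.
    pad: extra margin so the whole head stays inside the crop.
    Returns (x, y, w, h) of the head ROI.
    """
    x, y, w, h = body_box
    head_h = h * head_ratio * (1 + pad)   # head sits at the top of the box
    head_w = w * (1 + pad)
    return (x - w * pad / 2, y, head_w, head_h)
```

The resulting ROI is what gets resized and fed to the 7-point landmark network.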
S22: according to the characteristic points of left and right eyes, the interested regions of the left and right eyes are intercepted and sent to a left and right eye shielding network, the ResNet-18 deep convolution trunk network is adopted in the invention, and V is outputeye=[PLeft,PRight,PGlass]In which P isLeftIndicating the probability that the left eye is open, PRightIndicating the probability that the right eye is open. If the judgment threshold for the open and closed eyes is set to 0.5 and the probability less than the threshold indicates that the eyes are blocked, it is judged that the other eyes are under test. PGlassIs the probability of wearing glasses and is used for indicating whether the testee wears the glasses to perform vision test. The system can send out voice prompt according to the shielding condition of the head and eyes of the testee if the testee is identified not to be shielded according to the regulation.
S3: after the subject finishes preparation, the vision testing phase begins; this step further comprises:
S31: the system displays the letter "E" on the screen in different orientations, from large to small in turn, with the orientation at each size level appearing at random. Meanwhile the color of the screen border changes, indicating that the system is waiting for the subject to make a recognition gesture; during this period the system analyzes the subject's gestures in real time through the camera. For testing the right eye, five arm postures are predefined: a preparation gesture (fig. 4(a)), a downward gesture (fig. 4(b)), a leftward gesture (fig. 4(c)), a rightward gesture (fig. 4(d)), and an upward gesture (fig. 4(e)). The subject normally holds the preparation gesture (fig. 4(a)); the gestures for testing the left eye are the horizontal mirror images of those for the right eye. Only when a letter "E" must be judged does the subject make one of the four directional gestures; at all other times the arms stay in the preparation posture. The gesture network outputs V_Gesture = [P_Ready, P_down, P_left, P_right, P_up], the occurrence probabilities of the five arm postures. During the test, the subject's arm posture must be held for a certain time; once recognition completes, the host lights the screen border in another color to signal that the current letter is finished, and the subject returns to the preparation posture to wait for the next letter. If all five probabilities fall below a threshold, the subject has made an invalid gesture and the system issues a voice prompt. The system automatically records the subject's recognition accuracy on letters of each size.
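The gesture decision, including the hold-for-some-time requirement, can be sketched as follows. The confidence threshold and the three-frame hold are assumptions; the patent only says a threshold and a holding time are used.

```python
def classify_gesture(v_gesture, thresh=0.6):
    """Map V_Gesture = [P_Ready, P_down, P_left, P_right, P_up] to a label;
    'invalid' when no posture is confident enough (triggers a voice prompt)."""
    labels = ["ready", "down", "left", "right", "up"]
    best = max(range(len(labels)), key=lambda i: v_gesture[i])
    return labels[best] if v_gesture[best] >= thresh else "invalid"

def held_gesture(frames, hold=3):
    """Accept a directional answer only if held for `hold` consecutive frames."""
    streak, last = 0, None
    for v in frames:
        g = classify_gesture(v)
        streak = streak + 1 if g == last else 1
        last = g
        if g not in ("ready", "invalid") and streak >= hold:
            return g
    return None  # no stable directional gesture in this window
```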
S32: when the system detects that the accuracy of the testee is lower than a certain threshold value on a certain letter size level, or all the letters on the certain level are tested, a prompt tone is sent out to inform the testee that the current left eye (right eye) test is finished. The tested person goes to the letter E at a certain level, recognition errors are continuously made for several times, the system judges the vision level of the tested person, the test is stopped, and the voice prompt is used for testing the other eye.
S33: the system can also send the test result to a database or personal account numbers of the testee, such as WeChat, and the like according to different customization requirements.
In the invention, deep neural networks perform human body detection, eye-occlusion recognition, and gesture recognition. A network structure based on ResNet-18 is adopted, as shown in fig. 5, with the following parameters:
(1) The first layer is convolution layer conv1: it inputs a three-channel 224×224 image with a 3×3 convolution kernel and stride 1, and outputs a 64-channel feature map.
(2) The second layer is a residual module ResidualBlock _ A _1, and a 64-channel feature map is input and output.
(3) The third layer is a residual module ResidualBlock _ B _1, and a 64-channel feature map is input and a 128-channel feature map is output.
(4) The fourth layer is a residual module ResidualBlock _ A _2, and a 128-channel feature map is input and a 256-channel feature map is output.
(5) The fifth layer is a residual module ResidualBlock_B_2, which inputs and outputs a 256-channel feature map.
(6) The sixth layer is a residual module ResidualBlock _ A _3, and a 256-channel feature map is input and a 512-channel feature map is output.
(7) The seventh layer is a residual module ResidualBlock _ B _3, and a 512-channel feature map is input and output.
(8) The eighth layer is a fully connected layer FC: it inputs a 512-dimensional vector. Depending on the target task, one of 4 different output heads is selected.
When predicting the human body target, a 5-dimensional vector V_Body = [P_b, x_b, y_b, w_b, h_b] is output, indicating the presence probability and position of the human body in step S12;
when the human body feature point positioning target is output, the position coordinates of 7 feature points of the human face are output, the total 14-dimensional feature is obtained, and V isFace=[PLeyeL,PLeyeR,PReyeL,PReyeR,PMouthL,PMouthR,PNose]Respectively representing two eye corner points of the left eye and the right eye, two side points of the mouth andcoordinates of nose center point, where P ═ x, y]The coordinates of the feature points are obtained, and the eye region of interest in step S21 is obtained from the feature points of the corner points of the left and right eyes.
When judging the eye state, a 3-dimensional feature V_eye = [P_Left, P_Right, P_Glass] is output, where P_Left is the probability that the left eye is open, P_Right the probability that the right eye is open, and P_Glass the probability of wearing glasses, used to indicate whether the subject is tested with glasses on. This yields the decision in step S22 as to which eye is being tested.
When recognizing gestures, a 5-dimensional feature V_Gesture = [P_Ready, P_down, P_left, P_right, P_up] is output, where P_Ready is the probability of the preparation posture and P_down, P_left, P_right, P_up the probabilities of the downward, leftward, rightward, and upward gestures, used to judge whether the subject correctly indicated the orientation of the "E" on screen. If all five probabilities are below a threshold, the subject has not made a valid gesture and the system issues a voice prompt.
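The four task heads above share one ResNet-18 backbone. A toy sketch of the head routing on plain lists follows; the output dimensions come from the text, while everything else, including the tiny feature size used in the example, is illustrative rather than the patent's implementation.

```python
# Output dimensionality of each task head, as listed in the description.
HEAD_DIMS = {"body": 5, "face": 14, "eye": 3, "gesture": 5}

def linear_head(feature, weights, bias):
    """A fully connected head: y_i = sum_j W[i][j] * f[j] + b[i]."""
    return [sum(w * f for w, f in zip(row, feature)) + b
            for row, b in zip(weights, bias)]

def forward(feature, task, heads):
    """Route a backbone feature vector through the head chosen for `task`."""
    weights, bias = heads[task]
    out = linear_head(feature, weights, bias)
    assert len(out) == HEAD_DIMS[task]   # each head has a fixed output size
    return out
```

A real implementation would use a deep-learning framework; the point here is only that one shared feature vector feeds four fixed-size output heads.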
The training parameters were as follows:
A dynamic learning rate is used: starting at 0.01, the learning rate is divided by 10 every 10 epochs.
batchsize=32
momentum=0.9
Training samples and target ground-truth values are fed into the network continuously; the parameters of each layer are adjusted by error back-propagation, training iterates until convergence, and the network model is obtained.
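The step schedule described above (start at 0.01, divide by 10 every 10 epochs) is directly computable:

```python
def learning_rate(epoch, base=0.01, drop=0.1, every=10):
    """Dynamic learning rate from the training parameters above:
    base multiplied by drop once per completed 10-epoch period."""
    return base * drop ** (epoch // every)
```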
The self-service vision testing system of the invention works without the guidance of assisting personnel; the key is natural interaction between the equipment and the subject, achieved with several recognition algorithms based on deep neural networks. The laser-projected line reduces the image area that must be searched for the subject and increases the reliability of the body bounding-box detection. Image analysis of the subject's eye region automatically detects which eye is under test and whether glasses are worn; rather than forcing the subject to cover an eye in a prescribed order, the system infers the testing order of the eyes automatically, which greatly improves the naturalness of the interaction. Finally, arm postures are used so that the subject can test with familiar natural motions: the arm motions match those of a manually administered vision test, with only one extra preparation posture defined to separate two consecutive directional answers, making it easy for the system to associate each answer with the current letter and improving recognition accuracy. After the result is obtained, it can be announced on site or sent to the subject's or a guardian's mailbox or WeChat for safekeeping; regular testing builds a history of how the individual's vision changes over time, which is convenient for managing the subject's vision.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (5)
1. A self-service vision testing method based on computer vision, characterized by comprising the following steps:
S1, judging whether the subject is in position;
S2, judging which eye of the subject is under test;
S3, performing the vision test and outputting the result;
the corresponding self-service vision testing system based on computer vision comprises a host control module, a light projection module, a camera module, a display screen module, a voice prompt module, a network storage module, a human body detection module, a facial-feature localization module, an eye state detection module, and a gesture recognition module; the host control module is connected to the light projection module, the camera module, the display screen module, the voice prompt module, and the network storage module; the camera module is connected to the human body detection module, the facial-feature localization module, the eye state detection module, and the gesture recognition module; after the camera module captures images, the human body position, facial features, eye state, and gestures are analyzed and the results are fed back to the host control module.
2. The computer vision based self-service vision testing method of claim 1, wherein said determining whether the subject is already in position comprises the steps of:
s11, projecting the colorful light emitted by the light projection module onto the ground at a preset distance from the vision testing system, and sending a voice prompt from the station to the line edge by the voice prompt module;
S12, the camera module acquires images in real time, the human body detection module crops the image region around the projected ground line, a deep convolutional network recognition algorithm judges whether the subject is standing at the projected line, and a human body bounding box is detected.
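A minimal geometric sketch of the in-position check in S12, assuming the detector returns pixel bounding boxes and the projected line occupies a known image region (the tolerance value and coordinate convention are assumptions):

```python
def foot_on_line(body_box, line_region, tolerance=10):
    """Return True when the bottom edge of the detected human bounding
    box lies on the projected ground-line region.
    Boxes are (x1, y1, x2, y2) in pixels, with y growing downward."""
    bx1, _, bx2, by2 = body_box
    lx1, ly1, lx2, ly2 = line_region
    # The subject must overlap the line horizontally ...
    horizontal_overlap = max(0, min(bx2, lx2) - max(bx1, lx1))
    # ... and the box bottom must sit within the line band vertically.
    on_band = (ly1 - tolerance) <= by2 <= (ly2 + tolerance)
    return horizontal_overlap > 0 and on_band
```

In the claimed system this check would run on the bounding box produced by the deep-convolutional-network detector rather than on hand-made boxes.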
3. The computer vision based self-service vision testing method of claim 2, wherein the light projection module comprises a laser lamp.
4. The computer vision based self-service vision testing method of claim 3, wherein said judging which of the subject's eyes is in the test state comprises the following steps:
S21, a region of interest around the head is cropped from the image according to the position of the human body bounding box, and the facial feature localization module locates the facial feature points using a deep convolutional network; the network predicts all feature points regardless of whether the eyes are occluded;
and S22, regions of interest for the left and right eyes are cropped according to the left-eye and right-eye feature points and fed into the left/right-eye occlusion network in the eye state detection module, which outputs which eye is occluded.
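Steps S21–S22 can be sketched as follows. The landmark format, the ROI margin, and the occlusion network's two per-eye probability outputs are assumptions; the network itself is replaced by its outputs:

```python
def eye_roi(landmarks, margin=0.5):
    """Expand the bounding box of one eye's landmark points (x, y)
    by `margin` of the eye's width/height on every side (S22 crop)."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)


def occlusion_state(left_occluded_p, right_occluded_p, threshold=0.5):
    """Map the occlusion network's per-eye probabilities to the eye
    under test: the uncovered eye is the one being tested."""
    left = left_occluded_p >= threshold
    right = right_occluded_p >= threshold
    if left and not right:
        return "testing right eye"
    if right and not left:
        return "testing left eye"
    return "invalid"  # both covered, or neither
```

The "invalid" branch is where the system would prompt the subject to cover exactly one eye before continuing.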
5. The self-service vision testing method based on computer vision according to claim 4, wherein the performing vision testing and outputting the result comprises the following steps:
S31, the voice prompt module announces entry into the vision test preparation state; letter-'E' optotypes of various sizes and orientations are displayed on the screen in sequence, the camera module captures images, and the images are sent to the deep neural network in the gesture recognition module to classify the subject's gestures;
S32, when the test reaches a given optotype size, if the subject makes a preset number of consecutive recognition errors, the vision level is determined, the test stops, and a voice prompt instructs the subject to test the other eye;
and S33, the vision level is output by voice or sent by SMS or WeChat, and is stored and recorded by the host control module.
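The stopping rule in S32 can be sketched as a simple level-by-level procedure. The level values, orientation set, and error threshold are illustrative; `answer_fn` stands in for the gesture recognition network's classification of the subject's response:

```python
def run_e_chart_test(answer_fn, levels, max_consecutive_errors=2):
    """Step through acuity levels from coarse to fine. Show letter-'E'
    orientations at each level; stop as soon as the subject makes
    `max_consecutive_errors` recognition errors in a row, and report
    the last level that was fully passed (or None)."""
    orientations = ["up", "down", "left", "right"]
    passed = None
    for level in levels:
        errors = 0
        for true_orientation in orientations:
            if answer_fn(level, true_orientation) == true_orientation:
                errors = 0
            else:
                errors += 1
                if errors >= max_consecutive_errors:
                    return passed  # vision level reached before failure
        passed = level
    return passed


# A subject who reads everything up to level 4.9 but nothing smaller:
result = run_e_chart_test(
    lambda level, o: o if level <= 4.9 else "unknown",
    levels=[4.5, 4.7, 4.9, 5.0, 5.1],
)
```

A real implementation would randomize the orientation sequence and repeat the whole procedure for the second eye once the first finishes.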
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110503183.9A CN113288038A (en) | 2021-05-10 | 2021-05-10 | Self-service vision testing method based on computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113288038A true CN113288038A (en) | 2021-08-24 |
Family
ID=77321075
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110503183.9A Pending CN113288038A (en) | 2021-05-10 | 2021-05-10 | Self-service vision testing method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113288038A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106821296A (en) * | 2016-11-11 | 2017-06-13 | 奇酷互联网络科技(深圳)有限公司 | Eyesight test method, device and terminal device |
CN109157186A (en) * | 2018-10-25 | 2019-01-08 | 武汉目明乐视健康科技有限公司 | Unmanned self-service vision monitoring instrument |
CN109924941A (en) * | 2019-01-22 | 2019-06-25 | 深圳市聚派乐品科技有限公司 | A kind of automatic carry out data collection and the quick vision drop method of analysis |
CN111419168A (en) * | 2020-04-14 | 2020-07-17 | 上海美沃精密仪器股份有限公司 | Vision screening method, terminal, device and storage medium thereof |
CN111700583A (en) * | 2020-05-23 | 2020-09-25 | 福建生物工程职业技术学院 | Indoor shared self-service vision detection system and detection method thereof |
CN112617740A (en) * | 2020-12-28 | 2021-04-09 | 浙江省桐庐莪山工业总公司 | Vision detection method, system and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||