CN111700583A - Indoor shared self-service vision detection system and detection method thereof - Google Patents

Indoor shared self-service vision detection system and detection method thereof

Info

Publication number
CN111700583A
Authority
CN
China
Prior art keywords
gesture
detection
level
tester
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010445925.2A
Other languages
Chinese (zh)
Other versions
CN111700583B (en)
Inventor
李昌锋
曾燕茹
林蔚
童文琴
张星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Vocational College Of Bioengineering
Original Assignee
Fujian Vocational College Of Bioengineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Vocational College Of Bioengineering filed Critical Fujian Vocational College Of Bioengineering
Priority to CN202010445925.2A
Publication of CN111700583A
Application granted
Publication of CN111700583B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028: Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032: Devices for presenting test symbols or characters, e.g. test chart projectors
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016: Operational features thereof
    • A61B 3/0041: Operational features thereof characterised by display arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, with arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0029: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, with arrangements or provisions for transferring data to distant stations, e.g. from a sensing device, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H 40/63: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an indoor shared self-service vision detection system and a detection method thereof. Right-eye vision detection is performed first, with the right-eye vision level retrieved by dichotomy; the specific process is: when the tester sees the optotype on the display screen clearly, he or she makes a gesture movement; the host invokes a gesture movement detection algorithm based on the frame difference method to acquire the single gesture information and judge its direction; whether the tester can see the optotype of that level clearly is then judged according to the single-level optotype recognition judgment rule, and the dichotomy is invoked cyclically to generate the optotype of the next level, until the minimum level the tester can see clearly is detected. The left-eye vision test is completed by repeating this process. The vision detection result can be stored on the cloud server for the tester to retrieve and view at any time. The vision detection process of the invention is convenient and fast, the detection efficiency is high, and self-service vision detection and regional shared detection can be realized.

Description

Indoor shared self-service vision detection system and detection method thereof
Technical Field
The invention relates to the technical field of vision detection, in particular to an indoor shared self-service vision detection system and a detection method thereof.
Background
The traditional vision detection mode, frequently encountered in daily life, is that the examinee, under the guidance of a doctor and following an indication rod, distinguishes the optotypes on a visual chart in sequence, and the examinee's vision level is finally determined from the values of the optotypes identified. One of the most important characteristics of this mode is that the whole test process must be controlled by the doctor.
That is, it requires a tester, a doctor, a pointer stick and an eye chart: the tester, under the doctor's direction, identifies the direction of the optotype on the eye chart indicated by the pointer stick. However, the result is influenced by many factors, for example the inspection conditions, method and speed. Moreover, the process depends entirely on the level and state of the medical staff, consumes labor cost, and medical staff working long hours at a testing post are prone to fatigue, which may lead to erroneous detection results.
Another mode uses modern electronic projection equipment in place of the traditional visual chart, presenting the images required for detection more clearly to the examinee by technical means to complete the vision detection process. In existing electronic projection equipment, the host randomly displays "E"-shaped characters, the examinee answers the direction of each "E" through a remote-control answering device, and the host adjusts the character size according to the answers, thereby determining the smallest characters the examinee can see clearly and achieving the purpose of testing eyesight.
Electronic stand-alone vision detection systems, controlled throughout by an embedded single-chip microcomputer and allowing a single person to complete the test independently, mostly use an infrared remote controller to answer the optotype direction, which makes shared detection difficult to realize. Moreover, they lack human-computer interaction, so operation is basically completed under the guidance of medical staff; and the vision data are stored on paper or on a local host without cloud storage, which can lead to problems such as loss of test data and difficulty of query.
In summary, the prior art has the following problems to be solved:
(1) vision detection services are usually only available at places such as ophthalmic institutions and glasses shops; for children and teenagers whose vision changes rapidly, conventional vision detection services can hardly satisfy the need for regular detection;
(2) traditional detection methods and modes such as electronic vision detectors lack human-computer interaction, detection must be completed with the assistance of medical personnel, and self-service detection cannot be realized. The present scheme proposes using a high-definition camera to collect dynamic human gesture information in place of a traditional remote controller or manual assistance, realizing self-service vision detection and improving detection efficiency;
(3) storage of vision data: most prior art is only responsible for measuring vision and has not solved the storage of vision data well; the data are easy to lose and difficult to query, and no personal vision file can be formed;
(4) in vision detection using image detection methods, most methods judge the gestures by static gesture recognition, which does not match the daily vision detection mode, easily causes misoperation during detection, and is easily affected by light changes, so the recognition accuracy is low.
Disclosure of Invention
In view of the above, the present invention provides an indoor shared self-service vision detection system and a detection method thereof, which make the vision detection process convenient and fast, give high detection efficiency, and can realize self-service vision detection and regional shared detection.
The invention is realized by adopting the following scheme: an indoor shared self-service vision detection system comprises a detection cabinet, a host control board, a high-definition camera, a touch liquid crystal display screen, a voice broadcaster, a 4G communication module, a height-adjustable seat and a cloud server; the host control board, the high-definition camera, the touch liquid crystal display screen and the voice broadcaster are all arranged in the detection cabinet; the host control board is electrically connected with the high-definition camera and is used for controlling the high-definition camera to start or stop the acquisition of image information; the touch liquid crystal display screen is electrically connected with the host control board and is used for video playing, optotype display and human-computer interaction; the host control board is electrically connected with the voice broadcaster and used for controlling the voice broadcaster to broadcast voice information; the 4G communication module is integrated on the host control board, is electrically connected with the host control board and is used for realizing network data transmission between the host control board and the cloud server.
Further, the invention also provides a detection method of the indoor shared self-service vision detection system, which comprises the following steps:
step S1: the tester clicks the vision detection key on the touch liquid crystal display screen, and the host control board receives the key information and starts the vision detection function;
step S2: starting to play a short video on the touch liquid crystal display screen, and demonstrating a user-defined gesture rule in vision detection;
step S3: after the gesture rule demonstration is finished, the tester sits on the chair in the designated marked area according to the voice prompt of the voice broadcaster, adjusts the sitting posture and the height of the height-adjustable seat, and keeps the required horizontal distance between the line of sight and the optotype E on the touch liquid crystal display screen; the voice broadcaster prompts: cover the left eye first, and begin the vision detection of the right eye;
step S4: the host control board calls a dichotomy to retrieve the vision level:
when the tester sees the optotype on the liquid crystal display screen clearly, he or she makes a gesture movement; the host control board acquires the single gesture information and judges its direction by invoking the gesture movement detection algorithm based on the frame difference method, judges whether the tester can see the optotype of that level clearly according to the single-level optotype recognition judgment rule, and then cyclically invokes the dichotomy to generate the optotype of the next level, until the minimum level the right eye of the tester can see clearly is detected; if gesture information acquisition fails or the operation times out during detection, failure is returned and the vision detection ends;
step S5: the voice broadcaster prompts: cover the right eye, and begin the vision detection of the left eye; step S4 is repeated until the minimum level the left eye of the tester can see clearly is detected.
step S6: after the binocular vision detection is finished, the voice broadcaster prompts: the vision detection is finished, please input personal information on the liquid crystal display screen; the tester inputs personal information including name and mobile phone contact details;
step S7: the host control board transmits the vision test data to the cloud server through the 4G communication module for storage, and the touch display screen displays the vision test result and a two-dimensional code; the voice broadcaster announces the vision result;
step S8: the tester can also scan the two-dimensional code with an external intelligent terminal and, after authorizing login with personal information, obtain the test result and the vision test file.
Further, the specific content of the customized gesture rule in step S2 is: the hand is moved dynamically up, down, left and right, and the following details are specified to improve the accuracy of the frame-difference-based dynamic gesture recognition:
(1) the hand performing the gesture movement is kept raised throughout the detection process and placed in front of and to the side of the face, so as not to occlude the face;
(2) after the optotype appears on the liquid crystal display screen, the tester is required to keep the body and the hand as still as possible until the optotype has been seen clearly and the gesture is about to be made;
(3) when the hand moves up, down, left or right, the movement path should be kept as horizontal or vertical as possible, and the movement span should be as large as possible.
Further, the specific content of invoking the gesture movement detection algorithm based on the frame difference method to perform single gesture information acquisition and direction determination in step S4 is as follows: after the optotype is displayed, the host control board starts the high-definition camera to acquire image information. When the tester, having seen the optotype clearly, begins an up-down or left-right gesture movement, two consecutive frames are acquired and preprocessed by grayscale conversion and Gaussian filtering; the two processed frames are differenced to obtain a new image, which is binarized and dilated; the Shi-Tomasi corner detection algorithm is then used to find all corner points (x_ij, y_ij) with large eigenvalues in the image, where i denotes the i-th acquisition and j the j-th corner point of the i-th acquisition. If corner points exist, the gesture has started to move, and the center point of all corner points of this acquisition is computed as (x̄_i, ȳ_i) = ((1/m_i)·Σ_j x_ij, (1/m_i)·Σ_j y_ij), where m_i is the number of corner points in the i-th acquisition; this point is the center of the gesture contour at one moment of the movement, its coordinates are stored in an array, and the acquisition counter is incremented by 1. The process is repeated until 30 center points have been collected, giving 30 coordinate points on the center path of the contour during the gesture movement. From the 30 center points, the minimum abscissa x_min, maximum abscissa x_max, minimum ordinate y_min and maximum ordinate y_max are computed respectively; the start point coordinates (x_s, y_s) are estimated as the average of the first five center points, and the end point coordinates (x_e, y_e) as the average of the last five center points. The horizontal span L_x = x_max − x_min and the vertical span L_y = y_max − y_min are calculated, together with the horizontal displacement D_x = x_e − x_s and the vertical displacement D_y = y_e − y_s. Finally, the gesture movement direction is judged: if L_y ≥ L_x, the gesture moves in the up-down direction, and D_y > 0 is recognized as downward while D_y ≤ 0 is recognized as upward; if L_x > L_y, the gesture moves in the left-right direction, and D_x > 0 is recognized as leftward while D_x ≤ 0 is recognized as rightward. If gesture acquisition fails 3 times or the operation times out during the whole acquisition process, detection failure is returned; otherwise success and the recognized gesture direction are returned.
Further, the dichotomy in step S4 retrieves the vision level as follows: the current vision level Ec of the tester is searched for among the elements of the optotype array E[14] = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3], specifically by the following steps:
step Sa: initialize the parameters: the optotype start index Low = 0 and end index High = 14 − 1; the vision test success/failure flag is Flag (1 indicates success, 0 indicates failure), with initial state Flag = 1; the current vision level of the tester is Ec = 4.0;
step Sb: if Low ≤ High, take the middle index Mid = ⌊(Low + High)/2⌋ of Low and High, display the optotype E[Mid] on the liquid crystal screen, and invoke the frame-difference gesture movement detection algorithm up to 3 times for this optotype level;
step Sc: check the return value of the frame-difference gesture movement detection algorithm to judge whether the gesture detection process had an operation error or timed out; if the detection process succeeded, execute step Sd; if it failed, return Flag = 0 and end the vision detection;
step Sd: invoke the single-level optotype recognition judgment rule to judge whether the tester can see the optotype of this level clearly. If the tester can see E[Mid] clearly, a smaller optotype level must be searched, so the search region is narrowed by setting Low = Mid + 1 and the current vision level is updated to Ec = E[Mid]; if the tester cannot see E[Mid] clearly, a larger optotype level must be searched, and the search region is likewise narrowed by setting High = Mid − 1;
step Se: repeat steps Sb to Sd until Low > High, at which point the vision level search is complete; return the tester's vision level Ec and Flag = 1.
Further, the single-level optotype recognition determination rule in step S4 means that the following rule is adopted as the criterion for whether the tester can see clearly at a given vision level: the optotype of that level is presented for up to 3 acquisitions; if the tester identifies it correctly 2 times in a row, or fails only 1 time out of 3 identifications, the optotype of that level is judged to be seen clearly; if the tester identifies it incorrectly 2 times in a row, or succeeds only 1 time out of 3 identifications, the optotype of that level is judged not to be seen clearly.
Compared with the prior art, the invention has the following beneficial effects:
(1) Applying frame-difference-based gesture movement detection to dynamic gesture recognition in vision detection, in contrast with static gesture recognition methods, eliminates detection errors caused by light changes, superposition of face and hand images and the like, and improves the accuracy of gesture recognition; the algorithm also has a small computational load and can quickly detect the movement contour track of a target in the scene, so the detection speed is higher.
(2) The vision level is retrieved by dichotomy: compared with the common sequential optotype retrieval from 4.0 to 5.3, which needs to test 7 levels on average, the successive-approximation dichotomy shortens this to about 4 levels, which shortens the detection time, greatly improves the detection efficiency and improves the user experience.
(3) Dynamic gesture recognition makes vision detection shared and intelligent, and self-service operation makes vision detection convenient and fast; the cloud service solves the problem that vision data are easy to lose and difficult to query, realizing permanent storage and querying of the vision data at any time.
Drawings
Fig. 1 is a schematic diagram of a configuration of a vision testing hardware system according to an embodiment of the present invention.
Fig. 2 is a main flow chart of a vision testing method according to an embodiment of the present invention.
Fig. 3 is a flowchart of a gesture movement detection algorithm based on a frame difference method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a corner point of a contour of a moment of gesture movement according to an embodiment of the present invention.
FIG. 5 is a track diagram of the contour center points collected during the left-right movement of the gesture according to the embodiment of the present invention.
FIG. 6 is a diagram of a trace of the center point of the contour collected during the up-and-down movement of the gesture according to the embodiment of the present invention.
Fig. 7 is a flowchart of a dichotomy vision level retrieval algorithm according to an embodiment of the invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
This embodiment provides an indoor shared self-service vision detection system; fig. 1 is a schematic diagram of the vision detection hardware system in an embodiment of the invention. The detection system comprises a detection cabinet, a host control board, a high-definition camera, a touch liquid crystal display screen, a voice broadcaster, a 4G communication module, a height-adjustable seat and a cloud server; the host control board, the high-definition camera, the touch liquid crystal display screen and the voice broadcaster are all arranged in the detection cabinet; the host control board is electrically connected with the high-definition camera and is used for controlling the high-definition camera to start or stop the acquisition of image information; the touch liquid crystal display screen is electrically connected with the host control board and is used for video playing, optotype display and human-computer interaction; the host control board is electrically connected with the voice broadcaster and used for controlling the voice broadcaster to broadcast voice information; the 4G communication module is integrated on the host control board, is electrically connected with the host control board and is used for realizing network data transmission between the host control board and the cloud server.
As shown in fig. 2, this embodiment further provides a detection method based on an indoor shared self-service vision detection system, including the following steps:
step S1: a touch key with a text label of vision detection is arranged on the touch liquid crystal display screen, a tester clicks the vision detection key, and the host control panel receives key information and starts a vision detection function;
step S2: starting to play a short video on the touch liquid crystal display screen, and demonstrating a user-defined gesture rule in vision detection;
step S3: after the gesture rule demonstration is finished, the tester sits on the chair in the designated marked area according to the voice prompt of the voice broadcaster, adjusts the sitting posture and the height of the height-adjustable seat, and keeps the horizontal distance between the line of sight and the optotype E on the touch liquid crystal display screen, which is defined as 1 m in this example; the voice broadcaster prompts: cover the left eye first, and begin the vision detection of the right eye;
step S4: the host control board invokes the dichotomy to retrieve the vision level; the specific process is: when the tester sees the optotype on the liquid crystal display screen clearly, he or she makes a gesture movement; the host control board invokes the gesture movement detection algorithm based on the frame difference method to acquire the single gesture information and judge its direction, judges whether the tester can see the optotype of that level clearly according to the single-level optotype recognition judgment rule, and then cyclically invokes the dichotomy to generate the optotype of the next level, until the minimum level the tester can see clearly is detected. If gesture information acquisition fails or the operation times out during detection, failure is returned and the vision detection ends;
step S5: the voice broadcaster prompts: cover the right eye, and begin the vision detection of the left eye; step S4 is repeated until the minimum level the left eye of the tester can see clearly is detected.
step S6: after the binocular vision detection is finished, the voice broadcaster prompts: the vision detection is finished, please input personal information on the liquid crystal display screen. The tester inputs personal information such as name and mobile phone contact details.
step S7: the host control board transmits the vision test data to the cloud server through the 4G communication module for storage (a minimal upload sketch is given after step S8), and the touch display screen displays the vision test result and a two-dimensional code; the voice broadcaster announces the vision result;
step S8: the tester can also scan the two-dimensional code with an external intelligent terminal and, after authorizing login with personal information, obtain the test result and the vision test file.
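Step S7 above only states that the test record travels through the 4G module to the cloud server. As a concrete illustration, the following minimal Python sketch shows one way a host program could post a record to a cloud endpoint; the endpoint URL, JSON field names and 200-OK success convention are assumptions for illustration, not details given by the patent.

```python
import json
import urllib.request

def upload_result(server_url, name, phone, left_eye, right_eye):
    """Post one vision-test record to the cloud server.

    server_url, the JSON field names and the 200-OK convention are
    hypothetical; the patent only states that data travel over the
    4G module to a cloud server."""
    payload = json.dumps({
        "name": name,            # personal info entered in step S6
        "phone": phone,
        "left_eye": left_eye,    # e.g. 4.8
        "right_eye": right_eye,  # e.g. 5.0
    }).encode("utf-8")
    req = urllib.request.Request(
        server_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200  # True if the server accepted the record
```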
In this embodiment, the specific content of the customized gesture rule in step S2 is: the gesture rule of the detection method is consistent with the gesture method of common vision detection, dynamically moving the hand up, down, left and right, which favors the tester's testing experience. However, to make the frame-difference-based dynamic gesture recognition more accurate, the following details need to be specified:
(1) the hand performing the gesture movement is kept raised throughout the detection process and placed in front of and to the side of the face, so as not to occlude the face;
(2) after the optotype appears on the liquid crystal display screen, the tester is required to keep the body and the hand as still as possible until the optotype has been seen clearly and the gesture is about to be made;
(3) when the hand moves up, down, left or right, the movement path should be kept as horizontal or vertical as possible, and the movement span should be as large as possible.
As shown in fig. 3, in this embodiment the gesture movement detection algorithm based on the frame difference method in step S4 is a dynamic gesture information acquisition and recognition algorithm. The specific process is as follows: after the optotype is displayed, the host control board starts the high-definition camera to acquire image information. When the tester, having seen the optotype clearly, begins an up-down or left-right gesture movement, two consecutive frames are acquired and preprocessed by grayscale conversion and Gaussian filtering; the two processed frames are differenced to obtain a new image, which is binarized and dilated; the Shi-Tomasi corner detection algorithm is then used to find all corner points (x_ij, y_ij) with large eigenvalues in the image, where i denotes the i-th acquisition and j the j-th corner point of the i-th acquisition. If corner points exist, the gesture has started to move, and the center point of all corner points of this acquisition is computed as (x̄_i, ȳ_i) = ((1/m_i)·Σ_j x_ij, (1/m_i)·Σ_j y_ij), where m_i is the number of corner points in the i-th acquisition; this point is the center of the gesture contour at one moment of the movement, its coordinates are stored in an array, and the acquisition counter is incremented by 1. Fig. 4 is a schematic diagram of the contour corner points at one moment of the gesture movement in this embodiment. The process is repeated until 30 center points have been collected, giving 30 coordinate points on the center path of the contour during the gesture movement. Fig. 5 shows the track of the 30 contour center points acquired while the gesture moves left and right, and fig. 6 the track acquired while the gesture moves up and down; the large circle marks the start point, the square marks the end point, and the small circles mark intermediate points of the track. The movement direction is then judged by the following calculation: from the 30 center points, compute the minimum abscissa x_min, maximum abscissa x_max, minimum ordinate y_min and maximum ordinate y_max respectively; estimate the start point coordinates (x_s, y_s) as the average of the first five center points and the end point coordinates (x_e, y_e) as the average of the last five center points; compute the horizontal span L_x = x_max − x_min, the vertical span L_y = y_max − y_min, the horizontal displacement D_x = x_e − x_s and the vertical displacement D_y = y_e − y_s. If L_y ≥ L_x, the gesture moves in the up-down direction, and D_y > 0 is recognized as downward while D_y ≤ 0 is recognized as upward; if L_x > L_y, the gesture moves in the left-right direction, and D_x > 0 is recognized as leftward while D_x ≤ 0 is recognized as rightward. If gesture acquisition fails 3 times or the operation times out during the whole acquisition process, detection failure is returned; otherwise success and the recognized gesture direction are returned.
As shown in fig. 7, in this embodiment the dichotomy described in step S4 retrieves the vision level as follows: the current vision level Ec of the tester is searched for among the elements of the optotype array E[14] = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3], specifically by the following steps:
step Sa: initialize the parameters: the optotype start index Low = 0 and end index High = 14 − 1; the vision test success/failure flag is Flag (1 indicates success, 0 indicates failure), with initial state Flag = 1; the current vision level of the tester is Ec = 4.0;
step Sb: if Low ≤ High, take the middle index Mid = ⌊(Low + High)/2⌋ of Low and High, display the optotype E[Mid] on the liquid crystal screen, and invoke the frame-difference gesture movement detection algorithm up to 3 times for this optotype level;
step Sc: check the return value of the frame-difference gesture movement detection algorithm to judge whether the gesture detection process had an operation error or timed out; if the detection process succeeded, execute step Sd; if it failed, return Flag = 0 and end the vision detection;
step Sd: invoke the single-level optotype recognition judgment rule to judge whether the tester can see the optotype of this level clearly. If the tester can see E[Mid] clearly, a smaller optotype level must be searched, so the search region is narrowed by setting Low = Mid + 1 and the current vision level is updated to Ec = E[Mid]; if the tester cannot see E[Mid] clearly, a larger optotype level must be searched, and the search region is likewise narrowed by setting High = Mid − 1;
step Se: repeat steps Sb to Sd until Low > High, at which point the vision level search is complete; return the tester's vision level Ec and Flag = 1.
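Steps Sa to Se above are a standard binary search over the optotype array; a minimal Python sketch follows. The `can_see(mid)` callback stands in for step Sb's display-plus-gesture trials, and its True/False/None return convention (None meaning acquisition failure or timeout) is an assumption introduced here.

```python
E = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6,
     4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3]     # optotype array E[14]

def retrieve_vision_level(can_see):
    """Dichotomy retrieval of the vision level Ec (steps Sa-Se).

    can_see(mid) displays E[mid] and runs up to 3 gesture trials,
    returning True/False for seen/not seen, or None on acquisition
    failure or timeout (then Flag = 0)."""
    low, high = 0, len(E) - 1               # Sa: Low = 0, High = 14 - 1
    ec, flag = E[0], 1                      # Sa: Ec = 4.0, Flag = 1
    while low <= high:                      # Sb
        mid = (low + high) // 2             # Mid = floor((Low + High) / 2)
        seen = can_see(mid)
        if seen is None:                    # Sc: error or timeout
            return ec, 0
        if seen:                            # Sd: seen clearly
            ec = E[mid]                     # update current level
            low = mid + 1                   # search smaller optotypes
        else:
            high = mid - 1                  # search larger optotypes
    return ec, flag                         # Se: Low > High, search done
```

Because the loop halves the 14-element range each pass, at most 4 optotype levels are displayed, which is the source of the efficiency gain claimed for the dichotomy.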
In this embodiment, the single-level optotype recognition determination rule in step S4 means that the following rule is adopted as the criterion for whether the tester can see clearly at a given vision level: the optotype of that level is presented for up to 3 acquisitions; if the tester identifies it correctly 2 times in a row, or fails only 1 time out of 3 identifications, the optotype of that level is judged to be seen clearly; if the tester identifies it incorrectly 2 times in a row, or succeeds only 1 time out of 3 identifications, the optotype of that level is judged not to be seen clearly.
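The judgment rule reduces to a best-of-three vote; the sketch below encodes it under the assumption (one natural reading of the rule) that two identical outcomes in a row settle the level without a third acquisition. `trial(k)` is a hypothetical callback that displays the optotype and reports whether the k-th gesture answer matched its direction.

```python
def level_seen_clearly(trial):
    """Single-level optotype judgment (at most 3 acquisitions).

    trial(k) returns True if the k-th gesture answer matches the
    optotype direction. Two consecutive identical outcomes decide;
    otherwise the third trial breaks the tie (2-of-3 majority)."""
    first, second = trial(0), trial(1)
    if first == second:           # 2 correct in a row -> visible,
        return first              # 2 wrong in a row -> not visible
    return trial(2)               # mixed result: third acquisition decides
```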
Preferably, the basic principle of this embodiment is as follows: after the tester sees the optotype clearly, he or she makes a gesture movement, and the frame difference method is applied to dynamic gesture recognition in the vision detection scene. Since the hand target moves during detection, its image occupies different positions in different frames; a difference operation is performed on two temporally consecutive frames, corresponding pixels of the two frames are subtracted, and the absolute value of the gray difference is taken; where this absolute value exceeds a certain threshold, the pixel is judged to belong to a moving target, which realizes tracking of the target's movement track, so the direction of the tester's gesture movement can be recognized. Finally, the detection of the vision level is realized through the single-level optotype recognition judgment rule and dichotomy retrieval.
Preferably, in the detection method based on the indoor shared self-service vision detection system of this embodiment, when the tester makes a gesture movement after seeing the optotype on the display screen clearly, the host control board invokes the frame-difference gesture movement detection algorithm to recognize that a gesture has started to move, which triggers continuous acquisition of the movement contour track of the hand target, and the direction of the tester's gesture movement is recognized from the track of contour center points. Whether the tester can see the optotype of that level clearly is then judged according to the single-level optotype recognition judgment rule, and the dichotomy retrieval is cyclically invoked to generate the optotype of the next level, until the minimum level the tester can see clearly is detected, completing the vision test. Finally, the vision detection result can be stored on the cloud server through the 4G communication module, and the tester can retrieve and view it at any time.
Preferably, in this embodiment, to cope with sudden destructive behavior toward the shared equipment, the detection device has protective measures: the whole detection cabinet is made of steel, so the cabinet is hard and not easy to damage, and toughened glass is arranged on the outer side of the liquid crystal display screen to prevent malicious damage. The high-definition camera is arranged at the top of the detection cabinet and used for capturing dynamic gesture movement information, with at least 5 megapixels and a resolution of 1920 x 1080. The touch liquid crystal display screen is arranged below the camera, with a screen size of 22 inches, a screen resolution of 1920 x 1080 and a display brightness of at least 500 cd/m²; the outer side of the screen is protected by toughened glass. The screen uses capacitive touch control with high sensitivity. A voice broadcaster with clear sound is arranged below the liquid crystal display, mainly for voice prompts, improving the convenience of human-computer interaction. The host is located at the bottom of the detection cabinet; it is the control core of the whole detection system, responsible for starting and stopping the acquisition of camera image information, the broadcasting of voice information, the processing and judgment of gesture information, the control of the vision test flow, and the communication and transmission of data over the network. The 4G communication module is arranged in the host control board and used for connecting to the network to realize remote data transmission and storage. A height-adjustable seat is arranged 1 m from the liquid crystal display screen; the seat improves testing comfort and relieves the fatigue of the tested person on the one hand, and on the other hand its height is adjusted to keep the line of sight of the human eyes level with the optotypes on the liquid crystal display screen. The cloud server is used for permanently storing vision data; after scanning the two-dimensional code with WeChat and authorizing access, a tester obtains the personal vision data test file, making it convenient to check the vision record at any time.
In particular, in this embodiment, applying frame-difference-based gesture movement detection to dynamic gesture recognition in vision detection, in contrast with static gesture recognition methods, eliminates detection errors caused by light changes, superposition of face and hand images and the like, improves the accuracy of gesture recognition, keeps the computational load of the algorithm small, and quickly detects the motion contour track of the target in the scene, so the detection speed is higher. The vision level is retrieved by dichotomy: compared with the common sequential optotype retrieval from 4.0 to 5.3, which needs to test 7 levels on average, the successive-approximation dichotomy shortens this to about 4 levels, which shortens the detection time, greatly improves the detection efficiency and improves the user experience. Based on these well-performing detection algorithms, the invention realizes shared and intelligent vision detection with convenient and fast self-service operation; the cloud service solves the problem that vision data are easy to lose and difficult to query, realizing permanent storage and querying of the vision data at any time.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (6)

1. An indoor shared self-service vision detection system, characterized in that: it comprises a detection cabinet, a host control board, a high-definition camera, a touch liquid crystal display screen, a voice broadcaster, a 4G communication module, a height-adjustable seat and a cloud server; the host control board, the high-definition camera, the touch liquid crystal display screen and the voice broadcaster are all arranged in the detection cabinet; the host control board is electrically connected with the high-definition camera and is used for controlling the high-definition camera to start or stop the acquisition of image information; the touch liquid crystal display screen is electrically connected with the host control board and is used for video playing, optotype display and human-computer interaction; the host control board is electrically connected with the voice broadcaster and used for controlling the voice broadcaster to broadcast voice information; the 4G communication module is integrated on the host control board, is electrically connected with the host control board and is used for realizing network data transmission between the host control board and the cloud server.
2. A detection method based on the indoor shared self-service vision detection system of claim 1, characterized in that: the method comprises the following steps:
step S1: the tester clicks the vision detection key on the touch liquid crystal display screen, and the host control board receives the key information and starts the vision detection function;
step S2: starting to play a short video on the touch liquid crystal display screen, and demonstrating a user-defined gesture rule in vision detection;
step S3: after the gesture rule demonstration is finished, the tester sits on the chair in the designated marked area according to the voice prompt of the voice broadcaster, adjusts the sitting posture and the height of the height-adjustable seat, and keeps the required horizontal distance between the line of sight and the optotype E on the touch liquid crystal display screen; the voice broadcaster prompts: cover the left eye first, and begin the vision detection of the right eye;
step S4: the host control board calls a dichotomy to retrieve the vision level:
when the tester sees the optotype on the liquid crystal display screen clearly, he or she makes a gesture movement; the host control board acquires the single gesture information and judges its direction by invoking the gesture movement detection algorithm based on the frame difference method, judges whether the tester can see the optotype of that level clearly according to the single-level optotype recognition judgment rule, and then cyclically invokes the dichotomy to generate the optotype of the next level, until the minimum level the right eye of the tester can see clearly is detected; if gesture information acquisition fails or the operation times out during detection, failure is returned and the vision detection ends;
step S5: the voice broadcaster prompts: cover the right eye, and begin the vision detection of the left eye; step S4 is repeated until the minimum level the left eye of the tester can see clearly is detected;
step S6: after the binocular vision detection is finished, the voice broadcaster prompts: the vision detection is finished, please input personal information on the liquid crystal display screen; the tester inputs personal information including name and mobile phone contact details;
step S7: the host control board transmits the vision test data to the cloud server through the 4G communication module for storage, and the touch display screen displays the vision test result and a two-dimensional code; the voice broadcaster announces the vision result;
step S8: the tester can also scan the two-dimensional code with an external intelligent terminal and, after authorizing login with personal information, obtain the test result and the vision test file.
3. The detection method of the indoor shared self-service vision detection system according to claim 2, characterized in that: the specific content of the customized gesture rule in step S2 is as follows: the hand is moved dynamically up, down, left and right, and the following details are specified to improve the accuracy of the frame-difference-based dynamic gesture recognition:
(1) the hand performing the gesture movement is kept raised throughout the detection process and placed in front of and to the side of the face, so as not to occlude the face;
(2) after the optotype appears on the liquid crystal display screen, the tester is required to keep the body and the hand as still as possible until the optotype has been seen clearly and the gesture is about to be made;
(3) when the hand moves up, down, left or right, the movement path should be kept as horizontal or vertical as possible, and the movement span should be as large as possible.
4. The detection method of the indoor shared self-service vision detection system according to claim 2, characterized in that: the specific content of invoking the gesture movement detection algorithm based on the frame difference method to perform single gesture information acquisition and direction judgment in step S4 is as follows: after the optotype is displayed, the host control board starts the high-definition camera to collect image information; when the tester, having seen the optotype clearly, begins an up-down or left-right gesture movement, two consecutive frames are collected and preprocessed by grayscale conversion and Gaussian filtering; the two processed frames are differenced to obtain a new image, which is binarized and dilated; the Shi-Tomasi corner detection algorithm is used to find all corner points (x_ij, y_ij) with large eigenvalues in the image, where i denotes the i-th acquisition and j the j-th corner point of the i-th acquisition; if corner points exist, the gesture has started to move, and the center point of all corner points of this acquisition is computed as (x̄_i, ȳ_i) = ((1/m_i)·Σ_j x_ij, (1/m_i)·Σ_j y_ij), where m_i is the number of corner points in the i-th acquisition; this point is the center of the gesture contour at one moment of the movement, its coordinates are stored in an array, and the acquisition counter is incremented by 1; the process is repeated until 30 center points have been collected, giving 30 coordinate points on the center path of the contour during the gesture movement; the minimum abscissa x_min, maximum abscissa x_max, minimum ordinate y_min and maximum ordinate y_max of the 30 center points are computed respectively; the start point coordinates (x_s, y_s) are estimated as the average of the first five center points, and the end point coordinates (x_e, y_e) as the average of the last five center points; the horizontal span L_x = x_max − x_min and the vertical span L_y = y_max − y_min are calculated, together with the horizontal displacement D_x = x_e − x_s and the vertical displacement D_y = y_e − y_s; finally, the gesture movement direction is judged: if L_y ≥ L_x, the gesture moves in the up-down direction, and D_y > 0 is recognized as downward while D_y ≤ 0 is recognized as upward; if L_x > L_y, the gesture moves in the left-right direction, and D_x > 0 is recognized as leftward while D_x ≤ 0 is recognized as rightward; if gesture acquisition fails 3 times or the operation times out during the whole acquisition process, detection failure is returned, otherwise success and the recognized gesture direction are returned.
5. The detection method of the indoor shared self-service vision detection system according to claim 2, characterized in that: the dichotomy in step S4 retrieves the vision level as follows: the current vision level Ec of the tester is searched for among the elements of the optotype array E[14] = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3], specifically by the following steps:
step Sa: initialize the parameters: the optotype start index Low = 0 and end index High = 14 − 1; the vision test success/failure flag is Flag (1 indicates success, 0 indicates failure), with initial state Flag = 1; the current vision level of the tester is Ec = 4.0;
step Sb: if Low ≤ High, take the middle index Mid = ⌊(Low + High)/2⌋ of Low and High, display the optotype E[Mid] on the liquid crystal screen, and invoke the frame-difference gesture movement detection algorithm up to 3 times for this optotype level;
step Sc: check the return value of the frame-difference gesture movement detection algorithm to judge whether the gesture detection process had an operation error or timed out; if the detection process succeeded, execute step Sd; if it failed, return Flag = 0 and end the vision detection;
step Sd: invoke the single-level optotype recognition judgment rule to judge whether the tester can see the optotype of this level clearly; if the tester can see E[Mid] clearly, a smaller optotype level must be searched, so the search region is narrowed by setting Low = Mid + 1 and the current vision level is updated to Ec = E[Mid]; if the tester cannot see E[Mid] clearly, a larger optotype level must be searched, and the search region is likewise narrowed by setting High = Mid − 1;
step Se: repeat steps Sb to Sd until Low > High, at which point the vision level search is complete; return the tester's vision level Ec and Flag = 1.
6. The detection method of the indoor shared self-service vision detection system according to claim 2, characterized in that: the single-level optotype recognition determination rule in step S4 means that the following rule is adopted as the criterion for whether the tester can see clearly at a given vision level: the optotype of that level is presented for up to 3 acquisitions; if the tester identifies it correctly 2 times in a row, or fails only 1 time out of 3 identifications, the optotype of that level is judged to be seen clearly; if the tester identifies it incorrectly 2 times in a row, or succeeds only 1 time out of 3 identifications, the optotype of that level is judged not to be seen clearly.
CN202010445925.2A 2020-05-23 2020-05-23 Detection method of indoor shared self-service vision detection system Active CN111700583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010445925.2A CN111700583B (en) 2020-05-23 2020-05-23 Detection method of indoor shared self-service vision detection system

Publications (2)

Publication Number Publication Date
CN111700583A (en) 2020-09-25
CN111700583B (en) 2023-04-18

Family ID=72537357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010445925.2A Active CN111700583B (en) 2020-05-23 2020-05-23 Detection method of indoor shared self-service vision detection system

Country Status (1)

Country Link
CN (1) CN111700583B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5035500A (en) * 1988-08-12 1991-07-30 Rorabaugh Dale A Automated ocular perimetry, particularly kinetic perimetry
US20120254779A1 (en) * 2011-04-01 2012-10-04 Arthur Austin Ollivierre System and method for displaying objects in a user interface based on a visual acuity of a viewer
WO2016131337A1 (en) * 2015-09-06 2016-08-25 中兴通讯股份有限公司 Method and terminal for detecting vision
CN107007247A (en) * 2017-06-01 2017-08-04 徐仲昭 A kind of interactive vision inspection system and its vision testing method
CN208212008U (en) * 2017-06-09 2018-12-11 重庆师范大学涉外商贸学院 From survey formula vision inspection system
CN110353622A (en) * 2018-10-16 2019-10-22 武汉交通职业学院 A kind of vision testing method and eyesight testing apparatus
CN110123258A (en) * 2019-03-29 2019-08-16 深圳和而泰家居在线网络科技有限公司 Method, apparatus, eyesight detection device and the computer storage medium of sighting target identification
CN110037647A (en) * 2019-04-22 2019-07-23 深圳市聚派乐品科技有限公司 Vision drop method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021248671A1 (en) * 2020-06-12 2021-12-16 海信视像科技股份有限公司 Display device
CN113288038A (en) * 2021-05-10 2021-08-24 杭州电子科技大学 Self-service vision testing method based on computer vision
CN113143194A (en) * 2021-05-24 2021-07-23 张婧怡 Eyesight test method based on mobile terminal, mobile terminal and system
CN113143193A (en) * 2021-05-24 2021-07-23 张婧怡 Intelligent vision testing method, device and system
WO2022247067A1 (en) * 2021-05-24 2022-12-01 弗徕威智能机器人科技(上海)有限公司 Intelligent eye examination method, device and system

Also Published As

Publication number Publication date
CN111700583B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111700583B (en) Detection method of indoor shared self-service vision detection system
CN106598221B (en) 3D direction of visual lines estimation method based on eye critical point detection
KR101499271B1 (en) Unitary vision testing center
US8513055B2 (en) Unitary vision and coordination testing center
KR101455200B1 (en) Learning monitering device and method for monitering of learning
CN111127848A (en) Human body sitting posture detection system and method
CN101453943B (en) Image recording apparatus and image recording method
US8317324B2 (en) Unitary vision and neuro-processing testing center
CN113288044B (en) Dynamic vision testing system and method
CN111344222A (en) Method of performing an eye examination test
CN114931353A (en) Convenient and fast contrast sensitivity detection system
JP3317754B2 (en) Perimeter measurement device
CN114190879A (en) Visual function detection system for amblyopia children based on virtual reality technology
CN113143193A (en) Intelligent vision testing method, device and system
US11823413B2 (en) Eye gaze tracking system, associated methods and computer programs
CN114569056B (en) Eyeball detection and vision simulation device and eyeball detection and vision simulation method
CN115331282A (en) Intelligent vision testing system
JP2002345752A (en) Ophthalmic data transfer storage device
JP2023549865A (en) Method and system for measuring binocular distance for children
TW201902412A (en) Virtual reality eye detection system and eye detection method thereof
CN115315217A (en) Cognitive dysfunction diagnosis device and cognitive dysfunction diagnosis program
CN109875498B (en) Dynamic vision measuring system based on reaction time
CN108495584A (en) For determining oculomotor device and method by tactile interface
CN113397471B (en) Vision data acquisition system based on Internet of things
EP4101367A1 (en) Method and device for determining a visual performance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant