CN117243560A - Perimeter system for visual field detection and method thereof - Google Patents
Perimeter system for visual field detection and method thereof
- Publication number
- CN117243560A (application number CN202310147483.7A)
- Authority
- CN
- China
- Prior art keywords
- module
- unit
- detection
- scene
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B3/024 — Apparatus for testing the eyes; subjective types requiring the active assistance of the patient, for determining the visual field, e.g. perimeter types
- A61B3/0025 — Operational features characterised by electronic signal processing, e.g. eye models
- A61B3/0041 — Operational features characterised by display arrangements
- A61B3/028 — Subjective types for testing visual acuity or for determination of refraction, e.g. phoropters
- A61B3/04 — Trial frames; sets of lenses for use therewith
- A61B3/113 — Objective types, independent of the patient's perceptions or reactions, for determining or recording eye movement
- A61B3/145 — Arrangements specially adapted for eye photography by video means
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing
- G16H50/30 — ICT specially adapted for medical diagnosis; calculating health indices; individual health risk assessment
Abstract
The invention relates to the technical field of vision detection, and discloses a perimeter system for visual field detection and a method thereof. The perimeter system for visual field detection comprises: a testing system comprising an eyeball information acquisition module, an insert delivery module, a scene delivery module and an eyepiece adjustment module; and a man-machine interaction system comprising a response module. The invention adopts virtual visual field construction and imaging means, determines the eyeball position information of the subject by eyeball tracking calculation, and realizes refractive correction, visual axis correction and vision detection of the subject during the detection process. According to the detection requirements, it forms virtual scenes whose background and stimulus-site content, brightness, colour and position are all adjustable, so that the system is suitable for different types of visual field detection requirements, facilitates different types of vision detection, improves the applicability of detection, is convenient for the subject, and improves the detection accuracy.
Description
Technical Field
The invention relates to the technical field of vision detection, and in particular to a perimeter system for visual field detection and a method thereof.
Background
In eye vision detection, a detection instrument is used to perform a series of visual performance tests such as refraction. When refractive correction is performed, it can only be applied within the 30-degree visual field range, and no lens is added beyond the 30-degree range, because the lens edge and the lens frame interfere with the visual field and would significantly reduce the accuracy of visual field detection beyond 30 degrees. Within the 30-degree working range, a refractive correction of 1/XD is preset, and an over-corrected subject is expected to compensate to some extent by changing the refractive power of the crystalline lens; however, this does not account for a crystalline lens that is diseased or unable to compensate, which affects the accuracy of the visual field examination.
In visual axis detection, fixation deviation is the most common cause of inaccurate visual field detection: a subject finds it difficult to maintain fixation throughout the test, and when the subject searches for a stimulus site by rotating the eyeball, the accuracy of visual field detection is seriously compromised; this remains the biggest problem that visual field detection has to solve. A conventional perimeter relies on a fixation monitoring system to avoid inaccurate examinations caused by fixation deviation, but in practice the patient's fixation point still moves towards the position of the stimulus site, and such a system is almost unusable for patients with low central vision, unstable pupils or nystagmus.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a perimeter system for visual field detection and a method thereof.
The invention is realized by adopting the following technical scheme: a perimeter system for visual field detection, comprising:
the testing system comprises an eyeball information acquisition module, an insert delivery module, a scene delivery module and an eyepiece adjustment module;
the man-machine interaction system comprises a response module;
the information processing system comprises a data interaction module, a storage module, a data classification and identification module, a scene establishment module, an eyeball tracking calculation module, a detection control module, a data integration module and a data transmission module;
the data interaction module and the storage module are connected with the eyeball information acquisition module, the insert delivery module, the scene delivery module, the eyepiece adjustment module, the response module, the data classification and identification module, the scene establishment module, the eyeball tracking calculation module, the detection control module, the data integration module and the data transmission module; the data classification and identification module is connected with the scene establishment module, the eyeball tracking calculation module, the detection control module and the data integration module; the scene establishment module is connected with the eyeball tracking calculation module, the detection control module and the data integration module; the eyeball tracking calculation module is connected with the detection control module and the data integration module; the data integration module is connected with the detection control module and the data transmission module.
As a further improvement of the scheme, the data classification and identification module comprises a data classification unit connected with the data interaction module, and a construction instruction identification unit, a response instruction identification unit and a detection instruction identification unit which are connected with the data classification unit; the response instruction identification unit and the detection instruction identification unit are connected with the storage module, the eyeball tracking calculation module, the detection control module and the scene establishment module; the construction instruction identification unit is connected with the storage module and the scene establishment module;
the data classification unit is used for classifying the information received by the data interaction module, the construction instruction identification unit is used for identifying the construction instructions classified by the data classification unit, the response instruction identification unit is used for identifying the response content classified by the data classification unit, and the detection instruction identification unit is used for identifying the detection instructions classified by the data classification unit.
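The classification-and-routing behaviour described above can be sketched as follows. This is a hedged illustration: the `Instruction` type, the `kind` values and the handler names are assumptions for the sketch, not interfaces named by the patent.

```python
# Hypothetical sketch of the data classification unit: an incoming message is
# tagged with a kind and routed to the matching identification unit.
from dataclasses import dataclass

@dataclass
class Instruction:
    kind: str      # "construction", "response", or "detection" (assumed tags)
    payload: dict  # instruction-specific parameters

def route(instr: Instruction) -> str:
    """Return the name of the identification unit that should handle instr."""
    handlers = {
        "construction": "construction_instruction_identification_unit",
        "response": "response_instruction_identification_unit",
        "detection": "detection_instruction_identification_unit",
    }
    if instr.kind not in handlers:
        raise ValueError(f"unknown instruction kind: {instr.kind}")
    return handlers[instr.kind]
```

In a real system each handler name would map to a callable; a string return keeps the sketch minimal.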
As a further improvement of the above scheme, the scene establishment module comprises a scene construction instruction receiving unit, a scene construction unit, a scene spot construction unit and a scene model integration unit, wherein the scene construction instruction receiving unit is connected with the data classification and identification module, the scene construction unit and the scene spot construction unit; the scene model integration unit is connected with the data integration module, the storage module, the data classification and identification module, the eyeball tracking calculation module, the detection control module, the scene construction unit and the scene spot construction unit;
the scene construction instruction receiving unit is used for receiving the scene construction instruction content, the scene construction unit is used for constructing the scene content for detection, the scene spot construction unit is used for constructing the stimulus-site spot content for detection, and the scene model integration unit integrates the scene content and the stimulus-site spot content to form the detection scene content.
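A minimal sketch of what the scene model integration unit might produce: a detection scene is the background content merged with a list of stimulus-site spots. The field names and units (`position` in degrees from fixation, `luminance` in cd/m²) are assumptions for illustration; the patent does not specify these data structures.

```python
# Illustrative scene model: background content plus stimulus-site spots.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Spot:
    position: Tuple[float, float]  # degrees from fixation (x, y) - assumed unit
    luminance: float               # cd/m^2 - assumed unit
    color: str = "white"

@dataclass
class DetectionScene:
    background_luminance: float
    background_color: str
    spots: List[Spot] = field(default_factory=list)

def integrate_scene(background_luminance: float, background_color: str,
                    spots: List[Spot]) -> DetectionScene:
    """Combine detection background content with stimulus-site spot content."""
    return DetectionScene(background_luminance, background_color, list(spots))
```

Because every attribute is a plain value, the same structure supports the adjustable background and spot content, brightness, colour and position described in the text.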
As a further improvement of the above scheme, the eyeball tracking calculation module comprises an eyeball position calculation unit, an eyeball visual axis calculation unit and a pupil tracking calculation unit, wherein the eyeball position calculation unit is connected with the data interaction module, the data classification and identification module, the scene establishment module, the detection control module, the eyeball visual axis calculation unit and the pupil tracking calculation unit; the eyeball visual axis calculation unit and the pupil tracking calculation unit are connected with the storage module, the data classification and identification module, the scene establishment module and the detection control module;
the eyeball position calculation unit is used for calculating the eyeball information of the subject, the eyeball visual axis calculation unit is used for calculating the eyeball visual axis information of the subject, and the pupil tracking calculation unit is used for calculating the pupil information of the subject.
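One common pupil-localisation step that a pupil tracking calculation unit could use: in an infrared eye image the pupil is typically the darkest region, so its centre can be estimated as the centroid of pixels below a brightness threshold. This is a generic sketch of that well-known approach, not the patent's specific algorithm; the threshold value is an assumption.

```python
# Estimate the pupil centre as the centroid of dark pixels in a grayscale image.
import numpy as np

def pupil_center(gray: np.ndarray, threshold: int = 50):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no candidate pupil pixels exist."""
    ys, xs = np.nonzero(gray < threshold)
    if ys.size == 0:
        return None  # no dark region: pupil not found in this frame
    return float(ys.mean()), float(xs.mean())
```

A production tracker would add preprocessing (glint removal, filtering) and fit an ellipse, but the centroid already gives a usable per-frame estimate.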
As a further improvement of the above scheme, the detection control module comprises a vision detection control unit, a visual axis correction control unit and a refraction correction control unit, all of which are connected with the data classification and identification module, the scene establishment module, the eyeball tracking calculation module and the storage module;
the vision detection control unit is used for controlling the vision detection of the subject, the visual axis correction control unit is used for performing visual axis correction control for the subject, and the refraction correction control unit is used for performing refraction correction control for the subject.
As a further improvement of the scheme, the eyeball information acquisition module is used for acquiring the eyeball information of the subject, the insert delivery module is used for managing the placement of vision correction inserts, the scene delivery module is used for delivering the visual field content used for detection, and the eyepiece adjustment module is used for adjusting the position of the eyepiece; the response module is used for the man-machine interaction responses during the detection process.
As a further improvement of the scheme, the data interaction module is used for data interaction among the information processing system, the testing system and the man-machine interaction system; the storage module is used for storing the data produced by these systems during detection; the data classification and identification module is used for classifying and identifying the acquired detection information and the detection instructions; the scene establishment module is used for constructing the detection visual field; the eyeball tracking calculation module is used for calculating eyeball information; the detection control module is used for controlling and adjusting the vision detection; the data integration module is used for integrating the data formed during detection; and the data transmission module is used for transmitting the data generated by the information processing system to the testing system and the man-machine interaction system for the detection operation.
A method of using a perimeter system for visual field detection, comprising the steps of:
S1, constructing the virtual field:
the data interaction module receives a virtual field construction instruction transmitted by the test end; the data classification unit classifies the instruction and passes it to the construction instruction identification unit, which determines the specific information of the virtual field to be constructed; the scene construction instruction receiving unit then receives the construction instruction and sends it to the scene construction unit and the scene spot construction unit, which establish the field content of the virtual field and the stimulus-site spot content; the established content is integrated in the data integration module to form the complete virtual field content, which is sent to the scene delivery module through the data transmission module and the data interaction module; finally, the scene delivery module delivers the virtual field content as the virtual detection image to be examined;
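The flow of step S1 can be sketched end-to-end as a single function. This is a hypothetical sketch: the dictionary keys, stage comments and the `deliver_to` target are placeholders for illustration, not the patent's actual interfaces.

```python
# End-to-end sketch of S1: classify -> identify -> construct -> integrate -> deliver.
def build_virtual_field(instruction: dict) -> dict:
    # data classification unit: only construction instructions reach this path
    assert instruction.get("kind") == "construction"
    # construction instruction identification unit: extract specific parameters
    params = instruction["params"]
    # scene construction unit: field (background) content
    field_content = {"background": params["background"]}
    # scene spot construction unit: stimulus-site spot content
    spot_content = {"spots": params["spots"]}
    # data integration module: merge into the complete virtual field content
    virtual_field = {**field_content, **spot_content}
    # hand off to the scene delivery module for display
    return {"deliver_to": "scene_delivery_module", "scene": virtual_field}
```

Each comment maps one line back to a unit named in the text, which is the point of the sketch rather than any particular data format.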
S2, refraction correction:
on the basis of the virtual field construction of step S1, the data interaction module receives a refraction correction instruction transmitted by the test end; the data classification unit classifies the instruction and passes it to the detection instruction identification unit, which determines the specific refraction correction information; the refraction correction control unit then directs the data integration module to correct, according to the refraction correction instruction content, the virtual field content constructed by the scene construction unit and the scene spot construction unit; meanwhile, the test end uses the response module to obtain the subject's responses and performs refraction correction in combination with the insert delivery module and the eyepiece adjustment module;
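One concrete piece of the refraction-correction step is choosing a physical correction insert. A hedged sketch, assuming trial lenses come in 0.25 D steps over a ±10 D range; both the step size and the range are assumptions, not values stated in the patent.

```python
# Snap a required spherical correction to the nearest available insert lens.
def nearest_trial_lens(required_diopters: float, step: float = 0.25,
                       min_d: float = -10.0, max_d: float = 10.0) -> float:
    """Round the required correction to the nearest lens power in the set,
    clamped to the available range."""
    snapped = round(required_diopters / step) * step
    return max(min_d, min(max_d, snapped))
```

The residual error between the required and snapped power is at most half a step, which is what the virtual image adjustment and eyepiece position adjustment can then absorb.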
S3, visual axis correction:
on the basis of the refraction correction of step S2, the visual axis correction control unit extracts, according to the received visual axis correction instruction, the eyeball information acquired by the eyeball information acquisition module; the eyeball position calculation unit classifies and processes this information and calculates the eyeball visual axis information content; the data integration module then adjusts, according to the calculated visual axis information content, the coordinates of the virtual detection image constructed by the scene construction unit and the scene spot construction unit, and delivers the adjusted image;
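The coordinate adjustment in step S3 amounts to translating the whole virtual field by the measured gaze offset, so that the central fixation point stays on the visual axis and every stimulus site shifts with it. A minimal sketch, with coordinates assumed to be in degrees and all names illustrative:

```python
# Translate the fixation point and all stimulus sites by the gaze offset,
# so the virtual field moves coaxially with the eye.
from typing import List, Tuple

Point = Tuple[float, float]

def shift_field(fixation: Point, spots: List[Point],
                gaze_offset: Point) -> Tuple[Point, List[Point]]:
    dx, dy = gaze_offset
    new_fixation = (fixation[0] + dx, fixation[1] + dy)
    new_spots = [(x + dx, y + dy) for x, y in spots]
    return new_fixation, new_spots
```

Because the same offset is applied to every coordinate, the geometry of the field relative to the eye is unchanged, which is why fixation deviation stops affecting the result.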
S4, vision detection:
on the basis of the visual axis correction of step S3, the vision detection control unit forms the virtual field used for detection according to the received vision detection instruction, and during the detection process the data integration module performs the vision detection judgment using the responses from the response module and the data calculated by the pupil tracking calculation unit.
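The patent does not spell out the detection-and-judgment loop of step S4. A common way to estimate sensitivity at a stimulus site from seen/not-seen responses is a staircase procedure; the sketch below uses the 4-then-2 step sizes that are a perimetry convention, which is an assumption here rather than the patent's method.

```python
# Staircase sketch: dim the stimulus after a "seen" response, brighten it
# after "not seen", and stop after a fixed number of reversals.
from typing import Callable

def staircase_threshold(respond: Callable[[float], bool],
                        start: float = 25.0,
                        reversals_needed: int = 2) -> float:
    intensity = start
    step = 4.0
    last_seen = None
    reversals = 0
    while reversals < reversals_needed:
        seen = respond(intensity)
        if last_seen is not None and seen != last_seen:
            reversals += 1
            step = 2.0  # switch to the finer step after a reversal
        last_seen = seen
        intensity += -step if seen else step
    return intensity
```

Here `respond` stands in for the response module (button press) and could be cross-checked against pupil-tracking data as the text describes.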
Compared with the prior art, the invention has the following beneficial effects:
1. the invention adopts virtual visual field construction and imaging means, determines the eyeball position information of the subject by eyeball tracking calculation, and realizes refractive correction, visual axis correction and vision detection of the subject during the detection process;
2. in constructing the virtual field, the invention forms, according to the detection requirements, a virtual scene whose background and stimulus-site content, brightness, colour and position are all adjustable, and is thus suitable for different types of visual field detection requirements;
3. in the refraction correction process, the invention combines virtual image adjustment with lens inserts and viewing-distance adjustment to perform refraction correction, which makes the correction convenient and fast, extends the correction range and improves the correction accuracy;
4. in the visual axis correction process, the invention uses eyeball tracking calculation: the movement of the eyeball is calculated and the coordinates of the whole virtual field are changed accordingly, so that the central fixation point of the virtual field moves coaxially with the visual axis of the eyeball and the coordinates of the corresponding stimulus sites change with it; thanks to an extremely high refresh rate, the subject does not perceive the coaxial movement of the virtual field scene with the eyes and does not experience dizziness or similar discomfort. Because the whole virtual field moves coaxially with the eyeball, inaccurate visual field examination results caused by fixation deviation are avoided; the system is particularly suitable for patients with low central vision, unstable pupils or nystagmus, and facilitates the subsequent vision detection operations;
5. in the vision detection process, the invention adjusts the background and the stimulus-site content of the virtual field according to the detection requirements, which facilitates different types of visual field detection, improves the applicability of detection, is convenient for the examiner, and improves the detection accuracy.
Drawings
FIG. 1 is a schematic diagram of the perimeter system for visual field detection provided by the present invention;
FIG. 2 is a schematic diagram of the virtual field construction provided by the present invention;
FIG. 3 is a schematic diagram of the refraction correction provided by the present invention;
FIG. 4 is a schematic diagram of the visual axis correction provided by the present invention;
FIG. 5 is a schematic diagram of the vision detection provided by the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and the detailed description. It should be understood that, provided there is no conflict, the following embodiments or technical features may be combined arbitrarily to form new embodiments.
Example 1:
Referring to FIG. 1, the perimeter system for visual field detection of the present embodiment comprises:
the testing system comprises an eyeball information acquisition module, an insert delivery module, a scene delivery module and an eyepiece adjustment module; the eyeball information acquisition module acquires the eye information using an iris camera combined with an infrared illumination lamp;
the man-machine interaction system comprises a response module;
the information processing system comprises a data interaction module, a storage module, a data classification and identification module, a scene establishment module, an eyeball tracking calculation module, a detection control module, a data integration module and a data transmission module;
the data interaction module and the storage module are connected with the eyeball information acquisition module, the insert delivery module, the scene delivery module, the eyepiece adjustment module, the response module, the data classification and identification module, the scene establishment module, the eyeball tracking calculation module, the detection control module, the data integration module and the data transmission module; the data classification and identification module is connected with the scene establishment module, the eyeball tracking calculation module, the detection control module and the data integration module; the scene establishment module is connected with the eyeball tracking calculation module, the detection control module and the data integration module; the eyeball tracking calculation module is connected with the detection control module and the data integration module; the data integration module is connected with the detection control module and the data transmission module.
The data classification and identification module comprises a data classification unit connected with the data interaction module, and a construction instruction identification unit, a response instruction identification unit and a detection instruction identification unit which are connected with the data classification unit; the response instruction identification unit and the detection instruction identification unit are connected with the storage module, the eyeball tracking calculation module, the detection control module and the scene establishment module; the construction instruction identification unit is connected with the storage module and the scene establishment module;
the data classification unit is used for classifying the information received by the data interaction module, the construction instruction identification unit is used for identifying the construction instructions classified by the data classification unit, the response instruction identification unit is used for identifying the response content classified by the data classification unit, and the detection instruction identification unit is used for identifying the detection instructions classified by the data classification unit.
The scene establishment module comprises a scene construction instruction receiving unit, a scene construction unit, a scene spot construction unit and a scene model integration unit, wherein the scene construction instruction receiving unit is connected with the data classification and identification module, the scene construction unit and the scene spot construction unit; the scene model integration unit is connected with the data integration module, the storage module, the data classification and identification module, the eyeball tracking calculation module, the detection control module, the scene construction unit and the scene spot construction unit;
the scene construction instruction receiving unit is used for receiving the scene construction instruction content, the scene construction unit is used for constructing the scene content for detection, the scene spot construction unit is used for constructing the stimulus-site spot content for detection, and the scene model integration unit integrates the scene content and the stimulus-site spot content to form the detection scene content.
The eyeball tracking calculation module comprises an eyeball position calculation unit, an eyeball visual axis calculation unit and a pupil tracking calculation unit, wherein the eyeball position calculation unit is connected with the data interaction module, the data classification and identification module, the scene establishment module, the detection control module, the eyeball visual axis calculation unit and the pupil tracking calculation unit; the eyeball visual axis calculation unit and the pupil tracking calculation unit are connected with the storage module, the data classification and identification module, the scene establishment module and the detection control module;
the eyeball position calculation unit is used for calculating the eyeball information of the subject, the eyeball visual axis calculation unit is used for calculating the eyeball visual axis information of the subject, and the pupil tracking calculation unit is used for calculating the pupil information of the subject.
The detection control module comprises a vision detection control unit, a visual axis correction control unit and a refraction correction control unit, all of which are connected with the data classification and identification module, the scene establishment module, the eyeball tracking calculation module and the storage module;
the vision detection control unit is used for controlling the vision detection of the subject, the visual axis correction control unit is used for performing visual axis correction control for the subject, and the refraction correction control unit is used for performing refraction correction control for the subject.
The eye information acquisition module is used for acquiring eye information of a tester, the inserting sheet putting module is used for carrying out vision correction inserting sheet placement management, the scene putting module is used for putting visual field content for detection, and the eyepiece adjusting module is used for adjusting the position of the eyepiece; the response module is used for carrying out man-machine interaction response in the detection process.
The data interaction module is used for data interaction among the information processing system, the testing system and the human-computer interaction system, the storage module is used for storing data in the detection process of the information processing system, the testing system and the human-computer interaction system, the data classification recognition module is used for respectively recognizing detection collected information and detection instructions, the scene establishment module is used for constructing a detection visual field for detection, the eyeball tracking calculation module is used for calculating eyeball information, the detection control module is used for controlling and adjusting visual detection, the data integration module is used for integrating the data formed in the detection process, and the data transmission module is used for transmitting the data generated in the detection process of the information processing system to the testing system and the human-computer interaction system for detection operation.
Example 2:
referring to fig. 2-5, a method for using a visual field meter system for visual field detection includes the steps of:
s1, constructing a virtual field:
the data interaction module receives the virtual field construction instruction transmitted by the test end; the data classification unit classifies the instruction and transmits it to the construction instruction identification unit for identification, determining the specific information of the virtual field construction; the scene construction instruction receiving unit then receives the construction instruction and sends it to the scene construction unit and the scene spot construction unit, which establish the field content and the stimulation site spot content of the virtual field; the established content is integrated in the scene model integration unit to form the finished virtual field content, which is sent to the scene throwing module through the data transmission module and the data interaction module; finally, the scene throwing module throws the virtual field content as a virtual detection image to be detected;
s2 refractive correction:
on the basis of completing the virtual field construction of step S1, the data interaction module receives the refraction correction instruction transmitted by the test end; the data classification unit classifies the instruction and transmits it to the detection instruction identification unit for identification, determining the specific refraction correction information; the refraction correction control unit then controls the data integration module to correct, according to the refraction correction instruction content, the virtual field content constructed by the scene construction unit and the scene spot construction unit; at this time, the test end responds to the tester through the response module and performs refraction correction in combination with the insert delivery module and the eyepiece adjustment module;
s3, correcting the visual axis:
on the basis of completing the refraction correction of step S2, the visual axis correction control unit extracts, according to the received visual axis correction instruction, the eyeball information calculated by the eyeball position calculation unit; the eyeball visual axis calculation unit then computes the eyeball visual axis information content from this information, after which the data integration module adjusts the coordinates of the virtual detection image constructed by the scene construction unit and the scene spot construction unit according to the calculated visual axis information content, and the virtual detection image is then thrown;
s4, visual detection:
on the basis of completing the visual axis correction in step S3, the visual detection control unit forms a virtual field for detection according to the received visual detection instruction, and the data integration module performs visual detection and judgment by using the data calculated by the response module and the pupil tracking calculation unit in the detection process.
Example 3:
in step S1, when the virtual field is constructed, the scene construction unit can render the virtual scene as a planar, arc-shaped, or hemispherical virtual scene, and the scene throwing module forms a corresponding virtual image; the distance between the virtual scene or virtual image and the eyepiece or the human eye is adjustable, and the background brightness and color of the virtual scene are also adjustable;
the brightness, color, and shape of the stimulation light spot constructed by the scene light spot construction unit are adjustable, and the stimulation light spot can move;
the range of the virtual scene should cover the human monocular visual field: 60° on the nasal side, 90° on the temporal side, 60° above, and 70° below. For a central perimeter system, the range above, below, nasal, and temporal should each be greater than 25°; for a peripheral perimeter system, it should be greater than 40° above, 50° below, 40° on the nasal side, and 50° on the temporal side; for a full-field perimeter system, the virtual scene range provided should be greater than 45° above, 60° below, 45° on the nasal side, and 60° on the temporal side. To meet the needs of certain specific scenes, the binocular horizontal field should be not less than 160°;
the brightness of the virtual scene or virtual image should be measurable with a luminance meter or calculable indirectly with a formula. The light intensity of the virtual scene or virtual image is referred to as luminance, expressed in candela per square meter (cd/m²) or in apostilbs (asb), where 1 cd/m² = 3.14 asb. In visual field examination, light sensitivity is expressed in decibels, with the correlation formula dB = 10 × log10(Lmax/L), where Lmax is the maximum brightness the perimeter can display and L is the threshold brightness of the optotype;
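The unit conversion and the decibel sensitivity formula above can be checked numerically. A minimal Python sketch (function names are illustrative, not part of the patent):

```python
import math

def asb_to_cdm2(asb: float) -> float:
    """Convert apostilbs to candela per square meter: 1 cd/m^2 = pi (~3.14) asb."""
    return asb / math.pi

def sensitivity_db(l_max: float, l_threshold: float) -> float:
    """Light sensitivity in decibels: dB = 10 * log10(Lmax / L)."""
    return 10.0 * math.log10(l_max / l_threshold)

# A perimeter with Lmax = 10000 asb showing a threshold stimulus of 10 asb
# corresponds to 30 dB of sensitivity; the common 31.4 asb background is ~10 cd/m^2.
print(round(sensitivity_db(10000, 10), 1))  # 30.0
print(round(asb_to_cdm2(31.4), 1))          # 10.0
```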
the background brightness of the virtual scene should be adjustable but constant during each test; the most commonly used background brightness is 31.4 asb or 4 asb. The distance of the virtual scene from the eyes is variable but should be constant in each test, most commonly 300 mm. The size of the optotype at the stimulation site in the virtual scene can be changed, but it is constant during each test; the specific optotype sizes follow the Goldmann test stimulus sizes, which are not uniform, and after the virtual scene distance changes, the test stimulus size changes correspondingly;
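As a numeric illustration of keeping the stimulus angular size constant when the virtual-scene distance changes, the linear size can simply be scaled in proportion to the distance (a small-angle approximation; the Goldmann size III figure of roughly 0.43° is used here as an assumed example):

```python
import math

def scaled_stimulus_size(size_mm: float, old_dist_mm: float, new_dist_mm: float) -> float:
    """Scale the linear stimulus size so its visual angle stays constant
    when the presentation distance changes (small-angle approximation)."""
    return size_mm * new_dist_mm / old_dist_mm

# Goldmann size III subtends roughly 0.43 deg; at 300 mm that is about
# 2 * 300 * tan(0.43/2 deg) ~= 2.25 mm across.
size_at_300 = 2 * 300 * math.tan(math.radians(0.43 / 2))
print(round(size_at_300, 2))                                  # 2.25
print(round(scaled_stimulus_size(size_at_300, 300, 600), 2))  # 4.5 (doubled distance)
```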
the color of the optotype at the stimulation site in the virtual scene can be changed, but it is consistent during each test; the most common combinations are white background with white optotype, yellow background with blue optotype, and the like. The duration of the optotype in the virtual scene may vary, but it is uniform during each test, most commonly 100 ms or 200 ms.
Example 4:
in the step S2 of the refractive correction,
before the visual field examination, a near vision examination is performed at the fixation point of the virtual field; the examination standard follows the near vision chart, and if the virtual field examination distance changes, the character size of the near vision chart changes accordingly;
if near vision is found to be poor, corresponding insert correction is performed according to the patient's refraction and the examination distance, and inserts are placed by the insert delivery module until the best near vision is obtained. If the patient has astigmatism, correction follows these principles: a cylinder power of 0.25 D or less is not corrected; for a cylinder power of 0.5–1.0 D, half the cylinder power is added to the sphere; for a cylinder power greater than 1 D, a cylindrical lens is used for correction;
because the virtual field eyepiece is close to the eyes and a large virtual field range can be produced through the eyepiece, even when lenses are added between the eyepiece and the eyes, a small lens area can still produce a large virtual field range, so the lens edges and frame do not restrict the field inspection range;
the perimeter system can adopt another refraction correction scheme that does not rely on added lenses: the eyepiece adjustment module changes the distance between the eyes and the eyepiece. According to the formula D = 1/X, where D is the refractive power in diopters and X is the distance between the eye and the eyepiece, the distance is adjusted so that the focal point of the light falls correctly on the retina;
the perimeter system may also employ a refraction correction method that does not rely on added lenses by varying the object distance (the distance between the projected virtual image and the lens group) so that the focal point falls correctly on the retina, according to 1/focal length = 1/object distance + 1/image distance.
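The two distance-based correction formulas above can be sketched as follows (a simplified paraxial model; variable names and sign conventions are assumptions, not the patent's implementation):

```python
def diopters_from_eye_distance(x_m: float) -> float:
    """D = 1/X: the refractive power (diopters) equivalent to placing the
    eye X meters from the eyepiece."""
    return 1.0 / x_m

def image_distance(focal_m: float, object_m: float) -> float:
    """Thin-lens formula 1/f = 1/do + 1/di, solved for the image distance di."""
    return 1.0 / (1.0 / focal_m - 1.0 / object_m)

# An eye 0.5 m from the eyepiece corresponds to 2 D of correction.
print(diopters_from_eye_distance(0.5))      # 2.0
# A 50 mm lens group with the virtual image projected 1 m away:
print(round(image_distance(0.05, 1.0), 4))  # 0.0526 (meters)
```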
Example 5:
in the visual axis correction of step S3, the eyeball visual axis calculation unit calculates the visual axis of the eyeball, and the coordinates of the whole virtual field are then shifted so that the central fixation point of the virtual field moves coaxially with the visual axis; the coordinates of the corresponding stimulation sites change accordingly. With an extremely high refresh rate, the subject cannot perceive that the virtual field scene moves coaxially with the eyes, and no dizziness or similar sensation is produced;
before each examination, the perimeter system corrects the subject's visual axis: the eyeball visual axis calculation unit automatically finds the subject's visual axis and aligns the fixation point of the virtual field with it. Factors causing visual axis deviation are thereby essentially eliminated, and the pupillary distance and degree of strabismus can be calculated from the values of the automatic binocular visual axis correction.
Example 6:
in the visual inspection of step S4, the pupil tracking calculation unit performs calculation in the following manner:
collected eye-image video data are used to train, by machine learning, a model for detecting the eye image region; the model is imported to detect the eye image region and obtain cropping coordinates, and the eye image is cropped to remove the parts irrelevant to eye information. The image is then converted to grayscale and preprocessed with Gaussian filtering, binarization, and similar operations; edge detection is performed on the binarized eye image using a set pupil binarization threshold; whether an edge region is elliptical is judged and fitted, abnormal pupils are excluded through set pupil constraint conditions, and if the conditions are met, the center of the ellipse is taken as the pupil center. Finally, the trained eye image model is used to count the number of blinks;
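A heavily simplified, pure-Python sketch of the binarization-and-centroid step of the pipeline above (the machine-learned region detection, Gaussian filtering, ellipse fitting, and blink counting are omitted; in practice an image library such as OpenCV would be used):

```python
def pupil_center(gray, threshold=50):
    """Binarize a grayscale image (dark pupil = below threshold) and return
    the centroid of the dark pixels as a rough pupil-center estimate.
    `gray` is a list of rows of 0-255 intensity values."""
    xs = ys = 0.0
    n = 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:  # pupil pixels are the darkest under IR illumination
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None

# Synthetic 5x5 frame: a dark 2x2 "pupil" at rows 1-2, columns 2-3.
frame = [[200] * 5 for _ in range(5)]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = 10
print(pupil_center(frame))  # (2.5, 1.5)
```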
the pupil-corneal reflection technique is the most commonly used technique in current gaze tracking systems. The parts of the human eye reflect infrared light differently: the cornea is most sensitive to infrared light and reflects more of it, and the reflectivities of the pupil and iris to infrared light differ greatly. Under visible light the pupil and iris are similar in brightness and difficult to distinguish, but under infrared light a clear dark-pupil image can be obtained, accompanied by a bright reflection spot, which makes pupil extraction easier and more accurate. The relative position of the spot and the pupil changes markedly as the eyeball rotates; the pupil-corneal reflection method estimates the gaze direction from this relative position;
in the two-dimensional gaze estimation method, the gaze direction is estimated through a gaze mapping function model: the two-dimensional eye movement features extracted from the eye image are the independent variables input to the mapping function, and the dependent variables are the required gaze direction or gaze point. Generally, to obtain the gaze mapping function, an online calibration is required for each user. The gaze mapping function model is represented by
Px = f(Vx, Vy)
Py = g(Vx, Vy)
where (Px, Py) is the gaze landing point and (Vx, Vy) is the pupil-to-reflection-spot vector. The infrared camera takes pictures, and a processing unit obtains the vector of the reflection spot relative to the pupil. The eye's focal point on the spherical screen can be determined through gaze estimation, and the information relationship between the eye and the spherical screen is judged through this point. As an example, using a 9-point 6-parameter model, the gaze mapping model may be expressed as:
Px = a0 + a1·Vx + a2·Vy + a3·Vx·Vy + a4·Vx² + a5·Vy²
Py = b0 + b1·Vx + b2·Vy + b3·Vx·Vy + b4·Vx² + b5·Vy²
the coefficients a0–a5 and b0–b5 are determined during user calibration: polynomial regression over the 18 equations corresponding to the 9-point calibration solves for the coefficients and determines the mapping model.
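The 9-point, 6-parameter calibration described above can be sketched as an ordinary least-squares fit over the 18 equations. A self-contained pure-Python sketch (in practice a routine such as numpy.linalg.lstsq would be used; function names and the synthetic data are illustrative):

```python
def features(vx, vy):
    # Model terms: [1, Vx, Vy, Vx*Vy, Vx^2, Vy^2]
    return [1.0, vx, vy, vx * vy, vx * vx, vy * vy]

def solve(a, b):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[c][c]:
                f = m[r][c] / m[c][c]
                m[r] = [mr - f * mc for mr, mc in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def calibrate(spot_vectors, gaze_points):
    """Fit a0..a5 (for Px) and b0..b5 (for Py) from spot-vector / gaze-point
    pairs via the normal equations of least squares."""
    X = [features(vx, vy) for vx, vy in spot_vectors]
    def lstsq(y):
        ata = [[sum(r[i] * r[j] for r in X) for j in range(6)] for i in range(6)]
        atb = [sum(r[i] * yk for r, yk in zip(X, y)) for i in range(6)]
        return solve(ata, atb)
    return (lstsq([p[0] for p in gaze_points]),
            lstsq([p[1] for p in gaze_points]))

# Synthetic check: generate the 9 calibration targets from known coefficients
# and confirm the fit recovers them.
true_a = [1.0, 2.0, 0.5, 0.1, 0.01, 0.02]
grid = [(vx, vy) for vx in (-1.0, 0.0, 1.0) for vy in (-1.0, 0.0, 1.0)]
targets = [(sum(c * f for c, f in zip(true_a, features(vx, vy))), 0.0)
           for vx, vy in grid]
a_fit, b_fit = calibrate(grid, targets)
print([round(c, 3) for c in a_fit])  # recovers [1.0, 2.0, 0.5, 0.1, 0.01, 0.02]
```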
During detection, the subject can perform human-machine interaction in the virtual scene through the response module, for example by pressing a button to indicate that the optotype is seen. An indication beam can also appear in the virtual scene, and the subject can use the response module to direct the beam to indicate the position, direction, and so on of the stimulation site. The system also improves subject comfort and interaction accuracy through voice questions, subject answers, and similar means; the above are the subject's subjective interactions;
the virtual visual field inspection system also provides a series of objective interaction indicators. Early tests showed that steady fixation is almost impossible during visual field inspection: especially when a stimulation site appears in the periphery, the eyeball tends to move toward and trace it. Although the virtual vision following system makes the whole virtual environment move coaxially with the eyeball to solve the fixation problem, the tests also showed that whether the subject sees the stimulation site can be accurately judged by correlation analysis of the positions and directions of the eye movement and the stimulation site;
similarly, if the subject sees the stimulation site, the pupil of the subject will also change accordingly, and by analyzing the pupil change, it can also be determined whether the subject sees the stimulation site;
also, by detecting visual electrophysiology of the subject, it can be determined whether the stimulation site is seen.
Although many multidimensional indicators are designed into the virtual visual inspection system to improve the accuracy of response interaction, traditional false-positive and false-negative checks are retained to further increase the reliability of the visual field inspection results.
This design adopts virtual visual field construction and imaging: eyeball position information of the subject is determined by eyeball tracking calculation, and refraction correction, visual axis correction, and visual detection are realized during the detection process;
in the process of constructing the virtual field of view, a virtual scene is formed whose background and stimulation-site content, brightness, color, and position are adjustable according to the detection requirements, making it suitable for different types of visual detection;
in the refraction correction process, virtual image adjustment is combined with insert placement and viewing-distance adjustment, making correction convenient and quick while enlarging the correction range and improving correction accuracy;
in the visual axis correction process, eyeball tracking calculation is used to compute the eyeball's movement and shift the coordinates of the whole virtual field so that the central fixation point moves coaxially with the visual axis, with the coordinates of the corresponding stimulation sites changing accordingly; with an extremely high refresh rate, the subject cannot perceive the coaxial movement of the virtual scene and the eyes and feels no dizziness. Because the whole virtual field moves coaxially with the eyeball, inaccurate examination results caused by fixation deviation are avoided, which is especially suitable for patients with low central vision, unstable pupils, or nystagmus, and facilitates the subsequent visual detection;
in the visual detection process, the background and stimulation-site content in the virtual field are adjusted according to the detection requirements, which facilitates different types of visual detection, improves the applicability of visual detection, makes detection convenient for the examiner, and improves detection accuracy.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.
Claims (8)
1. A perimeter system for visual field detection, comprising:
the testing system comprises an eyeball information acquisition module, an inserting piece throwing module, a scene throwing module and an eyepiece adjusting module;
the man-machine interaction system comprises a response module;
the information processing system comprises a data interaction module, a storage module, a data classification and identification module, a scene establishment module, an eyeball tracking calculation module, a detection control module, a data integration module and a data transmission module;
the data interaction module and the storage module are connected with the eyeball information acquisition module, the inserting piece putting module, the scene putting module, the eyepiece adjusting module, the response module, the data classification and identification module, the scene establishment module, the eyeball tracking calculation module, the detection control module, the data integration module and the data transmission module; the data classification and identification module is connected with the scene establishment module, the eyeball tracking calculation module, the detection control module and the data integration module; the scene establishment module is connected with the eyeball tracking calculation module, the detection control module and the data integration module; the eyeball tracking calculation module is connected with the detection control module and the data integration module; the data integration module is connected with the detection control module and the data transmission module.
2. The visual field detection perimeter system according to claim 1, wherein the data classification and identification module includes a data classification unit connected to the data interaction module, and a construction instruction identification unit, a response instruction identification unit, and a detection instruction identification unit connected to the data classification unit; the response instruction identification unit and the detection instruction identification unit are connected with the storage module, the eyeball tracking calculation module, the detection control module, and the scene establishment module; the construction instruction identification unit is connected with the storage module and the scene establishment module;
the data classifying unit is used for classifying the information received by the data interaction unit, the construction instruction identifying unit is used for identifying the construction instruction classified by the data classifying unit, the response instruction identifying unit is used for identifying the response content classified by the data classifying unit, and the detection instruction identifying unit is used for identifying the detection instruction classified by the data classifying unit.
3. The visual field meter system for visual field detection according to claim 1, wherein the scene building module comprises a scene building instruction receiving unit, a scene building unit, a scene spot building unit and a scene model integrating unit, wherein the scene building instruction receiving unit is connected with the data classification and identification module, the scene building unit and the scene spot building unit; the scene model integrating unit is connected with the data integrating module, the storage module, the data classifying and identifying module, the eyeball tracking calculation module, the detection control module, the scene constructing unit and the scene spot constructing unit;
the scene construction instruction receiving unit is used for receiving scene construction instruction content, the scene construction unit is used for constructing scene content for detection, the scene spot construction unit is used for constructing stimulation site spot content for detection, and the scene model integration unit integrates the scene content for detection and the stimulation site spot content to form detection scene content.
4. The visual field detection perimeter system according to claim 1, wherein the eye tracking calculation module includes an eye position calculation unit, an eye visual axis calculation unit, and a pupil tracking calculation unit, the eye position calculation unit being connected to the data interaction module, the data classification and identification module, the scene creation module, the detection control module, the eye visual axis calculation unit, and the pupil tracking calculation unit; the eyeball visual axis calculation unit and the pupil tracking calculation unit are connected with the storage module, the data classification and identification module, the scene establishment module and the detection control module;
the eyeball position calculation unit is used for calculating eyeball information of the person to be detected, the eyeball visual axis calculation unit is used for calculating eyeball visual axis information of the person to be detected, and the pupil tracking calculation unit is used for calculating pupil information of the person to be detected.
5. The vision meter system for vision field detection as set forth in claim 1, wherein the detection control module includes a vision detection control unit, a visual axis correction control unit and a refractive correction control unit, each connected to the data classification recognition module, the scene creation module, the eye tracking calculation module and the storage module;
the visual detection control unit is used for detecting and controlling a tester, the visual axis correction control unit is used for performing visual axis correction control on the tester, and the refraction correction control unit is used for performing refraction correction control on the tester.
6. The visual field meter system for visual field detection as set forth in claim 1, wherein the eyeball information acquisition module is used for acquiring eyeball information of a tester, the inserting sheet putting module is used for visual correction inserting sheet placement management, the scene putting module is used for putting visual field content for detection, and the eyepiece adjusting module is used for adjusting the position of the eyepiece; the response module is used for carrying out man-machine interaction response in the detection process.
7. The visual field detection visual field meter system according to claim 1, wherein the data interaction module is used for data interaction among an information processing system, a testing system and a man-machine interaction system, the storage module is used for storing data in the detection process of the information processing system, the testing system and the man-machine interaction system, the data classification and identification module is used for respectively identifying information acquired by detection and detection instructions, the scene establishment module is used for constructing a detection visual field for detection, the eyeball tracking calculation module is used for calculating eyeball information, the detection control module is used for controlling and adjusting visual detection, the data integration module is used for integrating data formed in the detection process, and the data transmission module is used for transmitting data generated in the detection process of the information processing system to the testing system and the man-machine interaction system for detection operation.
8. A method of using a visual field meter system for visual field detection as claimed in any one of claims 1 to 7, comprising the steps of:
s1, constructing a virtual field:
the data interaction module receives the virtual field construction instruction transmitted by the test end; the data classification unit classifies the instruction and transmits it to the construction instruction identification unit for identification, determining the specific information of the virtual field construction; the scene construction instruction receiving unit then receives the construction instruction and sends it to the scene construction unit and the scene spot construction unit, which establish the field content and the stimulation site spot content of the virtual field; the established content is integrated in the scene model integration unit to form the finished virtual field content, which is sent to the scene throwing module through the data transmission module and the data interaction module; finally, the scene throwing module throws the virtual field content as a virtual detection image to be detected;
s2 refractive correction:
on the basis of completing the virtual field construction of step S1, the data interaction module receives the refraction correction instruction transmitted by the test end; the data classification unit classifies the instruction and transmits it to the detection instruction identification unit for identification, determining the specific refraction correction information; the refraction correction control unit then controls the data integration module to correct, according to the refraction correction instruction content, the virtual field content constructed by the scene construction unit and the scene spot construction unit; at this time, the test end responds to the tester through the response module and performs refraction correction in combination with the insert delivery module and the eyepiece adjustment module;
s3, correcting the visual axis:
on the basis of completing the refraction correction of step S2, the visual axis correction control unit extracts, according to the received visual axis correction instruction, the eyeball information acquired by the eyeball information acquisition module and processes it through the eyeball position calculation unit; the eyeball visual axis calculation unit calculates the eyeball visual axis information content, and the data integration module adjusts the coordinates of the virtual detection image constructed by the scene construction unit and the scene spot construction unit according to the calculated visual axis information content, after which the image is thrown;
s4, visual detection:
on the basis of completing the visual axis correction in step S3, the visual detection control unit forms a virtual field for detection according to the received visual detection instruction, and the data integration module performs visual detection and judgment by using the data calculated by the response module and the pupil tracking calculation unit in the detection process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310147483.7A CN117243560A (en) | 2023-02-22 | 2023-02-22 | View meter system for view detection and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117243560A true CN117243560A (en) | 2023-12-19 |
Family
ID=89135669
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||